[fedora-atomic][k8s] Support default Keystone auth policy file

With the new config option `keystone_auth_default_policy`, a cloud admin
can set a default Keystone auth policy for k8s clusters when Keystone
auth is enabled. As a result, users can access the k8s cluster with
their existing Keystone credentials as long as they are assigned the
correct roles, and they will get the pre-defined permissions set by the
cloud provider.

The default policy is based on the v2 format recently introduced in
k8s-keystone-auth, which is more expressive than v1. For example, v1
cannot express a policy that grants a user access to resources in all
namespaces except kube-system, whereas v2 can.

NOTE: We're using the openstackmagnum Docker Hub repo for now, until the
CPO team fixes their image release issue.

Task: 30069
Story: 1755770

Change-Id: I2425e957bd99edc92482b6f11ca0b1f91fe59ff6
Feilong Wang 2019-03-14 16:49:37 +13:00
parent 05c27f2d73
commit d8df9d0c36
12 changed files with 349 additions and 24 deletions


@ -46,6 +46,7 @@ MAGNUM_CERTIFICATE_CACHE_DIR=${MAGNUM_CERTIFICATE_CACHE_DIR:-/var/lib/magnum/cer
MAGNUM_CONF_DIR=/etc/magnum
MAGNUM_CONF=$MAGNUM_CONF_DIR/magnum.conf
MAGNUM_API_PASTE=$MAGNUM_CONF_DIR/api-paste.ini
MAGNUM_K8S_KEYSTONE_AUTH_DEFAULT_POLICY=$MAGNUM_CONF_DIR/k8s_keystone_auth_default_policy.json
MAGNUM_POLICY=$MAGNUM_CONF_DIR/policy.yaml
if is_ssl_enabled_service "magnum" || is_service_enabled tls-proxy; then
@ -98,6 +99,8 @@ function configure_magnum {
    create_magnum_conf
    create_api_paste_conf
    create_k8s_keystone_auth_default_policy
}
# create_magnum_accounts() - Set up common required magnum accounts
@ -117,6 +120,10 @@ function create_magnum_accounts {
"$MAGNUM_SERVICE_PROTOCOL://$MAGNUM_SERVICE_HOST:$MAGNUM_SERVICE_PORT/v1" \
"$MAGNUM_SERVICE_PROTOCOL://$MAGNUM_SERVICE_HOST:$MAGNUM_SERVICE_PORT/v1"
# Create for Kubernetes Keystone auth
get_or_create_role k8s_admin
get_or_create_role k8s_developer
get_or_create_role k8s_viewer
}
# create_magnum_conf() - Create a new magnum.conf file
@ -224,6 +231,8 @@ function create_magnum_conf {
    default_volume_type=$(iniget /etc/cinder/cinder.conf DEFAULT default_volume_type)
    iniset $MAGNUM_CONF cinder default_docker_volume_type $default_volume_type
    iniset $MAGNUM_CONF drivers send_cluster_metrics False
    iniset $MAGNUM_CONF kubernetes keystone_auth_default_policy $MAGNUM_K8S_KEYSTONE_AUTH_DEFAULT_POLICY
}
function create_api_paste_conf {
@ -231,6 +240,10 @@ function create_api_paste_conf {
    cp $MAGNUM_DIR/etc/magnum/api-paste.ini $MAGNUM_API_PASTE
}
function create_k8s_keystone_auth_default_policy {
    cp $MAGNUM_DIR/etc/magnum/keystone_auth_default_policy.sample $MAGNUM_K8S_KEYSTONE_AUTH_DEFAULT_POLICY
}
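
A quick, hedged sanity check that the copied file is valid JSON (python3 -m json.tool only parses and pretty-prints; the variable comes from the settings above):

    python3 -m json.tool $MAGNUM_K8S_KEYSTONE_AUTH_DEFAULT_POLICY > /dev/null && echo "default policy is valid JSON"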
# create_magnum_cache_dir() - Part of the init_magnum() process
function create_magnum_cache_dir {
# Create cache dir


@ -34,6 +34,7 @@ created and managed by Magnum to support the COE's.
#. `Container Monitoring`_
#. `Kubernetes External Load Balancer`_
#. `Rolling Upgrade`_
#. `Keystone Authentication and Authorization for Kubernetes`_
Overview
========
@ -3236,3 +3237,9 @@ Rolling Upgrade
===============
.. include:: rolling-upgrade.rst
Keystone Authentication and Authorization for Kubernetes
========================================================
.. include:: k8s-keystone-authN-authZ.rst


@ -0,0 +1,145 @@
Currently, there are several ways to control access to the Kubernetes API,
such as RBAC, ABAC, Webhook, etc. Though RBAC is the best choice for most
cases, Webhook provides a good way for Kubernetes to query an outside REST
service when determining user privileges. In other words, we can use a
Webhook to integrate another IAM service into Kubernetes. In our case, under
the OpenStack context, we're introducing the integration with Keystone auth
for Kubernetes.

Since the Rocky release, we have introduced a new label named
`keystone_auth_enabled`. It is True by default, which means users get this
very nice feature out of the box.
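
For example, the label can be set explicitly at cluster template creation
time (a minimal sketch; the template name, image and network below are
placeholders for your environment):

.. code-block:: bash

   openstack coe cluster template create k8s-atomic \
       --image fedora-atomic-latest \
       --external-network public \
       --coe kubernetes \
       --labels keystone_auth_enabled=true
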
Create roles
------------
As a cloud provider, you need to create the Keystone roles necessary for
Kubernetes cluster operations for different users, e.g. k8s_admin,
k8s_developer, k8s_viewer:

- k8s_admin can create/update/delete the Kubernetes cluster, and can also
  associate roles with other normal users within the tenant
- k8s_developer can create/update/delete/watch Kubernetes cluster resources
- k8s_viewer only has read access to Kubernetes cluster resources

NOTE: These roles will be created automatically in devstack. Below are
sample commands showing how to create them.
.. code-block:: bash

   source ~/openstack_admin_credentials
   for role in "k8s_admin" "k8s_developer" "k8s_viewer"; do openstack role create $role; done

   openstack user create demo_viewer --project demo --password password
   openstack role add --user demo_viewer --project demo k8s_viewer

   openstack user create demo_developer --project demo --password password
   openstack role add --user demo_developer --project demo k8s_developer

   openstack user create demo_admin --project demo --password password
   openstack role add --user demo_admin --project demo k8s_admin

These roles should be public (i.e. accessible to any project) so that users
can configure their cluster's role policies with them.
Setup configmap for authorization policies
------------------------------------------
Given that the k8s Keystone auth is enabled by default, users get
authentication support out of the box without doing anything. However, they
can't actually do anything until a default authorization policy has been
set up.

The authorization policy can be specified using the name of an existing
configmap in the cluster; by doing this, the policy can be changed
dynamically without restarting the k8s-keystone-auth service.

Alternatively, the policy can be read from a default policy file. In
devstack, the policy file is created automatically.
Currently, the k8s-keystone-auth service supports four types of policy
matches, as sketched in the example below:

- user. The Keystone user ID or name.
- project. The Keystone project ID or name.
- role. The user role defined in Keystone.
- group. The group is not actually a Keystone concept; it's supported for
  backward compatibility, and you can use a group as the project ID.
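
For illustration, here is a hedged sketch of a single policy entry that
combines the `user` and `group` match types; the user name and project ID
are placeholders, not values shipped with Magnum:

.. code-block:: json

   [
     {
       "resource": {
         "verbs": ["get", "list"],
         "resources": ["pods"],
         "version": "*",
         "namespace": "default"
       },
       "match": [
         {"type": "user", "values": ["demo_viewer"]},
         {"type": "group", "values": ["<project-id>"]}
       ]
     }
   ]
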
For example, with the following configmap we only allow users in project
demo with the k8s_viewer role in OpenStack to query pod information from
the default namespace. To apply it, we update the configmap
`k8s-keystone-auth-policy`, which has already been created in the
kube-system namespace.
.. code-block:: bash

   cat <<EOF | kubectl apply -f -
   apiVersion: v1
   kind: ConfigMap
   metadata:
     name: k8s-keystone-auth-policy
     namespace: kube-system
   data:
     policies: |
       [
         {
           "resource": {
             "verbs": ["get", "list", "watch"],
             "resources": ["pods"],
             "version": "*",
             "namespace": "default"
           },
           "match": [
             {
               "type": "role",
               "values": ["k8s_viewer"]
             },
             {
               "type": "project",
               "values": ["demo"]
             }
           ]
         }
       ]
   EOF

Please note that the default configmap name is `k8s-keystone-auth-policy`.
Users can change it, but then the k8s-keystone-auth service configuration
has to be updated accordingly and the service restarted.
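
As a hedged sketch (assuming the upstream k8s-keystone-auth command line
flags; verify them against your deployed version), pointing the service at
a differently named configmap would look roughly like this:

.. code-block:: bash

   # Hypothetical invocation: use a custom policy configmap name
   /bin/k8s-keystone-auth \
       --tls-cert-file /etc/kubernetes/certs/server.crt \
       --tls-private-key-file /etc/kubernetes/certs/server.key \
       --policy-configmap-name my-keystone-auth-policy \
       --keystone-url http://<keystone-host>/identity/v3
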
Now the user needs to get a token from Keystone to build a kubeconfig for
kubectl; the config can also be fetched with the Magnum Python client. Here
is a sample kubeconfig:
.. code-block:: yaml

   apiVersion: v1
   clusters:
   - cluster:
       certificate-authority-data: CERT-DATA==
       server: https://172.24.4.25:6443
     name: k8s-2
   contexts:
   - context:
       cluster: k8s-2
       user: openstackuser
     name: openstackuser@kubernetes
   current-context: openstackuser@kubernetes
   kind: Config
   preferences: {}
   users:
   - name: openstackuser
     user:
       exec:
         command: /bin/bash
         apiVersion: client.authentication.k8s.io/v1alpha1
         args:
         - -c
         - >
           if [ -z "${OS_TOKEN}" ]; then
             echo 'Error: Missing OpenStack credential from environment variable $OS_TOKEN' > /dev/stderr
             exit 1
           else
             echo '{ "apiVersion": "client.authentication.k8s.io/v1alpha1", "kind": "ExecCredential", "status": { "token": "'"${OS_TOKEN}"'"}}'
           fi

After exporting the Keystone token to the `OS_TOKEN` environment variable,
the user should be able to list pods with kubectl.
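
For example (a minimal sketch; the kubeconfig path is a placeholder):

.. code-block:: bash

   export OS_TOKEN=$(openstack token issue -f value -c id)
   kubectl --kubeconfig ~/k8s-2-config get pods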


@ -0,0 +1,76 @@
[
    {
        "users": {
            "roles": [
                "k8s_admin"
            ],
            "projects": [
                "$PROJECT_ID"
            ]
        },
        "resource_permissions": {
            "*/*": [
                "*"
            ]
        },
        "nonresource_permissions": {
            "/healthz": [
                "get",
                "post"
            ]
        }
    },
    {
        "users": {
            "roles": [
                "k8s_developer"
            ],
            "projects": [
                "$PROJECT_ID"
            ]
        },
        "resource_permissions": {
            "!kube-system/['apiServices', 'bindings', 'componentstatuses', 'configmaps', 'cronjobs', 'customResourceDefinitions', 'deployments', 'endpoints', 'events', 'horizontalPodAutoscalers', 'ingresses', 'initializerConfigurations', 'jobs', 'limitRanges', 'localSubjectAccessReviews', 'namespaces', 'networkPolicies', 'persistentVolumeClaims', 'persistentVolumes', 'podDisruptionBudgets', 'podPresets', 'podTemplates', 'pods', 'replicaSets', 'replicationControllers', 'resourceQuotas', 'secrets', 'selfSubjectAccessReviews', 'serviceAccounts', 'services', 'statefulSets', 'storageClasses', 'subjectAccessReviews', 'tokenReviews']": [
                "*"
            ],
            "*/['clusterrolebindings', 'clusterroles', 'rolebindings', 'roles', 'controllerrevisions', 'nodes', 'podSecurityPolicies']": [
                "get",
                "list",
                "watch"
            ],
            "*/['certificateSigningRequests']": [
                "create",
                "delete",
                "get",
                "list",
                "watch",
                "update"
            ]
        }
    },
    {
        "users": {
            "roles": [
                "k8s_viewer"
            ],
            "projects": [
                "$PROJECT_ID"
            ]
        },
        "resource_permissions": {
            "!kube-system/['tokenReviews']": [
                "*"
            ],
            "!kube-system/['apiServices', 'bindings', 'componentstatuses', 'configmaps', 'cronjobs', 'customResourceDefinitions', 'deployments', 'endpoints', 'events', 'horizontalPodAutoscalers', 'ingresses', 'initializerConfigurations', 'jobs', 'limitRanges', 'localSubjectAccessReviews', 'namespaces', 'networkPolicies', 'persistentVolumeClaims', 'persistentVolumes', 'podDisruptionBudgets', 'podPresets', 'podTemplates', 'pods', 'replicaSets', 'replicationControllers', 'resourceQuotas', 'secrets', 'selfSubjectAccessReviews', 'serviceAccounts', 'services', 'statefulSets', 'storageClasses', 'subjectAccessReviews']": [
                "get",
                "list",
                "watch"
            ],
            "*/['clusterrolebindings', 'clusterroles', 'rolebindings', 'roles', 'controllerrevisions', 'nodes', 'podSecurityPolicies']": [
                "get",
                "list",
                "watch"
            ]
        }
    }
]
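
To preview how this sample renders for a concrete project, one hedged
approach is to substitute $PROJECT_ID by hand (the file path and project
name below are assumptions for a devstack-like setup):

    sed "s/\$PROJECT_ID/$(openstack project show demo -f value -c id)/g" \
        etc/magnum/keystone_auth_default_policy.sample | python3 -m json.tool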


@ -30,6 +30,7 @@ from magnum.conf import drivers
from magnum.conf import glance
from magnum.conf import heat
from magnum.conf import keystone
from magnum.conf import kubernetes
from magnum.conf import magnum_client
from magnum.conf import neutron
from magnum.conf import nova
@ -60,6 +61,7 @@ drivers.register_opts(CONF)
glance.register_opts(CONF)
heat.register_opts(CONF)
keystone.register_opts(CONF)
kubernetes.register_opts(CONF)
magnum_client.register_opts(CONF)
neutron.register_opts(CONF)
nova.register_opts(CONF)

magnum/conf/kubernetes.py (new file)

@ -0,0 +1,36 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may not
# use this file except in compliance with the License. You may obtain a copy
# of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from oslo_config import cfg

kubernetes_group = cfg.OptGroup(name='kubernetes',
                                title='Options for the Kubernetes addons')

kubernetes_opts = [
    cfg.StrOpt('keystone_auth_default_policy',
               default="/etc/magnum/keystone_auth_default_policy.json",
               help='Explicitly specify the path to the file defining the '
                    'default Keystone auth policy for Kubernetes clusters '
                    'when Keystone auth is enabled. Vendors can put their '
                    'specific default policy here'),
]


def register_opts(conf):
    conf.register_group(kubernetes_group)
    conf.register_opts(kubernetes_opts, group=kubernetes_group)


def list_opts():
    return {
        kubernetes_group: kubernetes_opts
    }
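
For instance, an operator could point Magnum at a vendor-specific policy
file by overriding the option in magnum.conf (the path below is a made-up
example):

    [kubernetes]
    keystone_auth_default_policy = /etc/magnum/acme_keystone_policy.json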


@ -6,7 +6,7 @@ step="enable-keystone-auth"
printf "Starting to run ${step}\n"
if [ "$(echo $KEYSTONE_AUTH_ENABLED | tr '[:upper:]' '[:lower:]')" != "false" ]; then
_prefix=${CONTAINER_INFRA_PREFIX:-docker.io/k8scloudprovider/}
_prefix=${CONTAINER_INFRA_PREFIX:-docker.io/openstackmagnum/}
CERT_DIR=/etc/kubernetes/certs
# Create policy configmap for keystone auth
@ -65,26 +65,7 @@ metadata:
namespace: kube-system
data:
policies: |
[
{
"resource": {
"verbs": ["list"],
"resources": ["pods", "services", "deployments", "pvc"],
"version": "*",
"namespace": "default"
},
"match": [
{
"type": "role",
"values": ["member"]
},
{
"type": "project",
"values": ["$PROJECT_ID"]
}
]
}
]
$KEYSTONE_AUTH_DEFAULT_POLICY
EOF
}
@ -182,4 +163,4 @@ EOF
fi
printf "Finished running ${step}\n"
printf "Finished running ${step}\n"


@ -10,6 +10,8 @@
# License for the specific language governing permissions and limitations
# under the License.
import json
from oslo_log import log as logging
from oslo_utils import strutils
@ -158,6 +160,7 @@ class K8sFedoraTemplateDefinition(k8s_template_def.K8sTemplateDefinition):
extra_params['max_node_count'] = cluster.node_count + 1
self._set_cert_manager_params(cluster, extra_params)
self._get_keystone_auth_default_policy(extra_params)
return super(K8sFedoraTemplateDefinition,
self).get_params(context, cluster_template, cluster,
@ -180,6 +183,35 @@ class K8sFedoraTemplateDefinition(k8s_template_def.K8sTemplateDefinition):
ca_cert.get_private_key(),
ca_cert.get_private_key_passphrase()).replace("\n", "\\n")
    def _get_keystone_auth_default_policy(self, extra_params):
        # NOTE(flwang): The purpose of this function is to make the default
        # policy more flexible for different cloud providers. The default
        # policy used to be hardcoded in the bash script, so vendors couldn't
        # change it without forking. The new config option addresses this.
        # This function can be extracted to k8s_template_def.py if the k8s
        # keystone auth feature is adopted by other drivers.
        default_policy = """[{"resource": {"verbs": ["list"],
            "resources": ["pods", "services", "deployments", "pvc"],
            "version": "*", "namespace": "default"},
            "match": [{"type": "role", "values": ["member"]},
            {"type": "project", "values": ["$PROJECT_ID"]}]}]"""

        keystone_auth_enabled = extra_params.get("keystone_auth_enabled",
                                                 "True")
        if strutils.bool_from_string(keystone_auth_enabled):
            try:
                with open(CONF.kubernetes.keystone_auth_default_policy) as f:
                    default_policy = json.dumps(json.loads(f.read()))
            except Exception:
                LOG.error("Failed to load default keystone auth policy")

            default_policy = json.dumps(json.loads(default_policy),
                                        sort_keys=True)
            washed_policy = default_policy.replace('"', '\"') \
                .replace("$PROJECT_ID", extra_params["project_id"])
            extra_params["keystone_auth_default_policy"] = washed_policy
def get_env_files(self, cluster_template, cluster):
env_files = []


@ -563,10 +563,14 @@ parameters:
default:
true
keystone_auth_default_policy:
type: string
description: Json read from /etc/magnum/keystone_auth_default_policy.json
k8s_keystone_auth_tag:
type: string
description: tag of the k8s_keystone_auth container
default: 1.13.0
default: v1.14.0
monitoring_enabled:
type: boolean
@ -965,7 +969,10 @@ resources:
$enable-ingress-octavia: {get_file: ../../common/templates/kubernetes/fragments/enable-ingress-octavia.sh}
template: {get_file: ../../common/templates/kubernetes/fragments/enable-ingress-controller.sh}
- get_file: ../../common/templates/kubernetes/fragments/kube-dashboard-service.sh
- get_file: ../../common/templates/kubernetes/fragments/enable-keystone-auth.sh
- str_replace:
template: {get_file: ../../common/templates/kubernetes/fragments/enable-keystone-auth.sh}
params:
"$KEYSTONE_AUTH_DEFAULT_POLICY": {get_param: keystone_auth_default_policy}
- get_file: ../../common/templates/kubernetes/fragments/enable-auto-healing.sh
- get_file: ../../common/templates/kubernetes/fragments/enable-auto-scaling.sh
# Helm Based Installation Configuration Scripts


@ -27,6 +27,15 @@ CONF = magnum.conf.CONF
class TestClusterConductorWithK8s(base.TestCase):
def setUp(self):
super(TestClusterConductorWithK8s, self).setUp()
        self.keystone_auth_default_policy = ('[{"match": [{"type": "role", '
                                             '"values": ["member"]}, {"type": '
                                             '"project", "values": '
                                             '["project_id"]}], "resource": '
                                             '{"namespace": "default", '
                                             '"resources": ["pods", '
                                             '"services", "deployments", '
                                             '"pvc"], "verbs": ["list"], '
                                             '"version": "*"}}]')
self.cluster_template_dict = {
'image_id': 'image_id',
'flavor_id': 'flavor_id',
@ -109,6 +118,7 @@ class TestClusterConductorWithK8s(base.TestCase):
'master_flavor_id': 'master_flavor_id',
'flavor_id': 'flavor_id',
'project_id': 'project_id',
'keystone_auth_default_policy': self.keystone_auth_default_policy
}
self.worker_ng_dict = {
'uuid': '5d12f6fd-a196-4bf0-ae4c-1f639a523a53',
@ -335,6 +345,7 @@ class TestClusterConductorWithK8s(base.TestCase):
'max_node_count': 2,
'master_image': 'image_id',
'minion_image': 'image_id',
'keystone_auth_default_policy': self.keystone_auth_default_policy
}
if missing_attr is not None:
expected.pop(mapping[missing_attr], None)
@ -473,6 +484,7 @@ class TestClusterConductorWithK8s(base.TestCase):
'max_node_count': 2,
'master_image': 'image_id',
'minion_image': 'image_id',
'keystone_auth_default_policy': self.keystone_auth_default_policy
}
self.assertEqual(expected, definition)
@ -591,6 +603,7 @@ class TestClusterConductorWithK8s(base.TestCase):
'max_node_count': 2,
'master_image': None,
'minion_image': None,
'keystone_auth_default_policy': self.keystone_auth_default_policy
}
self.assertEqual(expected, definition)
self.assertEqual(
@ -1020,6 +1033,7 @@ class TestClusterConductorWithK8s(base.TestCase):
'max_node_count': 2,
'master_image': 'image_id',
'minion_image': 'image_id',
'keystone_auth_default_policy': self.keystone_auth_default_policy
}
self.assertEqual(expected, definition)
self.assertEqual(


@ -661,6 +661,7 @@ class AtomicK8sTemplateDefinitionTestCase(BaseK8sTemplateDefinitionTestCase):
mock_cluster = mock.MagicMock()
mock_cluster.labels = {}
mock_cluster.uuid = '5d12f6fd-a196-4bf0-ae4c-1f639a523a52'
mock_cluster.project_id = 'e2a6c8b0-a3c2-42a3-b3f4-1f639a523a52'
mock_osc = mock.MagicMock()
mock_osc.magnum_url.return_value = 'http://127.0.0.1:9511/v1'
@ -717,6 +718,7 @@ class AtomicK8sTemplateDefinitionTestCase(BaseK8sTemplateDefinitionTestCase):
mock_cluster = mock.MagicMock()
mock_cluster.labels = {"ingress_controller": "octavia"}
mock_cluster.uuid = '5d12f6fd-a196-4bf0-ae4c-1f639a523a52'
mock_cluster.project_id = 'e2a6c8b0-a3c2-42a3-b3f4-1f639a523a52'
mock_osc = mock.MagicMock()
mock_osc.magnum_url.return_value = 'http://127.0.0.1:9511/v1'
@ -770,6 +772,7 @@ class AtomicK8sTemplateDefinitionTestCase(BaseK8sTemplateDefinitionTestCase):
mock_cluster = mock.MagicMock()
mock_cluster.labels = {"ingress_controller": "octavia"}
mock_cluster.uuid = '5d12f6fd-a196-4bf0-ae4c-1f639a523a52'
mock_cluster.project_id = 'e2a6c8b0-a3c2-42a3-b3f4-1f639a523a52'
mock_osc = mock.MagicMock()
mock_osc.magnum_url.return_value = 'http://127.0.0.1:9511/v1'


@ -0,0 +1,9 @@
---
issues:
- |
With the new config option keystone_auth_default_policy, cloud admin
can set a default keystone auth policy for k8s cluster when the
keystone auth is enabled. As a result, user can use their current
keystone user to access k8s cluster as long as they're assigned
correct roles, and they will get the pre-defined permissions
defined by the cloud provider.