Kubernetes/OpenShift drivers: allow setting dynamic k8s labels

As with the OpenStack/AWS/Azure drivers, allow configuring dynamic
metadata (labels) for Kubernetes resources with information about the
corresponding node request.

Change-Id: I5d174edc6b7a49c2ab579a9a0b1b560389d6de82
Benjamin Schanzel 2023-08-31 15:44:55 +02:00
parent 785f7dcbc9
commit 4660bb9aa7
16 changed files with 207 additions and 49 deletions


@@ -277,6 +277,46 @@ Selecting the kubernetes driver adds the following options to the
that this field contains arbitrary key/value pairs and is
unrelated to the concept of labels in Nodepool.
.. attr:: dynamic-labels
:type: dict
:default: None
Similar to
:attr:`providers.[kubernetes].pools.labels.labels`,
but is interpreted as a format string with the following
values available:
* request: Information about the request which prompted the
creation of this node (note that the node may ultimately
be used for a different request and in that case this
information will not be updated).
* id: The request ID.
* labels: The list of labels in the request.
* requestor: The name of the requestor.
* requestor_data: Key/value information from the requestor.
* relative_priority: The relative priority of the request.
* event_id: The external event ID of the request.
* created_time: The creation time of the request.
* tenant_name: The name of the tenant associated with the
request.
For example:
.. code-block:: yaml
labels:
- name: pod-fedora
dynamic-labels:
request_info: "{request.id}"
.. attr:: annotations
:type: dict

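The format-string expansion described above can be sketched in plain Python. This is a minimal illustration, not driver code: `SimpleNamespace` stands in for the request's safe attributes (the real driver passes the result of `request.getSafeAttributes()`), and the field values are made up:

```python
# Minimal sketch of dynamic-labels expansion, assuming a request object
# exposing the documented fields (values here are illustrative).
from types import SimpleNamespace

request = SimpleNamespace(
    id="200-0000000001",
    tenant_name="tenant-1",
    requestor="zuul",
)

# The configured value is an ordinary str.format template.
template = "{request.id}"
print(template.format(request=request))  # → 200-0000000001
```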

@@ -240,6 +240,46 @@ Selecting the openshift pods driver adds the following options to the
that this field contains arbitrary key/value pairs and is
unrelated to the concept of labels in Nodepool.
.. attr:: dynamic-labels
:type: dict
:default: None
Similar to
:attr:`providers.[openshiftpods].pools.labels.labels`,
but is interpreted as a format string with the following
values available:
* request: Information about the request which prompted the
creation of this node (note that the node may ultimately
be used for a different request and in that case this
information will not be updated).
* id: The request ID.
* labels: The list of labels in the request.
* requestor: The name of the requestor.
* requestor_data: Key/value information from the requestor.
* relative_priority: The relative priority of the request.
* event_id: The external event ID of the request.
* created_time: The creation time of the request.
* tenant_name: The name of the tenant associated with the
request.
For example:
.. code-block:: yaml
labels:
- name: pod-fedora
dynamic-labels:
request_info: "{request.id}"
.. attr:: annotations
:type: dict


@@ -288,6 +288,46 @@ Selecting the openshift driver adds the following options to the
that this field contains arbitrary key/value pairs and is
unrelated to the concept of labels in Nodepool.
.. attr:: dynamic-labels
:type: dict
:default: None
Similar to
:attr:`providers.[openshift].pools.labels.labels`,
but is interpreted as a format string with the following
values available:
* request: Information about the request which prompted the
creation of this node (note that the node may ultimately
be used for a different request and in that case this
information will not be updated).
* id: The request ID.
* labels: The list of labels in the request.
* requestor: The name of the requestor.
* requestor_data: Key/value information from the requestor.
* relative_priority: The relative priority of the request.
* event_id: The external event ID of the request.
* created_time: The creation time of the request.
* tenant_name: The name of the tenant associated with the
request.
For example:
.. code-block:: yaml
labels:
- name: pod-fedora
dynamic-labels:
request_info: "{request.id}"
.. attr:: annotations
:type: dict


@@ -91,6 +91,7 @@ class KubernetesPool(ConfigPool):
pl.volumes = label.get('volumes')
pl.volume_mounts = label.get('volume-mounts')
pl.labels = label.get('labels')
pl.dynamic_labels = label.get('dynamic-labels', {})
pl.annotations = label.get('annotations')
pl.pool = self
self.labels[pl.name] = pl
@@ -154,6 +155,7 @@ class KubernetesProviderConfig(ProviderConfig):
'volumes': list,
'volume-mounts': list,
'labels': dict,
'dynamic-labels': dict,
'annotations': dict,
'extra-resources': {str: int},
}


@@ -33,10 +33,12 @@ class K8SLauncher(NodeLauncher):
self.log.debug("Creating resource")
if self.label.type == "namespace":
resource = self.handler.manager.createNamespace(
-self.node, self.handler.pool.name, self.label)
+self.node, self.handler.pool.name, self.label,
+self.handler.request)
else:
resource = self.handler.manager.createPod(
-self.node, self.handler.pool.name, self.label)
+self.node, self.handler.pool.name, self.label,
+self.handler.request)
self.node.state = zk.READY
self.node.python_path = self.label.python_path


@@ -156,22 +156,16 @@ class KubernetesProvider(Provider, QuotaSupport):
break
time.sleep(1)
-def createNamespace(self, node, pool, label, restricted_access=False):
+def createNamespace(
+self, node, pool, label, request, restricted_access=False
+):
name = node.id
namespace = "%s-%s" % (pool, name)
user = "zuul-worker"
self.log.debug("%s: creating namespace" % namespace)
-k8s_labels = {}
-if label.labels:
-k8s_labels.update(label.labels)
-k8s_labels.update({
-'nodepool_node_id': node.id,
-'nodepool_provider_name': self.provider.name,
-'nodepool_pool_name': pool,
-'nodepool_node_label': label.name,
-})
+k8s_labels = self._getK8sLabels(label, node, pool, request)
# Create the namespace
ns_body = {
@@ -309,7 +303,7 @@ class KubernetesProvider(Provider, QuotaSupport):
self.log.info("%s: namespace created" % namespace)
return resource
-def createPod(self, node, pool, label):
+def createPod(self, node, pool, label, request):
container_body = {
'name': label.name,
'image': label.image,
@@ -365,16 +359,7 @@ class KubernetesProvider(Provider, QuotaSupport):
'privileged': label.privileged,
}
-k8s_labels = {}
-if label.labels:
-k8s_labels.update(label.labels)
-k8s_labels.update({
-'nodepool_node_id': node.id,
-'nodepool_provider_name': self.provider.name,
-'nodepool_pool_name': pool,
-'nodepool_node_label': label.name,
-})
+k8s_labels = self._getK8sLabels(label, node, pool, request)
k8s_annotations = {}
if label.annotations:
k8s_annotations.update(label.annotations)
@@ -391,7 +376,7 @@ class KubernetesProvider(Provider, QuotaSupport):
'restartPolicy': 'Never',
}
-resource = self.createNamespace(node, pool, label,
+resource = self.createNamespace(node, pool, label, request,
restricted_access=True)
namespace = resource['namespace']
@@ -439,3 +424,23 @@ class KubernetesProvider(Provider, QuotaSupport):
def unmanagedQuotaUsed(self):
# TODO: return real quota information about quota
return QuotaInformation()
def _getK8sLabels(self, label, node, pool, request):
k8s_labels = {}
if label.labels:
k8s_labels.update(label.labels)
for k, v in label.dynamic_labels.items():
try:
k8s_labels[k] = v.format(request=request.getSafeAttributes())
except Exception:
self.log.exception("Error formatting tag %s", k)
k8s_labels.update({
'nodepool_node_id': node.id,
'nodepool_provider_name': self.provider.name,
'nodepool_pool_name': pool,
'nodepool_node_label': label.name,
})
return k8s_labels
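The merge order in `_getK8sLabels` can be sketched as a standalone function. This is a simplified, hypothetical re-implementation for illustration only (logging omitted, plain arguments instead of driver objects, `SimpleNamespace` standing in for the request's safe attributes):

```python
from types import SimpleNamespace

def get_k8s_labels(static_labels, dynamic_labels, request,
                   node_id, provider_name, pool_name, label_name):
    # Simplified stand-in for _getK8sLabels: static labels first, then
    # formatted dynamic labels, then the driver-managed nodepool_*
    # labels, which always win on key conflicts.
    k8s_labels = {}
    if static_labels:
        k8s_labels.update(static_labels)
    for k, v in dynamic_labels.items():
        try:
            k8s_labels[k] = v.format(request=request)
        except Exception:
            pass  # the driver logs the error and keeps going
    k8s_labels.update({
        'nodepool_node_id': node_id,
        'nodepool_provider_name': provider_name,
        'nodepool_pool_name': pool_name,
        'nodepool_node_label': label_name,
    })
    return k8s_labels

req = SimpleNamespace(id="req-1", tenant_name="tenant-1")
labels = get_k8s_labels({'environment': 'qa'},
                        {'tenant': '{request.tenant_name}'},
                        req, '0000000000', 'kubespray', 'main', 'pod-fedora')
print(labels['tenant'])       # → tenant-1
print(labels['environment'])  # → qa
```

Applying the `nodepool_*` labels last means a user-supplied static or dynamic label cannot shadow the driver's bookkeeping labels.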


@@ -94,6 +94,7 @@ class OpenshiftPool(ConfigPool):
pl.volumes = label.get('volumes')
pl.volume_mounts = label.get('volume-mounts')
pl.labels = label.get('labels')
pl.dynamic_labels = label.get('dynamic-labels', {})
pl.annotations = label.get('annotations')
pl.pool = self
self.labels[pl.name] = pl
@@ -162,6 +163,7 @@ class OpenshiftProviderConfig(ProviderConfig):
'volumes': list,
'volume-mounts': list,
'labels': dict,
'dynamic-labels': dict,
'annotations': dict,
'extra-resources': {str: int},
}


@@ -34,14 +34,15 @@ class OpenshiftLauncher(NodeLauncher):
self.log.debug("Creating resource")
project = "%s-%s" % (self.handler.pool.name, self.node.id)
self.node.external_id = self.handler.manager.createProject(
-self.node, self.handler.pool.name, project, self.label)
+self.node, self.handler.pool.name, project, self.label,
+self.handler.request)
self.zk.storeNode(self.node)
resource = self.handler.manager.prepareProject(project)
if self.label.type == "pod":
self.handler.manager.createPod(
self.node, self.handler.pool.name,
-project, self.label.name, self.label)
+project, self.label.name, self.label, self.handler.request)
self.handler.manager.waitForPod(project, self.label.name)
resource['pod'] = self.label.name
self.node.connection_type = "kubectl"


@@ -130,19 +130,11 @@ class OpenshiftProvider(Provider, QuotaSupport):
break
time.sleep(1)
-def createProject(self, node, pool, project, label):
+def createProject(self, node, pool, project, label, request):
self.log.debug("%s: creating project" % project)
# Create the project
-k8s_labels = {}
-if label.labels:
-k8s_labels.update(label.labels)
-k8s_labels.update({
-'nodepool_node_id': node.id,
-'nodepool_provider_name': self.provider.name,
-'nodepool_pool_name': pool,
-'nodepool_node_label': label.name,
-})
+k8s_labels = self._getK8sLabels(label, node, pool, request)
proj_body = {
'apiVersion': 'project.openshift.io/v1',
@@ -228,7 +220,7 @@ class OpenshiftProvider(Provider, QuotaSupport):
self.log.info("%s: project created" % project)
return resource
-def createPod(self, node, pool, project, pod_name, label):
+def createPod(self, node, pool, project, pod_name, label, request):
self.log.debug("%s: creating pod in project %s" % (pod_name, project))
container_body = {
'name': label.name,
@@ -286,15 +278,7 @@ class OpenshiftProvider(Provider, QuotaSupport):
'privileged': label.privileged,
}
-k8s_labels = {}
-if label.labels:
-k8s_labels.update(label.labels)
-k8s_labels.update({
-'nodepool_node_id': node.id,
-'nodepool_provider_name': self.provider.name,
-'nodepool_pool_name': pool,
-'nodepool_node_label': label.name,
-})
+k8s_labels = self._getK8sLabels(label, node, pool, request)
k8s_annotations = {}
if label.annotations:
@@ -355,3 +339,23 @@ class OpenshiftProvider(Provider, QuotaSupport):
def unmanagedQuotaUsed(self):
# TODO: return real quota information about quota
return QuotaInformation()
def _getK8sLabels(self, label, node, pool, request):
k8s_labels = {}
if label.labels:
k8s_labels.update(label.labels)
for k, v in label.dynamic_labels.items():
try:
k8s_labels[k] = v.format(request=request.getSafeAttributes())
except Exception:
self.log.exception("Error formatting tag %s", k)
k8s_labels.update({
'nodepool_node_id': node.id,
'nodepool_provider_name': self.provider.name,
'nodepool_pool_name': pool,
'nodepool_node_label': label.name,
})
return k8s_labels
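One consequence of the try/except above is worth noting: a dynamic label whose template references a missing attribute is logged and skipped, while the remaining labels are still applied. A minimal sketch of that behavior, with a deliberately broken (hypothetical) attribute name:

```python
# Sketch of the error-handling behavior in the dynamic-label loop.
from types import SimpleNamespace

request = SimpleNamespace(tenant_name="tenant-1")
dynamic = {
    'tenant': '{request.tenant_name}',
    'broken': '{request.no_such_attr}',  # raises AttributeError
}
k8s_labels = {}
for k, v in dynamic.items():
    try:
        k8s_labels[k] = v.format(request=request)
    except Exception:
        pass  # the driver logs this; the label is simply omitted
print(k8s_labels)  # → {'tenant': 'tenant-1'}
```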


@@ -83,6 +83,7 @@ class OpenshiftPodsProviderConfig(OpenshiftProviderConfig):
'volumes': list,
'volume-mounts': list,
'labels': dict,
'dynamic-labels': dict,
'annotations': dict,
'extra-resources': {str: int},
}


@@ -27,7 +27,8 @@ class OpenshiftPodLauncher(OpenshiftLauncher):
pod_name = "%s-%s" % (self.label.name, self.node.id)
project = self.handler.pool.name
self.handler.manager.createPod(self.node, self.handler.pool.name,
-project, pod_name, self.label)
+project, pod_name, self.label,
+self.handler.request)
self.node.external_id = "%s-%s" % (project, pod_name)
self.node.interface_ip = pod_name
self.zk.storeNode(self.node)


@@ -37,6 +37,11 @@ providers:
image: docker.io/fedora:28
labels:
environment: qa
dynamic-labels:
# Note: we double the braces to deal with unit-test
# pre-processing of this file. The output and actual
# file syntax is single braces.
tenant: "{{request.tenant_name}}"
privileged: true
node-selector:
storageType: ssd
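The doubled braces in the fixture comment above are worth spelling out: assuming the test harness runs the file through a `str.format`-style substitution (as the comment suggests), `{{` and `}}` escape to literal braces, leaving the single-brace template the driver expects:

```python
# Sketch of the brace-escaping the fixture comment describes.
line = 'tenant: "{{request.tenant_name}}"'
# One round of str.format() with no arguments collapses doubled braces.
print(line.format())  # → tenant: "{request.tenant_name}"
```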


@@ -45,6 +45,8 @@ providers:
shell-type: csh
labels:
environment: qa
dynamic-labels:
tenant: "{{request.tenant_name}}"
privileged: true
node-selector:
storageType: ssd


@@ -198,7 +198,8 @@ class TestDriverKubernetes(tests.DBTestCase):
'nodepool_node_id': '0000000000',
'nodepool_provider_name': 'kubespray',
'nodepool_pool_name': 'main',
-'nodepool_node_label': 'pod-extra'
+'nodepool_node_label': 'pod-extra',
+'tenant': 'tenant-1',
},
})
self.assertEqual(pod['spec'], {


@@ -233,7 +233,8 @@ class TestDriverOpenshift(tests.DBTestCase):
'nodepool_node_id': '0000000000',
'nodepool_provider_name': 'openshift',
'nodepool_pool_name': 'main',
-'nodepool_node_label': 'pod-extra'
+'nodepool_node_label': 'pod-extra',
+'tenant': 'tenant-1',
},
})
self.assertEqual(pod['spec'], {


@@ -0,0 +1,11 @@
---
features:
- |
The Kubernetes and OpenShift drivers now support adding dynamic metadata,
i.e. Pod and Namespace labels, with information about the corresponding
node request. This is analogous to the existing dynamic tags of the
OpenStack, AWS, and Azure drivers.
See :attr:`providers.[kubernetes].pools.labels.dynamic-labels`,
:attr:`providers.[openshift].pools.labels.dynamic-labels`, and
:attr:`providers.[openshiftpods].pools.labels.dynamic-labels` for details.