Retire kuryr-kubernetes: remove repo content

The kuryr-kubernetes repository is being retired [1],
and this commit removes the content of the repo.

Depends-On: https://review.opendev.org/c/openstack/project-config/+/923072

[1] https://review.opendev.org/c/openstack/governance/+/922507

Change-Id: Ied35a7d48e569e8dcf6708cf0facc847a72d16e6
Ghanshyam Mann 2024-06-28 14:41:52 -07:00 committed by Doug Goldstein
parent 398bd46126
commit 5a0478117b
397 changed files with 8 additions and 58333 deletions

.coveragerc

@@ -1,17 +0,0 @@
[run]
branch = True
source = kuryr_kubernetes
omit = kuryr_kubernetes/tests/*
[report]
ignore_errors = True
exclude_lines =
# Have to re-enable the standard pragma
pragma: no cover
# Don't complain if tests don't hit defensive assertion code:
raise NotImplementedError
# Don't complain if non-runnable code isn't run:
if __name__ == .__main__.:

.dockerignore

@@ -1,3 +0,0 @@
.tox
.dockerignore
*.Dockerfile

.gitignore

@@ -1,77 +0,0 @@
*.py[cod]
# C extensions
*.so
# Packages
*.egg
*.egg-info
dist
build
eggs
parts
bin
var
sdist
develop-eggs
lib
lib64
# Installer logs
pip-log.txt
# Unit test / coverage reports
nosetests.xml
cover
# Translations
*.mo
# Complexity
output/*.html
output/*/index.html
# Sphinx
doc/build
# Files created by releasenotes build
releasenotes/build
# pbr generates these
AUTHORS
ChangeLog
# Editors
*~
*.sw?
# Hidden directories
!/.coveragerc
!/.gitignore
!/.gitreview
!/.mailmap
!/.pylintrc
!/.testr.conf
!/.stestr.conf
.stestr
contrib/vagrant/.vagrant
# Configuration files
etc/kuryr.conf.sample
# Ignore user specific local.conf settings for vagrant
contrib/vagrant/user_local.conf
# Log files
*.log
# devstack-heat
*.pem
# Binaries from docker images builds
kuryr-cni-bin
kuryr-cni
# editor tags dir
tags

.pre-commit-config.yaml

@@ -1,6 +0,0 @@
repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v1.4.0
hooks:
- id: flake8

.stestr.conf

@@ -1,3 +0,0 @@
[DEFAULT]
test_path=${OS_TEST_PATH:-./kuryr_kubernetes/tests/}
top_dir=./

@@ -1,259 +0,0 @@
# Copyright 2018 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- job:
name: kuryr-kubernetes-base
parent: devstack-tempest
description: |
Base Kuryr Kubernetes tempest job. There are neither Neutron nor Octavia
services; it's meant to be extended.
required-projects:
- openstack/devstack-plugin-container
- openstack/kuryr-kubernetes
- openstack/kuryr-tempest-plugin
- openstack/tempest
timeout: 10800
post-run:
- playbooks/copy-k8s-logs.yaml
- playbooks/copy-crio-logs.yaml
host-vars:
controller:
devstack_plugins:
kuryr-kubernetes: https://opendev.org/openstack/kuryr-kubernetes
devstack-plugin-container: https://opendev.org/openstack/devstack-plugin-container
kuryr-tempest-plugin: https://opendev.org/openstack/kuryr-tempest-plugin
vars:
# Default swap size got shrunk to 1 GB, it's way too small for us.
configure_swap_size: 8192
tempest_test_regex: '^(kuryr_tempest_plugin.tests.)'
# Since we switched the amphora image to focal, the tests started
# requiring more time.
tempest_test_timeout: 2400
tox_envlist: 'all'
tempest_plugins:
- kuryr-tempest-plugin
devstack_localrc:
CONTAINER_ENGINE: crio
CRIO_VERSION: "1.28"
ENABLE_TLS: true
ETCD_USE_RAMDISK: true
KURYR_ENABLED_HANDLERS: vif,endpoints,service,namespace,pod_label,policy,kuryrnetworkpolicy,kuryrnetwork,kuryrport,kuryrloadbalancer
KURYR_SG_DRIVER: policy
KURYR_SUBNET_DRIVER: namespace
KURYR_SUPPORT_POD_SECURITY: true
devstack_services:
c-api: false
c-bak: false
c-sch: false
c-vol: false
cinder: false
coredns: false
# Need to disable dstat due to bug https://github.com/dstat-real/dstat/pull/162
dstat: false
etcd3: true
g-api: true
g-reg: true
key: true
kubernetes-master: true
kuryr-daemon: true
kuryr-kubernetes: true
mysql: true
n-api-meta: true
n-api: true
n-cond: true
n-cpu: true
n-sch: true
placement-api: true
placement-client: true
rabbit: true
s-account: false
s-container: false
s-object: false
s-proxy: false
tempest: true
zuul_copy_output:
'{{ devstack_log_dir }}/kubernetes': 'logs'
'{{ devstack_log_dir }}/crio': 'logs'
irrelevant-files:
- ^.*\.rst$
- ^doc/.*$
- ^releasenotes/.*$
- ^contrib/.*$
- job:
name: kuryr-kubernetes-base-ovn
parent: kuryr-kubernetes-base
description: Base kuryr-kubernetes-job with OVN
required-projects:
- openstack/neutron
timeout: 10800
post-run: playbooks/copy-k8s-logs.yaml
host-vars:
controller:
devstack_plugins:
neutron: https://opendev.org/openstack/neutron
vars:
network_api_extensions_common:
- tag-ports-during-bulk-creation
devstack_localrc:
KURYR_NEUTRON_DEFAULT_ROUTER: kuryr-router
ML2_L3_PLUGIN: ovn-router,trunk,qos
OVN_BRANCH: v21.06.0
OVS_BRANCH: "a4b04276ab5934d087669ff2d191a23931335c87"
OVN_BUILD_FROM_SOURCE: true
OVN_L3_CREATE_PUBLIC_NETWORK: true
VAR_RUN_PATH: /usr/local/var/run
devstack_services:
neutron-tag-ports-during-bulk-creation: true
neutron: true
q-qos: true
q-trunk: true
zuul_copy_output:
'{{ devstack_base_dir }}/data/ovn': 'logs'
'{{ devstack_log_dir }}/ovsdb-server-nb.log': 'logs'
'{{ devstack_log_dir }}/ovsdb-server-sb.log': 'logs'
'/home/zuul/np_sctp_kubetest.log': 'logs'
- job:
name: kuryr-kubernetes-base-ovs
parent: kuryr-kubernetes-base
description: Base kuryr-kubernetes-job with OVS
required-projects:
- openstack/devstack-plugin-container
- openstack/kuryr-kubernetes
- openstack/kuryr-tempest-plugin
- openstack/tempest
- openstack/neutron
timeout: 10800
post-run: playbooks/copy-k8s-logs.yaml
host-vars:
controller:
devstack_plugins:
neutron: https://opendev.org/openstack/neutron
vars:
network_api_extensions_common:
- tag-ports-during-bulk-creation
devstack_services:
neutron-tag-ports-during-bulk-creation: true
neutron: true
ovn-controller: false
ovn-northd: false
ovs-vswitchd: false
ovsdb-server: false
q-agt: true
q-dhcp: true
q-l3: true
q-meta: true
q-ovn-metadata-agent: false
q-svc: true
q-trunk: true
devstack_localrc:
KURYR_ENFORCE_SG_RULES: true
ML2_L3_PLUGIN: router
Q_AGENT: openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS: openvswitch
Q_ML2_TENANT_NETWORK_TYPE: vxlan
zuul_copy_output:
'{{ devstack_log_dir }}/ovsdb-server-nb.log': 'logs'
'{{ devstack_log_dir }}/ovsdb-server-sb.log': 'logs'
- job:
name: kuryr-kubernetes-octavia-base
parent: kuryr-kubernetes-base-ovn
description: |
Kuryr-Kubernetes tempest job using OVN and ovn-octavia driver for Kuryr
required-projects:
- openstack/octavia
- openstack/python-octaviaclient
- openstack/ovn-octavia-provider
- openstack/octavia-tempest-plugin
pre-run: playbooks/get_amphora_tarball.yaml
host-vars:
controller:
devstack_plugins:
octavia: https://opendev.org/openstack/octavia
ovn-octavia-provider: https://opendev.org/openstack/ovn-octavia-provider
octavia-tempest-plugin: https://opendev.org/openstack/octavia-tempest-plugin
vars:
tempest_plugins:
- kuryr-tempest-plugin
- octavia-tempest-plugin
devstack_localrc:
KURYR_EP_DRIVER_OCTAVIA_PROVIDER: ovn
KURYR_ENFORCE_SG_RULES: false
KURYR_K8S_OCTAVIA_MEMBER_MODE: L2
KURYR_LB_ALGORITHM: SOURCE_IP_PORT
OCTAVIA_AMP_IMAGE_FILE: /tmp/test-only-amphora-x64-haproxy-ubuntu-focal.qcow2
OCTAVIA_AMP_IMAGE_NAME: test-only-amphora-x64-haproxy-ubuntu-focal
OCTAVIA_AMP_IMAGE_SIZE: 3
devstack_local_conf:
post-config:
$OCTAVIA_CONF:
controller_worker:
amp_active_retries: 9999
api_settings:
enabled_provider_drivers: amphora:'Octavia Amphora driver',ovn:'Octavia OVN driver'
health_manager:
failover_threads: 2
health_update_threads: 2
stats_update_threads: 2
devstack_services:
octavia: true
o-api: true
o-cw: true
o-da: true
o-hk: true
o-hm: true
- job:
name: kuryr-kubernetes-octavia-base-ovs
parent: kuryr-kubernetes-base-ovs
nodeset: kuryr-nested-virt-ubuntu-jammy
description: |
Kuryr-Kubernetes tempest job using OVS and amphora driver for Octavia
required-projects:
- openstack/octavia
- openstack/python-octaviaclient
- openstack/octavia-tempest-plugin
pre-run: playbooks/get_amphora_tarball.yaml
host-vars:
controller:
devstack_plugins:
octavia: https://opendev.org/openstack/octavia
octavia-tempest-plugin: https://opendev.org/openstack/octavia-tempest-plugin
vars:
tempest_plugins:
- kuryr-tempest-plugin
- octavia-tempest-plugin
devstack_localrc:
OCTAVIA_AMP_IMAGE_FILE: /tmp/test-only-amphora-x64-haproxy-ubuntu-focal.qcow2
OCTAVIA_AMP_IMAGE_NAME: test-only-amphora-x64-haproxy-ubuntu-focal
OCTAVIA_AMP_IMAGE_SIZE: 3
LIBVIRT_TYPE: kvm
LIBVIRT_CPU_MODE: host-passthrough
devstack_local_conf:
post-config:
$OCTAVIA_CONF:
controller_worker:
amp_active_retries: 9999
health_manager:
failover_threads: 2
health_update_threads: 2
stats_update_threads: 2
devstack_services:
octavia: true
o-api: true
o-cw: true
o-hk: true
o-hm: true

@@ -1,144 +0,0 @@
# Copyright 2021 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- job:
name: kuryr-kubernetes-e2e-np
parent: devstack
description: |
Kuryr-Kubernetes job with OVN and the Octavia OVN provider, running k8s
network policy e2e tests
required-projects:
- openstack/devstack-plugin-container
- openstack/kuryr-kubernetes
- openstack/neutron
- openstack/octavia
- openstack/ovn-octavia-provider
- openstack/python-octaviaclient
pre-run: playbooks/get_amphora_tarball.yaml
post-run:
- playbooks/run_k8s_e2e_tests.yaml
- playbooks/copy-k8s-logs.yaml
- playbooks/copy-crio-logs.yaml
post-timeout: 7200
host-vars:
controller:
devstack_plugins:
devstack-plugin-container: https://opendev.org/openstack/devstack-plugin-container
kuryr-kubernetes: https://opendev.org/openstack/kuryr-kubernetes
neutron: https://opendev.org/openstack/neutron
octavia: https://opendev.org/openstack/octavia
ovn-octavia-provider: https://opendev.org/openstack/ovn-octavia-provider
vars:
network_api_extensions_common:
- tag-ports-during-bulk-creation
devstack_localrc:
CONTAINER_ENGINE: crio
CRIO_VERSION: "1.28"
ETCD_USE_RAMDISK: true
KURYR_ENABLED_HANDLERS: vif,endpoints,service,namespace,pod_label,policy,kuryrnetworkpolicy,kuryrnetwork,kuryrport,kuryrloadbalancer
KURYR_ENFORCE_SG_RULES: false
KURYR_EP_DRIVER_OCTAVIA_PROVIDER: ovn
KURYR_K8S_API_PORT: 6443
KURYR_K8S_CLOUD_PROVIDER: false
KURYR_K8S_OCTAVIA_MEMBER_MODE: L2
KURYR_LB_ALGORITHM: SOURCE_IP_PORT
KURYR_NEUTRON_DEFAULT_ROUTER: kuryr-router
KURYR_SG_DRIVER: policy
KURYR_SUBNET_DRIVER: namespace
ML2_L3_PLUGIN: ovn-router,trunk,qos
OCTAVIA_AMP_IMAGE_FILE: "/tmp/test-only-amphora-x64-haproxy-ubuntu-focal.qcow2"
OCTAVIA_AMP_IMAGE_NAME: "test-only-amphora-x64-haproxy-ubuntu-focal"
OCTAVIA_AMP_IMAGE_SIZE: 3
OVN_BRANCH: v21.06.0
OVS_BRANCH: "a4b04276ab5934d087669ff2d191a23931335c87"
OVN_BUILD_FROM_SOURCE: true
OVN_L3_CREATE_PUBLIC_NETWORK: true
PHYSICAL_NETWORK: public
Q_AGENT: ovn
Q_BUILD_OVS_FROM_GIT: true
Q_ML2_PLUGIN_MECHANISM_DRIVERS: ovn,logger
Q_ML2_PLUGIN_TYPE_DRIVERS: local,flat,vlan,geneve
Q_ML2_TENANT_NETWORK_TYPE: geneve
Q_USE_PROVIDERNET_FOR_PUBLIC: true
VAR_RUN_PATH: /usr/local/var/run
devstack_services:
# TODO(dmellado): Temporary workaround until proper fix
base: false
c-api: false
c-bak: false
c-sch: false
c-vol: false
cinder: false
coredns: false
# Need to disable dstat due to bug https://github.com/dstat-real/dstat/pull/162
dstat: false
etcd3: true
g-api: true
g-reg: true
key: true
kubernetes-master: true
kuryr-daemon: true
kuryr-kubernetes: true
mysql: true
n-api-meta: true
n-api: true
n-cond: true
n-cpu: true
n-sch: true
neutron-tag-ports-during-bulk-creation: true
neutron: true
o-api: true
o-cw: true
o-da: true
o-hk: true
o-hm: true
octavia: true
ovn-controller: true
ovn-northd: true
placement-api: true
placement-client: true
q-agt: false
q-dhcp: false
q-l3: false
q-meta: false
q-ovn-metadata-agent: true
q-qos: true
q-svc: true
q-trunk: true
rabbit: true
s-account: false
s-container: false
s-object: false
s-proxy: false
devstack_local_conf:
post-config:
$OCTAVIA_CONF:
controller_worker:
amp_active_retries: 9999
api_settings:
enabled_provider_drivers: amphora:'Octavia Amphora driver',ovn:'Octavia OVN driver'
kubetest_version: v1.22.5
np_parallel_number: 2
gopkg: go1.16.12.linux-amd64.tar.gz
np_sleep: 30
zuul_copy_output:
'/home/zuul/np_kubetest.log': 'logs'
'/home/zuul/np_sctp_kubetest.log': 'logs'
'{{ devstack_log_dir }}/kubernetes': 'logs'
'{{ devstack_log_dir }}/crio': 'logs'
irrelevant-files:
- ^.*\.rst$
- ^doc/.*$
- ^releasenotes/.*$
- ^contrib/.*$
voting: false

@@ -1,63 +0,0 @@
# Copyright 2018 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- nodeset:
name: openstack-centos-7-single-node
nodes:
- name: controller
label: centos-7
groups:
- name: tempest
nodes:
- controller
- nodeset:
name: kuryr-nested-virt-ubuntu-jammy
nodes:
- name: controller
label: nested-virt-ubuntu-jammy
groups:
- name: tempest
nodes:
- controller
- nodeset:
name: kuryr-nested-virt-two-node-jammy
nodes:
- name: controller
label: nested-virt-ubuntu-jammy
- name: compute1
label: nested-virt-ubuntu-jammy
groups:
# Node where tests are executed and test results collected
- name: tempest
nodes:
- controller
# Nodes running the compute service
- name: compute
nodes:
- controller
- compute1
# Nodes that are not the controller
- name: subnode
nodes:
- compute1
# Switch node for multinode networking setup
- name: switch
nodes:
- controller
# Peer nodes for multinode networking setup
- name: peers
nodes:
- compute1

@@ -1,45 +0,0 @@
# Copyright 2018 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- project-template:
name: kuryr-kubernetes-tempest-jobs
check:
jobs:
- kuryr-kubernetes-tempest
- kuryr-kubernetes-tempest-defaults
- kuryr-kubernetes-tempest-systemd
- kuryr-kubernetes-tempest-multinode
- kuryr-kubernetes-tempest-multinode-ovs
- kuryr-kubernetes-tempest-ipv6
- kuryr-kubernetes-tempest-ipv6-ovs
- kuryr-kubernetes-tempest-amphora
- kuryr-kubernetes-tempest-amphora-ovs
- kuryr-kubernetes-tempest-annotation-project-driver
gate:
jobs:
- kuryr-kubernetes-tempest
- kuryr-kubernetes-tempest-systemd
experimental:
jobs:
- kuryr-kubernetes-tempest-pools-namespace
- kuryr-kubernetes-tempest-multinode-ha
- kuryr-kubernetes-tempest-dual-stack
- project:
templates:
- openstack-python3-jobs
- publish-openstack-docs-pti
- release-notes-jobs-python3
- check-requirements
- kuryr-kubernetes-tempest-jobs

@@ -1,239 +0,0 @@
# Copyright 2018 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- job:
name: kuryr-kubernetes-tempest
parent: kuryr-kubernetes-octavia-base
description: |
Kuryr-Kubernetes tempest job running kuryr containerized
- job:
name: kuryr-kubernetes-tempest-ovn-provider-ovn
parent: kuryr-kubernetes-octavia-base
description: |
Alias for the Kuryr-Kubernetes tempest job. Because of the change we
introduced when switching over to Neutron OVN and the Octavia OVN
provider, this can be removed after updating the ovn-octavia-provider
zuul project.
- job:
name: kuryr-kubernetes-tempest-systemd
parent: kuryr-kubernetes-octavia-base
description: |
Kuryr-Kubernetes tempest job using octavia and running kuryr as systemd
services
vars:
devstack_localrc:
KURYR_K8S_CONTAINERIZED_DEPLOYMENT: false
- job:
name: kuryr-kubernetes-tempest-centos-7
parent: kuryr-kubernetes-tempest-systemd
nodeset: openstack-centos-7-single-node
voting: false
- job:
name: kuryr-kubernetes-tempest-defaults
parent: kuryr-kubernetes-octavia-base
nodeset: kuryr-nested-virt-ubuntu-jammy
description: |
Kuryr-Kubernetes tempest job running kuryr containerized with OVN,
Octavia's Amphora driver, the default set of handlers, the default SG
driver and the default subnet driver.
host-vars:
controller:
devstack_plugins:
octavia: https://opendev.org/openstack/octavia
octavia-tempest-plugin: https://opendev.org/openstack/octavia-tempest-plugin
vars:
devstack_localrc:
KURYR_ENABLED_HANDLERS: ''
KURYR_ENFORCE_SG_RULES: true
KURYR_EP_DRIVER_OCTAVIA_PROVIDER: default
KURYR_K8S_OCTAVIA_MEMBER_MODE: L3
KURYR_LB_ALGORITHM: ROUND_ROBIN
KURYR_SG_DRIVER: default
KURYR_SUBNET_DRIVER: default
LIBVIRT_TYPE: kvm
LIBVIRT_CPU_MODE: host-passthrough
devstack_local_conf:
post-config:
$OCTAVIA_CONF:
controller_worker:
amp_active_retries: 9999
api_settings:
enabled_provider_drivers: amphora:'Octavia Amphora driver'
health_manager:
failover_threads: 2
health_update_threads: 2
stats_update_threads: 2
devstack_services:
q-trunk: true
o-da: false
voting: false
- job:
name: kuryr-kubernetes-tempest-ipv6
nodeset: kuryr-nested-virt-ubuntu-jammy
parent: kuryr-kubernetes-octavia-base
description: |
Kuryr-Kubernetes tempest job running kuryr containerized with IPv6 pod
and service networks using OVN and Octavia Amphora
# TODO(gryf): investigate why NP does not work with IPv6
host-vars:
controller:
devstack_plugins:
octavia: https://opendev.org/openstack/octavia
octavia-tempest-plugin: https://opendev.org/openstack/octavia-tempest-plugin
vars:
devstack_localrc:
KURYR_ENABLED_HANDLERS: ''
KURYR_ENFORCE_SG_RULES: true
KURYR_EP_DRIVER_OCTAVIA_PROVIDER: default
KURYR_IPV6: true
KURYR_K8S_OCTAVIA_MEMBER_MODE: L3
KURYR_LB_ALGORITHM: ROUND_ROBIN
KURYR_SG_DRIVER: default
KURYR_SUBNET_DRIVER: default
LIBVIRT_TYPE: kvm
LIBVIRT_CPU_MODE: host-passthrough
devstack_local_conf:
post-config:
$OCTAVIA_CONF:
controller_worker:
amp_active_retries: 9999
api_settings:
enabled_provider_drivers: amphora:'Octavia Amphora driver'
health_manager:
failover_threads: 2
health_update_threads: 2
stats_update_threads: 2
devstack_services:
q-trunk: true
o-da: false
voting: false
- job:
name: kuryr-kubernetes-tempest-ipv6-ovs
parent: kuryr-kubernetes-octavia-base-ovs
description: |
Kuryr-Kubernetes tempest job running kuryr containerized with IPv6 pod
and service networks based on OVS
# TODO(gryf): investigate why NP does not work with IPv6
vars:
devstack_localrc:
KURYR_ENABLED_HANDLERS: ''
KURYR_IPV6: true
KURYR_SG_DRIVER: default
KURYR_SUBNET_DRIVER: default
devstack_services:
q-trunk: false
voting: false
- job:
name: kuryr-kubernetes-tempest-dual-stack
parent: kuryr-kubernetes-octavia-base
description: |
Kuryr-Kubernetes tempest job running kuryr containerized with dual stack
pod and service networks
vars:
devstack_localrc:
KURYR_DUAL_STACK: true
voting: false
- job:
name: kuryr-kubernetes-tempest-pools-namespace
parent: kuryr-kubernetes-octavia-base
description: |
Tempest with containers, port pools and namespace subnet driver
vars:
devstack_localrc:
KURYR_SUBNET_DRIVER: namespace
KURYR_ENABLED_HANDLERS: vif,endpoints,service,namespace,pod_label,policy,kuryrnetworkpolicy,kuryrnetwork,kuryrport,kuryrloadbalancer
KURYR_SG_DRIVER: policy
KURYR_USE_PORT_POOLS: true
KURYR_POD_VIF_DRIVER: neutron-vif
KURYR_VIF_POOL_DRIVER: neutron
KURYR_CONFIGMAP_MODIFIABLE: false
- job:
name: kuryr-kubernetes-tempest-annotation-project-driver
parent: kuryr-kubernetes-octavia-base
description: |
Run Kuryr-Kubernetes tempest job with the annotation project driver
vars:
devstack_localrc:
KURYR_PROJECT_DRIVER: annotation
voting: true
- job:
name: kuryr-kubernetes-tempest-amphora
parent: kuryr-kubernetes-base-ovn
nodeset: kuryr-nested-virt-ubuntu-jammy
required-projects:
- openstack/octavia
- openstack/python-octaviaclient
- openstack/octavia-tempest-plugin
pre-run: playbooks/get_amphora_tarball.yaml
host-vars:
controller:
devstack_plugins:
octavia: https://opendev.org/openstack/octavia
octavia-tempest-plugin: https://opendev.org/openstack/octavia-tempest-plugin
vars:
tempest_plugins:
- kuryr-tempest-plugin
- octavia-tempest-plugin
devstack_localrc:
KURYR_ENFORCE_SG_RULES: true
OCTAVIA_AMP_IMAGE_FILE: /tmp/test-only-amphora-x64-haproxy-ubuntu-focal.qcow2
OCTAVIA_AMP_IMAGE_NAME: test-only-amphora-x64-haproxy-ubuntu-focal
OCTAVIA_AMP_IMAGE_SIZE: 3
LIBVIRT_TYPE: kvm
LIBVIRT_CPU_MODE: host-passthrough
devstack_local_conf:
post-config:
$OCTAVIA_CONF:
controller_worker:
amp_active_retries: 9999
health_manager:
failover_threads: 2
health_update_threads: 2
stats_update_threads: 2
devstack_services:
octavia: true
o-api: true
o-cw: true
o-hk: true
o-hm: true
voting: false
- job:
name: kuryr-kubernetes-tempest-amphora-ovs
parent: kuryr-kubernetes-octavia-base-ovs
vars:
devstack_localrc:
KURYR_EP_DRIVER_OCTAVIA_PROVIDER: amphora
devstack_local_conf:
post-config:
$OCTAVIA_CONF:
controller_worker:
amp_active_retries: 9999
health_manager:
failover_threads: 2
health_update_threads: 2
stats_update_threads: 2
api_settings:
enabled_provider_drivers: amphora:'Octavia Amphora driver'
voting: false

@@ -1,168 +0,0 @@
# Copyright 2018 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- job:
name: kuryr-kubernetes-tempest-multinode
parent: kuryr-kubernetes-octavia-base
description: |
Kuryr-Kubernetes tempest multinode job with OVN
nodeset: kuryr-nested-virt-two-node-jammy
host-vars:
controller:
devstack_plugins:
octavia: https://opendev.org/openstack/octavia
octavia-tempest-plugin: https://opendev.org/openstack/octavia-tempest-plugin
group-vars:
subnode:
devstack_plugins:
devstack-plugin-container: https://opendev.org/openstack/devstack-plugin-container
kuryr-kubernetes: https://opendev.org/openstack/kuryr-kubernetes
devstack_services:
c-bak: false
c-vol: false
dstat: false
kubernetes-master: false
kubernetes-worker: true
kuryr-daemon: true
kuryr-kubernetes: false
neutron: true
ovn-northd: false
ovn-octavia-provider: true
placement-client: true
q-svc: false
devstack_local_conf:
post-config:
$OCTAVIA_CONF:
controller_worker:
amp_active_retries: 9999
api_settings:
enabled_provider_drivers: amphora:'Octavia Amphora driver',ovn:'Octavia OVN driver'
health_manager:
failover_threads: 2
health_update_threads: 2
stats_update_threads: 2
devstack_localrc:
CONTAINER_ENGINE: crio
CRIO_VERSION: "1.28"
KURYR_ENABLED_HANDLERS: vif,endpoints,service,namespace,pod_label,policy,kuryrnetworkpolicy,kuryrnetwork,kuryrport,kuryrloadbalancer
KURYR_ENFORCE_SG_RULES: false
KURYR_EP_DRIVER_OCTAVIA_PROVIDER: ovn
KURYR_K8S_OCTAVIA_MEMBER_MODE: L2
KURYR_LB_ALGORITHM: SOURCE_IP_PORT
KURYR_NEUTRON_DEFAULT_ROUTER: kuryr-router
KURYR_SG_DRIVER: policy
KURYR_SUBNET_DRIVER: namespace
OVN_BRANCH: v21.06.0
OVS_BRANCH: "a4b04276ab5934d087669ff2d191a23931335c87"
OVN_BUILD_FROM_SOURCE: true
OVN_L3_CREATE_PUBLIC_NETWORK: true
VAR_RUN_PATH: /usr/local/var/run
vars:
tempest_test_regex: '^(kuryr_tempest_plugin.tests.scenario.test_cross_ping_multi_worker.TestCrossPingScenarioMultiWorker)'
devstack_localrc:
KURYR_K8S_MULTI_WORKER_TESTS: true
devstack_local_conf:
post-config:
$OCTAVIA_CONF:
controller_worker:
amp_active_retries: 9999
api_settings:
enabled_provider_drivers: amphora:'Octavia Amphora driver',ovn:'Octavia OVN driver'
health_manager:
failover_threads: 2
health_update_threads: 2
stats_update_threads: 2
devstack_services:
kubernetes-master: true
kubernetes-worker: false
kuryr-daemon: true
kuryr-kubernetes: true
zuul_copy_output:
'{{ devstack_base_dir }}/data/ovn': 'logs'
'{{ devstack_log_dir }}/ovsdb-server-nb.log': 'logs'
'{{ devstack_log_dir }}/ovsdb-server-sb.log': 'logs'
voting: false
- job:
name: kuryr-kubernetes-tempest-multinode-ovs
parent: kuryr-kubernetes-octavia-base-ovs
description: |
Kuryr-Kubernetes tempest multinode job with OVS
nodeset: kuryr-nested-virt-two-node-jammy
group-vars:
subnode:
devstack_plugins:
devstack-plugin-container: https://opendev.org/openstack/devstack-plugin-container
kuryr-kubernetes: https://opendev.org/openstack/kuryr-kubernetes
devstack_services:
c-bak: false
c-vol: false
dstat: false
kubernetes-master: false
kubernetes-worker: true
kuryr-daemon: true
kuryr-kubernetes: false
neutron: true
ovn-controller: false
ovs-vswitchd: false
ovsdb-server: false
placement-client: true
q-agt: true
q-dhcp: true
q-l3: true
q-meta: true
q-ovn-metadata-agent: false
q-svc: false
devstack_localrc:
CONTAINER_ENGINE: crio
CRIO_VERSION: "1.26"
KURYR_ENABLED_HANDLERS: vif,endpoints,service,namespace,pod_label,policy,kuryrnetworkpolicy,kuryrnetwork,kuryrport,kuryrloadbalancer
KURYR_ENFORCE_SG_RULES: true
KURYR_SG_DRIVER: policy
KURYR_SUBNET_DRIVER: namespace
ML2_L3_PLUGIN: router
Q_AGENT: openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS: openvswitch
Q_ML2_TENANT_NETWORK_TYPE: vxlan
vars:
tempest_test_regex: '^(kuryr_tempest_plugin.tests.scenario.test_cross_ping_multi_worker.TestCrossPingScenarioMultiWorker)'
devstack_services:
dstat: false
kubernetes-master: true
kubernetes-worker: false
kuryr-daemon: true
kuryr-kubernetes: true
neutron: true
devstack_localrc:
KURYR_K8S_MULTI_WORKER_TESTS: true
voting: false
- job:
name: kuryr-kubernetes-tempest-multinode-ha
parent: kuryr-kubernetes-tempest-multinode
description: |
Kuryr-Kubernetes tempest multinode job running containerized in HA
timeout: 7800
vars:
devstack_localrc:
KURYR_CONTROLLER_REPLICAS: 2
KURYR_K8S_SERIAL_TESTS: true
tempest_concurrency: 1
group-vars:
subnode:
devstack_plugins:
devstack-plugin-container: https://opendev.org/openstack/devstack-plugin-container
kuryr-kubernetes: https://opendev.org/openstack/kuryr-kubernetes
devstack_services:
kubernetes-worker: true

CONTRIBUTING.rst

@@ -1,19 +0,0 @@
The source repository for this project can be found at:
https://opendev.org/openstack/kuryr-kubernetes
Pull requests submitted through GitHub are not monitored.
To start contributing to OpenStack, follow the steps in the contribution guide
to set up and use Gerrit:
https://docs.openstack.org/contributors/code-and-documentation/quick-start.html
Bugs should be filed on Launchpad:
https://bugs.launchpad.net/kuryr-kubernetes
For more specific information about contributing to this repository, see the
kuryr-kubernetes contributor guide:
https://docs.openstack.org/kuryr-kubernetes/latest/contributor/contributing.html

HACKING.rst

@@ -1,5 +0,0 @@
===================================
kuryr-kubernetes Style Commandments
===================================
Read the OpenStack Style Commandments https://docs.openstack.org/hacking/latest

LICENSE

@@ -1,176 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.

README.rst

@@ -1,35 +1,10 @@

Removed:

========================
Team and repository tags
========================

.. image:: https://governance.openstack.org/tc/badges/kuryr-kubernetes.svg
   :target: https://governance.openstack.org/tc/reference/tags/index.html

.. Change things from this point on

Project description
===================

Kubernetes integration with OpenStack networking

The OpenStack Kuryr project enables native Neutron-based networking in
Kubernetes. With Kuryr-Kubernetes it's now possible to choose to run both
OpenStack VMs and Kubernetes Pods on the same Neutron network if your workloads
require it or to use different segments and, for example, route between them.

* Free software: Apache license
* Documentation: https://docs.openstack.org/kuryr-kubernetes/latest
* Source: https://opendev.org/openstack/kuryr-kubernetes
* Bugs: https://bugs.launchpad.net/kuryr-kubernetes
* Overview and demo: https://superuser.openstack.org/articles/networking-kubernetes-kuryr
* Release notes: https://docs.openstack.org/releasenotes/kuryr-kubernetes/

Contribution guidelines
-----------------------

For the process of new feature addition, refer to the `Kuryr Policy`_.

.. _Kuryr Policy: https://wiki.openstack.org/wiki/Kuryr#Kuryr_Policies

Added:

This project is no longer maintained.

The contents of this repository are still available in the Git
source code management system. To see the contents of this
repository before it reached its end of life, please check out the
previous commit with "git checkout HEAD^1".

For any further questions, please email
openstack-discuss@lists.openstack.org or join #openstack-dev on
OFTC.

babel.cfg

@@ -1,2 +0,0 @@
[python: **.py]

cni.Dockerfile

@@ -1,42 +0,0 @@
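# Stage 1: build the kuryr-cni golang binary; only the resulting binary is
# carried into the final image below via COPY --from=builder.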
FROM quay.io/kuryr/golang:1.16 as builder
WORKDIR /go/src/opendev.com/kuryr-kubernetes
COPY . .
RUN GO111MODULE=auto go build -o /go/bin/kuryr-cni ./kuryr_cni/pkg/*
FROM quay.io/centos/centos:stream9
LABEL authors="Antoni Segura Puimedon<toni@kuryr.org>, Michał Dulko<mdulko@redhat.com>"
ARG UPPER_CONSTRAINTS_FILE="https://releases.openstack.org/constraints/upper/master"
ARG OSLO_LOCK_PATH=/var/kuryr-lock
ARG RDO_REPO=https://www.rdoproject.org/repos/rdo-release.el9.rpm
RUN dnf upgrade -y && dnf install -y epel-release $RDO_REPO \
&& dnf install -y --setopt=tsflags=nodocs python3-pip openvswitch sudo iproute pciutils kmod-libs \
&& dnf install -y --setopt=tsflags=nodocs gcc gcc-c++ python3-devel git
COPY . /opt/kuryr-kubernetes
ARG VIRTUAL_ENV=/opt/venv
RUN python3 -m venv $VIRTUAL_ENV
# This is enough to activate a venv
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
RUN pip3 --no-cache-dir install -U pip \
&& python3 -m pip --no-cache-dir install -c $UPPER_CONSTRAINTS_FILE /opt/kuryr-kubernetes \
&& cp /opt/kuryr-kubernetes/cni_ds_init /usr/bin/cni_ds_init \
&& mkdir -p /etc/kuryr-cni \
&& cp /opt/kuryr-kubernetes/etc/cni/net.d/* /etc/kuryr-cni \
&& dnf -y history undo last \
&& dnf clean all \
&& rm -rf /opt/kuryr-kubernetes \
&& mkdir ${OSLO_LOCK_PATH}
COPY --from=builder /go/bin/kuryr-cni /kuryr-cni
ARG CNI_DAEMON=True
ENV CNI_DAEMON ${CNI_DAEMON}
ENV OSLO_LOCK_PATH=${OSLO_LOCK_PATH}
ENTRYPOINT [ "cni_ds_init" ]

cni_ds_init

@@ -1,22 +0,0 @@
#!/bin/bash -ex
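# Entrypoint of the CNI container: reinstalls the kuryr-cni binary and CNI
# config on the host, then runs the CNI daemon in the foreground.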
function cleanup() {
rm -f "/etc/cni/net.d/10-kuryr.conflist"
rm -f "/opt/cni/bin/kuryr-cni"
}
function deploy() {
# Copy the binary into the designated location
cp /kuryr-cni "/opt/cni/bin/kuryr-cni"
chmod +x /opt/cni/bin/kuryr-cni
if [ -f /etc/cni/net.d/kuryr.conflist.template ]; then
cp /etc/cni/net.d/kuryr.conflist.template /etc/cni/net.d/10-kuryr.conflist
else
cp /etc/kuryr-cni/kuryr.conflist.template /etc/cni/net.d/10-kuryr.conflist
fi
}
cleanup
deploy
exec kuryr-daemon --config-file /etc/kuryr/kuryr.conf

@@ -1,4 +0,0 @@
.idea
*.pem
__pycache__
*.pyc

@@ -1,88 +0,0 @@
Kuryr Heat Templates
====================
This set of scripts and Heat templates is useful for deploying DevStack
scenarios. It handles the creation of an all-in-one DevStack nova instance and
its networking needs.
Prerequisites
~~~~~~~~~~~~~
Packages to install on the host you run devstack-heat (not on the cloud
server):
* python-openstackclient
After creating the instance, devstack-heat will immediately start creating a
devstack `stack` user and using devstack to stack kuryr-kubernetes. When it is
finished, there'll be a file named `/opt/stack/ready`.
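If you need to block until the deployment finishes, a simple poll for that
file works; the one-liner below is only an illustration (it assumes the SSH
key and `stack` user described later in this document)::

    until ssh -i ./<stack-name>.pem stack@<floating-ip> test -f /opt/stack/ready; do sleep 60; done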
How to run
~~~~~~~~~~
In order to run it, make sure you have reviewed the values in
`hot/parameters.yml` (especially the `image`, `flavor` and `public_net`
properties, the last one telling devstack-heat in which network to create the
floating IPs). The cloud credentials should be in
`~/.config/openstack/clouds.yaml`. Then the most basic run requires
executing::
./devstack_heat.py -c <cloud-name> stack -e hot/parameters.yml <stack-name>
This will deploy the latest master on cloud <cloud-name> in a stack
<stack-name>. You can also specify other sources than master::
--gerrit GERRIT ID of Kuryr Gerrit change
--commit COMMIT Kuryr commit ID
--branch BRANCH Kuryr branch
--devstack-branch DEVSTACK_BRANCH DevStack branch to use
Note that some of these options are mutually exclusive.
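For example, deploying an open review could look like this (the change number
and cloud name are purely illustrative)::

    ./devstack_heat.py -c mycloud stack --gerrit 12345 -e hot/parameters.yml my-stack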
Besides that you can customize deployments using those options::
-p KEY=VALUE, --parameter KEY=VALUE Heat stack parameters
--local-conf LOCAL_CONF URL to DevStack local.conf file
--bashrc BASHRC URL to bashrc file to put on VM
--additional-key ADDITIONAL_KEY URL to additional SSH key to add for
stack user
`stack` will save you a private key for the deployment in a `<stack-name>.pem`
file in the current directory.
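For reference, a minimal `hot/parameters.yml` could look like the following
sketch; the values shown are illustrative and simply restate the template
defaults::

    parameters:
      image: Ubuntu20.04
      flavor: m1.xlarge
      public_net: public

Likewise, `~/.config/openstack/clouds.yaml` only needs a named cloud entry;
all values below are hypothetical::

    clouds:
      mycloud:
        auth:
          auth_url: https://keystone.example.com/v3
          username: user
          password: secret
          project_name: demo
          user_domain_name: Default
          project_domain_name: Default
        region_name: RegionOne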
Getting inside the deployment
-----------------------------
You can then ssh into the deployment in two ways::
./devstack_heat.py show <stack-name>
Write down the FIP it tells you and then (this step might be skipped, as the
key should already be there)::
./devstack_heat.py key <stack-name> > ./<stack-name>.pem
Finally to get in (use the default username for the distro of your chosen
glance image, in the example below ubuntu)::
ssh -i ./<stack-name>.pem ubuntu@<floating-ip>
Alternatively, if you wait a bit, devstack-heat will have set up the devstack
stack user and you can just do::
./devstack_heat.py ssh <stack-name>
If you want to observe the progress of the installation you can use `join` to
make it stream `stack.sh` logs::
./devstack_heat.py join <stack-name>
Note that you can make `stack` join automatically using its `--join` option.
To delete the deployment::
./devstack_heat.py unstack <stack-name>
Supported images
----------------
The scripts were tested with the latest Ubuntu 20.04 cloud images.

devstack_heat.py

@@ -1,218 +0,0 @@
#!/usr/bin/env python3
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import os
import subprocess
import sys
import time
import openstack
from openstack import exceptions as o_exc
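# argparse Action that accumulates repeated KEY=VALUE arguments into a single
# dict; used below for the stack subcommand's -p/--parameter option.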
class ParseDict(argparse.Action):
def __call__(self, parser, namespace, values, option_string=None):
d = getattr(namespace, self.dest, {})
if not d:
d = {}
if values:
split_items = values.split("=", 1)
key = split_items[0].strip()
value = split_items[1]
d[key] = value
setattr(namespace, self.dest, d)
class DevStackHeat(object):
HOT_FILE = 'hot/devstack_heat_template.yml'
def __init__(self):
parser = self._get_arg_parser()
args = parser.parse_args()
if hasattr(args, 'func'):
self._setup_openstack(args.cloud)
args.func(args)
return
parser.print_help()
parser.exit()
def _get_arg_parser(self):
parser = argparse.ArgumentParser(
description="Deploy a DevStack VM with Kuryr-Kubernetes")
parser.add_argument('-c', '--cloud', help='name in clouds.yaml to use')
subparsers = parser.add_subparsers(help='supported commands')
stack = subparsers.add_parser('stack', help='run the VM')
stack.add_argument('name', help='name of the stack')
stack.add_argument('-e', '--environment', help='Heat stack env file',
default='hot/parameters.yml')
stack.add_argument('-p', '--parameter', help='Heat stack parameters',
metavar='KEY=VALUE',
action=ParseDict)
stack.add_argument('-j', '--join', help='SSH the stack and watch log',
action='store_true')
stack.add_argument('--local-conf',
help='URL to DevStack local.conf file')
stack.add_argument('--bashrc',
help='URL to bashrc file to put on VM')
source = stack.add_mutually_exclusive_group()
source.add_argument('--gerrit', help='ID of Kuryr Gerrit change')
source.add_argument('--commit', help='Kuryr commit ID')
source.add_argument('--branch', help='Kuryr branch')
stack.add_argument('--devstack-branch', help='DevStack branch to use',
default='master')
stack.add_argument('--additional-key', help='Additional SSH key to '
'add for stack user')
stack.set_defaults(func=self.stack)
unstack = subparsers.add_parser('unstack', help='delete the VM')
unstack.add_argument('name', help='name of the stack')
unstack.set_defaults(func=self.unstack)
key = subparsers.add_parser('key', help='get SSH key')
key.add_argument('name', help='name of the stack')
key.set_defaults(func=self.key)
show = subparsers.add_parser('show', help='show basic stack info')
show.add_argument('name', help='name of the stack')
show.set_defaults(func=self.show)
ssh = subparsers.add_parser('ssh', help='SSH to the stack')
ssh.add_argument('name', help='name of the stack')
ssh.set_defaults(func=self.ssh)
join = subparsers.add_parser('join', help='join watching logs of '
'DevStack installation')
join.add_argument('name', help='name of the stack')
join.set_defaults(func=self.join)
return parser
def _setup_openstack(self, cloud_name):
self.heat = openstack.connection.from_config(
cloud=cloud_name).orchestration
def _find_output(self, stack, name):
for output in stack.outputs:
if output['output_key'] == name:
return output['output_value']
return None
def _get_private_key(self, name):
stack = self.heat.find_stack(name)
if stack:
return self._find_output(stack, 'master_key_priv')
return None
def stack(self, args):
stack_attrs = self.heat.read_env_and_templates(
template_file=self.HOT_FILE, environment_files=[args.environment])
stack_attrs['name'] = args.name
stack_attrs['parameters'] = args.parameter or {}
if args.local_conf:
stack_attrs['parameters']['local_conf'] = args.local_conf
if args.bashrc:
stack_attrs['parameters']['bashrc'] = args.bashrc
if args.additional_key:
stack_attrs['parameters']['ssh_key'] = args.additional_key
if args.gerrit:
stack_attrs['parameters']['gerrit_change'] = args.gerrit
if args.commit:
stack_attrs['parameters']['git_hash'] = args.commit
if args.branch:
stack_attrs['parameters']['branch'] = args.branch
if args.devstack_branch:
stack_attrs['parameters']['devstack_branch'] = args.devstack_branch
print(f'Creating stack {args.name}')
stack = self.heat.create_stack(**stack_attrs)
print(f'Waiting for stack {args.name} to create')
self.heat.wait_for_status(stack, status='CREATE_COMPLETE',
failures=['CREATE_FAILED'], wait=600)
print(f'Stack {args.name} created')
print(f'Saving SSH key to {args.name}.pem')
key = self._get_private_key(args.name)
if not key:
print(f'Private key or stack {args.name} not found')
return
with open(f'{args.name}.pem', "w") as pemfile:
print(key, file=pemfile)
os.chmod(f'{args.name}.pem', 0o600)
if args.join:
time.sleep(120) # FIXME(dulek): This isn't pretty.
self.join(args)
def unstack(self, args):
stack = self.heat.find_stack(args.name)
if stack:
self.heat.delete_stack(stack)
try:
self.heat.wait_for_status(stack, status='DELETE_COMPLETE',
failures=['DELETE_FAILED'])
except o_exc.ResourceNotFound:
print(f'Stack {args.name} deleted')
print(f'Deleting SSH key {args.name}.pem')
os.unlink(f'{args.name}.pem')
else:
print(f'Stack {args.name} not found')
def key(self, args):
key = self._get_private_key(args.name)
if not key:
print(f'Private key or stack {args.name} not found')
return
print(key)
def show(self, args):
stack = self.heat.find_stack(args.name)
if not stack:
print(f'Stack {args.name} not found')
return
ips = self._find_output(stack, 'node_fips')
print(f'IPs: {", ".join(ips)}')
def _ssh(self, keyname, ip, command=None):
if not command:
command = []
subprocess.run(['ssh', '-i', keyname, f'stack@{ip}'] + command,
stdin=sys.stdin, stdout=sys.stdout)
def ssh(self, args, command=None):
stack = self.heat.find_stack(args.name)
if not stack:
print(f'Stack {args.name} not found')
return
ips = self._find_output(stack, 'node_fips')
if not ips:
print(f'Stack {args.name} has no IPs')
return
self._ssh(f'{args.name}.pem', ips[0], command)
def join(self, args):
stack = self.heat.find_stack(args.name)
if not stack:
print(f'Stack {args.name} not found')
return
ips = self._find_output(stack, 'node_fips')
if not ips:
print(f'Stack {args.name} has no IPs')
return
self.ssh(args, ['tail', '-f', '/opt/stack/devstack.log'])
if __name__ == '__main__':
DevStackHeat()

hot/devstack_heat_template.yml

@@ -1,121 +0,0 @@
heat_template_version: 2015-10-15
description: Simple template to deploy kuryr resources
parameters:
image:
type: string
label: Image name or ID
description: Image to be used for the kuryr nodes
default: Ubuntu20.04
flavor:
type: string
label: Flavor
description: Flavor to be used for the VM
default: m1.xlarge
public_net:
type: string
description: public network for the instances
default: public
vm_net_cidr:
type: string
description: vm_net network address (CIDR notation)
default: 10.11.0.0/24
vm_net_gateway:
type: string
description: vm_net network gateway address
default: 10.11.0.1
node_num:
type: number
description: Number of VMs
default: 1
local_conf:
type: string
label: local.conf file to use
description: URL of local.conf file to use when deploying DevStack
default: ""
gerrit_change:
type: string
label: Gerrit change to deploy Kuryr from
description: Gerrit change number to clone Kuryr from
default: ""
git_hash:
type: string
label: Commit from which to deploy Kuryr
description: Commit hash from which Kuryr should be deployed
default: ""
bashrc:
type: string
label: bashrc file URL
description: URL of bashrc file that will be appended for stack user
default: ""
branch:
type: string
label: Branch which should be deployed
description: E.g. master or stable/queens
default: ""
devstack_branch:
type: string
label: Branch which should be deployed
description: E.g. master or stable/queens
default: ""
ssh_key:
type: string
label: Additional SSH key
description: To be added for stack user.
default: ""
resources:
network:
type: OS::Kuryr::DevstackNetworking
properties:
public_net: { get_param: public_net }
vm_net_cidr: { get_param: vm_net_cidr }
vm_net_gateway: { get_param: vm_net_gateway }
master_key:
type: OS::Nova::KeyPair
properties:
name: { get_param: 'OS::stack_name' }
save_private_key: true
nodes:
type: OS::Heat::ResourceGroup
properties:
count: { get_param: node_num }
resource_def:
type: OS::Kuryr::DevstackNode
properties:
public_net: { get_param: public_net }
image: { get_param: image }
flavor: { get_param: flavor }
key: { get_resource: master_key }
local_conf: { get_param: local_conf }
gerrit_change: { get_param: gerrit_change }
branch: { get_param: branch }
devstack_branch: { get_param: devstack_branch }
ssh_key: { get_param: ssh_key }
git_hash: { get_param: git_hash }
bashrc: { get_param: bashrc }
private_key: { get_attr: [master_key, private_key] }
public_key: { get_attr: [master_key, public_key] }
vm_net: { get_attr: [network, vm_net_id] }
vm_subnet: { get_attr: [network, vm_subnet_id] }
vm_sg: { get_attr: [network, vm_sg_id] }
name:
str_replace:
template: "__stack__/vm-%index%"
params:
__stack__: { get_param: 'OS::stack_name' }
outputs:
node_fips:
value: { get_attr: [nodes, node_fip] }
vm_subnet:
value: { get_attr: [network, vm_subnet_id] }
vm_sg:
value: { get_attr: [network, vm_sg_id] }
master_key_pub:
value: { get_attr: [master_key, public_key] }
master_key_priv:
value: { get_attr: [master_key, private_key] }

distro_deps.sh

@@ -1,19 +0,0 @@
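# Detect the distribution from /etc/os-release and install the build
# dependencies DevStack needs on it.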
distro=$(awk -F'=' '/^ID=/ {print $2}' /etc/os-release)
distro="${distro%\"}"
distro="${distro#\"}"
if [[ "$distro" =~ centos|fedora ]]; then
yum install -y git python-devel
yum group install -y 'Development Tools'
if [[ "$distro" == "centos" ]]; then
yum install -y epel-release
sed -i -e '/Defaults requiretty/{ s/.*/# Defaults requiretty/ }' /etc/sudoers
fi
yum install -y jq
yum install -y python-pip
pip install -U setuptools
elif [[ "$distro" =~ ubuntu|debian ]]; then
apt update -y
apt upgrade -y
apt-get install -y build-essential git python-dev jq
fi

@@ -1,80 +0,0 @@
heat_template_version: 2014-10-16
description: Simple template to deploy kuryr resources
parameters:
public_net:
type: string
label: public net ID
description: Public network for the node FIPs
vm_net_cidr:
type: string
description: vm_net network address (CIDR notation)
vm_net_gateway:
type: string
description: vm_net network gateway address
resources:
vm_net:
type: OS::Neutron::Net
properties:
name:
str_replace:
template: __stack__/vm_net
params:
__stack__: { get_param: 'OS::stack_name' }
vm_subnet:
type: OS::Neutron::Subnet
properties:
network_id: { get_resource: vm_net }
cidr: { get_param: vm_net_cidr }
gateway_ip: { get_param: vm_net_gateway }
name:
str_replace:
template: __stack__/vm_subnet
params:
__stack__: { get_param: 'OS::stack_name' }
kuryr_router:
type: OS::Neutron::Router
properties:
external_gateway_info:
network: { get_param: public_net }
name:
str_replace:
template: __stack__/router
params:
__stack__: { get_param: 'OS::stack_name' }
kr_vm_iface:
type: OS::Neutron::RouterInterface
properties:
router_id: { get_resource: kuryr_router }
subnet_id: { get_resource: vm_subnet }
vm_sg:
type: OS::Neutron::SecurityGroup
properties:
name: vm_sg
description: Ping and SSH
rules:
- protocol: icmp
- ethertype: IPv4
remote_mode: remote_group_id
- ethertype: IPv6
remote_mode: remote_group_id
- protocol: tcp
port_range_min: 22
port_range_max: 22
- protocol: tcp
port_range_min: 8080
port_range_max: 8080
outputs:
vm_net_id:
value: { get_resource: vm_net }
vm_subnet_id:
value: { get_resource: vm_subnet }
vm_sg_id:
value: { get_resource: vm_sg }

@@ -1,206 +0,0 @@
heat_template_version: 2015-10-15
description: template to deploy devstack nodes
parameters:
public_net:
type: string
label: public net ID
description: Public network for the node FIPs
image:
type: string
label: Image name or ID
description: Image to be used for the kuryr nodes
flavor:
type: string
label: Flavor
description: Flavor to be used for the image
default: m1.small
key:
type: string
label: key name
description: Keypair to be used for the instance
public_key:
type: string
label: key content for stack user authorized_keys
description: private key to configure all nodes
private_key:
type: string
label: key content to access other nodes
description: private key to configure all nodes
vm_net:
type: string
label: VM Network
description: Neutron network for VMs
vm_subnet:
type: string
label: VM Subnet
description: Neutron subnet for VMs
vm_sg:
type: string
label: kubernetes API sg
description: Security Group for Kubernetes API
name:
type: string
label: Instance name
description: devstack node instance name
local_conf:
type: string
label: local.conf file to use
description: URL of local.conf file to use when deploying DevStack
gerrit_change:
type: string
label: Gerrit change to deploy Kuryr from
description: Gerrit change number to clone Kuryr from
git_hash:
type: string
label: Commit from which to deploy Kuryr
description: Commit hash from which Kuryr should be deployed
bashrc:
type: string
label: bashrc file URL
description: URL of bashrc file that will be injected for stack user
default: ""
branch:
type: string
label: Branch which should be deployed
description: E.g. master or stable/queens
default: ""
devstack_branch:
type: string
label: Branch which should be deployed
description: E.g. master or stable/queens
default: ""
ssh_key:
type: string
label: Additional SSH key
description: To be added for stack user.
default: ""
resources:
instance_port:
type: OS::Neutron::Port
properties:
network: { get_param: vm_net }
security_groups:
- default
- { get_param: vm_sg }
fixed_ips:
- subnet: { get_param: vm_subnet }
instance_fip:
type: OS::Neutron::FloatingIP
properties:
floating_network: { get_param: public_net }
port_id: { get_resource: instance_port }
instance:
type: OS::Nova::Server
properties:
name: { get_param: name }
image: { get_param: image }
flavor: { get_param: flavor }
key_name: { get_param: key }
networks:
- port: { get_resource: instance_port }
user_data_format: RAW
user_data:
str_replace:
params:
__distro_deps__: { get_file: distro_deps.sh }
__gerrit_change__: { get_param: gerrit_change }
__git_hash__: { get_param: git_hash }
__local_conf__: { get_param: local_conf }
__bashrc__: { get_param: bashrc }
__pubkey__: { get_param: public_key }
__branch__: { get_param: branch }
__devstack_branch__: { get_param: devstack_branch }
__ssh_key__: { get_param: ssh_key }
template: |
#!/bin/bash
set -ex
# Wait a bit for connectivity
sleep 30
# Stack user config
groupadd stack
useradd -s /bin/bash -d /opt/stack -m stack -g stack
mkdir /opt/stack/.ssh
cat > /opt/stack/.ssh/authorized_keys << EOF
__pubkey__
EOF
if [[ ! -z "__ssh_key__" ]]; then
curl "__ssh_key__" >> /opt/stack/.ssh/authorized_keys
fi
echo "stack ALL=(ALL) NOPASSWD: ALL" | tee /etc/sudoers.d/stack
curl "__bashrc__" >> /opt/stack/.bashrc
chown -R stack:stack /opt/stack
chmod 755 /opt/stack
# Deps for devstack
__distro_deps__
# Stacking
sudo -i -u stack /bin/bash - <<"EOF"
function get_from_gerrit() {
local gerrit_change
local ref
gerrit_change="__gerrit_change__"
echo "Finding latest ref for change ${gerrit_change}"
ref=$(curl -s "https://review.opendev.org/changes/${gerrit_change}?o=CURRENT_REVISION" | tail -n +2 | jq -r '.revisions[].ref')
echo "Fetching ref ${ref}"
git fetch https://opendev.org/openstack/kuryr-kubernetes "${ref}" && git checkout FETCH_HEAD
}
function get_from_sha() {
local commit_sha
commit_sha="__git_hash__"
echo "Sha to fetch: ${commit_sha}"
git checkout "$commit_sha"
}
cd /opt/stack
git clone https://opendev.org/openstack-dev/devstack
if [[ ! -z "__devstack_branch__" ]]; then
pushd devstack
git checkout "__devstack_branch__"
popd
fi
git clone https://github.com/openstack/kuryr-kubernetes
pushd kuryr-kubernetes
if [[ ! -z "__git_hash__" ]]; then
get_from_sha
elif [[ ! -z "__gerrit_change__" ]]; then
get_from_gerrit
elif [[ ! -z "__branch__" ]]; then
git checkout "__branch__"
else
"Deploying from master"
fi
popd
pushd devstack
if [[ -z "__local_conf__" ]]; then
# The change is already downloaded, do not reclone
sed -e 's/# RECLONE=/RECLONE=/' /opt/stack/kuryr-kubernetes/devstack/local.conf.sample > /opt/stack/devstack/local.conf
else
curl "__local_conf__" > /opt/stack/devstack/local.conf
fi
popd
touch stacking
pushd devstack
./stack.sh >> /opt/stack/devstack.log 2>&1
popd
touch ready
EOF
outputs:
node_fip:
description: FIP address of the node
value: { get_attr: [instance_fip, floating_ip_address] }

View File

@ -1,10 +0,0 @@
parameter_defaults:
vm_net_cidr: 10.11.0.0/24
vm_net_gateway: 10.11.0.1
public_net: 316eeb47-1498-46b4-b39e-00ddf73bd2a5
image: Ubuntu20.04
flavor: m1.xlarge
resource_registry:
OS::Kuryr::DevstackNetworking: networking_deployment.yaml
OS::Kuryr::DevstackNode: node.yaml
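Together with the ``networking_deployment.yaml`` and ``node.yaml`` mappings it registers, this environment file can be passed straight to Heat. A hypothetical invocation (the main template filename, environment file path and stack name are illustrative):

    $ openstack stack create --environment parameters.yml \
          --template devstack_nodes.yaml kuryr-devstack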

View File

@ -1,28 +0,0 @@
====================
Kuryr kubectl plugin
====================
This plugin aims to bring Kuryr introspection and interaction to the kubectl
and oc command line tools.
Installation
------------
Place the kuryr directory in your ~/.kube/plugins
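For instance, assuming your shell is in the directory that contains the
``kuryr`` plugin directory, a minimal sketch would be::

    $ mkdir -p ~/.kube/plugins
    $ cp -r kuryr ~/.kube/plugins/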
Usage
-----
The way to use it is via the kubectl/oc plugin facility::
kubectl plugin kuryr get vif -o wide -l deploymentconfig=demo
Media
-----
You can see an example of its operation:
.. image:: kubectl_kuryr_plugin_1080.gif

Binary file not shown.


View File

@ -1,186 +0,0 @@
#!/usr/bin/env python
import argparse
import base64
import json
import os
from os.path import expanduser
import sys
import tempfile
import urllib.parse
import yaml
import requests
from pprint import pprint
def _get_session_from_kubeconfig():
kubeconfig = expanduser('~/.kube/config')
with open(kubeconfig, 'r') as f:
conf = yaml.safe_load(f.read())
for context in conf['contexts']:
if context['name'] == conf['current-context']:
current_context = context
break
cluster_name = current_context['context']['cluster']
for cluster in conf['clusters']:
if cluster['name'] == cluster_name:
current_cluster = cluster
break
server = current_cluster['cluster']['server']
if server.startswith('https'):
ca_cert_data = current_cluster['cluster']['certificate-authority-data']
for user in conf['users']:
if user['name'] == current_context['context']['user']:
current_user = user
break
client_cert_data = current_user['user']['client-certificate-data']
client_key_data = current_user['user']['client-key-data']
client_cert_file = tempfile.NamedTemporaryFile(delete=False)
client_key_file = tempfile.NamedTemporaryFile(delete=False)
ca_cert_file = tempfile.NamedTemporaryFile(delete=False)
client_cert_file.write(base64.decodebytes(client_cert_data.encode()))
client_cert_file.close()
client_key_file.write(base64.decodebytes(client_key_data.encode()))
client_key_file.close()
ca_cert_file.write(base64.decodebytes(ca_cert_data.encode()))
ca_cert_file.close()
session = requests.Session()
session.cert = (client_cert_file.name, client_key_file.name)
session.verify = ca_cert_file.name
else:
session = requests.Session()
return session, server
def get(args):
session, server = _get_session_from_kubeconfig()
namespace = os.getenv('KUBECTL_PLUGINS_CURRENT_NAMESPACE')
if args.resource in ('vif', 'vifs'):
vifs(session, server, namespace, args)
def _vif_formatted_output(vif_data, wide=False):
max_len = 12
padding = 4
vif_data.insert(0,
{'pod_name': 'POD NAME',
'vif_name': 'VIF NAME',
'host_ip': 'HOST IP',
'plugin': 'BINDING',
'active': 'ACTIVE',
'address': 'IP ADDRESS',
'port_id': 'PORT ID',
'mac_address': 'MAC ADDRESS',
'vlan_id': 'VLAN'})
short_format = ('{pod_name:{tab_len:d}s} {vif_name:{tab_len:d}s} '
'{plugin:10s} {address:{tab_len:d}s} {vlan_id:4}')
long_format = ('{pod_name:{tab_len:d}s} {vif_name:{tab_len:d}s} '
'{plugin:10s} {address:{tab_len:d}s} {vlan_id:4} '
'{active:6} {host_ip:{tab_len:d}s} '
'{mac_address:{tab_len:d}s} {port_id:{tab_len:d}s}')
for vif in vif_data:
active = vif['active']
if isinstance(active, bool):
vif['active'] = 'yes' if active else 'no'
if 'vlan_id' not in vif:
vif['vlan_id'] = ''
if wide:
print(long_format.format(tab_len=max_len+padding, **vif))
else:
print(short_format.format(tab_len=max_len+padding, **vif))
def vifs(session, server, namespace, args):
url = '%s/api/v1/namespaces/%s/pods' % (server, namespace)
selector = os.getenv('KUBECTL_PLUGINS_LOCAL_FLAG_SELECTOR')
if selector:
url += '?labelSelector=' + urllib.parse.quote(selector)
output = os.getenv('KUBECTL_PLUGINS_LOCAL_FLAG_OUTPUT')
response = session.get(url)
if response.ok:
pods = response.json()
else:
sys.stderr.write('Failed to retrieve pod data')
sys.exit(1)
vif_data = []
for pod in pods['items']:
data = {'pod_name': pod['metadata']['name']}
if 'hostIP' in pod['status']:
data['host_ip'] = pod['status']['hostIP']
vif = pod['metadata']['annotations'].get('openstack.org/kuryr-vif')
if vif is None:
continue # not kuryr annotated
else:
vif = json.loads(vif)
if vif['versioned_object.name'] == 'PodState':
# This is the new format, fetch only default_vif from it.
vif = vif['versioned_object.data']['default_vif']
network = (vif['versioned_object.data']['network']
['versioned_object.data'])
first_subnet = (network['subnets']['versioned_object.data']
['objects'][0]['versioned_object.data'])
first_subnet_ip = (first_subnet['ips']['versioned_object.data']
['objects'][0]['versioned_object.data']['address'])
first_subnet_prefix = first_subnet['cidr'].split('/')[1]
data['vif_name'] = vif['versioned_object.data']['vif_name']
data['plugin'] = vif['versioned_object.data']['plugin']
data['active'] = vif['versioned_object.data']['active']
data['address'] = '%s/%s' % (first_subnet_ip, first_subnet_prefix)
data['port_id'] = vif['versioned_object.data']['id']
data['mac_address'] = vif['versioned_object.data']['address']
vlan_id = vif['versioned_object.data'].get('vlan_id')
if vlan_id is not None:
data['vlan_id'] = vlan_id
vif_data.append(data)
if output == 'json':
pprint(vif_data)
elif output == 'tabular':
_vif_formatted_output(vif_data)
elif output == 'wide':
_vif_formatted_output(vif_data, wide=True)
else:
sys.stderr.write('Unrecognized output format')
sys.exit(1)
if __name__ == '__main__':
parser = argparse.ArgumentParser(usage='kuryr [command] [options]')
subparsers = parser.add_subparsers(title='Available commands', metavar='')
get_parser = subparsers.add_parser(
'get',
usage='kuryr get [resource] [options]',
help='Gets Kuryr managed resource information.')
get_parser.add_argument(
'resource',
action='store',
choices=('vif',),
help='Resource to return info for.')
get_parser.set_defaults(func=get)
args = parser.parse_args()
args.func(args)

View File

@ -1,15 +0,0 @@
name: kuryr
shortDesc: "OpenStack kuryr tools"
tree:
- name: get
shortDesc: "Retrieves Kuryr managed resources"
command: "./kuryr get"
flags:
- name: selector
shorthand: l
desc: "Selects which pods to find kuryr vif info for"
defValue: ""
- name: output
shorthand: o
desc: How to format the output
defValue: tabular

View File

@ -1,321 +0,0 @@
=============================
Subport pools management tool
=============================
This tool makes it easier to deal with subport pools. It allows populating
the specified pools (i.e., the VM trunks) with a given number of subports, as
well as freeing the unused ones.
The first step to perform is to enable the pool manager by adding this to
``/etc/kuryr/kuryr.conf``::
[kubernetes]
enable_manager = True
If the environment has been deployed with devstack, the socket file directory
will have been created automatically. However, if that is not the case, you
need to create the directory for the socket file with the right permissions.
If no other path is specified, the default location for the socket file is:
``/run/kuryr/kuryr_manage.sock``
Hence, you need to create that directory and give it read/write access to the
user who is running the kuryr-kubernetes.service, for instance::
$ sudo mkdir -p /run/kuryr
$ sudo chown stack:stack /run/kuryr
Finally, restart kuryr-k8s-controller::
$ sudo systemctl restart devstack@kuryr-kubernetes.service
Populate subport pools for nested environment
---------------------------------------------
Once the nested environment is up and running, and the pool manager has been
started, we can populate the pools, i.e., the trunk ports in use by the
overcloud VMs, with subports. From the *undercloud* we just need to make use
of the subports.py tool.
To obtain information about the tool options::
$ python contrib/pools-management/subports.py -h
usage: subports.py [-h] {create,free} ...
Tool to create/free subports from the subport pools
positional arguments:
{create,free} commands
create Populate the pool(s) with subports
free Remove unused subports from the pools
optional arguments:
-h, --help show this help message and exit
And to obtain information about the create subcommand::
$ python contrib/pools-management/subports.py create -h
usage: subports.py create [-h] --trunks SUBPORTS [SUBPORTS ...] [-n NUM] [-t TIMEOUT]
optional arguments:
-h, --help show this help message and exit
--trunks SUBPORTS [SUBPORTS ...]
list of trunk IPs where subports will be added
-n NUM, --num-ports NUM
number of subports to be created per pool
-t TIMEOUT, --timeout TIMEOUT
set timeout for operation. Default is 180 sec
Then, we can check the existing (overcloud) VMs to use their (trunk) IPs to
later populate their respective pool::
$ openstack server list -f value -c Networks
net0=10.0.4.5
net0=10.0.4.6, 172.24.4.5
As can be seen, the second VM also has a floating IP associated, but we only
need to use the one belonging to `net0`. If we want to create and attach
a subport to the 10.0.4.5 trunk, and the respective pool, we just need to do::
$ python contrib/pools-management/subports.py create --trunks 10.0.4.5
As the number of ports to create is not specified, it only creates 1 subport
as this is the default value. We can check the result of this command with::
# Checking the subport named `available-port` has been created
$ openstack port list | grep available-port
| 1de77073-7127-4c39-a47b-cef15f98849c | available-port| fa:16:3e:64:7d:90 | ip_address='10.0.0.70', subnet_id='c3a8feb0-62b5-4b53-9235-af1ca93c2571' | ACTIVE |
# Checking the subport is attached to the VM trunk
$ openstack network trunk show trunk1
+-----------------+--------------------------------------------------------------------------------------------------+
| Field | Value |
+-----------------+--------------------------------------------------------------------------------------------------+
| admin_state_up | UP |
| created_at | 2017-08-28T15:06:54Z |
| description | |
| id | 9048c109-c1aa-4a41-9508-71b2ba98f3b0 |
| name | trunk1 |
| port_id | 4180a2e5-e184-424a-93d4-54b48490f50d |
| project_id | a05f6ec0abd04cba80cd160f8baaac99 |
| revision_number | 43 |
| status | ACTIVE |
| sub_ports | port_id='1de77073-7127-4c39-a47b-cef15f98849c', segmentation_id='3934', segmentation_type='vlan' |
| tags | [] |
| tenant_id | a05f6ec0abd04cba80cd160f8baaac99 |
| updated_at | 2017-08-29T06:12:39Z |
+-----------------+--------------------------------------------------------------------------------------------------+
It can be seen that the port with id `1de77073-7127-4c39-a47b-cef15f98849c`
has been attached to `trunk1`.
Similarly, we can add subports to different pools by including several IPs in
the `--trunks` option, and we can also modify the number of subports created
per pool with the `--num` option::
$ python contrib/pools-management/subports.py create --trunks 10.0.4.6 10.0.4.5 --num 3
This command will create 6 subports in total, 3 at trunk 10.0.4.5 and another
3 at trunk 10.0.4.6. So, to check the result of this command, as before::
$ openstack port list | grep available-port
| 1de77073-7127-4c39-a47b-cef15f98849c | available-port | fa:16:3e:64:7d:90 | ip_address='10.0.0.70', subnet_id='c3a8feb0-62b5-4b53-9235-af1ca93c2571' | ACTIVE |
| 52e52281-4692-45e9-935e-db77de44049a | available-port | fa:16:3e:0b:45:f6 | ip_address='10.0.0.73', subnet_id='c3a8feb0-62b5-4b53-9235-af1ca93c2571' | ACTIVE |
| 71245983-e15e-4ae8-9425-af255b54921b | available-port | fa:16:3e:e5:2f:90 | ip_address='10.0.0.68', subnet_id='c3a8feb0-62b5-4b53-9235-af1ca93c2571' | ACTIVE |
| b6a8aa34-feef-42d7-b7ce-f9c33ac499ca | available-port | fa:16:3e:0c:8c:b0 | ip_address='10.0.0.65', subnet_id='c3a8feb0-62b5-4b53-9235-af1ca93c2571' | ACTIVE |
| bee0cb3e-8d83-4942-8cdd-fc091b6e6058 | available-port | fa:16:3e:c2:0a:c6 | ip_address='10.0.0.74', subnet_id='c3a8feb0-62b5-4b53-9235-af1ca93c2571' | ACTIVE |
| c2d7b5c9-606d-4499-9981-0f94ec94f7e1 | available-port | fa:16:3e:73:89:d2 | ip_address='10.0.0.67', subnet_id='c3a8feb0-62b5-4b53-9235-af1ca93c2571' | ACTIVE |
| cb42940f-40c0-4e01-aa40-f3e9c5f6743f | available-port | fa:16:3e:49:73:ca | ip_address='10.0.0.66', subnet_id='c3a8feb0-62b5-4b53-9235-af1ca93c2571' | ACTIVE |
$ openstack network trunk show trunk0
+-----------------+--------------------------------------------------------------------------------------------------+
| Field | Value |
+-----------------+--------------------------------------------------------------------------------------------------+
| admin_state_up | UP |
| created_at | 2017-08-25T07:28:11Z |
| description | |
| id | c730ff56-69c2-4540-b3d4-d2978007236d |
| name | trunk0 |
| port_id | ad1b8e91-0698-473d-a2f2-d123e8a0af45 |
| project_id | a05f6ec0abd04cba80cd160f8baaac99 |
| revision_number | 381 |
| status | ACTIVE |
| sub_ports | port_id='bee0cb3e-8d83-4942-8cdd-fc091b6e6058', segmentation_id='875', segmentation_type='vlan' |
| | port_id='71245983-e15e-4ae8-9425-af255b54921b', segmentation_id='1446', segmentation_type='vlan' |
| | port_id='b6a8aa34-feef-42d7-b7ce-f9c33ac499ca', segmentation_id='1652', segmentation_type='vlan' |
| tags | [] |
| tenant_id | a05f6ec0abd04cba80cd160f8baaac99 |
| updated_at | 2017-08-29T06:19:24Z |
+-----------------+--------------------------------------------------------------------------------------------------+
$ openstack network trunk show trunk1
+-----------------+--------------------------------------------------------------------------------------------------+
| Field | Value |
+-----------------+--------------------------------------------------------------------------------------------------+
| admin_state_up | UP |
| created_at | 2017-08-28T15:06:54Z |
| description | |
| id | 9048c109-c1aa-4a41-9508-71b2ba98f3b0 |
| name | trunk1 |
| port_id | 4180a2e5-e184-424a-93d4-54b48490f50d |
| project_id | a05f6ec0abd04cba80cd160f8baaac99 |
| revision_number | 46 |
| status | ACTIVE |
| sub_ports | port_id='c2d7b5c9-606d-4499-9981-0f94ec94f7e1', segmentation_id='289', segmentation_type='vlan' |
| | port_id='cb42940f-40c0-4e01-aa40-f3e9c5f6743f', segmentation_id='1924', segmentation_type='vlan' |
| | port_id='52e52281-4692-45e9-935e-db77de44049a', segmentation_id='3866', segmentation_type='vlan' |
| | port_id='1de77073-7127-4c39-a47b-cef15f98849c', segmentation_id='3934', segmentation_type='vlan' |
| tags | [] |
| tenant_id | a05f6ec0abd04cba80cd160f8baaac99 |
| updated_at | 2017-08-29T06:19:28Z |
+-----------------+--------------------------------------------------------------------------------------------------+
We can see that now we have 7 subports, 3 of them attached to `trunk0` and 4
(1 + 3) attached to `trunk1`.
After that, if we create a new pod, we can see that the pre-created subports
are being used::
$ kubectl create deployment demo --image=quay.io/kuryr/demo
$ kubectl scale deploy/demo --replicas=2
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
demo-2293951457-0l35q 1/1 Running 0 8s
demo-2293951457-nlghf 1/1 Running 0 17s
$ openstack port list | grep demo
| 71245983-e15e-4ae8-9425-af255b54921b | demo-2293951457-0l35q | fa:16:3e:e5:2f:90 | ip_address='10.0.0.68', subnet_id='c3a8feb0-62b5-4b53-9235-af1ca93c2571' | ACTIVE |
| b6a8aa34-feef-42d7-b7ce-f9c33ac499ca | demo-2293951457-nlghf | fa:16:3e:0c:8c:b0 | ip_address='10.0.0.65', subnet_id='c3a8feb0-62b5-4b53-9235-af1ca93c2571' | ACTIVE |
Free pools for nested environment
---------------------------------
In addition to the create subcommand, there is a `free` command available
that allows removing the available ports either from a given pool (i.e., VM
trunk) or from all of them::
$ python contrib/pools-management/subports.py free -h
usage: subports.py free [-h] [--trunks SUBPORTS [SUBPORTS ...]] [-t TIMEOUT]
optional arguments:
-h, --help show this help message and exit
--trunks SUBPORTS [SUBPORTS ...]
list of trunk IPs where subports will be freed
-t TIMEOUT, --timeout TIMEOUT
set timeout for operation. Default is 180 sec
Following from the previous example, we can remove the available-ports
attached to a given pool, e.g.::
$ python contrib/pools-management/subports.py free --trunks 10.0.4.5
$ openstack network trunk show trunk1
+-----------------+--------------------------------------+
| Field | Value |
+-----------------+--------------------------------------+
| admin_state_up | UP |
| created_at | 2017-08-28T15:06:54Z |
| description | |
| id | 9048c109-c1aa-4a41-9508-71b2ba98f3b0 |
| name | trunk1 |
| port_id | 4180a2e5-e184-424a-93d4-54b48490f50d |
| project_id | a05f6ec0abd04cba80cd160f8baaac99 |
| revision_number | 94 |
| status | ACTIVE |
| sub_ports | |
| tags | [] |
| tenant_id | a05f6ec0abd04cba80cd160f8baaac99 |
| updated_at | 2017-08-29T06:40:18Z |
+-----------------+--------------------------------------+
Or from all the pools at once::
$ python contrib/pools-management/subports.py free
$ openstack port list | grep available-port
$ # returns nothing
List pools for nested environment
---------------------------------
There is a `list` command available to show information about the existing
pools, i.e., it prints out the pool keys (trunk_ip, project_id,
[security_groups]) and the number of available ports in each of them::
$ python contrib/pools-management/subports.py list -h
usage: subports.py list [-h] [-t TIMEOUT]
optional arguments:
-h, --help show this help message and exit
-t TIMEOUT, --timeout TIMEOUT
set timeout for operation. Default is 180 sec
As an example::
$ python contrib/pools-management/subports.py list
Content-length: 150
Pools:
["10.0.0.6", "9d2b45c4efaa478481c30340b49fd4d2", ["00efc78c-f11c-414a-bfcd-a82e16dc07d1", "fd6b13dc-7230-4cbe-9237-36b4614bc6b5"]] has 4 ports
Show pool for nested environment
--------------------------------
There is a `show` command available to print out information about a given
pool. It prints the ids of the ports associated with that pool::
$ python contrib/pools-management/subports.py show -h
usage: subports.py show [-h] --trunk TRUNK_IP -p PROJECT_ID --sg SG [SG ...]
[-t TIMEOUT]
optional arguments:
-h, --help show this help message and exit
--trunk TRUNK_IP Trunk IP of the desired pool
-p PROJECT_ID, --project-id PROJECT_ID
project id of the pool
--sg SG [SG ...] Security group ids of the pool
-t TIMEOUT, --timeout TIMEOUT
set timeout for operation. Default is 180 sec
As an example::
$ python contrib/pools-management/subports.py show --trunk 10.0.0.6 -p 9d2b45c4efaa478481c30340b49fd4d2 --sg 00efc78c-f11c-414a-bfcd-a82e16dc07d1 fd6b13dc-7230-4cbe-9237-36b4614bc6b5
Content-length: 299
Pool (u'10.0.0.6', u'9d2b45c4efaa478481c30340b49fd4d2', (u'00efc78c-f11c-414a-bfcd-a82e16dc07d1', u'fd6b13dc-7230-4cbe-9237-36b4614bc6b5')) ports are:
4913fbde-5939-4aef-80c0-7fcca0348871
864c8237-6ab4-4713-bec8-3d8bb6aa2144
8138134b-44df-489c-a693-3defeb2adb58
f5e107c6-f998-4416-8f17-a055269f2829
Without the script
------------------
Note that the same can be done without this script, by calling the REST API
directly with curl::
# To populate the pool
$ curl --unix-socket /run/kuryr/kuryr_manage.sock http://localhost/populatePool -H "Content-Type: application/json" -X POST -d '{"trunks": ["10.0.4.6"], "num_ports": 3}'
# To free the pool
$ curl --unix-socket /run/kuryr/kuryr_manage.sock http://localhost/freePool -H "Content-Type: application/json" -X POST -d '{"trunks": ["10.0.4.6"]}'
# To list the existing pools
$ curl --unix-socket /run/kuryr/kuryr_manage.sock http://localhost/listPools -H "Content-Type: application/json" -X GET -d '{}'
# To show a specific pool
$ curl --unix-socket /run/kuryr/kuryr_manage.sock http://localhost/showPool -H "Content-Type: application/json" -X GET -d '{"pool_key": ["10.0.0.6", "9d2b45c4efaa478481c30340b49fd4d2", ["00efc78c-f11c-414a-bfcd-a82e16dc07d1", "fd6b13dc-7230-4cbe-9237-36b4614bc6b5"]]}'
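These calls are also easy to script. Below is a minimal bash wrapper built
from the curl commands above; the endpoint names and default socket path come
from this document, while the helper itself is an illustrative sketch::

    #!/bin/bash
    # Thin wrapper around the pool manager REST API on its Unix socket.
    SOCK=${SOCK:-/run/kuryr/kuryr_manage.sock}

    pool_api() {
        local method=$1 endpoint=$2 body=${3:-'{}'}
        curl --unix-socket "$SOCK" "http://localhost/${endpoint}" \
            -H "Content-Type: application/json" -X "$method" -d "$body"
    }

    # Populate the 10.0.4.6 pool with 3 subports, then list all pools.
    pool_api POST populatePool '{"trunks": ["10.0.4.6"], "num_ports": 3}'
    pool_api GET listPools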

View File

@ -1,187 +0,0 @@
# Copyright 2017 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
from http import client as httplib
import socket
from oslo_serialization import jsonutils
from kuryr_kubernetes import constants
class UnixDomainHttpConnection(httplib.HTTPConnection):
def __init__(self, path, timeout):
httplib.HTTPConnection.__init__(
self, "localhost", timeout=timeout)
self.__unix_socket_path = path
self.timeout = timeout
def connect(self):
sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
sock.settimeout(self.timeout)
sock.connect(self.__unix_socket_path)
self.sock = sock
def create_subports(num_ports, trunk_ips, timeout=180):
method = 'POST'
body = jsonutils.dumps({"trunks": trunk_ips, "num_ports": num_ports})
headers = {'Content-Type': 'application/json', 'Connection': 'close'}
headers['Content-Length'] = len(body)
path = 'http://localhost{0}'.format(constants.VIF_POOL_POPULATE)
socket_path = constants.MANAGER_SOCKET_FILE
conn = UnixDomainHttpConnection(socket_path, timeout)
conn.request(method, path, body=body, headers=headers)
resp = conn.getresponse()
print(resp.read())
def delete_subports(trunk_ips, timeout=180):
method = 'POST'
body = jsonutils.dumps({"trunks": trunk_ips})
headers = {'Content-Type': 'application/json', 'Connection': 'close'}
headers['Content-Length'] = len(body)
path = 'http://localhost{0}'.format(constants.VIF_POOL_FREE)
socket_path = constants.MANAGER_SOCKET_FILE
conn = UnixDomainHttpConnection(socket_path, timeout)
conn.request(method, path, body=body, headers=headers)
resp = conn.getresponse()
print(resp.read())
def list_pools(timeout=180):
method = 'GET'
body = jsonutils.dumps({})
headers = {'Content-Type': 'application/json', 'Connection': 'close'}
headers['Content-Length'] = len(body)
path = 'http://localhost{0}'.format(constants.VIF_POOL_LIST)
socket_path = constants.MANAGER_SOCKET_FILE
conn = UnixDomainHttpConnection(socket_path, timeout)
conn.request(method, path, body=body, headers=headers)
resp = conn.getresponse()
print(resp.read())
def show_pool(trunk_ip, project_id, sg, timeout=180):
method = 'GET'
body = jsonutils.dumps({"pool_key": [trunk_ip, project_id, sg]})
headers = {'Content-Type': 'application/json', 'Connection': 'close'}
headers['Content-Length'] = len(body)
path = 'http://localhost{0}'.format(constants.VIF_POOL_SHOW)
socket_path = constants.MANAGER_SOCKET_FILE
conn = UnixDomainHttpConnection(socket_path, timeout)
conn.request(method, path, body=body, headers=headers)
resp = conn.getresponse()
print(resp.read())
def _get_parser():
parser = argparse.ArgumentParser(
description='Tool to create/free subports from the subports pool')
subparser = parser.add_subparsers(help='commands', dest='command')
create_ports_parser = subparser.add_parser(
'create',
help='Populate the pool(s) with subports')
create_ports_parser.add_argument(
'--trunks',
help='list of trunk IPs where subports will be added',
nargs='+',
dest='subports',
required=True)
create_ports_parser.add_argument(
'-n', '--num-ports',
help='number of subports to be created per pool.',
dest='num',
default=1,
type=int)
create_ports_parser.add_argument(
'-t', '--timeout',
help='set timeout for operation. Default is 180 sec',
dest='timeout',
default=180,
type=int)
delete_ports_parser = subparser.add_parser(
'free',
help='Remove unused subports from the pools')
delete_ports_parser.add_argument(
'--trunks',
help='list of trunk IPs where subports will be freed',
nargs='+',
dest='subports')
delete_ports_parser.add_argument(
'-t', '--timeout',
help='set timeout for operation. Default is 180 sec',
dest='timeout',
default=180,
type=int)
list_pools_parser = subparser.add_parser(
'list',
help='List available pools and the number of ports they have')
list_pools_parser.add_argument(
'-t', '--timeout',
help='set timeout for operation. Default is 180 sec',
dest='timeout',
default=180,
type=int)
show_pool_parser = subparser.add_parser(
'show',
help='Show the ports associated to a given pool')
show_pool_parser.add_argument(
'--trunk',
help='Trunk IP of the desired pool',
dest='trunk_ip',
required=True)
show_pool_parser.add_argument(
'-p', '--project-id',
help='project id of the pool',
dest='project_id',
required=True)
show_pool_parser.add_argument(
'--sg',
help='Security group ids of the pool',
dest='sg',
nargs='+',
required=True)
show_pool_parser.add_argument(
'-t', '--timeout',
help='set timeout for operation. Default is 180 sec',
dest='timeout',
default=180,
type=int)
return parser
def main():
"""Parse options and call the appropriate class/method."""
parser = _get_parser()
args = parser.parse_args()
if args.command == 'create':
create_subports(args.num, args.subports, args.timeout)
elif args.command == 'free':
delete_subports(args.subports, args.timeout)
elif args.command == 'list':
list_pools(args.timeout)
elif args.command == 'show':
show_pool(args.trunk_ip, args.project_id, args.sg, args.timeout)
if __name__ == '__main__':
main()

View File

@ -1,17 +0,0 @@
#!/bin/bash
set -o errexit
KURYR_DIR=${KURYR_DIR:-/opt/stack/kuryr-kubernetes}
KURYR_CONTROLLER_NAME=${KURYR_CONTROLLER_NAME:-kuryr-controller}
function build_tagged_container {
docker build -t kuryr/controller -f $KURYR_DIR/controller.Dockerfile $KURYR_DIR
}
function recreate_controller {
kubectl delete pods -n kube-system -l name=$KURYR_CONTROLLER_NAME
}
build_tagged_container
recreate_controller

View File

@ -1,88 +0,0 @@
#!/bin/bash
set -o errexit
# Exit early if python3 is not available.
python3 --version > /dev/null
KURYR_DIR=${KURYR_DIR:-./}
KURYR_API_PROTO="kuryr_kubernetes/pod_resources/api.proto"
# If API_VERSION is not specified assuming v1alpha1.
VERSION=${API_VERSION:-v1alpha1}
ACTIVATED="no"
ENV_DIR=$(mktemp -d -t kuryr-tmp-env-XXXXXXXXXX)
function cleanup() {
if [ "${ACTIVATED}" = "yes" ]; then deactivate; fi
rm -rf "${ENV_DIR}"
}
trap cleanup EXIT INT
if [ -z "${KUBERNETES_API_PROTO}" ]; then
echo "KUBERNETES_API_PROTO is not specified." \
"Trying to download api.proto from the k8s github."
pushd "${ENV_DIR}"
BASE_URL="https://raw.githubusercontent.com/kubernetes/kubernetes/master"
PROTO_FILE="pkg/kubelet/apis/podresources/${VERSION}/api.proto"
wget "${BASE_URL}/${PROTO_FILE}" -O api.proto
KUBERNETES_API_PROTO="$PWD/api.proto"
popd
fi
if [ ! -f "${KUBERNETES_API_PROTO}" ]; then
echo "Can't find ${KUBERNETES_API_PROTO}"
exit 1
fi
KUBERNETES_API_PROTO=$(readlink -e "${KUBERNETES_API_PROTO}")
pushd "${KURYR_DIR}"
# Obtaining api version from the proto file.
VERSION=$(grep package "${KUBERNETES_API_PROTO}" \
| sed 's/^package *\(.*\)\;$/\1/')
echo "\
// Generated from kubernetes/pkg/kubelet/apis/podresources/${VERSION}/api.proto
// To regenerate api.proto, api_pb2.py and api_pb2_grpc.py follow instructions
// from doc/source/devref/updating_pod_resources_api.rst.
" > ${KURYR_API_PROTO}
# Stripping unwanted dependencies.
sed '/gogoproto/d;/api.pb.go/d' "${KUBERNETES_API_PROTO}" >> ${KURYR_API_PROTO}
echo '' >> ${KURYR_API_PROTO}
# Stripping redundant empty lines.
sed -i '/^$/N;/^\n$/D' ${KURYR_API_PROTO}
# Creating new virtual environment.
python3 -m venv "${ENV_DIR}"
source "${ENV_DIR}/bin/activate"
ACTIVATED="yes"
pip install grpcio-tools==1.19
# Checking protobuf version.
protobuf_version=$(grep protobuf lower-constraints.txt \
| sed 's/^protobuf==\([0-9\.]*\)\.[0-9]*$/\1/')
protoc_version=$(python -m grpc_tools.protoc --version \
| sed 's/^libprotoc \([0-9\.]*\)\.[0-9]*$/\1/')
if [ "${protobuf_version}" != "${protoc_version}" ]; then
echo "protobuf version in lower-constraints.txt (${protobuf_version})" \
"!= installed protoc compiler version (${protoc_version})."
echo "Please, update requirements.txt and lower-constraints.txt or" \
"change version of grpcio-tools used in this script."
# Clearing api.proto to highlight the issue.
echo '' > ${KURYR_API_PROTO}
exit 1
fi
# Generating python bindings.
python -m grpc_tools.protoc -I./ \
--python_out=. --grpc_python_out=. ${KURYR_API_PROTO}
popd

View File

@ -1,31 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sctp
import socket
import sys
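# Minimal SCTP test client: connects to the given address and port, sends a
# greeting message and prints the server's reply.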
sk = sctp.sctpsocket_tcp(socket.AF_INET)
def connect_plus_message(OUT_IP, OUT_PORT):
sk.connect((OUT_IP, OUT_PORT))
print("Sending Message")
sk.sctp_send(msg='HELLO, I AM ALIVE!!!')
msgFromServer = sk.recvfrom(1024)
print(msgFromServer[0].decode('utf-8'))
sk.shutdown(0)
sk.close()
if __name__ == '__main__':
connect_plus_message(sys.argv[1], int(sys.argv[2]))

View File

@ -1,3 +0,0 @@
FROM scratch
ADD kuryr_testing_rootfs.tar.gz /
CMD ["/usr/bin/kuryr_hostname"]

View File

@ -1,43 +0,0 @@
#!/bin/bash
set -o errexit
function install_busybox {
if [[ -x $(command -v apt-get 2> /dev/null) ]]; then
sudo apt-get update
sudo apt-get install -y busybox-static gcc
elif [[ -x $(command -v dnf 2> /dev/null) ]]; then
sudo dnf install -y busybox gcc
elif [[ -x $(command -v yum 2> /dev/null) ]]; then
sudo yum install -y busybox gcc
elif [[ -x $(command -v pacman 2> /dev/null) ]]; then
sudo pacman -S --noconfirm busybox gcc
else
echo "unknown distro" 1>2
exit 1
fi
return 0
}
function make_root {
local root_dir
local binary
root_dir=$(mktemp -d)
mkdir -p "${root_dir}/bin" "${root_dir}/usr/bin"
binary=$(command -v busybox)
cp "$binary" "${root_dir}/bin/busybox"
"${root_dir}/bin/busybox" --install "${root_dir}/bin"
gcc --static hostname.c -o "${root_dir}/usr/bin/kuryr_hostname"
tar -C "$root_dir" -czvf kuryr_testing_rootfs.tar.gz bin usr
return 0
}
function build_container {
docker build -t kuryr/test_container .
}
install_busybox
make_root
build_container

View File

@ -1,129 +0,0 @@
#define _GNU_SOURCE
#include <arpa/inet.h>
#include <err.h>
#include <errno.h>
#include <linux/in.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>
#define MAX_LEN 1024
#define BACKLOG 10
#define LISTENING_PORT 8000
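/* Minimal test HTTP server: accepts TCP connections on port 8000 and answers
 * every request with a 200 response containing the container's hostname. */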
volatile sig_atomic_t running = 1;
volatile sig_atomic_t sig_number;
static void handler(int signo) {
sig_number = signo;
running = 0;
}
int main() {
struct sigaction sa = {
.sa_handler = handler,
.sa_flags = 0};
sigemptyset(&sa.sa_mask);
if (sigaction(SIGINT, &sa, NULL) == -1) {
err(1, "Failed to set SIGINT handler");
}
if (sigaction(SIGTERM, &sa, NULL) == -1) {
err(1, "Failed to set SIGTERM handler");
}
int enable = 1;
int result = 1;
char hostname[MAX_LEN];
int res = gethostname(hostname, MAX_LEN);
if (res < 0) {
err(1, "Failed to retrieve hostname");
}
char *response;
ssize_t responselen;
responselen = asprintf(&response, "HTTP/1.1 200 OK\r\n"
"Content-Type: text/html; charset=UTF-8\r\n\r\n"
"%s\r\n", hostname);
if (responselen == -1) {
err(1, "Failed to form response");
}
int sock = socket(AF_INET, SOCK_STREAM, 0);
if (sock < 0) {
perror("Failed to open socket");
goto nosocket;
}
res = setsockopt(sock, SOL_SOCKET, SO_REUSEADDR, &enable,
sizeof(int));
if (res < 0) {
perror("Failed to set socket options");
goto cleanup;
}
struct sockaddr_in srv = {
.sin_family = AF_INET,
.sin_port = htons(LISTENING_PORT),
.sin_addr = { .s_addr = INADDR_ANY}};
socklen_t addrlen= sizeof(srv);
res = bind(sock, (struct sockaddr *) &srv, (socklen_t) sizeof(srv));
if (res < 0) {
res = close(sock);
if (res == -1) {
perror("Failed close socket");
goto cleanup;
}
perror("Failed to bind socket");
goto cleanup;
}
res = listen(sock, BACKLOG);
if (res < 0) {
perror("Failed to set socket to listen");
goto cleanup;
}
while (running) {
struct sockaddr_in cli;
int client_fd = accept(sock, (struct sockaddr *) &cli,
&addrlen);
if (client_fd == -1) {
if (running) {
perror("failed to accept connection");
continue;
} else {
char *signame = strsignal(sig_number);
printf("Received %s. Quitting\n", signame);
break;
}
}
fprintf(stderr, "Accepted client connection\n");
/* Assume we write it all at once */
write(client_fd, response, responselen);
res = shutdown(client_fd, SHUT_RDWR);
if (res == -1) {
perror("Failed to shutdown client connection");
goto cleanup;
}
res = close(client_fd);
if (res == -1) {
perror("Failed to close client connection");
goto cleanup;
}
}
result = 0;
cleanup:
close(sock);
nosocket:
free(response);
return result;
}

View File

@ -1,99 +0,0 @@
====================================================
Vagrant based Kuryr-Kubernetes devstack installation
====================================================
Deploy kuryr-kubernetes with DevStack in a VM using `Vagrant`_. Vagrant
simplifies the life cycle of the local virtual machine and automates
repetitive tasks.
Requirements
------------
For comfortable work, here are minimal host requirements:
#. ``vagrant`` installed
#. 4 CPU cores
#. At least 8GB of RAM
#. Around 20GB of free disk space
Vagrant will create a VM with 2 cores, 6GB of RAM and a dynamically expanded
disk image.
Getting started
---------------
You'll need vagrant itself, i.e.:
.. code:: console
$ apt install vagrant virtualbox
Optionally, you can use libvirt instead of VirtualBox, although VirtualBox is
the easiest drop-in.
Next, clone the kuryr-kubernetes repository:
.. code:: console
$ git clone https://opendev.org/openstack/kuryr-kubernetes
Then run the provided Vagrantfile by executing:
.. code:: console
$ cd kuryr-kubernetes/contrib/vagrant
$ vagrant up
This can take a while depending on your host performance, typically 20
minutes or more.
After deployment is complete, you can access the VM via ssh:
.. code:: console
$ vagrant ssh
At this point you should have experimental kubernetes (etcdv3, k8s-apiserver,
k8s-controller-manager, k8s-scheduler, kubelet and kuryr-controller), docker,
OpenStack services (neutron, keystone, placement, nova, octavia), kuryr-cni and
kuryr-controller all up, running and pointing to each other. Pods and services
orchestrated by kubernetes will be backed by kuryr+neutron and Octavia. The
architecture of the setup `can be seen here`_.
Vagrant Options available
-------------------------
You can set the following environment variables before running `vagrant up` to
modify the definition of the Virtual Machine spawned (a combined example
follows the list):
* ``VAGRANT_KURYR_VM_BOX`` - to change the Vagrant Box used. Should be
available in `atlas <https://app.vagrantup.com/>`_. For example, for an
rpm-based option:
.. code:: console
$ export VAGRANT_KURYR_VM_BOX=centos/8
* ``VAGRANT_KURYR_VM_MEMORY`` - to modify the RAM of the VM. Defaulted to:
**6144**. If you plan to create multiple Kubernetes services on the setup and
the Octavia driver used is Amphora, you should increase this setting.
* ``VAGRANT_KURYR_VM_CPUS``: to modify the number of CPU cores for the VM.
Defaulted to: **2**.
* ``VAGRANT_KURYR_RUN_DEVSTACK`` - whether ``vagrant up`` should run devstack
to have an environment ready to use. Set it to 'false' if you want to edit
``local.conf`` before stacking devstack in the VM. Defaulted to: **true**.
See below for additional options for editing local.conf.
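For instance, to get a bigger rpm-based VM that does not stack automatically,
the options above could be combined as follows (an illustrative sketch):

.. code:: console

   $ export VAGRANT_KURYR_VM_BOX=centos/8
   $ export VAGRANT_KURYR_VM_MEMORY=8192
   $ export VAGRANT_KURYR_RUN_DEVSTACK=false
   $ vagrant up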
Additional devstack configuration
---------------------------------
To add additional configuration to local.conf before the VM is provisioned, you
can create a file called ``user_local.conf`` in the contrib/vagrant directory
of kuryr-kubernetes. This file will be appended to the "local.conf" created
during the Vagrant provisioning.
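For example, a hypothetical ``user_local.conf`` enabling the ports pool
optimization described in ``local.conf.sample`` could be created like this
(the chosen options are illustrative):

.. code:: console

   $ cat > user_local.conf << EOF
   [[local|localrc]]
   KURYR_USE_PORTS_POOLS=True
   KURYR_VIF_POOL_DRIVER=neutron
   EOF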
.. _Vagrant: https://www.vagrantup.com/
.. _can be seen here: https://docs.openstack.org/developer/kuryr-kubernetes/devref/kuryr_kubernetes_design.html

View File

@ -1,50 +0,0 @@
VAGRANTFILE_API_VERSION = "2"
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
VM_MEMORY = ENV.fetch('VAGRANT_KURYR_VM_MEMORY', 6144).to_i
VM_CPUS = ENV.fetch('VAGRANT_KURYR_VM_CPUS', 2).to_i
RUN_DEVSTACK = ENV.fetch('VAGRANT_KURYR_RUN_DEVSTACK', 'true')
config.vm.hostname = 'devstack'
config.vm.provider 'virtualbox' do |v, override|
override.vm.box = ENV.fetch('VAGRANT_KURYR_VM_BOX', 'generic/ubuntu2004')
v.memory = VM_MEMORY
v.cpus = VM_CPUS
v.customize "post-boot", ['controlvm', :id, 'setlinkstate1', 'on']
end
config.vm.provider 'parallels' do |v, override|
override.vm.box = ENV.fetch('VAGRANT_KURYR_VM_BOX', 'generic/ubuntu2004')
v.memory = VM_MEMORY
v.cpus = VM_CPUS
v.customize ['set', :id, '--nested-virt', 'on']
end
config.vm.provider 'libvirt' do |v, override|
override.vm.box = ENV.fetch('VAGRANT_KURYR_VM_BOX', 'generic/ubuntu2004')
v.memory = VM_MEMORY
v.cpus = VM_CPUS
v.nested = true
v.graphics_type = 'spice'
v.video_type = 'qxl'
end
config.vm.synced_folder '../../devstack/', '/devstack', type: 'rsync'
# For CentOS machines it needs to be specified
config.vm.synced_folder '.', '/vagrant', type: 'rsync'
config.vm.provision :shell do |s|
s.path = 'vagrant.sh'
s.args = RUN_DEVSTACK
end
if Vagrant.has_plugin?('vagrant-cachier')
config.cache.scope = :box
end
config.vm.network :forwarded_port, guest: 80, host_ip: "127.0.0.1", host: 8080
end

View File

@ -1,4 +0,0 @@
export OS_USERNAME=admin
export OS_PASSWORD=pass
export OS_PROJECT_NAME=admin
export OS_AUTH_URL=http://127.0.0.1/identity

View File

@ -1,62 +0,0 @@
#!/bin/bash
set -ex
echo $(whoami)
BASHPATH=$(dirname "$0")
RUN_DEVSTACK="$1"
echo "Run script from $BASHPATH"
# Copied shamelessly from Devstack
function GetOSVersion {
if [[ -x $(which lsb_release 2>/dev/null) ]]; then
os_FAMILY='Debian'
elif [[ -r /etc/redhat-release ]]; then
os_FAMILY='RedHat'
else
echo "Unsupported distribution!"
exit 1;
fi
}
GetOSVersion
if [[ "$os_FAMILY" == "Debian" ]]; then
export DEBIAN_FRONTEND=noninteractive
sudo apt-get update
sudo apt-get install -qqy git
elif [[ "$os_FAMILY" == "RedHat" ]]; then
sudo yum install -y -d 0 -e 0 git
fi
# determine checkout folder
PWD=$(getent passwd $OS_USER | cut -d: -f6)
DEVSTACK=$PWD/devstack
# check if devstack is already there
if [[ ! -d "$DEVSTACK" ]]
then
echo "Download devstack into $DEVSTACK"
# clone devstack
su "$OS_USER" -c "cd && git clone -b master https://github.com/openstack-dev/devstack.git $DEVSTACK"
echo "Copy configuration"
# copy local.conf.sample settings (source: kuryr/devstack/local.conf.sample)
cp /devstack/local.conf.sample $DEVSTACK/local.conf
# If local settings are present, append them
if [ -f "/vagrant/user_local.conf" ]; then
cat /vagrant/user_local.conf >> $DEVSTACK/local.conf
fi
chown "$OS_USER":"$OS_USER" "$DEVSTACK"/local.conf
fi
if $RUN_DEVSTACK; then
echo "Start Devstack"
su "$OS_USER" -c "cd $DEVSTACK && ./stack.sh"
else
echo "Virtual Machine ready. You can run devstack by executing '/home/$OS_USER/devstack/stack.sh'"
fi

View File

@ -1,25 +0,0 @@
#!/bin/sh
getent passwd vagrant > /dev/null
if [ $? -eq 0 ]; then
export OS_USER=vagrant
else
getent passwd ubuntu > /dev/null
if [ $? -eq 0 ]; then
export OS_USER=ubuntu
fi
fi
set -ex
export HOST_IP=127.0.0.1
# Enable IPv6
sudo sysctl -w net.ipv6.conf.default.disable_ipv6=0
sudo sysctl -w net.ipv6.conf.all.disable_ipv6=0
# run script
bash /vagrant/devstack.sh "$1"
#set environment variables for kuryr
su "$OS_USER" -c "echo 'source /vagrant/config/kuryr_rc' >> ~/.bash_profile"

View File

@ -1,32 +0,0 @@
FROM quay.io/centos/centos:stream9
LABEL authors="Antoni Segura Puimedon<toni@kuryr.org>, Michał Dulko<mdulko@redhat.com>"
ARG UPPER_CONSTRAINTS_FILE="https://releases.openstack.org/constraints/upper/master"
RUN dnf upgrade -y \
&& dnf install -y epel-release \
&& dnf install -y --setopt=tsflags=nodocs python3-pip libstdc++ \
&& dnf install -y --setopt=tsflags=nodocs gcc gcc-c++ python3-devel git
COPY . /opt/kuryr-kubernetes
ARG VIRTUAL_ENV=/opt/venv
RUN python3 -m venv $VIRTUAL_ENV
# This is enough to activate a venv
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
RUN pip3 --no-cache-dir install -U pip \
&& python3 -m pip install -c $UPPER_CONSTRAINTS_FILE --no-cache-dir /opt/kuryr-kubernetes \
&& dnf -y history undo last \
&& dnf clean all \
&& rm -rf /opt/kuryr-kubernetes \
&& groupadd -r kuryr -g 1000 \
&& useradd -u 1000 -g kuryr \
-d /opt/kuryr-kubernetes \
-s /sbin/nologin \
-c "Kuryr controller user" \
kuryr
USER kuryr
CMD ["--config-dir", "/etc/kuryr"]
ENTRYPOINT [ "kuryr-k8s-controller" ]

View File

@ -1 +0,0 @@
golang

View File

@ -1 +0,0 @@
golang

View File

@ -1,245 +0,0 @@
#!/bin/bash
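# Helper functions used by the Kuryr DevStack plugin to install kubeadm and
# to bootstrap, join or tear down a Kubernetes cluster with it.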
KURYR_KUBEADMIN_IMAGE_REPOSITORY="registry.k8s.io"
function get_k8s_log_level {
if [[ ${ENABLE_DEBUG_LOG_LEVEL} == "True" ]]; then
echo "4"
else
echo "2"
fi
}
function kubeadm_install {
if ! is_ubuntu && ! is_fedora; then
(>&2 echo "WARNING: kubeadm installation is not supported in this \
distribution.")
return
fi
if is_ubuntu; then
apt_get install apt-transport-https gpg
sudo mkdir -p -m 755 /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v${KURYR_KUBERNETES_VERSION%.*}/deb/Release.key | \
sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v'${KURYR_KUBERNETES_VERSION%.*}'/deb/ /' | \
sudo tee /etc/apt/sources.list.d/kubernetes.list
REPOS_UPDATED=False apt_get_update
# NOTE(gryf): kubectl will be installed alongside with the kubeadm as
# a dependency, although let's pin it to the k8s version as well.
kube_pkg_version=$(sudo apt-cache show kubeadm | grep "Version: $KURYR_KUBERNETES_VERSION-" | awk '{ print $2 }')
apt_get install \
kubelet="${kube_pkg_version}" \
kubeadm="${kube_pkg_version}" \
kubectl="${kube_pkg_version}"
sudo apt-mark hold kubelet kubeadm kubectl
# NOTE(hongbin): This works around an issue where kubelet picks a wrong
# IP address if the node has multiple network interfaces.
# See https://github.com/kubernetes/kubeadm/issues/203
echo "KUBELET_EXTRA_ARGS=--node-ip=$HOST_IP" | sudo tee -a \
/etc/default/kubelet
sudo systemctl daemon-reload && sudo systemctl restart kubelet
fi
if is_fedora; then
source /etc/os-release
os_VENDOR=$(echo $NAME | tr -d '[:space:]')
if [[ $os_VENDOR =~ "CentOS" ]]; then
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg \
https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
sudo chmod 755 /etc/yum.repos.d/kubernetes.repo
sudo dnf install kubeadm -y
fi
fi
}
function kubeadm_init {
local cluster_ip_ranges
local output_dir="${DATA_DIR}/kuryr-kubernetes"
local cgroup_driver
local cri_socket
mkdir -p "${output_dir}"
if [[ ${CONTAINER_ENGINE} == 'crio' ]]; then
local crio_conf="/etc/crio/crio.conf"
cgroup_driver=$(iniget ${crio_conf} crio.runtime cgroup_manager)
cri_socket="unix:///var/run/crio/crio.sock"
else
# docker is used
cgroup_driver=$(docker info -f '{{.CgroupDriver}}')
cri_socket="/var/run/dockershim.sock"
fi
cluster_ip_ranges=()
for service_subnet_id in ${KURYR_SERVICE_SUBNETS_IDS[@]}; do
service_cidr=$(openstack --os-cloud devstack-admin \
--os-region "$REGION_NAME" \
subnet show "$service_subnet_id" \
-c cidr -f value)
cluster_ip_ranges+=($(split_subnet "$service_cidr" | cut -f1))
done
# TODO(gryf): take care of the cri-o case as well
rm -f ${output_dir}/kubeadm-init.yaml
cat >> ${output_dir}/kubeadm-init.yaml << EOF
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
imageRepository: "${KURYR_KUBEADMIN_IMAGE_REPOSITORY}"
etcd:
external:
endpoints:
- "http://${SERVICE_HOST}:${ETCD_PORT}"
networking:
serviceSubnet: "$(IFS=, ; echo "${cluster_ip_ranges[*]}")"
apiServer:
extraArgs:
endpoint-reconciler-type: "none"
min-request-timeout: "300"
allow-privileged: "true"
v: "$(get_k8s_log_level)"
controllerManager:
extraArgs:
master: "$KURYR_K8S_API_URL"
min-resync-period: "3m"
v: "$(get_k8s_log_level)"
leader-elect: "false"
scheduler:
extraArgs:
master: "${KURYR_K8S_API_URL}"
v: "$(get_k8s_log_level)"
leader-elect: "false"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
bootstrapTokens:
- token: "${KURYR_K8S_TOKEN}"
ttl: 0s
localAPIEndpoint:
advertiseAddress: "${K8S_API_SERVER_IP}"
bindPort: ${K8S_API_SERVER_PORT}
nodeRegistration:
criSocket: "$cri_socket"
kubeletExtraArgs:
enable-server: "true"
taints:
[]
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
failSwapOn: false
address: "0.0.0.0"
enableServer: true
cgroupDriver: $cgroup_driver
EOF
sudo kubeadm config images pull --image-repository=${KURYR_KUBEADMIN_IMAGE_REPOSITORY}
args="--config ${output_dir}/kubeadm-init.yaml"
# NOTE(gryf): skip installing kube proxy, kuryr will handle services.
args+=" --skip-phases=addon/kube-proxy"
args+=" --ignore-preflight-errors Swap"
if ! is_service_enabled coredns; then
# FIXME(gryf): Do we need specific configuration for coredns?
args+=" --skip-phases=addon/coredns"
fi
sudo kubeadm init $args
local kube_config_file=$HOME/.kube/config
mkdir -p $(dirname ${kube_config_file})
sudo cp /etc/kubernetes/admin.conf $kube_config_file
safe_chown $STACK_USER:$STACK_USER $kube_config_file
}
function kubeadm_join {
local output_dir="${DATA_DIR}/kuryr-kubernetes"
local cgroup_driver
local cri_socket
mkdir -p "${output_dir}"
if [[ ${CONTAINER_ENGINE} == 'crio' ]]; then
local crio_conf="/etc/crio/crio.conf"
cgroup_driver=$(iniget ${crio_conf} crio.runtime cgroup_manager)
cri_socket="unix:///var/run/crio/crio.sock"
else
# docker is used
cgroup_driver=$(docker info -f '{{.CgroupDriver}}')
cri_socket="/var/run/dockershim.sock"
fi
cluster_ip_ranges=()
for service_subnet_id in ${KURYR_SERVICE_SUBNETS_IDS[@]}; do
service_cidr=$(openstack --os-cloud devstack-admin \
--os-region "$REGION_NAME" \
subnet show "$service_subnet_id" \
-c cidr -f value)
cluster_ip_ranges+=($(split_subnet "$service_cidr" | cut -f1))
done
# TODO(gryf): take care of the cri-o case as well
rm -f ${output_dir}/kubeadm-join.yaml
cat >> ${output_dir}/kubeadm-join.yaml << EOF
apiVersion: kubeadm.k8s.io/v1beta3
discovery:
bootstrapToken:
apiServerEndpoint: ${SERVICE_HOST}:${KURYR_K8S_API_PORT}
token: "${KURYR_K8S_TOKEN}"
unsafeSkipCAVerification: true
tlsBootstrapToken: "${KURYR_K8S_TOKEN}"
kind: JoinConfiguration
nodeRegistration:
criSocket: "$cri_socket"
kubeletExtraArgs:
enable-server: "true"
taints:
[]
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
failSwapOn: false
address: "0.0.0.0"
enableServer: true
cgroupDriver: $cgroup_driver
EOF
sudo -E kubeadm join --ignore-preflight-errors Swap \
--config ${output_dir}/kubeadm-join.yaml
}
function get_k8s_apiserver {
# The assumption is that there is no other cluster, so there is only one
# API server.
echo "$(kubectl config view -o jsonpath='{.clusters[].cluster.server}')"
}
function get_k8s_token {
local secret
secret=$(kubectl get secrets -o jsonpath='{.items[0].metadata.name}')
echo $(kubectl get secret $secret -o jsonpath='{.data.token}' | \
base64 -d)
}
function kubeadm_reset {
sudo kubeadm reset -f
sudo iptables -F
sudo iptables -t nat -F
sudo iptables -t mangle -F
sudo iptables -X
sudo ipvsadm -C
}
function kubeadm_uninstall {
sudo systemctl stop kubelet
apt_get purge --allow-change-held-packages kubelet kubeadm kubectl \
kubernetes-cni apt-transport-https
sudo add-apt-repository -r -y \
"deb https://apt.kubernetes.io/ kubernetes-xenial main"
REPOS_UPDATED=False apt_get_update
sudo rm -fr /etc/default/kubelet /etc/kubernetes
}

File diff suppressed because it is too large Load Diff

View File

@ -1,225 +0,0 @@
[[local|localrc]]
enable_plugin kuryr-kubernetes https://opendev.org/openstack/kuryr-kubernetes
# If you do not want stacking to clone new versions of the enabled services,
# like for example when you did local modifications and need to ./unstack.sh
# and ./stack.sh again, uncomment the following
# RECLONE="no"
# Log settings for better readability
LOGFILE=devstack.log
LOG_COLOR=False
# Credentials
ADMIN_PASSWORD=pass
DATABASE_PASSWORD=pass
RABBIT_PASSWORD=pass
SERVICE_PASSWORD=pass
SERVICE_TOKEN=pass
# Disable services to conserve resource usage
disable_service cinder
disable_service dstat
disable_service n-novnc
disable_service horizon
# If you plan to run tempest tests on devstack, you should comment out or
# remove the line below
disable_service tempest
# Neutron services
# ================
enable_plugin neutron https://opendev.org/openstack/neutron
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service q-svc
enable_service neutron-tag-ports-during-bulk-creation
# Disable OVN in favor of OVS
Q_AGENT="openvswitch"
Q_ML2_PLUGIN_MECHANISM_DRIVERS="openvswitch"
Q_ML2_TENANT_NETWORK_TYPE="vxlan"
# Set workaround for
FLOATING_RANGE="172.24.5.0/24"
PUBLIC_NETWORK_GATEWAY="172.24.5.1"
# VAR RUN PATH
# =============
# VAR_RUN_PATH=/var/run
# OCTAVIA
# =======
# Uncomment it to use L2 communication between loadbalancer and member pods
# KURYR_K8S_OCTAVIA_MEMBER_MODE=L2
# Uncomment to change Octavia loadbalancer listener client and member
# inactivity timeout from 50000ms.
# KURYR_TIMEOUT_CLIENT_DATA=50000
# KURYR_TIMEOUT_MEMBER_DATA=50000
# Octavia LBaaSv2
LIBS_FROM_GIT+=python-octaviaclient
enable_plugin octavia https://opendev.org/openstack/octavia
enable_service octavia
enable_service o-api
enable_service o-cw
enable_service o-hm
enable_service o-hk
## Octavia Deps
# In order to skip building the Octavia Amphora image you can fetch a
# precreated qcow image from here [1] and set up octavia to use it by
# uncommenting the following lines.
# [1] https://tarballs.openstack.org/octavia/test-images/test-only-amphora-x64-haproxy-ubuntu-xenial.qcow2
# OCTAVIA_AMP_IMAGE_FILE=/tmp/test-only-amphora-x64-haproxy-ubuntu-xenial.qcow2
# OCTAVIA_AMP_IMAGE_SIZE=3
# OCTAVIA_AMP_IMAGE_NAME=test-only-amphora-x64-haproxy-ubuntu-xenial
# CRI
# ===
# If you already have either CRI-O or Docker configured, running and with its
# socket writable by the stack user, you can omit the following lines.
enable_plugin devstack-plugin-container https://opendev.org/openstack/devstack-plugin-container
# We are using CRI-O by default. The version should match K8s version:
CONTAINER_ENGINE="crio"
CRIO_VERSION="1.28"
# Etcd
# ====
# The default is for devstack to run etcd for you. Remove comment to disable
# it, if you already have etcd running.
#disable_service etcd3
# If you already have an etcd cluster configured and running, you can just
# comment out the lines enabling legacy_etcd and etcd3
# then uncomment and set the following line:
# KURYR_ETCD_CLIENT_URL="http://etcd_ip:etcd_client_port"
# Kubernetes
# ==========
#
# Kubernetes is installed by kubeadm (which is installed from proper
# repository).
# If you already have a Kubernetes deployment, you can use it instead and omit
# enabling the Kubernetes service.
# TODO(gryf): review the part with existing cluster for kubelet
# configuration instead of running it via devstack - it needs to be
# configured to use our CNI.
#
# The default is, again, for devstack to run the Kubernetes services:
enable_service kubernetes-master
# If you have the 6443 port already bound to another service, you will need to
# have kubernetes API server bind to another port. In order to do that,
# uncomment and set a different port number in:
# KURYR_K8S_API_PORT="6443"
#
# If, however, you are reusing an existing deployment, you should uncomment and
# set an ENV var so that the Kubelet devstack runs can find the API server:
#
# TODO(gryf): revisit this scenario. Do we even support this in devstack?
#
# KURYR_K8S_API_URL="http (or https, if K8S is SSL/TLS enabled)://k8s_api_ip:k8s_api_port"
#
# If kubernetes API server is 'https' enabled, set path of the ssl cert files
# KURYR_K8S_API_CERT="/etc/kubernetes/certs/kubecfg.crt"
# KURYR_K8S_API_KEY="/etc/kubernetes/certs/kubecfg.key"
# KURYR_K8S_API_CACERT="/etc/kubernetes/certs/ca.crt"
# Kuryr watcher
# =============
#
# Just like the Kubelet, you'll want to have the watcher enabled. It is the
# part of the codebase that connects to the Kubernetes API server to read the
# resource events and convert them to Neutron actions
enable_service kuryr-kubernetes
# Kuryr Daemon
# ============
#
# Kuryr can run CNI plugin in daemonized way - i.e. kubelet will run kuryr CNI
# driver and the driver will pass requests to Kuryr daemon running on the node,
# instead of processing them on its own. This limits the number of Kubernetes
# API requests (as only Kuryr Daemon will watch for new pod events) and should
# increase scalability in environments that often delete and create pods.
# Since Rocky release this is a default deployment configuration.
enable_service kuryr-daemon
# Containerized Kuryr
# ===================
#
# Kuryr can be installed on Kubernetes as a pair of Deployment
# (kuryr-controller) and DaemonSet (kuryr-cni) or as systemd services. If you
# want DevStack to deploy Kuryr services as pods on Kubernetes, comment out
# (or remove) the next line.
KURYR_K8S_CONTAINERIZED_DEPLOYMENT=False
# Kuryr POD VIF Driver
# ====================
#
# Set up the VIF Driver to be used. The default one is the neutron-vif, but if
# a nested deployment is desired, the corresponding driver needs to be set,
# e.g.: nested-vlan or nested-macvlan
# KURYR_POD_VIF_DRIVER=neutron-vif
# Kuryr Enabled Handlers
# ======================
#
# By default, some Kuryr handlers are set for the DevStack installation. This
# can be further tweaked in order to enable additional ones such as Network
# Policy. If you want to add additional handlers, they can be set here:
# KURYR_ENABLED_HANDLERS=vif,endpoints,service,kuryrloadbalancer,kuryrport
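# As an illustration, enabling the Network Policy support mentioned above
# would extend that list. The extra handler names below are an assumption
# based on the Network Policy feature; double-check them against the Kuryr
# documentation before use:
# KURYR_ENABLED_HANDLERS=vif,endpoints,service,kuryrloadbalancer,kuryrport,kuryrnetworkpolicy,policy,pod_label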
# Kuryr Ports Pools
# =================
#
# To speed up container boot time, the kuryr ports pool driver can be enabled
# by uncommenting the next line, so that Neutron port resources are
# pre-created and ready to be used by the pods when needed
# KURYR_USE_PORTS_POOLS=True
#
# By default the pool driver is noop, i.e., there is no pool. If you want to
# use the pool optimizations, you need to set it to 'neutron' for the
# baremetal case, or to 'nested' for the nested case
# KURYR_VIF_POOL_DRIVER=noop
#
# There are extra configuration options for the pools: the minimum number of
# ports that should be ready to use at each pool, the maximum (0 to unset),
# and the batch size for the repopulation actions, i.e., the number of
# Neutron ports to create in bulk operations. Finally, the update frequency
# (in seconds) between actions over the pool can be set too
# KURYR_VIF_POOL_MIN=2
# KURYR_VIF_POOL_MAX=0
# KURYR_VIF_POOL_BATCH=5
# KURYR_VIF_POOL_UPDATE_FREQ=30
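# Purely as an illustration (the values below are arbitrary examples, not
# recommendations), a nested deployment with pooling enabled could combine
# the options above like this:
# KURYR_USE_PORTS_POOLS=True
# KURYR_VIF_POOL_DRIVER=nested
# KURYR_VIF_POOL_MIN=5
# KURYR_VIF_POOL_MAX=0
# KURYR_VIF_POOL_BATCH=10
# KURYR_VIF_POOL_UPDATE_FREQ=30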
# Kuryr VIF Pool Manager
# ======================
#
# Uncomment the next line to enable the pool manager. Note it requires the
# nested-vlan pod vif driver, as well as the ports pool being enabled and
# configured with the nested driver
# KURYR_VIF_POOL_MANAGER=True
# Kuryr Multi-VIF Driver
# ======================
# Uncomment the next line to enable the npwg multi-vif driver.
# Default value: noop
# KURYR_MULTI_VIF_DRIVER=npwg_multiple_interfaces
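# With the npwg driver enabled, additional interfaces are requested through
# the Network Plumbing WG 'k8s.v1.cni.cncf.io/networks' pod annotation. A
# minimal sketch (the pod and network names are hypothetical):
# kubectl annotate pod my-pod k8s.v1.cni.cncf.io/networks=net-a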
# Kuryr own router
# ================
# Uncomment the next line to force devstack to create a new router for kuryr
# networks instead of using the default one created by devstack
# KURYR_NEUTRON_DEFAULT_ROUTER=kuryr-router
IMAGE_URLS+=",http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img"
# Increase Octavia amphorae timeout so that the first LB amphora has time to
# build and boot
[[post-config|$OCTAVIA_CONF]]
[controller_worker]
amp_active_retries=9999


@ -1,71 +0,0 @@
[[local|localrc]]
RECLONE="no"
enable_plugin kuryr-kubernetes \
https://opendev.org/openstack/kuryr-kubernetes
OFFLINE="no"
LOGFILE=devstack.log
LOG_COLOR=False
ADMIN_PASSWORD=pass
DATABASE_PASSWORD=pass
RABBIT_PASSWORD=pass
SERVICE_PASSWORD=pass
SERVICE_TOKEN=pass
IDENTITY_API_VERSION=3
ENABLED_SERVICES=""
SERVICE_HOST=UNDERCLOUD_CONTROLLER_IP
MULTI_HOST=1
KEYSTONE_SERVICE_HOST=$SERVICE_HOST
MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST
KURYR_CONFIGURE_NEUTRON_DEFAULTS=True
KURYR_CONFIGURE_BAREMETAL_KUBELET_IFACE=False
# Set the subnet pool ID to use. You can use the one from the default
# undercloud devstack
KURYR_NEUTRON_DEFAULT_SUBNETPOOL_ID=UNDERCLOUD_SUBNETPOOL_V4_ID
# Set the router to use. You can use the one from the default undercloud
# devstack
KURYR_NEUTRON_DEFAULT_ROUTER=router1
enable_plugin devstack-plugin-container https://opendev.org/openstack/devstack-plugin-container
enable_service etcd3
enable_service kubernetes-master
enable_service kuryr-kubernetes
enable_service kuryr-daemon
KURYR_POD_VIF_DRIVER=nested-vlan
# Kuryr Ports Pools
# =================
#
# To speed up container boot time, the kuryr ports pool driver can be enabled
# by uncommenting the next line, so that Neutron port resources are
# pre-created and ready to be used by the pods when needed
# KURYR_USE_PORTS_POOLS=True
#
# By default the pool driver is noop, i.e., there is no pool. If you want to
# use the pool optimizations, you need to set it to 'neutron' for the
# baremetal case, or to 'nested' for the nested case
KURYR_VIF_POOL_DRIVER=nested
#
# There are extra configuration options for the pools: the minimum number of
# ports that should be ready to use at each pool, the maximum (0 to unset),
# and the batch size for the repopulation actions, i.e., the number of
# Neutron ports to create in bulk operations. Finally, the update frequency
# (in seconds) between actions over the pool can be set too
# KURYR_VIF_POOL_MIN=2
# KURYR_VIF_POOL_MAX=0
# KURYR_VIF_POOL_BATCH=5
# KURYR_VIF_POOL_UPDATE_FREQ=30
# Kuryr VIF Pool Manager
# ======================
#
# Uncomment the next line to enable the pool manager. Note it requires the
# nested-vlan pod vif driver, as well as the ports pool being enabled and
# configured with the nested driver
# KURYR_VIF_POOL_MANAGER=True


@ -1,42 +0,0 @@
[[local|localrc]]
# If you do not want stacking to clone new versions of the enabled services,
# for example when you have made local modifications and need to run
# ./unstack.sh and ./stack.sh again, uncomment the following
# RECLONE="no"
# Log settings for better readability
LOGFILE=devstack.log
LOG_COLOR=False
MULTI_HOST=1
# Credentials
ADMIN_PASSWORD=pass
DATABASE_PASSWORD=pass
RABBIT_PASSWORD=pass
SERVICE_PASSWORD=pass
SERVICE_TOKEN=pass
# Enable services; these services depend on the neutron plugin.
enable_plugin neutron https://opendev.org/openstack/neutron
enable_service q-trunk
# Octavia LBaaSv2
LIBS_FROM_GIT+=python-octaviaclient
enable_plugin octavia https://opendev.org/openstack/octavia
enable_service octavia
enable_service o-api
enable_service o-cw
enable_service o-hm
enable_service o-hk
## Octavia Deps
# In order to skip building the Octavia Amphora image, you can fetch a
# pre-created qcow image from [1] and set up Octavia to use it by
# uncommenting the following lines.
# [1] https://tarballs.openstack.org/octavia/test-images/test-only-amphora-x64-haproxy-ubuntu-xenial.qcow2
# OCTAVIA_AMP_IMAGE_FILE=/tmp/test-only-amphora-x64-haproxy-ubuntu-xenial.qcow2
# OCTAVIA_AMP_IMAGE_SIZE=3
# OCTAVIA_AMP_IMAGE_NAME=test-only-amphora-x64-haproxy-ubuntu-xenial
IMAGE_URLS+=",http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img"


@ -1,56 +0,0 @@
[[local|localrc]]
# If you do not want stacking to clone new versions of the enabled services,
# for example when you have made local modifications and need to run
# ./unstack.sh and ./stack.sh again, uncomment the following
# RECLONE="no"
# Log settings for better readability
LOGFILE=devstack.log
LOG_COLOR=False
MULTI_HOST=1
# Credentials
ADMIN_PASSWORD=pass
DATABASE_PASSWORD=pass
RABBIT_PASSWORD=pass
SERVICE_PASSWORD=pass
SERVICE_TOKEN=pass
TUNNEL_TYPE=vxlan
# Enable Keystone v3
IDENTITY_API_VERSION=3
# Octavia LBaaSv2
LIBS_FROM_GIT+=python-octaviaclient
enable_plugin octavia https://opendev.org/openstack/octavia
enable_service octavia
enable_service o-api
enable_service o-cw
enable_service o-hm
enable_service o-hk
## Octavia Deps
# In order to skip building the Octavia Amphora image, you can fetch a
# pre-created qcow image from [1] and set up Octavia to use it by
# uncommenting the following lines.
# [1] https://tarballs.openstack.org/octavia/test-images/test-only-amphora-x64-haproxy-ubuntu-xenial.qcow2
# OCTAVIA_AMP_IMAGE_FILE=/tmp/test-only-amphora-x64-haproxy-ubuntu-xenial.qcow2
# OCTAVIA_AMP_IMAGE_SIZE=3
# OCTAVIA_AMP_IMAGE_NAME=test-only-amphora-x64-haproxy-ubuntu-xenial
### Nova
enable_service n-api
enable_service n-api-meta
enable_service n-cpu
enable_service n-cond
enable_service n-sch
enable_service placement-api
enable_service placement-client
### Glance
enable_service g-api
enable_service g-reg
IMAGE_URLS+=",http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img"
[[post-config|/$Q_PLUGIN_CONF_FILE]]
[securitygroup]
firewall_driver = openvswitch


@ -1,210 +0,0 @@
[[local|localrc]]
enable_plugin kuryr-kubernetes https://opendev.org/openstack/kuryr-kubernetes
# If you do not want stacking to clone new versions of the enabled services,
# for example when you have made local modifications and need to run
# ./unstack.sh and ./stack.sh again, uncomment the following
# RECLONE="no"
# Log settings for better readability
LOGFILE=devstack.log
LOG_COLOR=False
# Credentials
ADMIN_PASSWORD=pass
DATABASE_PASSWORD=pass
RABBIT_PASSWORD=pass
SERVICE_PASSWORD=pass
SERVICE_TOKEN=pass
# Disable services to conserve resource usage
disable_service cinder
disable_service dstat
disable_service n-novnc
disable_service horizon
# If you plan to run tempest tests on devstack, you should comment out or
# remove the line below
disable_service tempest
# Neutron services
# ================
enable_plugin neutron https://opendev.org/openstack/neutron
enable_service neutron-tag-ports-during-bulk-creation
# VAR RUN PATH
# =============
# VAR_RUN_PATH=/var/run
# OCTAVIA
# =======
# Uncomment it to use L2 communication between the loadbalancer and member pods
# KURYR_K8S_OCTAVIA_MEMBER_MODE=L2
# Kuryr K8S-Endpoint driver Octavia provider
# ==========================================
# Kuryr uses LBaaS to provide the Kubernetes Services functionality.
# In case Octavia is used for LBaaS, you can choose Octavia's load balancer
# provider.
# KURYR_EP_DRIVER_OCTAVIA_PROVIDER=default
# Uncomment the next lines to enable the ovn provider. Note that only one mode
# is supported with ovn-octavia: as the member subnet must be specified when
# adding members, it must be set to L2 mode
KURYR_EP_DRIVER_OCTAVIA_PROVIDER=ovn
KURYR_K8S_OCTAVIA_MEMBER_MODE=L2
KURYR_ENFORCE_SG_RULES=False
KURYR_LB_ALGORITHM=SOURCE_IP_PORT
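# Once the stack is up, the enabled Octavia providers can be verified with
# the Octavia CLI (assuming python-octaviaclient is available, as configured
# via LIBS_FROM_GIT above):
# openstack loadbalancer provider list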
# Uncomment to modify listener client and member inactivity timeout.
# KURYR_TIMEOUT_CLIENT_DATA=50000
# KURYR_TIMEOUT_MEMBER_DATA=50000
# Octavia LBaaSv2
LIBS_FROM_GIT+=python-octaviaclient
enable_plugin octavia https://opendev.org/openstack/octavia
enable_service octavia
enable_service o-api
enable_service o-cw
enable_service o-hm
enable_service o-hk
enable_service o-da
# OVN octavia provider plugin
enable_plugin ovn-octavia-provider https://opendev.org/openstack/ovn-octavia-provider
# CRI
# ===
# If you already have either CRI-O or Docker configured and running, with its
# socket writable by the stack user, you can omit the following lines.
enable_plugin devstack-plugin-container https://opendev.org/openstack/devstack-plugin-container
# We are using CRI-O by default. The version should match the K8s version:
CONTAINER_ENGINE="crio"
CRIO_VERSION="1.28"
# Etcd
# ====
# The default is for devstack to run etcd for you. Remove the comment to
# disable it if you already have etcd running.
#disable_service etcd3
# If you already have an etcd cluster configured and running, you can just
# comment out the lines enabling legacy_etcd and etcd3,
# then uncomment and set the following line:
# KURYR_ETCD_CLIENT_URL="http://etcd_ip:etcd_client_port"
# Kubernetes
# ==========
#
# Kubernetes is installed by kubeadm (which is installed from the proper
# repository).
# If you already have a Kubernetes deployment, you can use it instead and omit
# enabling the Kubernetes service.
# TODO(gryf): review the part with an existing cluster for kubelet
# configuration instead of running it via devstack - it needs to be
# configured to use our CNI.
#
# The default is, again, for devstack to run the Kubernetes services:
enable_service kubernetes-master
# If port 6443 is already bound to another service, you will need to have the
# kubernetes API server bind to another port. In order to do that,
# uncomment and set a different port number in:
# KURYR_K8S_API_PORT="6443"
#
# If, however, you are reusing an existing deployment, you should uncomment
# and set an ENV var so that the Kubelet run by devstack can find the API
# server:
#
# TODO(gryf): revisit this scenario. Do we even support this in devstack?
#
# KURYR_K8S_API_URL="http (or https, if K8S is SSL/TLS enabled)://k8s_api_ip:k8s_api_port"
#
# If kubernetes API server is 'https' enabled, set path of the ssl cert files
# KURYR_K8S_API_CERT="/etc/kubernetes/certs/kubecfg.crt"
# KURYR_K8S_API_KEY="/etc/kubernetes/certs/kubecfg.key"
# KURYR_K8S_API_CACERT="/etc/kubernetes/certs/ca.crt"
# Kuryr watcher
# =============
#
# Just like the Kubelet, you'll want to have the watcher enabled. It is the
# part of the codebase that connects to the Kubernetes API server to read
# resource events and convert them to Neutron actions.
enable_service kuryr-kubernetes
# Kuryr Daemon
# ============
#
# Kuryr can run its CNI plugin in a daemonized way - i.e. kubelet will run the
# kuryr CNI driver and the driver will pass requests to the Kuryr daemon
# running on the node, instead of processing them on its own. This limits the
# number of Kubernetes API requests (as only the Kuryr daemon will watch for
# new pod events) and should increase scalability in environments that often
# delete and create pods. Since the Rocky release this is the default
# deployment configuration.
enable_service kuryr-daemon
# Containerized Kuryr
# ===================
#
# Kuryr can be installed on Kubernetes as a pair of Deployment
# (kuryr-controller) and DaemonSet (kuryr-cni) or as systemd services. If you
# want DevStack to deploy Kuryr services as pods on Kubernetes, comment out
# (or remove) the next line.
KURYR_K8S_CONTAINERIZED_DEPLOYMENT=False
# Kuryr POD VIF Driver
# ====================
#
# Set up the VIF Driver to be used. The default one is the neutron-vif, but if
# a nested deployment is desired, the corresponding driver needs to be set,
# e.g.: nested-vlan or nested-macvlan
# KURYR_POD_VIF_DRIVER=neutron-vif
# Kuryr Enabled Handlers
# ======================
#
# By default, some Kuryr handlers are set for the DevStack installation. This
# can be further tweaked in order to enable additional ones such as Network
# Policy. If you want to add additional handlers, they can be set here:
# KURYR_ENABLED_HANDLERS=vif,endpoints,service,kuryrloadbalancer,kuryrport
# Kuryr Ports Pools
# =================
#
# To speed up container boot time, the kuryr ports pool driver can be enabled
# by uncommenting the next line, so that Neutron port resources are
# pre-created and ready to be used by the pods when needed
# KURYR_USE_PORTS_POOLS=True
#
# By default the pool driver is noop, i.e., there is no pool. If you want to
# use the pool optimizations, you need to set it to 'neutron' for the
# baremetal case, or to 'nested' for the nested case
# KURYR_VIF_POOL_DRIVER=noop
#
# There are extra configuration options for the pools: the minimum number of
# ports that should be ready to use at each pool, the maximum (0 to unset),
# and the batch size for the repopulation actions, i.e., the number of
# Neutron ports to create in bulk operations. Finally, the update frequency
# (in seconds) between actions over the pool can be set too
# KURYR_VIF_POOL_MIN=2
# KURYR_VIF_POOL_MAX=0
# KURYR_VIF_POOL_BATCH=5
# KURYR_VIF_POOL_UPDATE_FREQ=30
# Kuryr VIF Pool Manager
# ======================
#
# Uncomment the next line to enable the pool manager. Note it requires the
# nested-vlan pod vif driver, as well as the ports pool being enabled and
# configured with the nested driver
# KURYR_VIF_POOL_MANAGER=True
#IMAGE_URLS+=",http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img"
# Increase Octavia amphorae timeout so that the first LB amphora has time to
# build and boot
[[post-config|$OCTAVIA_CONF]]
[controller_worker]
amp_active_retries=9999
[api_settings]
enabled_provider_drivers = amphora:'Octavia Amphora driver',ovn:'Octavia OVN driver'

View File

@ -1,60 +0,0 @@
[[local|localrc]]
enable_plugin kuryr-kubernetes \
https://opendev.org/openstack/kuryr-kubernetes
RECLONE="no"
# Log settings for better readability
LOGFILE=devstack.log
LOG_COLOR=False
# Credentials
ADMIN_PASSWORD=pass
DATABASE_PASSWORD=pass
RABBIT_PASSWORD=pass
SERVICE_PASSWORD=pass
SERVICE_TOKEN=pass
# Enable Keystone v3
IDENTITY_API_VERSION=3
# For the sake of speed and of being lightweight, we will be explicit about
# which services we enable
ENABLED_SERVICES=""
SERVICE_HOST=CONTROLLER_IP
MULTI_HOST=1
KEYSTONE_SERVICE_HOST=$SERVICE_HOST
MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST
KURYR_K8S_API_URL="http://${SERVICE_HOST}:8080"
# For a baremetal deployment, enable the SDN agent that should run on the
# worker node
# enable_service q-agt
# Docker
# ======
# If you already have docker configured and running, with its socket writable
# by the stack user, you can omit the following line.
enable_plugin devstack-plugin-container https://opendev.org/openstack/devstack-plugin-container
# Kubernetes
# ==========
#
# We are reusing an existing deployment on the master node; you should
# uncomment and set an ENV var so that the Kubelet run by devstack can find
# the API server:
# KURYR_K8S_API_URL="http (or https, if K8S is SSL/TLS enabled)://k8s_api_ip:k8s_api_port"
#
# Set neutron service subnet id/name
# KURYR_NEUTRON_DEFAULT_SERVICE_SUBNET=k8s-service-subnet
#
# For overcloud deployment uncomment this line
# KURYR_CONFIGURE_BAREMETAL_KUBELET_IFACE=False
# Kubelet
# =======
#
# Kubelet will be run via kubeadm
enable_service kubernetes-worker


@ -1,174 +0,0 @@
#!/bin/bash
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# Save trace setting
XTRACE=$(set +o | grep xtrace)
set -o xtrace
KURYR_CONT=$(trueorfalse False KURYR_K8S_CONTAINERIZED_DEPLOYMENT)
KURYR_OVS_BM=$(trueorfalse True KURYR_CONFIGURE_BAREMETAL_KUBELET_IFACE)
source $DEST/kuryr-kubernetes/devstack/lib/kuryr_kubernetes
source $DEST/kuryr-kubernetes/devstack/lib/kubernetes
if is_service_enabled kuryr-kubernetes kuryr-daemon \
kuryr-kubernetes-worker; then
# There are four services provided by this plugin.
#
# These two are needed for a non-containerized deployment; otherwise,
# run_process will not create systemd units and thus will not run the
# services. For a containerized one, kuryr-daemon can be omitted, but you'll
# still need kuryr-kubernetes to be able to install and run kuryr/k8s bits.
#
# * kuryr-kubernetes (no change from former version)
# * kuryr-daemon (no change from former version)
#
# These are new ones, and differentiate between the kubernetes master node
# and the worker node:
#
# * kubernetes-master (former: kubernetes-api, kubernetes-scheduler,
# kubernetes-controller-manager, kubelet)
# * kubernetes-worker (former: kubelet)
#
# The openshift-* services were removed, since they are not working
# anymore.
if [[ "$1" == "stack" && "$2" == "pre-install" ]]; then
echo_summary "Installing dependecies for Kuryr-Kubernetes"
if is_service_enabled kubernetes-master kubernetes-worker; then
kubeadm_install
fi
elif [[ "$1" == "stack" && "$2" == "install" ]]; then
echo_summary "Installing kuryr CNI and Controller"
setup_develop "$KURYR_HOME"
if [[ "${KURYR_CONT}" == "False" ]]; then
# Build CNI only for non-containerized deployment. For
# containerized CNI will be built within the images build.
build_install_kuryr_cni
fi
elif [[ "$1" == "stack" && "$2" == "post-config" ]]; then
echo_summary "Configure kuryr bits"
if is_service_enabled kuryr-daemon; then
create_kuryr_account
configure_kuryr
fi
elif [[ "$1" == "stack" && "$2" == "extra" ]]; then
echo_summary "Installing kubernetes and kuryr"
# Initialize and start the template service
if is_service_enabled kuryr-kubernetes; then
configure_neutron_defaults
fi
if is_service_enabled kubernetes-master kubernetes-worker; then
prepare_kubelet
fi
if is_service_enabled kubernetes-master; then
wait_for "etcd" "http://${SERVICE_HOST}:${ETCD_PORT}/v2/machines"
kubeadm_init
copy_kuryr_certs
elif is_service_enabled kubernetes-worker; then
kubeadm_join
fi
if [ "${KURYR_CONT}" == "True" ]; then
if is_service_enabled kubernetes-master; then
build_kuryr_container_image "controller"
build_kuryr_container_image "cni"
else
build_kuryr_container_image "cni"
fi
fi
if is_service_enabled kubernetes-master; then
# don't alter kubernetes config
# prepare_kubeconfig
prepare_kubernetes_files
fi
if is_service_enabled kubernetes-master kubernetes-worker; then
if [[ "${KURYR_OVS_BM}" == "True" ]]; then
ovs_bind_for_kubelet "$KURYR_NEUTRON_DEFAULT_PROJECT" 6443
else
configure_overcloud_vm_k8s_svc_sg
fi
fi
if is_service_enabled tempest; then
copy_tempest_kubeconfig
configure_k8s_pod_sg_rules
fi
if is_service_enabled kuryr-kubernetes; then
kubectl apply -f ${KURYR_HOME}/kubernetes_crds/kuryr_crds/
if [[ "${KURYR_CONT}" == "True" ]]; then
_generate_containerized_kuryr_resources
fi
if [ "$KURYR_MULTI_VIF_DRIVER" == "npwg_multiple_interfaces" ]; then
kubectl apply -f ${KURYR_HOME}/kubernetes_crds/network_attachment_definition_crd.yaml
fi
fi
elif [[ "$1" == "stack" && "$2" == "test-config" ]]; then
echo_summary "Run kuryr-kubernetes"
if is_service_enabled kuryr-kubernetes; then
if is_service_enabled octavia; then
create_lb_for_services
fi
# FIXME(dulek): This is a very late phase to start Kuryr services.
# We're doing it here because we need K8s API LB to be created in
# order to run kuryr services. Thing is Octavia is unable to
# create LB until test-config phase. We can revisit this once
# Octavia's DevStack plugin will get improved.
if [ "${KURYR_CONT}" == "True" ]; then
run_containerized_kuryr_resources
else
run_kuryr_kubernetes
run_kuryr_daemon
fi
fi
if is_service_enabled tempest; then
update_tempest_conf_file
fi
fi
if [[ "$1" == "unstack" ]]; then
# Shut down kuryr and kubernetes services
if is_service_enabled kuryr-kubernetes; then
if [ "${KURYR_CONT}" == "False" ]; then
stop_process kuryr-kubernetes
stop_process kuryr-daemon
fi
kubeadm_reset
fi
cleanup_kuryr_devstack_iptables
fi
if [[ "$1" == "clean" ]]; then
# Uninstall kubernetes pkgs, remove config files and kuryr cni
kubeadm_uninstall
uninstall_kuryr_cni
rm_kuryr_conf
fi
fi
# Restore xtrace
$XTRACE
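# For reference, DevStack invokes this plugin by sourcing it with phase
# arguments during stacking, roughly as follows (a sketch of DevStack's
# generic plugin mechanism, matching the phases handled above, not code from
# this repository):
# source $DEST/kuryr-kubernetes/devstack/plugin.sh stack pre-install
# source $DEST/kuryr-kubernetes/devstack/plugin.sh stack install
# source $DEST/kuryr-kubernetes/devstack/plugin.sh stack post-config
# source $DEST/kuryr-kubernetes/devstack/plugin.sh stack extra
# source $DEST/kuryr-kubernetes/devstack/plugin.sh stack test-config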


@ -1,101 +0,0 @@
KURYR_HOME=${KURYR_HOME:-$DEST/kuryr-kubernetes}
CNI_PLUGIN_DIR=${CNI_PLUGIN_DIR:-/opt/cni/bin}
CNI_CONF_DIR=${CNI_CONF_DIR:-/etc/cni/net.d}
KURYR_CONFIG_DIR=${KURYR_CONFIG_DIR:-/etc/kuryr}
KURYR_CONFIG=${KURYR_CONFIG:-${KURYR_CONFIG_DIR}/kuryr.conf}
KURYR_AUTH_CACHE_DIR=${KURYR_AUTH_CACHE_DIR:-/var/cache/kuryr}
KURYR_LOCK_DIR=${KURYR_LOCK_DIR:-${DATA_DIR}/kuryr-kubernetes}
KURYR_WAIT_TIMEOUT=${KURYR_WAIT_TIMEOUT:-300}
KURYR_DOCKER_ENGINE_SOCKET_FILE=${KURYR_DOCKER_ENGINE_SOCKET_FILE:-/var/run/docker.sock}
# Neutron defaults
KURYR_CONFIGURE_NEUTRON_DEFAULTS=${KURYR_CONFIGURE_NEUTRON_DEFAULTS:-True}
KURYR_NEUTRON_DEFAULT_PROJECT=${KURYR_NEUTRON_DEFAULT_PROJECT:-k8s}
KURYR_NEUTRON_DEFAULT_POD_NET=${KURYR_NEUTRON_DEFAULT_POD_NET:-k8s-pod-net}
KURYR_NEUTRON_DEFAULT_SERVICE_NET=${KURYR_NEUTRON_DEFAULT_SERVICE_NET:-k8s-service-net}
KURYR_NEUTRON_DEFAULT_POD_SUBNET=${KURYR_NEUTRON_DEFAULT_POD_SUBNET:-k8s-pod-subnet}
KURYR_NEUTRON_DEFAULT_SERVICE_SUBNET=${KURYR_NEUTRON_DEFAULT_SERVICE_SUBNET:-k8s-service-subnet}
KURYR_NEUTRON_DEFAULT_SUBNETPOOL_ID=${KURYR_NEUTRON_DEFAULT_SUBNETPOOL_ID:-}
KURYR_NEUTRON_DEFAULT_ROUTER=${KURYR_NEUTRON_DEFAULT_ROUTER:-}
KURYR_NEUTRON_DEFAULT_EXT_SVC_NET=${KURYR_NEUTRON_DEFAULT_EXT_SVC_NET:-public}
KURYR_NEUTRON_DEFAULT_EXT_SVC_SUBNET=${KURYR_NEUTRON_DEFAULT_EXT_SVC_SUBNET:-public-subnet}
# Etcd
ETCD_PORT=${ETCD_PORT:-2379}
# KUBERNETES
KURYR_KUBERNETES_DATA_DIR=${KURYR_KUBERNETES_DATA_DIR:-${DATA_DIR}/kubernetes}
KURYR_KUBERNETES_VERSION=${KURYR_KUBERNETES_VERSION:-1.28.2}
KURYR_K8S_API_PORT=${KURYR_K8S_API_PORT:-6443}
# NOTE(dulek): [kubernetes]api_root option will use LB IP instead.
KURYR_K8S_API_URL=${KURYR_K8S_API_URL:-"https://${SERVICE_HOST}:${KURYR_K8S_API_PORT}"}
KURYR_K8S_API_CERT=${KURYR_K8S_API_CERT:-"/etc/kubernetes/pki/apiserver-kubelet-client.crt"}
KURYR_K8S_API_KEY=${KURYR_K8S_API_KEY:-"/etc/kubernetes/pki/kuryr-client.key"}
KURYR_K8S_API_CACERT=${KURYR_K8S_API_CACERT:-}
KURYR_K8S_API_LB_PORT=${KURYR_K8S_API_LB_PORT:-443}
KURYR_PORT_DEBUG=${KURYR_PORT_DEBUG:-True}
KURYR_SUBNET_DRIVER=${KURYR_SUBNET_DRIVER:-default}
KURYR_SG_DRIVER=${KURYR_SG_DRIVER:-default}
KURYR_ENABLED_HANDLERS=${KURYR_ENABLED_HANDLERS:-vif,endpoints,service,kuryrloadbalancer,kuryrport}
KURYR_K8S_TOKEN=${KURYR_K8S_TOKEN:-5c54f8.34eb2d4f30bccf81}
# Octavia
KURYR_K8S_OCTAVIA_MEMBER_MODE=${KURYR_K8S_OCTAVIA_MEMBER_MODE:-L3}
KURYR_ENFORCE_SG_RULES=${KURYR_ENFORCE_SG_RULES:-True}
KURYR_LB_ALGORITHM=${KURYR_LB_ALGORITHM:-ROUND_ROBIN}
KURYR_TIMEOUT_CLIENT_DATA=${KURYR_TIMEOUT_CLIENT_DATA:-0}
KURYR_TIMEOUT_MEMBER_DATA=${KURYR_TIMEOUT_MEMBER_DATA:-0}
# Kuryr_ovs_baremetal
KURYR_CONFIGURE_BAREMETAL_KUBELET_IFACE=${KURYR_CONFIGURE_BAREMETAL_KUBELET_IFACE:-True}
# Kubernetes containerized deployment
KURYR_K8S_CONTAINERIZED_DEPLOYMENT=${KURYR_K8S_CONTAINERIZED_DEPLOYMENT:-True}
# Kuryr Endpoint LBaaS OCTAVIA provider
KURYR_EP_DRIVER_OCTAVIA_PROVIDER=${KURYR_EP_DRIVER_OCTAVIA_PROVIDER:-default}
# kuryr project driver
KURYR_PROJECT_DRIVER=${KURYR_PROJECT_DRIVER:-default}
# Kuryr VIF driver
KURYR_POD_VIF_DRIVER=${KURYR_POD_VIF_DRIVER:-neutron-vif}
# Kuryr Pool Driver
KURYR_USE_PORTS_POOLS=${KURYR_USE_PORTS_POOLS:-False}
KURYR_VIF_POOL_DRIVER=${KURYR_VIF_POOL_DRIVER:-noop}
KURYR_VIF_POOL_MIN=${KURYR_VIF_POOL_MIN:-2}
KURYR_VIF_POOL_MAX=${KURYR_VIF_POOL_MAX:-0}
KURYR_VIF_POOL_BATCH=${KURYR_VIF_POOL_BATCH:-5}
KURYR_VIF_POOL_UPDATE_FREQ=${KURYR_VIF_POOL_UPDATE_FREQ:-30}
# Kuryr VIF Pool Manager
KURYR_VIF_POOL_MANAGER=${KURYR_VIF_POOL_MANAGER:-False}
# Controller health server
KURYR_HEALTH_SERVER_PORT=${KURYR_HEALTH_SERVER_PORT:-8082}
# OVS HOST PATH
VAR_RUN_PATH=${VAR_RUN_PATH:-/var/run}
# CNI health server
KURYR_CNI_HEALTH_SERVER_PORT=${KURYR_CNI_HEALTH_SERVER_PORT:-8090}
# High availability of controller
KURYR_CONTROLLER_HA_PORT=${KURYR_CONTROLLER_HA_PORT:-16401}
KURYR_CONTROLLER_REPLICAS=${KURYR_CONTROLLER_REPLICAS:-1}
# Whether to use lower-constraints.txt when installing dependencies.
KURYR_CONTAINERS_USE_LOWER_CONSTRAINTS=${KURYR_CONTAINERS_USE_LOWER_CONSTRAINTS:-False}
# Kuryr overcloud VM port's name
KURYR_OVERCLOUD_VM_PORT=${KURYR_OVERCLOUD_VM_PORT:-port0}
KURYR_IPV6=${KURYR_IPV6:-False}
KURYR_DUAL_STACK=${KURYR_DUAL_STACK:-False}
SUBNETPOOL_KURYR_NAME_V6=${SUBNETPOOL_KURYR_NAME_V6:-"shared-kuryr-subnetpool-v6"}
# Support Pod Security Standards
KURYR_SUPPORT_POD_SECURITY=${KURYR_SUPPORT_POD_SECURITY:-False}
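# NOTE: every assignment above uses the ${VAR:-default} pattern, so any of
# these values can be overridden from DevStack's local.conf, which is read
# before this settings file is sourced. A minimal, illustrative override
# (the value is an example, not a recommendation):
# KURYR_VIF_POOL_DRIVER=nested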


@ -1,367 +0,0 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!-- Created with Inkscape (http://www.inkscape.org/) -->
<svg
xmlns:dc="http://purl.org/dc/elements/1.1/"
xmlns:cc="http://creativecommons.org/ns#"
xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
xmlns:svg="http://www.w3.org/2000/svg"
xmlns="http://www.w3.org/2000/svg"
xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
width="312.45016mm"
height="115.37309mm"
viewBox="0 0 1107.1069 408.80229"
id="svg2"
version="1.1"
inkscape:version="0.92.1 r"
sodipodi:docname="kuryr_k8s_components.svg"
inkscape:export-filename="/home/celebdor/Pictures/kuryr_k8s_components.png"
inkscape:export-xdpi="87.841972"
inkscape:export-ydpi="87.841972">
<defs
id="defs4" />
<sodipodi:namedview
id="base"
pagecolor="#ffffff"
bordercolor="#666666"
borderopacity="1.0"
inkscape:pageopacity="0.0"
inkscape:pageshadow="2"
inkscape:zoom="0.98994949"
inkscape:cx="801.50318"
inkscape:cy="174.45472"
inkscape:document-units="px"
inkscape:current-layer="layer1"
showgrid="false"
inkscape:window-width="1920"
inkscape:window-height="1016"
inkscape:window-x="0"
inkscape:window-y="27"
inkscape:window-maximized="1"
fit-margin-top="0"
fit-margin-left="0"
fit-margin-right="0"
fit-margin-bottom="0" />
<metadata
id="metadata7">
<rdf:RDF>
<cc:Work
rdf:about="">
<dc:format>image/svg+xml</dc:format>
<dc:type
rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
<dc:title></dc:title>
</cc:Work>
</rdf:RDF>
</metadata>
<g
inkscape:label="Layer 1"
inkscape:groupmode="layer"
id="layer1"
transform="translate(-302.04578,-262.43307)">
<g
id="g10788">
<path
id="rect4272-7-56"
d="m 314.07901,283.91968 0,11.05326 -0.0603,-6.51645 -5.78332,0 0,6.72648 5.84361,0 0,12.20448 -0.0604,-6.51645 -5.78331,0 0,6.72646 5.8436,0 0,5.12605 25.24709,0 0,-28.80383 -25.24709,0 z"
style="fill:#fcd95d;fill-opacity:1;stroke:#c39526;stroke-width:1.99129868;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
inkscape:connector-curvature="0" />
<path
sodipodi:nodetypes="ssssscc"
d="m 708.98112,415.51058 c -5.91999,0 -10.7191,-4.7991 -10.7191,-10.71909 0,-5.92 4.7991,-10.7191 10.7191,-10.7191 5.91999,0 10.71909,4.79911 10.71909,10.7191 0,5.91999 -4.7991,10.71909 -10.71909,10.71909 z m -209.72439,-10.7191 199.40037,0"
style="fill:#ffffff;fill-opacity:1;stroke:#000000;stroke-width:1.71899998;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
id="path7744"
inkscape:connector-curvature="0" />
<g
id="g10323">
<path
id="path9651"
style="fill:none;fill-opacity:1;fill-rule:evenodd;stroke:#c39526;stroke-width:2;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
d="m 303.21429,332.00506 196.07143,0 m -196.23994,-68.57199 195.9696,0 0,183.9598 -195.9696,0 z"
inkscape:connector-curvature="0" />
<path
inkscape:connector-curvature="0"
style="fill:#ffffff;fill-opacity:1;stroke:#000000;stroke-width:1.56733572;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
d="m 487.16969,415.85353 24.48063,0 0,-22.75823 -24.48063,0 z"
id="rect4272"
sodipodi:nodetypes="ccccc" />
</g>
<text
id="text10308"
y="293.79855"
x="350.54324"
style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
xml:space="preserve"><tspan
style="font-weight:bold;font-size:27.5px;line-height:93.99999976%"
y="293.79855"
x="350.54324"
id="tspan10310"
sodipodi:role="line">K8s API</tspan><tspan
style="font-weight:bold;font-size:27.5px;line-height:93.99999976%"
id="tspan10312"
y="319.64856"
x="350.54324"
sodipodi:role="line">server</tspan></text>
<text
id="text10308-7"
y="400.58347"
x="542.80176"
style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
xml:space="preserve"><tspan
style="font-weight:normal;font-size:30px;line-height:93.99999976%"
id="tspan10312-2"
y="400.58347"
x="542.80176"
sodipodi:role="line">Patch</tspan></text>
<path
sodipodi:nodetypes="ssssscc"
d="m 686.50743,529.42314 c -5.12686,-2.95999 -6.88346,-9.51569 -3.92347,-14.64255 2.96,-5.12687 9.5157,-6.88346 14.64257,-3.92346 5.12686,2.95999 6.88344,9.51569 3.92345,14.64256 -2.96,5.12685 -9.51569,6.88345 -14.64255,3.92345 z m -176.2671,-114.1452 172.68579,99.70018"
style="fill:#ffffff;fill-opacity:1;stroke:#000000;stroke-width:1.71899998;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
id="path7744-0"
inkscape:connector-curvature="0" />
<text
transform="rotate(30)"
id="text10308-7-2"
y="128.42073"
x="675.72711"
style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
xml:space="preserve"><tspan
style="font-weight:normal;font-size:30px;line-height:93.99999976%"
id="tspan10312-2-3"
y="128.42073"
x="675.72711"
sodipodi:role="line">Watch</tspan></text>
</g>
<text
xml:space="preserve"
style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
x="941.92871"
y="293.61118"
id="text10308-9-1"><tspan
sodipodi:role="line"
x="941.92871"
y="293.61118"
style="font-weight:bold;font-size:27.5px;line-height:93.99999976%"
id="tspan10499-9">Controller</tspan></text>
<g
id="g10777">
<g
id="g10258">
<path
inkscape:connector-curvature="0"
style="fill:#fcd95d;fill-opacity:1;stroke:#c39526;stroke-width:1.99129868;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
d="m 312.89554,491.27247 0,11.05326 -0.0603,-6.51645 -5.78332,0 0,6.72648 5.84361,0 0,12.20448 -0.0604,-6.51645 -5.78331,0 0,6.72646 5.8436,0 0,5.12605 25.24709,0 0,-28.80383 -25.24709,0 z"
id="rect4272-7-3" />
<path
inkscape:connector-curvature="0"
id="path9651-7-0"
style="fill:none;fill-opacity:1;fill-rule:evenodd;stroke:#c39526;stroke-width:2;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
d="m 304.36656,524.84756 196.07138,0 m -196.23989,-38.57199 195.96959,0 0,183.9598 -195.96959,0 z"
sodipodi:nodetypes="ccccccc" />
</g>
<path
sodipodi:nodetypes="ccccc"
id="rect4272-6"
d="m 491.84611,571.31481 24.48063,0 0,-22.75823 -24.48063,0 z"
style="fill:#ffffff;fill-opacity:1;stroke:#000000;stroke-width:1.56733572;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
inkscape:connector-curvature="0" />
<path
sodipodi:nodetypes="ccccc"
id="rect4272-6-1"
d="m 490.83596,655.33512 24.48063,0 0,-22.75824 -24.48063,0 z"
style="fill:#ffffff;fill-opacity:1;stroke:#000000;stroke-width:1.56733572;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
inkscape:connector-curvature="0" />
<path
sodipodi:nodetypes="ccccc"
d="m 676.87881,515.09533 c -6.11397,1.84782 -9.78976,8.07492 -8.4533,14.32065 1.34469,6.24318 7.25067,10.4132 13.58377,9.59021 m -165.80329,23.31018 153.01927,-33.07206"
style="fill:none;fill-opacity:1;stroke:#000000;stroke-width:1.78999996;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
id="path7157-1-7"
inkscape:connector-curvature="0" />
<text
id="text10308-2"
y="516.46832"
x="352.51373"
style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
xml:space="preserve"><tspan
style="font-weight:bold;font-size:27.5px;line-height:93.99999976%"
id="tspan10312-8"
y="516.46832"
x="352.51373"
sodipodi:role="line">Kubelet</tspan></text>
<path
sodipodi:nodetypes="ccccc"
d="m 674.18723,631.99181 c -6.36557,0.52403 -11.26595,5.84141 -11.26953,12.22852 0.005,6.38635 4.90471,11.70259 11.26953,12.22656 m -159.16189,-12.22663 148.71032,0"
style="fill:none;fill-opacity:1;stroke:#000000;stroke-width:1.78999996;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
id="path7157-1-78"
inkscape:connector-curvature="0" />
</g>
<g
transform="translate(600.21336)"
id="g10258-9">
<path
id="rect4272-7-3-3"
d="m 312.89554,491.27247 v 11.05326 l -0.0603,-6.51645 h -5.78332 v 6.72648 h 5.84361 v 12.20448 l -0.0604,-6.51645 h -5.78331 v 6.72646 h 5.8436 v 5.12605 h 25.24709 v -28.80383 h -25.24709 z"
style="fill:#fcd95d;fill-opacity:1;stroke:#c39526;stroke-width:1.99129868;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
inkscape:connector-curvature="0" />
<path
sodipodi:nodetypes="ccccccc"
d="M 304.36656,524.84756 H 500.43794 M 304.19805,486.27557 h 195.96959 v 183.9598 H 304.19805 Z"
style="fill:none;fill-opacity:1;fill-rule:evenodd;stroke:#c39526;stroke-width:2;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
id="path9651-7-0-6"
inkscape:connector-curvature="0" />
</g>
<path
inkscape:connector-curvature="0"
style="fill:#ffffff;fill-opacity:1;stroke:#000000;stroke-width:1.56733572;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
d="m 894.37159,610.89157 h 24.48063 v -22.75823 h -24.48063 z"
id="rect4272-6-5"
sodipodi:nodetypes="ccccc" />
<text
xml:space="preserve"
style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
x="938.94232"
y="516.61121"
id="text10308-9"><tspan
sodipodi:role="line"
x="938.94232"
y="516.61121"
style="font-weight:bold;font-size:27.5px;line-height:93.99999976%"
id="tspan10499">CNI Driver</tspan></text>
<path
inkscape:connector-curvature="0"
id="path7157-1-6"
style="fill:none;fill-opacity:1;stroke:#000000;stroke-width:1.78999996;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
d="m 698.30506,537.9638 c 6.0695,1.98902 12.65018,-1.00811 15.13377,-6.89258 2.47539,-5.8871 0.0246,-12.68883 -5.6372,-15.64331 m 186.64511,89.93239 -181.7617,-74.60665"
sodipodi:nodetypes="ccccc" />
<path
inkscape:connector-curvature="0"
style="fill:#ffffff;fill-opacity:1;stroke:#000000;stroke-width:1.56733572;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
d="m 894.37159,655.33512 h 24.48063 v -22.75824 h -24.48063 z"
id="rect4272-6-1-3"
sodipodi:nodetypes="ccccc" />
<path
inkscape:connector-curvature="0"
id="path7744-1"
style="fill:#ffffff;fill-opacity:1;stroke:#000000;stroke-width:1.71899998;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
d="m 684.06996,632.64311 c 5.91999,0 10.7191,4.7991 10.7191,10.71909 0,5.92 -4.7991,10.7191 -10.7191,10.7191 -5.91999,0 -10.71909,-4.79911 -10.71909,-10.7191 0,-5.91999 4.7991,-10.71909 10.71909,-10.71909 z m 209.72439,10.7191 H 694.39398"
sodipodi:nodetypes="ssssscc" />
<text
xml:space="preserve"
style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
x="754.02399"
y="640.83807"
id="text10308-7-5"><tspan
sodipodi:role="line"
x="754.02399"
y="640.83807"
id="tspan10312-2-0"
style="font-weight:normal;font-size:30px;line-height:93.99999976%">CNI 0.3.1</tspan></text>
<g
id="g10768">
<path
sodipodi:nodetypes="ccccc"
d="m 716.24135,416.7326 c 6.36557,-0.52403 11.26595,-5.84141 11.26953,-12.22852 -0.005,-6.38635 -4.90471,-11.70259 -11.26953,-12.22656 m 178.7336,12.22663 -168.28203,0"
style="fill:none;fill-opacity:1;stroke:#000000;stroke-width:1.78999996;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
id="path7157-1"
inkscape:connector-curvature="0" />
<g
id="g10283">
<path
inkscape:connector-curvature="0"
style="fill:#fcd95d;fill-opacity:1;stroke:#c39526;stroke-width:1.99129868;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
d="m 914.733,269.45962 0,11.05326 -0.0603,-6.51645 -5.78332,0 0,6.72648 5.84361,0 0,12.20448 -0.0604,-6.51645 -5.78331,0 0,6.72646 5.8436,0 0,5.12605 25.24709,0 0,-28.80383 -25.24709,0 z"
id="rect4272-7" />
<path
inkscape:connector-curvature="0"
id="path9651-7"
style="fill:none;fill-opacity:1;fill-rule:evenodd;stroke:#c39526;stroke-width:2;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
d="m 904.57992,303.50743 196.07138,0 m -196.23989,-38.57199 195.96959,0 0,183.9598 -195.96959,0 z"
sodipodi:nodetypes="ccccccc" />
</g>
<path
sodipodi:nodetypes="ccccc"
id="rect4272-2-1"
d="m 1092.7597,416.23354 24.4806,0 0,-22.75823 -24.4806,0 z"
style="fill:#ffffff;fill-opacity:1;stroke:#000000;stroke-width:1.56733572;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
inkscape:connector-curvature="0" />
<path
sodipodi:nodetypes="ccccc"
id="rect4272-2"
d="m 894.37159,416.23354 24.48063,0 0,-22.75823 -24.48063,0 z"
style="fill:#ffffff;fill-opacity:1;stroke:#000000;stroke-width:1.56733572;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
inkscape:connector-curvature="0" />
<path
sodipodi:nodetypes="ccccc"
d="m 1160.1615,430.93777 c -5.1038,-3.84018 -12.2995,-3.12547 -16.5481,1.64362 -4.238,4.77756 -4.1155,12.0029 0.2911,16.62552 m -26.4881,-38.86841 26.8081,22.78662"
style="fill:none;fill-opacity:1;stroke:#000000;stroke-width:1.78999996;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
id="path7157-1-78-3"
inkscape:connector-curvature="0" />
</g>
<g
id="g10744">
<g
id="g10258-9-0"
transform="translate(907.98501,-110.67008)">
<path
inkscape:connector-curvature="0"
style="fill:#fcd95d;fill-opacity:1;stroke:#c39526;stroke-width:1.99129868;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
d="m 312.89554,491.27247 0,11.05326 -0.0603,-6.51645 -5.78332,0 0,6.72648 5.84361,0 0,12.20448 -0.0604,-6.51645 -5.78331,0 0,6.72646 5.8436,0 0,5.12605 25.24709,0 0,-28.80383 -25.24709,0 z"
id="rect4272-7-3-3-6" />
<path
inkscape:connector-curvature="0"
id="path9651-7-0-6-2"
style="fill:none;fill-opacity:1;fill-rule:evenodd;stroke:#c39526;stroke-width:2;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
d="m 304.36656,524.84756 196.07138,0 m -196.23989,-38.57199 195.96959,0 0,183.9598 -195.96959,0 z"
sodipodi:nodetypes="ccccccc" />
</g>
<text
id="text10308-9-3"
y="403.61429"
x="1259.6802"
style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
xml:space="preserve"><tspan
id="tspan10499-6"
style="font-weight:bold;font-size:27.5px;line-height:93.99999976%"
y="403.61429"
x="1259.6802"
sodipodi:role="line">Neutron</tspan></text>
<path
sodipodi:nodetypes="ccccc"
id="rect4272-2-1-0"
d="m 1200.9529,496.03559 24.4806,0 0,-22.75823 -24.4806,0 z"
style="fill:#ffffff;fill-opacity:1;stroke:#000000;stroke-width:1.56733572;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
inkscape:connector-curvature="0" />
<path
sodipodi:nodetypes="ssssscc"
d="m 1168.0237,442.14629 c 4.5321,3.80883 5.1183,10.57041 1.3095,15.10242 -3.8088,4.53202 -10.5704,5.11828 -15.1024,1.30943 -4.532,-3.80876 -5.1182,-10.57036 -1.3094,-15.10236 3.8089,-4.53201 10.5704,-5.11827 15.1023,-1.30949 z m 32.4384,42.124 -31.4313,-27.27571"
style="fill:#ffffff;fill-opacity:1;stroke:#000000;stroke-width:1.71899998;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
id="path7744-1-6"
inkscape:connector-curvature="0" />
<text
transform="rotate(45.724829)"
id="text10308-7-5-2"
y="-494.80469"
x="1149.7618"
style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
xml:space="preserve"><tspan
style="font-weight:normal;font-size:30px;line-height:93.99999976%"
id="tspan10312-2-0-0"
y="-494.80469"
x="1149.7618"
sodipodi:role="line">v2</tspan></text>
</g>
<path
sodipodi:nodetypes="ccccc"
d="m 705.52528,517.47621 c 5.83163,-2.60522 8.69148,-9.24671 6.57688,-15.27362 -2.12245,-6.02335 -8.50781,-9.41404 -14.68625,-7.79777 M 894.67125,430.61521 711.33051,502.4739"
style="fill:none;fill-opacity:1;stroke:#000000;stroke-width:1.78999996;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
id="path7157-1-6-3"
inkscape:connector-curvature="0" />
<path
sodipodi:nodetypes="ccccc"
id="rect4272-2-6"
d="m 894.37159,447.03412 h 24.48063 v -22.75823 h -24.48063 z"
style="fill:#ffffff;fill-opacity:1;stroke:#000000;stroke-width:1.56733572;stroke-miterlimit:10;stroke-dasharray:none;stroke-opacity:1"
inkscape:connector-curvature="0" />
</g>
</svg>


@ -1,687 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
<svg version="1.2" width="215.9mm" height="279.4mm" viewBox="0 0 21590 27940" preserveAspectRatio="xMidYMid" fill-rule="evenodd" stroke-width="28.222" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg" xmlns:ooo="http://xml.openoffice.org/svg/export" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:presentation="http://sun.com/xmlns/staroffice/presentation" xmlns:smil="http://www.w3.org/2001/SMIL20/" xmlns:anim="urn:oasis:names:tc:opendocument:xmlns:animation:1.0" xml:space="preserve">
<defs class="ClipPathGroup">
<clipPath id="presentation_clip_path" clipPathUnits="userSpaceOnUse">
<rect x="0" y="0" width="21590" height="27940"/>
</clipPath>
<clipPath id="presentation_clip_path_shrink" clipPathUnits="userSpaceOnUse">
<rect x="21" y="27" width="21547" height="27885"/>
</clipPath>
</defs>
<defs>
<font id="EmbeddedFont_1" horiz-adv-x="2048">
<font-face font-family="Liberation Sans embedded" units-per-em="2048" font-weight="normal" font-style="normal" ascent="1852" descent="423"/>
<missing-glyph horiz-adv-x="2048" d="M 0,0 L 2047,0 2047,2047 0,2047 0,0 Z"/>
<glyph unicode="y" horiz-adv-x="1033" d="M 604,1 C 579,-64 553,-123 527,-175 500,-227 471,-272 438,-309 405,-346 369,-374 329,-394 289,-413 243,-423 191,-423 168,-423 147,-423 128,-423 109,-423 88,-420 67,-414 L 67,-279 C 80,-282 94,-284 110,-284 126,-284 140,-284 151,-284 204,-284 253,-264 298,-225 343,-186 383,-124 417,-38 L 434,5 5,1082 197,1082 425,484 C 432,466 440,442 451,412 461,382 471,352 482,322 492,292 501,265 509,241 517,217 522,202 523,196 525,203 530,218 538,240 545,261 554,285 564,312 573,339 583,366 593,393 603,420 611,444 618,464 L 830,1082 1020,1082 604,1 Z"/>
<glyph unicode="x" horiz-adv-x="1006" d="M 801,0 L 510,444 217,0 23,0 408,556 41,1082 240,1082 510,661 778,1082 979,1082 612,558 1002,0 801,0 Z"/>
<glyph unicode="w" horiz-adv-x="1509" d="M 1174,0 L 965,0 792,698 C 787,716 781,738 776,765 770,792 764,818 759,843 752,872 746,903 740,934 734,904 728,874 721,845 716,820 710,793 704,766 697,739 691,715 686,694 L 508,0 300,0 -3,1082 175,1082 358,347 C 363,332 367,313 372,291 377,268 381,246 386,225 391,200 396,175 401,149 406,174 412,199 418,223 423,244 429,265 434,286 439,307 444,325 448,339 L 644,1082 837,1082 1026,339 C 1031,322 1036,302 1041,280 1046,258 1051,237 1056,218 1061,195 1067,172 1072,149 1077,174 1083,199 1088,223 1093,244 1098,265 1103,288 1108,310 1112,330 1117,347 L 1308,1082 1484,1082 1174,0 Z"/>
<glyph unicode="v" horiz-adv-x="1033" d="M 613,0 L 400,0 7,1082 199,1082 437,378 C 442,363 447,346 454,325 460,304 466,282 473,259 480,236 486,215 492,194 497,173 502,155 506,141 510,155 515,173 522,194 528,215 534,236 541,258 548,280 555,302 562,323 569,344 575,361 580,376 L 826,1082 1017,1082 613,0 Z"/>
<glyph unicode="u" horiz-adv-x="874" d="M 314,1082 L 314,396 C 314,343 318,299 326,264 333,229 346,200 363,179 380,157 403,142 432,133 460,124 495,119 537,119 580,119 618,127 653,142 687,157 716,178 741,207 765,235 784,270 797,312 810,353 817,401 817,455 L 817,1082 997,1082 997,228 C 997,205 997,181 998,156 998,131 998,107 999,85 1000,62 1000,43 1001,27 1002,11 1002,3 1003,3 L 833,3 C 832,6 832,15 831,30 830,44 830,61 829,79 828,98 827,117 826,136 825,156 825,172 825,185 L 822,185 C 805,154 786,125 765,100 744,75 720,53 693,36 666,18 634,4 599,-6 564,-15 523,-20 476,-20 416,-20 364,-13 321,2 278,17 242,39 214,70 186,101 166,140 153,188 140,236 133,294 133,361 L 133,1082 314,1082 Z"/>
<glyph unicode="t" horiz-adv-x="531" d="M 554,8 C 527,1 499,-5 471,-10 442,-14 409,-16 372,-16 228,-16 156,66 156,229 L 156,951 31,951 31,1082 163,1082 216,1324 336,1324 336,1082 536,1082 536,951 336,951 336,268 C 336,216 345,180 362,159 379,138 408,127 450,127 467,127 484,128 501,131 517,134 535,137 554,141 L 554,8 Z"/>
<glyph unicode="s" horiz-adv-x="901" d="M 950,299 C 950,248 940,203 921,164 901,124 872,91 835,64 798,37 752,16 698,2 643,-13 581,-20 511,-20 448,-20 392,-15 342,-6 291,4 247,20 209,41 171,62 139,91 114,126 88,161 69,203 57,254 L 216,285 C 231,227 263,185 311,158 359,131 426,117 511,117 550,117 585,120 618,125 650,130 678,140 701,153 724,166 743,183 756,205 769,226 775,253 775,285 775,318 767,345 752,366 737,387 715,404 688,418 661,432 628,444 589,455 550,465 507,476 460,489 417,500 374,513 331,527 288,541 250,560 216,583 181,606 153,634 132,668 111,702 100,745 100,796 100,895 135,970 206,1022 276,1073 378,1099 513,1099 632,1099 727,1078 798,1036 868,994 912,927 931,834 L 769,814 C 763,842 752,866 736,885 720,904 701,919 678,931 655,942 630,951 602,956 573,961 544,963 513,963 432,963 372,951 333,926 294,901 275,864 275,814 275,785 282,761 297,742 311,723 331,707 357,694 382,681 413,669 449,660 485,650 525,640 568,629 597,622 626,614 656,606 686,597 715,587 744,576 772,564 799,550 824,535 849,519 870,500 889,478 908,456 923,430 934,401 945,372 950,338 950,299 Z"/>
<glyph unicode="r" horiz-adv-x="530" d="M 142,0 L 142,830 C 142,853 142,876 142,900 141,923 141,946 140,968 139,990 139,1011 138,1030 137,1049 137,1067 136,1082 L 306,1082 C 307,1067 308,1049 309,1030 310,1010 311,990 312,969 313,948 313,929 314,910 314,891 314,874 314,861 L 318,861 C 331,902 344,938 359,969 373,999 390,1024 409,1044 428,1063 451,1078 478,1088 505,1097 537,1102 575,1102 590,1102 604,1101 617,1099 630,1096 641,1094 648,1092 L 648,927 C 636,930 622,933 606,935 590,936 572,937 552,937 511,937 476,928 447,909 418,890 394,865 376,832 357,799 344,759 335,714 326,668 322,618 322,564 L 322,0 142,0 Z"/>
<glyph unicode="p" horiz-adv-x="953" d="M 1053,546 C 1053,464 1046,388 1033,319 1020,250 998,190 967,140 936,90 895,51 844,23 793,-6 730,-20 655,-20 578,-20 510,-5 452,24 394,53 350,101 319,168 L 314,168 C 315,167 315,161 316,150 316,139 316,126 317,110 317,94 317,76 318,57 318,37 318,17 318,-2 L 318,-425 138,-425 138,864 C 138,891 138,916 138,940 137,964 137,986 136,1005 135,1025 135,1042 134,1056 133,1070 133,1077 132,1077 L 306,1077 C 307,1075 308,1068 309,1057 310,1045 311,1031 312,1014 313,998 314,980 315,961 316,943 316,925 316,908 L 320,908 C 337,943 356,972 377,997 398,1021 423,1041 450,1057 477,1072 508,1084 542,1091 575,1098 613,1101 655,1101 730,1101 793,1088 844,1061 895,1034 936,997 967,949 998,900 1020,842 1033,774 1046,705 1053,629 1053,546 Z M 864,542 C 864,609 860,668 852,720 844,772 830,816 811,852 791,888 765,915 732,934 699,953 658,962 609,962 569,962 531,956 496,945 461,934 430,912 404,880 377,848 356,804 341,748 326,691 318,618 318,528 318,451 324,387 337,334 350,281 368,238 393,205 417,172 447,149 483,135 519,120 560,113 607,113 657,113 699,123 732,142 765,161 791,189 811,226 830,263 844,308 852,361 860,414 864,474 864,542 Z"/>
<glyph unicode="o" horiz-adv-x="980" d="M 1053,542 C 1053,353 1011,212 928,119 845,26 724,-20 565,-20 490,-20 422,-9 363,14 304,37 254,71 213,118 172,165 140,223 119,294 97,364 86,447 86,542 86,915 248,1102 571,1102 655,1102 728,1090 789,1067 850,1044 900,1009 939,962 978,915 1006,857 1025,787 1044,717 1053,635 1053,542 Z M 864,542 C 864,626 858,695 845,750 832,805 813,848 788,881 763,914 732,937 696,950 660,963 619,969 574,969 528,969 487,962 450,949 413,935 381,912 355,879 329,846 309,802 296,747 282,692 275,624 275,542 275,458 282,389 297,334 312,279 332,235 358,202 383,169 414,146 449,133 484,120 522,113 563,113 609,113 651,120 688,133 725,146 757,168 783,201 809,234 829,278 843,333 857,388 864,458 864,542 Z"/>
<glyph unicode="n" horiz-adv-x="874" d="M 825,0 L 825,686 C 825,739 821,783 814,818 806,853 793,882 776,904 759,925 736,941 708,950 679,959 644,963 602,963 559,963 521,956 487,941 452,926 423,904 399,876 374,847 355,812 342,771 329,729 322,681 322,627 L 322,0 142,0 142,853 C 142,876 142,900 142,925 141,950 141,974 140,996 139,1019 139,1038 138,1054 137,1070 137,1078 136,1078 L 306,1078 C 307,1075 307,1066 308,1052 309,1037 310,1021 311,1002 312,984 312,965 313,945 314,926 314,910 314,897 L 317,897 C 334,928 353,957 374,982 395,1007 419,1029 446,1047 473,1064 505,1078 540,1088 575,1097 616,1102 663,1102 723,1102 775,1095 818,1080 861,1065 897,1043 925,1012 953,981 974,942 987,894 1000,845 1006,788 1006,721 L 1006,0 825,0 Z"/>
<glyph unicode="l" horiz-adv-x="187" d="M 138,0 L 138,1484 318,1484 318,0 138,0 Z"/>
<glyph unicode="i" horiz-adv-x="187" d="M 137,1312 L 137,1484 317,1484 317,1312 137,1312 Z M 137,0 L 137,1082 317,1082 317,0 137,0 Z"/>
<glyph unicode="h" horiz-adv-x="874" d="M 317,897 C 337,934 359,965 382,991 405,1016 431,1037 459,1054 487,1071 518,1083 551,1091 584,1098 622,1102 663,1102 732,1102 789,1093 834,1074 878,1055 913,1029 939,996 964,962 982,922 992,875 1001,828 1006,777 1006,721 L 1006,0 825,0 825,686 C 825,732 822,772 817,807 811,842 800,871 784,894 768,917 745,934 716,946 687,957 649,963 602,963 559,963 521,955 487,940 452,925 423,903 399,875 374,847 355,813 342,773 329,733 322,688 322,638 L 322,0 142,0 142,1484 322,1484 322,1098 C 322,1076 322,1054 321,1032 320,1010 320,990 319,971 318,952 317,937 316,924 315,911 315,902 314,897 L 317,897 Z"/>
<glyph unicode="g" horiz-adv-x="927" d="M 548,-425 C 486,-425 431,-419 383,-406 335,-393 294,-375 260,-352 226,-328 198,-300 177,-267 156,-234 140,-198 131,-158 L 312,-132 C 324,-182 351,-220 392,-248 433,-274 486,-288 553,-288 594,-288 631,-282 664,-271 697,-260 726,-241 749,-217 772,-191 790,-159 803,-119 816,-79 822,-30 822,27 L 822,201 820,201 C 807,174 790,148 771,123 751,98 727,75 699,56 670,37 637,21 600,10 563,-2 520,-8 472,-8 403,-8 345,4 296,27 247,50 207,84 176,130 145,176 122,233 108,302 93,370 86,449 86,539 86,626 93,704 108,773 122,842 145,901 178,950 210,998 252,1035 304,1061 355,1086 418,1099 492,1099 569,1099 635,1082 692,1047 748,1012 791,962 822,897 L 824,897 C 824,914 825,933 826,953 827,974 828,994 829,1012 830,1031 831,1046 832,1060 833,1073 835,1080 836,1080 L 1007,1080 C 1006,1074 1006,1064 1005,1050 1004,1035 1004,1018 1003,998 1002,978 1002,956 1002,932 1001,907 1001,882 1001,856 L 1001,30 C 1001,-121 964,-234 890,-311 815,-387 701,-425 548,-425 Z M 822,541 C 822,616 814,681 798,735 781,788 760,832 733,866 706,900 676,925 642,941 607,957 572,965 536,965 490,965 451,957 418,941 385,925 357,900 336,866 314,831 298,787 288,734 277,680 272,616 272,541 272,463 277,398 288,345 298,292 314,249 335,216 356,183 383,160 416,146 449,132 488,125 533,125 569,125 604,133 639,148 673,163 704,188 731,221 758,254 780,297 797,350 814,403 822,466 822,541 Z"/>
<glyph unicode="f" horiz-adv-x="557" d="M 361,951 L 361,0 181,0 181,951 29,951 29,1082 181,1082 181,1204 C 181,1243 185,1280 192,1314 199,1347 213,1377 233,1402 252,1427 279,1446 313,1461 347,1475 391,1482 445,1482 466,1482 489,1481 512,1479 535,1477 555,1474 572,1470 L 572,1333 C 561,1335 548,1337 533,1339 518,1340 504,1341 492,1341 465,1341 444,1337 427,1330 410,1323 396,1312 387,1299 377,1285 370,1268 367,1248 363,1228 361,1205 361,1179 L 361,1082 572,1082 572,951 361,951 Z"/>
<glyph unicode="e" horiz-adv-x="980" d="M 276,503 C 276,446 282,394 294,347 305,299 323,258 348,224 372,189 403,163 441,144 479,125 525,115 578,115 656,115 719,131 766,162 813,193 844,233 861,281 L 1019,236 C 1008,206 992,176 972,146 951,115 924,88 890,64 856,39 814,19 763,4 712,-12 650,-20 578,-20 418,-20 296,28 213,123 129,218 87,360 87,548 87,649 100,735 125,806 150,876 185,933 229,977 273,1021 324,1053 383,1073 442,1092 504,1102 571,1102 662,1102 738,1087 799,1058 860,1029 909,988 946,937 983,885 1009,824 1025,754 1040,684 1048,608 1048,527 L 1048,503 276,503 Z M 862,641 C 852,755 823,838 775,891 727,943 658,969 568,969 538,969 507,964 474,955 441,945 410,928 382,903 354,878 330,845 311,803 292,760 281,706 278,641 L 862,641 Z"/>
<glyph unicode="d" horiz-adv-x="927" d="M 821,174 C 788,105 744,55 689,25 634,-5 565,-20 484,-20 347,-20 247,26 183,118 118,210 86,349 86,536 86,913 219,1102 484,1102 566,1102 634,1087 689,1057 744,1027 788,979 821,914 L 823,914 C 823,921 823,931 823,946 822,960 822,975 822,991 821,1006 821,1021 821,1035 821,1049 821,1059 821,1065 L 821,1484 1001,1484 1001,219 C 1001,193 1001,168 1002,143 1002,119 1002,97 1003,77 1004,57 1004,40 1005,26 1006,11 1006,4 1007,4 L 835,4 C 834,11 833,20 832,32 831,44 830,58 829,73 828,89 827,105 826,123 825,140 825,157 825,174 L 821,174 Z M 275,542 C 275,467 280,403 289,350 298,297 313,253 334,219 355,184 381,159 413,143 445,127 484,119 530,119 577,119 619,127 656,142 692,157 722,182 747,217 771,251 789,296 802,351 815,406 821,474 821,554 821,631 815,696 802,749 789,802 771,844 746,877 721,910 691,933 656,948 620,962 579,969 532,969 488,969 450,961 418,946 386,931 359,906 338,872 317,838 301,794 291,740 280,685 275,619 275,542 Z"/>
<glyph unicode="c" horiz-adv-x="901" d="M 275,546 C 275,484 280,427 289,375 298,323 313,278 334,241 355,203 384,174 419,153 454,132 497,122 548,122 612,122 666,139 709,173 752,206 778,258 788,328 L 970,328 C 964,283 951,239 931,197 911,155 884,118 850,86 815,54 773,28 724,9 675,-10 618,-20 553,-20 468,-20 396,-6 337,23 278,52 230,91 193,142 156,192 129,251 112,320 95,388 87,462 87,542 87,615 93,679 105,735 117,790 134,839 156,881 177,922 203,957 232,986 261,1014 293,1037 328,1054 362,1071 398,1083 436,1091 474,1098 512,1102 551,1102 612,1102 666,1094 713,1077 760,1060 801,1038 836,1009 870,980 898,945 919,906 940,867 955,824 964,779 L 779,765 C 770,825 746,873 708,908 670,943 616,961 546,961 495,961 452,953 418,936 383,919 355,893 334,859 313,824 298,781 289,729 280,677 275,616 275,546 Z"/>
<glyph unicode="a" horiz-adv-x="1060" d="M 414,-20 C 305,-20 224,9 169,66 114,124 87,203 87,303 87,375 101,434 128,480 155,526 190,562 234,588 277,614 327,632 383,642 439,652 496,657 554,657 L 797,657 797,717 C 797,762 792,800 783,832 774,863 759,889 740,908 721,928 697,942 668,951 639,960 604,965 565,965 530,965 499,963 471,958 443,953 419,944 398,931 377,918 361,900 348,878 335,855 327,827 323,793 L 135,810 C 142,853 154,892 173,928 192,963 218,994 253,1020 287,1046 330,1066 382,1081 433,1095 496,1102 569,1102 705,1102 807,1071 876,1009 945,946 979,856 979,738 L 979,272 C 979,219 986,179 1000,152 1014,125 1041,111 1080,111 1090,111 1100,112 1110,113 1120,114 1130,116 1139,118 L 1139,6 C 1116,1 1094,-3 1072,-6 1049,-9 1025,-10 1000,-10 966,-10 937,-5 913,4 888,13 868,26 853,45 838,63 826,86 818,113 810,140 805,171 803,207 L 797,207 C 778,172 757,141 734,113 711,85 684,61 653,42 622,22 588,7 549,-4 510,-15 465,-20 414,-20 Z M 455,115 C 512,115 563,125 606,146 649,167 684,194 713,226 741,259 762,294 776,332 790,371 797,408 797,443 L 797,531 600,531 C 556,531 514,528 475,522 435,517 400,506 370,489 340,472 316,449 299,418 281,388 272,349 272,300 272,241 288,195 320,163 351,131 396,115 455,115 Z"/>
<glyph unicode="S" horiz-adv-x="1192" d="M 1272,389 C 1272,330 1261,275 1238,225 1215,175 1179,132 1131,96 1083,59 1023,31 950,11 877,-10 790,-20 690,-20 515,-20 378,11 280,72 182,133 120,222 93,338 L 278,375 C 287,338 302,305 321,275 340,245 367,219 400,198 433,176 473,159 522,147 571,135 629,129 697,129 754,129 806,134 853,144 900,153 941,168 975,188 1009,208 1036,234 1055,266 1074,297 1083,335 1083,379 1083,425 1073,462 1052,491 1031,520 1001,543 963,562 925,581 880,596 827,609 774,622 716,635 652,650 613,659 573,668 534,679 494,689 456,701 420,716 383,730 349,747 317,766 285,785 257,809 234,836 211,863 192,894 179,930 166,965 159,1006 159,1053 159,1120 173,1177 200,1225 227,1272 264,1311 312,1342 360,1373 417,1395 482,1409 547,1423 618,1430 694,1430 781,1430 856,1423 918,1410 980,1396 1032,1375 1075,1348 1118,1321 1152,1287 1178,1247 1203,1206 1224,1159 1239,1106 L 1051,1073 C 1042,1107 1028,1137 1011,1164 993,1191 970,1213 941,1231 912,1249 878,1263 837,1272 796,1281 747,1286 692,1286 627,1286 572,1280 528,1269 483,1257 448,1241 421,1221 394,1201 374,1178 363,1151 351,1124 345,1094 345,1063 345,1021 356,987 377,960 398,933 426,910 462,892 498,874 540,859 587,847 634,835 685,823 738,811 781,801 825,791 868,781 911,770 952,758 991,744 1030,729 1067,712 1102,693 1136,674 1166,650 1191,622 1216,594 1236,561 1251,523 1265,485 1272,440 1272,389 Z"/>
<glyph unicode="P" horiz-adv-x="1112" d="M 1258,985 C 1258,924 1248,867 1228,814 1207,761 1177,715 1137,676 1096,637 1046,606 985,583 924,560 854,549 773,549 L 359,549 359,0 168,0 168,1409 761,1409 C 844,1409 917,1399 979,1379 1041,1358 1093,1330 1134,1293 1175,1256 1206,1211 1227,1159 1248,1106 1258,1048 1258,985 Z M 1066,983 C 1066,1072 1039,1140 984,1187 929,1233 847,1256 738,1256 L 359,1256 359,700 746,700 C 856,700 937,724 989,773 1040,822 1066,892 1066,983 Z"/>
<glyph unicode="G" horiz-adv-x="1377" d="M 103,711 C 103,821 118,920 148,1009 177,1098 222,1173 281,1236 340,1298 413,1346 500,1380 587,1413 689,1430 804,1430 891,1430 967,1422 1032,1407 1097,1392 1154,1370 1202,1341 1250,1312 1291,1278 1324,1237 1357,1196 1386,1149 1409,1098 L 1227,1044 C 1210,1079 1189,1110 1165,1139 1140,1167 1111,1191 1076,1211 1041,1231 1001,1247 956,1258 910,1269 858,1274 799,1274 714,1274 640,1261 577,1234 514,1207 461,1169 420,1120 379,1071 348,1011 328,942 307,873 297,796 297,711 297,626 308,549 330,479 352,408 385,348 428,297 471,246 525,206 590,178 654,149 728,135 813,135 868,135 919,140 966,149 1013,158 1055,171 1093,186 1130,201 1163,217 1192,236 1221,254 1245,272 1264,291 L 1264,545 843,545 843,705 1440,705 1440,219 C 1409,187 1372,157 1330,128 1287,99 1240,73 1187,51 1134,29 1077,12 1014,-1 951,-14 884,-20 813,-20 694,-20 591,-2 502,35 413,71 340,122 281,187 222,252 177,329 148,418 118,507 103,605 103,711 Z"/>
<glyph unicode="A" horiz-adv-x="1377" d="M 1167,0 L 1006,412 364,412 202,0 4,0 579,1409 796,1409 1362,0 1167,0 Z M 768,1026 C 757,1053 747,1080 738,1107 728,1134 719,1159 712,1182 705,1204 699,1223 694,1238 689,1253 686,1262 685,1265 684,1262 681,1252 676,1237 671,1222 665,1203 658,1180 650,1157 641,1132 632,1105 622,1078 612,1051 602,1024 L 422,561 949,561 768,1026 Z"/>
<glyph unicode="-" horiz-adv-x="531" d="M 91,464 L 91,624 591,624 591,464 91,464 Z"/>
<glyph unicode=" " horiz-adv-x="556"/>
</font>
</defs>
<defs class="TextShapeIndex">
<g ooo:slide="id1" ooo:id-list="id3 id4 id5 id6 id7 id8 id9 id10 id11 id12 id13 id14 id15 id16 id17 id18 id19 id20 id21 id22"/>
</defs>
<defs class="EmbeddedBulletChars">
<g id="bullet-char-template(57356)" transform="scale(0.00048828125,-0.00048828125)">
<path d="M 580,1141 L 1163,571 580,0 -4,571 580,1141 Z"/>
</g>
<g id="bullet-char-template(57354)" transform="scale(0.00048828125,-0.00048828125)">
<path d="M 8,1128 L 1137,1128 1137,0 8,0 8,1128 Z"/>
</g>
<g id="bullet-char-template(10146)" transform="scale(0.00048828125,-0.00048828125)">
<path d="M 174,0 L 602,739 174,1481 1456,739 174,0 Z M 1358,739 L 309,1346 659,739 1358,739 Z"/>
</g>
<g id="bullet-char-template(10132)" transform="scale(0.00048828125,-0.00048828125)">
<path d="M 2015,739 L 1276,0 717,0 1260,543 174,543 174,936 1260,936 717,1481 1274,1481 2015,739 Z"/>
</g>
<g id="bullet-char-template(10007)" transform="scale(0.00048828125,-0.00048828125)">
<path d="M 0,-2 C -7,14 -16,27 -25,37 L 356,567 C 262,823 215,952 215,954 215,979 228,992 255,992 264,992 276,990 289,987 310,991 331,999 354,1012 L 381,999 492,748 772,1049 836,1024 860,1049 C 881,1039 901,1025 922,1006 886,937 835,863 770,784 769,783 710,716 594,584 L 774,223 C 774,196 753,168 711,139 L 727,119 C 717,90 699,76 672,76 641,76 570,178 457,381 L 164,-76 C 142,-110 111,-127 72,-127 30,-127 9,-110 8,-76 1,-67 -2,-52 -2,-32 -2,-23 -1,-13 0,-2 Z"/>
</g>
<g id="bullet-char-template(10004)" transform="scale(0.00048828125,-0.00048828125)">
<path d="M 285,-33 C 182,-33 111,30 74,156 52,228 41,333 41,471 41,549 55,616 82,672 116,743 169,778 240,778 293,778 328,747 346,684 L 369,508 C 377,444 397,411 428,410 L 1163,1116 C 1174,1127 1196,1133 1229,1133 1271,1133 1292,1118 1292,1087 L 1292,965 C 1292,929 1282,901 1262,881 L 442,47 C 390,-6 338,-33 285,-33 Z"/>
</g>
<g id="bullet-char-template(9679)" transform="scale(0.00048828125,-0.00048828125)">
<path d="M 813,0 C 632,0 489,54 383,161 276,268 223,411 223,592 223,773 276,916 383,1023 489,1130 632,1184 813,1184 992,1184 1136,1130 1245,1023 1353,916 1407,772 1407,592 1407,412 1353,268 1245,161 1136,54 992,0 813,0 Z"/>
</g>
<g id="bullet-char-template(8226)" transform="scale(0.00048828125,-0.00048828125)">
<path d="M 346,457 C 273,457 209,483 155,535 101,586 74,649 74,723 74,796 101,859 155,911 209,963 273,989 346,989 419,989 480,963 531,910 582,859 608,796 608,723 608,648 583,586 532,535 482,483 420,457 346,457 Z"/>
</g>
<g id="bullet-char-template(8211)" transform="scale(0.00048828125,-0.00048828125)">
<path d="M -4,459 L 1135,459 1135,606 -4,606 -4,459 Z"/>
</g>
<g id="bullet-char-template(61548)" transform="scale(0.00048828125,-0.00048828125)">
<path d="M 173,740 C 173,903 231,1043 346,1159 462,1274 601,1332 765,1332 928,1332 1067,1274 1183,1159 1299,1043 1357,903 1357,740 1357,577 1299,437 1183,322 1067,206 928,148 765,148 601,148 462,206 346,322 231,437 173,577 173,740 Z"/>
</g>
</defs>
<defs class="TextEmbeddedBitmaps"/>
<g>
<g id="id2" class="Master_Slide">
<g id="bg-id2" class="Background"/>
<g id="bo-id2" class="BackgroundObjects"/>
</g>
</g>
<g class="SlideGroup">
<g>
<g id="container-id1">
<g id="id1" class="Slide" clip-path="url(#presentation_clip_path)">
<g class="Page">
<g class="com.sun.star.drawing.CustomShape">
<g id="id3">
<rect class="BoundingBox" stroke="none" fill="none" x="2649" y="2649" width="3432" height="1654"/>
<path fill="rgb(114,159,207)" stroke="none" d="M 4365,4301 L 2650,4301 2650,2650 6079,2650 6079,4301 4365,4301 Z"/>
<path fill="none" stroke="rgb(52,101,164)" d="M 4365,4301 L 2650,4301 2650,2650 6079,2650 6079,4301 4365,4301 Z"/>
<text class="TextShape"><tspan class="TextParagraph" font-family="Liberation Sans, sans-serif" font-size="635px" font-weight="400"><tspan class="TextPosition" x="2938" y="3696"><tspan fill="rgb(0,0,0)" stroke="none">Api server</tspan></tspan></tspan></text>
</g>
</g>
<g class="com.sun.star.drawing.CustomShape">
<g id="id4">
<rect class="BoundingBox" stroke="none" fill="none" x="8619" y="2649" width="3813" height="1654"/>
<path fill="rgb(114,159,207)" stroke="none" d="M 10525,4301 L 8620,4301 8620,2650 12430,2650 12430,4301 10525,4301 Z"/>
<path fill="none" stroke="rgb(52,101,164)" d="M 10525,4301 L 8620,4301 8620,2650 12430,2650 12430,4301 10525,4301 Z"/>
<text class="TextShape"><tspan class="TextParagraph" font-family="Liberation Sans, sans-serif" font-size="635px" font-weight="400"><tspan class="TextPosition" x="8957" y="3696"><tspan fill="rgb(0,0,0)" stroke="none">Pod watch </tspan></tspan></tspan></text>
</g>
</g>
<g class="com.sun.star.drawing.CustomShape">
<g id="id5">
<rect class="BoundingBox" stroke="none" fill="none" x="14179" y="2649" width="4145" height="1654"/>
<path fill="rgb(114,159,207)" stroke="none" d="M 16240,4301 L 14208,4301 14208,2650 18272,2650 18272,4301 16240,4301 Z"/>
<path fill="none" stroke="rgb(52,101,164)" d="M 16240,4301 L 14208,4301 14208,2650 18272,2650 18272,4301 16240,4301 Z"/>
<text class="TextShape"><tspan class="TextParagraph" font-family="Liberation Sans, sans-serif" font-size="635px" font-weight="400"><tspan class="TextPosition" x="14127" y="3341"><tspan fill="rgb(0,0,0)" stroke="none">Policy selected</tspan></tspan></tspan><tspan class="TextParagraph" font-family="Liberation Sans, sans-serif" font-size="635px" font-weight="400"><tspan class="TextPosition" x="14424" y="4052"><tspan fill="rgb(0,0,0)" stroke="none">Pods watch </tspan></tspan></tspan><tspan class="TextParagraph" font-family="Liberation Sans, sans-serif" font-size="635px" font-weight="400"><tspan class="TextPosition" x="16062" y="4763"><tspan fill="rgb(0,0,0)" stroke="none"> </tspan></tspan></tspan></text>
</g>
</g>
<g class="com.sun.star.drawing.LineShape">
<g id="id6">
<rect class="BoundingBox" stroke="none" fill="none" x="4030" y="4428" width="302" height="10543"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 4302,4429 L 4180,14540"/>
<path fill="rgb(0,0,0)" stroke="none" d="M 4175,14970 L 4330,14522 4030,14518 4175,14970 Z"/>
</g>
</g>
<g class="com.sun.star.drawing.LineShape">
<g id="id7">
<rect class="BoundingBox" stroke="none" fill="none" x="10375" y="4428" width="301" height="10670"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 10525,4429 L 10525,14667"/>
<path fill="rgb(0,0,0)" stroke="none" d="M 10525,15097 L 10675,14647 10375,14647 10525,15097 Z"/>
</g>
</g>
<g class="com.sun.star.drawing.LineShape">
<g id="id8">
<rect class="BoundingBox" stroke="none" fill="none" x="16465" y="4301" width="302" height="10797"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 16494,4302 L 16616,14667"/>
<path fill="rgb(0,0,0)" stroke="none" d="M 16621,15097 L 16766,14645 16466,14649 16621,15097 Z"/>
</g>
</g>
<g class="com.sun.star.drawing.LineShape">
<g id="id9">
<rect class="BoundingBox" stroke="none" fill="none" x="4301" y="5930" width="6225" height="301"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 4302,6080 L 4355,6080"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 4408,6080 L 4461,6080"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 4514,6080 L 4567,6080"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 4621,6080 L 4674,6080"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 4727,6080 L 4780,6080"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 4833,6080 L 4886,6080"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 4939,6080 L 4992,6080"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 5045,6080 L 5098,6080"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 5151,6080 L 5205,6080"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 5258,6080 L 5311,6080"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 5364,6080 L 5417,6080"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 5470,6080 L 5523,6080"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 5576,6080 L 5629,6080"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 5682,6080 L 5735,6080"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 5789,6080 L 5842,6080"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 5895,6080 L 5948,6080"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 6001,6080 L 6054,6080"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 6107,6080 L 6160,6080"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 6213,6080 L 6266,6080"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 6319,6080 L 6373,6080"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 6426,6080 L 6479,6080"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 6532,6080 L 6585,6080"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 6638,6080 L 6691,6080"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 6744,6080 L 6797,6080"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 6850,6080 L 6903,6080"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 6957,6080 L 7010,6080"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 7063,6080 L 7116,6080"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 7169,6080 L 7222,6080"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 7275,6080 L 7328,6080"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 7381,6080 L 7434,6080"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 7487,6080 L 7541,6080"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 7594,6080 L 7647,6080"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 7700,6080 L 7753,6080"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 7806,6080 L 7859,6080"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 7912,6080 L 7965,6080"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 8018,6080 L 8071,6080"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 8125,6080 L 8178,6080"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 8231,6080 L 8284,6080"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 8337,6080 L 8390,6080"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 8443,6080 L 8496,6080"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 8549,6080 L 8602,6080"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 8656,6080 L 8709,6080"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 8762,6080 L 8815,6080"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 8868,6080 L 8921,6080"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 8974,6080 L 9027,6080"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 9080,6080 L 9133,6080"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 9186,6080 L 9240,6080"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 9293,6080 L 9346,6080"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 9399,6080 L 9452,6080"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 9505,6080 L 9558,6080"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 9611,6080 L 9664,6080"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 9717,6080 L 9770,6080"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 9824,6080 L 9877,6080"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 9930,6080 L 9983,6080"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 10036,6080 L 10089,6080"/>
<path fill="rgb(0,0,0)" stroke="none" d="M 10525,6080 L 10075,5930 10075,6230 10525,6080 Z"/>
</g>
</g>
<g class="com.sun.star.drawing.TextShape">
<g id="id10">
<rect class="BoundingBox" stroke="none" fill="none" x="4937" y="5191" width="5081" height="1674"/>
<text class="TextShape"><tspan class="TextParagraph" font-family="Liberation Sans, sans-serif" font-size="635px" font-weight="400"><tspan class="TextPosition" x="5787" y="5892"><tspan fill="rgb(0,0,0)" stroke="none">Pod created</tspan></tspan></tspan></text>
</g>
</g>
<g class="com.sun.star.drawing.LineShape">
<g id="id11">
<rect class="BoundingBox" stroke="none" fill="none" x="4174" y="7449" width="12448" height="302"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 4175,7477 L 4228,7478"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 4281,7478 L 4334,7479"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 4387,7479 L 4440,7480"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 4494,7480 L 4547,7481"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 4600,7481 L 4653,7482"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 4706,7482 L 4759,7483"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 4812,7484 L 4865,7484"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 4918,7485 L 4971,7485"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 5024,7486 L 5078,7486"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 5131,7487 L 5184,7487"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 5237,7488 L 5290,7488"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 5343,7489 L 5396,7489"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 5449,7490 L 5502,7491"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 5555,7491 L 5608,7492"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 5661,7492 L 5715,7493"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 5768,7493 L 5821,7494"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 5874,7494 L 5927,7495"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 5980,7495 L 6033,7496"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 6086,7497 L 6139,7497"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 6192,7498 L 6245,7498"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 6299,7499 L 6352,7499"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 6405,7500 L 6458,7500"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 6511,7501 L 6564,7501"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 6617,7502 L 6670,7502"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 6723,7503 L 6776,7504"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 6829,7504 L 6883,7505"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 6936,7505 L 6989,7506"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 7042,7506 L 7095,7507"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 7148,7507 L 7201,7508"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 7254,7508 L 7307,7509"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 7360,7510 L 7413,7510"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 7467,7511 L 7520,7511"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 7573,7512 L 7626,7512"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 7679,7513 L 7732,7513"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 7785,7514 L 7838,7514"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 7891,7515 L 7944,7515"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 7997,7516 L 8050,7517"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 8104,7517 L 8157,7518"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 8210,7518 L 8263,7519"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 8316,7519 L 8369,7520"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 8422,7520 L 8475,7521"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 8528,7521 L 8581,7522"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 8634,7523 L 8688,7523"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 8741,7524 L 8794,7524"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 8847,7525 L 8900,7525"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 8953,7526 L 9006,7526"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 9059,7527 L 9112,7527"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 9165,7528 L 9218,7528"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 9272,7529 L 9325,7530"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 9378,7530 L 9431,7531"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 9484,7531 L 9537,7532"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 9590,7532 L 9643,7533"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 9696,7533 L 9749,7534"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 9802,7534 L 9855,7535"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 9909,7536 L 9962,7536"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 10015,7537 L 10068,7537"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 10121,7538 L 10174,7538"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 10227,7539 L 10280,7539"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 10333,7540 L 10386,7540"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 10439,7541 L 10493,7541"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 10546,7542 L 10599,7543"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 10652,7543 L 10705,7544"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 10758,7544 L 10811,7545"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 10864,7545 L 10917,7546"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 10970,7546 L 11023,7547"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 11077,7547 L 11130,7548"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 11183,7549 L 11236,7549"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 11289,7550 L 11342,7550"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 11395,7551 L 11448,7551"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 11501,7552 L 11554,7552"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 11607,7553 L 11661,7553"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 11714,7554 L 11767,7554"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 11820,7555 L 11873,7556"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 11926,7556 L 11979,7557"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 12032,7557 L 12085,7558"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 12138,7558 L 12191,7559"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 12244,7559 L 12298,7560"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 12351,7560 L 12404,7561"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 12457,7562 L 12510,7562"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 12563,7563 L 12616,7563"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 12669,7564 L 12722,7564"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 12775,7565 L 12828,7565"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 12882,7566 L 12935,7566"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 12988,7567 L 13041,7567"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 13094,7568 L 13147,7569"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 13200,7569 L 13253,7570"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 13306,7570 L 13359,7571"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 13412,7571 L 13466,7572"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 13519,7572 L 13572,7573"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 13625,7573 L 13678,7574"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 13731,7575 L 13784,7575"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 13837,7576 L 13890,7576"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 13943,7577 L 13996,7577"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 14050,7578 L 14103,7578"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 14156,7579 L 14209,7579"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 14262,7580 L 14315,7580"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 14368,7581 L 14421,7582"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 14474,7582 L 14527,7583"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 14580,7583 L 14633,7584"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 14687,7584 L 14740,7585"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 14793,7585 L 14846,7586"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 14899,7586 L 14952,7587"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 15005,7588 L 15058,7588"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 15111,7589 L 15164,7589"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 15217,7590 L 15271,7590"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 15324,7591 L 15377,7591"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 15430,7592 L 15483,7592"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 15536,7593 L 15589,7593"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 15642,7594 L 15695,7595"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 15748,7595 L 15801,7596"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 15855,7596 L 15908,7597"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 15961,7597 L 16014,7598"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 16067,7598 L 16120,7599"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 16173,7599 L 16191,7600"/>
<path fill="rgb(0,0,0)" stroke="none" d="M 16621,7604 L 16173,7449 16169,7749 16621,7604 Z"/>
</g>
</g>
<g class="com.sun.star.drawing.TextShape">
<g id="id12">
<rect class="BoundingBox" stroke="none" fill="none" x="4937" y="6461" width="4573" height="1271"/>
<text class="TextShape"><tspan class="TextParagraph" font-family="Liberation Sans, sans-serif" font-size="635px" font-weight="400"><tspan class="TextPosition" x="5787" y="7162"><tspan fill="rgb(0,0,0)" stroke="none">Pod created</tspan></tspan></tspan></text>
</g>
</g>
<g class="com.sun.star.drawing.LineShape">
<g id="id13">
<rect class="BoundingBox" stroke="none" fill="none" x="4175" y="9613" width="6352" height="301"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 10525,9763 L 10474,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 10423,9763 L 10372,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 10321,9763 L 10270,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 10219,9763 L 10168,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 10117,9763 L 10066,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 10015,9763 L 9964,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 9913,9763 L 9862,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 9811,9763 L 9760,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 9709,9763 L 9658,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 9607,9763 L 9556,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 9505,9763 L 9454,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 9403,9763 L 9352,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 9301,9763 L 9250,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 9199,9763 L 9148,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 9097,9763 L 9046,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 8995,9763 L 8944,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 8893,9763 L 8842,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 8791,9763 L 8740,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 8689,9763 L 8638,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 8587,9763 L 8536,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 8485,9763 L 8434,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 8383,9763 L 8332,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 8281,9763 L 8230,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 8179,9763 L 8128,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 8077,9763 L 8026,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 7975,9763 L 7924,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 7873,9763 L 7822,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 7771,9763 L 7720,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 7669,9763 L 7618,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 7567,9763 L 7516,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 7465,9763 L 7414,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 7363,9763 L 7312,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 7261,9763 L 7210,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 7159,9763 L 7108,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 7057,9763 L 7006,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 6955,9763 L 6904,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 6853,9763 L 6802,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 6751,9763 L 6700,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 6649,9763 L 6598,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 6547,9763 L 6496,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 6445,9763 L 6394,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 6343,9763 L 6292,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 6241,9763 L 6190,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 6139,9763 L 6088,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 6037,9763 L 5986,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 5935,9763 L 5884,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 5833,9763 L 5782,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 5731,9763 L 5680,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 5629,9763 L 5578,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 5527,9763 L 5476,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 5425,9763 L 5374,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 5323,9763 L 5272,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 5221,9763 L 5170,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 5119,9763 L 5068,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 5017,9763 L 4966,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 4915,9763 L 4864,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 4813,9763 L 4762,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 4711,9763 L 4660,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 4609,9763 L 4605,9763"/>
<path fill="rgb(0,0,0)" stroke="none" d="M 4175,9763 L 4625,9913 4625,9613 4175,9763 Z"/>
</g>
</g>
<g class="com.sun.star.drawing.TextShape">
<g id="id14">
<rect class="BoundingBox" stroke="none" fill="none" x="4683" y="8239" width="5208" height="1674"/>
<text class="TextShape"><tspan class="TextParagraph" font-family="Liberation Sans, sans-serif" font-size="635px" font-weight="400"><tspan class="TextPosition" x="5533" y="8940"><tspan fill="rgb(0,0,0)" stroke="none">Pod created </tspan></tspan><tspan class="TextPosition" x="4933" y="9651"><tspan fill="rgb(0,0,0)" stroke="none">with default SG</tspan></tspan></tspan></text>
</g>
</g>
<g class="com.sun.star.drawing.LineShape">
<g id="id15">
<rect class="BoundingBox" stroke="none" fill="none" x="4176" y="9613" width="6352" height="301"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 10526,9763 L 10475,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 10424,9763 L 10373,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 10322,9763 L 10271,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 10220,9763 L 10169,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 10118,9763 L 10067,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 10016,9763 L 9965,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 9914,9763 L 9863,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 9812,9763 L 9761,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 9710,9763 L 9659,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 9608,9763 L 9557,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 9506,9763 L 9455,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 9404,9763 L 9353,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 9302,9763 L 9251,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 9200,9763 L 9149,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 9098,9763 L 9047,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 8996,9763 L 8945,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 8894,9763 L 8843,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 8792,9763 L 8741,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 8690,9763 L 8639,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 8588,9763 L 8537,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 8486,9763 L 8435,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 8384,9763 L 8333,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 8282,9763 L 8231,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 8180,9763 L 8129,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 8078,9763 L 8027,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 7976,9763 L 7925,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 7874,9763 L 7823,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 7772,9763 L 7721,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 7670,9763 L 7619,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 7568,9763 L 7517,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 7466,9763 L 7415,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 7364,9763 L 7313,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 7262,9763 L 7211,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 7160,9763 L 7109,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 7058,9763 L 7007,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 6956,9763 L 6905,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 6854,9763 L 6803,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 6752,9763 L 6701,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 6650,9763 L 6599,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 6548,9763 L 6497,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 6446,9763 L 6395,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 6344,9763 L 6293,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 6242,9763 L 6191,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 6140,9763 L 6089,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 6038,9763 L 5987,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 5936,9763 L 5885,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 5834,9763 L 5783,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 5732,9763 L 5681,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 5630,9763 L 5579,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 5528,9763 L 5477,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 5426,9763 L 5375,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 5324,9763 L 5273,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 5222,9763 L 5171,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 5120,9763 L 5069,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 5018,9763 L 4967,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 4916,9763 L 4865,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 4814,9763 L 4763,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 4712,9763 L 4661,9763"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 4610,9763 L 4606,9763"/>
<path fill="rgb(0,0,0)" stroke="none" d="M 4176,9763 L 4626,9913 4626,9613 4176,9763 Z"/>
</g>
</g>
<g class="com.sun.star.drawing.LineShape">
<g id="id16">
<rect class="BoundingBox" stroke="none" fill="none" x="4175" y="11518" width="12194" height="301"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 16367,11668 L 16316,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 16265,11668 L 16214,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 16163,11668 L 16112,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 16061,11668 L 16010,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 15959,11668 L 15908,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 15857,11668 L 15806,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 15755,11668 L 15704,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 15653,11668 L 15602,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 15551,11668 L 15500,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 15449,11668 L 15398,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 15347,11668 L 15296,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 15245,11668 L 15194,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 15143,11668 L 15092,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 15041,11668 L 14990,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 14939,11668 L 14888,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 14837,11668 L 14786,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 14735,11668 L 14684,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 14633,11668 L 14582,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 14531,11668 L 14480,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 14429,11668 L 14378,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 14327,11668 L 14276,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 14225,11668 L 14174,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 14123,11668 L 14072,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 14021,11668 L 13970,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 13919,11668 L 13868,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 13817,11668 L 13766,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 13715,11668 L 13664,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 13613,11668 L 13562,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 13511,11668 L 13460,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 13409,11668 L 13358,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 13307,11668 L 13256,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 13205,11668 L 13154,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 13103,11668 L 13052,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 13001,11668 L 12950,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 12899,11668 L 12848,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 12797,11668 L 12746,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 12695,11668 L 12644,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 12593,11668 L 12542,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 12491,11668 L 12440,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 12389,11668 L 12338,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 12287,11668 L 12236,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 12185,11668 L 12134,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 12083,11668 L 12032,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 11981,11668 L 11930,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 11879,11668 L 11828,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 11777,11668 L 11726,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 11675,11668 L 11624,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 11573,11668 L 11522,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 11471,11668 L 11420,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 11369,11668 L 11318,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 11267,11668 L 11216,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 11165,11668 L 11114,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 11063,11668 L 11012,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 10961,11668 L 10910,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 10859,11668 L 10808,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 10757,11668 L 10706,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 10655,11668 L 10604,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 10553,11668 L 10502,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 10451,11668 L 10400,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 10349,11668 L 10298,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 10247,11668 L 10196,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 10145,11668 L 10094,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 10043,11668 L 9992,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 9941,11668 L 9890,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 9839,11668 L 9788,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 9737,11668 L 9686,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 9635,11668 L 9584,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 9533,11668 L 9482,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 9431,11668 L 9380,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 9329,11668 L 9278,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 9227,11668 L 9176,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 9125,11668 L 9074,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 9023,11668 L 8972,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 8921,11668 L 8870,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 8819,11668 L 8768,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 8717,11668 L 8666,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 8615,11668 L 8564,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 8513,11668 L 8462,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 8411,11668 L 8360,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 8309,11668 L 8258,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 8207,11668 L 8156,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 8105,11668 L 8054,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 8003,11668 L 7952,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 7901,11668 L 7850,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 7799,11668 L 7748,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 7697,11668 L 7646,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 7595,11668 L 7544,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 7493,11668 L 7442,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 7391,11668 L 7340,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 7289,11668 L 7238,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 7187,11668 L 7136,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 7085,11668 L 7034,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 6983,11668 L 6932,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 6881,11668 L 6830,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 6779,11668 L 6728,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 6677,11668 L 6626,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 6575,11668 L 6524,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 6473,11668 L 6422,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 6371,11668 L 6320,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 6269,11668 L 6218,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 6167,11668 L 6116,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 6065,11668 L 6014,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 5963,11668 L 5912,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 5861,11668 L 5810,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 5759,11668 L 5708,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 5657,11668 L 5606,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 5555,11668 L 5504,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 5453,11668 L 5402,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 5351,11668 L 5300,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 5249,11668 L 5198,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 5147,11668 L 5096,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 5045,11668 L 4994,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 4943,11668 L 4892,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 4841,11668 L 4790,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 4739,11668 L 4688,11668"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 4637,11668 L 4605,11668"/>
<path fill="rgb(0,0,0)" stroke="none" d="M 4175,11668 L 4625,11818 4625,11518 4175,11668 Z"/>
</g>
</g>
<g class="com.sun.star.drawing.TextShape">
<g id="id17">
<rect class="BoundingBox" stroke="none" fill="none" x="10779" y="10144" width="5843" height="2385"/>
<text class="TextShape"><tspan class="TextParagraph" font-family="Liberation Sans, sans-serif" font-size="635px" font-weight="400"><tspan class="TextPosition" x="11629" y="10845"><tspan fill="rgb(0,0,0)" stroke="none">Pod annotated </tspan></tspan><tspan class="TextPosition" x="11029" y="11556"><tspan fill="rgb(0,0,0)" stroke="none">with net-policy SG </tspan></tspan></tspan></text>
</g>
</g>
<g class="com.sun.star.drawing.LineShape">
<g id="id18">
<rect class="BoundingBox" stroke="none" fill="none" x="4301" y="13550" width="6352" height="301"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 4302,13700 L 4355,13700"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 4408,13700 L 4461,13700"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 4514,13700 L 4567,13700"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 4621,13700 L 4674,13700"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 4727,13700 L 4780,13700"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 4833,13700 L 4886,13700"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 4939,13700 L 4992,13700"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 5045,13700 L 5098,13700"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 5151,13700 L 5205,13700"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 5258,13700 L 5311,13700"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 5364,13700 L 5417,13700"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 5470,13700 L 5523,13700"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 5576,13700 L 5629,13700"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 5682,13700 L 5735,13700"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 5789,13700 L 5842,13700"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 5895,13700 L 5948,13700"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 6001,13700 L 6054,13700"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 6107,13700 L 6160,13700"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 6213,13700 L 6266,13700"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 6319,13700 L 6373,13700"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 6426,13700 L 6479,13700"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 6532,13700 L 6585,13700"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 6638,13700 L 6691,13700"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 6744,13700 L 6797,13700"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 6850,13700 L 6903,13700"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 6957,13700 L 7010,13700"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 7063,13700 L 7116,13700"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 7169,13700 L 7222,13700"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 7275,13700 L 7328,13700"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 7381,13700 L 7434,13700"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 7487,13700 L 7541,13700"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 7594,13700 L 7647,13700"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 7700,13700 L 7753,13700"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 7806,13700 L 7859,13700"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 7912,13700 L 7965,13700"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 8018,13700 L 8071,13700"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 8125,13700 L 8178,13700"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 8231,13700 L 8284,13700"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 8337,13700 L 8390,13700"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 8443,13700 L 8496,13700"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 8549,13700 L 8602,13700"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 8656,13700 L 8709,13700"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 8762,13700 L 8815,13700"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 8868,13700 L 8921,13700"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 8974,13700 L 9027,13700"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 9080,13700 L 9133,13700"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 9186,13700 L 9240,13700"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 9293,13700 L 9346,13700"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 9399,13700 L 9452,13700"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 9505,13700 L 9558,13700"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 9611,13700 L 9664,13700"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 9717,13700 L 9770,13700"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 9824,13700 L 9877,13700"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 9930,13700 L 9983,13700"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 10036,13700 L 10089,13700"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 10142,13700 L 10195,13700"/>
<path fill="rgb(0,0,0)" stroke="none" d="M 10652,13700 L 10202,13550 10202,13850 10652,13700 Z"/>
</g>
</g>
<g class="com.sun.star.drawing.TextShape">
<g id="id19">
<rect class="BoundingBox" stroke="none" fill="none" x="4556" y="12303" width="5335" height="1271"/>
<text class="TextShape"><tspan class="TextParagraph" font-family="Liberation Sans, sans-serif" font-size="635px" font-weight="400"><tspan class="TextPosition" x="5406" y="13004"><tspan fill="rgb(0,0,0)" stroke="none">Pod update </tspan></tspan></tspan></text>
</g>
</g>
<g class="com.sun.star.drawing.CustomShape">
<g id="id20">
<rect class="BoundingBox" stroke="none" fill="none" x="8365" y="15096" width="43471" height="1889"/>
<text class="TextShape"><tspan class="TextParagraph" font-family="Liberation Sans, sans-serif" font-size="635px" font-weight="400"><tspan class="TextPosition" x="48236" y="16853"><tspan fill="rgb(0,0,0)" stroke="none">Attach policy </tspan></tspan></tspan></text>
</g>
</g>
<g class="com.sun.star.drawing.CustomShape">
<g id="id21">
<rect class="BoundingBox" stroke="none" fill="none" x="9508" y="17015" width="42394" height="500"/>
<text class="TextShape"><tspan class="TextParagraph" font-family="Liberation Sans, sans-serif" font-size="635px" font-weight="400"><tspan class="TextPosition" x="46208" y="17475"><tspan fill="rgb(0,0,0)" stroke="none">vxcxvattsdsadsadsa</tspan></tspan></tspan></text>
</g>
</g>
<g class="com.sun.star.drawing.CustomShape">
<g id="id22">
<rect class="BoundingBox" stroke="none" fill="none" x="7730" y="14968" width="5592" height="3179"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 10525,14969 L 13320,16557 10525,18145 7731,16557 10525,14969 10525,14969 Z"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 7731,14969 L 7731,14969 Z"/>
<path fill="none" stroke="rgb(0,0,0)" d="M 13320,18145 L 13320,18145 Z"/>
<text class="TextShape"><tspan class="TextParagraph" font-family="Liberation Sans, sans-serif" font-size="635px" font-weight="400"><tspan class="TextPosition" x="8639" y="16422"><tspan fill="rgb(0,0,0)" stroke="none">Attach policy </tspan></tspan></tspan><tspan class="TextParagraph" font-family="Liberation Sans, sans-serif" font-size="635px" font-weight="400"><tspan class="TextPosition" x="9202" y="17133"><tspan fill="rgb(0,0,0)" stroke="none">sg to port</tspan></tspan></tspan></text>
</g>
</g>
</g>
</g>
</g>
</g>
</g>
</svg>


Binary file not shown.


File diff suppressed because it is too large


Binary file not shown.


File diff suppressed because one or more lines are too long


File diff suppressed because one or more lines are too long


Binary file not shown.


View File

@ -1,6 +0,0 @@
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
sphinx>=2.0.0,!=2.1.0 # BSD
openstackdocstheme>=2.2.1 # Apache-2.0
reno>=3.1.0 # Apache-2.0

View File

@ -1,82 +0,0 @@
# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import sys
sys.path.insert(0, os.path.abspath('../..'))
# -- General configuration ----------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = [
    'sphinx.ext.autodoc',
    'sphinx.ext.todo',
    'openstackdocstheme',
    'reno.sphinxext'
]
# autodoc generation is a bit aggressive and a nuisance when doing heavy
# text edit cycles.
# execute "export SPHINX_DEBUG=1" in your terminal to disable
# The suffix of source filenames.
source_suffix = '.rst'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = 'kuryr-kubernetes'
copyright = '2013, OpenStack Foundation'
# openstackdocstheme options
openstackdocs_repo_name = 'openstack/kuryr-kubernetes'
openstackdocs_auto_name = False
openstackdocs_bug_project = 'kuryr-kubernetes'
openstackdocs_bug_tag = ''
# If true, '()' will be appended to :func: etc. cross-reference text.
add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
add_module_names = True
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'native'
# -- Options for HTML output --------------------------------------------------
# The theme to use for HTML and HTML Help pages. Major themes that come with
# Sphinx are currently 'default' and 'sphinxdoc'.
# html_theme_path = ["."]
html_theme = 'openstackdocs'
# html_static_path = ['static']
# Output file base name for HTML help builder.
htmlhelp_basename = '%sdoc' % project
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass
# [howto/manual]).
latex_documents = [
    ('index',
     '%s.tex' % project,
     '%s Documentation' % project,
     'OpenStack Foundation', 'manual'),
]
# Example configuration for intersphinx: refer to the Python standard library.
#intersphinx_mapping = {'http://docs.python.org/': None}

View File

@ -1,82 +0,0 @@
============================
So You Want to Contribute...
============================
For general information on contributing to OpenStack, please check out the
`contributor guide <https://docs.openstack.org/contributors/>`_ to get started.
It covers all the basics that are common to all OpenStack projects: the
accounts you need, the basics of interacting with our Gerrit review system, how
we communicate as a community, etc.
The sections below cover the more project-specific information you need to get
started with kuryr-kubernetes.
Communication
-------------
The primary communication channel of the kuryr-kubernetes team is the
`#openstack-kuryr channel on IRC <ircs://irc.oftc.net:6697/openstack-kuryr>`_.
For more formal inquiries you can use the [kuryr] tag on the `openstack-discuss
mailing list
<http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss>`_.
The kuryr-kubernetes team does not hold weekly meetings, but we have office
hours every Monday at 15:00 UTC on our IRC channel.
Contacting the Core Team
------------------------
Outside of office hours, the kuryr-kubernetes team is mostly available during
CET working hours (7:00-17:00 UTC), as most of the team is located in Europe.
Feel free to try pinging dulek, ltomasbo, maysams or gryf on IRC; we have
bouncers set up, so we'll answer once online.
New Feature Planning
--------------------
We don't really follow a very detailed way of feature planning. If you want to
implement a feature, come talk to us on IRC, create a `blueprint on Launchpad
<https://blueprints.launchpad.net/kuryr-kubernetes>`_ and start coding!
kuryr-kubernetes follows the OpenStack release schedule pretty loosely, as
we're more bound to the Kubernetes release schedule. This means that our
deadlines are not as hard as those of other projects.
Task Tracking
-------------
We track our `tasks in Launchpad
<https://bugs.launchpad.net/kuryr-kubernetes>`_.
If you're looking for some smaller, easier work item to pick up and get started
on, search for the 'low-hanging-fruit' tag in either blueprints or bugs.
Reporting a Bug
---------------
You found an issue and want to make sure we are aware of it? You can do so on
`Launchpad <https://bugs.launchpad.net/kuryr-kubernetes>`_. It won't hurt to
ping us about it on IRC too.
Getting Your Patch Merged
-------------------------
We follow the normal procedures, requiring two +2's before approving the patch.
Due to the limited number of contributors, we do not require that those +2's are
from reviewers working for separate businesses.
If your patch is stuck in review, please ping us on IRC as listed in sections
above.
Project Team Lead Duties
------------------------
All common PTL duties are enumerated in the `PTL guide
<https://docs.openstack.org/project-team-guide/ptl.html>`_.
An additional PTL duty is to maintain `kuryr images on Docker Hub
<https://hub.docker.com/orgs/kuryr/repositories>`_.

View File

@ -1,8 +0,0 @@
===========================
Contributor Documentation
===========================
.. toctree::
   :maxdepth: 2

   contributing

View File

@ -1,98 +0,0 @@
..
    This work is licensed under a Creative Commons Attribution 3.0 Unported
    License.

    http://creativecommons.org/licenses/by/3.0/legalcode

    Convention for heading levels in Neutron devref:

    =======  Heading 0 (reserved for the title in a document)
    -------  Heading 1
    ~~~~~~~  Heading 2
    +++++++  Heading 3
    '''''''  Heading 4
    (Avoid deeper levels because they do not render well.)
======================================
Kuryr Support Multiple Projects Design
======================================
Purpose
-------
Currently, ``kuryr-kubernetes`` implements only a default project driver: the
project id of the OpenStack resources backing k8s resources is specified by
the configuration option ``neutron_defaults.project``. This means all of these
OpenStack resources share the same project id, which leads to puzzling issues
in multi-tenant environments. For example, the metering and billing system can
not classify these resources, and the resources may exceed the tenant's quota.
In order to resolve these issues, we need to ensure these resources have
different project ids (for the sake of simplicity, we can treat a project as a
tenant).
Overview
--------
Implement an annotation project driver for ``namespace``, ``pod``, ``service``
and ``network policy``. The driver reads the project ID from the annotations
of the resource's namespace.
Proposed Solution
-----------------
Currently, the OpenStack resources created by ``kuryr-kubernetes`` only
involve ``neutron`` and ``octavia``. ``Neutron`` and ``octavia`` use the
OpenStack project ID to isolate their resources, so we can treat an OpenStack
project as a metering or billing tenant. Generally, ``kuryr-kubernetes`` uses
the ``kuryr`` user to create/delete/update/read ``neutron`` or ``octavia``
resources. The ``kuryr`` user has the admin role, so ``kuryr-kubernetes`` can
manage any project's resources.
Therefore, I propose that we introduce an annotation
``openstack.org/kuryr-project``, to be set when a k8s namespace is created.
The annotation's value is an OpenStack project's ID. One k8s namespace can
only specify one OpenStack project, but one OpenStack project can be
associated with one or multiple k8s namespaces.
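For illustration, a namespace carrying this annotation could look as follows
(the project ID value is just an example):

.. code-block:: yaml

   apiVersion: v1
   kind: Namespace
   metadata:
     name: team-a
     annotations:
       openstack.org/kuryr-project: e3f1c1d4a8b24b7a9a1c8e9d2f3a4b5c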
.. note::
   ``kuryr-kubernetes`` cannot verify the project ID specified by
   ``openstack.org/kuryr-project``, so the validity of the project ID should
   be ensured by a third-party process. In addition, we suggest granting the
   privilege of k8s namespace creation and update only to users with the
   admin role (to prevent common users from creating k8s namespaces
   arbitrarily).
When a user creates a ``pod``, ``service`` or ``network policy``, the new
project driver will retrieve the resource's namespace, get the namespace's
information and try to read the project ID from the
``openstack.org/kuryr-project`` annotation. If the driver succeeds in getting
the project ID, the project ID is returned to the resource's handler, and the
handler creates the related OpenStack resources with that project ID.
.. note::
This is only solving the resource ownership issues. No isolation in terms
of networking will be achieved this way.
For namespaces, the namespace handler can get the namespace information from
the ``on_present`` function's parameter, so the namespace annotation project
driver can try to get the project ID from that information directly.
If the user doesn't add the ``openstack.org/kuryr-project`` annotation to the
namespace, the default project needs to be selected; the default project is
specified by the configuration option ``neutron_defaults.project``. If the
default project is not specified either, the driver will raise a
``cfg.RequiredOptError`` error.
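A minimal sketch of the proposed driver logic, covering both the annotation
lookup and the default-project fallback (class and method names are
illustrative, not the final implementation):

.. code-block:: python

   from oslo_config import cfg

   PROJECT_ANNOTATION = 'openstack.org/kuryr-project'


   class AnnotationProjectDriver(object):
       """Resolve the project ID from the resource's namespace annotation."""

       def get_project(self, namespace):
           annotations = namespace['metadata'].get('annotations', {})
           project_id = annotations.get(PROJECT_ANNOTATION)
           if project_id:
               return project_id
           # Fall back to the configured default project.
           if not cfg.CONF.neutron_defaults.project:
               # Neither the annotation nor the default is set.
               raise cfg.RequiredOptError('project', 'neutron_defaults')
           return cfg.CONF.neutron_defaults.project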
Testing
-------
A new CI gate with these drivers enabled needs to be added.
Tempest Tests
~~~~~~~~~~~~~
Tempest tests need to be added.


@ -1,84 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
Convention for heading levels in Neutron devref:
======= Heading 0 (reserved for the title in a document)
------- Heading 1
~~~~~~~ Heading 2
+++++++ Heading 3
''''''' Heading 4
(Avoid deeper levels because they do not render well.)
======================================
Kuryr Kubernetes Health Manager Design
======================================
Purpose
-------
The purpose of this document is to present the design decision behind
Kuryr Kubernetes Health Managers.
The main purpose of the Health Managers is to perform health verifications
that assure the readiness and liveness of the Kuryr Controller and CNI pods,
and so improve the management that Kubernetes performs on Kuryr-Kubernetes
pods.
Overview
--------
Kuryr Controller might reach a broken state due to problems such as being
unable to connect to the services it depends on, or those services being
unhealthy. It is important to check the health of these services so that
Kubernetes and its users know when Kuryr Controller is ready to perform its
networking tasks. It is also necessary to check the health state of Kuryr
components in order to assure that the Kuryr Controller service is alive. To
provide these functionalities, the Controller's Health Manager will verify
and serve the health state of these services and components to the probes.
Besides these problems on the Controller, the Kuryr CNI daemon might also
reach a broken state as a result of its components being unhealthy or
necessary configurations not being present. It is essential that the health
and configuration of the CNI components are properly verified to assure the
CNI daemon is in good shape. In this way, the CNI Health Manager will check
and serve the health state to the Kubernetes readiness and liveness probes.
Proposed Solution
-----------------
One of the endpoints provided by the Controller Health Manager will check
whether it is able to watch the Kubernetes API, authenticate with Keystone
and talk to Neutron, since these are the services needed by Kuryr Controller.
These checks will assure the Controller's readiness. The other endpoint will
verify the health state of the Kuryr components and guarantee the
Controller's liveness.
The CNI Health Manager also provides two endpoints for the Kubernetes probes.
The endpoint that provides the readiness state to the probe checks the
connection to the Kubernetes API and the presence of NET_ADMIN capabilities.
The other endpoint, which provides liveness, validates whether the IPDB is in
working order, whether the maximum number of CNI ADD failures has been
reached, the health of the CNI components, and the existence of memory leaks.
.. note::
   The CNI Health Manager is started with the memory leak check disabled,
   which corresponds to the default value shown below. To enable the check,
   set the following option in kuryr.conf to a memory limit in MiB:

   .. code-block:: ini

      [cni_health_server]
      max_memory_usage = -1
The CNI Health Manager is added as a process to CNI daemon and communicates
to the other two processes i.e. Watcher and Server with a shared boolean
object, which indicates the current health state of each component.
The idea behind these two Managers is to combine all the necessary checks in
servers running inside Kuryr Controller and CNI pods to provide the result of
these checks to the probes.
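As an illustration, these endpoints can be wired into the kuryr-controller
pod spec roughly as follows; the ``/ready`` and ``/alive`` paths and port
8082 are assumptions based on the defaults, so adjust them to your
deployment:

.. code-block:: yaml

   # Probes for the kuryr-controller container (illustrative values).
   readinessProbe:
     httpGet:
       path: /ready
       port: 8082
     initialDelaySeconds: 15
     timeoutSeconds: 5
   livenessProbe:
     httpGet:
       path: /alive
       port: 8082
     initialDelaySeconds: 15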


@ -1,143 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
Convention for heading levels in Neutron devref:
======= Heading 0 (reserved for the title in a document)
------- Heading 1
~~~~~~~ Heading 2
+++++++ Heading 3
''''''' Heading 4
(Avoid deeper levels because they do not render well.)
================================
Active/Passive High Availability
================================
Overview
--------
Initially it was assumed that there would only be a single kuryr-controller
instance in a Kuryr-Kubernetes deployment. While this simplified a lot of the
controller code, it is obviously not a perfect situation. Having redundant
controllers can help with achieving higher availability and scalability of
the deployment.
Now, with the introduction of the possibility to run Kuryr in pods on a
Kubernetes cluster, HA is much easier to implement. The purpose of this
document is to explain how it will work in practice.
Proposed Solution
-----------------
There are two types of HA - Active/Passive and Active/Active. In this
document we'll focus on the former. A/P basically works by having one of the
instances act as the leader (doing all the exclusive tasks), while the other
instances wait in *standby* mode, ready to take over the leader role in case
the leader *dies*. As you can see, a *leader election* mechanism is required
to make this work.
Leader election
~~~~~~~~~~~~~~~
The idea here is to use a leader election mechanism based on Kubernetes
Endpoints, as neatly `explained on Kubernetes blog`_. The election is based
on Endpoint resources that hold an annotation with the current leader and its
leadership lease time. If the leader dies, other instances of the service are
free to take over the record. Kubernetes API mechanisms provide update
exclusion to prevent race conditions.
This can be implemented by adding another *leader-elector* container to each
of the kuryr-controller pods:
.. code-block:: yaml
- image: gcr.io/google_containers/leader-elector:0.5
name: leader-elector
args:
- "--election=kuryr-controller"
- "--http=0.0.0.0:${KURYR_CONTROLLER_HA_PORT:-16401}"
- "--election-namespace=kube-system"
- "--ttl=5s"
ports:
- containerPort: ${KURYR_CONTROLLER_HA_PORT:-16401}
protocol: TCP
This adds a new container to the pod. This container will do the leader
election and expose a simple JSON API on port 16401 by default. This API will
be available to the kuryr-controller container.
Kuryr Controller Implementation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The main issue with having multiple controllers is task division. All of the
controllers are watching the same endpoints and getting the same
notifications, but those notifications cannot be processed by multiple
controllers at once, because we would end up with a huge race condition,
where each controller creates Neutron resources but only one succeeds in
putting the annotation on the Kubernetes resource it is processing.
This is obviously unacceptable so as a first step we're implementing A/P HA,
where only the leader is working on the resources and the other instances wait
as standby. This will be implemented by periodically calling the leader-elector
API to check the current leader. On leader change:
* Pod losing the leadership will stop its Watcher. Please note that it will be
stopped gracefully, so all the ongoing operations will be completed.
* Pod gaining the leadership will start its Watcher. Please note that it will
get notified about all the previously created Kubernetes resources, but will
ignore them as they already have the annotations.
* Pods not affected by leadership change will continue to be in standby mode
with their Watchers stopped.
Please note that this means that in HA mode the Watcher will not get started
on controller startup, but only when the periodic task notices that the
controller is the leader.
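A simplified sketch of that periodic task; the helper names and the JSON
payload shape are assumptions about the leader-elector sidecar, not the
actual kuryr-controller code:

.. code-block:: python

   import json
   import urllib.request

   LEADER_ELECTOR_URL = 'http://127.0.0.1:16401'


   def _get_leader():
       # The sidecar is assumed to answer with JSON like
       # {"name": "<leader-pod-name>"}.
       with urllib.request.urlopen(LEADER_ELECTOR_URL) as response:
           return json.loads(response.read())['name']


   def check_leadership(watcher, my_pod_name, state):
       """Periodic task: start/stop the Watcher on leadership changes."""
       is_leader = _get_leader() == my_pod_name
       if is_leader and not state['leader']:
           watcher.start()   # gained leadership, start watching
       elif not is_leader and state['leader']:
           watcher.stop()    # lost leadership, stops gracefully
       state['leader'] = is_leader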
Issues
~~~~~~
There are certain issues related to orphaned OpenStack resources that we may
hit. Those can happen in two cases:
* Controller instance dies instantly during request processing. Some of the
  OpenStack resources were already created, but information about them was
  not yet annotated onto the Kubernetes resource. That information is
  therefore lost and we end up with orphaned OpenStack resources. The new
  leader will process the Kubernetes resource by creating the resources
  again.
* During leader transition (the short period after a leader died, but before
  its lease expired and the periodic task on other controllers noticed that;
  this shouldn't exceed 10 s) some K8s resources are deleted. The new leader
  will not get the notification about the deletion and those will go
  unnoticed.
Both of these issues can be tackled by a garbage-collector mechanism that
will periodically look over Kubernetes resources and delete OpenStack
resources that have no representation in annotations.
The latter of the issues can also be tackled by saving last seen
``resourceVersion`` of watched resources list when stopping the Watcher and
restarting watching from that point.
Future enhancements
~~~~~~~~~~~~~~~~~~~
It would be useful to implement the garbage collector and
``resourceVersion``-based protection mechanism described in section above.
Besides that, to further improve scalability, we should work on an
Active/Active HA model, where work is divided evenly between all of the
kuryr-controller instances. This can be achieved e.g. by using a consistent
hash ring to decide which instance will process which resource.
Potentially this can be extended with support for non-containerized
deployments by using Tooz and some other tool providing leader election -
like Consul or Zookeeper.
.. _explained on Kubernetes blog: https://kubernetes.io/blog/2016/01/simple-leader-election-with-kubernetes/


@ -1,56 +0,0 @@
..
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
Convention for heading levels in Neutron devref:
======= Heading 0 (reserved for the title in a document)
------- Heading 1
~~~~~~~ Heading 2
+++++++ Heading 3
''''''' Heading 4
(Avoid deeper levels because they do not render well.)
===========================
Design and Developer Guides
===========================
In the Design and Developer Guides, you will find information on kuryr
kubernetes integration plans and design decisions. There are sections that
contain specific integration components description and detailed designs.
Finally, the developer guide includes information about kuryr kubernetes
testing infrastructure.
Design documents
----------------
.. toctree::
:maxdepth: 3
kuryr_kubernetes_design
service_support
port_manager
vif_handler_drivers_design
health_manager
high_availability
kuryr_kubernetes_versions
network_policy
updating_pod_resources_api
annotation_project_driver
Indices and tables
------------------
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`


@ -1,328 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
Convention for heading levels in Neutron devref:
======= Heading 0 (reserved for the title in a document)
------- Heading 1
~~~~~~~ Heading 2
+++++++ Heading 3
''''''' Heading 4
(Avoid deeper levels because they do not render well.)
===================================
Kuryr Kubernetes Integration Design
===================================
Purpose
-------
The purpose of this document is to present the main Kuryr-K8s integration
components and capture the design decisions of each component currently taken
by the Kuryr team.
Goal Statement
--------------
Enable OpenStack Neutron realization of the Kubernetes networking. Start by
supporting network connectivity and expand to support advanced features, such
as Network Policies. In the future, it may be extended to some other
OpenStack services.
Overview
--------
In order to integrate Neutron into Kubernetes networking, 2 components are
introduced: Controller and CNI Driver. Controller is a supervisor component
responsible for maintaining the translation of the networking-relevant
Kubernetes model into the OpenStack (i.e. Neutron) model. It can be
considered a centralized service (supporting HA mode in the future). The CNI
driver is responsible for binding Kubernetes pods on worker nodes into
Neutron ports, ensuring the requested level of isolation. Please see below
the component view of the integrated system:
.. image:: ../../images/kuryr_k8s_components.png
:alt: integration components
:align: center
:width: 100%
Design Principles
-----------------
#. Loose coupling between integration components.
#. Flexible deployment options to support different project, subnet and
security groups assignment profiles.
#. The communication of the pod binding data between Controller and CNI driver
should rely on existing communication channels, currently through the
KuryrPort CRDs.
#. CNI Driver should not depend on Neutron. It gets all required details
from Kubernetes API server (currently through Kubernetes CRDs),
therefore depending on Controller to perform its translation tasks.
#. Allow different neutron backends to bind Kubernetes pods without code
modification. This means that both Controller and CNI binding mechanism
should allow loading of the vif management and binding components,
manifested via configuration. If some vendor requires some extra code, it
should be handled in one of the stevedore drivers.
Kuryr Controller Design
-----------------------
Controller is responsible for watching Kubernetes API endpoints to make sure
that the corresponding model is maintained in Neutron. Controller updates the
K8s resources' annotations and/or CRDs to keep the Neutron details required
by the CNI driver, as well as for model mapping persistence.
Controller is composed of the following components:
Watcher
~~~~~~~
Watcher is a common software component used by both the Controller and the
CNI driver. Watcher connects to the Kubernetes API. Watcher's responsibility
is to observe the registered (either on startup or dynamically during
runtime) endpoints and invoke the registered callback handler (pipeline) to
pass all events from the registered endpoints. As an example, if a Service is
created on the Kubernetes side, the ServiceHandler, which is watching Service
objects, uses the Watcher to detect the changes and calls the right driver to
reconcile Kubernetes with the needed OpenStack resources.
Event Handler
~~~~~~~~~~~~~
EventHandler is an interface class for the Kubernetes event handling. There are
several 'wrapper' event handlers that can be composed to implement Controller
handling pipeline.
**Retry** Event Handler is used for handling specified failures during event
processing. It can be used to 'wrap' another EventHandler and, in case of a
specified error, will retry the wrapped event handler invocation within a
specified timeout. In case of persistent failure, Retry will raise the
wrapped EventHandler's exception.
**Async** Event Handler is used to execute event handling asynchronously.
Events are grouped based on the specified 'thread_groups'. Events of the same
group are processed in order of arrival. A thread group maps to a unique K8s
resource (each Pod, Service, etc.). Async can be used to 'wrap' another
EventHandler. Queues per thread group are added dynamically once relevant
events arrive and removed once the queue is empty.
**LogExceptions** Event Handler suppresses exceptions and sends them to log
facility.
**Dispatcher** is an Event Handler that distributes events to registered
handlers based on event content and handler predicate provided during event
handler registration.
ControllerPipeline
~~~~~~~~~~~~~~~~~~
ControllerPipeline serves as an event dispatcher of the Watcher for the
Kuryr-K8s controller service. Currently watched endpoints are 'pods',
'services' and 'endpoints'. Kubernetes resource event handlers (Event
Consumers) are registered into the ControllerPipeline. There is a special
EventConsumer, ResourceEventHandler, that provides an API for Kubernetes
event handling. When a watched event arrives, it is processed by all Resource
Event Handlers registered for the specific Kubernetes object kind. The
pipeline retries the resource event handler invocation in case of a
ResourceNotReady exception until it succeeds or the (time-based) number of
retries is reached. Any unrecovered failure is logged without affecting other
Handlers (of the current and other events). Events of the same group (same
Kubernetes object) are handled sequentially in the order of arrival. Events
of different Kubernetes objects are handled concurrently.
.. image:: ../../images/controller_pipeline.png
:alt: controller pipeline
:align: center
:width: 100%
ResourceEventHandler
~~~~~~~~~~~~~~~~~~~~
ResourceEventHandler is a convenience base class for Kubernetes event
processing. A specific Handler associates itself with a specific Kubernetes
object kind (by setting OBJECT_KIND) and is expected to implement at least
one of the methods of the base class to handle at least one of the
ADDED/MODIFIED/DELETED events of the Kubernetes object. For details, see
`k8s-api`_. Since both ADDED and MODIFIED event types trigger a very similar
sequence of actions, the Handler has an 'on_present' method that is invoked
for both event types. The specific Handler implementation should strive to
put all the common ADDED and MODIFIED event handling logic in this method to
avoid code duplication.
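For illustration, a minimal handler following this pattern could look like
the sketch below (the module path and the method names besides
``on_present`` are assumptions):

.. code-block:: python

   from kuryr_kubernetes.handlers import k8s_base  # assumed module path


   class ServiceHandler(k8s_base.ResourceEventHandler):
       """Reacts to Kubernetes Service events."""

       OBJECT_KIND = 'Service'

       def on_present(self, service):
           # Invoked for both ADDED and MODIFIED events; put the common
           # handling logic here to avoid duplication.
           pass

       def on_deleted(self, service):
           # Invoked for DELETED events.
           pass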
Pluggable Handlers
~~~~~~~~~~~~~~~~~~
Starting with the Rocky release, Kuryr-Kubernetes includes a pluggable
interface for the Kuryr controller handlers.
The pluggable handlers framework allows:
- Using externally provided handlers.
- Controlling which handlers should be active.
To control which Kuryr Controller handlers should be active, the selected
handlers need to be included in kuryr.conf in the 'kubernetes' section. If
not specified, Kuryr Controller will run the default handlers, which
currently include the following:
====================== =========================
Handler Kubernetes resource
====================== =========================
vif Pod
kuryrport KuryrPort CRD
endpoints Endpoints
service Service
kuryrloadbalancer KuryrLoadBalancer CRD
kuryrnetwork KuryrNetwork CRD
namespace Namespaces
kuryrnetworkpolicy KuryrNetworkPolicy CRD
podlabel Pod
policy NetworkPolicy
machine Machine
kuryrnetworkpopulation KuryrNetwork CRD
====================== =========================
For example, to enable only the 'vif' controller handler we should set the
following at kuryr.conf:
.. code-block:: ini
[kubernetes]
enabled_handlers=vif,kuryrport
Note that we have to specify vif and kuryrport together, since currently
those two handlers work together.
Providers
~~~~~~~~~
Providers (drivers) are used by ResourceEventHandlers to manage specific
aspects of the Kubernetes resource in the OpenStack domain. For example,
creating a Kubernetes Pod will require a Neutron port to be created on a
specific network with the proper security groups applied to it. There will be
dedicated drivers for Project, Subnet, Port and Security Group settings in
Neutron. For instance, the Handler that processes pod events will use
PodVIFDriver, PodProjectDriver, PodSubnetsDriver and PodSecurityGroupsDriver.
The drivers model is introduced in order to allow flexibility in mapping the
Kubernetes model to OpenStack. There can be different drivers that do Neutron
resource management, i.e. create on demand or grab one from a precreated
pool. There can be different drivers for Project management, i.e. single
tenant or multiple. The same goes for the other drivers. There are drivers
that handle the Pod based on the project, subnet and security groups
specified via configuration settings during the cluster deployment phase.
NeutronPodVifDriver
~~~~~~~~~~~~~~~~~~~
A PodVifDriver subclass should implement the request_vif, release_vif and
activate_vif methods. In case request_vif returns a VIF object in the down
state, the Controller will invoke activate_vif. The VIF 'active' state is
required by the CNI driver to complete pod handling.
The NeutronPodVifDriver is the default driver; it creates a Neutron port upon
Pod addition and deletes the port upon Pod removal.
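The interface, roughly (the signatures are illustrative; the actual driver
API may differ):

.. code-block:: python

   import abc


   class PodVIFDriver(abc.ABC):
       """Sketch of the VIF driver interface described above."""

       @abc.abstractmethod
       def request_vif(self, pod, project_id, subnets, security_groups):
           """Create (or pick) a Neutron port and return its VIF object."""

       @abc.abstractmethod
       def release_vif(self, pod, vif):
           """Delete (or recycle) the Neutron port behind the VIF."""

       @abc.abstractmethod
       def activate_vif(self, vif):
           """Wait until the VIF becomes ACTIVE on the Neutron side."""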
CNI Driver
----------
The CNI driver is just a thin client that passes CNI ADD and DEL requests to
a kuryr-daemon instance via its HTTP API. It's a simple executable that is
supposed to be called by kubelet's CNI. Since the Train release the CNI
driver has an alternative golang implementation (see the kuryr_cni directory)
to make injecting it onto the Kubernetes node from the kuryr-cni pod easier.
This enables Kuryr to work on K8s deployments that do not have Python or curl
on Kubernetes nodes. Compatibility between the Python and golang CNI drivers
is supposed to be maintained.
.. _cni-daemon:
CNI Daemon
----------
The CNI Daemon is a service that should run on every Kubernetes node.
Starting from the Rocky release it should be seen as the default supported
deployment option, and running without it is impossible starting from the
Stein release. It is responsible for watching pod events on the node it's
running on, answering calls from the CNI Driver and attaching VIFs when they
are ready. In the future it will also keep information about pooled ports in
memory. This helps to limit the number of processes spawned when creating
multiple Pods, as a single Watcher is enough for each node and the CNI Driver
will only wait on a local network socket for a response from the Daemon.
Currently the CNI Daemon consists of two processes, i.e. Watcher and Server.
The processes communicate with each other using Python's
``multiprocessing.Manager`` and a shared dictionary object. Watcher is
responsible for extracting VIF information from KuryrPort CRD events and
putting it into the shared dictionary. Server is a regular WSGI server that
answers CNI Driver calls. When a CNI request comes in, Server waits for the
VIF object to appear in the shared dictionary. As CRD data is read from the
Kubernetes API and added to the registry by the Watcher process, Server will
eventually get the VIF it needs to connect for a given pod. It then waits for
the VIF to become active before returning to the CNI Driver.
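A rough sketch of that hand-off (the function and key names are made up for
illustration):

.. code-block:: python

   import multiprocessing
   import time

   manager = multiprocessing.Manager()
   registry = manager.dict()  # shared between Watcher and Server processes


   def watcher_on_event(pod_key, vif):
       # Watcher: extract the VIF from a KuryrPort CRD event and publish it.
       registry[pod_key] = vif


   def server_wait_for_active_vif(pod_key, timeout=180):
       # Server: block the CNI ADD call until the VIF exists and is active.
       deadline = time.time() + timeout
       while time.time() < deadline:
           vif = registry.get(pod_key)
           if vif is not None and vif.active:
               return vif
           time.sleep(1)
       raise TimeoutError('VIF for %s did not become active' % pod_key)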
Communication
~~~~~~~~~~~~~
The CNI Daemon Server starts an HTTP server on a local network socket
(``127.0.0.1:5036`` by default). Currently the server listens for 2 API
calls. Both calls load the ``CNIParameters`` from the body of the call (it is
expected to be JSON).
For reference, see the updated pod creation flow diagram:
.. image:: ../../images/pod_creation_flow_daemon.png
:alt: Controller-CNI-daemon interaction
:align: center
:width: 100%
/addNetwork
+++++++++++
**Function**: The equivalent of running ``K8sCNIPlugin.add``.
**Return code:** 202 Accepted
**Return body:** Returns VIF data in JSON form. This is a serialized
oslo.versionedobject from the ``os_vif`` library. On the other side it can be
deserialized using o.vo's ``obj_from_primitive()`` method.
/delNetwork
+++++++++++
**Function**: The equivalent of running ``K8sCNIPlugin.delete``.
**Return code:** 204 No content
**Return body:** None.
When running in daemonized mode, the CNI Driver will call the CNI Daemon over
those APIs to perform its tasks and wait on the socket for the result.
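For example, assuming the calls are plain POSTs with a JSON body (the
``cni_params.json`` file here is a stand-in for the serialized
``CNIParameters``), a manual call could look like:

.. code-block:: console

   $ curl -X POST -H 'Content-Type: application/json' \
         -d @cni_params.json http://127.0.0.1:5036/addNetwork
   $ curl -X POST -H 'Content-Type: application/json' \
         -d @cni_params.json http://127.0.0.1:5036/delNetwork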
Kubernetes Documentation
------------------------
The `Kubernetes reference documentation`_ is a great source for finding more
details about Kubernetes API, CLIs, and tools.
.. _k8s-api: https://github.com/kubernetes/kubernetes/blob/release-1.4/docs/devel/api-conventions.md#types-kinds
.. _Kubernetes reference documentation: https://kubernetes.io/docs/reference/


@ -1,43 +0,0 @@
===============================================
Kubernetes and OpenShift version support matrix
===============================================
This document maintains updated documentation about what Kubernetes and
OpenShift versions are supported at each Kuryr-Kubernetes release.
.. note::
   In general Kuryr should work fine with older versions of Kubernetes and
   OpenShift as well, as it only depends on APIs that are quite stable in
   Kubernetes itself. However, we try to limit the number of supported
   versions, as the Kubernetes policy is to only support the last 3 minor
   releases.
.. note::
   Kuryr-Kubernetes follows the *cycle-with-intermediary* release model, which
   is why there are multiple minor releases per single OpenStack release.
   Going forward it is possible that Kuryr-Kubernetes will switch to the
   *independent* release model, which would completely untie it from OpenStack
   releases. This is because it seems to be easier to follow Kubernetes
   releases than OpenStack releases.
.. warning::
In most cases only the latest supported version is tested in the CI/CD
system.
======================== ====================================== ========================
Kuryr-Kubernetes version Kubernetes version OpenShift Origin version
======================== ====================================== ========================
master (Victoria) v1.16.x, v1.17.x, v1.18.x 4.3, 4.4, 4.5
2.0.0 (Ussuri) v1.14.x, v1.15.x, v1.16.x 3.11, 4.2
1.1.0 (Train) v1.13.x, v1.14.x, v1.15.x 3.9, 3.10, 3.11, 4.2
0.6.x, 1.0.0 (Stein) v1.11.x, v1.12.x, v1.13.x 3.9, 3.10, 3.11, 4.2
0.5.2-3 (Rocky) v1.9.x, v1.10.x, v1.11.x, v1.12.x 3.9, 3.10
0.5.0-1 (Rocky) v1.9.x, v1.10.x 3.9, 3.10
0.4.x (Queens) v1.8.x 3.7
0.3.0 (Queens) v1.6.x, v1.8.x No support
0.2.x (Pike) v1.4.x, v1.6.x No support
0.1.0 (Pike) v1.3.x, v1.4.x No support
======================== ====================================== ========================


@ -1,521 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
Convention for heading levels in Neutron devref:
======= Heading 0 (reserved for the title in a document)
------- Heading 1
~~~~~~~ Heading 2
+++++++ Heading 3
''''''' Heading 4
(Avoid deeper levels because they do not render well.)
==============
Network Policy
==============
Purpose
--------
The purpose of this document is to present how Network Policy is supported by
Kuryr-Kubernetes.
Overview
--------
Kubernetes supports a Network Policy object to express ingress and egress
rules for pods. A Network Policy uses labels to select multiple pods, and
defines rules based on different labeling and/or CIDRs. When combined with a
networking plugin, those policy objects are enforced and respected.
Proposed Solution
-----------------
Kuryr-Kubernetes relies on Neutron security groups and security group rules
to enforce a Network Policy object, more specifically one security group per
policy with possibly multiple rules. Each policy has a namespace-scoped
Network Policy CRD that stores all the OpenStack-related resources on the
Kubernetes side, avoiding many calls to Neutron and helping to differentiate
between the current Kubernetes status of the Network Policy and the last one
Kuryr-Kubernetes enforced.
The network policy CRD has the following format:
.. code-block:: yaml
apiVersion: openstack.org/v1
kind: KuryrNetworkPolicy
metadata:
...
spec:
egressSgRules:
- sgRule:
...
ingressSgRules:
- sgRule:
...
podSelector:
...
status:
securityGroupId: ...
podSelector: ...
securityGroupRules: ...
A new handler has been added to react to Network Policy events, and the
existing ones, for instance the service/pod handlers, have been modified to
account for the side effects/actions of a Network Policy being enforced.
.. note::
Kuryr supports a network policy that contains:
* Ingress and Egress rules
* namespace selector and pod selector, defined with match labels or match
expressions, mix of namespace and pod selector, ip block
* named port
New handlers and drivers
~~~~~~~~~~~~~~~~~~~~~~~~
The Network Policy handler
++++++++++++++++++++++++++
This handler is responsible for triggering the Network Policy Spec processing
and the creation or removal of a security group with the appropriate security
group rules. It also applies the security group to the pods and services
affected by the policy.
The Pod Label handler
+++++++++++++++++++++
This new handler is responsible for triggering the update of a security group
rule upon pod label changes, and its enforcement on the pod port and service.
The Network Policy driver
++++++++++++++++++++++++++
This is the main driver. It enforces a Network Policy by processing the Spec
and creating or updating the security group with the appropriate security
group rules.
The Network Policy Security Group driver
++++++++++++++++++++++++++++++++++++++++
It is responsible for creating, deleting, or updating security group rules
for pods, namespaces or services based on different Network Policies.
Modified handlers and drivers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The KuryrPort handler
+++++++++++++++++++++
As network policy rules can be defined based on pod labels, this handler has
been enhanced to trigger a security group rule creation or deletion,
depending on the type of pod event, if the pod is affected by the network
policy and a new security group rule is needed. It also triggers the
translation of the pod rules to the affected service. Note that KuryrPort
takes over most of the VIF handler functionality, although it has to be
enabled together with the VIF handler.
The Namespace handler
+++++++++++++++++++++
Just like pod labels, namespace labels can also define a rule in a Network
Policy. To account for this, the namespace handler has been extended to
trigger the creation, deletion or update of a security group rule, in case
the namespace affects a Network Policy rule, and the translation of the rule
to the affected service.
The Namespace Subnet driver
+++++++++++++++++++++++++++
In case of a namespace event and a Network Policy enforcement based on the
namespace, this driver creates a subnet for this namespace, and restricts the
number of security group rules for the Network Policy to just one with the
subnet CIDR, instead of one for each pod in the namespace.
The LBaaS driver
++++++++++++++++
To restrict the incoming traffic to the backend pods, the LBaaS driver has
been enhanced to translate pod rules to the listener port, and to react to
Service port updates. E.g., when the target port is not allowed by the policy
enforced on the pod, the rule should not be added.
The VIF Pool driver
+++++++++++++++++++
The VIF Pool driver is responsible for updating the security group applied to
the pods' ports. It has been modified to embrace the fact that with Network
Policies pods' ports change their security group while in use, meaning the
original pool does not fit them anymore, which would result in useless pools
and in ports having the original security group reapplied. To avoid this, the
security group ID is removed from the pool definition, merging all pools with
the same network, project and host ID. Thus, if there are no ports in the
pool with the needed security group ID(s), one of the existing ports in the
pool is updated to match the requested SG ID.
Use cases examples
~~~~~~~~~~~~~~~~~~
This section describes some scenarios with a Network Policy being enforced,
which Kuryr components get triggered and what resources are created.
Deny all incoming traffic
+++++++++++++++++++++++++
By default, Kubernetes clusters do not restrict traffic. Only once a network
policy is enforced on a namespace does all traffic not explicitly allowed in
the policy become denied, as specified in the following policy:
.. code-block:: yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: default-deny
spec:
podSelector: {}
policyTypes:
- Ingress
The following CRD is the translation of the policy rules to security group
rules. No ingress rule was created, which means traffic is blocked, and since
there is no restriction for egress traffic, it is allowed everywhere. Note
that the same happens when no ``policyType`` is defined, since all policies
are assumed to affect Ingress.
.. code-block:: yaml
apiVersion: openstack.org/v1
kind: KuryrNetworkPolicy
metadata:
name: default-deny
namespace: default
...
spec:
egressSgRules:
- sgRule:
description: Kuryr-Kubernetes NetPolicy SG rule
direction: egress
ethertype: IPv4
security_group_id: 20d9b623-f1e0-449d-95c1-01624cb3e315
ingressSgRules: []
podSelector:
...
status:
securityGroupId: 20d9b623-f1e0-449d-95c1-01624cb3e315
securityGroupRules: ...
podSelector: ...
Allow traffic from pod
++++++++++++++++++++++
The following Network Policy specification has a single rule allowing traffic
on a single port from the group of pods that have the label ``role=monitoring``.
.. code-block:: yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: allow-monitoring-via-pod-selector
spec:
podSelector:
matchLabels:
app: server
policyTypes:
- Ingress
ingress:
- from:
- podSelector:
matchLabels:
role: monitoring
ports:
- protocol: TCP
port: 8080
Create the following pod with the label ``role=monitoring``:

.. code-block:: console

   $ kubectl run monitor --image=busybox --restart=Never --labels=role=monitoring
The generated CRD contains an ingress rule allowing traffic on port 8080 from
the created pod, and an egress rule allowing traffic everywhere, since no
restriction was enforced.
.. code-block:: yaml
apiVersion: openstack.org/v1
kind: KuryrNetworkPolicy
metadata:
name: allow-monitoring-via-pod-selector
namespace: default
...
spec:
egressSgRules:
- sgRule:
description: Kuryr-Kubernetes NetPolicy SG rule
direction: egress
ethertype: IPv4
ingressSgRules:
- namespace: default
sgRule:
description: Kuryr-Kubernetes NetPolicy SG rule
direction: ingress
ethertype: IPv4
port_range_max: 8080
port_range_min: 8080
protocol: tcp
remote_ip_prefix: 10.0.1.143
podSelector:
...
status:
securityGroupId: 7f0ef8c2-4846-4d8c-952f-94a9098fff17
securityGroupRules: ...
podSelector: ...
Allow traffic from namespace
++++++++++++++++++++++++++++
The following network policy only allows ingress traffic from namespaces with
the label ``purpose=test``:
.. code-block:: yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: allow-test-via-ns-selector
spec:
podSelector:
matchLabels:
app: server
policyTypes:
- Ingress
ingress:
- from:
- namespaceSelector:
matchLabels:
purpose: test
ports:
- protocol: TCP
port: 8080
Create a namespace and label it with ``purpose=test``:
.. code-block:: console
$ kubectl create namespace dev
$ kubectl label namespace dev purpose=test
The resulting CRD has an ingress rule allowing traffic from the namespace
CIDR on the specified port, and an egress rule allowing traffic everywhere.
.. code-block:: yaml
apiVersion: openstack.org/v1
kind: KuryrNetworkPolicy
name: allow-test-via-ns-selector
namespace: default
...
spec:
egressSgRules:
- sgRule:
description: Kuryr-Kubernetes NetPolicy SG rule
direction: egress
ethertype: IPv4
ingressSgRules:
- namespace: dev
sgRule:
description: Kuryr-Kubernetes NetPolicy SG rule
direction: ingress
ethertype: IPv4
port_range_max: 8080
port_range_min: 8080
protocol: tcp
remote_ip_prefix: 10.0.1.192/26
podSelector:
...
status:
securityGroupId: c480327c-2db4-4eb6-af1e-eeb0ce9b46c9
securityGroupRules: ...
podSelector: ...
.. note::
   Only when using the Amphora Octavia provider and Services with a selector
   do the Load Balancer security groups need to be rechecked when a network
   policy that affects ingress traffic is created, and also every time a pod
   or namespace is created. Network Policy is not enforced on Services
   without selectors.
Allow traffic to a Pod in a Namespace
+++++++++++++++++++++++++++++++++++++
The following network policy only allows egress traffic from Pods with the
label ``app=client`` to Pods with the label ``app=demo`` in Namespaces with
the label ``app=demo``:
.. code-block:: yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: block-egress
namespace: client
spec:
podSelector:
matchLabels:
app: client
policyTypes:
- Egress
egress:
- to:
- namespaceSelector:
matchLabels:
app: demo
podSelector:
matchLabels:
app: demo
The resulting CRD has ingress rules allowing traffic from everywhere, and
egress rules allowing traffic to the selected Pod and to the Service that
points to it.
.. code-block:: yaml
apiVersion: openstack.org/v1
kind: KuryrNetworkPolicy
metadata: ...
spec:
egressSgRules:
- namespace: demo
sgRule:
description: Kuryr-Kubernetes NetPolicy SG rule
direction: egress
ethertype: IPv4
port_range_max: 65535
port_range_min: 1
protocol: tcp
remote_ip_prefix: 10.0.2.120
- sgRule:
description: Kuryr-Kubernetes NetPolicy SG rule
direction: egress
ethertype: IPv4
port_range_max: 65535
port_range_min: 1
protocol: tcp
remote_ip_prefix: 10.0.0.144
ingressSgRules:
- sgRule:
description: Kuryr-Kubernetes NetPolicy SG rule
direction: ingress
ethertype: IPv4
- sgRule:
description: Kuryr-Kubernetes NetPolicy SG rule
direction: ingress
ethertype: IPv6
podSelector:
matchLabels:
app: client
policyTypes:
- Egress
status:
podSelector:
matchLabels:
app: client
securityGroupId: 322a347b-0684-4aea-945a-5f204361a64e
securityGroupRules: ...
.. note::
   A Network Policy egress rule creates a security group rule corresponding
   to a Service (with or without selectors) that points to the selected Pod.
Create network policy flow
++++++++++++++++++++++++++
.. image:: ../../images/create_network_policy_flow.svg
:alt: Network Policy creation flow
:align: center
:width: 100%
Create pod flow
+++++++++++++++
The following diagram only covers the implementation part that affects
network policy.
.. image:: ../../images/update_network_policy_on_pod_creation.svg
:alt: Pod creation flow
:align: center
:width: 100%
Network policy rule definition
++++++++++++++++++++++++++++++
======================== ======================= ==============================================
NamespaceSelector podSelector Expected result
======================== ======================= ==============================================
namespaceSelector: ns1 podSelector: pod1 Allow traffic from pod1 at ns1
namespaceSelector: ns1 podSelector: {} Allow traffic from all pods at ns1
namespaceSelector: ns1 none Allow traffic from all pods at ns1
namespaceSelector: {} podSelector: pod1 Allow traffic from pod1 from all namespaces
namespaceSelector: {} podSelector: {} Allow traffic from all namespaces
namespaceSelector: {} none Allow traffic from all namespaces
none podSelector: pod1 Allow traffic from pod1 from NP namespace
none podSelector: {} Allow traffic from all pods from NP namespace
======================== ======================= ==============================================
======================== ================================================
Rules definition Expected result
======================== ================================================
No FROM (or from: [])    Allow traffic from all pods from all namespaces
ingress: {}              Allow traffic from all namespaces
ingress: []              Deny all traffic
No ingress               Deny all traffic
======================== ================================================
Policy types definition
+++++++++++++++++++++++
=============== ===================== ======================= ======================
PolicyType Spec Ingress/Egress Ingress generated rules Egress generated rules
=============== ===================== ======================= ======================
none            none                  Block                   Allow
none            ingress               Specific rules          Allow
none            egress                Block                   Specific rules
none            ingress, egress       Specific rules          Specific rules
ingress         none                  Block                   Allow
ingress         ingress               Specific rules          Allow
egress          none                  Allow                   Block
egress          egress                Allow                   Specific rules
ingress, egress none                  Block                   Block
ingress, egress ingress               Specific rules          Block
ingress, egress egress                Block                   Specific rules
ingress, egress ingress, egress       Specific rules          Specific rules
=============== ===================== ======================= ======================


@ -1,195 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
Convention for heading levels in Neutron devref:
======= Heading 0 (reserved for the title in a document)
------- Heading 1
~~~~~~~ Heading 2
+++++++ Heading 3
''''''' Heading 4
(Avoid deeper levels because they do not render well.)
====================================
Kuryr Kubernetes Port Manager Design
====================================
Purpose
-------
The purpose of this document is to present the Kuryr Kubernetes Port Manager,
capturing the design decisions currently taken by the kuryr team.
The main purpose of the Port Manager is to handle Neutron resources, i.e.,
port creation and deletion. The main idea behind it is to minimize the number
of calls to Neutron by ensuring port reuse as well as by performing bulk
actions, e.g., creating/deleting several ports within the same Neutron call.
Overview
--------
Interactions between Kuryr and Neutron may take more time than desired from
the container management perspective. Some of these interactions can be
optimized, for instance by maintaining pre-created pools of Neutron port
resources instead of asking for their creation during the pod lifecycle
pipeline.
As an example, every time a container is created or deleted, there is a call
from Kuryr to Neutron to create/remove the port used by the container. To
optimize this interaction and speed up both container creation and deletion,
the Kuryr-Kubernetes Port Manager will take care of both Neutron port
creation beforehand and Neutron port deletion afterwards. This consequently
removes the waiting time for:
- Creating ports and waiting for them to become active when booting containers
- Deleting ports when removing containers
Proposed Solution
-----------------
The Port Manager will be in charge of handling Neutron ports. The main
difference from the current implementation resides in when and how these
ports are managed. The idea is to minimize the number of calls to the Neutron
server by reusing already created ports as well as by creating/deleting them
in bulk requests.
This design focuses on Neutron ports management, but similar optimization can
be done for other Neutron resources, and consequently new resource managers
can be added.
Ports Manager
~~~~~~~~~~~~~
The Port Manager will handle different pools of Neutron ports:
- Available pools: There will be a pool of ports for each tenant, host (or
trunk port for the nested case) and security group, ready to be used by the
pods. Note at the beginning there are no pools. Once a pod is created at
a given host/VM by a tenant, with a specific security group, a corresponding
pool gets created and populated with the desired minimum amount of ports.
Note the Neutron port quota needs to be considered when configuring the
parameters of the pool, i.e., the minimum and/or maximum size of the pools as
well as the size of the bulk creation requests.
- Recyclable pool: Instead of deleting the port during pods removal it will
just be included into this pool. The ports in this pool will be later
recycled by the Port Manager and put back into the corresponding
available pool.
The Port Manager will handle the available pools, ensuring that at least X
ports are ready to be used in each existing pool, i.e., for each security
group and tenant which already has a pod on it. Port creation for each
available_pool will be handled in batches, i.e., instead of creating one port
at a time, a configurable amount of them will be created altogether.
The Port Manager will check on each pod creation that the remaining number of
ports in the specific pool is above X. Otherwise it creates Y extra ports for
that pool (with the specific tenant and security group). Note that both X and
Y are configurable.
Thanks to having the available ports pool, during the container creation
process, instead of calling Neutron port_create and then waiting for the port
to become active, a port is taken from the right available_pool (hence, no
need to call Neutron) and the port info is then updated with the proper
container name (i.e., a call to Neutron port_update). Thus, thanks to the
Port Manager, at least two calls to Neutron are skipped (port create, and
polling while waiting for the port to become ACTIVE), while an extra one
(update) is performed, which is faster than the other ones. Similarly, for
port deletion we save the call to remove the port, as it is just included in
the recyclable pool.
The port cleanup actions return ports to the corresponding available_pool
after re-applying security groups and changing the device name to
'available-port'. A maximum limit for the pool can be specified to ensure
that once the corresponding available_pool reaches a certain size, the ports
get deleted instead of recycled. Note this upper limit can be disabled by
setting it to 0. In addition, a Time-To-Live (TTL) could be set on the ports
in the pool, so that if they are not used during a certain period of time,
they are removed -- if and only if the available_pool size is still larger
than the target minimum.
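For illustration, the pool parameters map onto configuration options along
these lines; the option names follow the ``[vif_pool]`` section of
kuryr.conf, and the values are only examples:

.. code-block:: ini

   [vif_pool]
   # X: minimum number of ports kept ready in each pool
   ports_pool_min = 5
   # Upper bound of a pool size; 0 disables the limit
   ports_pool_max = 0
   # Y: number of ports created per bulk request
   ports_pool_batch = 10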
Recovery of pool ports upon Kuryr-Controller restart
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If the Kuryr-Controller is restarted, the pre-created ports will still exist
on the Neutron side, but the Kuryr-Controller will be unaware of them and
will thus pre-create more upon pod allocation requests. To avoid having these
existing but unused ports, a mechanism is needed to either delete them after
the controller's reboot, or obtain their information and re-include them into
their corresponding pools.
For the baremetal (NeutronVIF) case, as the ports are not attached to any
host (at least not until CNI support is included), there is not enough
information to decide which pool should be selected for adding the port. For
simplicity, and as a temporary solution until CNI support is developed, the
implemented mechanism will find the previously created ports by looking at
the existing Neutron ports and filtering them by device_owner and name, which
should be compute:kuryr and available-port, respectively. Once these ports
are obtained, they are deleted to release unused Neutron resources and avoid
problems related to port quota limits.
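Conceptually, the cleanup boils down to something like the following
openstacksdk sketch (not the actual kuryr code):

.. code-block:: python

   import openstack

   conn = openstack.connect()

   # Find the pooled ports left over from before the restart...
   leftovers = conn.network.ports(device_owner='compute:kuryr',
                                  name='available-port')
   # ...and delete them to free up quota.
   for port in leftovers:
       conn.network.delete_port(port.id)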
By contrast, for the nested (VLAN+Trunk) case it is possible to obtain all
the needed information for the previously created subports, as they are still
attached to their respective trunk ports. Therefore, instead of being
deleted, these ports will be re-added to their corresponding pools. To do
this, the Neutron ports are filtered by device_owner (trunk:subport in this
case) and name (available-port), and then we iterate over the subports
attached to each existing trunk port to find where the filtered ports are
attached, and obtain all the needed information to re-add them into the
corresponding pools.
Kuryr Controller Impact
~~~~~~~~~~~~~~~~~~~~~~~
A new VIF Pool driver is created to manage the port pools upon pod creation
and deletion events. It will ensure that a pool with at least X ports is
available for each tenant, host or trunk port, and security group, when the
first request to create a pod with these attributes happens. Similarly, it
will ensure that ports are recycled from the recyclable pool after pod
deletion and are put back in the corresponding available_pool to be reused.
Thanks to this, Neutron calls are skipped and the ports of the pools are used
instead. If the corresponding pool is empty, a ResourceNotReady exception
will be triggered and the pool will be repopulated.
In addition to the handler modification and the new pool drivers, there are
changes related to the VIF drivers. The VIF drivers (neutron-vif and nested)
will be extended to support bulk creation of Neutron ports, and similarly for
the VIF object requests.
Future enhancement
++++++++++++++++++
The VIFHandler needs to be aware of the new Pool driver, which will load the
respective VIF driver to be used. In a sense, the Pool driver will be a proxy
to the VIF driver, while also managing the pools. When a mechanism to load
and set the VIFHandler drivers is in place, this will be reverted so that the
VIFHandler becomes unaware of the pool drivers.
Kuryr CNI Impact
~~~~~~~~~~~~~~~~
For the nested VLAN case, the subports in the different pools are already
attached to the VMs' trunk ports, and therefore they are already in ACTIVE
status. However, for the generic case the ports are not really bound to
anything (yet), so their status will be DOWN. In order to keep the ports
returned to the pool in ACTIVE status, we will implement another pool on the
CNI side for the generic case. This solution could be different for different
SDN controllers. The main idea is that they should keep the port in ACTIVE
state without allowing network traffic through it. For instance, for the
Neutron reference implementation, this pool will maintain a pool of veth
devices at each host, by connecting them to a recyclable namespace so that
the OVS agent sees them as 'still connected' and maintains their ACTIVE
status. This modification must ensure the OVS (br-int) ports where these veth
devices are connected are not deleted after container deletion by the CNI.
Future enhancement
++++++++++++++++++
The CNI modifications will be implemented in a second phase.


@ -1,157 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
Convention for heading levels in Neutron devref:
======= Heading 0 (reserved for the title in a document)
------- Heading 1
~~~~~~~ Heading 2
+++++++ Heading 3
''''''' Heading 4
(Avoid deeper levels because they do not render well.)
============================================
Kuryr Kubernetes Services Integration Design
============================================
Purpose
-------
The purpose of this document is to present how Kubernetes Service is supported
by the kuryr integration and to capture the design decisions currently taken
by the kuryr team.
Overview
--------
A Kubernetes Service is an abstraction which defines a logical set of Pods and
a policy by which to access them. Service is a Kubernetes managed API object.
For Kubernetes-native applications, Kubernetes offers an Endpoints API that is
updated whenever the set of Pods in a Service changes. For detailed information
please refer to `Kubernetes service`_. Kubernetes implements services with
the kube-proxy component that runs on each node; see `Kube-Proxy`_.
Proposed Solution
-----------------
A Kubernetes service in its essence is a load balancer across the Pods that
fit the service selection. Kuryr's choice is to support Kubernetes services
by using the Neutron LBaaS service. The initial implementation is based on
the OpenStack LBaaSv2 API, and is thus compatible with any LBaaSv2 API
provider.
In order to be compatible with Kubernetes networking, Kuryr-Kubernetes makes
sure that services' load balancers have access to the Pods' Neutron ports.
This may be affected once Kubernetes Network Policies are supported. Oslo
versioned objects are used to keep the translation details in Kubernetes
entities' annotations, which will allow future changes to be backward
compatible.
Data Model Translation
~~~~~~~~~~~~~~~~~~~~~~
Kubernetes service is mapped to the LBaaSv2 Load Balancer with associated
Listeners and Pools. Service endpoints are mapped to Load Balancer Pool
members.
Kuryr Controller Impact
~~~~~~~~~~~~~~~~~~~~~~~
Three Kubernetes Event Handlers are added to the Controller pipeline.
- ServiceHandler manages Kubernetes Service events.
  Based on the service spec and metadata details, it creates a
  KuryrLoadBalancer CRD or updates the CRD, more specifically its spec part,
  with details to be used for the translation to the LBaaSv2 model, such as
  tenant-id, subnet-id, ip address and security groups.
- EndpointsHandler is responsible for adding the endpoints subsets to the
  KuryrLoadBalancer CRD. If the endpoints are created before the Service,
  this handler creates the CRD with the endpoints subsets; otherwise the
  existing CRD is updated.
- KuryrLoadBalancerHandler manages KuryrLoadBalancer CRD events once the CRD
  is successfully created and filled with spec data. This handler is
  responsible for creating the needed Octavia resources according to the CRD
  spec and for updating the status field with information about the generated
  resources, such as LoadBalancer, LoadBalancerListener, LoadBalancerPool and
  LoadBalancerMembers.
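Putting the pieces together, a KuryrLoadBalancer object follows roughly this
shape (the field names shown are illustrative and abridged):

.. code-block:: yaml

   apiVersion: openstack.org/v1
   kind: KuryrLoadBalancer
   metadata:
     name: demo
     finalizers:
       - kuryr.openstack.org/kuryrloadbalancer-finalizers
   spec:                      # filled by ServiceHandler/EndpointsHandler
     ip: 10.0.0.155
     ports:
       - port: 80
         protocol: TCP
         targetPort: "8080"
     project_id: ...
     subnet_id: ...
     security_groups_ids: []
     subsets: []              # endpoints subsets added by EndpointsHandler
   status:                    # filled by KuryrLoadBalancerHandler
     loadbalancer: {}
     listeners: []
     pools: []
     members: []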
These Handlers use Project, Subnet and SecurityGroup service drivers to get
details for service mapping.
In order to prevent Kubernetes objects from being deleted before the OpenStack
resources are cleaned up, finalizers are used. Finalizers block deletion of the
Service, Endpoints and KuryrLoadBalancer objects until Kuryr deletes the
associated OpenStack load balancers. After that, the finalizers are removed,
allowing the Kubernetes API to delete the objects.
LBaaS Driver is added to manage service translation to the LBaaSv2-like API. It
abstracts all the details of service translation to Load Balancer.
LBaaSv2Driver supports this interface by mapping to neutron LBaaSv2 constructs.
Service Creation Process
~~~~~~~~~~~~~~~~~~~~~~~~
What happens when a service gets created by Kubernetes?
+++++++++++++++++++++++++++++++++++++++++++++++++++++++
When a Kubernetes Service and its Endpoints are created, the ServiceHandler
and EndpointsHandler (at controller/handlers/lbaas.py) are called. When the
ServiceHandler first starts handling the on_present event, it creates the
KuryrLoadBalancer CRD with the Service spec and an empty status. While the
Endpoints information has not yet been added to the spec by the
EndpointsHandler, the event reaching the KuryrLoadBalancerHandler is skipped.
If the EndpointsHandler starts handling the on_present event first, the
KuryrLoadBalancer CRD is created with the endpoints subsets. Otherwise, it
will update the existing CRD created by the ServiceHandler with the endpoints
subsets.
The KuryrLoadBalancerHandler (at controller/handlers/loadbalancer.py), upon
noticing a KuryrLoadBalancer CRD with the full specification, calls the
appropriate drivers to handle the OpenStack resources, such as the load
balancer, load balancer listeners, load balancer pools, and load balancer
members. It uses the _sync_lbaas_members function to check if the OpenStack
load balancers are in sync with their Kubernetes counterparts.
.. figure:: ../../images/service_creation_diagram.svg
:alt: Service creation Diagram
:align: center
:width: 100%
Service creation flow diagram
Service Deletion Process
~~~~~~~~~~~~~~~~~~~~~~~~
What happens when a service gets deleted by Kubernetes?
+++++++++++++++++++++++++++++++++++++++++++++++++++++++
When a Kubernetes Service and its Endpoints are deleted, the finalizers that
were added to the Service object (and to the KuryrLoadBalancer CRD object)
during the KuryrLoadBalancer CRD creation block the removal of the Kubernetes
objects until the associated OpenStack resources are removed, which also
avoids leftovers. Once those are removed, Kubernetes is able to remove the
CRD, the Service and the Endpoints, hence completing the service removal.
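For example, the finalizer on a Service can be inspected as below (the exact
finalizer string is an assumption here and may vary between releases):

.. code-block:: console

    $ kubectl get service my-service -o jsonpath='{.metadata.finalizers}'
    ["kuryr.openstack.org/service-finalizer"]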
What happens if the KuryrLoadBalancer CRD status changes?
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++
If the members field in the status of the CRD is manually removed, or the
status is completely set to an empty object, the KuryrLoadBalancerHandler
that is watching these CRD objects detects this change, confirms that there
is no information about the OpenStack resources in the status, and so needs
to rediscover or recreate them. It checks whether there are provisioned
OpenStack resources (in this case load balancers, listeners, pools and
members) for the service/endpoints defined in the KuryrLoadBalancer CRD spec.
If that is the case, it retrieves their information and puts it back into the
CRD status field. If that is not the case (due to the resources being deleted
on the OpenStack side), it recreates the resources and writes the new
information about them into the CRD status field.
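For testing, such a reconciliation can be triggered by clearing the status
manually, for instance (a sketch only; if the CRD exposes a status
subresource, a recent kubectl with ``--subresource=status`` would be needed
instead):

.. code-block:: console

    $ kubectl -n default patch kuryrloadbalancer my-service \
        --type=merge -p '{"status": {}}'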
.. _Kubernetes service: http://kubernetes.io/docs/user-guide/services/
.. _Kube-Proxy: http://kubernetes.io/docs/admin/kube-proxy/

@ -1,137 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
Convention for heading levels in Neutron devref:
======= Heading 0 (reserved for the title in a document)
------- Heading 1
~~~~~~~ Heading 2
+++++++ Heading 3
''''''' Heading 4
(Avoid deeper levels because they do not render well.)
==================================
HowTo Update PodResources gRPC API
==================================
Purpose
-------
The purpose of this document is to describe how to update the gRPC API files
in the kuryr-kubernetes repository in case of an upgrade to a new version of
the Kubernetes PodResources API. These files are ``api_pb2_grpc.py``,
``api_pb2.py`` and ``api.proto`` from the
``kuryr_kubernetes/pod_resources/`` directory.
``api.proto`` is a gRPC API definition file generated from the
``kubernetes/pkg/kubelet/apis/podresources/<version>/api.proto`` of the
Kubernetes source tree.
``api_pb2_grpc.py`` and ``api_pb2.py`` are python bindings for gRPC API.
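As a rough sketch of how these bindings are consumed (the socket path and the
``v1alpha1`` stub and message names below are assumptions, not verified
against every release):

.. code-block:: python

    import grpc

    from kuryr_kubernetes.pod_resources import api_pb2
    from kuryr_kubernetes.pod_resources import api_pb2_grpc

    # kubelet exposes the PodResources service on a local UNIX socket.
    channel = grpc.insecure_channel(
        'unix:/var/lib/kubelet/pod-resources/kubelet.sock')
    stub = api_pb2_grpc.PodResourcesListerStub(channel)

    # List() returns the pod/container device allocations known to kubelet.
    response = stub.List(api_pb2.ListPodResourcesRequest())
    for pod in response.pod_resources:
        print(pod.name, pod.namespace)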
.. note::
There are only two reasons for an update:

#. Kubernetes released a new version of the PodResources API and the old one
   is no longer supported. In this case, without an update, we'll not be able
   to use the PodResources service.

#. The ``protobuf`` version in ``lower-constraints.txt`` was changed to a
   lower version (this is highly unlikely). In this case ``protobuf`` could
   fail to use our Python bindings.
Automated update
----------------
The ``contrib/regenerate_pod_resources_api.sh`` script can be used to
regenerate the PodResources gRPC API files. By default, this script will
download the ``v1alpha1`` version of the ``api.proto`` file from the
Kubernetes GitHub repo and create the required kuryr-kubernetes files from
it:
.. code-block:: console
[kuryr-kubernetes]$ ./contrib/regenerate_pod_resources_api.sh
Alternatively, the path to an ``api.proto`` file can be specified via the
``KUBERNETES_API_PROTO`` environment variable:
.. code-block:: console
$ export KUBERNETES_API_PROTO=/path/to/api.proto
Define the ``API_VERSION`` environment variable to use a specific version of
``api.proto`` from the Kubernetes GitHub:
.. code-block:: console
$ export API_VERSION=v1alpha1
Manual update steps
-------------------
Preparing the new api.proto
~~~~~~~~~~~~~~~~~~~~~~~~~~~
Copy the ``api.proto`` from the K8s sources to
``kuryr_kubernetes/pod_resources/`` and remove all the lines that contain
``gogoproto``, since this is an unwanted dependency that is not needed for
the Python bindings:
.. code-block:: console
$ sed '/gogoproto/d' \
../kubernetes/pkg/kubelet/apis/podresources/<version>/api.proto \
> kuryr_kubernetes/pod_resources/api.proto
Don't forget to update the file header, which should point to the original
``api.proto`` and to this reference document::
// Generated from kubernetes/pkg/kubelet/apis/podresources/<version>/api.proto
// To regenerate api.proto, api_pb2.py and api_pb2_grpc.py follow instructions
// from doc/source/devref/updating_pod_resources_api.rst.
Generating the python bindings
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
* (Optional) Create the python virtual environment:
.. code-block:: console
[kuryr-kubernetes]$ python3 -m venv venv
[kuryr-kubernetes]$ . ./venv/bin/activate
* To generate the Python bindings we need the ``protoc`` compiler and the
  gRPC plugin for it. The simplest way to get them is to install
  ``grpcio-tools``:
.. code-block:: console
(venv) [kuryr-kubernetes]$ pip install grpcio-tools==1.19
.. note::
We're installing a specific version of ``grpcio-tools`` to get a specific
version of the ``protoc`` compiler. The version of the ``protoc`` compiler
should be equal to the ``protobuf`` package version in
``lower-constraints.txt``, because an older ``protobuf`` might not be able
to use files generated by a newer compiler. In case you need to use a more
recent compiler, you need to update ``requirements.txt`` and
``lower-constraints.txt`` accordingly.
To check the version of the compiler installed with ``grpcio-tools``, use:
.. code-block:: console
(venv) [kuryr-kubernetes]$ python -m grpc_tools.protoc --version
libprotoc 3.6.1
* The following command will generate ``api_pb2_grpc.py`` and ``api_pb2.py``:
.. code-block:: console
(venv) [kuryr-kubernetes]$ python -m grpc_tools.protoc -I./ \
--python_out=. --grpc_python_out=. \
kuryr_kubernetes/pod_resources/api.proto
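A quick smoke test (not a substitute for the unit tests) that the regenerated
bindings are importable with the pinned ``protobuf`` version:

.. code-block:: console

    (venv) [kuryr-kubernetes]$ python -c \
        "from kuryr_kubernetes.pod_resources import api_pb2, api_pb2_grpc"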

@ -1,157 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
Convention for heading levels in Neutron devref:
======= Heading 0 (reserved for the title in a document)
------- Heading 1
~~~~~~~ Heading 2
+++++++ Heading 3
''''''' Heading 4
(Avoid deeper levels because they do not render well.)
==================================
VIF-Handler And Vif Drivers Design
==================================
Purpose
-------
The purpose of this document is to present an approach for the design of the
interaction between the VIF-handler and the drivers it uses in the
Kuryr-Kubernetes Controller.
VIF-Handler
-----------
The VIF-handler is intended to handle VIFs. Currently it is responsible for
reacting to Pod object events and for creating/deleting the corresponding
KuryrPort CRD objects.
KuryrPort-handler
-----------------
The KuryrPort handler is responsible for taking care of the associated Pod
VIFs. The main aim of this handler is to take the KuryrPort CRD object
created by the VIF-handler, send it to:

- the VIF-driver for the default network,
- the enabled Multi-VIF drivers for the additional networks,

and get VIF objects from both. After that, the KuryrPort handler is able to
activate, release or update VIFs. The KuryrPort handler should stay clean,
whereas parsing of pod-specific information should be done by the Multi-VIF
drivers.
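The KuryrPort objects can be listed and inspected like any other namespaced
resource, which is handy when debugging VIF issues (the object is typically
named after its pod):

.. code-block:: console

    $ kubectl -n default get kuryrports
    $ kubectl -n default get kuryrport <pod-name> -o yaml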
Multi-VIF driver
~~~~~~~~~~~~~~~~
A new type of driver which is used to call other VIF-drivers to attach
additional interfaces to Pods. The main aim of this kind of driver is to get
the additional interfaces from the Pod definition, then invoke real
VIF-drivers like neutron-vif or nested-macvlan to retrieve the VIF objects
accordingly.

All Multi-VIF drivers should be derived from the class *MultiVIFDriver*, and
all should implement the *request_additional_vifs* method, which returns a
list of VIF objects. Those VIF objects are created by each of the VIF-drivers
invoked by the Multi-VIF driver. Each Multi-VIF driver should support a
syntax for defining the additional interfaces in the Pod. If the Pod object
doesn't define additional interfaces, the Multi-VIF driver can just return. A
minimal sketch of such a driver is given below.
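The following is an illustrative sketch only, assuming the base class
location and method signature implied by this design; the annotation key is
hypothetical:

.. code-block:: python

    from kuryr_kubernetes.controller.drivers import base


    class ExampleMultiVIFDriver(base.MultiVIFDriver):
        """Illustrative Multi-VIF driver sketch, not a real driver."""

        def request_additional_vifs(self, pod, project_id, security_groups):
            annotations = pod['metadata'].get('annotations', {})
            if 'example.org/extra-ifaces' not in annotations:
                # Pod does not define additional interfaces: nothing to do.
                return []
            # A real driver would parse the annotation here and invoke the
            # concrete VIF drivers (e.g. neutron-vif, nested-macvlan),
            # returning the list of VIF objects they create.
            return []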
A diagram describing the VifHandler - Drivers flow is given below:
.. image:: ../../images/vif_handler_drivers_design.png
:alt: vif handler drivers design
:align: center
:width: 100%
Config Options
~~~~~~~~~~~~~~
A new config option ``multi_vif_drivers`` (list) is added to the config file
to specify which Multi-VIF drivers should be used to create the additional
VIF objects. It is allowed to have one or more multi_vif_drivers enabled,
which means that multi_vif_drivers can work either separately or together. By
default, a noop driver, which basically does nothing, will be used if this
field is not explicitly specified.

The option in the config file might look like this:
.. code-block:: ini
[kubernetes]
multi_vif_drivers = additional_subnets
Or like this:
.. code-block:: ini
[kubernetes]
multi_vif_drivers = npwg_multiple_interfaces
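And since the drivers can work together, both can be enabled at once:

.. code-block:: ini

    [kubernetes]
    multi_vif_drivers = additional_subnets,npwg_multiple_interfaces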
Additional Subnets Driver
~~~~~~~~~~~~~~~~~~~~~~~~~
Since it is possible to request additional subnets for the pod through the
pod annotations, a new driver is necessary. Based on the information
(requested subnets) parsed by the Multi-VIF driver, it has to return a
dictionary containing the mapping 'subnet_id' -> 'network' for all requested
subnets, in the unified format specified in the PodSubnetsDriver class.
Here's how a Pod spec with additional subnet requests might look:
.. code-block:: yaml
spec:
replicas: 1
template:
metadata:
name: some-name
labels:
app: some-name
annotations:
openstack.org/kuryr-additional-subnets: '[
"id_of_neutron_subnet_created_previously"
]'
Specific ports support
----------------------
Specific ports support is enabled by default and is a part of the drivers
that implement it. It is possible to have manually precreated specific ports
in Neutron and specify them in the pod annotations as the preferred ones.
This means that drivers will use the specific ports if they are specified in
the pod annotations, and otherwise create new ports by default. Since
specific ports can have a vnic_type of either direct or normal, it is
necessary to provide processing support for specific ports in both the
SR-IOV and the generic driver. A pod annotation with requested specific ports
might look like this:
.. code-block:: yaml
spec:
replicas: 1
template:
metadata:
name: some-name
labels:
app: some-name
annotations:
spec-ports: '[
"id_of_direct_precreated_port".
"id_of_normal_precreated_port"
]'
The pod spec above should be interpreted the following way: the Multi-VIF
driver parses the pod annotations and gets the IDs of the specific ports. If
the vnic_type is "normal" and such ports exist, it calls the generic driver
to create VIF objects for these ports. If the vnic_type is "direct" and such
ports exist, it calls the SR-IOV driver to create VIF objects for these
ports. If no specific ports are requested in the annotations, the driver
doesn't return additional VIFs to the Multi-VIF driver.

@ -1,48 +0,0 @@
.. kuryr-kubernetes documentation master file, created by
sphinx-quickstart on Tue Jul 9 22:26:36 2013.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
Welcome to kuryr-kubernetes's documentation!
============================================
Contents
--------
.. toctree::
:maxdepth: 3
readme
nested_vlan_mode
installation/index
usage
contributor/index
Developer Docs
--------------
.. toctree::
:maxdepth: 3
devref/index
Design Specs
------------
.. toctree::
:maxdepth: 1
specs/pike/contrail_support
specs/pike/fuxi_kubernetes
specs/queens/network_policy
specs/rocky/npwg_spec_support
specs/stein/vhostuser
Indices and tables
------------------
* :ref:`genindex`
* :ref:`search`

@ -1,187 +0,0 @@
.. _containerized:
================================================
Kuryr installation as a Kubernetes network addon
================================================
Building images
~~~~~~~~~~~~~~~
First you should build the kuryr-controller and kuryr-cni Docker images and
place them in a registry accessible cluster-wide.

To create the controller image on the local machine:
.. code-block:: console
$ docker build -t kuryr/controller -f controller.Dockerfile .
To create the CNI daemonset image on the local machine:
.. code-block:: console
$ docker build -t kuryr/cni -f cni.Dockerfile .
Kuryr-kubernetes also includes a tool to automatically build the controller
image and delete the existing container in order to apply the newly built
image. The tool is available at:
.. code-block:: console
$ contrib/regenerate_controller_pod.sh
If you want to run kuryr CNI without the daemon, build the image with:
.. code-block:: console
$ docker build -t kuryr/cni -f cni.Dockerfile --build-arg CNI_DAEMON=False .
Alternatively, you can remove ``imagePullPolicy: Never`` from the
kuryr-controller Deployment and kuryr-cni DaemonSet definitions to use
pre-built `controller`_ and `cni`_ images from the Docker Hub. Those
definitions will be generated in the next step.
.. _containerized-generate:
Generating Kuryr resource definitions for Kubernetes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
kuryr-kubernetes includes a tool that lets you generate resource definitions
that can be used to deploy Kuryr on Kubernetes. The script is placed at
``tools/generate_k8s_resource_definitions.sh`` and takes up to four
arguments:
.. code-block:: console
$ ./tools/generate_k8s_resource_definitions.sh <output_dir> [<controller_conf_path>] [<cni_conf_path>] [<ca_certificate_path>]
* ``output_dir`` - the directory in which to put the YAML files with the
  definitions.
* ``controller_conf_path`` - path to custom kuryr-controller configuration
file.
* ``cni_conf_path`` - path to custom kuryr-cni configuration file (defaults to
``controller_conf_path``).
* ``ca_certificate_path`` - path to custom CA certificate for OpenStack API. It
will be added into Kubernetes as a ``Secret`` and mounted into
kuryr-controller container. Defaults to no certificate.
.. note::
Providing no or an incorrect ``ca_certificate_path`` will still create the
file with the ``Secret`` definition, with an empty CA certificate file. This
file will still be mounted in the kuryr-controller ``Deployment`` definition.
If no path to config files is provided, the script automatically generates a
minimal configuration. However, some of the options should be filled in by
the user. You can do that either by editing the file after the ConfigMap
definition is generated, or by providing your options as environment
variables before running the script. Below is the list of available
variables:
* ``$KURYR_K8S_API_ROOT`` - ``[kubernetes]api_root`` (default:
https://127.0.0.1:6443)
* ``$KURYR_K8S_AUTH_URL`` - ``[neutron]auth_url`` (default:
http://127.0.0.1/identity)
* ``$KURYR_K8S_USERNAME`` - ``[neutron]username`` (default: admin)
* ``$KURYR_K8S_PASSWORD`` - ``[neutron]password`` (default: password)
* ``$KURYR_K8S_USER_DOMAIN_NAME`` - ``[neutron]user_domain_name`` (default:
Default)
* ``$KURYR_K8S_KURYR_PROJECT_ID`` - ``[neutron]kuryr_project_id``
* ``$KURYR_K8S_PROJECT_DOMAIN_NAME`` - ``[neutron]project_domain_name``
(default: Default)
* ``$KURYR_K8S_PROJECT_ID`` - ``[neutron]k8s_project_id``
* ``$KURYR_K8S_POD_SUBNET_ID`` - ``[neutron_defaults]pod_subnet_id``
* ``$KURYR_K8S_POD_SG`` - ``[neutron_defaults]pod_sg``
* ``$KURYR_K8S_SERVICE_SUBNET_ID`` - ``[neutron_defaults]service_subnet_id``
* ``$KURYR_K8S_WORKER_NODES_SUBNETS`` - ``[pod_vif_nested]worker_nodes_subnets``
* ``$KURYR_K8S_BINDING_DRIVER`` - ``[binding]driver`` (default:
``kuryr.lib.binding.drivers.vlan``)
* ``$KURYR_K8S_BINDING_IFACE`` - ``[binding]link_iface`` (default: eth0)
.. note::
kuryr-daemon will be started in the CNI container. It uses ``os-vif`` and
``oslo.privsep`` to do the pod wiring tasks. By default it'll call ``sudo``
to raise privileges, even if the container is privileged by itself or
``sudo`` is missing from the container OS (e.g. default CentOS 8). To prevent
that, make sure to set the following options in the kuryr.conf used for the
kuryr-daemon:
.. code-block:: ini
[vif_plug_ovs_privileged]
helper_command=privsep-helper
[vif_plug_linux_bridge_privileged]
helper_command=privsep-helper
Those options will prevent oslo.privsep from doing that. If you rely on the
aforementioned script to generate the config files, those options will be
added automatically.
In case of using the ports pool functionality, we may want to make the
kuryr-controller not ready until the pools are populated with the existing
ports. To achieve this, a readiness probe must be added to the
kuryr-controller deployment. To add the readiness probe, in addition to the
above environment variables or the kuryr-controller configuration file, an
extra environment variable must be set:
* ``$KURYR_USE_PORTS_POOLS`` - ``True`` (default: False)
Example run:
.. code-block:: console
$ KURYR_K8S_API_ROOT="192.168.0.1:6443" ./tools/generate_k8s_resource_definitions.sh /tmp
This should generate 6 files in your ``<output_dir>``:
* config_map.yml
* certificates_secret.yml
* controller_service_account.yml
* cni_service_account.yml
* controller_deployment.yml
* cni_ds.yml
.. note::
The kuryr-cni daemonset mounts /var/run due to the necessity of accessing
several subdirectories, like openvswitch, and the auxiliary directory for the
vhostuser configuration and socket files. Also, when
neutron-openvswitch-agent works with the datapath_type = netdev configuration
option, kuryr-kubernetes has to move the vhostuser socket to the auxiliary
directory, and that auxiliary directory should be on the same mount point,
otherwise the connection to this socket will be refused. In case Open vSwitch
keeps the vhostuser socket files somewhere other than /var/run/openvswitch,
the openvswitch mount point in cni_ds.yml and the [vhostuser] section in
config_map.yml should be changed accordingly.
Deploying Kuryr resources on Kubernetes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To deploy the files on your Kubernetes cluster run:
.. code-block:: console
$ kubectl apply -f config_map.yml -n kube-system
$ kubectl apply -f certificates_secret.yml -n kube-system
$ kubectl apply -f controller_service_account.yml -n kube-system
$ kubectl apply -f cni_service_account.yml -n kube-system
$ kubectl apply -f controller_deployment.yml -n kube-system
$ kubectl apply -f cni_ds.yml -n kube-system
After successful completion:
* a kuryr-controller Deployment object, with a single replica, will get
  created in the kube-system namespace.
* kuryr-cni gets installed as a DaemonSet object on all the nodes in the
  kube-system namespace.
To see kuryr-controller logs:
.. code-block:: console
$ kubectl logs <pod-name>
NOTE: kuryr-cni has no logs; to debug failures you need to check the kubelet
logs.
.. _controller: https://hub.docker.com/r/kuryr/controller/
.. _cni: https://hub.docker.com/r/kuryr/cni/

@ -1,91 +0,0 @@
=============================
Inspect default Configuration
=============================
By default, DevStack creates networks called ``private`` and ``public``:
.. code-block:: console
$ openstack network list --project demo
+--------------------------------------+---------+----------------------------------------------------------------------------+
| ID | Name | Subnets |
+--------------------------------------+---------+----------------------------------------------------------------------------+
| 12bc346b-35ed-4cfa-855b-389305c05740 | private | 1ee73076-e01e-4cec-a3a4-cbb275f94d0f, 8376a091-dcea-4ed5-b738-c16446e861da |
+--------------------------------------+---------+----------------------------------------------------------------------------+
$ openstack network list --project admin
+--------------------------------------+--------+----------------------------------------------------------------------------+
| ID | Name | Subnets |
+--------------------------------------+--------+----------------------------------------------------------------------------+
| 646baf54-6178-4a26-a52b-68ad0ba1e057 | public | 00e0b1e4-4bee-4204-bd02-610291c56334, b1be34f2-7c3d-41ca-b2f5-6dcbd3c1715b |
+--------------------------------------+--------+----------------------------------------------------------------------------+
And kuryr-kubernetes creates two extra ones for the kubernetes services and
pods under the project k8s:
.. code-block:: console
$ openstack network list --project k8s
+--------------------------------------+-----------------+--------------------------------------+
| ID | Name | Subnets |
+--------------------------------------+-----------------+--------------------------------------+
| 1bff74a6-e4e2-42fb-a81b-33c9c144987c | k8s-pod-net | 3c3e18f9-d1d0-4674-b3be-9fc8561980d3 |
| d4be7efc-b84d-480e-a1db-34205877e6c4 | k8s-service-net | 55405e9d-4e25-4a55-bac2-e25ee88584e1 |
+--------------------------------------+-----------------+--------------------------------------+
And similarly for the subnets:
.. code-block:: console
$ openstack subnet list --project k8s
+--------------------------------------+--------------------+--------------------------------------+---------------+
| ID | Name | Network | Subnet |
+--------------------------------------+--------------------+--------------------------------------+---------------+
| 3c3e18f9-d1d0-4674-b3be-9fc8561980d3 | k8s-pod-subnet | 1bff74a6-e4e2-42fb-a81b-33c9c144987c | 10.0.0.64/26 |
| 55405e9d-4e25-4a55-bac2-e25ee88584e1 | k8s-service-subnet | d4be7efc-b84d-480e-a1db-34205877e6c4 | 10.0.0.128/26 |
+--------------------------------------+--------------------+--------------------------------------+---------------+
In addition to that, security groups for both pods and services are created too:
.. code-block:: console
$ openstack security group list --project k8s
+--------------------------------------+--------------------+------------------------+----------------------------------+
| ID | Name | Description | Project |
+--------------------------------------+--------------------+------------------------+----------------------------------+
| 00fd78f9-484d-4ea7-b677-82f73c54064a | service_pod_access | service_pod_access | 49e2683370f245e38ac2d6a8c16697b3 |
| fe7cee41-6021-4d7b-ab03-1ce1e391a1ca | default | Default security group | 49e2683370f245e38ac2d6a8c16697b3 |
+--------------------------------------+--------------------+------------------------+----------------------------------+
And finally, the load balancer for the Kubernetes API service is also
created, with the corresponding listener, pool and members:
.. code-block:: console
$ openstack loadbalancer list
+--------------------------------------+--------------------+----------------------------------+-------------+---------------------+----------+
| id | name | tenant_id | vip_address | provisioning_status | provider |
+--------------------------------------+--------------------+----------------------------------+-------------+---------------------+----------+
| 7d0cf5b5-b164-4b32-87d3-ae6c82513927 | default/kubernetes | 47c28e562795468ea52e92226e3bc7b1 | 10.0.0.129 | ACTIVE | haproxy |
+--------------------------------------+--------------------+----------------------------------+-------------+---------------------+----------+
$ openstack loadbalancer listener list
+--------------------------------------+--------------------------------------+------------------------+----------------------------------+----------+---------------+----------------+
| id | default_pool_id | name | tenant_id | protocol | protocol_port | admin_state_up |
+--------------------------------------+--------------------------------------+------------------------+----------------------------------+----------+---------------+----------------+
| abfbafd8-7609-4b7d-9def-4edddf2b887b | 70bed821-9a9f-4e1d-8c7e-7df89a923982 | default/kubernetes:443 | 47c28e562795468ea52e92226e3bc7b1 | HTTPS | 443 | True |
+--------------------------------------+--------------------------------------+------------------------+----------------------------------+----------+---------------+----------------+
$ openstack loadbalancer pool list
+--------------------------------------+------------------------+----------------------------------+--------------+----------+----------------+
| id | name | tenant_id | lb_algorithm | protocol | admin_state_up |
+--------------------------------------+------------------------+----------------------------------+--------------+----------+----------------+
| 70bed821-9a9f-4e1d-8c7e-7df89a923982 | default/kubernetes:443 | 47c28e562795468ea52e92226e3bc7b1 | ROUND_ROBIN | HTTPS | True |
+--------------------------------------+------------------------+----------------------------------+--------------+----------+----------------+
$ openstack loadbalancer member list default/kubernetes:443
+--------------------------------------+------+----------------------------------+--------------+---------------+--------+--------------------------------------+----------------+
| id | name | tenant_id | address | protocol_port | weight | subnet_id | admin_state_up |
+--------------------------------------+------+----------------------------------+--------------+---------------+--------+--------------------------------------+----------------+
| 5ddceaff-180b-47fa-b787-8921f4591cb0 | | 47c28e562795468ea52e92226e3bc7b1 | 192.168.5.10 | 6443 | 1 | b1be34f2-7c3d-41ca-b2f5-6dcbd3c1715b | True |
+--------------------------------------+------+----------------------------------+--------------+---------------+--------+--------------------------------------+----------------+

@ -1,186 +0,0 @@
===========================
Basic DevStack installation
===========================
The most basic DevStack installation of kuryr-kubernetes is pretty simple.
This document aims to be a tutorial through the installation steps.

The document assumes Ubuntu 20.04 LTS (using a server or cloud installation
is recommended, but desktop will also work), but the same steps should apply
to other operating systems. It is also assumed that ``git`` and ``curl`` are
already installed on the system. DevStack will make sure to install and
configure OpenStack, Kubernetes and the dependencies of both systems.

Please note that the DevStack installation should be done inside an isolated
environment such as a virtual machine, since it will make substantial changes
to the host.
Cloning required repositories
-----------------------------
First of all, you'll need a user account that can execute passwordless
``sudo`` commands. Consult the `DevStack Documentation`_ for details on how
to create one, or simply add the line:
.. code-block:: ini
"USERNAME ALL=(ALL) NOPASSWD:ALL"
to ``/etc/sudoers`` using the ``visudo`` command. Remember to change
``USERNAME`` to the real name of the user account.
Clone DevStack:
.. code-block:: console
$ git clone https://opendev.org/openstack-dev/devstack
Copy the sample ``local.conf`` (the DevStack configuration file) to the
devstack directory:
.. code-block:: console
$ curl https://opendev.org/openstack/kuryr-kubernetes/raw/branch/master/devstack/local.conf.sample \
-o devstack/local.conf
.. note::
The ``local.conf.sample`` file configures Neutron and Kuryr with OVN ML2
networking. In the ``kuryr-kubernetes/devstack`` directory there are other
sample configuration files that enable Open vSwitch networking instead of
OVN. See the other pages in this documentation section to learn more.
Now edit ``devstack/local.conf`` to set up some initial options (a combined
example is shown below the list):

* If you have multiple network interfaces, you need to set the ``HOST_IP``
  variable to the IP on the interface you want to use as DevStack's primary.
  DevStack sometimes complains about a missing ``HOST_IP`` even if there is
  a single network interface.
* If you already have Docker installed on the machine, you can comment out
  the line starting with ``enable_plugin devstack-plugin-container``.
* If you can't pull images from k8s.gcr.io, you can add the variable
  ``KURYR_KUBEADMIN_IMAGE_REPOSITORY`` to ``devstack/local.conf`` and set its
  value to a repository that you can access.
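Put together, the additions to ``devstack/local.conf`` might look like this
(all values below are placeholders):

.. code-block:: ini

    [[local|localrc]]
    HOST_IP=192.168.0.10
    # Comment this out if Docker is already installed:
    # enable_plugin devstack-plugin-container https://opendev.org/openstack/devstack-plugin-container
    KURYR_KUBEADMIN_IMAGE_REPOSITORY=registry.example.com/google_containers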
Once ``local.conf`` is configured, you can start the installation:
.. code-block:: console
$ devstack/stack.sh
The installation takes from 20 to 40 minutes. Once it's done you should see
output similar to this:
.. code-block:: console
=========================
DevStack Component Timing
(times are in seconds)
=========================
wait_for_service 8
pip_install 137
apt-get 295
run_process 14
dbsync 22
git_timed 168
apt-get-update 4
test_with_retry 3
async_wait 71
osc 200
-------------------------
Unaccounted time 505
=========================
Total runtime 1427
=================
Async summary
=================
Time spent in the background minus waits: 140 sec
Elapsed time: 1427 sec
Time if we did everything serially: 1567 sec
Speedup: 1.09811
This is your host IP address: 10.0.2.15
This is your host IPv6 address: ::1
Keystone is serving at http://10.0.2.15/identity/
The default users are: admin and demo
The password: pass
Services are running under systemd unit files.
For more information see:
https://docs.openstack.org/devstack/latest/systemd.html
DevStack Version: xena
Change:
OS Version: Ubuntu 20.04 focal
You can test DevStack by sourcing credentials and trying some commands:
.. code-block:: console
$ source devstack/openrc admin admin
$ openstack service list
+----------------------------------+------------------+------------------+
| ID | Name | Type |
+----------------------------------+------------------+------------------+
| 07e985b425fc4f8a9da20970a26f754a | octavia | load-balancer |
| 1dc08cb4401243848a562c0042d3f40a | neutron | network |
| 35627730938d4a4295f3add6fc826261 | nova | compute |
| 636b43b739e548e0bb369bc41fe1df08 | glance | image |
| 90ef7129985e4e10874d5e4ddb36ea01 | keystone | identity |
| ce177a3f05dc454fb3d43f705ae24dde | kuryr-kubernetes | kuryr-kubernetes |
| d3d6a461a78e4601a14a5e484ec6cdd1 | nova_legacy | compute_legacy |
| d97e5c31b1054a308c5409ee813c0310 | placement | placement |
+----------------------------------+------------------+------------------+
To verify that Kubernetes is running properly, list its nodes and check the
status of the only node you should have. The correct value is "Ready":
.. code-block:: console
$ kubectl get nodes
NAME STATUS AGE VERSION
localhost Ready 2m v1.6.2
To test kuryr-kubernetes itself try creating a Kubernetes pod:
.. code-block:: console
$ kubectl create deployment --image busybox test -- sleep 3600
$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
test-3202410914-1dp7g 0/1 ContainerCreating 0 7s <none> localhost
After a moment (even up to a few minutes, as the Docker image needs to be
downloaded) you should see that the pod got an IP from the OpenStack network:
.. code-block:: console
$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
test-3202410914-1dp7g 1/1 Running 0 35s 10.0.0.73 localhost
You can verify that this IP is really assigned to a Neutron port:
.. code-block:: console
[stack@localhost kuryr-kubernetes]$ openstack port list | grep 10.0.0.73
| 3ce7fd13-ad0a-4e92-9b6f-0d38d50b1699 | | fa:16:3e:8e:f4:30 | ip_address='10.0.0.73', subnet_id='ddfbc8e9-68da-48f9-8a05-238ea0607e0d' | ACTIVE |
If those steps were successful, then it looks like your DevStack with
kuryr-kubernetes is working correctly. In case of errors, copy the last ~50
lines of the logs, paste them into `paste.openstack.org`_ and ask other
developers for help on `Kuryr's IRC channel`_. More info on how to use
DevStack can be found in the `DevStack Documentation`_, especially in the
section `Using Systemd in DevStack`_, which explains how to use ``systemctl``
to control services and ``journalctl`` to read their logs.
.. _paste.openstack.org: http://paste.openstack.org
.. _Kuryr's IRC channel: ircs://irc.oftc.net:6697/openstack-kuryr
.. _DevStack Documentation: https://docs.openstack.org/devstack/latest/
.. _Using Systemd in DevStack: https://docs.openstack.org/devstack/latest/systemd.html

@ -1,83 +0,0 @@
==========================
Containerized installation
==========================
It is possible to configure DevStack to install kuryr-controller and
kuryr-cni on Kubernetes as pods. Details can be found on the
:doc:`../containerized` page; this page explains the DevStack aspects of
running containerized.
Installation
------------
To configure DevStack to install the Kuryr services as containerized
Kubernetes resources, you need to set the
``KURYR_K8S_CONTAINERIZED_DEPLOYMENT`` switch. Add this line to your
``local.conf``:
.. code-block:: ini
KURYR_K8S_CONTAINERIZED_DEPLOYMENT=True
This will trigger building the kuryr-controller and kuryr-cni containers
during installation, as well as deploying them on the Kubernetes cluster
DevStack installs.
Rebuilding container images
---------------------------
Instructions on how to manually rebuild both kuryr-controller and kuryr-cni
container images are presented on :doc:`../containerized` page. In case you
want to test any code changes, you need to rebuild the images first.
Changing configuration
----------------------
To change the kuryr.conf files that are put into the containers, you need to
edit the associated ConfigMap. On a DevStack deployment this can be done
using:
.. code-block:: console
$ kubectl -n kube-system edit cm kuryr-config
An editor will then appear, letting you edit the ConfigMap. Make sure to keep
the indentation correct when making changes.
Restarting services
-------------------
Once any changes are made to the Docker images or the configuration, it is
crucial to restart the pod you've modified.
kuryr-controller
~~~~~~~~~~~~~~~~
To restart kuryr-controller and let it load the new image and configuration,
simply kill the existing pod:
.. code-block:: console
$ kubectl -n kube-system get pods
<find kuryr-controller pod you want to restart>
$ kubectl -n kube-system delete pod <pod-name>
The Deployment controller will make sure to restart the pod with the new
configuration.
kuryr-cni
~~~~~~~~~
It's important to understand that kuryr-cni is only a storage pod, i.e. it is
actually idling with ``sleep infinity`` once all the files are copied into
the correct locations on the Kubernetes host.

You can force it to redeploy new files by killing it. The DaemonSet
controller should make sure to restart it with the new image and
configuration files.
.. code-block:: console
$ kubectl -n kube-system get pods
<find kuryr-cni pods you want to restart>
$ kubectl -n kube-system delete pod <pod-name1> <pod-name2> <...>

@ -1,42 +0,0 @@
..
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
Convention for heading levels in Neutron devref:
======= Heading 0 (reserved for the title in a document)
------- Heading 1
~~~~~~~ Heading 2
+++++++ Heading 3
''''''' Heading 4
(Avoid deeper levels because they do not render well.)
===========================
DevStack based Installation
===========================
This section describes how you can install and configure kuryr-kubernetes with
DevStack for testing different functionality, such as nested or different
ML2 drivers.
.. toctree::
:maxdepth: 1
basic
nested-vlan
nested-macvlan
nested-dpdk
ovn_support
ovn-octavia
containerized
ports-pool

@ -1,222 +0,0 @@
=========================================
How to try out nested-pods locally (DPDK)
=========================================
Following are the instructions for an all-in-one setup using the nested DPDK
driver. We assume that we already have the 'undercloud' configured, with at
least one VM as a Nova instance that is also a Kubernetes minion. We assume
that the VM has access to the Internet to install the necessary packages.

Configure the VM:

#. Install a kernel version supporting the uio_pci_generic module:
.. code-block:: bash
sudo apt install linux-image-`uname -r` linux-headers-`uname -r`
sudo update-grub
sudo reboot
#. Install DPDK. On Ubuntu:
.. code-block:: bash
sudo apt update
sudo apt install dpdk
#. Enable hugepages:
.. code-block:: bash
sudo sysctl -w vm.nr_hugepages=768
#. Load DPDK userspace driver:
.. code-block:: bash
sudo modprobe uio_pci_generic
#. Clone devstack repository:
.. code-block:: bash
cd ~
git clone https://git.openstack.org/openstack-dev/devstack
#. Edit local.conf:
.. code-block:: ini
[[local|localrc]]
RECLONE="no"
enable_plugin kuryr-kubernetes \
https://git.openstack.org/openstack/kuryr-kubernetes
OFFLINE="no"
LOGFILE=devstack.log
LOG_COLOR=False
ADMIN_PASSWORD=<undercloud_password>
DATABASE_PASSWORD=<undercloud_password>
RABBIT_PASSWORD=<undercloud_password>
SERVICE_PASSWORD=<undercloud_password>
SERVICE_TOKEN=<undercloud_password>
IDENTITY_API_VERSION=3
ENABLED_SERVICES=""
HOST_IP=<vm-ip-address>
SERVICE_HOST=<undercloud-host-ip-address>
MULTI_HOST=1
KEYSTONE_SERVICE_HOST=$SERVICE_HOST
MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST
KURYR_CONFIGURE_NEUTRON_DEFAULTS=False
KURYR_CONFIGURE_BAREMETAL_KUBELET_IFACE=False
enable_service docker
enable_service etcd3
enable_service kubernetes-api
enable_service kubernetes-controller-manager
enable_service kubernetes-scheduler
enable_service kubelet
enable_service kuryr-kubernetes
enable_service kuryr-daemon
[[post-config|$KURYR_CONF]]
[nested_dpdk]
dpdk_driver = uio_pci_generic
#. Stack:
.. code-block:: bash
cd ~/devstack
./stack.sh
#. Install CNI plugins:
.. code-block:: bash
wget https://github.com/containernetworking/plugins/releases/download/v0.6.0/cni-plugins-amd64-v0.6.0.tgz
tar xf cni-plugins-amd64-v0.6.0.tgz -C ~/cni/bin/
#. Install Multus CNI using this guide: https://github.com/intel/multus-cni#build
- *Note: Kuryr natively supports multiple VIFs now. A solution without
  Multus is described in step 13.*
#. Create Multus CNI configuration file ~/cni/conf/multus-cni.conf:
.. code-block:: json
{
"name":"multus-demo-network",
"type":"multus",
"delegates":[
{
"type":"kuryr-cni",
"kuryr_conf":"/etc/kuryr/kuryr.conf",
"debug":true
},
{
"type":"macvlan",
"master":"ens3",
"masterplugin":true,
"ipam":{
"type":"host-local",
"subnet":"10.0.0.0/24"
}
}
]
}
#. Create a directory to store pci devices used by container:
.. code-block:: bash
mkdir /var/pci_address
#. If you do not use Multus CNI as the tool to have multiple interfaces in
   the container, but use some multi-VIF driver instead, then change the
   Kuryr configuration file /etc/kuryr/kuryr.conf:
.. code-block:: ini
[kubernetes]
pod_vif_driver = nested-vlan
multi_vif_drivers = npwg_multiple_interfaces
[vif_pool]
vif_pool_mapping = nested-vlan:nested,nested-dpdk:noop
#. Also prepare and apply a network attachment definition, for example:
.. code-block:: yaml
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
name: "net-nested-dpdk"
annotations:
openstack.org/kuryr-config: '{
"subnetId": "<NEUTRON SUBNET ID>",
"driverType": "nested-dpdk"
}'
#. Reload systemd services:
.. code-block:: bash
sudo systemctl daemon-reload
#. Restart systemd services:
.. code-block:: bash
sudo systemctl restart devstack@kubelet.service devstack@kuryr-kubernetes.service devstack@kuryr-daemon.service
#. Create a pod specifying the additional interface in its annotations:
.. code-block:: yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: nginx-nested-dpdk
spec:
replicas: 1
template:
metadata:
name: nginx-nested-dpdk
labels:
app: nginx-nested-dpdk
annotations:
k8s.v1.cni.cncf.io/networks: net-nested-dpdk
spec:
containers:
- name: nginx-nested-dpdk
image: nginx
resources:
requests:
cpu: "1"
memory: "512Mi"
limits:
cpu: "1"
memory: "512Mi"
volumeMounts:
- name: dev
mountPath: /dev
- name: pci_address
mountPath: /var/pci_address
volumes:
- name: dev
hostPath:
path: /dev
type: Directory
- name: pci_address
hostPath:
path: /var/pci_address
type: Directory

@ -1,53 +0,0 @@
============================================
How to try out nested-pods locally (MACVLAN)
============================================
Following are the instructions for an all-in-one setup, using the
nested MACVLAN driver rather than VLAN and trunk ports.
#. To install OpenStack services run devstack with
``devstack/local.conf.pod-in-vm.undercloud.sample``.
#. Launch a Nova VM with MACVLAN support.

   .. todo::

      Add a list of the neutron commands required to launch such a VM
#. Log into the VM and set up Kubernetes along with Kuryr using devstack:
- Since undercloud Neutron will be used by pods, Neutron services should be
disabled in localrc.
- Run devstack with ``devstack/local.conf.pod-in-vm.overcloud.sample``.
Fill in the needed information, such as the subnet pool id to use or the
router.
#. Once devstack is done and all services are up inside the VM, the next
   steps are to configure the missing information at
   ``/etc/kuryr/kuryr.conf``:
- Configure worker VMs subnet:
.. code-block:: ini
[pod_vif_nested]
worker_nodes_subnets = <UNDERCLOUD_SUBNET_WORKER_NODES_UUID>
- Configure "pod_vif_driver" as "nested-macvlan":
.. code-block:: ini
[kubernetes]
pod_vif_driver = nested-macvlan
- Configure binding section:
.. code-block:: ini
[binding]
link_iface = <VM interface name eg. eth0>
- Restart kuryr-k8s-controller:
.. code-block:: console
$ sudo systemctl restart devstack@kuryr-kubernetes.service
Now launch pods using kubectl; the undercloud Neutron will serve the
networking.

@ -1,94 +0,0 @@
=================================================
How to try out nested-pods locally (VLAN + trunk)
=================================================
Following are the instructions for an all-in-one setup where Kubernetes will
also be running inside the same Nova VM in which Kuryr-controller and
Kuryr-cni will be running. 4 GB of memory and 2 vCPUs are the minimum
resource requirements for the VM:
#. To install OpenStack services run devstack with
``devstack/local.conf.pod-in-vm.undercloud.sample``. Ensure that "trunk"
service plugin is enabled in ``/etc/neutron/neutron.conf``:
.. code-block:: ini
[DEFAULT]
service_plugins = neutron.services.l3_router.l3_router_plugin.L3RouterPlugin,neutron.services.trunk.plugin.TrunkPlugin
#. Launch a VM with `Neutron trunk port`_. The next steps can be followed:
`Boot VM with a Trunk Port`_.
#. Inside VM, install and setup Kubernetes along with Kuryr using devstack:
- Since undercloud Neutron will be used by pods, Neutron services should be
disabled in localrc.
- Run devstack with ``devstack/local.conf.pod-in-vm.overcloud.sample``.
But first fill in the needed information:
- Point to the undercloud deployment by setting:
.. code-block:: bash
SERVICE_HOST=UNDERCLOUD_CONTROLLER_IP
- Fill in the subnetpool id of the undercloud deployment, as well as the
router where the new pod and service networks need to be connected:
.. code-block:: bash
KURYR_NEUTRON_DEFAULT_SUBNETPOOL_ID=UNDERCLOUD_SUBNETPOOL_V4_ID
KURYR_NEUTRON_DEFAULT_ROUTER=router1
- Ensure the nested-vlan driver is going to be set by setting:
.. code-block:: bash
KURYR_POD_VIF_DRIVER=nested-vlan
- Optionally, the ports pool functionality can be enabled by following:
  `How to enable ports pool with devstack`_.
- [OPTIONAL] If you want to enable the subport pools driver and the VIF
Pool Manager you need to include:
.. code-block:: bash
KURYR_VIF_POOL_MANAGER=True
#. Once devstack is done and all services are up inside the VM, the next
   steps are to configure the missing information at
   ``/etc/kuryr/kuryr.conf``:
- Configure worker VMs subnet:
.. code-block:: ini
[pod_vif_nested]
worker_nodes_subnets = <UNDERCLOUD_SUBNET_WORKER_NODES_UUID>
- Configure binding section:
.. code-block:: ini
[binding]
driver = kuryr.lib.binding.drivers.vlan
link_iface = <VM interface name eg. eth0>
- Restart kuryr-k8s-controller:
.. code-block:: console
$ sudo systemctl restart devstack@kuryr-kubernetes.service
- Restart kuryr-daemon:
.. code-block:: console
$ sudo systemctl restart devstack@kuryr-daemon.service
Now launch pods using kubectl; the undercloud Neutron will serve the
networking.
.. _Neutron trunk port: https://wiki.openstack.org/wiki/Neutron/TrunkPort
.. _Boot VM with a Trunk Port: https://docs.openstack.org/kuryr-kubernetes/latest/installation/trunk_ports.html
.. _How to enable ports pool with devstack: https://docs.openstack.org/kuryr-kubernetes/latest/installation/devstack/ports-pool.html

@ -1,105 +0,0 @@
=======================================================
How to enable OVN Octavia provider driver with devstack
=======================================================
To enable the utilization of OVN as the provider driver for Octavia through
devstack:
#. You can start with the sample DevStack configuration file for OVN
that kuryr-kubernetes comes with.
.. code-block:: console
$ curl https://opendev.org/openstack/kuryr-kubernetes/raw/branch/master/devstack/local.conf.sample \
-o devstack/local.conf
#. In case you want more Kuryr-specific features than the default handlers
   provide, more handlers can be enabled. For example, the following enables
   Network Policies in addition to the default features:
.. code-block:: bash
KURYR_ENABLED_HANDLERS=vif,kuryrport,service,endpoints,kuryrloadbalancer,
namespace,pod_label,policy,kuryrnetworkpolicy,kuryrnetwork
Then, the proper subnet drivers need to be set:
.. code-block:: bash
KURYR_SG_DRIVER=policy
KURYR_SUBNET_DRIVER=namespace
#. Run DevStack.
.. code-block:: console
$ ./stack.sh
Enabling Kuryr support for OVN Octavia driver via ConfigMap
-----------------------------------------------------------
Alternatively, you can enable Kuryr support for the OVN Octavia driver in the
Kuryr ConfigMap, in case the options were not set in the local.conf file. On
a DevStack deployment, the Kuryr ConfigMap can be edited using:
.. code-block:: console
$ kubectl -n kube-system edit cm kuryr-config
The following options need to be set in the ConfigMap:
.. code-block:: bash
[kubernetes]
endpoints_driver_octavia_provider = ovn
[octavia_defaults]
lb_algorithm = SOURCE_IP_PORT
enforce_sg_rules = False
member_mode = L2
Make sure to keep the indentation correct when making changes. To enforce the
new settings, you need to restart kuryr-controller by simply killing the
existing pod. The Deployment controller will make sure to restart the pod
with the new configuration. Kuryr automatically handles the recreation of
already created services/load balancers, so that all of them end up with the
same Octavia provider.
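You can check which provider the Kuryr-created load balancers ended up with
by looking at the ``provider`` column:

.. code-block:: console

    $ openstack loadbalancer list -c name -c provider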
Testing ovn-octavia driver support
----------------------------------
Once the environment is ready, you can test that network connectivity works
and verify that Kuryr creates the load balancer for the service with the OVN
provider specified in the ConfigMap.
To do that check out :doc:`../testing_connectivity`.
You can also manually create a load balancer in OpenStack:
.. code-block:: console
$ openstack loadbalancer create --vip-network-id public --provider ovn
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| admin_state_up | True |
| availability_zone | None |
| created_at | 2020-12-09T14:45:08 |
| description | |
| flavor_id | None |
| id | 94e7c431-912b-496c-a247-d52875d44ac7 |
| listeners | |
| name | |
| operating_status | OFFLINE |
| pools | |
| project_id | af820b57868c4864957d523fb32ccfba |
| provider | ovn |
| provisioning_status | PENDING_CREATE |
| updated_at | None |
| vip_address | 172.24.4.9 |
| vip_network_id | ee97665d-69d0-4995-a275-27855359956a |
| vip_port_id | c98e52d0-5965-4b22-8a17-a374f4399193 |
| vip_qos_policy_id | None |
| vip_subnet_id | 3eed0c05-6527-400e-bb80-df6e59d248f1 |
+---------------------+--------------------------------------+

@ -1,190 +0,0 @@
================================
Kuryr Kubernetes OVN Integration
================================
OVN provides virtual networking for Open vSwitch and is a component of the Open
vSwitch project.
OpenStack can use OVN as its network management provider through the Modular
Layer 2 (ML2) north-bound plug-in.
Integrating OVN allows Kuryr to bridge (both baremetal and nested) containers
and VM networking in an OVN-based OpenStack deployment.
Testing with DevStack
---------------------
The next points describe how to test OpenStack with OVN using DevStack.
We will start by describing how to test the baremetal case on a single host,
and then cover a nested environment where containers are created inside VMs.
Single Node Test Environment
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#. Create a test system.
It's best to use a throwaway dev system for running DevStack. Your best bet
is to use the latest Ubuntu LTS (20.04, Focal).
#. Optionally create the ``stack`` user. You'll need a user account with
   passwordless ``sudo``.
.. code-block:: console
$ git clone https://opendev.org/openstack-dev/devstack.git
$ sudo ./devstack/tools/create-stack-user.sh
$ sudo su - stack
#. Clone DevStack.
.. code-block:: console
$ sudo su - stack
$ git clone https://opendev.org/openstack-dev/devstack.git
$ git clone https://opendev.org/openstack/kuryr-kubernetes.git
#. Configure DevStack to use OVN.
kuryr-kubernetes comes with a sample DevStack configuration file for OVN you
can start with. For example, you may want to set some values for the various
PASSWORD variables in that file, or change the LBaaS service provider to
use. Feel free to edit it if you'd like, but it should work as-is.
.. code-block:: console
$ curl https://opendev.org/openstack/kuryr-kubernetes/raw/branch/master/devstack/local.conf.sample \
-o devstack/local.conf
Note that because OVN compiles OVS from source at
/usr/local/var/run/openvswitch, we need to state in local.conf that the path
differs from the default one (i.e., /var/run/openvswitch).
Optionally, the ports pool functionality can be enabled by following:
:doc:`./ports-pool`
.. note::
Kuryr-kubernetes uses OVN by default.
#. Run DevStack.
This is going to take a while. It installs a bunch of packages, clones a
bunch of git repos, and installs everything from these git repos.
.. code-block:: console
$ devstack/stack.sh
Once DevStack completes successfully, you should see output that looks
something like this:
.. code-block:: console
This is your host IP address: 192.168.5.10
This is your host IPv6 address: ::1
Keystone is serving at http://192.168.5.10/identity/
The default users are: admin and demo
The password: pass
#. Extra configurations.
DevStack does not wire up the public network by default so we must do some
extra steps for floating IP usage as well as external connectivity:
.. code-block:: console
$ sudo ip link set br-ex up
$ sudo ip route add 172.24.4.0/24 dev br-ex
$ sudo ip addr add 172.24.4.1/24 dev br-ex
Then you can create forwarding and NAT rules that will cause "external"
traffic from your instances to get rewritten to your network controller's ip
address and sent out on the network:
.. code-block:: console
$ sudo iptables -A FORWARD -d 172.24.4.0/24 -j ACCEPT
$ sudo iptables -A FORWARD -s 172.24.4.0/24 -j ACCEPT
$ sudo iptables -t nat -I POSTROUTING 1 -s 172.24.4.1/24 -j MASQUERADE
Inspect default Configuration
+++++++++++++++++++++++++++++
In order to check the default configuration, in terms of networks, subnets,
security groups and load balancers created upon a successful DevStack
stacking, you can check :doc:`../default_configuration`
Testing Network Connectivity
++++++++++++++++++++++++++++
Once the environment is ready, we can test that network connectivity works
among pods. To do that check out :doc:`../testing_connectivity`
Nested Containers Test Environment (VLAN)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Another deployment option is nested-vlan, where containers are created inside
OpenStack VMs using the trunk ports support. Thus, first we need to deploy an
undercloud DevStack environment with the components needed to create VMs
(e.g., Glance, Nova, Neutron, Keystone, ...), as well as the needed OVN
configuration, such as enabling the trunk support that will be needed for the
VM, and then install the overcloud deployment inside the VM with the Kuryr
components.
Undercloud deployment
+++++++++++++++++++++
The steps to deploy the undercloud environment are the same as described
above for the `Single Node Test Environment`, the difference being the sample
local.conf to use (step 4), in this case:
.. code-block:: console
$ curl https://opendev.org/openstack/kuryr-kubernetes/raw/branch/master/devstack/local.conf.pod-in-vm.undercloud.ovn.sample \
-o devstack/local.conf
The main differences from the default OVN local.conf sample are:

- There is no need to enable the kuryr-kubernetes plugin, as this will be
  installed inside the VM (overcloud).
- There is no need to enable the Kuryr-related services, as they will also be
  installed inside the VM: kuryr-kubernetes, kubelet, kubernetes-api,
  kubernetes-controller-manager and kubernetes-scheduler.
- The Nova and Glance components need to be enabled to be able to create the
  VM where we will install the overcloud.
- The OVN Trunk service plugin needs to be enabled to ensure trunk ports
  support.
Once the undercloud deployment has finished, the next steps are related to
creating the overcloud VM using the parent port of a trunk, so that
containers can be created inside with their own networks. To do that, follow
the steps detailed at :doc:`../trunk_ports`
Overcloud deployment
++++++++++++++++++++
Once the VM is up and running, we can start with the overcloud configuration.
The steps to perform are the same as without OVN integration, i.e., the
same steps as for ML2/OVS:
#. Log in into the VM:
.. code-block:: console
$ ssh -i id_rsa_demo ubuntu@FLOATING_IP
#. Deploy devstack following steps 3 and 4 detailed at :doc:`./nested-vlan`
Testing Nested Network Connectivity
+++++++++++++++++++++++++++++++++++
Similarly to the baremetal testing, we can create a demo deployment at the
overcloud VM, scale it to any number of pods and expose the service to check if
the deployment was successful. To do that check out
:doc:`../testing_nested_connectivity`
