Alternative to "kuryr-kubernetes"

This patch replaces "kuryr-kubernetes" with "devstack-plugin-container"
for building the k8s environment used by tacker's FT jobs.
Along with the switch to devstack-plugin-container, k8s, cri-o and
helm are upgraded:

k8s: 1.26.8 -> 1.30.5
crio: 1.26 -> 1.30.5
helm: 3.11.3 -> 3.15.4

The following is a summary of the fixes in this patch.

* Remove plugins and settings related to kuryr-kubernetes
* Rename parameters containing "kuryr" (e.g. kuryr_k8s_api_url ->
  k8s_api_url)
* Use devstack-plugin-container to build the k8s environment for FT
* Add the parameters required by devstack-plugin-container

Also, the following is a list of problems that occurred when setting
up the k8s environment with devstack-plugin-container, and how they
were fixed.

Cannot get bearer_token value:
- modified file: roles/setup-default-vim/tasks/main.yaml
- The task "Get admin token from described secret" in the Ansible
  role "setup-default-vim" failed to obtain the value of
  bearer_token, which is set as a parameter when creating the vim,
  causing an error. Retrying the token retrieval fixed the problem.
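
In Ansible, such a retry takes the form of an `until` loop on the task
that reads the token. A minimal sketch consistent with the patched role
(the variable names and retry counts are those used in the patch):

```yaml
- name: Get admin token from described secret
  shell: >
    kubectl get {{ admin_secret_name.stdout }} -n kube-system
    -o jsonpath="{.data.token}" | base64 -d
  register: admin_token
  # Retry until kubectl returns a non-empty token value
  until: admin_token.stdout != ""
  retries: 10
  delay: 5
  become: yes
  become_user: stack
```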

Unknown error in "Create clusterrolebinding on k8s server" task:
- modified file: roles/setup-k8s-nodes/tasks/main.yaml
- In the task "Create clusterrolebinding on k8s server" of the
  Ansible role "setup-k8s-oidc", a `failed to download openapi:
  unknown;` error occurred. The cause was that the kube-apiserver
  Pod was still in the "Pending" state after the preceding "Wait
  for k8s apiserver to restart" task had finished. The error was
  fixed by waiting for the Pod to reach the "Running" state.
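
The wait can be implemented by polling the kube-apiserver Pod phase
with kubectl until it reports "Running". A sketch following the patched
task (the `retries` value here is illustrative; only the 30-second
delay appears in the patch):

```yaml
- name: Wait for k8s apiserver to restart
  command: >
    kubectl get pods -n kube-system -l component=kube-apiserver
    -o jsonpath='{.items[0].status.phase}'
  register: kube_apiserver_status
  # Poll until the first kube-apiserver Pod reports phase "Running"
  until: kube_apiserver_status.stdout == "Running"
  retries: 6
  delay: 30
  become: yes
  become_user: stack
```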

"cni0" is not assigned the intended IP address:
- added file: roles/restart-kubelet-service/tasks/main.yaml
- When using devstack-plugin-container to create a k8s environment
  and deploy a Pod, the Pod deployment fails with the error `network:
  failed to set bridge addr: "cni0" already has an IP address
  different from 10.x.x.x`. Removing the associated interface and
  restarting the service cleared the error.
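
The workaround boils down to downing and deleting the cni0 and
flannel.1 interfaces, then restarting kubelet, as the added
restart-kubelet-service role does:

```yaml
- name: k8s interface down
  shell: ip link set cni0 down && ip link set flannel.1 down
  become: yes
- name: k8s interface delete
  shell: ip link delete cni0 && ip link delete flannel.1
  become: yes
- name: kubelet service restart
  service:
    name: kubelet
    state: restarted
  become: yes
```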

Depends-On: https://review.opendev.org/c/openstack/devstack-plugin-container/+/926709
Change-Id: I596a2339f6a3c78fee99b92d7bfb65a6b0244901
Yoshiro Watanabe 2024-08-19 02:48:36 +00:00
parent 4e92f2f130
commit 0d29292e00
7 changed files with 51 additions and 52 deletions


@@ -425,13 +425,10 @@
       - openstack/heat
       - openstack/horizon
       - openstack/keystone
-      - openstack/kuryr-kubernetes
       - openstack/neutron
       - openstack/nova
-      - openstack/octavia
       - openstack/placement
       - openstack/python-barbicanclient
-      - openstack/python-octaviaclient
       - openstack/python-tackerclient
       - openstack/tacker
       - openstack/tacker-horizon
@@ -441,7 +438,6 @@
       barbican: https://opendev.org/openstack/barbican
       heat: https://opendev.org/openstack/heat
       neutron: https://opendev.org/openstack/neutron
-      octavia: https://opendev.org/openstack/octavia
     devstack_services:
       base: false
       c-api: true
@@ -463,11 +459,6 @@
       n-novnc: true
       n-sch: true
       neutron: true
-      o-api: true
-      o-cw: true
-      o-hk: true
-      o-hm: true
-      octavia: true
       placement-api: true
       placement-client: true
       ovn-controller: true
@@ -506,18 +497,15 @@
     devstack_local_conf: {}
     devstack_plugins:
       devstack-plugin-container: https://opendev.org/openstack/devstack-plugin-container
-      kuryr-kubernetes: https://opendev.org/openstack/kuryr-kubernetes
     devstack_services:
       etcd3: false
-      kubernetes-master: true
-      kuryr-daemon: true
-      kuryr-kubernetes: true
-      octavia: false
       ovn-controller: true
       ovn-northd: true
       ovs-vswitchd: true
       ovsdb-server: true
       q-ovn-metadata-agent: true
+      container: true
+      k8s-master: true
     tox_install_siblings: false
   group-vars:
     subnode:
@@ -527,21 +515,10 @@
         IS_ZUUL_FT: True
         K8S_API_SERVER_IP: "{{ hostvars['controller-k8s']['nodepool']['private_ipv4'] }}"
         KEYSTONE_SERVICE_HOST: "{{ hostvars['controller']['nodepool']['private_ipv4'] }}"
-        KURYR_FORCE_IMAGE_BUILD: true
-        KURYR_K8S_API_PORT: 6443
-        KURYR_K8S_API_URL: "https://{{ hostvars['controller-k8s']['nodepool']['private_ipv4'] }}:${KURYR_K8S_API_PORT}"
-        KURYR_K8S_CONTAINERIZED_DEPLOYMENT: false
-        KURYR_NEUTRON_DEFAULT_SUBNETPOOL_ID: shared-default-subnetpool-v4
-        # NOTES:
-        # - In Bobcat cycle, Kubernetes version is updated to 1.26.
-        #   https://blueprints.launchpad.net/tacker/+spec/update-k8s-helm-prometheus
-        KURYR_KUBERNETES_VERSION: 1.26.8
         CONTAINER_ENGINE: crio
-        CRIO_VERSION: 1.26
+        K8S_VERSION: "1.30.5"
+        CRIO_VERSION: "1.30.5"
         MYSQL_HOST: "{{ hostvars['controller']['nodepool']['private_ipv4'] }}"
-        OCTAVIA_AMP_IMAGE_FILE: "/tmp/test-only-amphora-x64-haproxy-ubuntu-bionic.qcow2"
-        OCTAVIA_AMP_IMAGE_NAME: "test-only-amphora-x64-haproxy-ubuntu-bionic"
-        OCTAVIA_AMP_IMAGE_SIZE: 3
         OVS_BRIDGE_MAPPINGS: public:br-ex,mgmtphysnet0:br-infra
         PHYSICAL_NETWORK: mgmtphysnet0
         TACKER_HOST: "{{ hostvars['controller-tacker']['nodepool']['private_ipv4'] }}"
@@ -551,6 +528,7 @@
         Q_ML2_PLUGIN_MECHANISM_DRIVERS: ovn,logger
         # TODO(ueha): Remove this workarround if the Zuul jobs succeed with GLOBAL_VENV=true
         GLOBAL_VENV: false
+        K8S_TOKEN: "9agf12.zsu5uh2m4pzt3qba"
       devstack_services:
         dstat: false
         horizon: false
@@ -576,9 +554,6 @@
         KEYSTONE_SERVICE_HOST: "{{ hostvars['controller']['nodepool']['private_ipv4'] }}"
         L2_AGENT_EXTENSIONS: qos
         MYSQL_HOST: "{{ hostvars['controller']['nodepool']['private_ipv4'] }}"
-        OCTAVIA_AMP_IMAGE_FILE: "/tmp/test-only-amphora-x64-haproxy-ubuntu-bionic.qcow2"
-        OCTAVIA_AMP_IMAGE_NAME: "test-only-amphora-x64-haproxy-ubuntu-bionic"
-        OCTAVIA_AMP_IMAGE_SIZE: 3
         OVS_BRIDGE_MAPPINGS: public:br-ex,mgmtphysnet0:br-infra
         PHYSICAL_NETWORK: mgmtphysnet0
         Q_SERVICE_PLUGIN_CLASSES: ovn-router,neutron.services.qos.qos_plugin.QoSPlugin,qos
@@ -596,15 +571,9 @@
           $NEUTRON_DHCP_CONF:
             DEFAULT:
               enable_isolated_metadata: True
-          $OCTAVIA_CONF:
-            controller_worker:
-              amp_active_retries: 9999
-    kuryr_k8s_api_url: "https://{{ hostvars['controller-k8s']['nodepool']['private_ipv4'] }}:6443"
+    k8s_api_url: "https://{{ hostvars['controller-k8s']['nodepool']['private_ipv4'] }}:6443"
     k8s_ssl_verify: true
-    # NOTES:
-    # - In Bobcat cycle, Helm version is updated to 3.11.
-    #   https://blueprints.launchpad.net/tacker/+spec/update-k8s-helm-prometheus
-    helm_version: "3.11.3"
+    helm_version: "3.15.4"
     test_matrix_configs: [neutron]
     zuul_work_dir: src/opendev.org/openstack/tacker
     zuul_copy_output:


@@ -3,6 +3,7 @@
     - ensure-db-cli-installed
     - setup-k8s-nodes
     - orchestrate-devstack
+    - restart-kubelet-service
    - modify-heat-policy
     - setup-k8s-oidc
     - setup-default-vim


@@ -0,0 +1,23 @@
+- block:
+    # NOTE: When creating a k8s environment with devstack-plugin-container
+    # and deploying a Pod, the following error occurred: `network: failed
+    # to set bridge addr: "cni0" already has an IP address different from
+    # 10.x.x.x`, and the Pod failed to be deployed. As a fix, delete the
+    # related interfaces and restart the kubelet service.
+    - name: k8s interface down
+      shell: ip link set cni0 down && ip link set flannel.1 down
+      become: yes
+    - name: k8s interface delete
+      shell: ip link delete cni0 && ip link delete flannel.1
+      become: yes
+    - name: kubelet service restart
+      service:
+        name: kubelet
+        state: restarted
+      become: yes
+  when:
+    - inventory_hostname == 'controller-k8s'
+    - k8s_api_url is defined


@@ -94,6 +94,9 @@
         kubectl get {{ admin_secret_name.stdout }} -n kube-system -o jsonpath="{.data.token}"
         | base64 -d
       register: admin_token
+      until: admin_token.stdout != ""
+      retries: 10
+      delay: 5
       become: yes
       become_user: stack
@@ -115,7 +118,7 @@
     when:
       - inventory_hostname == 'controller-k8s'
-      - kuryr_k8s_api_url is defined
+      - k8s_api_url is defined

 - block:
     - name: Copy tools/test-setup-k8s-vim.sh
@@ -182,7 +185,7 @@
       replace:
         path: "{{ item }}"
         regexp: "https://127.0.0.1:6443"
-        replace: "{{ kuryr_k8s_api_url }}"
+        replace: "{{ k8s_api_url }}"
       with_items:
         - "{{ zuul_work_dir }}/samples/tests/etc/samples/local-k8s-vim.yaml"
         - "{{ zuul_work_dir }}/samples/tests/etc/samples/local-k8s-vim-helm.yaml"
@@ -193,7 +196,7 @@
       replace:
         path: "{{ item }}"
         regexp: "https://127.0.0.1:6443"
-        replace: "{{ kuryr_k8s_api_url }}"
+        replace: "{{ k8s_api_url }}"
       with_items:
         - "{{ zuul_work_dir }}/samples/tests/etc/samples/local-k8s-vim-oidc.yaml"
     when:
@@ -283,7 +286,7 @@
     when:
       - inventory_hostname == 'controller-tacker'
-      - kuryr_k8s_api_url is defined
+      - k8s_api_url is defined

 - block:
     - name: Copy tools/test-setup-mgmt.sh
@@ -329,4 +332,4 @@
     when:
       - inventory_hostname == 'controller-tacker'
-      - kuryr_k8s_api_url is defined
+      - k8s_api_url is defined


@@ -33,4 +33,4 @@
       become: yes
     when:
       - inventory_hostname == 'controller-k8s'
-      - kuryr_k8s_api_url is defined
+      - k8s_api_url is defined


@@ -89,11 +89,14 @@
       ignore_errors: yes
     - name: Wait for k8s apiserver to restart
-      wait_for:
-        host: "{{ hostvars['controller-k8s']['nodepool']['private_ipv4'] }}"
-        port: 6443
+      command: >
+        kubectl get pods -n kube-system -l component=kube-apiserver -o jsonpath='{.items[0].status.phase}'
+      register: kube_apiserver_status
+      until: kube_apiserver_status.stdout == "Running"
       delay: 30
       timeout: 180
+      become: yes
+      become_user: stack
       ignore_errors: yes
     - name: Create clusterrolebinding on k8s server


@@ -151,7 +151,7 @@
         --project {{ os_project_tenant1 }}
         --os-project-domain {{ os_domain_tenant1 }}
         --os-user-domain {{ os_domain_tenant1 }}
-        --endpoint {{ kuryr_k8s_api_url }} --os-disable-cert-verify
+        --endpoint {{ k8s_api_url }} --os-disable-cert-verify
         --k8s-token {{ hostvars['controller-k8s'].admin_token.stdout }}
         -o {{ k8s_vim_conf_path_tenant1 }}
@@ -185,7 +185,7 @@
         --project {{ os_project_tenant2 }}
         --os-project-domain {{ os_domain_tenant2 }}
         --os-user-domain {{ os_domain_tenant2 }}
-        --endpoint {{ kuryr_k8s_api_url }} --os-disable-cert-verify
+        --endpoint {{ k8s_api_url }} --os-disable-cert-verify
         --k8s-token {{ hostvars['controller-k8s'].admin_token.stdout }}
         -o {{ k8s_vim_conf_path_tenant2 }}
@@ -213,4 +213,4 @@
     when:
       - inventory_hostname == 'controller-tacker'
-      - kuryr_k8s_api_url is defined
+      - k8s_api_url is defined