Clarifications to mcapi_vexxhost README
Fix a few typos and omissions in README.rst and bring config examples into
line with config files used for functional tests.

Change-Id: I0e36d725d2ef1f9bc94c0ae0d8435054793b12f4
parent afb05ba74a
commit 7f883f176e
@ -57,6 +57,8 @@ Pre-requisites
OpenStack-Ansible Integration
-----------------------------
NOTE: The example configuration files shown below are suitable for use with an openstack-ansible All-In-One (AIO) build and can be found at openstack-ansible-ops/mcapi_vexxhost/playbooks/files/openstack_deploy/
The playbooks are distributed as an ansible collection and integrate with
OpenStack-Ansible. Add the collection to the deployment host by adding
the following to `/etc/openstack_deploy/user-collection-requirements.yml`:
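
A minimal requirements file is sketched below. The git URL, subdirectory
and version are assumptions for illustration only, and should be matched to
the openstack-ansible-ops release actually in use:

.. code-block:: yaml

   # hypothetical example - pin source and version for your deployment
   collections:
     - name: https://opendev.org/openstack/openstack-ansible-ops#/mcapi_vexxhost
       type: git
       version: master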
@ -82,13 +84,14 @@ The collections can then be installed with the following command:
   cd /opt/openstack-ansible
   openstack-ansible scripts/get-ansible-collection-requirements.yml

The modules in the kubernetes collection require additional python modules
to be present in the ansible-runtime python virtual environment. Specify these
in `/etc/openstack_deploy/user-ansible-venv-requirements.txt`:

.. code-block:: bash
   docker-image-py
   kubernetes

OpenStack-Ansible configuration for magnum-cluster-api driver
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@ -120,7 +123,8 @@ Specify the deployment of the control plane k8s cluster in
   - hosts

Define the physical hosts that will host the control plane k8s
cluster in `/etc/openstack_deploy/openstack_user_config.yml`. This
example is for an all-in-one deployment and should be adjusted to
match a real deployment with multiple hosts if high availability
is required.

@ -131,7 +135,7 @@ high availability is required.
   ip: 172.29.236.100

Integrate the control plane k8s cluster with the haproxy loadbalancer
in `/etc/openstack_deploy/group_vars/k8s_all/haproxy_service.yml`:

.. code-block:: yaml
@ -166,7 +170,7 @@ in `/etc/openstack-deploy/group_vars/k8s_all/haproxy_service.yml`
   - "{{ haproxy_k8s_service | combine(haproxy_k8s_service_overrides | default({})) }}"
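
The `combine` pattern above lets a deployer adjust the loadbalancer service
without redefining it completely. As an illustrative sketch (the override
key shown, `haproxy_bind`, is a standard OSA haproxy service field, but the
value here is an assumption for an internal-only endpoint), something like
the following could be placed in `user_variables.yml`:

.. code-block:: yaml

   # hypothetical override - bind the k8s API frontend only to the internal VIP
   haproxy_k8s_service_overrides:
     haproxy_bind:
       - "{{ internal_lb_vip_address }}"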
Configure the LXC container that will host the control plane k8s cluster to
be suitable for running nested containers in `/etc/openstack_deploy/group_vars/k8s_all/main.yml`:

.. code-block:: yaml
@ -178,7 +182,7 @@ be suitable for running nested containers in `/etc/openstack-deploy/group_vars/k
   - "proc:rw"
   - "sys:rw"

Set up config-overrides for the magnum service in `/etc/openstack_deploy/user_variables_magnum.yml`.
Adjust the images and flavors here as necessary; these are just for demonstration. Upload as many
images as you need for the different workload cluster kubernetes versions.
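
The images and flavors referenced in these overrides must exist in the
cloud. As a hedged sketch (the image name, file and flavor sizing below are
illustrative assumptions, not values shipped by this repository), they could
be created with the standard OpenStack CLI; `os_distro` is the image
property magnum uses to select a driver:

.. code-block:: bash

   # hypothetical image and flavor - adjust names and sizes to your deployment
   openstack image create ubuntu-2204-kube-v1.27.4 \
       --disk-format qcow2 \
       --file ubuntu-2204-kube-v1.27.4.qcow2 \
       --property os_distro=ubuntu

   openstack flavor create m1.medium --ram 4096 --vcpus 2 --disk 40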
@ -208,22 +212,6 @@ images as you need for the different workload cluster kubernetes versions.
   ram: 4096
   vcpus: 2

   # install the vexxhost magnum-cluster-api plugin into the magnum venv
   magnum_user_pip_packages:
     - git+https://github.com/vexxhost/magnum-cluster-api@main#egg=magnum-cluster-api

@ -245,6 +233,23 @@ images as you need for the different workload cluster kubernetes versions.
   # store certificates in the magnum database instead of barbican
   cert_manager_type: x509keypair

Set up config-overrides for the control plane k8s cluster in
`/etc/openstack_deploy/user_variables_k8s.yml`.
Attention must be given to the SSL configuration. Users and workload clusters
will interact with the external endpoint and must trust the SSL certificate.
The magnum service and cluster-api can be configured to interact with either
the external or internal endpoint and must trust the SSL certificate.
Depending on the environment, these may be derived from different certificate
authorities.

.. code-block:: yaml
   # connect ansible group, host and network addresses into control plane k8s deployment
   kubernetes_control_plane_group: k8s_all
   kubelet_hostname: "{{ ansible_facts['hostname'] }}"
   kubelet_node_ip: "{{ management_address }}"
   kubernetes_hostname: "{{ internal_lb_vip_address }}"
   kubernetes_non_init_namespace: true

   # Pick a range of addresses for the control plane k8s cluster cilium
   # network that do not collide with anything else in the deployment
   cilium_ipv4_cidr: 172.29.200.0/22

@ -265,7 +270,13 @@ For a new deployment
Run the OSA playbooks/setup.yml playbook as usual, following the normal
deployment guide.

Ensure that additional python modules required for ansible are present:

.. code-block:: bash

   /opt/ansible-runtime/bin/pip install docker-image-py

Run the magnum-cluster-api deployment:
.. code-block:: bash
@ -320,7 +331,7 @@ It will then deploy the workload k8s cluster using magnum, and
run a sonobuoy "quick mode" test of the workload cluster.

This playbook is intended to be used on an openstack-ansible
all-in-one deployment with no public network configured.

Use Magnum to create a workload cluster
---------------------------------------
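
Once the deployment is complete, workload clusters are created through the
normal magnum workflow. The sketch below uses the standard OpenStack CLI;
the template name, image, flavors and label values are illustrative
assumptions, not values shipped by this repository:

.. code-block:: bash

   # hypothetical cluster template - adjust image, flavors and labels
   openstack coe cluster template create k8s-v1.27 \
       --coe kubernetes \
       --image ubuntu-2204-kube-v1.27.4 \
       --flavor m1.medium \
       --master-flavor m1.medium \
       --label kube_tag=v1.27.4

   openstack coe cluster create my-cluster \
       --cluster-template k8s-v1.27 \
       --master-count 1 \
       --node-count 2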