Change documentation for devstack deployment.

Reflect reality in the docs regarding devstack deployment.
Also refresh the docs for vagrant, update the Vagrantfile,
and remove the outdated OpenDaylight docs.

Change-Id: Ic038967547ebf748c5b41ad598e8553c4a6bebad
Roman Dobosz
2021-07-19 11:01:45 +02:00
parent aae39f5011
commit bd8469b40b
9 changed files with 192 additions and 352 deletions


@@ -1,75 +0,0 @@
vagrant-devstack-Kuryr-Kubernetes
=================================
Getting started
---------------
A Vagrant-based kuryr, neutron, keystone, docker and kubernetes system.
Steps to try vagrant image:
1. Install Vagrant on your local machine. Install one of the currently
supported providers: VirtualBox, Libvirt or Parallels.
2. Git clone kuryr-kubernetes repository.
3. Run `cd kuryr-kubernetes/contrib/vagrant`
4. Run `vagrant up`
It will take from 10 to 60 minutes, depending on your internet speed.
Vagrant-cachier can speed up the process [1].
5. `vagrant ssh`
At this point you should have experimental kubernetes (etcdv3, k8s-apiserver,
k8s-controller-manager, k8s-scheduler, kubelet and kuryr-controller), docker,
kuryr, neutron, keystone, placement, nova, octavia all up, running and pointing
to each other. Pods and services orchestrated by kubernetes will be backed by
kuryr+neutron and Octavia. The architecture of the setup can be seen at [2].
References:
[1] http://fgrehm.viewdocs.io/vagrant-cachier/
[2] https://docs.openstack.org/developer/kuryr-kubernetes/devref/kuryr_kubernetes_design.html
Vagrant Options available
-------------------------
You can set the following environment variables before running `vagrant up` to modify
the definition of the Virtual Machine spawned:
* **VAGRANT\_KURYR\_VM\_BOX**: To change the Vagrant Box used. Should be available in
[atlas](https://app.vagrantup.com/).
An rpm-based example:
export VAGRANT_KURYR_VM_BOX=centos/7
* **VAGRANT\_KURYR\_VM\_MEMORY**: To modify the RAM of the VM. Defaulted to: 6144.
If you plan to create multiple Kubernetes services on the setup and the Octavia
driver used is Amphora, you should increase this setting.
* **VAGRANT\_KURYR\_VM\_CPU**: To modify the cpus of the VM. Defaulted to: 2.
* **VAGRANT\_KURYR\_RUN\_DEVSTACK**: Whether `vagrant up` should run devstack to
have an environment ready to use. Set it to 'false' if you want to edit
`local.conf` before running ./stack.sh manually in the VM. Defaulted to: true.
See below for additional options for editing local.conf.
For a lighter devstack installation, you can use the "local.conf"[1] that uses ovn
and ovn-octavia, so no VM will be created for each load balancer as is done by the
default Octavia provider (Amphora).
References:
[1] https://github.com/openstack/kuryr-kubernetes/blob/master/devstack/local.conf.ovn.sample
Additional devstack configuration
---------------------------------
To add additional configuration to local.conf before the VM is provisioned, you can
create a file called "user_local.conf" in the contrib/vagrant directory of
kuryr-kubernetes. This file will be appended to the "local.conf" created during the
Vagrant provisioning.
For example, to use OVN as the Neutron plugin with Kuryr, you can create a
"user_local.conf" with the following configuration:
enable_plugin networking-ovn https://opendev.org/openstack/networking-ovn
enable_service ovn-northd
enable_service ovn-controller
disable_service q-agt
disable_service q-l3
disable_service q-dhcp


@@ -0,0 +1,99 @@
====================================================
Vagrant based Kuryr-Kubernetes devstack installation
====================================================
Deploy kuryr-kubernetes on devstack in a VM using `Vagrant`_. Vagrant simplifies
the life cycle of the local virtual machine and provides automation for
repetitive tasks.
Requirements
------------
For comfortable work, the minimal host requirements are:
#. ``vagrant`` installed
#. 4 CPU cores
#. At least 8GB of RAM
#. Around 20GB of free disk space
Vagrant will create a VM with 2 cores, 6GB of RAM and a dynamically expanded
disk image.
Getting started
---------------
You'll need vagrant itself, e.g.:
.. code:: console
$ apt install vagrant virtualbox
Optionally, you can install libvirt instead of VirtualBox, although
VirtualBox is the easiest drop-in.
Next, clone the kuryr-kubernetes repository:
.. code:: console
$ git clone https://opendev.org/openstack/kuryr-kubernetes
Then run the provided Vagrantfile by executing:
.. code:: console
$ cd kuryr-kubernetes/contrib/vagrant
$ vagrant up
This can take some time, depending on your host performance; expect
20 minutes and up.
After deployment is complete, you can access the VM by ssh:
.. code:: console
$ vagrant ssh
At this point you should have experimental kubernetes (etcdv3, k8s-apiserver,
k8s-controller-manager, k8s-scheduler, kubelet and kuryr-controller), docker,
OpenStack services (neutron, keystone, placement, nova, octavia), kuryr-cni and
kuryr-controller all up, running and pointing to each other. Pods and services
orchestrated by kubernetes will be backed by kuryr+neutron and Octavia. The
architecture of the setup `can be seen here`_.
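Once inside the VM you can sanity-check the Kubernetes side. A minimal sketch, assuming ``kubectl`` is on the PATH inside the VM (the guard keeps it safe to run on hosts without it):

```shell
# Check whether kubectl is available before querying the cluster;
# on hosts without it the script still exits cleanly.
status=$(command -v kubectl >/dev/null 2>&1 && echo present || echo absent)
echo "kubectl is $status"
if [ "$status" = "present" ]; then
    # Lists the devstack node and its readiness condition.
    kubectl get nodes -o wide
fi
```

A node in ``Ready`` state indicates kubelet and the kuryr-cni wiring came up.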
Vagrant Options available
-------------------------
You can set the following environment variables before running ``vagrant up`` to
modify the definition of the Virtual Machine spawned:
* ``VAGRANT_KURYR_VM_BOX`` - to change the Vagrant Box used. Should be
available in `atlas <https://app.vagrantup.com/>`_. For example, an
rpm-based option:
.. code:: console
$ export VAGRANT_KURYR_VM_BOX=centos/8
* ``VAGRANT_KURYR_VM_MEMORY`` - to modify the RAM of the VM. Defaulted to:
**6144**. If you plan to create multiple Kubernetes services on the setup and
the Octavia driver used is Amphora, you should increase this setting.
* ``VAGRANT_KURYR_VM_CPU`` - to modify the number of CPU cores for the VM.
Defaulted to: **2**.
* ``VAGRANT_KURYR_RUN_DEVSTACK`` - whether ``vagrant up`` should run devstack
to have an environment ready to use. Set it to 'false' if you want to edit
``local.conf`` before stacking devstack in the VM. Defaulted to: **true**.
See below for additional options for editing local.conf.
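The options above can be combined before provisioning; a hedged sketch (the variable names come from the list above, the values are arbitrary examples, not recommendations):

```shell
# Give the VM more resources before `vagrant up` picks up the defaults.
export VAGRANT_KURYR_VM_BOX=centos/8
export VAGRANT_KURYR_VM_MEMORY=8192
export VAGRANT_KURYR_VM_CPU=4
# Verify the overrides are exported for the vagrant process to see:
env | grep '^VAGRANT_KURYR_' | sort
```

Follow this with ``vagrant up`` in ``kuryr-kubernetes/contrib/vagrant``.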
Additional devstack configuration
---------------------------------
To add additional configuration to local.conf before the VM is provisioned, you
can create a file called ``user_local.conf`` in the contrib/vagrant directory
of kuryr-kubernetes. This file will be appended to the ``local.conf`` created
during the Vagrant provisioning.
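As a sketch of the append mechanism (written to a scratch directory here; in practice the file lives in ``contrib/vagrant`` of your checkout, and the ``LOGFILE`` setting is purely illustrative):

```shell
# Create a user_local.conf with extra devstack settings; its contents get
# appended to the generated local.conf during provisioning.
workdir=$(mktemp -d)
cat > "$workdir/user_local.conf" <<'EOF'
# Example extra setting only; use any valid local.conf options here.
LOGFILE=/opt/stack/logs/stack.sh.log
EOF
cat "$workdir/user_local.conf"
```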
.. _Vagrant: https://www.vagrantup.com/
.. _can be seen here: https://docs.openstack.org/developer/kuryr-kubernetes/devref/kuryr_kubernetes_design.html


@@ -9,21 +9,21 @@ Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
config.vm.hostname = 'devstack'
config.vm.provider 'virtualbox' do |v, override|
override.vm.box = ENV.fetch('VAGRANT_KURYR_VM_BOX', 'generic/ubuntu1804')
override.vm.box = ENV.fetch('VAGRANT_KURYR_VM_BOX', 'generic/ubuntu2004')
v.memory = VM_MEMORY
v.cpus = VM_CPUS
v.customize "post-boot", ['controlvm', :id, 'setlinkstate1', 'on']
end
config.vm.provider 'parallels' do |v, override|
override.vm.box = ENV.fetch('VAGRANT_KURYR_VM_BOX', 'generic/ubuntu1804')
override.vm.box = ENV.fetch('VAGRANT_KURYR_VM_BOX', 'generic/ubuntu2004')
v.memory = VM_MEMORY
v.cpus = VM_CPUS
v.customize ['set', :id, '--nested-virt', 'on']
end
config.vm.provider 'libvirt' do |v, override|
override.vm.box = ENV.fetch('VAGRANT_KURYR_VM_BOX', 'generic/ubuntu1804')
override.vm.box = ENV.fetch('VAGRANT_KURYR_VM_BOX', 'generic/ubuntu2004')
v.memory = VM_MEMORY
v.cpus = VM_CPUS
v.nested = true


@@ -100,7 +100,7 @@ script. Below is the list of available variables:
kuryr-daemon will be started in the CNI container. It is using ``os-vif``
and ``oslo.privsep`` to do pod wiring tasks. By default it'll call ``sudo``
to raise privileges, even if the container is privileged by itself or
``sudo`` is missing from container OS (e.g. default CentOS 7). To prevent
``sudo`` is missing from container OS (e.g. default CentOS 8). To prevent
that make sure to set following options in kuryr.conf used for kuryr-daemon:
.. code-block:: ini


@@ -5,54 +5,58 @@ Basic DevStack installation
Most basic DevStack installation of kuryr-kubernetes is pretty simple. This
document aims to be a tutorial through installation steps.
Document assumes using Centos 7 OS, but same steps should apply for other
operating systems. It is also assumed that ``git`` is already installed on the
system. DevStack will make sure to install and configure OpenStack, Kubernetes
and dependencies of both systems.
Document assumes using Ubuntu 20.04 LTS (using a server or cloud installation
is recommended, but desktop will also work), but the same steps should apply to
other operating systems. It is also assumed that ``git`` and ``curl`` are
already installed on the system. DevStack will make sure to install and
configure OpenStack, Kubernetes and the dependencies of both systems.
Please note that the DevStack installation should be done inside an isolated
environment, such as a virtual machine, since it will make substantial changes
to the host.
Cloning required repositories
-----------------------------
First of all you need to clone DevStack:
First of all, you'll need a user account that can execute passwordless
``sudo`` commands. Consult the `DevStack Documentation`_ for details on how to
create one, or simply add the line:
.. code-block:: ini
USERNAME ALL=(ALL) NOPASSWD:ALL
to ``/etc/sudoers`` using the ``visudo`` command. Remember to change
``USERNAME`` to the real name of the user account.
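An alternative to editing ``/etc/sudoers`` directly is a drop-in file under ``/etc/sudoers.d`` (a common sudo convention, not taken from this document; ``USERNAME`` is a placeholder):

```shell
# Compose the sudoers rule for the given account.
USERNAME=stack
rule="$USERNAME ALL=(ALL) NOPASSWD:ALL"
echo "$rule"
# On the target host you would install it with (requires root):
#   echo "$rule" | sudo tee /etc/sudoers.d/$USERNAME
#   sudo chmod 0440 /etc/sudoers.d/$USERNAME
```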
Clone DevStack:
.. code-block:: console
$ git clone https://opendev.org/openstack-dev/devstack
Create user *stack*, give it required permissions and log in as that user:
.. code-block:: console
$ ./devstack/tools/create-stack-user.sh
$ sudo su stack
The *stack* user has ``/opt/stack`` set as its home directory. It will need its
own clone of DevStack. Also clone kuryr-kubernetes:
.. code-block:: console
$ git clone https://opendev.org/openstack-dev/devstack
$ git clone https://opendev.org/openstack/kuryr-kubernetes
Copy the sample ``local.conf`` (DevStack configuration file) to the devstack
directory:
.. code-block:: console
$ cp kuryr-kubernetes/devstack/local.conf.sample devstack/local.conf
$ curl https://opendev.org/openstack/kuryr-kubernetes/raw/branch/master/devstack/local.conf.sample \
-o devstack/local.conf
.. note::
``local.conf.sample`` file is configuring Neutron and Kuryr with standard
Open vSwitch ML2 networking. In the ``devstack`` directory there are other
sample configuration files that enable OpenDaylight or Dragonflow
``local.conf.sample`` file is configuring Neutron and Kuryr with OVN
ML2 networking. In the ``kuryr-kubernetes/devstack`` directory there are
other sample configuration files that enable Open vSwitch instead of OVN
networking. See other pages in this documentation section to learn more.
Now edit ``devstack/local.conf`` to set up some initial options:
* If you have multiple network interfaces, you need to set ``HOST_IP`` variable
to the IP on the interface you want to use as DevStack's primary.
to the IP on the interface you want to use as DevStack's primary. DevStack
sometimes complains about a missing ``HOST_IP`` even if there is a single
network interface.
* If you already have Docker installed on the machine, you can comment out line
starting with ``enable_plugin devstack-plugin-container``.
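One hedged way to pick a value for ``HOST_IP`` is to parse the source address out of ``ip route get``; here the command's output is mocked with a sample line so the extraction itself is visible:

```shell
# Sample output of `ip -4 route get 1.1.1.1`; on a real host replace the
# assignment with: sample=$(ip -4 route get 1.1.1.1)
sample='1.1.1.1 via 192.168.1.1 dev eth0 src 192.168.1.10 uid 1000'
# Print the field that follows the "src" keyword.
host_ip=$(printf '%s\n' "$sample" \
    | awk '{for (i = 1; i < NF; i++) if ($i == "src") print $(i + 1)}')
echo "HOST_IP=$host_ip"   # prints HOST_IP=192.168.1.10
```

Paste the resulting address into the ``HOST_IP=`` line of ``devstack/local.conf``.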
@@ -60,9 +64,9 @@ Once ``local.conf`` is configured, you can start the installation:
.. code-block:: console
$ ./devstack/stack.sh
$ devstack/stack.sh
Installation takes from 15 to 30 minutes. Once that's done you should see
Installation takes from 20 to 40 minutes. Once that's done you should see
similar output:
.. code-block:: console
@@ -71,50 +75,63 @@ similar output:
DevStack Component Timing
(times are in seconds)
=========================
run_process 5
test_with_retry 2
pip_install 48
osc 121
wait_for_service 1
yum_install 31
dbsync 27
wait_for_service 8
pip_install 137
apt-get 295
run_process 14
dbsync 22
git_timed 168
apt-get-update 4
test_with_retry 3
async_wait 71
osc 200
-------------------------
Unaccounted time 125
Unaccounted time 505
=========================
Total runtime 360
Total runtime 1427
=================
Async summary
=================
Time spent in the background minus waits: 140 sec
Elapsed time: 1427 sec
Time if we did everything serially: 1567 sec
Speedup: 1.09811
This is your host IP address: 192.168.101.249
This is your host IPv6 address: fec0::5054:ff:feb0:213a
Keystone is serving at http://192.168.101.249/identity/
This is your host IP address: 10.0.2.15
This is your host IPv6 address: ::1
Keystone is serving at http://10.0.2.15/identity/
The default users are: admin and demo
The password: password
WARNING:
Using lib/neutron-legacy is deprecated, and it will be removed in the future
The password: pass
Services are running under systemd unit files.
For more information see:
https://docs.openstack.org/devstack/latest/systemd.html
DevStack Version: queens
Change: 301d4d1678c3c1342abc03e51a74574f7792a58b Merge "Use "pip list" in check_libs_from_git" 2017-10-04 07:22:59 +0000
OS Version: CentOS 7.4.1708 Core
DevStack Version: xena
Change:
OS Version: Ubuntu 20.04 focal
You can test DevStack by sourcing credentials and trying some commands:
.. code-block:: console
$ source /devstack/openrc admin admin
$ source devstack/openrc admin admin
$ openstack service list
+----------------------------------+------------------+------------------+
| ID | Name | Type |
+----------------------------------+------------------+------------------+
| 091e3e2813cc4904b74b60c41e8a98b3 | kuryr-kubernetes | kuryr-kubernetes |
| 2b6076dd5fc04bf180e935f78c12d431 | neutron | network |
| b598216086944714aed2c233123fc22d | keystone | identity |
| 07e985b425fc4f8a9da20970a26f754a | octavia | load-balancer |
| 1dc08cb4401243848a562c0042d3f40a | neutron | network |
| 35627730938d4a4295f3add6fc826261 | nova | compute |
| 636b43b739e548e0bb369bc41fe1df08 | glance | image |
| 90ef7129985e4e10874d5e4ddb36ea01 | keystone | identity |
| ce177a3f05dc454fb3d43f705ae24dde | kuryr-kubernetes | kuryr-kubernetes |
| d3d6a461a78e4601a14a5e484ec6cdd1 | nova_legacy | compute_legacy |
| d97e5c31b1054a308c5409ee813c0310 | placement | placement |
+----------------------------------+------------------+------------------+
To verify if Kubernetes is running properly, list its nodes and check status of


@@ -36,7 +36,6 @@ ML2 drivers.
nested-vlan
nested-macvlan
nested-dpdk
odl_support
ovn_support
ovn-octavia
containerized


@@ -1,193 +0,0 @@
=========================================
Kuryr Kubernetes OpenDayLight Integration
=========================================
OpenDaylight is a highly available, modular, extensible, scalable and
multi-protocol controller infrastructure built for SDN deployments on modern
heterogeneous multi-vendor networks.
OpenStack can use OpenDaylight as its network management provider through the
Modular Layer 2 (ML2) north-bound plug-in. OpenDaylight manages the network
flows for the OpenStack compute nodes via the OVSDB south-bound plug-in.
Integrating these allows Kuryr to be used to bridge (both baremetal and
nested) containers and VM networking in an OpenDaylight-based OpenStack
deployment. Kuryr acts as the container networking interface for OpenDaylight.
Testing with DevStack
---------------------
The next points describe how to test OpenStack with ODL using DevStack.
We will start by describing how to test the baremetal case on a single host,
and then cover a nested environment where containers are created inside VMs.
Single Node Test Environment
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#. Create a test system.
It's best to use a throwaway dev system for running DevStack. Your best bet is
to use either CentOS 7 or the latest Ubuntu LTS (16.04, Xenial).
#. Create the ``stack`` user.
.. code-block:: console
$ git clone https://opendev.org/openstack-dev/devstack.git
$ sudo ./devstack/tools/create-stack-user.sh
#. Switch to the ``stack`` user and clone DevStack and kuryr-kubernetes.
.. code-block:: console
$ sudo su - stack
$ git clone https://opendev.org/openstack-dev/devstack.git
$ git clone https://opendev.org/openstack/kuryr-kubernetes.git
#. Configure DevStack to use ODL.
kuryr-kubernetes comes with a sample DevStack configuration file for ODL you
can start with. For example, you may want to set some values for the various
PASSWORD variables in that file, or change the LBaaS service provider to
use. Feel free to edit it if you'd like, but it should work as-is.
.. code-block:: console
$ cd devstack
$ cp ../kuryr-kubernetes/devstack/local.conf.odl.sample local.conf
Optionally, the ports pool functionality can be enabled by following:
`How to enable ports pool with devstack`_.
#. Run DevStack.
This is going to take a while. It installs a bunch of packages, clones a
bunch of git repos, and installs everything from these git repos.
.. code-block:: console
$ ./stack.sh
Once DevStack completes successfully, you should see output that looks
something like this:
.. code-block:: console
This is your host IP address: 192.168.5.10
This is your host IPv6 address: ::1
Keystone is serving at http://192.168.5.10/identity/
The default users are: admin and demo
The password: pass
#. Extra configurations.
Devstack does not wire up the public network by default so we must do some
extra steps for floating IP usage as well as external connectivity:
.. code-block:: console
$ sudo ip link set br-ex up
$ sudo ip route add 172.24.4.0/24 dev br-ex
$ sudo ip addr add 172.24.4.1/24 dev br-ex
Then you can create forwarding and NAT rules that will cause "external"
traffic from your instances to get rewritten to your network controller's ip
address and sent out on the network:
.. code-block:: console
$ sudo iptables -A FORWARD -d 172.24.4.0/24 -j ACCEPT
$ sudo iptables -A FORWARD -s 172.24.4.0/24 -j ACCEPT
$ sudo iptables -t nat -I POSTROUTING 1 -s 172.24.4.1/24 -j MASQUERADE
Inspect default Configuration
+++++++++++++++++++++++++++++
In order to check the default configuration, in terms of networks, subnets,
security groups and loadbalancers created upon a successful devstack stacking,
you can check the `Inspect default Configuration`_.
Testing Network Connectivity
++++++++++++++++++++++++++++
Once the environment is ready, we can test that network connectivity works
among pods. To do that check out `Testing Network Connectivity`_.
Nested Containers Test Environment (VLAN)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Another deployment option is the nested-vlan where containers are created
inside OpenStack VMs by using the Trunk ports support. Thus, first we need to
deploy an undercloud devstack environment with the needed components to
create VMs (e.g., Glance, Nova, Neutron, Keystone, ...), as well as the needed
ODL configurations such as enabling the trunk support that will be needed for
the VM. And then install the overcloud deployment inside the VM with the kuryr
components.
Undercloud deployment
+++++++++++++++++++++
The steps to deploy the undercloud environment are the same as described above
for the `Single Node Test Environment`, with the difference being the sample
local.conf to use (step 4), in this case:
.. code-block:: console
$ cd devstack
$ cp ../kuryr-kubernetes/devstack/local.conf.pod-in-vm.undercloud.odl.sample local.conf
The main differences with the default odl local.conf sample are that:
- There is no need to enable the kuryr-kubernetes plugin as this will be
installed inside the VM (overcloud).
- There is no need to enable the kuryr related services as they will also be
installed inside the VM: kuryr-kubernetes, kubernetes-api,
kubernetes-controller-manager, kubernetes-scheduler and kubelet.
- Nova and Glance components need to be enabled to be able to create the VM
where we will install the overcloud.
- The ODL Trunk service plugin needs to be enabled to ensure Trunk ports support.
Once the undercloud deployment has finished, the next steps are related to
create the overcloud VM by using a parent port of a Trunk so that containers
can be created inside with their own networks. To do that we follow the next
steps detailed at `Boot VM with a Trunk Port`_.
Overcloud deployment
++++++++++++++++++++
Once the VM is up and running, we can start with the overcloud configuration.
The steps to perform are the same as without ODL integration, i.e., the same
steps as for ML2/OVS:
#. Log into the VM:
.. code-block:: console
$ ssh -i id_rsa_demo centos@FLOATING_IP
#. Deploy devstack following steps 3 and 4 detailed at
`How to try out nested-pods locally (VLAN + trunk)`_.
Testing Nested Network Connectivity
+++++++++++++++++++++++++++++++++++
Similarly to the baremetal testing, we can create a demo deployment at the
overcloud VM, scale it to any number of pods and expose the service to check if
the deployment was successful. To do that check out
`Testing Nested Network Connectivity`_.
.. _How to enable ports pool with devstack: https://docs.openstack.org/kuryr-kubernetes/latest/installation/devstack/ports-pool.html
.. _Inspect default Configuration: https://docs.openstack.org/kuryr-kubernetes/latest/installation/default_configuration.html
.. _Testing Network Connectivity: https://docs.openstack.org/kuryr-kubernetes/latest/installation/testing_connectivity.html
.. _Boot VM with a Trunk Port: https://docs.openstack.org/kuryr-kubernetes/latest/installation/trunk_ports.html
.. _How to try out nested-pods locally (VLAN + trunk): https://docs.openstack.org/kuryr-kubernetes/latest/installation/devstack/nested-vlan.html
.. _Testing Nested Network Connectivity: https://docs.openstack.org/kuryr-kubernetes/latest/installation/testing_nested_connectivity.html


@@ -10,20 +10,8 @@ devstack:
.. code-block:: console
$ cd devstack
$ cp ../kuryr-kubernetes/devstack/local.conf.ovn.sample local.conf
#. Then, you need to edit the local.conf file and enable ovn provider by
setting:
.. code-block:: bash
# Kuryr K8S-Endpoint driver Octavia provider
# ==========================================
KURYR_EP_DRIVER_OCTAVIA_PROVIDER=ovn
KURYR_K8S_OCTAVIA_MEMBER_MODE=L2
KURYR_ENFORCE_SG_RULES=False
KURYR_LB_ALGORITHM=SOURCE_IP_PORT
$ curl https://opendev.org/openstack/kuryr-kubernetes/raw/branch/master/devstack/local.conf.sample \
-o devstack/local.conf
#. In case you want more Kuryr specific features than provided by the default
handlers and more handlers are enabled, for example, the following enables


@@ -26,16 +26,18 @@ Single Node Test Environment
#. Create a test system.
It's best to use a throwaway dev system for running DevStack. Your best bet
is to use either CentOS 7 or the latest Ubuntu LTS (16.04, Xenial).
is to use the latest Ubuntu LTS (20.04, Focal).
#. Create the ``stack`` user.
#. Optionally create the ``stack`` user. You'll need a user account with
passwordless ``sudo``.
.. code-block:: console
$ git clone https://opendev.org/openstack-dev/devstack.git
$ sudo ./devstack/tools/create-stack-user.sh
$ sudo su - stack
#. Switch to the ``stack`` user and clone DevStack and kuryr-kubernetes.
#. Clone DevStack.
.. code-block:: console
@@ -52,8 +54,8 @@ Single Node Test Environment
.. code-block:: console
$ cd devstack
$ cp ../kuryr-kubernetes/devstack/local.conf.ovn.sample local.conf
$ curl https://opendev.org/openstack/kuryr-kubernetes/raw/branch/master/devstack/local.conf.sample \
-o devstack/local.conf
Note that due to OVN compiling OVS from source at
/usr/local/var/run/openvswitch we need to state at the local.conf that the
@@ -62,6 +64,10 @@ Single Node Test Environment
Optionally, the ports pool functionality can be enabled by following:
:doc:`./ports-pool`
.. note::
Kuryr-kubernetes uses OVN by default.
#. Run DevStack.
This is going to take a while. It installs a bunch of packages, clones a
@@ -69,7 +75,7 @@ Single Node Test Environment
.. code-block:: console
$ ./stack.sh
$ devstack/stack.sh
Once DevStack completes successfully, you should see output that looks
something like this:
@@ -84,7 +90,7 @@ Single Node Test Environment
#. Extra configurations.
Devstack does not wire up the public network by default so we must do some
DevStack does not wire up the public network by default so we must do some
extra steps for floating IP usage as well as external connectivity:
.. code-block:: console
@@ -108,7 +114,7 @@ Inspect default Configuration
+++++++++++++++++++++++++++++
In order to check the default configuration, in terms of networks, subnets,
security groups and loadbalancers created upon a successful devstack stacking,
security groups and loadbalancers created upon a successful DevStack stacking,
you can check the :doc:`../default_configuration`
Testing Network Connectivity
@@ -123,7 +129,7 @@ Nested Containers Test Environment (VLAN)
Another deployment option is the nested-vlan where containers are created
inside OpenStack VMs by using the Trunk ports support. Thus, first we need to
deploy an undercloud devstack environment with the needed components to
deploy an undercloud DevStack environment with the needed components to
create VMs (e.g., Glance, Nova, Neutron, Keystone, ...), as well as the needed
OVN configurations such as enabling the trunk support that will be needed for
the VM. And then install the overcloud deployment inside the VM with the kuryr
@@ -137,11 +143,10 @@ The steps to deploy the undercloud environment are the same described above
for the `Single Node Test Environment`, with the difference being the sample
local.conf to use (step 4), in this case:
.. code-block:: console
$ cd devstack
$ cp ../kuryr-kubernetes/devstack/local.conf.pod-in-vm.undercloud.ovn.sample local.conf
.. code-block:: console
$ curl https://opendev.org/openstack/kuryr-kubernetes/raw/branch/master/devstack/local.conf.pod-in-vm.undercloud.ovn.sample \
-o devstack/local.conf
The main differences with the default ovn local.conf sample are that:
@@ -171,7 +176,7 @@ same steps as for ML2/OVS:
.. code-block:: console
$ ssh -i id_rsa_demo centos@FLOATING_IP
$ ssh -i id_rsa_demo ubuntu@FLOATING_IP
#. Deploy devstack following steps 3 and 4 detailed at :doc:`./nested-vlan`