Correct ordered/unordered lists.

There are a lot of lists in the documentation, and unfortunately they were
used incorrectly. This patch fixes that. For ordered lists we use the
reStructuredText auto-numbering feature.

Change-Id: I77c4aa2fa333a3968daf8cdf9440848c51374582
Roman Dobosz 2019-11-13 10:43:55 +01:00
parent 80b5ecd41b
commit ad4c460093
18 changed files with 576 additions and 586 deletions


@@ -55,16 +55,16 @@ Please see below the component view of the integrated system:
Design Principles
-----------------
1. Loose coupling between integration components.
2. Flexible deployment options to support different project, subnet and
#. Loose coupling between integration components.
#. Flexible deployment options to support different project, subnet and
security groups assignment profiles.
3. The communication of the pod binding data between Controller and CNI driver
#. The communication of the pod binding data between Controller and CNI driver
should rely on existing communication channels, currently added to the pod
metadata via annotations.
4. CNI Driver should not depend on Neutron. It gets all required details
#. CNI Driver should not depend on Neutron. It gets all required details
from Kubernetes API server (currently through Kubernetes annotations),
therefore depending on Controller to perform its translation tasks.
5. Allow different neutron backends to bind Kubernetes pods without code
#. Allow different neutron backends to bind Kubernetes pods without code
modification. This means that both Controller and CNI binding mechanism
should allow loading of the vif management and binding components,
manifested via configuration. If some vendor requires some extra code, it


@@ -179,8 +179,8 @@ not be completely waterproof in all situations (e.g., if there is another
entity using the same device owner name). Consequently, by storing the
information into K8s CRD objects we have the benefit of:
* Calling K8s API instead of Neutron API
* Being sure the recovered ports into the pools were created by
* Calling K8s API instead of Neutron API
* Being sure the recovered ports into the pools were created by
kuryr-controller
In addition to these advantages, moving to CRDs will easier the transition for


@@ -31,56 +31,56 @@ and then cover a nested environment where containers are created inside VMs.
Single Node Test Environment
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1. Create a test system.
#. Create a test system.
It's best to use a throwaway dev system for running DevStack. Your best bet is
to use either Fedora 25 or the latest Ubuntu LTS (16.04, Xenial).
It's best to use a throwaway dev system for running DevStack. Your best bet
is to use either Fedora 25 or the latest Ubuntu LTS (16.04, Xenial).
2. Create the ``stack`` user.
#. Create the ``stack`` user.
.. code-block:: console
.. code-block:: console
$ git clone https://opendev.org/openstack-dev/devstack.git
$ sudo ./devstack/tools/create-stack-user.sh
3. Switch to the ``stack`` user and clone DevStack and kuryr-kubernetes.
#. Switch to the ``stack`` user and clone DevStack and kuryr-kubernetes.
.. code-block:: console
.. code-block:: console
$ sudo su - stack
$ git clone https://opendev.org/openstack-dev/devstack.git
$ git clone https://opendev.org/openstack/kuryr-kubernetes.git
4. Configure DevStack to use Dragonflow.
#. Configure DevStack to use Dragonflow.
kuryr-kubernetes comes with a sample DevStack configuration file for Dragonflow
you can start with. You may change some values for the various variables in
that file, like password settings or what LBaaS service provider to use.
Feel free to edit it if you'd like, but it should work as-is.
kuryr-kubernetes comes with a sample DevStack configuration file for
Dragonflow you can start with. You may change some values for the various
variables in that file, like password settings or what LBaaS service
provider to use. Feel free to edit it if you'd like, but it should work
as-is.
.. code-block:: console
.. code-block:: console
$ cd devstack
$ cp ../kuryr-kubernetes/devstack/local.conf.df.sample local.conf
Optionally, the ports pool funcionality can be enabled by following:
`How to enable ports pool with devstack`_.
Optionally, the ports pool functionality can be enabled by following:
`How to enable ports pool with devstack`_.
5. Run DevStack.
#. Run DevStack.
Expect it to take a while. It installs required packages, clones a bunch
of git repos, and installs everything from these git repos.
Expect it to take a while. It installs required packages, clones a bunch of
git repos, and installs everything from these git repos.
.. code-block:: console
.. code-block:: console
$ ./stack.sh
Once DevStack completes successfully, you should see output that looks
something like this:
Once DevStack completes successfully, you should see output that looks
something like this:
.. code-block:: console
.. code-block:: console
This is your host IP address: 192.168.5.10
This is your host IPv6 address: ::1
@@ -88,13 +88,13 @@ something like this:
The default users are: admin and demo
The password: pass
#. Extra configurations.
6. Extra configurations.
Create NAT rule that will cause "external" traffic from your instances to
get rewritten to your network controller's ip address and sent out on the
network:
Create NAT rule that will cause "external" traffic from your instances to get
rewritten to your network controller's ip address and sent out on the network:
.. code-block:: console
.. code-block:: console
$ sudo iptables -t nat -I POSTROUTING 1 -s 172.24.4.1/24 -j MASQUERADE
@@ -140,21 +140,16 @@ use (step 4), in this case:
The main differences with the default dragonflow local.conf sample are that:
- There is no need to enable the kuryr-kubernetes plugin as this will be
- There is no need to enable the kuryr-kubernetes plugin as this will be
installed inside the VM (overcloud).
- There is no need to enable the kuryr related services as they will also
be installed inside the VM: kuryr-kubernetes, kubelet,
kubernetes-api, kubernetes-controller-manager, kubernetes-scheduler and
kubelet.
- Nova and Glance components need to be enabled to be able to create the VM
- There is no need to enable the kuryr related services as they will also be
installed inside the VM: kuryr-kubernetes, kubelet, kubernetes-api,
kubernetes-controller-manager, kubernetes-scheduler and kubelet.
- Nova and Glance components need to be enabled to be able to create the VM
where we will install the overcloud.
- Dragonflow Trunk service plugin need to be enable to ensure Trunk ports
- Dragonflow Trunk service plugin need to be enable to ensure Trunk ports
support.
Once the undercloud deployment has finished, the next steps are related to
creating the overcloud VM by using a parent port of a Trunk so that containers
can be created inside with their own networks. To do that we follow the next
@@ -168,17 +163,16 @@ Once the VM is up and running, we can start with the overcloud configuration.
The steps to perform are the same as without Dragonflow integration, i.e., the
same steps as for ML2/OVS:
1. Log in into the VM:
#. Log in into the VM:
.. code-block:: console
$ ssh -i id_rsa_demo centos@FLOATING_IP
2. Deploy devstack following steps 3 and 4 detailed at
#. Deploy devstack following steps 3 and 4 detailed at
`How to try out nested-pods locally (VLAN + trunk)`_.
Testing Nested Network Connectivity
+++++++++++++++++++++++++++++++++++


@@ -5,22 +5,22 @@ How to try out nested-pods locally (MACVLAN)
Following are the instructions for an all-in-one setup, using the
nested MACVLAN driver rather than VLAN and trunk ports.
1. To install OpenStack services run devstack with
#. To install OpenStack services run devstack with
``devstack/local.conf.pod-in-vm.undercloud.sample``.
2. Launch a Nova VM with MACVLAN support
#. Launch a Nova VM with MACVLAN support
.. todo::
.. todo::
Add a list of neutron commands, required to launch a such a VM
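As a purely editorial illustration of what that missing list might contain (the
network, image, flavor and allowed address range below are hypothetical
placeholders, not part of the patched docs), such a VM is usually booted from a
pre-created port whose allowed address pairs cover the pod addresses:

.. code-block:: console

   # Names and CIDR are assumptions; adapt them to your deployment.
   $ openstack port create --network private --security-group default macvlan-parent-port
   $ openstack port set --allowed-address ip-address=10.1.0.0/24 macvlan-parent-port
   $ openstack server create --image centos7 --flavor ds4G --nic port-id=macvlan-parent-port --key-name demo overcloud_vm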
3. Log into the VM and set up Kubernetes along with Kuryr using devstack:
#. Log into the VM and set up Kubernetes along with Kuryr using devstack:
- Since undercloud Neutron will be used by pods, Neutron services should be
disabled in localrc.
- Run devstack with ``devstack/local.conf.pod-in-vm.overcloud.sample``.
Fill in the needed information, such as the subnet pool id to use or the
router.
4. Once devstack is done and all services are up inside VM. Next steps are to
#. Once devstack is done and all services are up inside VM. Next steps are to
configure the missing information at ``/etc/kuryr/kuryr.conf``:
- Configure worker VMs subnet:
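As a rough sketch only (the section and option name are an assumption on the
editor's part, not taken from this change), the worker VM subnet typically ends
up in ``/etc/kuryr/kuryr.conf`` along these lines:

.. code-block:: ini

   # Assumed section/option names; check the kuryr.conf reference for your release.
   [pod_vif_nested]
   worker_nodes_subnet = <UUID of the subnet the overcloud VM is plugged into>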


@@ -7,7 +7,7 @@ also be running inside the same Nova VM in which Kuryr-controller and Kuryr-cni
will be running. 4GB memory and 2 vCPUs, is the minimum resource requirement
for the VM:
1. To install OpenStack services run devstack with
#. To install OpenStack services run devstack with
``devstack/local.conf.pod-in-vm.undercloud.sample``. Ensure that "trunk"
service plugin is enabled in ``/etc/neutron/neutron.conf``:
@@ -16,14 +16,15 @@ for the VM:
[DEFAULT]
service_plugins = neutron.services.l3_router.l3_router_plugin.L3RouterPlugin,neutron.services.trunk.plugin.TrunkPlugin
2. Launch a VM with `Neutron trunk port`_. The next steps can be followed:
#. Launch a VM with `Neutron trunk port`_. The next steps can be followed:
`Boot VM with a Trunk Port`_.
3. Inside VM, install and setup Kubernetes along with Kuryr using devstack:
#. Inside VM, install and setup Kubernetes along with Kuryr using devstack:
- Since undercloud Neutron will be used by pods, Neutron services should be
disabled in localrc.
- Run devstack with ``devstack/local.conf.pod-in-vm.overcloud.sample``.
but first fill in the needed information:
But first fill in the needed information:
- Point to the undercloud deployment by setting:
@@ -31,9 +32,8 @@ for the VM:
SERVICE_HOST=UNDERCLOUD_CONTROLLER_IP
- Fill in the subnetpool id of the undercloud deployment, as well as
the router where the new pod and service networks need to be
connected:
- Fill in the subnetpool id of the undercloud deployment, as well as the
router where the new pod and service networks need to be connected:
.. code-block:: bash
@@ -49,14 +49,14 @@ for the VM:
- Optionally, the ports pool funcionality can be enabled by following:
`How to enable ports pool with devstack`_.
- [OPTIONAL] If you want to enable the subport pools driver and the
VIF Pool Manager you need to include:
- [OPTIONAL] If you want to enable the subport pools driver and the VIF
Pool Manager you need to include:
.. code-block:: bash
KURYR_VIF_POOL_MANAGER=True
4. Once devstack is done and all services are up inside VM. Next steps are to
#. Once devstack is done and all services are up inside VM. Next steps are to
configure the missing information at ``/etc/kuryr/kuryr.conf``:
- Configure worker VMs subnet:


@@ -26,19 +26,19 @@ and then cover a nested environment where containers are created inside VMs.
Single Node Test Environment
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1. Create a test system.
#. Create a test system.
It's best to use a throwaway dev system for running DevStack. Your best bet is
to use either CentOS 7 or the latest Ubuntu LTS (16.04, Xenial).
2. Create the ``stack`` user.
#. Create the ``stack`` user.
.. code-block:: console
$ git clone https://opendev.org/openstack-dev/devstack.git
$ sudo ./devstack/tools/create-stack-user.sh
3. Switch to the ``stack`` user and clone DevStack and kuryr-kubernetes.
#. Switch to the ``stack`` user and clone DevStack and kuryr-kubernetes.
.. code-block:: console
@@ -46,32 +46,32 @@ to use either CentOS 7 or the latest Ubuntu LTS (16.04, Xenial).
$ git clone https://opendev.org/openstack-dev/devstack.git
$ git clone https://opendev.org/openstack/kuryr-kubernetes.git
4. Configure DevStack to use ODL.
#. Configure DevStack to use ODL.
kuryr-kubernetes comes with a sample DevStack configuration file for ODL you
can start with. For example, you may want to set some values for the various
PASSWORD variables in that file, or change the LBaaS service provider to use.
Feel free to edit it if you'd like, but it should work as-is.
kuryr-kubernetes comes with a sample DevStack configuration file for ODL you
can start with. For example, you may want to set some values for the various
PASSWORD variables in that file, or change the LBaaS service provider to
use. Feel free to edit it if you'd like, but it should work as-is.
.. code-block:: console
$ cd devstack
$ cp ../kuryr-kubernetes/devstack/local.conf.odl.sample local.conf
Optionally, the ports pool funcionality can be enabled by following:
`How to enable ports pool with devstack`_.
Optionally, the ports pool functionality can be enabled by following:
`How to enable ports pool with devstack`_.
5. Run DevStack.
#. Run DevStack.
This is going to take a while. It installs a bunch of packages, clones a bunch
of git repos, and installs everything from these git repos.
This is going to take a while. It installs a bunch of packages, clones a
bunch of git repos, and installs everything from these git repos.
.. code-block:: console
$ ./stack.sh
Once DevStack completes successfully, you should see output that looks
something like this:
Once DevStack completes successfully, you should see output that looks
something like this:
.. code-block:: console
@@ -81,10 +81,10 @@ something like this:
The default users are: admin and demo
The password: pass
6. Extra configurations.
#. Extra configurations.
Devstack does not wire up the public network by default so we must do
some extra steps for floating IP usage as well as external connectivity:
Devstack does not wire up the public network by default so we must do some
extra steps for floating IP usage as well as external connectivity:
.. code-block:: console
@@ -92,9 +92,9 @@ some extra steps for floating IP usage as well as external connectivity:
$ sudo ip route add 172.24.4.0/24 dev br-ex
$ sudo ip addr add 172.24.4.1/24 dev br-ex
Then you can create forwarding and NAT rules that will cause "external"
traffic from your instances to get rewritten to your network controller's
ip address and sent out on the network:
Then you can create forwarding and NAT rules that will cause "external"
traffic from your instances to get rewritten to your network controller's ip
address and sent out on the network:
.. code-block:: console
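   # Illustrative sketch (not part of the original change): the NAT rule is the
   # same one shown earlier for the Dragonflow single-node setup.
   $ sudo iptables -t nat -I POSTROUTING 1 -s 172.24.4.1/24 -j MASQUERADE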
@@ -144,18 +144,14 @@ local.conf to use (step 4), in this case:
The main differences with the default odl local.conf sample are that:
- There is no need to enable the kuryr-kubernetes plugin as this will be
- There is no need to enable the kuryr-kubernetes plugin as this will be
installed inside the VM (overcloud).
- There is no need to enable the kuryr related services as they will also
be installed inside the VM: kuryr-kubernetes, kubelet,
kubernetes-api, kubernetes-controller-manager, kubernetes-scheduler and
kubelet.
- Nova and Glance components need to be enabled to be able to create the VM
- There is no need to enable the kuryr related services as they will also be
installed inside the VM: kuryr-kubernetes, kubelet, kubernetes-api,
kubernetes-controller-manager, kubernetes-scheduler and kubelet.
- Nova and Glance components need to be enabled to be able to create the VM
where we will install the overcloud.
- ODL Trunk service plugin need to be enable to ensure Trunk ports support.
- ODL Trunk service plugin need to be enable to ensure Trunk ports support.
Once the undercloud deployment has finished, the next steps are related to
create the overcloud VM by using a parent port of a Trunk so that containers
@@ -167,16 +163,16 @@ Overcloud deployment
++++++++++++++++++++
Once the VM is up and running, we can start with the overcloud configuration.
The steps to perform are the same as without ODL integration, i.e., the
same steps as for ML2/OVS:
The steps to perform are the same as without ODL integration, i.e., the same
steps as for ML2/OVS:
1. Log in into the VM:
#. Log in into the VM:
.. code-block:: console
$ ssh -i id_rsa_demo centos@FLOATING_IP
2. Deploy devstack following steps 3 and 4 detailed at
#. Deploy devstack following steps 3 and 4 detailed at
`How to try out nested-pods locally (VLAN + trunk)`_.


@@ -23,19 +23,19 @@ and then cover a nested environment where containers are created inside VMs.
Single Node Test Environment
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1. Create a test system.
#. Create a test system.
It's best to use a throwaway dev system for running DevStack. Your best bet is
to use either CentOS 7 or the latest Ubuntu LTS (16.04, Xenial).
It's best to use a throwaway dev system for running DevStack. Your best bet
is to use either CentOS 7 or the latest Ubuntu LTS (16.04, Xenial).
2. Create the ``stack`` user.
#. Create the ``stack`` user.
.. code-block:: console
$ git clone https://opendev.org/openstack-dev/devstack.git
$ sudo ./devstack/tools/create-stack-user.sh
3. Switch to the ``stack`` user and clone DevStack and kuryr-kubernetes.
#. Switch to the ``stack`` user and clone DevStack and kuryr-kubernetes.
.. code-block:: console
@@ -43,36 +43,38 @@ to use either CentOS 7 or the latest Ubuntu LTS (16.04, Xenial).
$ git clone https://opendev.org/openstack-dev/devstack.git
$ git clone https://opendev.org/openstack/kuryr-kubernetes.git
4. Configure DevStack to use OVN.
#. Configure DevStack to use OVN.
kuryr-kubernetes comes with a sample DevStack configuration file for OVN you
can start with. For example, you may want to set some values for the various
PASSWORD variables in that file, or change the LBaaS service provider to use.
Feel free to edit it if you'd like, but it should work as-is.
kuryr-kubernetes comes with a sample DevStack configuration file for OVN you
can start with. For example, you may want to set some values for the various
PASSWORD variables in that file, or change the LBaaS service provider to
use. Feel free to edit it if you'd like, but it should work as-is.
.. code-block:: console
$ cd devstack
$ cp ../kuryr-kubernetes/devstack/local.conf.ovn.sample local.conf
Note that due to OVN compiling OVS from source at
/usr/local/var/run/openvswitch we need to state at the local.conf that the path
is different from the default one (i.e., /var/run/openvswitch).
Note that due to OVN compiling OVS from source at
/usr/local/var/run/openvswitch we need to state at the local.conf that the
path is different from the default one (i.e., /var/run/openvswitch).
Optionally, the ports pool functionality can be enabled by following:
:doc:`./ports-pool`
Optionally, the ports pool functionality can be enabled by following:
:doc:`./ports-pool`
5. Run DevStack.
#. Run DevStack.
This is going to take a while. It installs a bunch of packages, clones a bunch
of git repos, and installs everything from these git repos.
This is going to take a while. It installs a bunch of packages, clones a
bunch of git repos, and installs everything from these git repos.
.. code-block:: console
$ ./stack.sh
Once DevStack completes successfully, you should see output that looks
something like this::
Once DevStack completes successfully, you should see output that looks
something like this:
.. code-block::
This is your host IP address: 192.168.5.10
This is your host IPv6 address: ::1
@@ -80,11 +82,10 @@ something like this::
The default users are: admin and demo
The password: pass
#. Extra configurations.
6. Extra configurations.
Devstack does not wire up the public network by default so we must do
some extra steps for floating IP usage as well as external connectivity:
Devstack does not wire up the public network by default so we must do some
extra steps for floating IP usage as well as external connectivity:
.. code-block:: console
@@ -92,9 +93,9 @@ some extra steps for floating IP usage as well as external connectivity:
$ sudo ip route add 172.24.4.0/24 dev br-ex
$ sudo ip addr add 172.24.4.1/24 dev br-ex
Then you can create forwarding and NAT rules that will cause "external"
traffic from your instances to get rewritten to your network controller's
ip address and sent out on the network:
Then you can create forwarding and NAT rules that will cause "external"
traffic from your instances to get rewritten to your network controller's ip
address and sent out on the network:
.. code-block:: console
@@ -141,21 +142,17 @@ local.conf to use (step 4), in this case:
$ cd devstack
$ cp ../kuryr-kubernetes/devstack/local.conf.pod-in-vm.undercloud.ovn.sample local.conf
The main differences with the default ovn local.conf sample are that:
- There is no need to enable the kuryr-kubernetes plugin as this will be
- There is no need to enable the kuryr-kubernetes plugin as this will be
installed inside the VM (overcloud).
- There is no need to enable the kuryr related services as they will also
be installed inside the VM: kuryr-kubernetes, kubelet,
kubernetes-api, kubernetes-controller-manager, kubernetes-scheduler and
kubelet.
- Nova and Glance components need to be enabled to be able to create the VM
- There is no need to enable the kuryr related services as they will also be
installed inside the VM: kuryr-kubernetes, kubelet, kubernetes-api,
kubernetes-controller-manager, kubernetes-scheduler and kubelet.
- Nova and Glance components need to be enabled to be able to create the VM
where we will install the overcloud.
- OVN Trunk service plugin need to be enable to ensure Trunk ports support.
- OVN Trunk service plugin need to be enable to ensure Trunk ports support.
Once the undercloud deployment has finished, the next steps are related to
create the overcloud VM by using a parent port of a Trunk so that containers
@@ -170,13 +167,13 @@ Once the VM is up and running, we can start with the overcloud configuration.
The steps to perform are the same as without OVN integration, i.e., the
same steps as for ML2/OVS:
1. Log in into the VM:
#. Log in into the VM:
.. code-block:: console
$ ssh -i id_rsa_demo centos@FLOATING_IP
2. Deploy devstack following steps 3 and 4 detailed at :doc:`./nested-vlan`
#. Deploy devstack following steps 3 and 4 detailed at :doc:`./nested-vlan`
Testing Nested Network Connectivity


@@ -5,13 +5,13 @@ How to enable ports pool with devstack
To enable the utilization of the ports pool feature through devstack, the next
options needs to be set at the local.conf file:
1. First, you need to enable the pools by setting:
#. First, you need to enable the pools by setting:
.. code-block:: bash
KURYR_USE_PORT_POOLS=True
2. Then, the proper pool driver needs to be set. This means that for the
#. Then, the proper pool driver needs to be set. This means that for the
baremetal case you need to ensure the pod vif driver and the vif pool driver
are set to the right baremetal drivers, for instance:
@@ -27,7 +27,7 @@ options needs to be set at the local.conf file:
KURYR_POD_VIF_DRIVER=nested-vlan
KURYR_VIF_POOL_DRIVER=nested
3. Then, in case you want to set a limit to the maximum number of ports, or
#. Then, in case you want to set a limit to the maximum number of ports, or
increase/reduce the default one for the minimum number, as well as to modify
the way the pools are repopulated, both in time as well as regarding bulk
operation sizes, the next option can be included and modified accordingly:
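The example block itself falls outside this hunk; as an illustration only
(these DevStack variable names are an assumption based on the kuryr-kubernetes
DevStack plugin, not taken from this change), the tuning knobs look roughly
like:

.. code-block:: bash

   # Assumed variable names; 0 for the maximum usually means unlimited.
   KURYR_PORT_POOL_MIN=5
   KURYR_PORT_POOL_MAX=0
   KURYR_PORT_POOL_BATCH=10
   KURYR_PORT_POOL_UPDATE_FREQUENCY=20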


@@ -6,17 +6,17 @@ To create pods with additional Interfaces follow the `Kubernetes Network Custom
Resource Definition De-facto Standard Version 1`_, the next steps can be
followed:
1. Create Neutron net/subnets which you want the additional interfaces attach
#. Create Neutron net/subnets which you want the additional interfaces attach
to.
.. code-block:: bash
.. code-block:: console
$ openstack network create net-a
$ openstack subnet create subnet-a --subnet-range 192.0.2.0/24 --network net-a
2. Create CRD of 'NetworkAttachmentDefinition' as defined in NPWG spec.
#. Create CRD of 'NetworkAttachmentDefinition' as defined in NPWG spec.
.. code-block:: bash
.. code-block:: console
$ cat << EOF > nad.yaml
apiVersion: apiextensions.k8s.io/v1beta1
@@ -43,10 +43,10 @@ followed:
EOF
$ kubectl apply -f nad.yaml
3. Create NetworkAttachmentDefinition object with the UUID of Neutron subnet
defined in step 1.
#. Create NetworkAttachmentDefinition object with the UUID of Neutron subnet
defined in step 1.
.. code-block:: bash
.. code-block:: console
$ cat << EOF > net-a.yaml
apiVersion: "k8s.cni.cncf.io/v1"
@@ -60,17 +60,17 @@ defined in step 1.
EOF
$ kubectl apply -f net-a.yaml
4. Enable the multi-vif driver by setting 'multi_vif_drivers' in kuryr.conf.
#. Enable the multi-vif driver by setting 'multi_vif_drivers' in kuryr.conf.
Then restart kuryr-controller.
.. code-block:: ini
.. code-block:: ini
[kubernetes]
multi_vif_drivers = npwg_multiple_interfaces
5. Add additional interfaces to pods definition. e.g.
.. code-block:: bash
.. code-block:: console
$ cat << EOF > pod.yaml
apiVersion: v1
@@ -88,7 +88,8 @@ defined in step 1.
EOF
$ kubectl apply -f pod.yaml
You may put a list of network separated with comma to attach Pods to more networks.
You may put a list of network separated with comma to attach Pods to more
networks.
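For example, attaching a pod to a second (hypothetical) network ``net-b`` in
addition to ``net-a`` uses a comma-separated value in the NPWG
``k8s.v1.cni.cncf.io/networks`` annotation:

.. code-block:: yaml

   metadata:
     annotations:
       k8s.v1.cni.cncf.io/networks: net-a,net-b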
.. _Kubernetes Network Custom Resource Definition De-facto Standard Version 1: https://docs.google.com/document/d/1Ny03h6IDVy_e_vmElOqR7UdTPAG_RNydhVE1Kx54kFQ/edit?usp=sharing


@@ -5,7 +5,7 @@ Enable network per namespace functionality (handler + driver)
To enable the subnet driver that creates a new network for each new namespace
the next steps are needed:
1. Enable the namespace handler to reach to namespace events, in this case,
#. Enable the namespace handler to reach to namespace events, in this case,
creation and deletion. To do that you need to add it to the list of the
enabled handlers at kuryr.conf (details on how to edit this for
containerized deployment can be found at :doc:`./devstack/containerized`):
@@ -24,7 +24,7 @@ the next steps are needed:
[kubernetes]
enabled_handlers=vif,lb,lbaasspec,namespace,kuryrnet
2. Enable the namespace subnet driver by modifying the default
#. Enable the namespace subnet driver by modifying the default
pod_subnet_driver option at kuryr.conf:
.. code-block:: ini
@@ -43,7 +43,7 @@ the next steps are needed:
pod_security_groups_driver = namespace
service_security_groups_driver = namespace
3. Select (and create if needed) the subnet pool from where the new subnets
#. Select (and create if needed) the subnet pool from where the new subnets
will get their CIDR (e.g., the default on devstack deployment is
shared-default-subnetpool-v4):
@@ -52,7 +52,7 @@ the next steps are needed:
[namespace_subnet]
pod_subnet_pool = SUBNET_POOL_ID
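If a suitable subnet pool does not exist yet, one can be created roughly as
follows (the pool name and prefixes are illustrative); the resulting UUID is
what goes into ``pod_subnet_pool`` above:

.. code-block:: console

   $ openstack subnet pool create --default-prefix-length 26 \
         --pool-prefix 10.2.0.0/16 namespace-subnetpool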
4. Select (and create if needed) the router where the new subnet will be
#. Select (and create if needed) the router where the new subnet will be
connected (e.g., the default on devstack deployments is router1):
.. code-block:: ini
@@ -64,7 +64,7 @@ the next steps are needed:
requirements between pod, service and public subnets, as in the case for
the default subnet driver.
5. Select (and create if needed) the security groups to be attached to the
#. Select (and create if needed) the security groups to be attached to the
pods at the default namespace and to the others, enabling the cross access
between them:
@@ -108,14 +108,14 @@ to add the namespace handler and state the namespace subnet driver with:
Testing the network per namespace functionality
-----------------------------------------------
1. Create two namespaces:
#. Create two namespaces:
.. code-block:: console
$ kubectl create namespace test1
$ kubectl create namespace test2
2. Check resources has been created:
#. Check resources has been created:
.. code-block:: console
@@ -136,7 +136,7 @@ Testing the network per namespace functionality
$ openstack subnet list | grep test1
| 8640d134-5ea2-437d-9e2a-89236f6c0198 | ns/test1-subnet | 7c7b68c5-d3c4-431c-9f69-fbc777b43ee5 | 10.0.1.128/26 |
3. Create a pod in the created namespaces:
#. Create a pod in the created namespaces:
.. code-block:: console
@@ -154,7 +154,7 @@ Testing the network per namespace functionality
NAME READY STATUS RESTARTS AGE IP NODE
demo-5135352253-dfghd 1/1 Running 0 7s 10.0.1.134 node1
4. Create a service:
#. Create a service:
.. code-block:: console
@@ -165,7 +165,7 @@ Testing the network per namespace functionality
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
demo ClusterIP 10.0.0.141 <none> 80/TCP 18s
5. Test service connectivity from both namespaces:
#. Test service connectivity from both namespaces:
.. code-block:: console
@@ -177,7 +177,7 @@ Testing the network per namespace functionality
test-2-pod$ curl 10.0.0.141
## No response
6. And finally, to remove the namespace and all its resources, including
#. And finally, to remove the namespace and all its resources, including
openstack networks, kuryrnet CRD, svc, pods, you just need to do:
.. code-block:: console
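   # Illustrative sketch (not part of the original change); deleting the
   # namespace triggers the cleanup described above (test1 was created in step 1).
   $ kubectl delete namespace test1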


@@ -100,7 +100,7 @@ to add the policy, pod_label and namespace handler and drivers with:
Testing the network policy support functionality
------------------------------------------------
1. Given a yaml file with a network policy, such as:
#. Given a yaml file with a network policy, such as:
.. code-block:: yaml
@@ -133,13 +133,13 @@ Testing the network policy support functionality
- protocol: TCP
port: 5978
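Only the tail of that policy is visible in this hunk; for context, a complete
policy of this shape might look like the sketch below. The selectors and the
ingress port are reconstructed from later steps of this walkthrough (the
resulting security group is ``sg-test-network-policy`` and the
namespaceSelector matches ``project: default``), so treat the details as
assumptions rather than the exact file used here:

.. code-block:: yaml

   apiVersion: networking.k8s.io/v1
   kind: NetworkPolicy
   metadata:
     name: test-network-policy
     namespace: default
   spec:
     podSelector:
       matchLabels:
         project: default
     policyTypes:
     - Ingress
     - Egress
     ingress:
     - from:
       - namespaceSelector:
           matchLabels:
             project: default
       ports:
       - protocol: TCP
         port: 6379
     egress:
     - ports:
       - protocol: TCP
         port: 5978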
2. Apply the network policy:
#. Apply the network policy:
.. code-block:: console
$ kubectl apply -f network_policy.yml
3. Check that the resources has been created:
#. Check that the resources has been created:
.. code-block:: console
@@ -154,7 +154,7 @@ Testing the network policy support functionality
$ openstack security group list | grep sg-test-network-policy
| dabdf308-7eed-43ef-a058-af84d1954acb | sg-test-network-policy
4. Check that the rules are in place for the security group:
#. Check that the rules are in place for the security group:
.. code-block:: console
@@ -230,7 +230,7 @@ Testing the network policy support functionality
| tcp | 5978:5978 | egress |
+-------------+------------+-----------+
5. Create a pod:
#. Create a pod:
.. code-block:: console
@@ -241,7 +241,7 @@ Testing the network policy support functionality
NAME READY STATUS RESTARTS AGE IP
demo-5558c7865d-fdkdv 1/1 Running 0 44s 10.0.0.68
6. Get the pod port and check its security group rules:
#. Get the pod port and check its security group rules:
.. code-block:: console
@@ -260,13 +260,13 @@ Testing the network policy support functionality
| tcp | 5978:5978 | egress |
+-------------+------------+-----------+
7. Try to curl the pod on port 8080 (hint: it won't work!):
#. Try to curl the pod on port 8080 (hint: it won't work!):
.. code-block:: console
$ curl 10.0.0.68:8080
8. Update network policy to allow ingress 8080 port:
#. Update network policy to allow ingress 8080 port:
.. code-block:: console
@@ -343,19 +343,18 @@ Testing the network policy support functionality
| tcp | 5978:5978 | egress |
+-------------+------------+-----------+
9. Try to curl the pod ip after patching the network policy:
#. Try to curl the pod ip after patching the network policy:
.. code-block:: console
$ curl 10.0.0.68:8080
demo-5558c7865d-fdkdv: HELLO! I AM ALIVE!!!
Note the curl only works from pods (neutron ports) on a namespace that has
the label `project: default` as stated on the policy namespaceSelector.
Note the curl only works from pods (neutron ports) on a namespace that has
the label `project: default` as stated on the policy namespaceSelector.
10. We can also create a single pod, without a label and check that there is
no connectivity to it, as it does not match the network policy
podSelector:
#. We can also create a single pod, without a label and check that there is no
connectivity to it, as it does not match the network policy podSelector:
.. code-block:: console
@@ -374,7 +373,7 @@ the label `project: default` as stated on the policy namespaceSelector.
$ curl demo-pod-IP:8080
NO REPLY
11. If we add to the pod a label that match a network policy podSelector, in
#. If we add to the pod a label that match a network policy podSelector, in
this case 'project: default', the network policy will get applied on the
pod, and the traffic will be allowed:
@@ -384,7 +383,7 @@ the label `project: default` as stated on the policy namespaceSelector.
$ curl demo-pod-IP:8080
demo-pod-XXX: HELLO! I AM ALIVE!!!
12. Confirm the teardown of the resources once the network policy is removed:
#. Confirm the teardown of the resources once the network policy is removed:
.. code-block:: console


@@ -84,14 +84,14 @@ Router:
Configure Kuryr to support L7 Router and OCP-Route resources
------------------------------------------------------------
1. Configure the L7 Router by adding the LB UUID at kuryr.conf:
#. Configure the L7 Router by adding the LB UUID at kuryr.conf:
.. code-block:: ini
[ingress]
l7_router_uuid = 99f580e6-d894-442a-bc5f-4d14b41e10d2
2. Enable the ocp-route and k8s-endpoint handlers. For that you need to add
#. Enable the ocp-route and k8s-endpoint handlers. For that you need to add
this handlers to the enabled handlers list at kuryr.conf (details on how to
edit this for containerized deployment can be found at
:doc:`./devstack/containerized`):
@@ -127,7 +127,7 @@ with devstack, you just need to add the following at local.conf file:
Testing OCP-Route functionality
-------------------------------
1. Create a service:
#. Create a service:
.. code-block:: console
@@ -135,7 +135,7 @@ Testing OCP-Route functionality
$ oc scale dc/kuryr-demo --replicas=2
$ oc expose dc/kuryr-demo --port 80 --target-port 8080
2. Create a Route object pointing to above service (kuryr-demo):
#. Create a Route object pointing to above service (kuryr-demo):
.. code-block:: console
@@ -152,7 +152,7 @@ Testing OCP-Route functionality
> EOF
$ oc create -f route.yaml
3. Curl L7 router's FIP using specified hostname:
#. Curl L7 router's FIP using specified hostname:
.. code-block:: console
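   # Illustrative sketch (not part of the original change): the hostname and
   # the L7 router floating IP are hypothetical placeholders for the values
   # used when creating the Route and the router.
   $ curl --header 'Host: www.example.com' 172.24.4.10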


@@ -93,13 +93,12 @@ pools just by restarting the kuryr-controller (or even before installing it).
To do that you just need to ensure the ports are created with the right
device_owner:
- For neutron pod driver: compute:kuryr (of the value at
- For neutron pod driver: compute:kuryr (of the value at
kuryr.lib.constants.py)
- For nested-vlan pod driver: trunk:subport or compute:kuryr (or the value
at kuryr.lib.constants.py). But in this case they also need to be
attached to an active neutron trunk port, i.e., they need to be subports
of an existing trunk
- For nested-vlan pod driver: trunk:subport or compute:kuryr (or the value at
kuryr.lib.constants.py). But in this case they also need to be attached to an
active neutron trunk port, i.e., they need to be subports of an existing
trunk
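For instance, a port that the recovery logic of the neutron pod driver would
pick up could be pre-created along these lines (the network and port names are
placeholders, not part of the patched docs):

.. code-block:: console

   $ openstack port create --network pod-network --device-owner compute:kuryr precreated-port-0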
Subports pools management tool


@@ -425,15 +425,17 @@ The services and pods subnets should be created.
#. For the external services (type=LoadBalancer) case,
two methods are supported:
* Pool - external IPs are allocated from pre-defined pool
* User - user specify the external IP address
+ Pool - external IPs are allocated from pre-defined pool
+ User - user specify the external IP address
In case 'Pool' method should be supported, execute the next steps
In case 'Pool' method should be supported, execute the next steps:
A. Create an external/provider network
B. Create subnet/pool range of external CIDR
C. Connect external subnet to kuryr-kubernetes router
D. Configure external network details in Kuryr.conf as follows:
#. Create an external/provider network
#. Create subnet/pool range of external CIDR
#. Connect external subnet to kuryr-kubernetes router
#. Configure external network details in Kuryr.conf as follows:
.. code-block:: ini
[neutron_defaults]
external_svc_net= <id of external network>
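As a rough sketch of the first three steps (the names and CIDR are
illustrative; the ``kuryr.conf`` snippet above covers the last one):

.. code-block:: console

   $ openstack network create --external external-svc-net
   $ openstack subnet create --network external-svc-net --no-dhcp \
         --subnet-range 172.30.0.0/24 external-svc-subnet
   $ openstack router add subnet router1 external-svc-subnet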


@@ -8,34 +8,34 @@ Current approach of SR-IOV relies on `sriov-device-plugin`_. While creating
pods with SR-IOV, sriov-device-plugin should be turned on on all nodes. To use
a SR-IOV port on a baremetal installation the 3 following steps should be done:
1. Create OpenStack network and subnet for SR-IOV.
Following steps should be done with admin rights.
#. Create OpenStack network and subnet for SR-IOV. Following steps should be
done with admin rights.
.. code-block:: console
.. code-block:: console
neutron net-create vlan-sriov-net --shared --provider:physical_network physnet10_4 --provider:network_type vlan --provider:segmentation_id 3501
neutron subnet-create vlan-sriov-net 203.0.114.0/24 --name vlan-sriov-subnet --gateway 203.0.114.1
$ neutron net-create vlan-sriov-net --shared --provider:physical_network physnet10_4 --provider:network_type vlan --provider:segmentation_id 3501
$ neutron subnet-create vlan-sriov-net 203.0.114.0/24 --name vlan-sriov-subnet --gateway 203.0.114.1
Subnet id <UUID of vlan-sriov-net> will be used later in NetworkAttachmentDefinition.
Subnet id <UUID of vlan-sriov-net> will be used later in NetworkAttachmentDefinition.
2. Add sriov section into kuryr.conf.
#. Add sriov section into kuryr.conf.
.. code-block:: ini
.. code-block:: ini
[sriov]
physical_device_mappings = physnet1:ens4f0
default_physnet_subnets = physnet1:<UUID of vlan-sriov-net>
This mapping is required for ability to find appropriate PF/VF functions at
binding phase. physnet1 is just an identifier for subnet <UUID of
vlan-sriov-net>. Such kind of transition is necessary to support many-to-many
relation.
This mapping is required for ability to find appropriate PF/VF functions at
binding phase. physnet1 is just an identifier for subnet <UUID of
vlan-sriov-net>. Such kind of transition is necessary to support
many-to-many relation.
3. Prepare NetworkAttachmentDefinition object.
Apply NetworkAttachmentDefinition with "sriov" driverType inside,
as described in `NPWG spec`_.
#. Prepare NetworkAttachmentDefinition object. Apply
NetworkAttachmentDefinition with "sriov" driverType inside, as described in
`NPWG spec`_.
.. code-block:: yaml
.. code-block:: yaml
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
@@ -47,10 +47,11 @@ as described in `NPWG spec`_.
"driverType": "sriov"
}'
Then add k8s.v1.cni.cncf.io/networks and request/limits for SR-IOV
into the pod's yaml.
.. code-block:: yaml
Then add k8s.v1.cni.cncf.io/networks and request/limits for SR-IOV into the
pod's yaml.
.. code-block:: yaml
kind: Pod
metadata:
@@ -70,30 +71,30 @@ into the pod's yaml.
limits:
intel.com/sriov: '2'
In the above example two SR-IOV devices will be attached to pod. First one is
described in sriov-net1 NetworkAttachmentDefinition, second one in sriov-net2.
They may have different subnetId.
In the above example two SR-IOV devices will be attached to pod. First one
is described in sriov-net1 NetworkAttachmentDefinition, second one in
sriov-net2. They may have different subnetId.
4. Specify resource names
#. Specify resource names
The resource name *intel.com/sriov*, which used in the above example is the
default resource name. This name was used in SR-IOV network device plugin in
version 1 (release-v1 branch). But since latest version the device plugin can
use any arbitrary name of the resources (see `SRIOV network device plugin for
Kubernetes`_). This name should match "^\[a-zA-Z0-9\_\]+$" regular expression.
To be able to work with arbitrary resource names physnet_resource_mappings and
device_plugin_resource_prefix in [sriov] section of kuryr-controller
configuration file should be filled. The default value for
device_plugin_resource_prefix is intel.com, the same as in SR-IOV network
device plugin, in case of SR-IOV network device plugin was started with value
of -resource-prefix option different from intel.com, than value should be set
to device_plugin_resource_prefix, otherwise kuryr-kubernetes will not work with
resource.
The resource name *intel.com/sriov*, which used in the above example is the
default resource name. This name was used in SR-IOV network device plugin in
version 1 (release-v1 branch). But since latest version the device plugin
can use any arbitrary name of the resources (see `SRIOV network device
plugin for Kubernetes`_). This name should match "^\[a-zA-Z0-9\_\]+$"
regular expression. To be able to work with arbitrary resource names
physnet_resource_mappings and device_plugin_resource_prefix in [sriov]
section of kuryr-controller configuration file should be filled. The
default value for device_plugin_resource_prefix is intel.com, the same as in
SR-IOV network device plugin, in case of SR-IOV network device plugin was
started with value of -resource-prefix option different from intel.com, than
value should be set to device_plugin_resource_prefix, otherwise
kuryr-kubernetes will not work with resource.
Assume we have following SR-IOV network device plugin (defined by -config-file
option)
Assume we have following SR-IOV network device plugin (defined by
-config-file option)
.. code-block:: json
.. code-block:: json
{
"resourceList":
@@ -107,12 +108,12 @@ option)
]
}
We defined numa0 resource name, also assume we started sriovdp with
-resource-prefix samsung.com value. The PCI address of ens4f0 interface is
"0000:02:00.0". If we assigned 8 VF to ens4f0 and launch SR-IOV network device
plugin, we can see following state of kubernetes
We defined numa0 resource name, also assume we started sriovdp with
-resource-prefix samsung.com value. The PCI address of ens4f0 interface is
"0000:02:00.0". If we assigned 8 VF to ens4f0 and launch SR-IOV network
device plugin, we can see following state of kubernetes
.. code-block:: console
.. code-block:: console
$ kubectl get node node1 -o json | jq '.status.allocatable'
{
@@ -125,49 +126,49 @@ plugin, we can see following state of kubernetes
"pods": "1k"
}
We have to add to the sriov section following mapping:
We have to add to the sriov section following mapping:
.. code-block:: ini
.. code-block:: ini
[sriov]
device_plugin_resource_prefix = samsung.com
physnet_resource_mappings = physnet1:numa0
5. Enable Kubelet Pod Resources feature
#. Enable Kubelet Pod Resources feature
To use SR-IOV functionality properly it is necessary to enable Kubelet Pod
Resources feature. Pod Resources is a service provided by Kubelet via gRPC
server that allows to request list of resources allocated for each pod and
container on the node. These resources are devices allocated by k8s device
plugins. Service was implemented mainly for monitoring purposes, but it also
suitable for SR-IOV binding driver allowing it to know which VF was allocated
for particular container.
To use SR-IOV functionality properly it is necessary to enable Kubelet Pod
Resources feature. Pod Resources is a service provided by Kubelet via gRPC
server that allows to request list of resources allocated for each pod and
container on the node. These resources are devices allocated by k8s device
plugins. Service was implemented mainly for monitoring purposes, but it also
suitable for SR-IOV binding driver allowing it to know which VF was
allocated for particular container.
To enable Pod Resources service it is needed to add
``--feature-gates KubeletPodResources=true`` into ``/etc/sysconfig/kubelet``.
This file could look like:
To enable Pod Resources service it is needed to add ``--feature-gates
KubeletPodResources=true`` into ``/etc/sysconfig/kubelet``. This file could
look like:
.. code-block:: bash
.. code-block:: bash
KUBELET_EXTRA_ARGS="--feature-gates KubeletPodResources=true"
Note that it is important to set right value for parameter ``kubelet_root_dir``
in ``kuryr.conf``. By default it is ``/var/lib/kubelet``.
In case of using containerized CNI it is necessary to mount
``'kubelet_root_dir'/pod-resources`` directory into CNI container.
Note that it is important to set right value for parameter
``kubelet_root_dir`` in ``kuryr.conf``. By default it is
``/var/lib/kubelet``. In case of using containerized CNI it is necessary to
mount ``'kubelet_root_dir'/pod-resources`` directory into CNI container.
To use this feature add ``enable_pod_resource_service`` into kuryr.conf.
To use this feature add ``enable_pod_resource_service`` into kuryr.conf.
.. code-block:: ini
.. code-block:: ini
[sriov]
enable_pod_resource_service = True
6. Use privileged user
#. Use privileged user
To make neutron ports active kuryr-k8s makes requests to neutron API to update
ports with binding:profile information. Due to this it is necessary to make
actions with privileged user with admin rights.
To make neutron ports active kuryr-k8s makes requests to neutron API to
update ports with binding:profile information. Due to this it is necessary
to make actions with privileged user with admin rights.
.. _NPWG spec: https://docs.openstack.org/kuryr-kubernetes/latest/specs/rocky/npwg_spec_support.html


@@ -52,11 +52,11 @@ that is expected to be used for SR-IOV ports:
| updated_at | 2018-11-21T10:57:34Z |
+-------------------+--------------------------------------------------+
1. Create deployment definition <DEFINITION_FILE_NAME> with one SR-IOV
#. Create deployment definition <DEFINITION_FILE_NAME> with one SR-IOV
interface (apart from default one). Deployment definition file might look
like:
.. code-block:: yaml
.. code-block:: yaml
apiVersion: extensions/v1beta1
kind: Deployment
@@ -73,7 +73,7 @@ that is expected to be used for SR-IOV ports:
k8s.v1.cni.cncf.io/networks: net-sriov
spec:
containers:
- name: nginx-sriov
1. name: nginx-sriov
image: nginx
resources:
requests:
@@ -85,36 +85,36 @@ that is expected to be used for SR-IOV ports:
cpu: "1"
memory: "512Mi"
Here ``net-sriov`` is the name of ``NetworkAttachmentDefinition``
created before.
Here ``net-sriov`` is the name of ``NetworkAttachmentDefinition`` created
before.
2. Create deployment with the following command:
#. Create deployment with the following command:
.. code-block:: console
.. code-block:: console
$ kubectl create -f <DEFINITION_FILE_NAME>
3. Wait for the pod to get to Running phase.
#. Wait for the pod to get to Running phase.
.. code-block:: console
.. code-block:: console
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-sriov-558db554d7-rvpxs 1/1 Running 0 1m
4. If your image contains ``iputils`` (for example, busybox image), you can
#. If your image contains ``iputils`` (for example, busybox image), you can
attach to the pod and check that the correct interface has been attached to
the Pod.
.. code-block:: console
.. code-block:: console
$ kubectl get pod
$ kubectl exec -it nginx-sriov-558db554d7-rvpxs -- /bin/bash
$ ip a
You should see default and eth1 interfaces. eth1 is the SR-IOV VF interface.
You should see default and eth1 interfaces. eth1 is the SR-IOV VF interface.
.. code-block:: console
.. code-block::
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
@@ -135,35 +135,36 @@ You should see default and eth1 interfaces. eth1 is the SR-IOV VF interface.
inet6 fe80::f816:3eff:fea8:55af/64 scope link
valid_lft forever preferred_lft forever
4.1. Alternatively you can login to k8s worker and do the same from the host
system. Use the following command to find out ID of running SR-IOV container:
Alternatively you can login to k8s worker and do the same from the host
system. Use the following command to find out ID of running SR-IOV
container:
.. code-block:: console
.. code-block:: console
$ docker ps
Suppose that ID of created container is ``eb4e10f38763``. Use the following
command to get PID of that container:
Suppose that ID of created container is ``eb4e10f38763``.
Use the following command to get PID of that container:
.. code-block:: console
.. code-block:: console
$ docker inspect --format {{.State.Pid}} eb4e10f38763
Suppose that output of previous command is bellow:
Suppose that output of previous command is bellow:
.. code-block:: console
.. code-block:: console
$ 32609
Use the following command to get interfaces of container:
Use the following command to get interfaces of container:
.. code-block:: console
.. code-block:: console
$ nsenter -n -t 32609 ip a
You should see default and eth1 interfaces. eth1 is the SR-IOV VF interface.
You should see default and eth1 interfaces. eth1 is the SR-IOV VF interface.
.. code-block:: console
.. code-block:: console
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
@@ -184,20 +185,20 @@ You should see default and eth1 interfaces. eth1 is the SR-IOV VF interface.
inet6 fe80::f816:3eff:fea8:55af/64 scope link
valid_lft forever preferred_lft forever
In our example sriov interface has address 192.168.2.6
In our example sriov interface has address 192.168.2.6
5. Use neutron CLI to check the port with exact address has been created on
#. Use neutron CLI to check the port with exact address has been created on
neutron:
.. code-block:: console
.. code-block:: console
$ openstack port list | grep 192.168.2.6
Suppose that previous command returns a list with one openstack port that
has ID ``545ec21d-6bfc-4179-88c6-9dacaf435ea7``. You can see its information
with the following command:
Suppose that previous command returns a list with one openstack port that
has ID ``545ec21d-6bfc-4179-88c6-9dacaf435ea7``. You can see its information
with the following command:
.. code-block:: console
.. code-block:: console
$ openstack port show 545ec21d-6bfc-4179-88c6-9dacaf435ea7
+-----------------------+----------------------------------------------------------------------------+
@@ -235,12 +236,12 @@ with the following command:
| updated_at | 2018-11-26T09:13:07Z |
+-----------------------+----------------------------------------------------------------------------+
The port would have the name of the pod, ``compute::kuryr::sriov`` for device
owner and 'direct' vnic_type. Verify that IP and MAC addresses of the port
match the ones on the container. Currently the neutron-sriov-nic-agent does
not properly detect SR-IOV ports assigned to containers. This means that direct
ports in neutron would always remain in *DOWN* state. This doesn't affect the
feature in any way other than cosmetically.
The port would have the name of the pod, ``compute::kuryr::sriov`` for
device owner and 'direct' vnic_type. Verify that IP and MAC addresses of the
port match the ones on the container. Currently the neutron-sriov-nic-agent
does not properly detect SR-IOV ports assigned to containers. This means
that direct ports in neutron would always remain in *DOWN* state. This
doesn't affect the feature in any way other than cosmetically.
.. _sriov-device-plugin: https://docs.google.com/document/d/1Ewe9Of84GkP0b2Q2PC0y9RVZNkN2WeVEagX9m99Nrzc


@@ -5,7 +5,7 @@ Boot VM with a Trunk Port
To create a VM that makes use of the Neutron Trunk port support, the next
steps can be followed:
1. Use the demo tenant and create a key to be used to log in into the overcloud
#. Use the demo tenant and create a key to be used to log in into the overcloud
VM:
.. code-block:: console
@@ -14,14 +14,14 @@ steps can be followed:
$ openstack keypair create demo > id_rsa_demo
$ chmod 600 id_rsa_demo
2. Ensure the demo default security group allows ping and ssh access:
#. Ensure the demo default security group allows ping and ssh access:
.. code-block:: console
$ openstack security group rule create --protocol icmp default
$ openstack security group rule create --protocol tcp --dst-port 22 default
3. Download and import an image that allows vlans, as cirros does not support
#. Download and import an image that allows vlans, as cirros does not support
it:
.. code-block:: console
@@ -29,7 +29,7 @@ steps can be followed:
$ wget http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2
$ openstack image create --container-format bare --disk-format qcow2 --file CentOS-7-x86_64-GenericCloud.qcow2 centos7
4. Create a port for the overcloud VM and create the trunk with that port as
#. Create a port for the overcloud VM and create the trunk with that port as
the parent port (untagged traffic):
.. code-block:: console
@@ -37,7 +37,7 @@ steps can be followed:
$ openstack port create --network private --security-group default port0
$ openstack network trunk create --parent-port port0 trunk0
5. Create the overcloud VM and assign a floating ip to it to be able to log in
#. Create the overcloud VM and assign a floating ip to it to be able to log in
into it:
.. code-block:: console
@@ -45,9 +45,9 @@ steps can be followed:
$ openstack server create --image centos7 --flavor ds4G --nic port-id=port0 --key-name demo overcloud_vm
$ openstack floating ip create --port port0 public
Note subports can be added to the trunk port, and be used inside the VM with
the specific vlan, 102 in the example, by doing:
Note subports can be added to the trunk port, and be used inside the VM with
the specific vlan, 102 in the example, by doing:
.. code-block:: console
.. code-block:: console
$ openstack network trunk set --subport port=subport0,segmentation-type=vlan,segmentation-id=102 trunk0