Correct ordered/unordered lists.

There are a lot of lists in the documentation; unfortunately, they were
used incorrectly. This patch fixes that. For ordered lists we use the
reStructuredText auto-numbering feature.

Change-Id: I77c4aa2fa333a3968daf8cdf9440848c51374582
Roman Dobosz 2019-11-13 10:43:55 +01:00
parent 80b5ecd41b
commit ad4c460093
18 changed files with 576 additions and 586 deletions

View File

@@ -55,16 +55,16 @@ Please see below the component view of the integrated system:
 Design Principles
 -----------------
-1. Loose coupling between integration components.
-2. Flexible deployment options to support different project, subnet and
+#. Loose coupling between integration components.
+#. Flexible deployment options to support different project, subnet and
 security groups assignment profiles.
-3. The communication of the pod binding data between Controller and CNI driver
+#. The communication of the pod binding data between Controller and CNI driver
 should rely on existing communication channels, currently added to the pod
 metadata via annotations.
-4. CNI Driver should not depend on Neutron. It gets all required details
+#. CNI Driver should not depend on Neutron. It gets all required details
 from Kubernetes API server (currently through Kubernetes annotations),
 therefore depending on Controller to perform its translation tasks.
-5. Allow different neutron backends to bind Kubernetes pods without code
+#. Allow different neutron backends to bind Kubernetes pods without code
 modification. This means that both Controller and CNI binding mechanism
 should allow loading of the vif management and binding components,
 manifested via configuration. If some vendor requires some extra code, it

View File

@@ -31,19 +31,19 @@ and then cover a nested environment where containers are created inside VMs.
 Single Node Test Environment
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-1. Create a test system.
+#. Create a test system.
-It's best to use a throwaway dev system for running DevStack. Your best bet is
-to use either Fedora 25 or the latest Ubuntu LTS (16.04, Xenial).
+It's best to use a throwaway dev system for running DevStack. Your best bet
+is to use either Fedora 25 or the latest Ubuntu LTS (16.04, Xenial).
-2. Create the ``stack`` user.
+#. Create the ``stack`` user.
 .. code-block:: console
 $ git clone https://opendev.org/openstack-dev/devstack.git
 $ sudo ./devstack/tools/create-stack-user.sh
-3. Switch to the ``stack`` user and clone DevStack and kuryr-kubernetes.
+#. Switch to the ``stack`` user and clone DevStack and kuryr-kubernetes.
 .. code-block:: console
@@ -51,12 +51,13 @@ to use either Fedora 25 or the latest Ubuntu LTS (16.04, Xenial).
 $ git clone https://opendev.org/openstack-dev/devstack.git
 $ git clone https://opendev.org/openstack/kuryr-kubernetes.git
-4. Configure DevStack to use Dragonflow.
+#. Configure DevStack to use Dragonflow.
-kuryr-kubernetes comes with a sample DevStack configuration file for Dragonflow
-you can start with. You may change some values for the various variables in
-that file, like password settings or what LBaaS service provider to use.
-Feel free to edit it if you'd like, but it should work as-is.
+kuryr-kubernetes comes with a sample DevStack configuration file for
+Dragonflow you can start with. You may change some values for the various
+variables in that file, like password settings or what LBaaS service
+provider to use. Feel free to edit it if you'd like, but it should work
+as-is.
 .. code-block:: console
@@ -64,19 +65,18 @@ Feel free to edit it if you'd like, but it should work as-is.
 $ cp ../kuryr-kubernetes/devstack/local.conf.df.sample local.conf
-Optionally, the ports pool funcionality can be enabled by following:
+Optionally, the ports pool functionality can be enabled by following:
 `How to enable ports pool with devstack`_.
-5. Run DevStack.
+#. Run DevStack.
-Expect it to take a while. It installs required packages, clones a bunch
-of git repos, and installs everything from these git repos.
+Expect it to take a while. It installs required packages, clones a bunch of
+git repos, and installs everything from these git repos.
 .. code-block:: console
 $ ./stack.sh
 Once DevStack completes successfully, you should see output that looks
 something like this:
@@ -88,11 +88,11 @@ something like this:
 The default users are: admin and demo
 The password: pass
-6. Extra configurations.
-Create NAT rule that will cause "external" traffic from your instances to get
-rewritten to your network controller's ip address and sent out on the network:
+#. Extra configurations.
+Create NAT rule that will cause "external" traffic from your instances to
+get rewritten to your network controller's ip address and sent out on the
+network:
 .. code-block:: console
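For reference, a NAT rule of the kind this step describes is typically a single
iptables MASQUERADE entry (the 172.24.4.0/24 subnet and the eth0 interface name
are assumptions; substitute your own public subnet and uplink interface):

.. code-block:: console

   $ sudo iptables -t nat -A POSTROUTING -s 172.24.4.0/24 -o eth0 -j MASQUERADE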
@@ -142,19 +142,14 @@ The main differences with the default dragonflow local.conf sample are that:
 - There is no need to enable the kuryr-kubernetes plugin as this will be
 installed inside the VM (overcloud).
-- There is no need to enable the kuryr related services as they will also
-be installed inside the VM: kuryr-kubernetes, kubelet,
-kubernetes-api, kubernetes-controller-manager, kubernetes-scheduler and
-kubelet.
+- There is no need to enable the kuryr related services as they will also be
+installed inside the VM: kuryr-kubernetes, kubelet, kubernetes-api,
+kubernetes-controller-manager, kubernetes-scheduler and kubelet.
 - Nova and Glance components need to be enabled to be able to create the VM
 where we will install the overcloud.
 - Dragonflow Trunk service plugin need to be enable to ensure Trunk ports
 support.
 Once the undercloud deployment has finished, the next steps are related to
 creating the overcloud VM by using a parent port of a Trunk so that containers
 can be created inside with their own networks. To do that we follow the next
@@ -168,17 +163,16 @@ Once the VM is up and running, we can start with the overcloud configuration.
 The steps to perform are the same as without Dragonflow integration, i.e., the
 same steps as for ML2/OVS:
-1. Log in into the VM:
+#. Log in into the VM:
 .. code-block:: console
 $ ssh -i id_rsa_demo centos@FLOATING_IP
-2. Deploy devstack following steps 3 and 4 detailed at
+#. Deploy devstack following steps 3 and 4 detailed at
 `How to try out nested-pods locally (VLAN + trunk)`_.
 Testing Nested Network Connectivity
 +++++++++++++++++++++++++++++++++++

View File

@@ -5,22 +5,22 @@ How to try out nested-pods locally (MACVLAN)
 Following are the instructions for an all-in-one setup, using the
 nested MACVLAN driver rather than VLAN and trunk ports.
-1. To install OpenStack services run devstack with
+#. To install OpenStack services run devstack with
 ``devstack/local.conf.pod-in-vm.undercloud.sample``.
-2. Launch a Nova VM with MACVLAN support
+#. Launch a Nova VM with MACVLAN support
 .. todo::
 Add a list of neutron commands, required to launch a such a VM
-3. Log into the VM and set up Kubernetes along with Kuryr using devstack:
+#. Log into the VM and set up Kubernetes along with Kuryr using devstack:
 - Since undercloud Neutron will be used by pods, Neutron services should be
 disabled in localrc.
 - Run devstack with ``devstack/local.conf.pod-in-vm.overcloud.sample``.
 Fill in the needed information, such as the subnet pool id to use or the
 router.
-4. Once devstack is done and all services are up inside VM. Next steps are to
+#. Once devstack is done and all services are up inside VM. Next steps are to
 configure the missing information at ``/etc/kuryr/kuryr.conf``:
 - Configure worker VMs subnet:

View File

@@ -7,7 +7,7 @@ also be running inside the same Nova VM in which Kuryr-controller and Kuryr-cni
 will be running. 4GB memory and 2 vCPUs, is the minimum resource requirement
 for the VM:
-1. To install OpenStack services run devstack with
+#. To install OpenStack services run devstack with
 ``devstack/local.conf.pod-in-vm.undercloud.sample``. Ensure that "trunk"
 service plugin is enabled in ``/etc/neutron/neutron.conf``:
@@ -16,14 +16,15 @@ for the VM:
 [DEFAULT]
 service_plugins = neutron.services.l3_router.l3_router_plugin.L3RouterPlugin,neutron.services.trunk.plugin.TrunkPlugin
-2. Launch a VM with `Neutron trunk port`_. The next steps can be followed:
+#. Launch a VM with `Neutron trunk port`_. The next steps can be followed:
 `Boot VM with a Trunk Port`_.
-3. Inside VM, install and setup Kubernetes along with Kuryr using devstack:
+#. Inside VM, install and setup Kubernetes along with Kuryr using devstack:
 - Since undercloud Neutron will be used by pods, Neutron services should be
 disabled in localrc.
 - Run devstack with ``devstack/local.conf.pod-in-vm.overcloud.sample``.
-but first fill in the needed information:
+But first fill in the needed information:
 - Point to the undercloud deployment by setting:
@@ -31,9 +32,8 @@ for the VM:
 SERVICE_HOST=UNDERCLOUD_CONTROLLER_IP
-- Fill in the subnetpool id of the undercloud deployment, as well as
-the router where the new pod and service networks need to be
-connected:
+- Fill in the subnetpool id of the undercloud deployment, as well as the
+router where the new pod and service networks need to be connected:
 .. code-block:: bash
@@ -49,14 +49,14 @@ for the VM:
 - Optionally, the ports pool funcionality can be enabled by following:
 `How to enable ports pool with devstack`_.
-- [OPTIONAL] If you want to enable the subport pools driver and the
-VIF Pool Manager you need to include:
+- [OPTIONAL] If you want to enable the subport pools driver and the VIF
+Pool Manager you need to include:
 .. code-block:: bash
 KURYR_VIF_POOL_MANAGER=True
-4. Once devstack is done and all services are up inside VM. Next steps are to
+#. Once devstack is done and all services are up inside VM. Next steps are to
 configure the missing information at ``/etc/kuryr/kuryr.conf``:
 - Configure worker VMs subnet:

View File

@@ -26,19 +26,19 @@ and then cover a nested environment where containers are created inside VMs.
 Single Node Test Environment
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-1. Create a test system.
+#. Create a test system.
 It's best to use a throwaway dev system for running DevStack. Your best bet is
 to use either CentOS 7 or the latest Ubuntu LTS (16.04, Xenial).
-2. Create the ``stack`` user.
+#. Create the ``stack`` user.
 .. code-block:: console
 $ git clone https://opendev.org/openstack-dev/devstack.git
 $ sudo ./devstack/tools/create-stack-user.sh
-3. Switch to the ``stack`` user and clone DevStack and kuryr-kubernetes.
+#. Switch to the ``stack`` user and clone DevStack and kuryr-kubernetes.
 .. code-block:: console
@@ -46,25 +46,25 @@ to use either CentOS 7 or the latest Ubuntu LTS (16.04, Xenial).
 $ git clone https://opendev.org/openstack-dev/devstack.git
 $ git clone https://opendev.org/openstack/kuryr-kubernetes.git
-4. Configure DevStack to use ODL.
+#. Configure DevStack to use ODL.
 kuryr-kubernetes comes with a sample DevStack configuration file for ODL you
 can start with. For example, you may want to set some values for the various
-PASSWORD variables in that file, or change the LBaaS service provider to use.
-Feel free to edit it if you'd like, but it should work as-is.
+PASSWORD variables in that file, or change the LBaaS service provider to
+use. Feel free to edit it if you'd like, but it should work as-is.
 .. code-block:: console
 $ cd devstack
 $ cp ../kuryr-kubernetes/devstack/local.conf.odl.sample local.conf
-Optionally, the ports pool funcionality can be enabled by following:
+Optionally, the ports pool functionality can be enabled by following:
 `How to enable ports pool with devstack`_.
-5. Run DevStack.
+#. Run DevStack.
-This is going to take a while. It installs a bunch of packages, clones a bunch
-of git repos, and installs everything from these git repos.
+This is going to take a while. It installs a bunch of packages, clones a
+bunch of git repos, and installs everything from these git repos.
 .. code-block:: console
@@ -81,10 +81,10 @@ something like this:
 The default users are: admin and demo
 The password: pass
-6. Extra configurations.
+#. Extra configurations.
-Devstack does not wire up the public network by default so we must do
-some extra steps for floating IP usage as well as external connectivity:
+Devstack does not wire up the public network by default so we must do some
+extra steps for floating IP usage as well as external connectivity:
 .. code-block:: console
@@ -93,8 +93,8 @@ some extra steps for floating IP usage as well as external connectivity:
 $ sudo ip addr add 172.24.4.1/24 dev br-ex
 Then you can create forwarding and NAT rules that will cause "external"
-traffic from your instances to get rewritten to your network controller's
-ip address and sent out on the network:
+traffic from your instances to get rewritten to your network controller's ip
+address and sent out on the network:
 .. code-block:: console
@@ -146,15 +146,11 @@ The main differences with the default odl local.conf sample are that:
 - There is no need to enable the kuryr-kubernetes plugin as this will be
 installed inside the VM (overcloud).
-- There is no need to enable the kuryr related services as they will also
-be installed inside the VM: kuryr-kubernetes, kubelet,
-kubernetes-api, kubernetes-controller-manager, kubernetes-scheduler and
-kubelet.
+- There is no need to enable the kuryr related services as they will also be
+installed inside the VM: kuryr-kubernetes, kubelet, kubernetes-api,
+kubernetes-controller-manager, kubernetes-scheduler and kubelet.
 - Nova and Glance components need to be enabled to be able to create the VM
 where we will install the overcloud.
 - ODL Trunk service plugin need to be enable to ensure Trunk ports support.
 Once the undercloud deployment has finished, the next steps are related to
@@ -167,16 +163,16 @@ Overcloud deployment
 ++++++++++++++++++++
 Once the VM is up and running, we can start with the overcloud configuration.
-The steps to perform are the same as without ODL integration, i.e., the
-same steps as for ML2/OVS:
+The steps to perform are the same as without ODL integration, i.e., the same
+steps as for ML2/OVS:
-1. Log in into the VM:
+#. Log in into the VM:
 .. code-block:: console
 $ ssh -i id_rsa_demo centos@FLOATING_IP
-2. Deploy devstack following steps 3 and 4 detailed at
+#. Deploy devstack following steps 3 and 4 detailed at
 `How to try out nested-pods locally (VLAN + trunk)`_.

View File

@@ -23,19 +23,19 @@ and then cover a nested environment where containers are created inside VMs.
 Single Node Test Environment
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-1. Create a test system.
+#. Create a test system.
-It's best to use a throwaway dev system for running DevStack. Your best bet is
-to use either CentOS 7 or the latest Ubuntu LTS (16.04, Xenial).
+It's best to use a throwaway dev system for running DevStack. Your best bet
+is to use either CentOS 7 or the latest Ubuntu LTS (16.04, Xenial).
-2. Create the ``stack`` user.
+#. Create the ``stack`` user.
 .. code-block:: console
 $ git clone https://opendev.org/openstack-dev/devstack.git
 $ sudo ./devstack/tools/create-stack-user.sh
-3. Switch to the ``stack`` user and clone DevStack and kuryr-kubernetes.
+#. Switch to the ``stack`` user and clone DevStack and kuryr-kubernetes.
 .. code-block:: console
@@ -43,12 +43,12 @@ to use either CentOS 7 or the latest Ubuntu LTS (16.04, Xenial).
 $ git clone https://opendev.org/openstack-dev/devstack.git
 $ git clone https://opendev.org/openstack/kuryr-kubernetes.git
-4. Configure DevStack to use OVN.
+#. Configure DevStack to use OVN.
 kuryr-kubernetes comes with a sample DevStack configuration file for OVN you
 can start with. For example, you may want to set some values for the various
-PASSWORD variables in that file, or change the LBaaS service provider to use.
-Feel free to edit it if you'd like, but it should work as-is.
+PASSWORD variables in that file, or change the LBaaS service provider to
+use. Feel free to edit it if you'd like, but it should work as-is.
 .. code-block:: console
@@ -56,23 +56,25 @@ Feel free to edit it if you'd like, but it should work as-is.
 $ cp ../kuryr-kubernetes/devstack/local.conf.ovn.sample local.conf
 Note that due to OVN compiling OVS from source at
-/usr/local/var/run/openvswitch we need to state at the local.conf that the path
-is different from the default one (i.e., /var/run/openvswitch).
+/usr/local/var/run/openvswitch we need to state at the local.conf that the
+path is different from the default one (i.e., /var/run/openvswitch).
 Optionally, the ports pool functionality can be enabled by following:
 :doc:`./ports-pool`
-5. Run DevStack.
+#. Run DevStack.
-This is going to take a while. It installs a bunch of packages, clones a bunch
-of git repos, and installs everything from these git repos.
+This is going to take a while. It installs a bunch of packages, clones a
+bunch of git repos, and installs everything from these git repos.
 .. code-block:: console
 $ ./stack.sh
 Once DevStack completes successfully, you should see output that looks
-something like this::
+something like this:
+.. code-block::
 This is your host IP address: 192.168.5.10
 This is your host IPv6 address: ::1
@@ -80,11 +82,10 @@ something like this::
 The default users are: admin and demo
 The password: pass
-6. Extra configurations.
-Devstack does not wire up the public network by default so we must do
-some extra steps for floating IP usage as well as external connectivity:
+#. Extra configurations.
+Devstack does not wire up the public network by default so we must do some
+extra steps for floating IP usage as well as external connectivity:
 .. code-block:: console
@@ -93,8 +94,8 @@ some extra steps for floating IP usage as well as external connectivity:
 $ sudo ip addr add 172.24.4.1/24 dev br-ex
 Then you can create forwarding and NAT rules that will cause "external"
-traffic from your instances to get rewritten to your network controller's
-ip address and sent out on the network:
+traffic from your instances to get rewritten to your network controller's ip
+address and sent out on the network:
 .. code-block:: console
@@ -141,22 +142,18 @@ local.conf to use (step 4), in this case:
 $ cd devstack
 $ cp ../kuryr-kubernetes/devstack/local.conf.pod-in-vm.undercloud.ovn.sample local.conf
 The main differences with the default ovn local.conf sample are that:
 - There is no need to enable the kuryr-kubernetes plugin as this will be
 installed inside the VM (overcloud).
-- There is no need to enable the kuryr related services as they will also
-be installed inside the VM: kuryr-kubernetes, kubelet,
-kubernetes-api, kubernetes-controller-manager, kubernetes-scheduler and
-kubelet.
+- There is no need to enable the kuryr related services as they will also be
+installed inside the VM: kuryr-kubernetes, kubelet, kubernetes-api,
+kubernetes-controller-manager, kubernetes-scheduler and kubelet.
 - Nova and Glance components need to be enabled to be able to create the VM
 where we will install the overcloud.
 - OVN Trunk service plugin need to be enable to ensure Trunk ports support.
 Once the undercloud deployment has finished, the next steps are related to
 create the overcloud VM by using a parent port of a Trunk so that containers
 can be created inside with their own networks. To do that we follow the next
@@ -170,13 +167,13 @@ Once the VM is up and running, we can start with the overcloud configuration.
 The steps to perform are the same as without OVN integration, i.e., the
 same steps as for ML2/OVS:
-1. Log in into the VM:
+#. Log in into the VM:
 .. code-block:: console
 $ ssh -i id_rsa_demo centos@FLOATING_IP
-2. Deploy devstack following steps 3 and 4 detailed at :doc:`./nested-vlan`
+#. Deploy devstack following steps 3 and 4 detailed at :doc:`./nested-vlan`
 Testing Nested Network Connectivity

View File

@@ -5,13 +5,13 @@ How to enable ports pool with devstack
 To enable the utilization of the ports pool feature through devstack, the next
 options needs to be set at the local.conf file:
-1. First, you need to enable the pools by setting:
+#. First, you need to enable the pools by setting:
 .. code-block:: bash
 KURYR_USE_PORT_POOLS=True
-2. Then, the proper pool driver needs to be set. This means that for the
+#. Then, the proper pool driver needs to be set. This means that for the
 baremetal case you need to ensure the pod vif driver and the vif pool driver
 are set to the right baremetal drivers, for instance:
@@ -27,7 +27,7 @@ options needs to be set at the local.conf file:
 KURYR_POD_VIF_DRIVER=nested-vlan
 KURYR_VIF_POOL_DRIVER=nested
-3. Then, in case you want to set a limit to the maximum number of ports, or
+#. Then, in case you want to set a limit to the maximum number of ports, or
 increase/reduce the default one for the minimum number, as well as to modify
 the way the pools are repopulated, both in time as well as regarding bulk
 operation sizes, the next option can be included and modified accordingly:
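As an illustrative sketch only, the local.conf knobs this step refers to are of
the following form (variable names assumed from the kuryr-kubernetes devstack
plugin; verify them against your checkout before use):

.. code-block:: bash

   KURYR_PORT_POOL_MIN=5
   KURYR_PORT_POOL_MAX=0
   KURYR_PORT_POOL_BATCH=10
   KURYR_PORT_POOL_UPDATE_FREQ=20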

View File

@@ -6,17 +6,17 @@ To create pods with additional Interfaces follow the `Kubernetes Network Custom
 Resource Definition De-facto Standard Version 1`_, the next steps can be
 followed:
-1. Create Neutron net/subnets which you want the additional interfaces attach
+#. Create Neutron net/subnets which you want the additional interfaces attach
 to.
-.. code-block:: bash
+.. code-block:: console
 $ openstack network create net-a
 $ openstack subnet create subnet-a --subnet-range 192.0.2.0/24 --network net-a
-2. Create CRD of 'NetworkAttachmentDefinition' as defined in NPWG spec.
+#. Create CRD of 'NetworkAttachmentDefinition' as defined in NPWG spec.
-.. code-block:: bash
+.. code-block:: console
 $ cat << EOF > nad.yaml
 apiVersion: apiextensions.k8s.io/v1beta1
@@ -43,10 +43,10 @@ followed:
 EOF
 $ kubectl apply -f nad.yaml
-3. Create NetworkAttachmentDefinition object with the UUID of Neutron subnet
+#. Create NetworkAttachmentDefinition object with the UUID of Neutron subnet
 defined in step 1.
-.. code-block:: bash
+.. code-block:: console
 $ cat << EOF > net-a.yaml
 apiVersion: "k8s.cni.cncf.io/v1"
@@ -60,7 +60,7 @@ defined in step 1.
 EOF
 $ kubectl apply -f net-a.yaml
-4. Enable the multi-vif driver by setting 'multi_vif_drivers' in kuryr.conf.
+#. Enable the multi-vif driver by setting 'multi_vif_drivers' in kuryr.conf.
 Then restart kuryr-controller.
 .. code-block:: ini
@@ -70,7 +70,7 @@ defined in step 1.
 5. Add additional interfaces to pods definition. e.g.
-.. code-block:: bash
+.. code-block:: console
 $ cat << EOF > pod.yaml
 apiVersion: v1
@@ -88,7 +88,8 @@ defined in step 1.
 EOF
 $ kubectl apply -f pod.yaml
-You may put a list of network separated with comma to attach Pods to more networks.
+You may put a list of network separated with comma to attach Pods to more
+networks.
 .. _Kubernetes Network Custom Resource Definition De-facto Standard Version 1: https://docs.google.com/document/d/1Ny03h6IDVy_e_vmElOqR7UdTPAG_RNydhVE1Kx54kFQ/edit?usp=sharing

View File

@@ -5,7 +5,7 @@ Enable network per namespace functionality (handler + driver)
 To enable the subnet driver that creates a new network for each new namespace
 the next steps are needed:
-1. Enable the namespace handler to reach to namespace events, in this case,
+#. Enable the namespace handler to reach to namespace events, in this case,
 creation and deletion. To do that you need to add it to the list of the
 enabled handlers at kuryr.conf (details on how to edit this for
 containerized deployment can be found at :doc:`./devstack/containerized`):
@@ -24,7 +24,7 @@ the next steps are needed:
 [kubernetes]
 enabled_handlers=vif,lb,lbaasspec,namespace,kuryrnet
-2. Enable the namespace subnet driver by modifying the default
+#. Enable the namespace subnet driver by modifying the default
 pod_subnet_driver option at kuryr.conf:
 .. code-block:: ini
@@ -43,7 +43,7 @@ the next steps are needed:
 pod_security_groups_driver = namespace
 service_security_groups_driver = namespace
-3. Select (and create if needed) the subnet pool from where the new subnets
+#. Select (and create if needed) the subnet pool from where the new subnets
 will get their CIDR (e.g., the default on devstack deployment is
 shared-default-subnetpool-v4):
@@ -52,7 +52,7 @@ the next steps are needed:
 [namespace_subnet]
 pod_subnet_pool = SUBNET_POOL_ID
-4. Select (and create if needed) the router where the new subnet will be
+#. Select (and create if needed) the router where the new subnet will be
 connected (e.g., the default on devstack deployments is router1):
 .. code-block:: ini
@@ -64,7 +64,7 @@ the next steps are needed:
 requirements between pod, service and public subnets, as in the case for
 the default subnet driver.
-5. Select (and create if needed) the security groups to be attached to the
+#. Select (and create if needed) the security groups to be attached to the
 pods at the default namespace and to the others, enabling the cross access
 between them:
@@ -108,14 +108,14 @@ to add the namespace handler and state the namespace subnet driver with:
 Testing the network per namespace functionality
 -----------------------------------------------
-1. Create two namespaces:
+#. Create two namespaces:
 .. code-block:: console
 $ kubectl create namespace test1
 $ kubectl create namespace test2
-2. Check resources has been created:
+#. Check resources has been created:
 .. code-block:: console
@@ -136,7 +136,7 @@ Testing the network per namespace functionality
 $ openstack subnet list | grep test1
 | 8640d134-5ea2-437d-9e2a-89236f6c0198 | ns/test1-subnet | 7c7b68c5-d3c4-431c-9f69-fbc777b43ee5 | 10.0.1.128/26 |
-3. Create a pod in the created namespaces:
+#. Create a pod in the created namespaces:
 .. code-block:: console
@@ -154,7 +154,7 @@ Testing the network per namespace functionality
 NAME READY STATUS RESTARTS AGE IP NODE
 demo-5135352253-dfghd 1/1 Running 0 7s 10.0.1.134 node1
-4. Create a service:
+#. Create a service:
 .. code-block:: console
@@ -165,7 +165,7 @@ Testing the network per namespace functionality
 NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
 demo ClusterIP 10.0.0.141 <none> 80/TCP 18s
-5. Test service connectivity from both namespaces:
+#. Test service connectivity from both namespaces:
 .. code-block:: console
@@ -177,7 +177,7 @@ Testing the network per namespace functionality
 test-2-pod$ curl 10.0.0.141
 ## No response
-6. And finally, to remove the namespace and all its resources, including
+#. And finally, to remove the namespace and all its resources, including
 openstack networks, kuryrnet CRD, svc, pods, you just need to do:
 .. code-block:: console
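For example, removing the first of the two test namespaces created above is a
single call, after which the cleanup described in this step happens on the
OpenStack side:

.. code-block:: console

   $ kubectl delete namespace test1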

View File

@@ -100,7 +100,7 @@ to add the policy, pod_label and namespace handler and drivers with:
 Testing the network policy support functionality
 ------------------------------------------------
-1. Given a yaml file with a network policy, such as:
+#. Given a yaml file with a network policy, such as:
 .. code-block:: yaml
@@ -133,13 +133,13 @@ Testing the network policy support functionality
 - protocol: TCP
 port: 5978
-2. Apply the network policy:
+#. Apply the network policy:
 .. code-block:: console
 $ kubectl apply -f network_policy.yml
-3. Check that the resources has been created:
+#. Check that the resources has been created:
 .. code-block:: console
@@ -154,7 +154,7 @@ Testing the network policy support functionality
 $ openstack security group list | grep sg-test-network-policy
 | dabdf308-7eed-43ef-a058-af84d1954acb | sg-test-network-policy
-4. Check that the rules are in place for the security group:
+#. Check that the rules are in place for the security group:
 .. code-block:: console
@@ -230,7 +230,7 @@ Testing the network policy support functionality
 | tcp | 5978:5978 | egress |
 +-------------+------------+-----------+
-5. Create a pod:
+#. Create a pod:
 .. code-block:: console
@@ -241,7 +241,7 @@ Testing the network policy support functionality
 NAME READY STATUS RESTARTS AGE IP
 demo-5558c7865d-fdkdv 1/1 Running 0 44s 10.0.0.68
-6. Get the pod port and check its security group rules:
+#. Get the pod port and check its security group rules:
 .. code-block:: console
@@ -260,13 +260,13 @@ Testing the network policy support functionality
 | tcp | 5978:5978 | egress |
 +-------------+------------+-----------+
-7. Try to curl the pod on port 8080 (hint: it won't work!):
+#. Try to curl the pod on port 8080 (hint: it won't work!):
 .. code-block:: console
 $ curl 10.0.0.68:8080
-8. Update network policy to allow ingress 8080 port:
+#. Update network policy to allow ingress 8080 port:
 .. code-block:: console
@@ -343,7 +343,7 @@ Testing the network policy support functionality
 | tcp | 5978:5978 | egress |
 +-------------+------------+-----------+
-9. Try to curl the pod ip after patching the network policy:
+#. Try to curl the pod ip after patching the network policy:
 .. code-block:: console
@@ -353,9 +353,8 @@ Testing the network policy support functionality
 Note the curl only works from pods (neutron ports) on a namespace that has
 the label `project: default` as stated on the policy namespaceSelector.
-10. We can also create a single pod, without a label and check that there is
-no connectivity to it, as it does not match the network policy
-podSelector:
+#. We can also create a single pod, without a label and check that there is no
+connectivity to it, as it does not match the network policy podSelector:
 .. code-block:: console
@@ -374,7 +373,7 @@ the label `project: default` as stated on the policy namespaceSelector.
 $ curl demo-pod-IP:8080
 NO REPLY
-11. If we add to the pod a label that match a network policy podSelector, in
+#. If we add to the pod a label that match a network policy podSelector, in
 this case 'project: default', the network policy will get applied on the
 pod, and the traffic will be allowed:
@@ -384,7 +383,7 @@ the label `project: default` as stated on the policy namespaceSelector.
 $ curl demo-pod-IP:8080
 demo-pod-XXX: HELLO! I AM ALIVE!!!
-12. Confirm the teardown of the resources once the network policy is removed:
+#. Confirm the teardown of the resources once the network policy is removed:
 .. code-block:: console
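A minimal way to exercise this teardown, reusing the network_policy.yml file
from the earlier steps:

.. code-block:: console

   $ kubectl delete -f network_policy.yml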

View File

@@ -84,14 +84,14 @@ Router:
 Configure Kuryr to support L7 Router and OCP-Route resources
 ------------------------------------------------------------
-1. Configure the L7 Router by adding the LB UUID at kuryr.conf:
+#. Configure the L7 Router by adding the LB UUID at kuryr.conf:
 .. code-block:: ini
 [ingress]
 l7_router_uuid = 99f580e6-d894-442a-bc5f-4d14b41e10d2
-2. Enable the ocp-route and k8s-endpoint handlers. For that you need to add
+#. Enable the ocp-route and k8s-endpoint handlers. For that you need to add
 this handlers to the enabled handlers list at kuryr.conf (details on how to
 edit this for containerized deployment can be found at
 :doc:`./devstack/containerized`):
@@ -127,7 +127,7 @@ with devstack, you just need to add the following at local.conf file:
 Testing OCP-Route functionality
 -------------------------------
-1. Create a service:
+#. Create a service:
 .. code-block:: console
@@ -135,7 +135,7 @@ Testing OCP-Route functionality
 $ oc scale dc/kuryr-demo --replicas=2
 $ oc expose dc/kuryr-demo --port 80 --target-port 8080
-2. Create a Route object pointing to above service (kuryr-demo):
+#. Create a Route object pointing to above service (kuryr-demo):
 .. code-block:: console
@@ -152,7 +152,7 @@ Testing OCP-Route functionality
 > EOF
 $ oc create -f route.yaml
-3. Curl L7 router's FIP using specified hostname:
+#. Curl L7 router's FIP using specified hostname:
 .. code-block:: console
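As an illustration, such a request usually just sets the Host header to the
hostname configured in the Route object (both values below are placeholders):

.. code-block:: console

   $ curl --header "Host: ROUTE_HOSTNAME" L7_ROUTER_FIP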

View File

@@ -95,11 +95,10 @@ device_owner:
 - For neutron pod driver: compute:kuryr (of the value at
 kuryr.lib.constants.py)
-- For nested-vlan pod driver: trunk:subport or compute:kuryr (or the value
-at kuryr.lib.constants.py). But in this case they also need to be
-attached to an active neutron trunk port, i.e., they need to be subports
-of an existing trunk
+- For nested-vlan pod driver: trunk:subport or compute:kuryr (or the value at
+kuryr.lib.constants.py). But in this case they also need to be attached to an
+active neutron trunk port, i.e., they need to be subports of an existing
+trunk
 Subports pools management tool

View File

@@ -425,15 +425,17 @@ The services and pods subnets should be created.
 #. For the external services (type=LoadBalancer) case,
 two methods are supported:
-* Pool - external IPs are allocated from pre-defined pool
-* User - user specify the external IP address
++ Pool - external IPs are allocated from pre-defined pool
++ User - user specify the external IP address
-In case 'Pool' method should be supported, execute the next steps
+In case 'Pool' method should be supported, execute the next steps:
-A. Create an external/provider network
-B. Create subnet/pool range of external CIDR
-C. Connect external subnet to kuryr-kubernetes router
-D. Configure external network details in Kuryr.conf as follows:
+#. Create an external/provider network
+#. Create subnet/pool range of external CIDR
+#. Connect external subnet to kuryr-kubernetes router
+#. Configure external network details in Kuryr.conf as follows:
+.. code-block:: ini
 [neutron_defaults]
 external_svc_net= <id of external network>
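A hedged sketch of those four steps with the openstack CLI (network, subnet
range, and router names here are assumptions; adjust them to your deployment):

.. code-block:: console

   $ openstack network create --external --provider-network-type flat \
       --provider-physical-network public external-net
   $ openstack subnet create --network external-net --no-dhcp \
       --subnet-range 172.24.6.0/24 external-subnet
   $ openstack router add subnet ROUTER_NAME external-subnet

The id of external-net is then the value to place in the external_svc_net
option shown above.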

View File

@@ -8,17 +8,17 @@ Current approach of SR-IOV relies on `sriov-device-plugin`_. While creating
 pods with SR-IOV, sriov-device-plugin should be turned on on all nodes. To use
 a SR-IOV port on a baremetal installation the 3 following steps should be done:
-1. Create OpenStack network and subnet for SR-IOV.
-Following steps should be done with admin rights.
+#. Create OpenStack network and subnet for SR-IOV. Following steps should be
+done with admin rights.
 .. code-block:: console
-neutron net-create vlan-sriov-net --shared --provider:physical_network physnet10_4 --provider:network_type vlan --provider:segmentation_id 3501
-neutron subnet-create vlan-sriov-net 203.0.114.0/24 --name vlan-sriov-subnet --gateway 203.0.114.1
+$ neutron net-create vlan-sriov-net --shared --provider:physical_network physnet10_4 --provider:network_type vlan --provider:segmentation_id 3501
+$ neutron subnet-create vlan-sriov-net 203.0.114.0/24 --name vlan-sriov-subnet --gateway 203.0.114.1
 Subnet id <UUID of vlan-sriov-net> will be used later in NetworkAttachmentDefinition.
-2. Add sriov section into kuryr.conf.
+#. Add sriov section into kuryr.conf.
 .. code-block:: ini
@@ -28,12 +28,12 @@ Subnet id <UUID of vlan-sriov-net> will be used later in NetworkAttachmentDefini
 This mapping is required for ability to find appropriate PF/VF functions at
 binding phase. physnet1 is just an identifier for subnet <UUID of
-vlan-sriov-net>. Such kind of transition is necessary to support many-to-many
-relation.
+vlan-sriov-net>. Such kind of transition is necessary to support
+many-to-many relation.
-3. Prepare NetworkAttachmentDefinition object.
-Apply NetworkAttachmentDefinition with "sriov" driverType inside,
-as described in `NPWG spec`_.
+#. Prepare NetworkAttachmentDefinition object. Apply
+NetworkAttachmentDefinition with "sriov" driverType inside, as described in
+`NPWG spec`_.
 .. code-block:: yaml
@@ -47,8 +47,9 @@ as described in `NPWG spec`_.
 "driverType": "sriov"
 }'
-Then add k8s.v1.cni.cncf.io/networks and request/limits for SR-IOV
-into the pod's yaml.
+Then add k8s.v1.cni.cncf.io/networks and request/limits for SR-IOV into the
+pod's yaml.
 .. code-block:: yaml
@@ -70,28 +71,28 @@ into the pod's yaml.
 limits:
 intel.com/sriov: '2'
-In the above example two SR-IOV devices will be attached to pod. First one is
-described in sriov-net1 NetworkAttachmentDefinition, second one in sriov-net2.
-They may have different subnetId.
+In the above example two SR-IOV devices will be attached to pod. First one
+is described in sriov-net1 NetworkAttachmentDefinition, second one in
+sriov-net2. They may have different subnetId.
-4. Specify resource names
+#. Specify resource names
 The resource name *intel.com/sriov*, which used in the above example is the
 default resource name. This name was used in SR-IOV network device plugin in
-version 1 (release-v1 branch). But since latest version the device plugin can
-use any arbitrary name of the resources (see `SRIOV network device plugin for
-Kubernetes`_). This name should match "^\[a-zA-Z0-9\_\]+$" regular expression.
-To be able to work with arbitrary resource names physnet_resource_mappings and
-device_plugin_resource_prefix in [sriov] section of kuryr-controller
-configuration file should be filled. The default value for
-device_plugin_resource_prefix is intel.com, the same as in SR-IOV network
-device plugin, in case of SR-IOV network device plugin was started with value
-of -resource-prefix option different from intel.com, than value should be set
-to device_plugin_resource_prefix, otherwise kuryr-kubernetes will not work with
-resource.
+version 1 (release-v1 branch). But since latest version the device plugin
+can use any arbitrary name of the resources (see `SRIOV network device
+plugin for Kubernetes`_). This name should match "^\[a-zA-Z0-9\_\]+$"
+regular expression. To be able to work with arbitrary resource names
+physnet_resource_mappings and device_plugin_resource_prefix in [sriov]
+section of kuryr-controller configuration file should be filled. The
+default value for device_plugin_resource_prefix is intel.com, the same as in
+SR-IOV network device plugin, in case of SR-IOV network device plugin was
+started with value of -resource-prefix option different from intel.com, than
+value should be set to device_plugin_resource_prefix, otherwise
+kuryr-kubernetes will not work with resource.
-Assume we have following SR-IOV network device plugin (defined by -config-file
-option)
+Assume we have following SR-IOV network device plugin (defined by
+-config-file option)
 .. code-block:: json
@@ -109,8 +110,8 @@ option)
 We defined numa0 resource name, also assume we started sriovdp with
 -resource-prefix samsung.com value. The PCI address of ens4f0 interface is
-"0000:02:00.0". If we assigned 8 VF to ens4f0 and launch SR-IOV network device
-plugin, we can see following state of kubernetes
+"0000:02:00.0". If we assigned 8 VF to ens4f0 and launch SR-IOV network
+device plugin, we can see following state of kubernetes
 .. code-block:: console
@@ -133,28 +134,28 @@ We have to add to the sriov section following mapping:
 device_plugin_resource_prefix = samsung.com
 physnet_resource_mappings = physnet1:numa0
-5. Enable Kubelet Pod Resources feature
+#. Enable Kubelet Pod Resources feature
 To use SR-IOV functionality properly it is necessary to enable Kubelet Pod
 Resources feature. Pod Resources is a service provided by Kubelet via gRPC
 server that allows to request list of resources allocated for each pod and
 container on the node. These resources are devices allocated by k8s device
 plugins. Service was implemented mainly for monitoring purposes, but it also
-suitable for SR-IOV binding driver allowing it to know which VF was allocated
-for particular container.
+suitable for SR-IOV binding driver allowing it to know which VF was
+allocated for particular container.
-To enable Pod Resources service it is needed to add
-``--feature-gates KubeletPodResources=true`` into ``/etc/sysconfig/kubelet``.
-This file could look like:
+To enable Pod Resources service it is needed to add ``--feature-gates
+KubeletPodResources=true`` into ``/etc/sysconfig/kubelet``. This file could
+look like:
 .. code-block:: bash
 KUBELET_EXTRA_ARGS="--feature-gates KubeletPodResources=true"
-Note that it is important to set right value for parameter ``kubelet_root_dir``
-in ``kuryr.conf``. By default it is ``/var/lib/kubelet``.
-In case of using containerized CNI it is necessary to mount
-``'kubelet_root_dir'/pod-resources`` directory into CNI container.
+Note that it is important to set right value for parameter
+``kubelet_root_dir`` in ``kuryr.conf``. By default it is
+``/var/lib/kubelet``. In case of using containerized CNI it is necessary to
+mount ``'kubelet_root_dir'/pod-resources`` directory into CNI container.
 To use this feature add ``enable_pod_resource_service`` into kuryr.conf.
@@ -163,11 +164,11 @@ To use this feature add ``enable_pod_resource_service`` into kuryr.conf.
 [sriov]
 enable_pod_resource_service = True
-6. Use privileged user
+#. Use privileged user
-To make neutron ports active kuryr-k8s makes requests to neutron API to update
-ports with binding:profile information. Due to this it is necessary to make
-actions with privileged user with admin rights.
+To make neutron ports active kuryr-k8s makes requests to neutron API to
+update ports with binding:profile information. Due to this it is necessary
+to make actions with privileged user with admin rights.
 .. _NPWG spec: https://docs.openstack.org/kuryr-kubernetes/latest/specs/rocky/npwg_spec_support.html

View File

@@ -52,7 +52,7 @@ that is expected to be used for SR-IOV ports:
 | updated_at | 2018-11-21T10:57:34Z |
 +-------------------+--------------------------------------------------+
-1. Create deployment definition <DEFINITION_FILE_NAME> with one SR-IOV
+#. Create deployment definition <DEFINITION_FILE_NAME> with one SR-IOV
 interface (apart from default one). Deployment definition file might look
 like:
@@ -73,7 +73,7 @@ that is expected to be used for SR-IOV ports:
 k8s.v1.cni.cncf.io/networks: net-sriov
 spec:
 containers:
-- name: nginx-sriov
+1. name: nginx-sriov
 image: nginx
 resources:
 requests:
@@ -85,16 +85,16 @@ that is expected to be used for SR-IOV ports:
 cpu: "1"
 memory: "512Mi"
-Here ``net-sriov`` is the name of ``NetworkAttachmentDefinition``
-created before.
+Here ``net-sriov`` is the name of ``NetworkAttachmentDefinition`` created
+before.
-2. Create deployment with the following command:
+#. Create deployment with the following command:
 .. code-block:: console
 $ kubectl create -f <DEFINITION_FILE_NAME>
-3. Wait for the pod to get to Running phase.
+#. Wait for the pod to get to Running phase.
 .. code-block:: console
@@ -102,7 +102,7 @@ created before.
 NAME READY STATUS RESTARTS AGE
 nginx-sriov-558db554d7-rvpxs 1/1 Running 0 1m
-4. If your image contains ``iputils`` (for example, busybox image), you can
+#. If your image contains ``iputils`` (for example, busybox image), you can
 attach to the pod and check that the correct interface has been attached to
 the Pod.
@@ -114,7 +114,7 @@ created before.
 You should see default and eth1 interfaces. eth1 is the SR-IOV VF interface.
-.. code-block:: console
+.. code-block::
 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
@@ -135,15 +135,16 @@ You should see default and eth1 interfaces. eth1 is the SR-IOV VF interface.
 inet6 fe80::f816:3eff:fea8:55af/64 scope link
 valid_lft forever preferred_lft forever
-4.1. Alternatively you can login to k8s worker and do the same from the host
-system. Use the following command to find out ID of running SR-IOV container:
+Alternatively you can login to k8s worker and do the same from the host
+system. Use the following command to find out ID of running SR-IOV
+container:
 .. code-block:: console
 $ docker ps
-Suppose that ID of created container is ``eb4e10f38763``. Use the following
-command to get PID of that container:
+Suppose that ID of created container is ``eb4e10f38763``.
+Use the following command to get PID of that container:
 .. code-block:: console
@@ -186,7 +187,7 @@ You should see default and eth1 interfaces. eth1 is the SR-IOV VF interface.
 In our example sriov interface has address 192.168.2.6
-5. Use neutron CLI to check the port with exact address has been created on
+#. Use neutron CLI to check the port with exact address has been created on
 neutron:
 .. code-block:: console
@@ -235,12 +236,12 @@ with the following command:
 | updated_at | 2018-11-26T09:13:07Z |
 +-----------------------+----------------------------------------------------------------------------+
-The port would have the name of the pod, ``compute::kuryr::sriov`` for device
-owner and 'direct' vnic_type. Verify that IP and MAC addresses of the port
-match the ones on the container. Currently the neutron-sriov-nic-agent does
-not properly detect SR-IOV ports assigned to containers. This means that direct
-ports in neutron would always remain in *DOWN* state. This doesn't affect the
-feature in any way other than cosmetically.
+The port would have the name of the pod, ``compute::kuryr::sriov`` for
+device owner and 'direct' vnic_type. Verify that IP and MAC addresses of the
+port match the ones on the container. Currently the neutron-sriov-nic-agent
+does not properly detect SR-IOV ports assigned to containers. This means
+that direct ports in neutron would always remain in *DOWN* state. This
+doesn't affect the feature in any way other than cosmetically.
 .. _sriov-device-plugin: https://docs.google.com/document/d/1Ewe9Of84GkP0b2Q2PC0y9RVZNkN2WeVEagX9m99Nrzc

View File

@@ -5,7 +5,7 @@ Boot VM with a Trunk Port
 To create a VM that makes use of the Neutron Trunk port support, the next
 steps can be followed:
-1. Use the demo tenant and create a key to be used to log in into the overcloud
+#. Use the demo tenant and create a key to be used to log in into the overcloud
 VM:
 .. code-block:: console
@@ -14,14 +14,14 @@ steps can be followed:
 $ openstack keypair create demo > id_rsa_demo
 $ chmod 600 id_rsa_demo
-2. Ensure the demo default security group allows ping and ssh access:
+#. Ensure the demo default security group allows ping and ssh access:
 .. code-block:: console
 $ openstack security group rule create --protocol icmp default
 $ openstack security group rule create --protocol tcp --dst-port 22 default
-3. Download and import an image that allows vlans, as cirros does not support
+#. Download and import an image that allows vlans, as cirros does not support
 it:
 .. code-block:: console
@@ -29,7 +29,7 @@ steps can be followed:
 $ wget http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2
 $ openstack image create --container-format bare --disk-format qcow2 --file CentOS-7-x86_64-GenericCloud.qcow2 centos7
-4. Create a port for the overcloud VM and create the trunk with that port as
+#. Create a port for the overcloud VM and create the trunk with that port as
 the parent port (untagged traffic):
 .. code-block:: console
@@ -37,7 +37,7 @@ steps can be followed:
 $ openstack port create --network private --security-group default port0
 $ openstack network trunk create --parent-port port0 trunk0
-5. Create the overcloud VM and assign a floating ip to it to be able to log in
+#. Create the overcloud VM and assign a floating ip to it to be able to log in
 into it:
 .. code-block:: console
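For illustration, the boot and floating IP assignment typically look like the
following (flavor, external network, and server names are assumptions based on
the earlier steps; the --nic option expects the id of port0):

.. code-block:: console

   $ openstack server create --image centos7 --flavor m1.medium \
       --key-name demo --nic port-id=PORT0_ID overcloud-vm
   $ openstack floating ip create --port port0 public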