Review OVN pages

Mostly improve upon words and formatting.

Only significant additions were to the main page
(index):

 * Introduction

 * The deployment commands were converted to an
   overlay as per documentation policy. See the
   shared-filesystem-services page as a model.

 * Restructure (headings)

Bug citation LP #1857026 was removed from the
internal-dns page as it appears to have been
sufficiently resolved.

Rationalise the ha page where OVN is mentioned.

Change-Id: I05df65d9fbb45cd07a7827be3417a98a51302bb9
This commit is contained in:
Peter Matulis 2022-08-03 17:09:47 -04:00
parent 6713dd89b6
commit 6ea929f1b8
8 changed files with 224 additions and 180 deletions


@ -692,18 +692,13 @@ availability using DVR`_ in the Neutron documentation for more information.
that the components needed for their operation are all HA (RabbitMQ, Neutron
API, and MySQL).
.. _ha_ovn:
OVN
~~~
`Open Virtual Network`_ (OVN) complements the existing capabilities of OVS by
adding native support for virtual network abstractions, such as virtual L2 and
L3 overlays and security groups.
.. important::
OVN is available as an option starting with Ubuntu 20.04 LTS on OpenStack
Ussuri. The use of OVN obviates the need for the neutron-gateway and
neutron-openvswitch charms.
For general information on OVN, refer to the main :doc:`networking/ovn/index`
page.
Control plane HA
^^^^^^^^^^^^^^^^
@ -803,9 +798,6 @@ deployment of OVN with MySQL 8:
vault/0* active idle 3/lxd/2 10.246.114.74 8200/tcp Unit is ready (active: true, mlock: disabled)
vault-mysql-router/0* active idle 10.246.114.74 Unit is ready
Other items of interest
-----------------------


@ -2,23 +2,22 @@
Enabling DPDK with OVN
======================
The OVN chassis can be configured to use experimental DPDK userspace network
acceleration.
.. note::
For general information on OVN, refer to the main :doc:`index` page.
.. note::
Instances are required to be attached to an external network (also known as
a provider network) for connectivity. OVN supports distributed DHCP for
provider networks. For OpenStack workloads, the use of `Nova config drive`_
is required to provide metadata to instances.
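Config drive is requested per instance at boot time. A minimal sketch (the
flavor, key, and network names are placeholders):

.. code-block:: none

   openstack server create --flavor my-flavor --key-name my-key \
     --config-drive true --network my-provider-net my-instance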
Prerequisites
-------------
To use the feature you need to use a supported CPU architecture and network
interface card (NIC) hardware. Please consult the `DPDK supported hardware
@ -31,34 +30,34 @@ provisioning layer (for example `MAAS`_).
Example:
.. code-block:: none
default_hugepagesz=1G hugepagesz=1G hugepages=64 intel_iommu=on iommu=pt
For the communication between the host userspace networking stack and the guest
virtual NIC driver to work the instances need to be configured to use
hugepages. For OpenStack this can be accomplished by `Customising instance huge
pages allocations`_.
Example:
.. code-block:: none
openstack flavor set m1.large --property hw:mem_page_size=large
By default, the charms will configure Open vSwitch/DPDK to consume one
processor core + 1G of RAM from each NUMA node on the unit being deployed. This
can be tuned using the ``dpdk-socket-memory`` and ``dpdk-socket-cores``
configuration options.
.. note::
Ensure that the value of ``dpdk-socket-memory`` is large enough to
accommodate the MTU size being used. For more information refer to
`DPDK shared memory calculations`_.
The userspace kernel driver can be configured using the ``dpdk-driver``
configuration option. See ``config.yaml`` for more details.
.. note::
@ -66,7 +65,7 @@ configuration option. See config.yaml for more details.
Open vSwitch, and subsequently interrupt instance connectivity.
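As an illustrative sketch only (the values shown are examples, not
recommendations, and the ovn-chassis application is assumed to be named
'ovn-chassis-dpdk' as in the bundle excerpts below), these options can be set
with :command:`juju config`:

.. code-block:: none

   juju config ovn-chassis-dpdk dpdk-socket-memory=2048 dpdk-socket-cores=2
   juju config ovn-chassis-dpdk dpdk-driver=vfio-pci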
Charm configuration
-------------------
The below example bundle excerpt will enable the use of DPDK for an OVN
deployment.
@ -77,6 +76,7 @@ deployment.
options:
enable-dpdk: True
bridge-interface-mappings: br-ex:00:53:00:00:00:42
ovn-chassis:
options:
enable-dpdk: False
@ -98,17 +98,17 @@ deployment.
DPDK-enabled nodes.
DPDK bonding
~~~~~~~~~~~~
Once network interface cards are bound to DPDK they will be invisible to the
standard Linux kernel network stack, and subsequently it is not possible to use
standard system tools to configure bonding.
For DPDK interfaces the charm supports configuring bonding in Open vSwitch.
This is accomplished via the ``dpdk-bond-mappings`` and ``dpdk-bond-config``
configuration options. Example:
.. code-block:: yaml
ovn-chassis-dpdk:
options:
@ -116,6 +116,7 @@ This is accomplished through the ``dpdk-bond-mappings`` and
bridge-interface-mappings: br-ex:dpdk-bond0
dpdk-bond-mappings: "dpdk-bond0:00:53:00:00:00:42 dpdk-bond0:00:53:00:00:00:51"
dpdk-bond-config: ":balance-slb:off:fast"
ovn-chassis:
options:
enable-dpdk: False
@ -130,5 +131,5 @@ addresses provided will be used to build a bond identified by a port named
.. _Nova config drive: https://docs.openstack.org/nova/latest/user/metadata.html#config-drives
.. _DPDK supported hardware page: http://core.dpdk.org/supported/
.. _MAAS: https://maas.io/
.. _Customising instance huge pages allocations: https://docs.openstack.org/nova/latest/admin/huge-pages.html#customizing-instance-huge-pages-allocations
.. _DPDK shared memory calculations: https://docs.openvswitch.org/en/latest/topics/dpdk/memory/#shared-memory-calculations


@ -2,47 +2,43 @@
Setting up external connectivity with OVN
=========================================
OVN provides a more flexible way of configuring external Layer 3 networking
than the legacy ML2+DVR configuration, as not every chassis is required to
have direct external connectivity. This plays nicely with Layer 3-only
datacentre fabrics (RFC 7938).
.. note::
For general information on OVN, refer to the main :doc:`index` page.
Networks for use with external Layer 3 connectivity should have mappings on
chassis located in the vicinity of the datacentre border gateways. Having two
or more chassis with mappings for a Layer 3 network will have OVN automatically
configure highly available routers with liveness detection provided by the
Bidirectional Forwarding Detection (BFD) protocol.
Chassis without direct external mapping to an external Layer 3 network will
forward traffic through a tunnel to one of the chassis acting as a gateway for
that network.
Networks for use with external Layer 2 connectivity should have mappings present
on all chassis with potential to host the consuming payload.
.. note::
It is not necessary nor recommended to add mapping for external Layer 3
networks to all chassis. Doing so will create a scaling problem at the
physical network layer that needs to be resolved with globally shared Layer 2
(does not scale) or tunneling at the top-of-rack switch layer (adds
complexity) and is generally not a recommended configuration.
Example configuration with explicit bridge-interface-mappings:
.. code-block:: none
juju config neutron-api flat-network-providers=physnet1
juju config ovn-chassis ovn-bridge-mappings=physnet1:br-provider
@ -58,7 +54,7 @@ Example configuration with explicit bridge-interface-mappings:
It is also possible to influence the scheduling of routers on a per named
ovn-chassis application basis. The benefit of this method is that you do not
need to provide MAC addresses when configuring Layer 3 connectivity in the
charm. For example:
.. code-block:: none
@ -74,6 +70,3 @@ charm. For example:
In the above example units of the ovn-chassis-border application with
appropriate bridge mappings will be eligible for router scheduling.


@ -2,21 +2,20 @@
Implementing hardware offloading with OVN
=========================================
The OVN chassis can be configured to prepare network interface cards (NICs) for
use with hardware offloading and make them available to OpenStack.
.. note::
For general information on OVN, refer to the main :doc:`index` page.
.. caution::
This feature is to be considered Tech Preview. OVN has more stringent
requirements for match/action support in the hardware than for example
Neutron ML2+OVS. Make sure to acquire hardware with appropriate support.
Depending on hardware vendor, it may be required to install third-party
drivers (DKMS) in order to successfully use this feature.
Hardware offload support makes use of SR-IOV as an underlying mechanism to
@ -24,14 +23,14 @@ accelerate the data path between a virtual machine instance and the NIC
hardware. But as opposed to traditional SR-IOV support the accelerated ports
can be connected to the Open vSwitch integration bridge which allows instances
to take part in regular tenant networks. The NIC also supports hardware
offloading of tunnel encapsulation and decapsulation.
With OVN, the Layer 3 routing features are implemented as flow rules in Open
vSwitch. This in turn may allow Layer 3 routing to also be offloaded to NICs
with appropriate driver and firmware support.
Prerequisites
-------------
* Ubuntu 22.04 LTS or later
@ -46,7 +45,7 @@ Prerequisites
Please refer to the :doc:`sriov` page for information on kernel configuration.
Charm configuration
-------------------
The below example bundle excerpt will enable hardware offloading for an OVN
deployment.
@ -54,42 +53,54 @@ deployment.
.. code-block:: yaml
applications:
ovn-chassis:
charm: ch:ovn-chassis
channel: $CHANNEL_OVN
options:
enable-hardware-offload: true
sriov-numvfs: "enp3s0f0:32 enp3s0f1:32"
neutron-api:
charm: ch:neutron-api
channel: $CHANNEL_OPENSTACK
options:
enable-hardware-offload: true
nova-compute:
charm: ch:nova-compute
channel: $CHANNEL_OPENSTACK
options:
pci-passthrough-whitelist: '{"address": "*:03:*", "physical_network": null}'
.. caution::
After deploying the above example, the machines hosting ovn-chassis
units must be rebooted for the changes to take effect.
Boot an instance
----------------
OpenStack can now be directed to boot an instance and attach it to a hardware
offloaded port.

First create a port with ``vnic-type`` 'direct' and ``binding-profile`` with
'switchdev' capabilities:
.. code-block:: none
openstack port create --network my-network --vnic-type direct \
--binding-profile '{"capabilities": ["switchdev"]}' direct_port1
Then create an instance connected to the newly created port:
.. code-block:: none
openstack server create --flavor my-flavor --key-name my-key \
--nic port-id=direct_port1 my-instance
Validate that traffic is offloaded
----------------------------------
The `traffic control monitor`_ command can be used to observe updates to
filters which is one of the mechanisms used to program the NIC switch hardware.


@ -5,79 +5,131 @@ Open Virtual Network (OVN)
Overview
--------
Open Virtual Network (OVN) is an SDN platform. When used with OpenStack the
overall solution is known as "Neutron ML2+OVN". OVN extends the existing
capabilities of a solution based solely on Open vSwitch, which is known as
"Neutron ML2+OVS".
OVN is implemented via a suite of charms:
* neutron-api-plugin-ovn
* ovn-central
* ovn-chassis (or ovn-dedicated-chassis)
.. note::
The OpenStack Charms project supports OVN starting with OpenStack Train, and
uses it by default starting with OpenStack Ussuri.
Instructions for migrating non-OVN clouds to OVN are found on the
:doc:`../../../project/procedures/ovn-migration` page.
Due to `feature gaps with ML2+OVS`_, the OpenStack Charms project continues
to support ML2+OVS.
Deployment
----------
Certificates must be managed by Vault.
.. note::
OVN is typically deployed alongside other core components via a
comprehensive cloud bundle. For example, see the `openstack-base bundle`_.
The below overlay bundle encapsulates what is needed in terms of the
deployment.
.. important::
An overlay's parameters should be adjusted as per the local environment
(e.g. the machine mappings). In particular, the following placeholders must
be replaced with actual values:
* ``$SERIES``
* ``$OPENSTACK_ORIGIN``
* ``$CHANNEL_OVN``
Replace ``$SERIES`` with the Ubuntu release running on the cloud nodes (e.g.
'jammy'). For ``$OPENSTACK_ORIGIN`` see the corresponding charm options.
For channel information see the :doc:`../../../project/charm-delivery` page.
.. code-block:: yaml
series: $SERIES
machines:
  '0':
  '1':
  '2':
relations:
- - neutron-api-plugin-ovn:certificates
  - vault:certificates
- - neutron-api-plugin-ovn:neutron-plugin
  - neutron-api:neutron-plugin-api-subordinate
- - neutron-api-plugin-ovn:ovsdb-cms
  - ovn-central:ovsdb-cms
- - ovn-central:certificates
  - vault:certificates
- - ovn-chassis:ovsdb
  - ovn-central:ovsdb
- - ovn-chassis:certificates
  - vault:certificates
- - ovn-chassis:nova-compute
  - nova-compute:neutron-plugin
applications:
  neutron-api:
    options:
      manage-neutron-plugin-legacy-mode: false
  neutron-api-plugin-ovn:
    charm: ch:neutron-api-plugin-ovn
    channel: $CHANNEL_OVN
  ovn-central:
    charm: ch:ovn-central
    channel: $CHANNEL_OVN
    num_units: 3
    options:
      source: $OPENSTACK_ORIGIN
    to:
    - '0'
    - '1'
    - '2'
  ovn-chassis:
    charm: ch:ovn-chassis
    channel: $CHANNEL_OVN
TLS and Vault
~~~~~~~~~~~~~
With the OpenStack charms, OVN requires Vault, which is the chosen software for
managing the TLS certificates that secure control plane communication. This is
achieved via the ``ovn-chassis:certificates vault:certificates`` relation (as
shown in the overlay).
For certificate management information see the :doc:`../../security/tls` page.
See the `vault charm`_ for details on Vault itself.
Data plane
~~~~~~~~~~
The OVN components used for the data plane are deployed by the ovn-chassis
subordinate charm, in conjunction with the nova-compute principal charm. This
is achieved via the ``ovn-chassis:nova-compute nova-compute:neutron-plugin``
relation (as shown in the overlay).
To obtain a dedicated software gateway, the data plane components should be
deployed with the principal `ovn-dedicated-chassis charm`_.
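As a sketch only (the unit count is illustrative, and the endpoint names are
assumed to mirror the ovn-chassis relations shown in the overlay), a dedicated
gateway could be added with:

.. code-block:: none

   juju deploy -n 2 --channel $CHANNEL_OVN ovn-dedicated-chassis
   juju add-relation ovn-dedicated-chassis:certificates vault:certificates
   juju add-relation ovn-dedicated-chassis:ovsdb ovn-central:ovsdb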
High availability
~~~~~~~~~~~~~~~~~

OVN is natively HA. See the :ref:`OVN section <ha_ovn>` of the Infrastructure
high availability page.
Configuration
-------------
@ -94,7 +146,7 @@ and the subset of configuration specific to OVN is done through the
Usage
-----
Create networks, routers, and subnets through the OpenStack API or CLI as you
normally would.
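For example, a typical self-service topology can be sketched as follows (all
names and the subnet range are placeholders, and the external network
'ext_net' is assumed to exist already):

.. code-block:: none

   openstack network create net1
   openstack subnet create --network net1 --subnet-range 192.0.2.0/24 subnet1
   openstack router create router1
   openstack router add subnet router1 subnet1
   openstack router set router1 --external-gateway ext_net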
The OVN ML2 driver will translate the OpenStack network constructs into high
@ -119,12 +171,11 @@ Specific topics on OVN usage are given below:
queries
.. LINKS
.. _vault charm: https://charmhub.io/vault
.. _ovn-dedicated-chassis charm: https://charmhub.io/ovn-dedicated-chassis
.. _neutron-api charm: https://charmhub.io/neutron-api
.. _neutron-api-plugin-ovn charm: https://charmhub.io/neutron-api-plugin-ovn
.. _networking-ovn plugin: https://docs.openstack.org/networking-ovn/latest/
.. _OpenStack Base bundle: https://github.com/openstack-charmers/openstack-bundles/tree/master/development/openstack-base-bionic-ussuri-ovn
.. _gaps from ML2/OVS: https://docs.openstack.org/neutron/latest/ovn/gaps.html
.. _OVN section of the Infrastructure high availability: https://docs.openstack.org/charm-guide/latest/admin/ha.html#ovn
.. _feature gaps with ML2+OVS: https://docs.openstack.org/neutron/latest/ovn/gaps.html
.. _Toward Convergence of ML2+OVS+DVR and OVN: http://specs.openstack.org/openstack/neutron-specs/specs/ussuri/ml2ovs-ovn-convergence.html
.. _openstack-base bundle: https://github.com/openstack-charmers/openstack-bundles/blob/master/stable/openstack-base/bundle.yaml


@ -2,39 +2,33 @@
Setting up internal DNS resolution with OVN
===========================================
OVN supports Neutron internal DNS resolution.
.. note::
For general information on OVN, refer to the main :doc:`index` page.
To configure:
.. code-block:: none
juju config neutron-api enable-ml2-dns=true
juju config neutron-api dns-domain=openstack.example.
juju config neutron-api-plugin-ovn dns-servers="1.1.1.1 8.8.8.8"
.. important::
The value for the ``dns-domain`` configuration option must not be set to
'openstack.local.' as doing so will effectively disable the feature.
The provided value must also end with a '.' (dot).
When you set ``enable-ml2-dns`` to 'true' and set a value for ``dns-domain``,
Neutron will add details such as instance name and DNS domain name to each
individual Neutron port associated with instances. The OVN ML2 driver will
populate the ``DNS`` table of the Northbound and Southbound databases:
.. code-block:: console
# ovn-sbctl list DNS
_uuid : 2e149fa8-d27f-4106-99f5-a08f60c443bf
@ -46,13 +40,10 @@ populate the ``DNS`` table of the Northbound and Southbound databases:
On the chassis, OVN creates flow rules to redirect UDP port 53 packets (DNS)
to the local ``ovn-controller`` process:
.. code-block:: console
cookie=0xdeaffed, duration=77.575s, table=22, n_packets=0, n_bytes=0, idle_age=77, priority=100,udp6,metadata=0x2,tp_dst=53 actions=controller(userdata=00.00.00.06.00.00.00.00.00.01.de.10.00.00.00.64,pause),resubmit(,23)
cookie=0xdeaffed, duration=77.570s, table=22, n_packets=0, n_bytes=0, idle_age=77, priority=100,udp,metadata=0x2,tp_dst=53 actions=controller(userdata=00.00.00.06.00.00.00.00.00.01.de.10.00.00.00.64,pause),resubmit(,23)
The local ``ovn-controller`` process then decides if it should respond to the
DNS query directly or if it needs to be forwarded to the real DNS server.
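As a hypothetical check (instance name is a placeholder; the domain follows
the ``dns-domain`` value configured above), forward resolution can be
exercised from within an instance on the same network:

.. code-block:: none

   dig +short my-instance.openstack.example.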


@ -9,8 +9,7 @@ leader to operate.
.. note::
For general information on OVN, refer to the main :doc:`index` page.
The leader of the Northbound and Southbound databases does not have to coincide
with the charm leader, so before querying databases you must consult the output
@ -33,7 +32,7 @@ In the above example 'ovn-central/0' is the leader for the Northbound DB,
leader for the Southbound DB.
OVSDB Cluster status
--------------------
The cluster status as conveyed through :command:`juju status` is updated each
time a hook is run, in some circumstances it may be necessary to get an
@ -49,7 +48,7 @@ To get an immediate view of the database clusters:
/var/run/ovn/ovnsb_db.ctl cluster/status OVN_Southbound'
Querying DBs
------------
To query the individual databases:
@ -86,9 +85,10 @@ use port number '16642'. This is due to OVN RBAC being enabled on the standard
show
Data plane flow tracing
-----------------------
Connect (by SSH) to one of the chassis units to get access to various
diagnostic tools:
.. code-block:: none
@ -117,7 +117,7 @@ SSH into one of the chassis units to get access to various diagnostic tools:
.. note::
OVN makes use of OpenFlow 1.3 (and newer) and as such the charm configures
bridges to use these protocols. To be able to successfully use the
:command:`ovs-ofctl` command you must specify the OpenFlow version as shown
in the example above.
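For instance, a sketch of inspecting flows on the integration bridge with an
explicit protocol version:

.. code-block:: none

   ovs-ofctl -O OpenFlow13 dump-flows br-int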


@ -2,7 +2,7 @@
Using SR-IOV with OVN
=====================
Single root I/O virtualisation (SR-IOV) enables splitting a single physical
network port into multiple virtual network ports known as virtual functions
(VFs). The division is done at the PCI level which allows attaching the VF
directly to a virtual machine instance, bypassing the networking stack of the
@ -10,8 +10,7 @@ hypervisor hosting the instance.
.. note::
For general information on OVN, refer to the main :doc:`index` page.
The main use case for this feature is to support applications with high
bandwidth requirements. For such applications the normal plumbing through the
@ -21,12 +20,12 @@ It is possible to configure chassis to prepare network interface cards (NICs)
for use with SR-IOV and make them available to OpenStack.
Prerequisites
-------------
To use the feature you need to use a NIC with support for SR-IOV.
Machines need to be pre-configured with appropriate kernel command-line
parameters. The charms do not handle this facet of configuration and it is
expected that the user configure this either manually or through the bare metal
provisioning layer (for example `MAAS`_). Example:
@ -35,10 +34,10 @@ provisioning layer (for example `MAAS`_). Example:
intel_iommu=on iommu=pt probe_vf=0
Charm configuration
-------------------
Enable SR-IOV, map physical network name 'physnet2' to the physical port named
'enp3s0f0' and create four virtual functions on it:
.. code-block:: none
@ -68,7 +67,7 @@ and ``product_id`` of the virtual functions:
In the above example ``vendor_id`` is '8086' and ``product_id`` is '10ed'.
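One possible way to discover these identifiers on a chassis is with
:command:`lspci`; the bracketed pair at the end of each line is
vendor:product (the output line shown is illustrative only):

.. code-block:: none

   lspci -nn | grep -i 'virtual function'
   03:10.0 Ethernet controller [0200]: Intel Corporation 82599 Ethernet Controller Virtual Function [8086:10ed]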
Add a mapping between physical network name, physical port, and Open vSwitch
bridge:
.. code-block:: none
@ -79,33 +78,39 @@ bridge:
.. note::
The above configuration allows OVN to configure an 'external' port on one
of the chassis for providing DHCP and metadata to instances connected
directly to the network through SR-IOV.
For OpenStack to make use of the VFs, the ``neutron-sriov-agent`` needs to talk
to RabbitMQ:
.. code-block:: none
juju add-relation ovn-chassis:amqp rabbitmq-server:amqp
OpenStack Nova also needs to know which PCI devices it is allowed to pass
through to instances:
.. code-block:: none
juju config nova-compute pci-passthrough-whitelist='{"vendor_id":"8086", "product_id":"10ed", "physical_network":"physnet2"}'
Boot an instance
----------------
OpenStack can now be directed to boot an instance and attach it to an SR-IOV
port.

First create a port with ``vnic-type`` 'direct':
.. code-block:: none
openstack port create --network my-network --vnic-type direct my-port
Then create an instance connected to the newly created port:
.. code-block:: none
openstack server create --flavor my-flavor --key-name my-key \
--nic port-id=my-port my-instance