Remove networking pages

Remove the Networking pages as they are being migrated
to the charm-guide.

Some links need to be changed as a consequence.

Add HTML redirects (and tests).

Depends-On: I39304470b71742a6cf01aa04cd80e2c2de1cdf9f
Change-Id: Ibc9db2ac43c24f9568d0a2446907ba517856a2df
Peter Matulis 2022-06-13 14:31:00 -04:00
parent 41823a4651
commit 52849548d6
9 changed files with 14 additions and 1331 deletions


@@ -33,3 +33,7 @@ RedirectMatch 301 ^/project-deploy-guide/charm-deployment-guide/([^/]+)/cinder-v
RedirectMatch 301 ^/project-deploy-guide/charm-deployment-guide/([^/]+)/encryption-at-rest.html$ /charm-guide/$1/admin/storage/encryption-at-rest.html
RedirectMatch 301 ^/project-deploy-guide/charm-deployment-guide/([^/]+)/manila-ganesha.html$ /charm-guide/$1/admin/storage/shared-filesystem-services.html
RedirectMatch 301 ^/project-deploy-guide/charm-deployment-guide/([^/]+)/swift.html$ /charm-guide/$1/admin/storage/swift.html
RedirectMatch 301 ^/project-deploy-guide/charm-deployment-guide/([^/]+)/app-ovn.html$ /charm-guide/$1/admin/networking/ovn.html
RedirectMatch 301 ^/project-deploy-guide/charm-deployment-guide/([^/]+)/app-octavia.html$ /charm-guide/$1/admin/networking/load-balancing.html
RedirectMatch 301 ^/project-deploy-guide/charm-deployment-guide/([^/]+)/app-hardware-offload.html$ /charm-guide/$1/admin/networking/hardware-offloading.html
RedirectMatch 301 ^/project-deploy-guide/charm-deployment-guide/([^/]+)/configure-bridge.html$ /charm-guide/$1/admin/networking/interface-config.html


@@ -1,142 +0,0 @@
===================
Hardware offloading
===================
Overview
--------
As of the 20.05 release, the OpenStack charms support configuration of Open
vSwitch hardware offloading with Mellanox ConnectX-5 NICs. Hardware offloading
can be used to accelerate VLAN and VXLAN networking using the capabilities of
the underlying network card to achieve much higher performance than with
virtio-based VM ports.
See the Neutron documentation on `OVS hardware offload`_ for background
information.
For OVN-specific information on hardware offloading see the `OVN`_ page.
.. warning::
Hardware offloading cannot be used with either SR-IOV or DPDK networking
support as provided by the OpenStack charms.
Prerequisites
-------------
* Ubuntu 18.04 LTS or later
* Linux kernel >= 5.3
* Open vSwitch >= 2.11
* OpenStack Stein or later
* Mellanox ConnectX-5 NICs using recent firmware (>= 16.26.4012)
.. note::
Hardware offload does not currently support offloading of Neutron security
group rules - experimental support is expected in Open vSwitch 2.13 when
used with Linux >= 5.4 and as-yet-unreleased NIC firmware. It is recommended
that port security be disabled on Neutron networks used for hardware
offloading due to the performance overhead of enforcing security
group rules in userspace.
Deployment
----------
MAAS - Hardware Enablement Kernel
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
As a more recent Linux kernel than that provided as part of Ubuntu 18.04 LTS
is required to support this feature, machines with compatible network cards
must be commissioned and configured in MAAS to use the Ubuntu 18.04 LTS
Hardware Enablement (HWE) kernel rather than the standard release kernel.
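Once such a machine has been deployed you can confirm that the running kernel
satisfies the version requirement listed above (a simple check, not part of the
original page, assuming the machines host the nova-compute application):

.. code-block:: none

   # The reported version should be >= 5.3
   juju run --application nova-compute 'uname -r'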
MAAS - SR-IOV VF-LAG
~~~~~~~~~~~~~~~~~~~~
Mellanox `SR-IOV VF-LAG`_ provides hardware link-aggregation (LAG) for
hardware offloaded ports and is recommended for deployment as it avoids the
need to pass two hardware offloaded ports to each VM for resilience. This
feature is configured in the underlying NIC using standard Linux bonding as
configured through MAAS.
.. note::
VF-LAG can only be used with NIC ports that reside on the same underlying
NIC.
.. note::
Use of VF-LAG reduces the offloaded port capacity of the card by 50%.
Charm configuration
~~~~~~~~~~~~~~~~~~~
Hardware offload support is enabled using the ``enable-hardware-offload``
option provided by the neutron-api and neutron-openvswitch charms.
Enabling hardware offloading requires configuration of VF representor ports
on the NICs supporting the hardware offload - these are used to route network
packets that have no matching flow rules to the OVS userspace daemon for
handling and subsequent programming into the hardware offloaded flows. This is
supported via use of the ``sriov-numvfs`` option provided by the
neutron-openvswitch charm.
Finally the ``openvswitch`` firewall driver must be used with hardware
offloading. Eventually it will be possible to offload security group rules
using this driver (see note above).
The following overlay may be used with the OpenStack base deployment bundle:
.. code-block:: yaml

   series: bionic
   applications:
     neutron-openvswitch:
       charm: cs:neutron-openvswitch
       options:
         enable-hardware-offload: true
         sriov-numvfs: "enp3s0f0:32 enp3s0f1:32"
         firewall-driver: openvswitch
     neutron-api:
       charm: cs:neutron-api
       options:
         enable-hardware-offload: true
     nova-compute:
       options:
         pci-passthrough-whitelist: '{"address": "*:03:*", "physical_network": null}'
In this overlay ``enp3s0f0`` and ``enp3s0f1`` are two ports on the same
Mellanox ConnectX-5 card and are configured as a Linux bond ``bond1`` to enable
VF-LAG for resilience and performance. ``bond1`` is also configured with the
network interface used for VXLAN overlay traffic to allow full offloading of
networks of this type.
The nova-compute charm is configured to use the VF functions provided by the
network cards using the ``pci-passthrough-whitelist`` option. The above example
demonstrates configuration for VXLAN overlay networking.
.. caution::
After deploying the above example the machines hosting neutron-openvswitch
units must be rebooted for the changes to take effect.
Creating hardware offloaded ports
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Hardware offloaded ports must be created via Neutron and then passed to Nova
for use by VMs:
.. code-block:: none
openstack port create --network private --vnic-type=direct \
--binding-profile '{"capabilities": ["switchdev"]}' direct_port1
openstack server create --flavor m1.small --image bionic \
--nic port-id=direct_port1 vm1
The image used for the VM must include the Mellanox kernel driver. Ubuntu 18.04
LTS (or later) cloud images include this driver by default.
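From inside a running VM you can confirm that the Mellanox driver is present
and bound to the virtual function (a sanity check, not from the original page;
``mlx5_core`` is the in-tree driver for ConnectX-5 NICs):

.. code-block:: none

   # Inside the guest: verify the ConnectX-5 driver is available and in use
   modinfo mlx5_core | head -n 3
   lspci -nnk | grep -A 3 Mellanox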
.. LINKS
.. _OVS hardware offload: https://docs.openstack.org/neutron/stein/admin/config-ovs-offload.html
.. _SR-IOV VF-LAG: https://docs.mellanox.com/pages/releaseview.action?pageId=25133702
.. _OVN: app-ovn.html#configuration


@@ -1,307 +0,0 @@
==============
Load balancing
==============
Overview
--------
OpenStack Octavia can be deployed to provide load balancing services as part of
an OpenStack cloud. This service supersedes the LBaaS v2 services provided
directly through Neutron in earlier releases; when Octavia is deployed a proxy
service is configured to proxy LBaaS v2 API calls directly to Octavia.
.. note::
Octavia is supported by Charmed OpenStack starting with OpenStack Rocky.
Unlike the LBaaS v2 support in the OpenStack Charms, which placed haproxy
instances on neutron-gateway units, Octavia provisions cloud instances
(Amphorae) to provide LBaaS services.
.. warning::
There is no automatic migration path for Neutron LBaaS haproxy-on-host
configurations (as deployed under the existing support) to Octavia Amphora
configurations. New load balancers must be created in Octavia before the
octavia charm is related to the existing neutron-api charm. Floating IPs
can then be moved prior to deletion of existing LBaaS based balancers.
Deployment
----------
.. note::
Throughout the deployment, ensure that the value for ``openstack-origin``
matches the currently deployed OpenStack release.
Octavia uses OpenStack Barbican to store certificates for TLS termination on
load balancers. Barbican, in turn, uses Vault to securely store that data.
.. note::
For Vault deployment instructions see the `vault charm`_. For certificate
management information read the `Managing TLS certificates`_ section of this
guide.
To deploy Barbican:
.. code-block:: none
juju deploy barbican --config openstack-origin=cloud:bionic-rocky
juju deploy barbican-vault
juju add-relation barbican mysql
juju add-relation barbican rabbitmq-server
juju add-relation barbican keystone
juju add-relation barbican barbican-vault
juju add-relation barbican-vault vault
Octavia can then be deployed. Use the appropriate section depending on your
cloud's networking framework:
Neutron ML2+OVS
~~~~~~~~~~~~~~~
.. code-block:: none
juju deploy octavia --config openstack-origin=cloud:bionic-rocky
juju add-relation octavia rabbitmq-server
juju add-relation octavia mysql
juju add-relation octavia keystone
juju add-relation octavia neutron-openvswitch
juju add-relation octavia neutron-api
juju config neutron-api enable-ml2-port-security=True
juju deploy octavia-dashboard
juju add-relation octavia-dashboard openstack-dashboard
Neutron ML2+OVN
~~~~~~~~~~~~~~~
.. code-block:: none
juju deploy octavia --config openstack-origin=cloud:bionic-ussuri
juju add-relation octavia rabbitmq-server
juju add-relation octavia mysql
juju add-relation octavia keystone
juju add-relation octavia ovn-chassis
juju add-relation octavia neutron-api
juju config neutron-api enable-ml2-port-security=True
juju deploy octavia-dashboard
juju add-relation octavia-dashboard openstack-dashboard
.. note::
Octavia uses a Neutron network for communication between Octavia control
plane services and Octavia Amphorae; units will deploy into a 'blocked'
state until the configuration steps are executed.
Configuration
-------------
Generate certificates
~~~~~~~~~~~~~~~~~~~~~
Octavia uses client certificates for authentication and security of
communication between Amphorae (load balancers) and the Octavia control plane.
The commands below show how keys and certificates can be generated. These are
examples only; modify the parameters as required.
.. code-block:: none
mkdir -p demoCA/newcerts
touch demoCA/index.txt
touch demoCA/index.txt.attr
openssl genpkey -algorithm RSA -aes256 -pass pass:foobar -out issuing_ca_key.pem
openssl req -x509 -passin pass:foobar -new -nodes -key issuing_ca_key.pem \
-config /etc/ssl/openssl.cnf \
-subj "/C=US/ST=Somestate/O=Org/CN=www.example.com" \
-days 365 \
-out issuing_ca.pem
openssl genpkey -algorithm RSA -aes256 -pass pass:foobar -out controller_ca_key.pem
openssl req -x509 -passin pass:foobar -new -nodes \
-key controller_ca_key.pem \
-config /etc/ssl/openssl.cnf \
-subj "/C=US/ST=Somestate/O=Org/CN=www.example.com" \
-days 365 \
-out controller_ca.pem
openssl req \
-newkey rsa:2048 -nodes -keyout controller_key.pem \
-subj "/C=US/ST=Somestate/O=Org/CN=www.example.com" \
-out controller.csr
openssl ca -passin pass:foobar -config /etc/ssl/openssl.cnf \
-cert controller_ca.pem -keyfile controller_ca_key.pem \
-create_serial -batch \
-in controller.csr -days 365 -out controller_cert.pem
cat controller_cert.pem controller_key.pem > controller_cert_bundle.pem
This information is then provided to Octavia via charm configuration options:
.. code-block:: none
juju config octavia \
lb-mgmt-issuing-cacert="$(base64 issuing_ca.pem)" \
lb-mgmt-issuing-ca-private-key="$(base64 issuing_ca_key.pem)" \
lb-mgmt-issuing-ca-key-passphrase=foobar \
lb-mgmt-controller-cacert="$(base64 controller_ca.pem)" \
lb-mgmt-controller-cert="$(base64 controller_cert_bundle.pem)"
Resource configuration
~~~~~~~~~~~~~~~~~~~~~~
The charm will automatically create and maintain the resources required for
operation of the Octavia service when the `configure-resources` action is run
on the lead octavia unit:
.. code-block:: none
juju run-action --wait octavia/0 configure-resources
This action must be run before Octavia is fully operational.
Access to the Octavia load-balancer API is guarded by policies and end users
must have specific roles to gain access to the service. The charm will request
Keystone to pre-create these roles for you on deployment but you must assign the
roles to your end users as you see fit. Take a look at
`Octavia Policies`_.
The charm also allows the operator to pre-configure these resources to support
full custom configuration of the management network for Octavia. If you want
to manage these resources yourself you must set the `create-mgmt-network`
configuration option to false.
Network resources for use by Octavia must be tagged with Neutron resource tags
(typically by passing a '--tag' CLI parameter when creating resources - see the
OpenStack CLI documentation for more details) according to the following schema:
=========================== ====================== =========================================================
Resource Type Tag Description
=========================== ====================== =========================================================
Neutron Network charm-octavia Management network
Neutron Subnet charm-octavia Management network subnet
Neutron Router charm-octavia (Optional) Router for IPv6 RA or north/south mgmt traffic
Amphora Security Group charm-octavia Security group for Amphora ports
Controller Security Group charm-octavia-health Security group for Controller ports
=========================== ====================== =========================================================
Execution of the `configure-resources` action will detect the pre-configured
network resources in Neutron using tags and configure the Octavia service as
appropriate.
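If you choose to manage these resources yourself, tags can be applied at
creation time or added afterwards. A minimal sketch for the network and subnet
(the resource names here are illustrative and not mandated by the charm; only
the tags from the table above are significant, and the router and security
groups listed there need the same treatment):

.. code-block:: none

   openstack network create --tag charm-octavia lb-mgmt-net
   openstack subnet create --network lb-mgmt-net --subnet-range 172.16.0.0/24 \
     lb-mgmt-subnet
   openstack subnet set --tag charm-octavia lb-mgmt-subnet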
The UUID of the Nova flavor to use for Amphorae can be set using the
`custom-amp-flavor-id` configuration option.
Amphora image
~~~~~~~~~~~~~
Octavia uses Amphorae (cloud instances running HAProxy) to provide LBaaS
services; an appropriate image must be uploaded to Glance with the tag
`octavia-amphora`.
You can use the ``octavia-diskimage-retrofit`` tool to transform a stock Ubuntu
cloud image into an Octavia HAProxy Amphora image.
This tool is available as a snap and, for convenience, there is also a charm
that can transform Ubuntu images already present in your Glance image store.
Example usage:
.. code-block:: none
juju deploy glance-simplestreams-sync \
--config source=ppa:simplestreams-dev/trunk
juju deploy octavia-diskimage-retrofit \
--config amp-image-tag=octavia-amphora
juju add-relation glance-simplestreams-sync keystone
juju add-relation glance-simplestreams-sync rabbitmq-server
juju add-relation octavia-diskimage-retrofit glance-simplestreams-sync
juju add-relation octavia-diskimage-retrofit keystone
After the deployment has settled and ``glance-simplestreams-sync`` has
completed its initial image sync, you may ask an ``octavia-diskimage-retrofit``
unit to initiate the Amphora image retrofitting process.
This is accomplished by running an action on one of the units.
.. code-block:: none
juju run-action --wait octavia-diskimage-retrofit/leader retrofit-image
Octavia will use this image for all Amphora instances.
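To confirm that a retrofitted image has been published with the tag Octavia
looks for, the image list can be filtered by tag (a verification step, not part
of the original text; the image name shown will vary by deployment):

.. code-block:: none

   openstack image list --tag octavia-amphora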
.. warning::
It's important to keep the Amphora image up-to-date to ensure that LBaaS
services remain secure; this process is not covered in this document.
See the Octavia `operators maintenance`_ guide for more details.
Octavia user roles
------------------
To provide access to the Octavia API endpoints a load-balancer role must be
added to a user. For example:
.. code-block:: none
openstack role add --user-domain admin_domain --user admin \
--project-domain admin_domain --project admin \
load-balancer_admin
See `Managing Octavia User Roles`_ in the upstream documentation.
.. note::
Explicit user role assignments are required starting with OpenStack Wallaby.
Usage
-----
To deploy a basic HTTP load balancer using a floating IP for access:
.. code-block:: none
lb_vip_port_id=$(openstack loadbalancer create -f value -c vip_port_id --name lb1 --vip-subnet-id private_subnet)
# Re-run the following until lb1 shows ACTIVE and ONLINE statuses:
openstack loadbalancer show lb1
openstack loadbalancer listener create --name listener1 --protocol HTTP --protocol-port 80 lb1
openstack loadbalancer pool create --name pool1 --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTP
openstack loadbalancer healthmonitor create --delay 5 --max-retries 4 --timeout 10 --type HTTP --url-path /healthcheck pool1
openstack loadbalancer member create --subnet-id private_subnet --address 192.168.21.100 --protocol-port 80 pool1
openstack loadbalancer member create --subnet-id private_subnet --address 192.168.21.101 --protocol-port 80 pool1
floating_ip=$(openstack floating ip create -f value -c floating_ip_address ext_net)
openstack floating ip set --port $lb_vip_port_id $floating_ip
The example above assumes:
* The user and project executing the example has a subnet configured with the
name 'private_subnet' with the CIDR 192.168.21.0/24
* An external network definition for floating IPs has been configured by the
cloud operator with the name 'ext_net'
* Two instances running HTTP services attached to 'private_subnet' on IP
addresses 192.168.21.{100,101} exposing a health check on '/healthcheck'
The example is also most applicable in cloud deployments that use overlay
networking for project networks and floating IPs for network ingress to project
networks.
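With the ROUND_ROBIN algorithm configured above, repeated requests through the
floating IP should alternate between the two members; a quick check (assuming
the shell variable from the example is still set):

.. code-block:: none

   for i in 1 2 3 4; do curl -s http://$floating_ip/; done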
For more information on creating and configuring load balancing services in
Octavia please refer to the `Octavia cookbook`_.
.. LINKS
.. _Managing TLS certificates: app-certificate-management.html
.. _Octavia Policies: https://docs.openstack.org/octavia/latest/configuration/policy.html
.. _Octavia cookbook: https://docs.openstack.org/octavia/latest/user/guides/basic-cookbook.html
.. _operators maintenance: https://docs.openstack.org/octavia/latest/admin/guides/operator-maintenance.html#rotating-the-amphora-images
.. _vault charm: https://jaas.ai/vault/
.. _Managing Octavia User Roles: https://docs.openstack.org/octavia/latest/configuration/policy.html#managing-octavia-user-roles


@@ -1,746 +0,0 @@
==========================
Open Virtual Network (OVN)
==========================
Overview
--------
Open Virtual Network (OVN) can be deployed to provide networking services as
part of an OpenStack cloud.
.. note::
There are feature `gaps from ML2/OVS`_, and deploying legacy ML2/OVS with
the OpenStack Charms remains available.
OVN charms:
* neutron-api-plugin-ovn
* ovn-central
* ovn-chassis
* ovn-dedicated-chassis
.. note::
OVN is supported by Charmed OpenStack starting with OpenStack Train. OVN is
the default configuration in the `OpenStack Base bundle`_ reference
implementation.
Instructions for migrating legacy clouds to OVN are found on the
:doc:`ovn-migration` page.
Deployment
----------
OVN makes use of Public Key Infrastructure (PKI) to authenticate and authorize
control plane communication. The charm requires a Certificate Authority to be
present in the model as represented by the ``certificates`` relation.
Certificates must be managed by Vault.
.. note::
For Vault deployment instructions see the `vault charm`_. For certificate
management information read the `Managing TLS certificates`_ section of this
guide.
To deploy OVN:
.. code-block:: none
juju config neutron-api manage-neutron-plugin-legacy-mode=false
juju deploy neutron-api-plugin-ovn
juju deploy ovn-central -n 3 --config source=cloud:bionic-ussuri
juju deploy ovn-chassis
juju add-relation neutron-api-plugin-ovn:certificates vault:certificates
juju add-relation neutron-api-plugin-ovn:neutron-plugin \
neutron-api:neutron-plugin-api-subordinate
juju add-relation neutron-api-plugin-ovn:ovsdb-cms ovn-central:ovsdb-cms
juju add-relation ovn-central:certificates vault:certificates
juju add-relation ovn-chassis:ovsdb ovn-central:ovsdb
juju add-relation ovn-chassis:certificates vault:certificates
juju add-relation ovn-chassis:nova-compute nova-compute:neutron-plugin
The OVN components used for the data plane are deployed by the ovn-chassis
subordinate charm. A subordinate charm is deployed together with a principal
charm, nova-compute in the example above.
If you require a dedicated software gateway you may deploy the data plane
components as a principal charm through the use of the
`ovn-dedicated-chassis charm`_.
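A minimal sketch of adding such a gateway (the unit count and relation endpoint
names below are assumptions based on the ovn-chassis interfaces, not taken from
this page):

.. code-block:: none

   juju deploy ovn-dedicated-chassis -n 2
   juju add-relation ovn-dedicated-chassis:ovsdb ovn-central:ovsdb
   juju add-relation ovn-dedicated-chassis:certificates vault:certificates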
.. note::
For a concrete example take a look at the `OpenStack Base bundle`_.
High availability
-----------------
OVN is HA by design; take a look at the `OVN section of the Infrastructure high
availability`_ page.
Configuration
-------------
OVN integrates with OpenStack through the OVN ML2 driver. On OpenStack Ussuri
and onwards the OVN ML2 driver is maintained as an in-tree driver in Neutron.
On OpenStack Train it is maintained separately as per the `networking-ovn
plugin`_.
General Neutron configuration is still done through the `neutron-api charm`_,
and the subset of configuration specific to OVN is done through the
`neutron-api-plugin-ovn charm`_.
Hardware offloading support
~~~~~~~~~~~~~~~~~~~~~~~~~~~
It is possible to configure chassis to prepare network interface cards (NICs)
for use with hardware offloading and make them available to OpenStack.
.. caution::
This feature is to be considered Tech Preview. OVN has more stringent
requirements for match/action support in the hardware than for example
Neutron ML2+OVS. Make sure to acquire hardware with appropriate support.
Depending on hardware vendor, it may be required to install third party
drivers (DKMS) in order to successfully use this feature.
Hardware offload support makes use of SR-IOV as an underlying mechanism to
accelerate the data path between a virtual machine instance and the NIC
hardware. However, as opposed to traditional SR-IOV support, the accelerated
ports can be connected to the Open vSwitch integration bridge, which allows
instances to take part in regular tenant networks. The NIC also supports
hardware offloading of tunnel encapsulation and decapsulation.
With OVN the Layer3 routing features are implemented as flow rules in Open
vSwitch. This in turn may allow Layer 3 routing to also be offloaded to NICs
with appropriate driver and firmware support.
Prerequisites
^^^^^^^^^^^^^
* Ubuntu 22.04 LTS or later
* Linux kernel >= 5.15
* Open vSwitch 2.17
* OVN 22.03
* OpenStack Yoga or later
Please refer to the `SR-IOV for networking support`_ section for information on
kernel configuration.
Charm configuration
^^^^^^^^^^^^^^^^^^^
The below example bundle excerpt will enable hardware offloading for an OVN
deployment.
.. code-block:: yaml

   applications:
     ovn-chassis:
       charm: cs:ovn-chassis
       options:
         enable-hardware-offload: true
         sriov-numvfs: "enp3s0f0:32 enp3s0f1:32"
     neutron-api:
       charm: cs:neutron-api
       options:
         enable-hardware-offload: true
     nova-compute:
       charm: cs:nova-compute
       options:
         pci-passthrough-whitelist: '{"address": "*:03:*", "physical_network": null}'
.. caution::
After deploying the above example the machines hosting ovn-chassis
units must be rebooted for the changes to take effect.
Boot an instance
^^^^^^^^^^^^^^^^
Now we can tell OpenStack to boot an instance and attach it to a hardware
offloaded port. This must be done in two stages: first we create a port with
``vnic-type`` 'direct' and a ``binding-profile`` with 'switchdev' capabilities.
Then we create an instance connected to the newly created port:
.. code-block:: none
openstack port create --network my-network --vnic-type direct \
--binding-profile '{"capabilities": ["switchdev"]}' direct_port1
openstack server create --flavor my-flavor --key-name my-key \
--nic port-id=direct_port1 my-instance
Validate that traffic is offloaded
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The `traffic control monitor`_ command can be used to observe updates to
filters, which are one of the mechanisms used to program the NIC switch hardware.
Look for the 'in_hw' and 'not_in_hw' labels.
.. code-block:: none
sudo tc monitor
.. code-block:: console
replaced filter dev eth62 ingress protocol ip pref 3 flower chain 0 handle 0x9
dst_mac fa:16:3e:b2:20:82
src_mac fa:16:3e:b9:db:c8
eth_type ipv4
ip_proto tcp
ip_tos 67deeb90
dst_ip 10.42.0.17/28
tcp_flags 22
ip_flags nofrag
in_hw
action order 1: tunnel_key set
src_ip 0.0.0.0
dst_ip 10.6.12.8
key_id 4
dst_port 6081
csum pipe
index 15 ref 1 bind 1
action order 2: mirred (Egress Redirect to device genev_sys_6081) stolen
index 18 ref 1 bind 1
cookie d4885b4d38419f7fd7ae77a11bc78b0b
Open vSwitch has a rich set of tools to monitor traffic flows and you can use
the `data path control tools`_ to monitor offloaded flows.
.. code-block:: none
sudo ovs-appctl dpctl/dump-flows type=offloaded
.. code-block:: console
tunnel(tun_id=0x4,src=10.6.12.3,dst=10.6.12.7,tp_dst=6081,geneve({class=0x102,type=0x80,len=4,0x20007/0x7fffffff}),flags(+key)),recirc_id(0),in_port(2),eth(src=fa:16:3e:f8:52:5c,dst=00:00:00:00:00:00/01:00:00:00:00:00),eth_type(0x0800),ipv4(proto=6,frag=no),tcp_flags(psh|ack), packets:2, bytes:204, used:5.710s, actions:7
tunnel(tun_id=0x4,src=10.6.12.3,dst=10.6.12.7,tp_dst=6081,geneve({class=0x102,type=0x80,len=4,0x20007/0x7fffffff}),flags(+key)),recirc_id(0),in_port(2),eth(src=fa:16:3e:f8:52:5c,dst=00:00:00:00:00:00/01:00:00:00:00:00),eth_type(0x0800),ipv4(proto=6,frag=no),tcp_flags(ack), packets:3, bytes:230, used:5.710s, actions:7
tunnel(tun_id=0x4,src=10.6.12.8,dst=10.6.12.7,tp_dst=6081,geneve({class=0x102,type=0x80,len=4,0x60007/0x7fffffff}),flags(+key)),recirc_id(0),in_port(2),eth(src=fa:16:3e:b2:20:82,dst=00:00:00:00:00:00/01:00:00:00:00:00),eth_type(0x0800),ipv4(proto=6,frag=no),tcp_flags(syn|ack), packets:0, bytes:0, used:6.740s, actions:7
tunnel(tun_id=0x4,src=10.6.12.8,dst=10.6.12.7,tp_dst=6081,geneve({class=0x102,type=0x80,len=4,0x60007/0x7fffffff}),flags(+key)),recirc_id(0),in_port(2),eth(src=fa:16:3e:b2:20:82,dst=00:00:00:00:00:00/01:00:00:00:00:00),eth_type(0x0800),ipv4(proto=6,frag=no),tcp_flags(ack), packets:180737, bytes:9400154, used:0.000s, actions:7
recirc_id(0),in_port(6),eth(src=26:8a:07:82:a7:2f,dst=01:80:c2:00:00:0e),eth_type(0x88cc), packets:5, bytes:990, used:14.340s, actions:drop
recirc_id(0),in_port(7),eth(src=fa:16:3e:b9:db:c8,dst=fa:16:3e:b2:20:82),eth_type(0x0800),ipv4(dst=10.42.0.16/255.255.255.240,proto=6,tos=0/0x3,frag=no),tcp_flags(syn), packets:0, bytes:0, used:6.910s, actions:set(tunnel(tun_id=0x4,dst=10.6.12.8,ttl=64,tp_dst=6081,key6(bad key length 1, expected 0)(01)geneve({class=0x102,type=0x80,len=4,0x70006}),flags(key))),2
recirc_id(0),in_port(7),eth(src=fa:16:3e:b9:db:c8,dst=fa:16:3e:b2:20:82),eth_type(0x0800),ipv4(dst=10.42.0.16/255.255.255.240,proto=6,tos=0/0x3,frag=no),tcp_flags(ack), packets:935904, bytes:7504070178, used:0.590s, actions:set(tunnel(tun_id=0x4,dst=10.6.12.8,ttl=64,tp_dst=6081,key6(bad key length 1, expected 0)(01)geneve({class=0x102,type=0x80,len=4,0x70006}),flags(key))),2
recirc_id(0),in_port(7),eth(src=fa:16:3e:b9:db:c8,dst=fa:16:3e:b2:20:82),eth_type(0x0800),ipv4(dst=10.42.0.16/255.255.255.240,proto=6,tos=0/0x3,frag=no),tcp_flags(psh|ack), packets:3873, bytes:31053714, used:0.590s, actions:set(tunnel(tun_id=0x4,dst=10.6.12.8,ttl=64,tp_dst=6081,key6(bad key length 1, expected 0)(01)geneve({class=0x102,type=0x80,len=4,0x70006}),flags(key))),2
SR-IOV for networking support
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Single root I/O virtualization (SR-IOV) enables splitting a single physical
network port into multiple virtual network ports known as virtual functions
(VFs). The division is done at the PCI level which allows attaching the VF
directly to a virtual machine instance, bypassing the networking stack of the
hypervisor hosting the instance.
The main use case for this feature is to support applications with high
bandwidth requirements. For such applications the normal plumbing through the
userspace virtio driver in QEMU will consume too many resources from the host.
It is possible to configure chassis to prepare network interface cards (NICs)
for use with SR-IOV and make them available to OpenStack.
Prerequisites
^^^^^^^^^^^^^
To use the feature you need to use a NIC with support for SR-IOV.
Machines need to be pre-configured with appropriate kernel command-line
parameters. The charm does not handle this facet of configuration and it is
expected that the user configure this either manually or through the bare metal
provisioning layer (for example `MAAS`_). Example:
.. code-block:: none
intel_iommu=on iommu=pt probe_vf=0
Charm configuration
^^^^^^^^^^^^^^^^^^^
Enable SR-IOV, map physical network name 'physnet2' to the physical port named
'enp3s0f0' and create 4 virtual functions on it:
.. code-block:: none
juju config neutron-api enable-sriov=true
juju config ovn-chassis enable-sriov=true
juju config ovn-chassis sriov-device-mappings=physnet2:enp3s0f0
juju config ovn-chassis sriov-numvfs=enp3s0f0:4
.. caution::
After deploying the above example the machines hosting ovn-chassis
units must be rebooted for the changes to take effect.
After enabling the virtual functions you should take note of the ``vendor_id``
and ``product_id`` of the virtual functions:
.. code-block:: none
juju run --application ovn-chassis 'lspci -nn | grep "Virtual Function"'
.. code-block:: console
03:10.0 Ethernet controller [0200]: Intel Corporation 82599 Ethernet Controller Virtual Function [8086:10ed] (rev 01)
03:10.2 Ethernet controller [0200]: Intel Corporation 82599 Ethernet Controller Virtual Function [8086:10ed] (rev 01)
03:10.4 Ethernet controller [0200]: Intel Corporation 82599 Ethernet Controller Virtual Function [8086:10ed] (rev 01)
03:10.6 Ethernet controller [0200]: Intel Corporation 82599 Ethernet Controller Virtual Function [8086:10ed] (rev 01)
In the above example ``vendor_id`` is '8086' and ``product_id`` is '10ed'.
Add mapping between physical network name, physical port and Open vSwitch
bridge:
.. code-block:: none
juju config ovn-chassis ovn-bridge-mappings=physnet2:br-ex
juju config ovn-chassis bridge-interface-mappings br-ex:a0:36:9f:dd:37:a8
.. note::
The above configuration allows OVN to configure an 'external' port on one
of the chassis for providing DHCP and metadata to instances connected
directly to the network through SR-IOV.
For OpenStack to make use of the VFs the ``neutron-sriov-agent`` needs to talk
to RabbitMQ:
.. code:: bash
juju add-relation ovn-chassis:amqp rabbitmq-server:amqp
OpenStack Nova also needs to know which PCI devices it is allowed to pass
through to instances:
.. code:: bash
juju config nova-compute pci-passthrough-whitelist='{"vendor_id":"8086", "product_id":"10ed", "physical_network":"physnet2"}'
Boot an instance
^^^^^^^^^^^^^^^^
Now we can tell OpenStack to boot an instance and attach it to an SR-IOV port.
This must be done in two stages: first we create a port with ``vnic-type``
'direct', and then we create an instance connected to the newly created port:
.. code:: bash
openstack port create --network my-network --vnic-type direct my-port
openstack server create --flavor my-flavor --key-name my-key \
--nic port-id=my-port my-instance
DPDK support
~~~~~~~~~~~~
It is possible to configure chassis to use experimental DPDK userspace network
acceleration.
.. note::
Currently instances are required to be attached to an external network (also
known as a provider network) for connectivity. OVN supports distributed DHCP
for provider networks. For OpenStack workloads use of `Nova config drive`_
is required to provide metadata to instances.
Prerequisites
^^^^^^^^^^^^^
To use the feature you need to use a supported CPU architecture and network
interface card (NIC) hardware. Please consult the `DPDK supported hardware
page`_.
Machines need to be pre-configured with appropriate kernel command-line
parameters. The charm does not handle this facet of configuration and it is
expected that the user configure this either manually or through the bare metal
provisioning layer (for example `MAAS`_).
Example:
.. code:: bash
default_hugepagesz=1G hugepagesz=1G hugepages=64 intel_iommu=on iommu=pt
For the communication between the host userspace networking stack and the guest
virtual NIC driver to work the instances need to be configured to use
hugepages. For OpenStack this can be accomplished by `Customizing instance huge
pages allocations`_.
Example:
.. code:: bash
openstack flavor set m1.large --property hw:mem_page_size=large
By default, the charm will configure Open vSwitch/DPDK to consume one processor
core + 1G of RAM from each NUMA node on the unit being deployed. This can be
tuned using the ``dpdk-socket-memory`` and ``dpdk-socket-cores`` configuration
options.
.. note::
Please check that the value of ``dpdk-socket-memory`` is large enough to
accommodate the MTU size being used. For more information please refer to
`DPDK shared memory calculations`_.
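For example, to give a DPDK-enabled application a larger allocation (the
application name ``ovn-chassis-dpdk`` matches the bundle excerpts further
below; the values are illustrative and should be sized for your NICs and MTU):

.. code-block:: none

   juju config ovn-chassis-dpdk dpdk-socket-memory=2048 dpdk-socket-cores=2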
The userspace kernel driver can be configured using the ``dpdk-driver``
configuration option. See config.yaml for more details.
.. note::
Changing dpdk related configuration options will trigger a restart of
Open vSwitch, and subsequently interrupt instance connectivity.
Charm configuration
^^^^^^^^^^^^^^^^^^^
The below example bundle excerpt will enable the use of DPDK for an OVN
deployment.
.. code-block:: yaml

   ovn-chassis-dpdk:
     options:
       enable-dpdk: True
       bridge-interface-mappings: br-ex:00:53:00:00:00:42
   ovn-chassis:
     options:
       enable-dpdk: False
       bridge-interface-mappings: br-ex:bond0
       prefer-chassis-as-gw: True
.. caution::
As configured by the charms, the units configured to use DPDK will not
participate in the overlay network and will also not be able to provide
services such as external DHCP to SR-IOV enabled units in the same
deployment.
As such it is important to have at least one other named ovn-chassis
application in the deployment with ``enable-dpdk`` set to 'False' and the
``prefer-chassis-as-gw`` configuration option set to 'True'. Doing so will
inform the CMS (Cloud Management System) that shared services such as
gateways and external DHCP services should not be scheduled to the
DPDK-enabled nodes.
DPDK bonding
............
Once network interface cards are bound to DPDK they will be invisible to the
standard Linux kernel network stack, and it is therefore not possible to use
standard system tools to configure bonding.
For DPDK interfaces the charm supports configuring bonding in Open vSwitch.
This is accomplished through the ``dpdk-bond-mappings`` and
``dpdk-bond-config`` configuration options. Example:
.. code:: yaml

   ovn-chassis-dpdk:
     options:
       enable-dpdk: True
       bridge-interface-mappings: br-ex:dpdk-bond0
       dpdk-bond-mappings: "dpdk-bond0:00:53:00:00:00:42 dpdk-bond0:00:53:00:00:00:51"
       dpdk-bond-config: ":balance-slb:off:fast"
   ovn-chassis:
     options:
       enable-dpdk: False
       bridge-interface-mappings: br-ex:bond0
       prefer-chassis-as-gw: True
In this example, the network interface cards associated with the two MAC
addresses provided will be used to build a bond identified by a port named
'dpdk-bond0' which will be attached to the 'br-ex' bridge.
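Once the deployment settles, the resulting bond can be inspected directly on a
unit of the DPDK-enabled application (a verification sketch; unit names will
vary):

.. code-block:: none

   juju ssh ovn-chassis-dpdk/0
   sudo ovs-appctl bond/show dpdk-bond0
   sudo ovs-vsctl show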
Internal DNS resolution
~~~~~~~~~~~~~~~~~~~~~~~
OVN supports Neutron internal DNS resolution. To configure this:
.. caution::
At the time of this writing the internal DNS support does not include
reverse lookup (PTR-records) of instance IP addresses, only forward lookup
(A and AAAA-records) of instance names. This is tracked in `LP #1857026`_.
.. code::
juju config neutron-api enable-ml2-dns=true
juju config neutron-api dns-domain=openstack.example.
juju config neutron-api-plugin-ovn dns-servers="1.1.1.1 8.8.8.8"
.. note::
The value for the ``dns-domain`` configuration option must
not be set to 'openstack.local.' as that will effectively disable the
feature.
It is also important to end the string with a '.' (dot).
When you set ``enable-ml2-dns`` to 'true' and set a value for ``dns-domain``,
Neutron will add details such as instance name and DNS domain name to each
individual Neutron port associated with instances. The OVN ML2 driver will
populate the ``DNS`` table of the Northbound and Southbound databases:
.. code::
# ovn-sbctl list DNS
_uuid : 2e149fa8-d27f-4106-99f5-a08f60c443bf
datapaths : [b25ed99a-89f1-49cc-be51-d215aa6fb073]
external_ids : {dns_id="4c79807e-0755-4d17-b4bc-eb57b93bf78d"}
records : {"c-1"="192.0.2.239", "c-1.openstack.example"="192.0.2.239"}
On the chassis, OVN creates flow rules to redirect UDP port 53 packets (DNS)
to the local ``ovn-controller`` process:
.. code::
cookie=0xdeaffed, duration=77.575s, table=22, n_packets=0, n_bytes=0, idle_age=77, priority=100,udp6,metadata=0x2,tp_dst=53 actions=controller(userdata=00.00.00.06.00.00.00.00.00.01.de.10.00.00.00.64,pause),resubmit(,23)
cookie=0xdeaffed, duration=77.570s, table=22, n_packets=0, n_bytes=0, idle_age=77, priority=100,udp,metadata=0x2,tp_dst=53 actions=controller(userdata=00.00.00.06.00.00.00.00.00.01.de.10.00.00.00.64,pause),resubmit(,23)
The local ``ovn-controller`` process then decides if it should respond to the
DNS query directly or if it needs to be forwarded to the real DNS server.
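Based on the example ``DNS`` records shown above, resolution can be exercised
from another instance attached to the same network (a sketch; the instance
needs a resolver tool such as ``dig`` from the dnsutils package):

.. code-block:: none

   dig +short c-1.openstack.example
   # Expected answer for the example records above: 192.0.2.239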
External connectivity
~~~~~~~~~~~~~~~~~~~~~
Interface and network to bridge mapping is done through the
`ovn-chassis charm`_.
OVN provides a more flexible way of configuring external Layer3 networking than
the legacy ML2+DVR configuration as OVN does not require every node
(``Chassis`` in OVN terminology) in a deployment to have direct external
connectivity. This plays nicely with Layer3-only datacenter fabrics (RFC 7938).
East/West traffic is distributed by default. North/South traffic is highly
available by default. Liveness detection is done using the Bidirectional
Forwarding Detection (BFD) protocol.
Networks for use with external Layer3 connectivity should have mappings on
chassis located in the vicinity of the datacenter border gateways. Having two
or more chassis with mappings for a Layer3 network will have OVN automatically
configure highly available routers with liveness detection provided by the
Bidirectional Forwarding Detection (BFD) protocol.
Chassis without direct external mapping to an external Layer3 network will
forward traffic through a tunnel to one of the chassis acting as a gateway for
that network.
Networks for use with external Layer2 connectivity should have mappings present
on all chassis with potential to host the consuming payload.
.. note::
It is neither necessary nor recommended to add mappings for external
Layer3 networks to all chassis. Doing so will create a scaling problem at
the physical network layer that needs to be resolved with globally shared
Layer2 (does not scale) or tunneling at the top-of-rack switch layer (adds
complexity).
Example configuration with explicit bridge-interface-mappings:
.. code:: bash
juju config neutron-api flat-network-providers=physnet1
juju config ovn-chassis ovn-bridge-mappings=physnet1:br-provider
juju config ovn-chassis \
  bridge-interface-mappings='br-provider:00:00:5e:00:00:42 br-provider:00:00:5e:00:00:51'
openstack network create --external --share --provider-network-type flat \
--provider-physical-network physnet1 ext-net
openstack subnet create --network ext-net \
--subnet-range 192.0.2.0/24 \
--no-dhcp --gateway 192.0.2.1 \
ext
It is also possible to influence the scheduling of routers on a per named
ovn-chassis application basis. The benefit of this method is that you do not
need to provide MAC addresses when configuring Layer3 connectivity in the
charm. For example:
.. code-block:: none
juju config ovn-chassis-border \
ovn-bridge-mappings=physnet1:br-provider \
bridge-interface-mappings=br-provider:bond0 \
prefer-chassis-as-gw=true
juju config ovn-chassis \
ovn-bridge-mappings=physnet1:br-provider \
bridge-interface-mappings=br-provider:bond0
In the above example units of the ovn-chassis-border application with
appropriate bridge mappings will be eligible for router scheduling.
.. _ovn_usage:
Usage
-----
Create networks, routers and subnets through the OpenStack API or CLI as you
normally would.
The OVN ML2 driver will translate the OpenStack network constructs into high
level logical rules in the OVN Northbound database.
The ``ovn-northd`` daemon in turn translates this into data in the Southbound
database.
The local ``ovn-controller`` daemon on each chassis consumes these rules and
programs flows in the local Open vSwitch database.
Information queries
~~~~~~~~~~~~~~~~~~~
The OVN databases are configured to use the `Clustered Database Service
Model`_. In this configuration only the leader processes transactions and the
administrative client tools are configured to require a connection to the
leader to operate.
The leader of the Northbound and Southbound databases does not have to coincide
with the charm leader, so before querying databases you must consult the output
of :command:`juju status` to check which unit is the leader of the database you
want to query. Example:
.. code-block:: none
juju status ovn-central
.. code-block:: console
Unit Workload Agent Machine Public address Ports Message
ovn-central/0* active idle 0/lxd/5 10.246.114.39 6641/tcp,6642/tcp Unit is ready (leader: ovnnb_db)
ovn-central/1 active idle 1/lxd/4 10.246.114.15 6641/tcp,6642/tcp Unit is ready (northd: active)
ovn-central/2 active idle 2/lxd/2 10.246.114.27 6641/tcp,6642/tcp Unit is ready (leader: ovnsb_db)
In the above example 'ovn-central/0' is the leader for the Northbound DB,
'ovn-central/1' has the active ``ovn-northd`` daemon and 'ovn-central/2' is the
leader for the Southbound DB.
OVSDB Cluster status
^^^^^^^^^^^^^^^^^^^^
The cluster status as conveyed through :command:`juju status` is updated each
time a hook is run. In some circumstances it may be necessary to get an
immediate view of the current cluster status.
To get an immediate view of the database clusters:
.. code-block:: none
juju run --application ovn-central 'ovn-appctl -t \
/var/run/ovn/ovnnb_db.ctl cluster/status OVN_Northbound'
juju run --application ovn-central 'ovn-appctl -t \
/var/run/ovn/ovnsb_db.ctl cluster/status OVN_Southbound'
Querying DBs
^^^^^^^^^^^^
To query the individual databases:
.. code-block:: none
juju run --unit ovn-central/0 'ovn-nbctl show'
juju run --unit ovn-central/2 'ovn-sbctl show'
juju run --unit ovn-central/2 'ovn-sbctl lflow-list'
As an alternative you may provide the administrative client tools with
command-line arguments for the certificate paths and the IP addresses of the
servers so that you can run the client from anywhere:
.. code-block:: none
ovn-nbctl \
-p /etc/ovn/key_host \
-C /etc/ovn/ovn-central.crt \
-c /etc/ovn/cert_host \
--db ssl:10.246.114.39:6641,ssl:10.246.114.15:6641,ssl:10.246.114.27:6641 \
show
Note that for remote administrative write access to the Southbound DB you must
use port number '16642'. This is due to OVN RBAC being enabled on the standard
'6642' port:
.. code-block:: none
ovn-sbctl \
-p /etc/ovn/key_host \
-C /etc/ovn/ovn-central.crt \
-c /etc/ovn/cert_host \
--db ssl:10.246.114.39:16642,ssl:10.246.114.15:16642,ssl:10.246.114.27:16642 \
show
Data plane flow tracing
^^^^^^^^^^^^^^^^^^^^^^^
SSH into one of the chassis units to get access to various diagnostic tools:
.. code-block:: none
juju ssh ovn-chassis/0
sudo ovs-vsctl show
sudo ovs-ofctl -O OpenFlow13 dump-flows br-int
sudo ovs-appctl -t ovs-vswitchd \
ofproto/trace br-provider \
in_port=enp3s0f0,icmp,nw_src=192.0.2.1,nw_dst=192.0.2.100
sudo ovn-trace \
-p /etc/ovn/key_host \
-C /etc/ovn/ovn-chassis.crt \
-c /etc/ovn/cert_host \
--db ssl:10.246.114.39:6642,ssl:10.246.114.15:6642,ssl:10.246.114.27:6642 \
--ovs ext-net 'inport=="provnet-dde76bc9-0620-44f7-b99a-99cfc66e1095" && \
eth.src==30:e1:71:5c:7a:b5 && \
eth.dst==fa:16:3e:f7:15:73 && \
ip4.src==10.172.193.250 && \
ip4.dst==10.246.119.8 && \
icmp4.type==8 && \
ip.ttl == 64'
.. note::
OVN makes use of OpenFlow 1.3 or newer and as such the charm configures
bridges to use these protocols. To be able to successfully use the
:command:`ovs-ofctl` command you must specify the OpenFlow version as shown
in the example above.
You may issue the :command:`ovs-vsctl list bridge` command to show what
protocols are enabled on the bridges.
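For example, to list the protocols enabled on the integration bridge (a quick
check, not part of the original page):

.. code-block:: none

   sudo ovs-vsctl get bridge br-int protocols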
.. LINKS
.. _vault charm: https://jaas.ai/vault/
.. _Managing TLS certificates: app-certificate-management.html
.. _Toward Convergence of ML2+OVS+DVR and OVN: http://specs.openstack.org/openstack/neutron-specs/specs/ussuri/ml2ovs-ovn-convergence.html
.. _ovn-dedicated-chassis charm: https://jaas.ai/u/openstack-charmers/ovn-dedicated-chassis/
.. _networking-ovn plugin: https://docs.openstack.org/networking-ovn/latest/
.. _neutron-api charm: https://jaas.ai/neutron-api/
.. _neutron-api-plugin-ovn charm: https://jaas.ai/u/openstack-charmers/neutron-api-plugin-ovn/
.. _ovn-chassis charm: https://jaas.ai/u/openstack-charmers/ovn-chassis/
.. _OpenStack Base bundle: https://github.com/openstack-charmers/openstack-bundles/tree/master/development/openstack-base-bionic-ussuri-ovn
.. _gaps from ML2/OVS: https://docs.openstack.org/neutron/latest/ovn/gaps.html
.. _OVN section of the Infrastructure high availability: https://docs.openstack.org/charm-guide/latest/admin/ha.html#ovn
.. _OpenStack Charms Deployment Guide: https://docs.openstack.org/project-deploy-guide/charm-deployment-guide/latest/
.. _Nova config drive: https://docs.openstack.org/nova/latest/user/metadata.html#config-drives
.. _DPDK supported hardware page: http://core.dpdk.org/supported/
.. _MAAS: https://maas.io/
.. _Customizing instance huge pages allocations: https://docs.openstack.org/nova/latest/admin/huge-pages.html#customizing-instance-huge-pages-allocations
.. _Hardware offloading: app-hardware-offload.html
.. _SR-IOV for networking support: app-ovn.html#sr-iov-for-networking-support
.. _traffic control monitor: http://manpages.ubuntu.com/manpages/focal/man8/tc.8.html#monitor
.. _data path control tools: http://manpages.ubuntu.com/manpages/focal/man8/ovs-dpctl.8.html
.. _Clustered Database Service Model: http://docs.openvswitch.org/en/latest/ref/ovsdb.7/#clustered-database-service-model
.. _DPDK shared memory calculations: https://docs.openvswitch.org/en/latest/topics/dpdk/memory/#shared-memory-calculations
.. _Usage: app-ovn#usage
.. BUGS
.. _LP #1857026: https://bugs.launchpad.net/ubuntu/+source/ovn/+bug/1857026


@@ -1,122 +0,0 @@
==============================
Bridge interface configuration
==============================
Overview
--------
This page explores physical interface configuration options available for OVS
and OVN charms.
Available configuration options
-------------------------------
For both OVS and OVN charms there is a need to provide physical interfaces for
vSwitch bridges in order to be able to use VLAN and flat provider networks in
Neutron.
These OVS charms use the ``data-port`` configuration option:
* neutron-openvswitch
* neutron-gateway
These OVN charms use the ``bridge-interface-mappings`` configuration option:
* ovn-chassis
* ovn-dedicated-chassis
.. important::
Any changes (subsequent to the initial setting) made to the value
of option ``data-port`` will merely add the new values along with
the existing ones. If removal of old values is desired, it has to be
done manually with the ``ovs-vsctl`` command on the
affected units. If the new values conflict with the previous
ones, a network outage may result, as seen in bug `LP #1915967`_.
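A sketch of how stale ports could be inspected and removed on an affected unit
(the bridge and interface names are placeholders):

.. code-block:: none

   sudo ovs-vsctl list-ports br-provider
   sudo ovs-vsctl del-port br-provider old-bond0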
Utilising one interface for multiple purposes
---------------------------------------------
It is common to use a single network interface for providing network access to
different types of workloads. Workloads such as OpenStack API services and Ceph
may use VLAN interfaces at the host level or via VLAN-specific container
bridges. Neutron services may rely on OVS bridges with physical network
interfaces or bonds used as virtual switch uplinks. The following diagram shows
how this can be achieved:
.. code-block:: console
+----------------+ +-----------------+
| LXD | |VM or router port|
| container port | +-----------------+
+----------------+ ||
|| +-----------------+
|| | br-int (OVS) |
+----------------+ +-----------------+
| br-bond1.100 | ||
| Linux bridge | +-----------------+
| with L3 config | |br-provider (OVS)|
+----------------+ +-----------------+
|| ||
+----------------+ +-----------------+
| bond1.100 | | bond1 |
| no L3 config | | no L3 config |
+----------------+ +-----------------+
|| ||
+------+ +------+
| hwe0 | | hwe1 |
+------+ +------+
``bond1`` configured at the Linux kernel level can have VLAN interfaces
such as ``bond1.100``, which will cause all traffic tagged with VLAN 100 to be
forwarded to ``bond1.100`` instead of ``bond1``, so it will not reach
``br-provider``. In this example, all untagged and tagged traffic, except for
VLAN 100, will be forwarded to ``br-provider`` via ``bond1``.
Any VLAN interface explicitly configured on top of ``bond1`` will make its
VLAN unusable in Neutron through the associated provider bridge because the
inbound traffic for that VLAN will always be intercepted by a ``bond1.<vid>``
interface at the kernel level.
.. warning::
``bond1`` must not have any L3 configuration for this setup to work.
This allows a range of VLANs to be dedicated to Neutron VLAN provider networks
while other VLANs remain dedicated to host workloads.
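A netplan sketch of the host-side configuration described above (this is an
illustrative assumption rather than part of the original page; it presumes
``bond1`` aggregates ``hwe0`` and ``hwe1``, and MAAS would normally render the
equivalent for you):

.. code-block:: yaml

   network:
     version: 2
     ethernets:
       hwe0: {}
       hwe1: {}
     bonds:
       bond1:
         interfaces: [hwe0, hwe1]
         parameters:
           mode: 802.3ad
     vlans:
       bond1.100:
         id: 100
         link: bond1
     bridges:
       br-bond1.100:
         interfaces: [bond1.100]
         addresses: [192.0.2.10/24]   # host/LXD L3 config lives here, not on bond1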
Charm configuration examples
----------------------------
The following configuration assumes the setup mentioned above and that there
are no VLAN tenant networks - only VLAN provider networks (thus the
``vlan-ranges`` option only includes a physnet name).
For OVS deployments the charm configuration would look like this:
.. code-block:: none
juju config neutron-api vlan-ranges='dcfabric'
juju config neutron-openvswitch data-port='br-provider:bond1' bridge-mappings='dcfabric:br-provider' vlan-ranges='dcfabric'
juju config neutron-gateway data-port='br-provider:bond1' bridge-mappings='dcfabric:br-provider' vlan-ranges='dcfabric'
Whereas for OVN deployments:
.. code-block:: none
juju config neutron-api vlan-ranges='dcfabric'
juju config ovn-chassis bridge-interface-mappings='br-provider:bond1' ovn-bridge-mappings='dcfabric:br-provider'
juju config ovn-dedicated-chassis bridge-interface-mappings='br-provider:bond1' ovn-bridge-mappings='dcfabric:br-provider'
To configure a VLAN provider network the following command can be used with any
segment ID other than 100 as bond1.100 is present:
.. code-block:: none
# --external is only needed for setups targeted at using floating IPs.
# A network name (here 'ext-net') must be supplied as the final argument.
openstack network create --external --provider-network-type vlan \
  --provider-physical-network dcfabric --provider-segment 99 ext-net
.. _LP #1915967: https://bugs.launchpad.net/charm-neutron-gateway/+bug/1915967


@@ -40,15 +40,6 @@ OpenStack Charms usage. To help improve it you can `file an issue`_ or
keystone
app-certificate-management
.. toctree::
:caption: Networking
:maxdepth: 1
app-ovn
app-hardware-offload
app-octavia
configure-bridge
.. LINKS
.. _file an issue: https://bugs.launchpad.net/charm-deployment-guide/+filebug
.. _submit a contribution: https://opendev.org/openstack/charm-deployment-guide


@@ -20,8 +20,8 @@ with these charms:
* ovn-chassis (or ovn-dedicated-chassis)
* neutron-api-plugin-ovn charms
Post-migration, the :doc:`app-ovn` page includes information on configuration
and usage.
Post-migration, the :doc:`cg:admin/networking/ovn` page in the Charm Guide
includes information on configuration and usage.
MTU considerations
------------------
@@ -202,8 +202,8 @@ of the actual migration.
5. Unseal Vault (see the `vault charm`_), set up TLS certificates (see
`Managing TLS certificates`_), and validate that the services on ovn-central
units are running as expected. Please refer to the OVN :ref:`ovn_usage`
section in the Charm Guide for more information.
units are running as expected. Please refer to the OVN
:doc:`cg:admin/networking/ovn` page in the Charm Guide for more information.
Perform the migration
---------------------


@@ -64,7 +64,8 @@ Neutron LBaaS retired
As of Train, support for Neutron LBaaS has been retired. The load-balancing
services are now provided by Octavia LBaaS. There is no automatic migration
path, please review the :doc:`app-octavia` page for more information.
path, please review the :doc:`cg:admin/networking/load-balancing` page in the
Charm Guide for more information.
Designate encoding issue
------------------------


@@ -46,3 +46,7 @@
/project-deploy-guide/charm-deployment-guide/latest/encryption-at-rest.html 301 /charm-guide/latest/admin/storage/encryption-at-rest.html
/project-deploy-guide/charm-deployment-guide/latest/manila-ganesha.html 301 /charm-guide/latest/admin/storage/shared-filesystem-services.html
/project-deploy-guide/charm-deployment-guide/latest/swift.html 301 /charm-guide/latest/admin/storage/swift.html
/project-deploy-guide/charm-deployment-guide/latest/app-ovn.html 301 /charm-guide/latest/admin/networking/ovn.html
/project-deploy-guide/charm-deployment-guide/latest/app-octavia.html 301 /charm-guide/latest/admin/networking/load-balancing.html
/project-deploy-guide/charm-deployment-guide/latest/app-hardware-offload.html 301 /charm-guide/latest/admin/networking/hardware-offloading.html
/project-deploy-guide/charm-deployment-guide/latest/configure-bridge.html 301 /charm-guide/latest/admin/networking/interface-config.html