Overview
The ovn-chassis charm provides the Open Virtual Network (OVN) local controller, Open vSwitch Database and Switch. It is used in conjunction with the ovn-central charm.
Open vSwitch bridges for integration and for external Layer2 and Layer3 connectivity are managed by the charm.
On successful deployment the unit will be enlisted as a Chassis in the OVN network.
The ovn-chassis charm is a subordinate charm. Alternatively, the principal ovn-dedicated-chassis charm can be used, resulting in a dedicated software gateway.
Note: The OVN charms are supported starting with OpenStack Train.
Usage
The OpenStack Base bundle gives an example of how you can deploy OpenStack and OVN with Vault to automate certificate lifecycle management.
OVN makes use of Public Key Infrastructure (PKI) to authenticate and authorize
control plane communication. The charm therefore requires a Certificate
Authority to be present in the model as represented by the certificates
relation.
Refer to Open Virtual Network (OVN) in the OpenStack Charms Deployment Guide for details, including deployment steps.
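As a minimal sketch of satisfying the certificates relation, assuming a Vault application named vault has already been deployed, initialised, and authorised:

```shell
# Provide ovn-chassis with a Certificate Authority by relating it to
# Vault over the certificates interface (the application name "vault"
# is an assumption based on a typical deployment).
juju integrate ovn-chassis:certificates vault:certificates
```

On older Juju releases the same relation is made with `juju relate` or `juju add-relation`.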
OpenStack support
When related to the nova-compute charm the ovn-chassis charm will enable services that provide Nova metadata to instances.
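A sketch of making that relation, assuming a nova-compute application is already present in the model (Juju resolves the endpoint names when the relation is unambiguous):

```shell
# Relate the subordinate ovn-chassis charm to nova-compute; this also
# enables the services that serve Nova metadata to instances.
juju integrate ovn-chassis nova-compute
```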
DPDK, SR-IOV and hardware offload support
It is possible to configure chassis to prepare network interface cards (NICs) for use with DPDK, SR-IOV and hardware offload support.
Please refer to the OVN Configuration in the OpenStack Charms Deployment Guide for details.
Network spaces
This charm supports the use of Juju network spaces.
By binding the ovsdb endpoint you can influence which interface will be used for communication with the OVN Southbound DB as well as overlay traffic.
juju deploy ovn-chassis --bind "ovsdb=internal-space"
By binding the data extra-binding you can influence which interface will be used for overlay traffic.
juju deploy ovn-chassis --bind "data=overlay-space"
Port configuration
Chassis port configuration is composed of a mapping between physical network names and bridge names (ovn-bridge-mappings) and a mapping between individual interfaces and bridge names (bridge-interface-mappings). There must be a match in both configuration options before the charm will configure bridges and interfaces on a unit.
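As a minimal sketch of setting the two options together (the physical network name physnet1, bridge br-provider, and interface enp3s0f0 are hypothetical placeholders for your environment):

```shell
# Map the physical network "physnet1" to bridge "br-provider", and
# attach interface "enp3s0f0" to that bridge on units where it exists.
# The charm only acts where both mappings agree on the bridge name.
juju config ovn-chassis \
    ovn-bridge-mappings="physnet1:br-provider" \
    bridge-interface-mappings="br-provider:enp3s0f0"
```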
The physical network name can be referenced when the administrator programs the OVN logical flows, either by talking directly to the Northbound database, or by interfacing with a Cloud Management System (CMS).
Networks for use with external Layer3 connectivity should have mappings on chassis located in the vicinity of the datacenter border gateways. When two or more chassis have mappings for a Layer3 network, OVN automatically configures highly available routers with liveness detection provided by the Bidirectional Forwarding Detection (BFD) protocol.
Chassis without a direct mapping to an external Layer3 network will forward traffic through a tunnel to one of the chassis acting as a gateway for that network.
Note: It is not necessary, nor recommended, to add mappings for external Layer3 networks to all chassis. Doing so creates a scaling problem at the physical network layer that must then be resolved with either globally shared Layer2 (which does not scale) or tunnelling at the top-of-rack switch layer (which adds complexity).
Networks for use with external Layer2 connectivity should have mappings present on all chassis with potential to host the consuming payload.
Deferred service events
Operational or maintenance procedures applied to a cloud often lead to the restarting of various OpenStack services and/or the calling of certain charm hooks. Although normal, such events can be undesirable due to the service interruptions they can cause.
The deferred service events feature lets the operator prevent these service restarts and hook calls from occurring immediately, deferring them to a more opportune time.
See the Deferred service events page in the OpenStack Charms Deployment Guide for an in-depth treatment of this feature.
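As a sketch of how the feature is typically driven (the option and action names below follow the convention used across the OpenStack charms; verify them against this charm's configuration and actions reference before use):

```shell
# Defer service restarts instead of applying them immediately.
juju config ovn-chassis enable-auto-restarts=false

# Later, inspect the queued events and apply them at a convenient time.
juju run-action --wait ovn-chassis/0 show-deferred-events
juju run-action --wait ovn-chassis/0 restart-services deferred-only=true
```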
Bugs
Please report bugs on Launchpad.
For general questions please refer to the OpenStack Charm Guide.