Docs: Implement doc8 linter

Implement the doc8 linter for RST documentation, convert
errors to warnings for the '-docs' gate job, and fix any
files that fail checks.

Change-Id: I98f135e446034ccef31f07a8d6ba0f25a197c9fa
changes/48/274948/6
Matthew Kassawara 7 years ago committed by Andreas Jaeger
parent e121526ff1
commit b1ae456702
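For anyone wanting to try the new check locally, a minimal sketch based on
the tox.ini changes at the end of this diff (the directory list comes straight
from the doc8 invocation added there)::

    # run the linting environment, which now includes doc8 alongside flake8
    $ tox -e pep8

    # or call doc8 directly on the same directories
    $ doc8 doc/source devstack releasenotes/source vagrant rally-jobs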

@ -16,8 +16,8 @@
> enable_service ovn
> EOF
You can also use the provided example local.conf, or look at its contents to add
to your own::
You can also use the provided example local.conf, or look at its contents to
add to your own::
cd devstack
cp ../networking-ovn/devstack/local.conf.sample local.conf

@ -9,12 +9,12 @@ containers to use.
The second mode is very interesting in the context of OpenStack. OVN makes
special accommodation for running containers inside of VMs when the networking
for those VMs is already being managed by OVN. You can create a special type of
port in OVN for these containers and have them directly connected to virtual
for those VMs is already being managed by OVN. You can create a special type
of port in OVN for these containers and have them directly connected to virtual
networks managed by OVN. There are two major benefits of this:
* It allows containers to use virtual networks without creating another layer of
overlay networks. This reduces networking complexity and increases
* It allows containers to use virtual networks without creating another layer
of overlay networks. This reduces networking complexity and increases
performance.
* It allows arbitrary connections between any VMs and any containers running
@ -28,17 +28,17 @@ Neutron port. First, you must specify the parent port that the VM is using.
Second, you must specify a tag. This tag is a VLAN ID today, though that may
change in the future. Traffic from the container must be tagged with this VLAN
ID by open vSwitch running inside the VM. Traffic destined for the container
will arrive on the parent VM port with this VLAN ID. Open vSwitch inside the VM
will forward this traffic to the container.
will arrive on the parent VM port with this VLAN ID. Open vSwitch inside the
VM will forward this traffic to the container.
These two attributes are not currently supported in the Neutron API. As a
result, we are initially allowing these attributes to be set in the
'binding:profile' extension for ports. If this approach gains traction and more
general support, we will revisit making this a real extension to the Neutron
API.
'binding:profile' extension for ports. If this approach gains traction and
more general support, we will revisit making this a real extension to the
Neutron API.
Note that the default /etc/neutron/policy.json does not allow a regular user to
set a 'binding:profile'. If you want to allow this, you must update
Note that the default /etc/neutron/policy.json does not allow a regular user
to set a 'binding:profile'. If you want to allow this, you must update
policy.json. To do so, change::
"create_port:binding:profile": "rule:admin_only",

@ -21,7 +21,8 @@ Network
status
tenant_id
Once a network is created, we should create an entry in the Logical Switch table.
Once a network is created, we should create an entry in the Logical Switch
table.
::
@ -186,8 +187,8 @@ Security Groups
Security groups map three neutron objects to one OVN-NB object, which
enables us to do the mapping in various ways, depending on OVN capabilities.
The current implementation will use the first option in this list for simplicity,
but all options are kept here for future reference
The current implementation will use the first option in this list for
simplicity, but all options are kept here for future reference
1) For every <neutron port, security rule> pair, define an ACL entry::
@ -234,11 +235,11 @@ but all options are kept here for future reference
Which option to pick depends on OVN match field length capabilities, and the
trade off between better performance due to less ACL entries compared to the complexity
to manage them
trade-off between better performance due to fewer ACL entries and the
complexity of managing them.
If the default behaviour is not "drop" for unmatched entries, a rule with lowest priority must
be added to drop all traffic ("match==1")
If the default behaviour is not "drop" for unmatched entries, a rule with the
lowest priority must be added to drop all traffic ("match==1").
Spoofing protection rules are being added by OVN internally and we need to ignore
the automatically added rules in Neutron
Spoofing protection rules are added by OVN internally, and we need to
ignore the automatically added rules in Neutron
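To make the ACL mapping and the lowest-priority drop rule described in this
hunk a little more concrete, here is a rough ``ovn-nbctl`` sketch; the logical
switch name, port name, match, and priorities are illustrative assumptions,
not values from this change::

    # one ACL per <neutron port, security rule> pair, e.g. allow inbound SSH
    $ ovn-nbctl acl-add neutron-net1 to-lport 1002 \
        'outport == "port1" && ip4 && tcp.dst == 22' allow-related

    # lowest-priority catch-all drop, for the case where the default
    # behaviour is not "drop" for unmatched traffic
    $ ovn-nbctl acl-add neutron-net1 to-lport 1 '1' drop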

@ -2,7 +2,8 @@ OVN Neutron Worker and Port status handling
===========================================
When the logical port's VIF is attached to or removed from the ovn integration
bridge, ovn-northd updates the Logical_Port.up to 'True' or 'False' accordingly.
bridge, ovn-northd updates the Logical_Port.up to 'True' or 'False'
accordingly.
In order for the OVN Neutron plugin to update the corresponding neutron port's
status to 'ACTIVE' or 'DOWN' in the db, it needs to monitor the
@ -40,7 +41,8 @@ If there are multiple neutron servers running, then each neutron server will
have one ovn worker which listens for the notify events. When the
'Logical_Port.up' is updated by ovn-northd, we do not want all the
neutron servers to handle the event and update the neutron port status.
In order for only one neutron server to handle the events, ovsdb locks are used.
In order for only one neutron server to handle the events, ovsdb locks are
used.
At start, each neutron server's ovn worker will try to acquire a lock with id -
'neutron_ovn_event_lock'. The ovn worker which has acquired the lock will

@ -8,8 +8,8 @@ FAQ
mechanism driver?**
The primary benefit of using ML2 is to support multiple mechanism drivers. OVN
does not currently support a deployment model that would benefit from the use of
ML2.
does not currently support a deployment model that would benefit from the use
of ML2.
**Q: Does OVN support DVR or distributed L3 routing?**
@ -23,18 +23,18 @@ When using OVN's native L3 support, L3 routing is always distributed.
**Q: Does OVN support integration with physical switches?**
OVN currently integrates with physical switches by optionally using them as VTEP
gateways from logical to physical networks.
OVN currently integrates with physical switches by optionally using them as
VTEP gateways from logical to physical networks.
OVN does not support using VLANs to implement tenant networks in such a way that
physical switch integration would be needed. It exclusively uses tunnel
OVN does not support using VLANs to implement tenant networks in such a way
that physical switch integration would be needed. It exclusively uses tunnel
overlays for that purpose.
**Q: What's the status of HA for networking-ovn and OVN?**
Typically, multiple copies of neutron-server are run across multiple servers and
uses a load balancer. The neutron plugin provided by networking-ovn supports
this deployment model.
Typically, multiple copies of neutron-server are run across multiple servers
behind a load balancer. The neutron plugin provided by networking-ovn
supports this deployment model.
The network controller portion of OVN is distributed - an instance of the
ovn-controller service runs on every hypervisor. OVN also includes some
@ -51,13 +51,13 @@ OVN's northbound and southbound databases both reside in an instance of
ovsdb-server. OVN started out using this database because it already came with
OVS and is used everywhere OVS is used. The OVN project has also been clear
from the beginning that if ovsdb-server doesn't work out, we'll switch. Someone
is looking at making ovsdb-server distributed for both scale and HA reasons. In
the meantime, you can run this instance of ovsdb-server in an active/passive HA
mode. This requires having the database reside on shared storage.
Development in 2015 was largely focused on the initial architecture and
getting the core networking features working through the whole system. There is
now active work on improving scale and HA, including addressing the issues
getting the core networking features working through the whole system. There
is now active work on improving scale and HA, including addressing the issues
discussed here.
See :doc:`readme` for links to more details on OVN's architecture.

@ -44,8 +44,8 @@ of git repos, and installs everything from these git repos.
$ ./stack.sh
Once DevStack completes successfully, you should see output that looks something
like this::
Once DevStack completes successfully, you should see output that looks
something like this::
This is your host ip: 192.168.122.8
Horizon is now available at http://192.168.122.8/
@ -120,22 +120,26 @@ Neutron DHCP agent that is providing DHCP services to the ``private`` network.
..
One can determine the DHCP port by running: ``neutron port-list --device-owner 'network:dhcp'``. This
will return the DHCP port that was created by Neutron.
One can determine the DHCP port by running:
The owner of the port, that is, the 'device_owner', will have details of the port owner.
For example the port owner by a Nova instance with will have device_owner 'compute:None'.
``neutron port-list --device-owner 'network:dhcp'``
This will return the DHCP port that was created by Neutron.
The owner of the port, that is, the 'device_owner', will have details of the
port owner. For example, a port owned by a Nova instance will have
device_owner 'compute:None'.
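As a small, hedged variation on the command above, the owner column can be
printed for every port so the 'network:dhcp' and 'compute:None' values are
visible at a glance (``-c`` column selection is standard neutron CLI
behaviour; the column names used here are the usual port fields)::

    # list ports together with the entity that owns them
    $ neutron port-list -c id -c device_owner -c fixed_ips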
Booting VMs
-----------
In this section we'll go through the steps to create two VMs that have a virtual
NIC attached to the ``private`` Neutron network.
In this section we'll go through the steps to create two VMs that have a
virtual NIC attached to the ``private`` Neutron network.
DevStack uses libvirt as the Nova backend by default. If KVM is available, it
will be used. Otherwise, it will just run qemu emulated guests. This is
perfectly fine for our testing, as we only need these VMs to be able to send and
receive a small amount of traffic so performance is not very important.
perfectly fine for our testing, as we only need these VMs to be able to send
and receive a small amount of traffic so performance is not very important.
1. Get the Network UUID.
@ -156,7 +160,8 @@ the public key be put in the VM so we can SSH into it.
3. Choose a flavor.
We need minimal resources for these test VMs, so the ``m1.nano`` flavor is sufficient.
We need minimal resources for these test VMs, so the ``m1.nano`` flavor is
sufficient.
::
@ -195,8 +200,8 @@ It's a very small test image.
5. Setup a security rule so that we can access the VMs we will boot up next.
By default, DevStack does not allow users to access VMs, to enable that, we will need to
add a rule. We will allow both ICMP and SSH.
By default, DevStack does not allow users to access VMs. To enable that, we
will need to add a rule. We will allow both ICMP and SSH.
::
@ -291,8 +296,9 @@ Once both VMs have been started, they will have a status of ``ACTIVE``::
| 7a8c12da-54b3-4adf-bba5-74df9fd2e907 | test2 | ACTIVE | - | Running | private=fde5:95da:6b50:0:f816:3eff:fe42:cbc7, 10.0.0.4 |
+--------------------------------------+-------+--------+------------+-------------+--------------------------------------------------------+
Our two VMs have addresses of ``10.0.0.3`` and ``10.0.0.4``. If we list Neutron
ports again, there are two new ports with these addresses associated with the::
Our two VMs have addresses of ``10.0.0.3`` and ``10.0.0.4``. If we list
Neutron ports again, there are two new ports with these addresses associated
with them::
$ neutron port-list
+--------------------------------------+------+-------------------+-------------------------------------------------------------------------------------------------------------+
@ -384,8 +390,8 @@ controller node.
Replace ``IFACE_ID`` with the UUID of the neutron port.
#. Configure the MAC and IP address of the OVS port to use the same values as the
neutron port in step 1 and bring it up.
#. Configure the MAC and IP address of the OVS port to use the same values as
the neutron port in step 1 and bring it up.
.. code-block:: console
@ -405,8 +411,8 @@ controller node.
Adding Another Compute Node
---------------------------
After completing the earlier instructions for setting up devstack, you can use a
second VM to emulate an additional compute node. This is important for OVN
After completing the earlier instructions for setting up devstack, you can use
a second VM to emulate an additional compute node. This is important for OVN
testing as it exercises the tunnels created by OVN between the hypervisors.
Just as before, create a throwaway VM but make sure that this VM has a
@ -419,8 +425,8 @@ Once the VM is setup, create a user with sudo access and install git.
$ git clone http://git.openstack.org/openstack-dev/devstack.git
$ git clone http://git.openstack.org/openstack/networking-ovn.git
networking-ovn comes with another sample configuration file that can be used for
this::
networking-ovn comes with another sample configuration file that can be used
for this::
$ cd devstack
$ cp ../networking-ovn/devstack/computenode-local.conf.sample local.conf
@ -439,8 +445,8 @@ ovsdb-server). The final output will look something like this::
This is your host ip: 172.16.189.10
2015-05-09 01:21:49.565 | stack.sh completed in 308 seconds.
Now go back to your main DevStack host. You can use admin credentials to verify
that the additional hypervisor has been added to the deployment::
Now go back to your main DevStack host. You can use admin credentials to
verify that the additional hypervisor has been added to the deployment::
$ cd devstack
$ . openrc admin
@ -454,9 +460,9 @@ that the additional hypervisor has been added to the deployment::
+----+------------------------------------+-------+---------+
You can also look at OVN and OVS to see that the second host has shown up. For
example, there will be a second entry in the Chassis table of the OVN_Southbound
database. You can use the ``ovn-sbctl`` utility to list chassis, their
configuration, and the ports bound to each of them::
example, there will be a second entry in the Chassis table of the
OVN_Southbound database. You can use the ``ovn-sbctl`` utility to list
chassis, their configuration, and the ports bound to each of them::
$ ovn-sbctl show

@ -13,7 +13,8 @@ Launching VM's failure
Using Ubuntu you might encounter libvirt permission errors when trying
to create OVS ports after launching a VM (from the nova compute log).
Disabling AppArmor might help with this problem; check out
https://help.ubuntu.com/community/AppArmor for instructions on how to disable it.
https://help.ubuntu.com/community/AppArmor for instructions on how to
disable it.
Multi-Node setup not working
@ -26,14 +27,16 @@ Older kernels (< 3.18) don't support the Geneve module and hence tunneling can't
work. You can check it with this command 'lsmod | grep openvswitch'
(geneve should show up in the result list)
For more information about which upstream Kernel version is required for support
of each tunnel type, see the answer to " Why do tunnels not work when using a
kernel module other than the one packaged with Open vSwitch?" in the OVS FAQ:
For more information about which upstream kernel version is required for
support of each tunnel type, see the answer to "Why do tunnels not work when
using a kernel module other than the one packaged with Open vSwitch?" in the
OVS FAQ:
https://github.com/openvswitch/ovs/blob/master/FAQ.md
2. MTU configuration:
This problem is not unique to OVN but is amplified due to the possible larger size of
geneve header compared to other common tunneling protocols (VXLAN).
If you are using VM's as compute nodes make sure that you either lower the MTU size
on the virtual interface or enable fragmentation on it.
This problem is not unique to OVN, but it is amplified by the possibly larger
size of the Geneve header compared to other common tunneling protocols (VXLAN).
If you are using VMs as compute nodes, make sure that you either lower the MTU
size on the virtual interface or enable fragmentation on it.
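As a hedged example of the first option, the MTU can be lowered on the VM's
virtual interface; the interface name and value below are placeholders::

    # leave headroom for the Geneve encapsulation overhead
    $ sudo ip link set dev eth0 mtu 1400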

@ -1,7 +1,7 @@
Rally plugins
=============
All *.py modules from this directory will be auto-loaded by Rally and all
All ``*.py`` modules from this directory will be auto-loaded by Rally and all
plugins will be discoverable. There is no need for any extra configuration,
and there is no difference between writing them here and in the Rally code
base.

@ -55,3 +55,6 @@ universal = 1
[entry_points]
oslo.config.opts =
networking_ovn = networking_ovn.common.config:list_opts
[pbr]
warnerrors = true
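The new ``warnerrors`` flag is consumed by pbr's ``build_sphinx`` command, so
Sphinx warnings also fail the build when the docs are built through pbr; a
quick, hedged way to exercise this locally::

    # build the docs via pbr; with warnerrors = true, Sphinx warnings
    # are treated as errors
    $ python setup.py build_sphinx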

@ -8,6 +8,7 @@ coverage>=3.6 # Apache-2.0
python-subunit>=0.0.18 # Apache-2.0/BSD
sphinx!=1.2.0,!=1.3b1,<1.3,>=1.1.2 # BSD
oslosphinx!=3.4.0,>=2.5.0 # Apache-2.0
doc8 # Apache-2.0
oslotest>=1.10.0 # Apache-2.0
testrepository>=0.0.18 # Apache-2.0/BSD
testscenarios>=0.4 # Apache-2.0/BSD

@ -17,6 +17,7 @@ passenv = http_proxy HTTP_PROXY https_proxy HTTPS_PROXY no_proxy NO_PROXY
[testenv:pep8]
commands = flake8
doc8 doc/source devstack releasenotes/source vagrant rally-jobs
[testenv:venv]
commands = {posargs}
@ -27,6 +28,7 @@ commands = python setup.py testr --coverage --coverage-package-name=networking_o
[testenv:docs]
commands =
rm -rf doc/build
doc8 doc/source devstack releasenotes/source vagrant rally-jobs
sphinx-build -W -b html doc/source doc/build/html
[testenv:debug]
@ -41,6 +43,10 @@ whitelist_externals = mkdir
[testenv:releasenotes]
commands = sphinx-build -a -E -W -d releasenotes/build/doctrees -b html releasenotes/source releasenotes/build/html
[doc8]
# File extensions to check
extensions = .rst
[flake8]
# E123, E125 skipped as they are invalid PEP-8.
