Merge "Add RST linting to neutron"

Zuul 2024-10-02 09:05:05 +00:00 committed by Gerrit Code Review
commit ba8550ddd0
86 changed files with 1015 additions and 822 deletions


@ -97,10 +97,10 @@ At the end of each test run:
* The in-memory database is cleared of content, but its schema is maintained.
* The global Oslo configuration object is reset.
The unit testing framework can be used to effectively test database interaction,
for example, distributed routers allocate a MAC address for every host running
an OVS agent. One of DVR's DB mixins implements a method that lists all host
MAC addresses. Its test looks like this:
The unit testing framework can be used to effectively test database
interaction, for example, distributed routers allocate a MAC address for
every host running an OVS agent. One of DVR's DB mixins implements a method
that lists all host MAC addresses. Its test looks like this:
.. code-block:: python
@ -159,9 +159,9 @@ One of its methods is called 'device_exists' which accepts a device name
and a namespace and returns True if the device exists in the given namespace.
It's easy building a test that targets the method directly, and such a test
would be considered a 'unit' test. However, what framework should such a test
use? A test using the unit tests framework could not mutate state on the system,
and so could not actually create a device and assert that it now exists. Such
a test would look roughly like this:
use? A test using the unit tests framework could not mutate state on the
system, and so could not actually create a device and assert that it now
exists. Such a test would look roughly like this:
* It would mock 'execute', a method that executes shell commands against the
system to return an IP device named 'foo'.
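
As a purely illustrative sketch (``execute`` and ``device_exists`` below are
stand-ins, not Neutron's actual ``ip_lib`` helpers), such a mocked unit test
could be structured like this:

.. code-block:: python

   import unittest
   from unittest import mock


   def execute(cmd, namespace=None):
       """Stand-in for the helper that shells out to the system."""
       raise NotImplementedError('the real helper runs the command')


   def device_exists(device, namespace=None):
       """Return True if the device shows up in 'ip link' output."""
       output = execute(['ip', 'link', 'show'], namespace=namespace)
       return any(device in line for line in output.splitlines())


   class DeviceExistsTestCase(unittest.TestCase):

       @mock.patch(__name__ + '.execute')
       def test_device_exists(self, mock_execute):
           # The test never touches the system: 'execute' is mocked to
           # pretend a device named 'foo' exists in the given namespace.
           mock_execute.return_value = '2: foo: <BROADCAST,UP> mtu 1500'
           self.assertTrue(device_exists('foo', namespace='qrouter-1'))
           mock_execute.assert_called_once_with(
               ['ip', 'link', 'show'], namespace='qrouter-1')


   if __name__ == '__main__':
       unittest.main()
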
@ -261,9 +261,10 @@ should be validated, and all interaction with the daemon should be via
a REST client.
The neutron-tempest-plugin/neutron_tempest_plugin directory was copied from the
Tempest project around the Kilo timeframe. At the time, there was an overlap of tests
between the Tempest and Neutron repositories. This overlap was then eliminated by carving
out a subset of resources that belong to Tempest, with the rest in Neutron.
Tempest project around the Kilo timeframe. At the time, there was an overlap of
tests between the Tempest and Neutron repositories. This overlap was then
eliminated by carving out a subset of resources that belong to Tempest, with
the rest in Neutron.
API tests that belong to Tempest deal with a subset of Neutron's resources:
@ -296,9 +297,10 @@ define a list of required extensions for particular test class.
Scenario Tests
~~~~~~~~~~~~~~
Scenario tests (neutron-tempest-plugin/neutron_tempest_plugin/scenario), like API tests,
use the Tempest test infrastructure and have the same requirements. Guidelines for
writing a good scenario test may be found at the Tempest developer guide:
Scenario tests (neutron-tempest-plugin/neutron_tempest_plugin/scenario), like
API tests, use the Tempest test infrastructure and have the same requirements.
Guidelines for writing a good scenario test may be found at the Tempest
developer guide:
https://docs.openstack.org/tempest/latest/field_guide/scenario.html
Scenario tests, like API tests, are split between the Tempest and Neutron
@ -322,15 +324,18 @@ Specific test requirements for advanced images are:
Rally Tests
~~~~~~~~~~~
Rally tests (rally-jobs/plugins) use the `rally <http://rally.readthedocs.io/>`_
infrastructure to exercise a neutron deployment. Guidelines for writing a
good rally test can be found in the `rally plugin documentation <http://rally.readthedocs.io/en/latest/plugins/>`_.
Rally tests (rally-jobs/plugins) use the
`rally <http://rally.readthedocs.io/>`_ infrastructure to exercise a neutron
deployment. Guidelines for writing a good rally test can be found in the
`rally plugin documentation <http://rally.readthedocs.io/en/latest/plugins/>`_.
There are also some examples in tree; the process for adding rally plugins to
neutron requires three steps: 1) write a plugin and place it under rally-jobs/plugins/.
This is your rally scenario; 2) (optional) add a setup file under rally-jobs/extra/.
This is any devstack configuration required to make sure your environment can
successfully process your scenario requests; 3) edit neutron-neutron.yaml. This
is your scenario 'contract' or SLA.
neutron requires three steps:
1) write a plugin and place it under rally-jobs/plugins/. This is your rally
scenario;
2) (optional) add a setup file under rally-jobs/extra/. This is any devstack
configuration required to make sure your environment can successfully process
your scenario requests;
3) edit neutron-neutron.yaml. This is your scenario 'contract' or SLA.
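
A bare-bones sketch of step 1, assuming rally's generic plugin API (a real
Neutron scenario would additionally use the rally-openstack base classes and
clients to drive the API):

.. code-block:: python

   # rally-jobs/plugins/sample_scenario.py (hypothetical file name)
   from rally.task import scenario


   @scenario.configure(name="NeutronIllustration.noop")
   class NoopScenario(scenario.Scenario):
       """Illustrative scenario; real plugins exercise neutron resources."""

       def run(self, iterations=1):
           # Rally calls run() once per configured iteration; the body is
           # where a real plugin would create, list and delete resources.
           for _ in range(iterations):
               pass

The matching entry in neutron-neutron.yaml (step 3) then references the
scenario by the name given to ``scenario.configure`` and defines its SLA.
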
Grenade Tests
~~~~~~~~~~~~~
@ -346,8 +351,8 @@ similar to deploying OpenStack using Devstack. All is described in the
`Project's wiki <https://wiki.openstack.org/wiki/Grenade>`_ and `documentation
<https://opendev.org/openstack/grenade/src/branch/master/README.rst>`_.
More info about how to troubleshoot Grenade failures in the CI jobs can be found
in the :ref:`Troubleshooting Grenade jobs <troubleshooting-grenade-jobs>`
More info about how to troubleshoot Grenade failures in the CI jobs can be
found in the :ref:`Troubleshooting Grenade jobs <troubleshooting-grenade-jobs>`
document.
Development Process
@ -438,8 +443,11 @@ To run only pep8::
tox -e pep8
Since pep8 includes running pylint on all files, it can take quite some time to run.
To restrict the pylint check to only the files altered by the latest patch changes::
Since pep8 includes running pylint on all files, it can take quite some time
to run.
To restrict the pylint check to only the files altered by the latest patch
changes::
tox -e pep8 HEAD~1


@ -90,7 +90,7 @@ This extract is from the default ``policy.yaml`` file:
administrator or the owner of the resource specified in the request
(project identifier is equal).
.. code-block:: none
.. code-block:: yaml
"admin_or_owner": "role:admin or tenant_id:%(tenant_id)s"
"admin_or_network_owner": "role:admin or tenant_id:%(network_tenant_id)s"
@ -101,7 +101,7 @@ This extract is from the default ``policy.yaml`` file:
- The default policy that is always evaluated if an API operation does
not match any of the policies in ``policy.yaml``.
.. code-block:: none
.. code-block:: yaml
"default": "rule:admin_or_owner"
"create_subnet": "rule:admin_or_network_owner"
@ -113,7 +113,7 @@ This extract is from the default ``policy.yaml`` file:
- This policy evaluates successfully if either *admin_or_owner*, or
*shared* evaluates successfully.
.. code-block:: none
.. code-block:: yaml
"get_network": "rule:admin_or_owner or rule:shared"
"create_network:shared": "rule:admin_only"
@ -121,7 +121,7 @@ This extract is from the default ``policy.yaml`` file:
- This policy restricts the ability to manipulate the *shared*
attribute for a network to administrators only.
.. code-block:: none
.. code-block:: yaml
"update_network": "rule:admin_or_owner"
"delete_network": "rule:admin_or_owner"
@ -133,7 +133,7 @@ This extract is from the default ``policy.yaml`` file:
attribute for a port only to administrators and the owner of the
network where the port is attached.
.. code-block:: none
.. code-block:: yaml
"get_port": "rule:admin_or_owner"
"update_port": "rule:admin_or_owner"
@ -144,7 +144,7 @@ This example shows you how to modify a policy file to permit project to
define networks, see their resources, and permit administrative users to
perform all other operations:
.. code-block:: none
.. code-block:: yaml
"admin_or_owner": "role:admin or tenant_id:%(tenant_id)s"
"admin_only": "role:admin"


@ -107,7 +107,8 @@ Set these options to configure SSL:
Firewall-as-a-Service (FWaaS) overview
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
For information on Firewall-as-a-Service (FWaaS), please consult the :doc:`Networking Guide <../fwaas>`.
For information on Firewall-as-a-Service (FWaaS), please consult the
:doc:`Networking Guide <../fwaas>`.
Allowed-address-pairs
~~~~~~~~~~~~~~~~~~~~~


@ -130,7 +130,8 @@ can be set per router at router creation time by passing the
``--enable-default-route-bfd`` argument or by updating an existing router using
the ``openstack router set`` command.
The default behavior for new routers can be controlled using the `enable_default_route_bfd`_ configuration option.
The default behavior for new routers can be controlled using the
`enable_default_route_bfd`_ configuration option.
It is recommended to enable this when `adding multiple default routes to a
router`_ as failure to do so will lead to degraded performance in the event of


@ -47,10 +47,11 @@ Once this is done, the user has to take the following steps and restart
Networking service to create and update reverse lookup (PTR) zones.
* ``project_name``: the name of the project to be used by the
Networking service to create and update reverse lookup (PTR) zones.
* ``project_domain_name``: the name of the domain for the project to be used by the
Networking service to create and update reverse lookup (PTR) zones.
* ``user_domain_name``: the name of the domain for the user to be used by the
Networking service to create and update reverse lookup (PTR) zones.
* ``project_domain_name``: the name of the domain for the project to be
used by the Networking service to create and update reverse lookup (PTR)
zones.
* ``user_domain_name``: the name of the domain for the user to be used by
the Networking service to create and update reverse lookup (PTR) zones.
* ``region_name``: the name of the region to be used by the
Networking service to create and update reverse lookup (PTR) zones.
* ``allow_reverse_dns_lookup``: a boolean value specifying whether to enable
@ -60,10 +61,11 @@ Once this is done, the user has to take the following steps and restart
* ``ipv6_ptr_zone_prefix_size``: the size in bits of the prefix for the IPv6
reverse lookup (PTR) zones.
* ``ptr_zone_email``: the email address to use when creating new reverse
lookup (PTR) zones. The default is ``admin@<dns_domain>`` where ``<dns_domain>``
is the domain for the first record being created in that zone.
* ``insecure``: whether to disable SSL certificate validation. By default, certificates
are validated.
lookup (PTR) zones. The default is ``admin@<dns_domain>`` where
``<dns_domain>`` is the domain for the first record being created in that
zone.
* ``insecure``: whether to disable SSL certificate validation. By default,
certificates are validated.
* ``cafile``: Path to a valid Certificate Authority (CA) certificate.
Optional, the system CAs are used as default.
@ -908,8 +910,8 @@ Only for :ref:`config-dns-use-case-3`, if the port binding extension is
enabled in the Networking service, the Compute service will execute one
additional port update operation when allocating the port for the instance
during the boot process. This may have a noticeable adverse effect in the
performance of the boot process that should be evaluated before adoption of this
use case.
performance of the boot process that should be evaluated before adoption of
this use case.
.. _config-dns-int-ext-serv-net:


@ -31,7 +31,7 @@ experimetal:
This is an example of how to enable the use of an experimental feature:
.. code-block:: none
.. code-block:: ini
[experimental]
linuxbridge = true


@ -350,8 +350,8 @@ follows:
Setting DHCPv6-stateless for ``ipv6_ra_mode`` configures the neutron
router with an radvd agent to send Router Advertisements. The list below
captures the values set for the address configuration flags in the Router
Advertisement messages in this scenario. Similarly, setting DHCPv6-stateless for
``ipv6_address_mode`` configures neutron DHCP implementation to provide
Advertisement messages in this scenario. Similarly, setting DHCPv6-stateless
for ``ipv6_address_mode`` configures neutron DHCP implementation to provide
the additional network information.
* Autonomous Address Configuration Flag = 1
@ -361,8 +361,8 @@ the additional network information.
Setting DHCPv6-stateful for ``ipv6_ra_mode`` configures the neutron
router with an radvd agent to send Router Advertisements. The list below
captures the values set for the address configuration flags in the Router
Advertisements messages in this scenario. Similarly, setting DHCPv6-stateful for
``ipv6_address_mode`` configures neutron DHCP implementation to provide
Advertisements messages in this scenario. Similarly, setting DHCPv6-stateful
for ``ipv6_address_mode`` configures neutron DHCP implementation to provide
addresses and additional network information through DHCPv6.
* Autonomous Address Configuration Flag = 0
@ -609,7 +609,7 @@ Configuring the Dibbler server
After installing Dibbler, edit the ``/etc/dibbler/server.conf`` file:
.. code-block:: none
.. code-block::
script "/var/lib/dibbler/pd-server.sh"


@ -40,7 +40,7 @@ Service Configuration
To enable the logging service, add ``log`` to the ``service_plugins`` setting
in ``/etc/neutron/neutron.conf``:
.. code-block:: none
.. code-block:: ini
service_plugins = router,metering,log
@ -181,13 +181,14 @@ To enable the logging service, follow the below steps.
#. On Neutron controller node, add ``log`` to ``service_plugins`` setting in
``/etc/neutron/neutron.conf`` file. For example:
.. code-block:: none
.. code-block:: ini
service_plugins = router,metering,log
#. To enable logging service for ``security_group`` in Layer 2, add ``log`` to
option ``extensions`` in section ``[agent]`` in ``/etc/neutron/plugins/ml2/ml2_conf.ini``
for controller node and in ``/etc/neutron/plugins/ml2/openvswitch_agent.ini``
option ``extensions`` in section ``[agent]`` in
``/etc/neutron/plugins/ml2/ml2_conf.ini`` for controller node and in
``/etc/neutron/plugins/ml2/openvswitch_agent.ini``
for compute/network nodes. For example:
.. code-block:: ini
@ -210,8 +211,8 @@ To enable the logging service, follow the below steps.
extensions = fwaas_v2,fwaas_v2_log
#. On compute/network nodes, add configuration for logging service to
``[network_log]`` in ``/etc/neutron/plugins/ml2/openvswitch_agent.ini`` and in
``/etc/neutron/l3_agent.ini`` as shown below:
``[network_log]`` in ``/etc/neutron/plugins/ml2/openvswitch_agent.ini``
and in ``/etc/neutron/l3_agent.ini`` as shown below:
.. code-block:: ini
@ -245,14 +246,14 @@ cloud, neutron's policy file ``policy.yaml`` can be modified to allow this.
Modify ``/etc/neutron/policy.yaml`` entries as follows:
.. code-block:: none
.. code-block:: yaml
"get_loggable_resources": "rule:regular_user",
"create_log": "rule:regular_user",
"get_log": "rule:regular_user",
"get_logs": "rule:regular_user",
"update_log": "rule:regular_user",
"delete_log": "rule:regular_user",
"get_loggable_resources": "rule:regular_user"
"create_log": "rule:regular_user"
"get_log": "rule:regular_user"
"get_logs": "rule:regular_user"
"update_log": "rule:regular_user"
"delete_log": "rule:regular_user"
Service workflow for Operator
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@ -406,8 +407,8 @@ The general characteristics of each event will be shown as the following:
* A timestamp of the flow.
* A status of the flow ``ACCEPT``/``DROP``.
* An indication of the originator of the flow, e.g which project or log resource
generated the events.
* An indication of the originator of the flow, e.g which project or log
resource generated the events.
* An identifier of the associated instance interface (neutron port id).
* A layer 2, 3 and 4 information (mac, address, port, protocol, etc).


@ -316,7 +316,9 @@ The ML2 plug-in also supports extension drivers that allows other pluggable
drivers to extend the core resources implemented in the ML2 plug-in
(``networks``, ``ports``, etc.). Examples of extension drivers include support
for QoS, port security, etc. For more details see the ``extension_drivers``
configuration option in the `Configuration Reference <../configuration/ml2-conf.html#ml2.extension_drivers>`__.
configuration option in the
`Configuration Reference
<../configuration/ml2-conf.html#ml2.extension_drivers>`__.
Agents


@ -83,10 +83,10 @@ To configure NDP proxy, take the following steps:
a single, integrated subnetpool. In order to make NDP proxy work correctly,
the admin operator needs to set direct routes for these subnetpools.
Such as, we have a IPv6 subnetpool, it's CIDR is 2001:db8::/96. The direct route
like below should be set:
Such as, we have a IPv6 subnetpool, it's CIDR is 2001:db8::/96. The direct
route like below should be set:
.. code-block:: none
.. code-block:: console
2001:db8::/96 dev <ext-gw>
@ -275,7 +275,8 @@ network (such as: public network) are the following:
:ref:`prefix-delegation` etc.) to publish the internal IPv6 address, the
command will break dataplane traffic.
#. Create an internal network and IPv6 subnet and add the subnet to the above router:
#. Create an internal network and IPv6 subnet and add the subnet to the above
router:
.. code-block:: console


@ -75,7 +75,8 @@ Prerequisites
Using Open vSwitch hardware offloading
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In order to enable Open vSwitch hardware offloading, the following steps are required:
In order to enable Open vSwitch hardware offloading, the following steps are
required:
#. Enable SR-IOV
#. Configure NIC to switchdev mode (relevant Nodes)
@ -425,7 +426,8 @@ Validate Open vSwitch hardware offloading
.. end
#. Check traffic on the representor port. Verify that only the first ICMP packet appears.
#. Check traffic on the representor port. Verify that only the first ICMP
packet appears.
.. code-block:: console


@ -82,18 +82,19 @@ Both OVS and iptables firewall drivers should always behave in the same way if
the same rules are configured for the security group. But in some cases that is
not true and there may be slight differences between those drivers.
+----------------------------------------+-----------------------+-----------------------+
+-------------------------------------+----------------+----------------------+
| Case | OVS | iptables |
+========================================+=======================+=======================+
| Traffic marked as INVALID by conntrack | Blocked | Allowed because it |
| but matching some of the SG rules | | first matches SG rule,|
| (please check [1]_ and [2]_ | | never reaches rule to |
| for details) | | drop invalid packets |
+----------------------------------------+-----------------------+-----------------------+
+=====================================+================+======================+
| Traffic marked as INVALID by | Blocked | Allowed because it |
| conntrack but matching some of the | | first matches SG |
| SG rules (please check [1]_ and | | rule, never reaches |
| [2]_ for details) | | rule to drop invalid |
| | | packets |
+-------------------------------------+----------------+----------------------+
| Multicast traffic sent in the group | Allowed always | Blocked, |
| 224.0.0.X | | Can be enabled by SG |
| (please check [3]_ for details) | | rule. |
+----------------------------------------+-----------------------+-----------------------+
+-------------------------------------+----------------+----------------------+
Open Flow rules processing considerations
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~


@ -172,7 +172,8 @@ In release Stein the following agent-based ML2 mechanism drivers are
supported:
* Open vSwitch (``openvswitch``) vnic_types: ``normal``, ``direct``
* SR-IOV (``sriovnicswitch``) vnic_types: ``direct``, ``macvtap``, ``direct-physical``
* SR-IOV (``sriovnicswitch``) vnic_types: ``direct``, ``macvtap``,
``direct-physical``
* OVN (``ovn``) vnic_types: ``normal``
.. note::


@ -230,7 +230,9 @@ with:
* :oslo.config:option:`ovs.resource_provider_packet_processing_without_direction`
Format for this option is ``<hypervisor>:<packet_rate>``. This option should
be used for non-hardware-offloaded OVS deployments.
* :oslo.config:option:`ovs.resource_provider_packet_processing_with_direction`
Format for this option is
``<hypervisor>:<egress_packet_rate>:<ingress_packet_rate>``. You may set only
one direction and omit the other. This option should be used for


@ -40,7 +40,8 @@ QoS supported rule types are now available as ``VALID_RULE_TYPES`` in `QoS rule
* minimum_bandwidth: Minimum bandwidth constraints on certain types of traffic.
* minimum_packet_rate: Minimum packet rate constraints on certain types of traffic.
* minimum_packet_rate: Minimum packet rate constraints on certain types of
traffic.
Any QoS driver can claim support for some QoS rule types
@ -182,7 +183,7 @@ On the controller nodes:
#. Add the QoS service to the ``service_plugins`` setting in
``/etc/neutron/neutron.conf``. For example:
.. code-block:: none
.. code-block:: ini
service_plugins = router,metering,qos
@ -194,7 +195,7 @@ On the controller nodes:
set the ``service_plugins`` option in ``/etc/neutron/neutron.conf`` to
include both ``router`` and ``qos``. For example:
.. code-block:: none
.. code-block:: ini
service_plugins = router,qos
@ -321,7 +322,7 @@ your cloud, neutron's file ``policy.yaml`` can be modified to allow this.
Modify ``/etc/neutron/policy.yaml`` policy entries as follows:
.. code-block:: none
.. code-block:: yaml
"get_policy": "rule:regular_user"
"create_policy": "rule:regular_user"
@ -331,7 +332,7 @@ Modify ``/etc/neutron/policy.yaml`` policy entries as follows:
To enable bandwidth limit rule:
.. code-block:: none
.. code-block:: yaml
"get_policy_bandwidth_limit_rule": "rule:regular_user"
"create_policy_bandwidth_limit_rule": "rule:regular_user"
@ -340,7 +341,7 @@ To enable bandwidth limit rule:
To enable DSCP marking rule:
.. code-block:: none
.. code-block:: yaml
"get_policy_dscp_marking_rule": "rule:regular_user"
"create_policy_dscp_marking_rule": "rule:regular_user"
@ -349,7 +350,7 @@ To enable DSCP marking rule:
To enable minimum bandwidth rule:
.. code-block:: none
.. code-block:: yaml
"get_policy_minimum_bandwidth_rule": "rule:regular_user"
"create_policy_minimum_bandwidth_rule": "rule:regular_user"
@ -358,7 +359,7 @@ To enable minimum bandwidth rule:
To enable minimum packet rate rule:
.. code-block:: none
.. code-block:: yaml
"get_policy_minimum_packet_rate_rule": "rule:regular_user"
"create_policy_minimum_packet_rate_rule": "rule:regular_user"


@ -826,7 +826,8 @@ database following the next steps:
* Insert the indexes for the "target_tenant" and "action" columns:
$ for table in $tables do; mysql -e \
"alter table $table add key (action); alter table $table add key (target_tenant);"; done
"alter table $table add key (action); \
alter table $table add key (target_tenant);"; done
In order to prevent errors during a system upgrade, [3]_ was


@ -111,11 +111,11 @@ To address this problem, operators should use the ``AGENT`` config group option
``kill_scripts_path`` to configure a path to where ``kill scripts`` for such
processes live. By default, it is set to ``/etc/neutron/kill_scripts/``.
If option ``kill_scripts_path`` is changed in the config to the different
location, ``exec_dirs`` in ``/etc/rootwrap.conf`` should be changed accordingly.
If ``kill_scripts_path`` is set, every time neutron has to kill a process,
for example ``dnsmasq``, it will look in this directory for a file with the name
``<process_name>-kill``. So for ``dnsmasq`` process it will look for a
``dnsmasq-kill`` script. If such a file exists there, it will be called
location, ``exec_dirs`` in ``/etc/rootwrap.conf`` should be changed
accordingly. If ``kill_scripts_path`` is set, every time neutron has to kill a
process, for example ``dnsmasq``, it will look in this directory for a file
with the name ``<process_name>-kill``. So for ``dnsmasq`` process it will look
for a ``dnsmasq-kill`` script. If such a file exists there, it will be called
instead of using the ``kill`` command.
Kill scripts are called with two parameters:


@ -323,9 +323,9 @@ Update a port chain or port pair group
SFC steers traffic matching the additional flow classifier to the
port pair groups in the port chain.
* Use the :command:`openstack sfc port pair group set` command to perform dynamic
scale-out or scale-in operations by adding or removing port pairs on a port
pair group.
* Use the :command:`openstack sfc port pair group set` command to perform
dynamic scale-out or scale-in operations by adding or removing port pairs
on a port pair group.
.. code-block:: console


@ -493,17 +493,17 @@ Once configuration is complete, you can launch instances with SR-IOV ports.
SR-IOV with ConnectX-3/ConnectX-3 Pro Dual Port Ethernet
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In contrast to Mellanox newer generation NICs, ConnectX-3 family network adapters expose a single
PCI device (PF) in the system regardless of the number of physical ports.
When the device is **dual port** and SR-IOV is enabled and configured we can observe some inconsistencies
in linux networking subsystem.
In contrast to Mellanox newer generation NICs, ConnectX-3 family network
adapters expose a single PCI device (PF) in the system regardless of the number
of physical ports. When the device is **dual port** and SR-IOV is enabled and
configured we can observe some inconsistencies in linux networking subsystem.
.. note::
In the example below ``enp4s0`` represents PF net device associated with physical port 1 and
``enp4s0d1`` represents PF net device associated with physical port 2.
**Example:** A system with ConnectX-3 dual port device and a total of four VFs configured,
two VFs assigned to port one and two VFs assigned to port two.
**Example:** A system with ConnectX-3 dual port device and a total of four VFs
configured, two VFs assigned to port one and two VFs assigned to port two.
.. code-block:: console
@ -532,18 +532,20 @@ Four VFs are available in the system, however,
vf 2 MAC 00:00:00:00:00:00, vlan 4095, spoof checking off, link-state auto
vf 3 MAC 00:00:00:00:00:00, vlan 4095, spoof checking off, link-state auto
**ip** command identifies each PF associated net device as having four VFs *each*.
**ip** command identifies each PF associated net device as having four VFs
*each*.
.. note::
Mellanox ``mlx4`` driver allows *ip* commands to perform configuration of *all*
VFs from either PF associated network devices.
To allow neutron SR-IOV agent to properly identify the VFs that belong to the correct PF network device
(thus to the correct network port) Admin is required to provide the ``exclude_devices`` configuration option
in ``sriov_agent.ini``
To allow neutron SR-IOV agent to properly identify the VFs that belong to the
correct PF network device (thus to the correct network port) Admin is required
to provide the ``exclude_devices`` configuration option in ``sriov_agent.ini``
**Step 1**: derive the VF to Port mapping from mlx4 driver configuration file: ``/etc/modprobe.d/mlnx.conf`` or ``/etc/modprobe.d/mlx4.conf``
**Step 1**: derive the VF to Port mapping from mlx4 driver configuration file:
``/etc/modprobe.d/mlnx.conf`` or ``/etc/modprobe.d/mlx4.conf``
.. code-block:: console
@ -554,12 +556,15 @@ Where:
``num_vfs=n1,n2,n3`` - The driver will enable ``n1`` VFs on physical port 1,
``n2`` VFs on physical port 2 and
``n3`` dual port VFs (applies only to dual port HCA when all ports are Ethernet ports).
``n3`` dual port VFs (applies only to dual port HCA when all ports are
Ethernet ports).
``probe_vfs=m1,m2,m3`` - the driver probes ``m1`` single port VFs on physical port 1,
``probe_vfs=m1,m2,m3`` - the driver probes ``m1`` single port VFs on
physical port 1,
``m2`` single port VFs on physical port 2 (applies only if such a port exist)
``m3`` dual port VFs. Those VFs are attached to the hypervisor. (applies only if all ports are configured as Ethernet).
``m3`` dual port VFs. Those VFs are attached to the hypervisor. (applies only
if all ports are configured as Ethernet).
The VFs will be enumerated in the following order:
@ -575,7 +580,8 @@ In our example:
| 04:00.3 : VF associated to port **2**
| 04:00.4 : VF associated to port **2**
**Step 2:** Update ``exclude_devices`` configuration option in ``sriov_agent.ini`` with the correct mapping
**Step 2:** Update ``exclude_devices`` configuration option in
``sriov_agent.ini`` with the correct mapping
Each PF associated net device shall exclude the **other** port's VFs
@ -637,18 +643,20 @@ Known limitations
* SR-IOV is not integrated into the OpenStack Dashboard (horizon). Users must
use the CLI or API to configure SR-IOV interfaces.
* Live migration support has been added to the Libvirt Nova virt-driver in the Train
release for instances with neutron SR-IOV ports. Indirect mode SR-IOV interfaces
(vnic-type: macvtap or virtio-forwarder) can now be migrated transparently to
the guest. Direct mode SR-IOV interfaces (vnic-type: direct or direct-physical)
are detached before the migration and reattached after the migration so this is not
transparent to the guest. To avoid loss of network connectivity when live migrating
with direct mode sriov the user should create a failover bond in the guest with a
transparently live migration port type e.g. vnic-type normal or indirect mode SR-IOV.
* Live migration support has been added to the Libvirt Nova virt-driver in the
Train release for instances with neutron SR-IOV ports. Indirect mode SR-IOV
interfaces (vnic-type: macvtap or virtio-forwarder) can now be migrated
transparently to the guest. Direct mode SR-IOV interfaces (vnic-type: direct
or direct-physical) are detached before the migration and reattached after
the migration so this is not transparent to the guest. To avoid loss of
network connectivity when live migrating with direct mode sriov the user should
create a failover bond in the guest with a transparently live migration port
type e.g. vnic-type normal or indirect mode SR-IOV.
.. note::
SR-IOV features may require a specific NIC driver version, depending on the vendor.
Intel NICs, for example, require ixgbe version 4.4.6 or greater, and ixgbevf version
3.2.2 or greater.
* Attaching SR-IOV ports to existing servers is supported starting with the Victoria release.
* Attaching SR-IOV ports to existing servers is supported starting with the
Victoria release.


@ -95,8 +95,8 @@ Create ``/etc/apache2/neutron.conf`` with content below:
.. end
For deb-based systems copy or symlink the file to ``/etc/apache2/sites-available``.
Then enable the neutron site:
For deb-based systems copy or symlink the file to
``/etc/apache2/sites-available``. Then enable the neutron site:
.. code-block:: console


@ -231,7 +231,7 @@ Create initial networks
Verify network operation
------------------------
.. include:: shared/deploy-provider-verifynetworkoperation.txt
.. include:: deploy-provider-verifynetworkoperation.txt
Network traffic flow
~~~~~~~~~~~~~~~~~~~~


@ -211,7 +211,7 @@ Create initial networks
Verify network operation
------------------------
.. include:: shared/deploy-selfservice-verifynetworkoperation.txt
.. include:: deploy-selfservice-verifynetworkoperation.txt
.. _deploy-lb-selfservice-networktrafficflow:


@ -261,7 +261,7 @@ Create initial networks
Verify network operation
------------------------
.. include:: shared/deploy-provider-verifynetworkoperation.txt
.. include:: deploy-provider-verifynetworkoperation.txt
Network traffic flow
~~~~~~~~~~~~~~~~~~~~


@ -207,7 +207,7 @@ Create initial networks
Verify network operation
------------------------
.. include:: shared/deploy-selfservice-verifynetworkoperation.txt
.. include:: deploy-selfservice-verifynetworkoperation.txt
.. _deploy-ovs-selfservice-networktrafficflow:


@ -236,11 +236,11 @@ filtering technology such as ``iptables``.
Each project contains a ``default`` security group that by default allows all
egress traffic and denies all ingress traffic. You can change the rules in the
``default`` security group. Admin user can also define own set of security group
rules which will be added by default to each new ``default`` and each new non
default (custom) security group created for every project in the cloud. There is
``security-group-default-rules`` API extension which allows to define such own
set of the default security group rules.
``default`` security group. Admin user can also define own set of security
group rules which will be added by default to each new ``default`` and each new
non-default (custom) security group created for every project in the cloud.
There is ``security-group-default-rules`` API extension which allows to define
such own set of the default security group rules.
If you launch an instance without specifying a security group, the ``default``
security group automatically applies to it. Similarly, if you create a port
without specifying a security group, the ``default`` security group


@ -7,7 +7,8 @@ Routing
North/South
-----------
The different configurations are detailed in the :doc:`/admin/ovn/refarch/refarch`
The different configurations are detailed in the
:doc:`/admin/ovn/refarch/refarch`
Non distributed FIP
~~~~~~~~~~~~~~~~~~~


@ -16,12 +16,12 @@ familiar with the following specifications
Overview
--------
A class of devices collectively referred to as off-path SmartNIC DPUs introduces
an important change to earlier architectures where compute and networking agents
used to coexist at the hypervisor host: networking control plane components
are now moved to the SmartNIC DPU's CPU side which includes ``ovs-vswitchd``
and ``ovn-controller``. The following diagram provides an overview of the
components involved::
A class of devices collectively referred to as off-path SmartNIC DPUs
introduces an important change to earlier architectures where compute and
networking agents used to coexist at the hypervisor host: networking control
plane components are now moved to the SmartNIC DPU's CPU side which includes
``ovs-vswitchd`` and ``ovn-controller``. The following diagram provides an
overview of the components involved::
┌────────────────────────────────────┐
│ Hypervisor │ LoM Ports


@ -10,7 +10,8 @@
| remote_ip_prefix | 0.0.0.0/0 |
+------------------+-----------+
$ openstack security group rule create --ethertype IPv6 --proto ipv6-icmp default
$ openstack security group rule create --ethertype IPv6 \
--proto ipv6-icmp default
+-----------+-----------+
| Field | Value |
+-----------+-----------+
@ -31,13 +32,14 @@
| remote_ip_prefix | 0.0.0.0/0 |
+------------------+-----------+
$ openstack security group rule create --ethertype IPv6 --proto tcp --dst-port 22 default
+------------------+-----------+
$ openstack security group rule create --ethertype IPv6 --proto tcp \
--dst-port 22 default
+----------------+---------+
| Field | Value |
+------------------+-----------+
+----------------+---------+
| direction | ingress |
| ethertype | IPv6 |
| port_range_max | 22 |
| port_range_min | 22 |
| protocol | tcp |
+------------------+-----------+
+----------------+---------+


@ -42,7 +42,8 @@ NAT for IPv4 network traffic and directly routes IPv6 network traffic.
If you are using an MTU value on your network below 1280, please
read the warning listed in the
`IPv6 configuration guide <../config-ipv6.html#project-network-considerations>`__
`IPv6 configuration guide
<../config-ipv6.html#project-network-considerations>`__
before creating any subnets.
#. Create a IPv4 subnet on the self-service network.
@ -67,12 +68,13 @@ NAT for IPv4 network traffic and directly routes IPv6 network traffic.
.. code-block:: console
$ openstack subnet create --subnet-range fd00:192:0:2::/64 --ip-version 6 \
--ipv6-ra-mode slaac --ipv6-address-mode slaac --network selfservice1 \
--dns-nameserver 2001:4860:4860::8844 selfservice1-v6
+-------------------+------------------------------------------------------+
$ openstack subnet create --subnet-range fd00:192:0:2::/64 \
--ip-version 6 --ipv6-ra-mode slaac --ipv6-address-mode slaac \
--network selfservice1 --dns-nameserver 2001:4860:4860::8844 \
selfservice1-v6
+-------------------+--------------------------------------------------+
| Field | Value |
+-------------------+------------------------------------------------------+
+-------------------+--------------------------------------------------+
| allocation_pools | fd00:192:0:2::2-fd00:192:0:2:ffff:ffff:ffff:ffff |
| cidr | fd00:192:0:2::/64 |
| dns_nameservers | 2001:4860:4860::8844 |
@ -82,7 +84,7 @@ NAT for IPv4 network traffic and directly routes IPv6 network traffic.
| ipv6_address_mode | slaac |
| ipv6_ra_mode | slaac |
| name | selfservice1-v6 |
+-------------------+------------------------------------------------------+
+-------------------+--------------------------------------------------+
#. Create a router.


@ -25,7 +25,8 @@ they provide their version of manuals.
set suitable plugin for your own deployment.
#. Configure the VPNaaS service provider by creating the
``/etc/neutron/neutron_vpnaas.conf`` file as follows, ``strongswan`` used in Ubuntu distribution:
``/etc/neutron/neutron_vpnaas.conf`` file as follows, ``strongswan`` used
in Ubuntu distribution:
.. code-block:: ini
@ -41,7 +42,8 @@ they provide their version of manuals.
Consider to use the appropriate one for your deployment.
#. Configure the VPNaaS plugin for the L3 agent by adding to
``/etc/neutron/l3_agent.ini`` the following section, ``StrongSwanDriver`` used in Ubuntu distribution:
``/etc/neutron/l3_agent.ini`` the following section, ``StrongSwanDriver``
used in Ubuntu distribution:
.. code-block:: ini
@ -78,13 +80,13 @@ Using VPNaaS with endpoint group (recommended)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
IPsec site-to-site connections will support multiple local subnets,
in addition to the current multiple peer CIDRs. The multiple local subnet feature
is triggered by not specifying a local subnet, when creating a VPN service.
Backwards compatibility is maintained with single local subnets, by providing
the subnet in the VPN service creation.
in addition to the current multiple peer CIDRs. The multiple local subnet
feature is triggered by not specifying a local subnet, when creating a VPN
service. Backwards compatibility is maintained with single local subnets, by
providing the subnet in the VPN service creation.
To support multiple local subnets, a new capability called "End Point Groups" has
been added. Each endpoint group will define one or more endpoints of
To support multiple local subnets, a new capability called "End Point Groups"
has been added. Each endpoint group will define one or more endpoints of
a specific type, and can be used to specify both local and peer endpoints for
IPsec connections. The endpoint groups separate the "what gets connected" from
the "how to connect" for a VPN service, and can be used for different flavors


@ -319,7 +319,9 @@ Expand and Contract Scripts
The obsolete "branchless" design of a migration script included that it
indicates a specific "version" of the schema, and includes directives that
apply all necessary changes to the database at once. If we look for example at
the script ``2d2a8a565438_hierarchical_binding.py``, we will see::
the script ``2d2a8a565438_hierarchical_binding.py``, we will see:
.. code-block:: python
# .../alembic_migrations/versions/2d2a8a565438_hierarchical_binding.py
@ -351,20 +353,23 @@ the script ``2d2a8a565438_hierarchical_binding.py``, we will see::
# ... more DROP instructions ...
The above script contains directives that are both under the "expand"
and "contract" categories, as well as some data migrations. the ``op.create_table``
directive is an "expand"; it may be run safely while the old version of the
application still runs, as the old code simply doesn't look for this table.
and "contract" categories, as well as some data migrations.
The ``op.create_table`` directive is an "expand"; it may be run safely while
the old version of the application still runs, as the old code simply doesn't
look for this table.
The ``op.drop_constraint`` and ``op.drop_column`` directives are
"contract" directives (the drop column more so than the drop constraint); running
at least the ``op.drop_column`` directives means that the old version of the
application will fail, as it will attempt to access these columns which no longer
exist.
"contract" directives (the drop column more so than the drop constraint);
running at least the ``op.drop_column`` directives means that the old version
of the application will fail, as it will attempt to access these columns which
no longer exist.
The data migrations in this script are adding new
rows to the newly added ``ml2_port_binding_levels`` table.
Under the new migration script directory structure, the above script would be
stated as two scripts; an "expand" and a "contract" script::
stated as two scripts; an "expand" and a "contract" script:
.. code-block:: python
# expansion operations
# .../alembic_migrations/versions/liberty/expand/2bde560fc638_hierarchical_binding.py
@ -427,7 +432,9 @@ For such cases, we use the ``contract_creation_exceptions`` that should be
implemented as part of such migrations. This is needed to get functional tests
pass.
Usage::
Usage:
.. code-block:: python
def contract_creation_exceptions():
"""Docstring should explain why we allow such exception for contract
@ -445,7 +452,8 @@ HEAD files for conflict management
In directory ``neutron/db/migration/alembic_migrations/versions`` there are two
files, ``CONTRACT_HEAD`` and ``EXPAND_HEAD``. These files contain the ID of the
head revision in each branch. The purpose of these files is to validate the
revision timelines and prevent non-linear changes from entering the merge queue.
revision timelines and prevent non-linear changes from entering the merge
queue.
When you create a new migration script by neutron-db-manage these files will be
updated automatically. But if another migration script is merged while your


@ -24,8 +24,8 @@
Client command extension support
================================
The client command extension adds support for extending the neutron client while
considering ease of creation.
The client command extension adds support for extending the neutron client
while considering ease of creation.
The full document can be found in the python-neutronclient repository:
https://docs.openstack.org/python-neutronclient/latest/contributor/client_command_extensions.html


@ -142,10 +142,10 @@ code base.
potentially breaks your code. It is then up to you maintaining the affected
plugin/driver to determine whether the failure is transient or real, and
resolve the problem if it is.
* it communicates to a patch author that they may be breaking a plugin/driver.
If they have the time/energy/relationship with the maintainer of the
plugin/driver in question, then they can (at their discretion) work to
resolve the breakage.
* it communicates to a patch author that they may be breaking a
plugin/driver. If they have the time/energy/relationship with the
maintainer of the plugin/driver in question, then they can (at their
discretion) work to resolve the breakage.
* it communicates to the community at large whether a given plugin/driver
is being actively maintained.
* A maintainer that is perceived to be responsive to failures in their
@ -251,12 +251,14 @@ it does not affect Neutron core code stability.
DevStack Integration Strategies
-------------------------------
When developing and testing a new or existing plugin or driver, the aid provided
by DevStack is incredibly valuable: DevStack can help get all the software bits
installed, and configured correctly, and more importantly in a predictable way.
For DevStack integration there are a few options available, and they may or may not
make sense depending on whether you are contributing a new or existing plugin or
driver.
When developing and testing a new or existing plugin or driver, the aid
provided by DevStack is incredibly valuable: DevStack can help get all the
software bits installed, and configured correctly, and more importantly in a
predictable way.
For DevStack integration there are a few options available, and they may or
may not make sense depending on whether you are contributing a new or
existing plugin or driver.
If you are contributing a new plugin, the approach to choose should be based on
`Extras.d Hooks' externally hosted plugins
@ -290,15 +292,16 @@ find on http://docs.openstack.org/infra/manual/creators.html. They are meant to
be the bare minimum you have to complete in order to get you off the ground.
* Create a public repository: this can be a personal opendev.org repo or any
publicly available git repo, e.g. ``https://github.com/john-doe/foo.git``. This
would be a temporary buffer to be used to feed the one on opendev.org.
publicly available git repo, e.g. ``https://github.com/john-doe/foo.git``.
This would be a temporary buffer to be used to feed the one on opendev.org.
* Initialize the repository: if you are starting afresh, you may *optionally*
want to use cookiecutter to get a skeleton project. You can learn how to use
cookiecutter on https://opendev.org/openstack-dev/cookiecutter.
If you want to build the repository from an existing Neutron module, you may
want to skip this step now, build the history first (next step), and come back
here to initialize the remainder of the repository with other files being
generated by the cookiecutter (like tox.ini, setup.cfg, setup.py, etc.).
want to skip this step now, build the history first (next step), and come
back here to initialize the remainder of the repository with other files
being generated by the cookiecutter (like tox.ini, setup.cfg,
setup.py, etc.).
* Create a repository on opendev.org. For
this you need the help of the OpenStack infra team. It is worth noting that
you only get one shot at creating the repository on opendev.org. This
@ -328,8 +331,8 @@ Internationalization support
----------------------------
OpenStack is committed to broad international support.
Internationalization (I18n) is one of important areas to make OpenStack ubiquitous.
Each project is recommended to support i18n.
Internationalization (I18n) is one of important areas to make OpenStack
ubiquitous. Each project is recommended to support i18n.
This section describes how to set up translation support.
The description in this section uses the following variables:
@ -485,9 +488,9 @@ For example, for the ``networking-foo`` repo::
neutron.ml2.extension_drivers =
foo_ext = networking_foo.plugins.ml2.drivers.foo:FooExtensionDriver
* Note: It is advisable to include ``foo`` in the names of these entry points to
avoid conflicts with other third-party packages that may get installed in the
same environment.
* Note: It is advisable to include ``foo`` in the names of these entry points
to avoid conflicts with other third-party packages that may get installed in
the same environment.
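
For illustration, the same entry point expressed with plain setuptools (a
sketch; stadium projects normally declare it through pbr in ``setup.cfg``):

.. code-block:: python

   # setup.py of the hypothetical networking-foo package
   from setuptools import setup

   setup(
       name='networking-foo',
       version='0.1.0',
       packages=['networking_foo'],
       entry_points={
           'neutron.ml2.extension_drivers': [
               'foo_ext = networking_foo.plugins.ml2.drivers.foo:FooExtensionDriver',
           ],
       },
   )
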
API Extensions
@ -531,8 +534,9 @@ Interface (VIF) drivers for the reference implementations are defined in
``neutron/agent/linux/interface.py``. Third-party interface drivers shall be
defined in a similar location within their own repo.
The entry point for the interface driver is a Neutron config option. It is up to
the installer to configure this item in the ``[default]`` section. For example::
The entry point for the interface driver is a Neutron config option. It is up
to the installer to configure this item in the ``[default]`` section.
For example::
[default]
interface_driver = networking_foo.agent.linux.interface.FooInterfaceDriver


@ -4,9 +4,9 @@ So You Want to Contribute...
For general information on contributing to OpenStack, please check out the
`contributor guide <https://docs.openstack.org/contributors/>`_ to get started.
It covers all the basics that are common to all OpenStack projects: the accounts
you need, the basics of interacting with our Gerrit review system, how we
communicate as a community, etc.
It covers all the basics that are common to all OpenStack projects: the
accounts you need, the basics of interacting with our Gerrit review system,
how we communicate as a community, etc.
Below will cover the more project specific information you need to get started
with Neutron.
@ -21,9 +21,9 @@ Communication
- Team Meeting:
This is general Neutron team meeting. The discussion in this meeting is about
all things related to the Neutron project, like community goals, progress with
blueprints, bugs, etc. There is also ``On Demand Agenda`` at the end of this
meeting, where anyone can add a topic to discuss with the Neutron team.
all things related to the Neutron project, like community goals, progress
with blueprints, bugs, etc. There is also ``On Demand Agenda`` at the end of
this meeting, where anyone can add a topic to discuss with the Neutron team.
- time: http://eavesdrop.openstack.org/#Neutron_Team_Meeting
- agenda: https://wiki.openstack.org/wiki/Network/Meetings
@ -47,9 +47,9 @@ Communication
Contacting the Core Team
~~~~~~~~~~~~~~~~~~~~~~~~~
.. This section should list the core team, their irc nicks, emails, timezones etc. If
all this info is maintained elsewhere (i.e. a wiki), you can link to that instead of
enumerating everyone here.
.. This section should list the core team, their irc nicks, emails, timezones
etc. If all this info is maintained elsewhere (i.e. a wiki), you can link
to that instead of enumerating everyone here.
The list of current Neutron core reviewers is available on `gerrit
<https://review.opendev.org/#/admin/groups/38,members>`_.
@ -68,14 +68,15 @@ RFE should be submitted as a Launchpad bug first (see section
:ref:`reporting_a_bug`). The title of RFE bug should starts with ``[RFE]`` tag.
Such RFEs need to be discussed and approved by the :ref:`Neutron drivers
team<drivers_team>`. In some cases an additional spec proposed to the `Neutron
specs <https://opendev.org/openstack/neutron-specs>`_ repo may be necessary. The
complete process is described in detail in :ref:`Blueprints
specs <https://opendev.org/openstack/neutron-specs>`_ repo may be necessary.
The complete process is described in detail in :ref:`Blueprints
guide<neutron_blueprints>`.
Task Tracking
~~~~~~~~~~~~~~
.. This section is about where you track tasks- launchpad? storyboard? is there more
than one launchpad project? what's the name of the project group in storyboard?
.. This section is about where you track tasks- launchpad? storyboard? is
there more than one launchpad project? What's the name of the project group
in storyboard?
We track our tasks in `Launchpad <https://bugs.launchpad.net/neutron>`__.
If you're looking for some smaller, easier work item to pick up and get started
@ -85,16 +86,16 @@ List of all official tags which Neutron team is using is available on
:ref:`bugs<neutron_bugs>`.
Every week, one of our team members is the :ref:`bug
deputy<neutron_bug_deputy>` and at the end of the week such person usually
sends report about new bugs to the mailing list openstack-discuss@lists.openstack.org
or talks about it on our team meeting. This is also good place to look for some
work to do.
sends report about new bugs to the mailing list
openstack-discuss@lists.openstack.org or talks about it on our team meeting.
This is also good place to look for some work to do.
.. _reporting_a_bug:
Reporting a Bug
~~~~~~~~~~~~~~~
.. Pretty self explanatory section, link directly to where people should report bugs for
your project.
.. Pretty self explanatory section, link directly to where people should
report bugs for your project.
You found an issue and want to make sure we are aware of it? You can do so on
`Launchpad <https://bugs.launchpad.net/neutron/+filebug>`__.
@ -103,9 +104,9 @@ More info about Launchpad usage can be found on `OpenStack docs page
Getting Your Patch Merged
~~~~~~~~~~~~~~~~~~~~~~~~~
.. This section should have info about what it takes to get something merged. Do
you require one or two +2's before +W? Do some of your repos require unit test
changes with all patches? etc.
.. This section should have info about what it takes to get something merged.
Do you require one or two +2's before +W? Do some of your repos require
unit test changes with all patches? etc.
All changes proposed to the Neutron or one of the Neutron stadium projects
require two +2 votes from Neutron core reviewers before one of the core
@ -127,16 +128,17 @@ Additionally to what is described in this guide, Neutron's PTL duties are:
- triage new RFEs and prepare `Neutron drivers team meeting
<http://eavesdrop.openstack.org/#Neutron_drivers_Meeting>`_,
- maintain list of the :ref:`stadium projects<neutron_stadium>` health - if each
project has gotten active team members and if it is following community and
Neutron's guidelines and goals,
- maintain list of the :ref:`stadium projects<neutron_stadium>` health - if
each project has gotten active team members and if it is following community
and Neutron's guidelines and goals,
- maintain list of the :ref:`stadium projects
lieutenants<subproject_lieutenants>` - check if those people are still active
in the projects, if their contact data are correct, maybe there is someone
new who is active in the stadium project and could be added to this list.
Over the past few years, the Neutron team has followed a mentoring approach for:
Over the past few years, the Neutron team has followed a mentoring
approach for:
- new contributors,
- potential new core reviewers,


@ -18,7 +18,8 @@ Useful dashboard definitions are found in ``dashboards`` directory.
Grafana Dashboards
------------------
Look for neutron and networking-* dashboard by names by going to the following link:
Look for neutron and networking-* dashboard by names by going to the following
link:
`Grafana <https://grafana.opendev.org/>`_


@ -34,8 +34,8 @@ By reading and collaboratively contributing to such a knowledge base, your
development and review cycle becomes shorter, because you will learn (and teach
to others after you) what to watch out for, and how to be proactive in order
to prevent negative feedback, minimize programming errors, writing better
tests, and so on and so forth...in a nutshell, how to become an effective Neutron
developer.
tests, and so on and so forth...in a nutshell, how to become an effective
Neutron developer.
The notes below are meant to be free-form and brief by design. They are not meant
to replace or duplicate `OpenStack documentation <http://docs.openstack.org>`_,
@ -57,7 +57,8 @@ Developing better software
Plugin development
~~~~~~~~~~~~~~~~~~
Document common pitfalls as well as good practices done during plugin development.
Document common pitfalls as well as good practices done during plugin
development.
* Use mixin classes as last resort. They can be a powerful tool to add behavior
but their strength is also a weakness, as they can introduce `unpredictable <https://review.opendev.org/#/c/121290/>`_
@ -75,23 +76,26 @@ Document common pitfalls as well as good practices done during plugin developmen
there is an agent on the other side of the message broker that interacts
with the server. Plugins may not rely on `agents <https://review.opendev.org/#/c/174020/>`_ at all.
* Be mindful of required capabilities when you develop plugin extensions. The
`Extension description <https://github.com/openstack/neutron/blob/b14c06b5/neutron/api/extensions.py#L122>`_ provides the ability to specify the list of required capabilities
`Extension description <https://github.com/openstack/neutron/blob/b14c06b5/neutron/api/extensions.py#L122>`_
provides the ability to specify the list of required capabilities
for the extension you are developing. By declaring this list, the server will
not start up if the requirements are not met, thus avoiding leading the system
to experience undetermined behavior at runtime.
not start up if the requirements are not met, thus avoiding leading the
system to experience undetermined behavior at runtime.
Database interaction
~~~~~~~~~~~~~~~~~~~~
Document common pitfalls as well as good practices done during database development.
Document common pitfalls as well as good practices done during database
development.
* `first() <http://docs.sqlalchemy.org/en/rel_1_0/orm/query.html#sqlalchemy.orm.query.Query.first>`_
does not raise an exception.
* Do not use `delete() <http://docs.sqlalchemy.org/en/rel_1_0/orm/query.html#sqlalchemy.orm.query.Query.delete>`_
to remove objects. A delete query does not load the object so no sqlalchemy events
can be triggered that would do things like recalculate quotas or update revision
numbers of parent objects. For more details on all of the things that can go wrong
using bulk delete operations, see the "Warning" sections in the link above.
to remove objects. A bulk delete query does not load the objects, so no
SQLAlchemy events can be triggered that would, for example, recalculate
quotas or update revision numbers of parent objects (see the sketch after
this list). For more details on everything that can go wrong with bulk
delete operations, see the "Warning" sections in the link above.
* For PostgreSQL if you're using GROUP BY everything in the SELECT list must be
an aggregate SUM(...), COUNT(...), etc or used in the GROUP BY.
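To make the difference concrete, a rough sketch (the model, session and
variable names are illustrative only): the bulk form emits a single DELETE
without loading anything, while the ORM form loads each object and fires the
usual events.

.. code-block:: python

    # Bulk delete: no objects are loaded, so no SQLAlchemy events fire and
    # quota/revision bookkeeping is silently skipped.
    context.session.query(Port).filter_by(network_id=net_id).delete()

    # ORM delete: each object is loaded and deleted individually, so the
    # per-object events (quota recalculation, revision bumps, ...) run.
    for port in context.session.query(Port).filter_by(network_id=net_id):
        context.session.delete(port)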
@ -170,8 +174,8 @@ Document common pitfalls as well as good practices done during database developm
System development
~~~~~~~~~~~~~~~~~~
Document common pitfalls as well as good practices done when invoking system commands
and interacting with linux utils.
Document common pitfalls as well as good practices done when invoking system
commands and interacting with linux utils.
* When a patch requires a new platform tool or a new feature in an existing
tool, check if common platforms ship packages with the aforementioned
@ -179,37 +183,39 @@ and interacting with linux utils.
visibility (as these patches are brought up to the attention of the core team
during team meetings).
More details in :ref:`review guidelines <spec-review-practices>`.
* When a patch or the code depends on a new feature in the kernel or in any platform tools
(dnsmasq, ip, Open vSwitch etc.), consider introducing a new sanity check to
validate deployments for the expected features. Note that sanity checks *must
not* check for version numbers of underlying platform tools because
distributions may decide to backport needed features into older versions.
Instead, sanity checks should validate actual features by attempting to use them.
* When a patch or the code depends on a new feature in the kernel or in any
platform tools (dnsmasq, ip, Open vSwitch etc.), consider introducing a new
sanity check to validate deployments for the expected features. Note that
sanity checks *must not* check for version numbers of underlying platform
tools because distributions may decide to backport needed features into older
versions. Instead, sanity checks should validate actual features by
attempting to use them.
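A hypothetical check in that spirit (the probed option is made up; the real
checks live under neutron.cmd.sanity) simply tries the feature and treats any
failure as "not supported":

.. code-block:: python

    import subprocess


    def dnsmasq_supports_new_option():
        """Hypothetical sanity check: probe the option, not the version."""
        try:
            subprocess.run(['dnsmasq', '--test', '--some-new-option'],
                           capture_output=True, check=True)
            return True
        except (OSError, subprocess.CalledProcessError):
            return False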
Eventlet concurrent model
~~~~~~~~~~~~~~~~~~~~~~~~~
Document common pitfalls as well as good practices done when using eventlet and monkey
patching.
Document common pitfalls as well as good practices done when using eventlet
and monkey patching.
* Do not use with_lockmode('update') on SQL queries without protecting the operation
with a lockutils semaphore. For some SQLAlchemy database drivers that operators may
choose (e.g. MySQLdb) it may result in a temporary deadlock by yielding to another
coroutine while holding the DB lock. The following wiki provides more details:
* Do not use with_lockmode('update') on SQL queries without protecting the
operation with a lockutils semaphore. For some SQLAlchemy database drivers
that operators may choose (e.g. MySQLdb) it may result in a temporary
deadlock by yielding to another coroutine while holding the DB lock.
The following wiki provides more details:
https://wiki.openstack.org/wiki/OpenStack_and_SQLAlchemy#MySQLdb_.2B_eventlet_.3D_sad
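A minimal sketch of the guarded pattern, assuming oslo.concurrency provides
the semaphore (the model and lock names are illustrative; newer SQLAlchemy
spells the locking query ``with_for_update()``):

.. code-block:: python

    from oslo_concurrency import lockutils


    @lockutils.synchronized('allocation-lock')
    def _allocate_locked(context, record_id):
        # The row lock is only taken while the green-thread level semaphore
        # is held, so a DB driver that yields while waiting cannot deadlock
        # two coroutines against each other.
        return (context.session.query(Allocation)    # Allocation: made up
                .filter_by(id=record_id)
                .with_for_update()
                .one())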
Mocking and testing
~~~~~~~~~~~~~~~~~~~
Document common pitfalls as well as good practices done when writing tests, any test.
For anything more elaborate, please visit the testing section.
Document common pitfalls as well as good practices done when writing tests,
any test. For anything more elaborate, please visit the testing section.
* Preferring low level testing versus full path testing (e.g. not testing database
via client calls). The former is to be favored in unit testing, whereas the latter
is to be favored in functional testing.
* Prefer specific assertions (assert(Not)In, assert(Not)IsInstance, assert(Not)IsNone,
etc) over generic ones (assertTrue/False, assertEqual) because they raise more
meaningful errors:
* Prefer low-level testing over full-path testing (e.g. do not test the
database via client calls). The former is favored in unit testing, whereas
the latter is favored in functional testing.
* Prefer specific assertions (assert(Not)In, assert(Not)IsInstance,
assert(Not)IsNone, etc) over generic ones (assertTrue/False, assertEqual)
because they raise more meaningful errors:
.. code:: python
@ -221,28 +227,30 @@ For anything more elaborate, please visit the testing section.
self.assertTrue(3 in [1, 2])
# raise meaningless error: "AssertionError: False is not true"
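    # For contrast (illustrative; the exact message depends on the test
    # runner), the specific assertion points straight at the problem:
    self.assertIn(3, [1, 2])
    # e.g. "MismatchError: 3 not in [1, 2]"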
* Use the pattern "self.assertEqual(expected, observed)" not the opposite, it helps
reviewers to understand which one is the expected/observed value in non-trivial
assertions. The expected and observed values are also labeled in the output when
the assertion fails.
* Prefer specific assertions (assertTrue, assertFalse) over assertEqual(True/False, observed).
* Don't write tests that don't test the intended code. This might seem silly but
it's easy to do with a lot of mocks in place. Ensure that your tests break as
expected before your code change.
* Avoid heavy use of the mock library to test your code. If your code requires more
than one mock to ensure that it does the correct thing, it needs to be refactored
into smaller, testable units. Otherwise we depend on fullstack/tempest/api tests
to test all of the real behavior and we end up with code containing way too many
hidden dependencies and side effects.
* Use the pattern "self.assertEqual(expected, observed)" not the opposite, it
helps reviewers to understand which one is the expected/observed value in
non-trivial assertions. The expected and observed values are also labeled
in the output when the assertion fails.
* Prefer specific assertions (assertTrue, assertFalse) over
assertEqual(True/False, observed).
* Don't write tests that don't test the intended code. This might seem silly
but it is easy to do with a lot of mocks in place. Ensure that your tests
break as expected before your code change.
* Avoid heavy use of the mock library to test your code. If your code requires
more than one mock to ensure that it does the correct thing, it needs to be
refactored into smaller, testable units. Otherwise we depend on
fullstack/tempest/api tests to test all of the real behavior and we end up
with code containing way too many hidden dependencies and side effects.
* All behavior changes to fix bugs should include a test that prevents a
regression. If you made a change and it didn't break a test, it means the
code was not adequately tested in the first place, it's not an excuse to leave
it untested.
code was not adequately tested in the first place, it's not an excuse to
leave it untested.
* Test the failure cases. Use a mock side effect to throw the necessary
exceptions to test your 'except' clauses.
* Don't mimic existing tests that violate these guidelines. We are attempting to
replace all of these so more tests like them create more work. If you need help
writing a test, reach out to the testing lieutenants and the team on IRC.
* Don't mimic existing tests that violate these guidelines. We are attempting
to replace all of these so more tests like them create more work. If you
need help writing a test, reach out to the testing lieutenants and the team
on IRC.
* Mocking open() is a dangerous practice because it can lead to unexpected
bugs like `bug 1503847 <https://bugs.launchpad.net/neutron/+bug/1503847>`_.
In fact, when the built-in open method is mocked during tests, some
@ -269,14 +277,17 @@ down into the following directories based on content:
Additional documentation resides in the neutron-lib repository:
* api-ref - API reference documentation for Neutron resource and API extensions.
* api-ref - API reference documentation for Neutron resource and API
extensions.
Backward compatibility
~~~~~~~~~~~~~~~~~~~~~~
Document common pitfalls as well as good practices done when extending the RPC Interfaces.
Document common pitfalls as well as good practices done when extending the
RPC Interfaces.
* Make yourself familiar with :ref:`Upgrade review guidelines <upgrade_review_guidelines>`.
* Make yourself familiar with
:ref:`Upgrade review guidelines <upgrade_review_guidelines>`.
Deprecation
+++++++++++
@ -304,13 +315,14 @@ In terms of neutron development, this means:
Scalability issues
~~~~~~~~~~~~~~~~~~
Document common pitfalls as well as good practices done when writing code that needs to process
a lot of data.
Document common pitfalls as well as good practices done when writing code
that needs to process a lot of data.
Translation and logging
~~~~~~~~~~~~~~~~~~~~~~~
Document common pitfalls as well as good practices done when instrumenting your code.
Document common pitfalls as well as good practices done when instrumenting
your code.
* Make yourself familiar with `OpenStack logging guidelines <http://specs.openstack.org/openstack/openstack-specs/specs/log-guidelines.html#definition-of-log-levels>`_
to avoid littering the logs with traces logged at inappropriate levels.
@ -326,13 +338,14 @@ Document common pitfalls as well as good practices done when instrumenting your
Project interfaces
~~~~~~~~~~~~~~~~~~
Document common pitfalls as well as good practices done when writing code that is used
to interface with other projects, like Keystone or Nova.
Document common pitfalls as well as good practices done when writing code
that is used to interface with other projects, like Keystone or Nova.
Documenting your code
~~~~~~~~~~~~~~~~~~~~~
Document common pitfalls as well as good practices done when writing docstrings.
Document common pitfalls as well as good practices done when writing
docstrings.
Landing patches more rapidly
----------------------------
@ -363,17 +376,19 @@ Nits and pedantic comments
Document common nits and pedantic comments to watch out for.
* Make sure you spell correctly, the best you can, no-one wants rebase generators at
the end of the release cycle!
* Make sure you spell correctly, the best you can, no-one wants rebase
generators at the end of the release cycle!
* The odd pep8 error may cause an entire CI run to be wasted. Consider running
validation (pep8 and/or tests) before submitting your patch. If you keep forgetting
consider installing a git `hook <https://git-scm.com/book/en/v2/Customizing-Git-Git-Hooks>`_
validation (pep8 and/or tests) before submitting your patch. If you keep
forgetting consider installing a git
`hook <https://git-scm.com/book/en/v2/Customizing-Git-Git-Hooks>`_
so that Git will do it for you.
* Sometimes, new contributors want to dip their toes with trivial patches, but we
at OpenStack *love* bike shedding and their patches may sometime stall. In
some extreme cases, the more trivial the patch, the higher the chances it fails
to merge. To ensure we as a team provide/have a frustration-free experience
new contributors should be redirected to fixing `low-hanging-fruit bugs <https://bugs.launchpad.net/neutron/+bugs?field.tag=low-hanging-fruit>`_
* Sometimes, new contributors want to dip their toes with trivial patches,
but we at OpenStack *love* bike shedding and their patches may sometimes
stall. In some extreme cases, the more trivial the patch, the higher the
chances it fails to merge. To ensure we as a team provide a
frustration-free experience, new contributors should be redirected to fixing
`low-hanging-fruit bugs <https://bugs.launchpad.net/neutron/+bugs?field.tag=low-hanging-fruit>`_
that have a tangible positive impact on the codebase. Spelling mistakes and
docstring fixes are fine, but there is a lot more that is relatively easy to
fix and has a direct impact on Neutron users.
@ -381,36 +396,39 @@ Document common nits and pedantic comments to watch out for.
Reviewer comments
~~~~~~~~~~~~~~~~~
* Acknowledge them one by one by either clicking 'Done' or by replying extensively.
If you do not, the reviewer won't know whether you thought it was not important,
or you simply forgot. If the reply satisfies the reviewer, consider capturing the
input in the code/document itself so that it's for reviewers of newer patchsets to
see (and other developers when the patch merges).
* Acknowledge them one by one by either clicking 'Done' or by replying
extensively. If you do not, the reviewer won't know whether you thought it
was not important, or you simply forgot. If the reply satisfies the reviewer,
consider capturing the input in the code/document itself so that it's there
for reviewers of newer patchsets to see (and for other developers when the
patch merges).
* Watch for the feedback on your patches. Acknowledge it promptly and act on it
quickly, so that the reviewer remains engaged. If you disappear for a week after
you posted a patchset, it is very likely that the patch will end up being
neglected.
* Do not take negative feedback personally. Neutron is a large project with lots
of contributors with different opinions on how things should be done. Many come
from widely varying cultures and languages so the English, text-only feedback
can unintentionally come across as harsh. Getting a -1 means reviewers are
trying to help get the patch into a state that can be merged, it doesn't just
mean they are trying to block it. It's very rare to get a patch merged on the
first iteration that makes everyone happy.
quickly, so that the reviewer remains engaged. If you disappear for a week
after you posted a patchset, it is very likely that the patch will end up
being neglected.
* Do not take negative feedback personally. Neutron is a large project with
lots of contributors with different opinions on how things should be done.
Many come from widely varying cultures and languages so the English,
text-only feedback can unintentionally come across as harsh. Getting a -1
means reviewers are trying to help get the patch into a state that can be
merged, it doesn't just mean they are trying to block it. It's very rare to
get a patch merged on the first iteration that makes everyone happy.
Code Review
~~~~~~~~~~~
* You should visit `OpenStack How To Review wiki <https://wiki.openstack.org/wiki/How_To_Contribute#Reviewing>`_
* Stay focussed and review what matters for the release. Please check out the Neutron
section for the `Gerrit dashboard <http://status.openstack.org/reviews/>`_. The output
is generated by this `tool <https://github.com/openstack-infra/reviewday/blob/master/bin/neutron>`_.
* Stay focussed and review what matters for the release. Please check out the
Neutron section for the
`Gerrit dashboard <http://status.openstack.org/reviews/>`_. The output
is generated by this
`tool <https://github.com/openstack-infra/reviewday/blob/master/bin/neutron>`_.
IRC
~~~~
* IRC is a place where you can speak with many of the Neutron developers and core
reviewers. For more information you should visit
* IRC is a place where you can speak with many of the Neutron developers
and core reviewers. For more information you should visit
`OpenStack IRC wiki <http://wiki.openstack.org/wiki/IRC>`_
Neutron IRC channel is #openstack-neutron
* There are weekly IRC meetings related to many different projects/teams
@ -428,8 +446,9 @@ IRC
up the feedback loop.
* Each area of Neutron or sub-project of Neutron has a specific lieutenant
in charge of it.
You can most likely find these lieutenants on IRC, it is advised however to try
and send public questions to the channel rather then to a specific person if possible.
You can most likely find these lieutenants on IRC; however, it is advised to
send public questions to the channel rather than to a specific person if
possible.
(This increases the chances of getting faster answers to your questions.)
A list of the areas and lieutenants nicknames can be found at
:doc:`Core Reviewers <policies/neutron-teams>`.
@ -437,7 +456,8 @@ IRC
Commit messages
~~~~~~~~~~~~~~~
Document common pitfalls as well as good practices done when writing commit messages.
Document common pitfalls as well as good practices done when writing commit
messages.
For more details see `Git commit message best practices <https://wiki.openstack.org/wiki/GitCommitMessages>`_.
This is the TL;DR version with the important points for committing to Neutron.
@ -456,13 +476,13 @@ This is the TL;DR version with the important points for committing to Neutron.
code will fix the problem. If it's part of a feature implementation, explain
what component of the feature the patch implements. Do not just describe the
bug, that's what launchpad is for.
* Use the "Closes-Bug: #BUG-NUMBER" tag if the patch addresses a bug. Submitting
a bugfix without a launchpad bug reference is unacceptable, even if it's
trivial. Launchpad is how bugs are tracked so fixes without a launchpad bug are
a nightmare when users report the bug from an older version and the Neutron team
can't tell if/why/how it's been fixed. Launchpad is also how backports are
identified and tracked so patches without a bug report cannot be picked to stable
branches.
* Use the "Closes-Bug: #BUG-NUMBER" tag if the patch addresses a bug.
Submitting a bugfix without a launchpad bug reference is unacceptable, even
if it's trivial. Launchpad is how bugs are tracked so fixes without a
launchpad bug are a nightmare when users report the bug from an older
version and the Neutron team can't tell if/why/how it's been fixed.
Launchpad is also how backports are identified and tracked so patches
without a bug report cannot be picked to stable branches.
* Use the "Implements: blueprint NAME-OF-BLUEPRINT" or "Partially-Implements:
blueprint NAME-OF-BLUEPRINT" for features so reviewers can determine if the
code matches the spec that was agreed upon. This also updates the blueprint
@ -482,12 +502,14 @@ This is the TL;DR version with the important points for committing to Neutron.
Dealing with Zuul
~~~~~~~~~~~~~~~~~
Document common pitfalls as well as good practices done when dealing with OpenStack CI.
Document common pitfalls as well as good practices done when dealing with
OpenStack CI.
* When you submit a patch, consider checking its `status <http://status.openstack.org/zuul/>`_
in the queue. If you see a job failures, you might as well save time and try to figure out
in advance why it is failing.
* Excessive use of 'recheck' to get test to pass is discouraged. Please examine the logs for
the failing test(s) and make sure your change has not tickled anything that might be causing
a new failure or race condition. Getting your change in could make it even harder to debug
what is actually broken later on.
in the queue. If you see job failures, you might as well save time and try
to figure out in advance why they are failing.
* Excessive use of 'recheck' to get tests to pass is discouraged. Please examine
the logs for the failing test(s) and make sure your change has not tickled
anything that might be causing a new failure or race condition. Getting your
change in could make it even harder to debug what is actually broken
later on.

View File

@ -40,7 +40,8 @@ Interactions with the agent API object are in the following order:
#. The agent initializes the agent API object.
#. The agent passes the agent API object into the extension manager.
#. The manager passes the agent API object into each extension.
#. An extension calls the new agent API object method to receive, for instance, bridge wrappers with cookies allocated.
#. An extension calls the new agent API object method to receive, for instance,
bridge wrappers with cookies allocated.
::

View File

@ -76,4 +76,5 @@ Current API resources extended by standard attr extensions:
- ports: neutron.db.models_v2.Port
- security_groups: neutron.db.models.securitygroup.SecurityGroup
- floatingips: neutron.db.l3_db.FloatingIP
- network_segment_ranges: neutron.db.models.network_segment_range.NetworkSegmentRange
- network_segment_ranges:
neutron.db.models.network_segment_range.NetworkSegmentRange

View File

@ -40,9 +40,10 @@ The explanation is quite simple:
* `server_default <http://docs.sqlalchemy.org/en/rel_0_9/core/metadata.html#sqlalchemy.schema.Column.params.server_default>`_ - the default value for a column that SQLAlchemy will specify in DDL.
Summarizing, 'default' is useless in migrations and only 'server_default'
should be used. For synchronizing migrations with models server_default parameter
should also be added in model. If default value in database is not needed,
'server_default' should not be used. The declarative approach can be bypassed
should be used. To keep migrations synchronized with models, the
server_default parameter should also be added to the model. If a default
value in the database is not needed, 'server_default' should not be used.
The declarative approach can be bypassed
(i.e. 'default' may be omitted in the model) if default is enforced through
business logic.
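As a sketch of the recommended pattern (the table and column names are made
up), the migration carries ``server_default`` and the model mirrors it:

.. code-block:: python

    import sqlalchemy as sa
    from alembic import op


    def upgrade():
        # Migration: the default lives in the DDL via server_default.
        op.add_column(
            'example_table',
            sa.Column('enabled', sa.Boolean(), nullable=False,
                      server_default=sa.sql.true()))


    # Matching model column, kept in sync with the migration.
    enabled = sa.Column(sa.Boolean(), nullable=False,
                        server_default=sa.sql.true())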
@ -97,7 +98,8 @@ A model that supports tag mechanism must implement the property
The introduction of a new standard attribute only requires one column addition
to the 'standardattribute' table for one-to-one relationships or a new table
for one-to-many or one-to-zero relationships. Then all of the models using the
'HasStandardAttribute' mixin will automatically gain access to the new attribute.
'HasStandardAttribute' mixin will automatically gain access to the new
attribute.
Any attributes that will apply to every neutron resource (e.g. timestamps)
can be added directly to the 'standardattribute' table. For things that will

View File

@ -49,8 +49,8 @@ Pre-configured domains for projects and users
ML2 plugin extension ``dns_domain_keywords`` provides same dns integration as
``dns_domain_ports`` and ``subnet_dns_publish_fixed_ip`` and it also allows to
configure network's dns_domain with some specific keywords: ``<project_id>``,
``<project_name>``, ``<user_id>``, ``<user_name>``. Please see example below for
more details.
``<project_name>``, ``<user_id>``, ``<user_name>``. Please see example below
for more details.
* Create DNS zone. ``0511951bd56e4a0aac27ac65e00bddd0`` is ID of the project
used in the example

View File

@ -141,7 +141,8 @@ Neutron Routers are realized in OpenVSwitch
.. image:: images/under-the-hood-scenario-1-ovs-network.png
"router1" in the Neutron logical network is realized through a port ("qr-0ba8700e-da") in OpenVSwitch - attached to "br-int"::
"router1" in the Neutron logical network is realized through a port
("qr-0ba8700e-da") in OpenVSwitch - attached to "br-int"::
vagrant@bionic64:~/devstack$ sudo ovs-vsctl show
b9b27fc3-5057-47e7-ba64-0b6afe70a398
@ -202,13 +203,16 @@ Neutron Routers are realized in OpenVSwitch
Finding the router in ip/ipconfig
---------------------------------
The neutron-l3-agent uses the Linux IP stack and iptables to perform L3 forwarding and NAT.
In order to support multiple routers with potentially overlapping IP addresses, neutron-l3-agent
defaults to using Linux network namespaces to provide isolated forwarding contexts. As a result,
the IP addresses of routers will not be visible simply by running "ip addr list" or "ifconfig" on
The neutron-l3-agent uses the Linux IP stack and iptables to perform L3
forwarding and NAT. In order to support multiple routers with potentially
overlapping IP addresses, neutron-l3-agent defaults to using Linux network
namespaces to provide isolated forwarding contexts. As a result,
the IP addresses of routers will not be visible simply by running
"ip addr list" or "ifconfig" on
the node. Similarly, you will not be able to directly ping fixed IPs.
To do either of these things, you must run the command within a particular router's network
To do either of these things, you must run the command within a particular
router's network
namespace. The namespace will have the name "qrouter-<UUID of the router>.
.. image:: images/under-the-hood-scenario-1-ovs-netns.png

View File

@ -374,8 +374,8 @@ types that can be used to implement them.
# implemented in some object-specific way.
synthetic_fields = ['dhcp_agents', 'shared', 'subnets']
:code:`ObjectField` and :code:`ListOfObjectsField` take the name of object class
as an argument.
:code:`ObjectField` and :code:`ListOfObjectsField` take the name of object
class as an argument.
Implementing custom synthetic fields

View File

@ -44,8 +44,8 @@ VLAN Tags
GRE Tunnels
-----------
GRE Tunneling is documented in depth in the `Networking in too much
detail <http://openstack.redhat.com/networking/networking-in-too-much-detail/>`_
GRE Tunneling is documented in depth in the `Networking in too much detail
<http://openstack.redhat.com/networking/networking-in-too-much-detail/>`_
by RedHat.
VXLAN Tunnels
@ -327,28 +327,28 @@ br-int or into the firewall bridge if using iptables firewall. In the
external-ids of the port Nova will store the port ID of the parent port.
The OVS agent detects that a new vif has been plugged. It gets
the details of the new port and wires it.
The agent configures it in the same way as a traditional port: packets coming out
from the VM will be tagged using the internal VLAN ID associated to the network,
packets going to the VM will be stripped of the VLAN ID.
After wiring it successfully the OVS agent will send a message notifying Neutron
server that the parent port is up. Neutron will send back to Nova an event to
signal that the wiring was successful.
If the parent port is associated with one or more subports the agent will process
them as described in the next paragraph.
The agent configures it in the same way as a traditional port: packets coming
out from the VM will be tagged using the internal VLAN ID associated to the
network, packets going to the VM will be stripped of the VLAN ID.
After wiring it successfully the OVS agent will send a message notifying
Neutron server that the parent port is up. Neutron will send back to Nova an
event to signal that the wiring was successful.
If the parent port is associated with one or more subports the agent will
process them as described in the next paragraph.
Subport creation
++++++++++++++++
If a subport is added to a parent port but no VM was booted using that parent port
yet, no L2 agent will process it (because at that point the parent port is
If a subport is added to a parent port but no VM was booted using that parent
port yet, no L2 agent will process it (because at that point the parent port is
not bound to any host).
When a subport is created for a parent port and a VM that uses that parent port is
already running, the OVS agent will create a VLAN interface on the VM tap
using the VLAN ID specified in the subport segmentation id. There's a small possibility
that a race might occur: the firewall bridge might be created and plugged while the vif
is not there yet. The OVS agent needs to check if the vif exists before trying to create
a subinterface.
Let's see how the models differ when using the iptables firewall or the ovs native
firewall.
When a subport is created for a parent port and a VM that uses that parent port
is already running, the OVS agent will create a VLAN interface on the VM tap
using the VLAN ID specified in the subport segmentation id. There's a small
possibility that a race might occur: the firewall bridge might be created and
plugged while the vif is not there yet. The OVS agent needs to check if the
vif exists before trying to create a subinterface.
Let's see how the models differ when using the iptables firewall or the OVS
native firewall.
Iptables Firewall
'''''''''''''''''
@ -393,10 +393,11 @@ and the packet will finally get to eth0.100.
*Outbound traffic from the VM point of view*
The untagged traffic will flow from eth0 to port1 going through qbr1 where
firewall rules will be applied. Traffic tagged with VLAN 100 will leave eth0.100,
go through tap1.100 where the VLAN 100 is stripped. It will reach qbr2 where
iptables rules will be applied and go to port 2. The internal VLAN of network2
will be pushed by br-int when the packet enters port2 because it's a tagged port.
firewall rules will be applied. Traffic tagged with VLAN 100 will leave
eth0.100, go through tap1.100 where the VLAN 100 is stripped. It will reach
qbr2 where iptables rules will be applied and go to port 2. The internal VLAN
of network2 will be pushed by br-int when the packet enters port2 because it's
a tagged port.
OVS Firewall case
@ -422,56 +423,59 @@ OVS Firewall case
| br-int |
+----------------------------+
When a subport is created the OVS agent will create the VLAN interface tap1.100 and
plug it into br-int. Let's assume the subport is on network2.
When a subport is created the OVS agent will create the VLAN interface tap1.100
and plug it into br-int. Let's assume the subport is on network2.
*Inbound traffic from the VM point of view*
The traffic will flow untagged from port 1 to eth0. The traffic going out from port 2
will be stripped of the VLAN ID assigned to network2. It will be filtered by the rules
installed by the firewall and reach tap1.100.
tap1.100 will tag the traffic using VLAN 100. It will then reach the VM's eth0.100.
The traffic will flow untagged from port 1 to eth0. The traffic going out from
port 2 will be stripped of the VLAN ID assigned to network2. It will be
filtered by the rules installed by the firewall and reach tap1.100.
tap1.100 will tag the traffic using VLAN 100. It will then reach the VM's
eth0.100.
*Outbound traffic from the VM point of view*
The untagged traffic will flow and reach port 1 where it will be tagged using the
VLAN ID associated to the network. Traffic tagged with VLAN 100 will leave eth0.100
reach tap1.100 where VLAN 100 will be stripped. It will then reach port2.
It will be filtered by the rules installed by the firewall on port 2. Then the packets
will be tagged using the internal VLAN associated to network2 by br-int since port 2 is a
tagged port.
The untagged traffic will flow and reach port 1 where it will be tagged using
the VLAN ID associated to the network. Traffic tagged with VLAN 100 will leave
eth0.100 and reach tap1.100 where VLAN 100 will be stripped. It will then reach
port2. It will be filtered by the rules installed by the firewall on port 2.
Then the packets will be tagged using the internal VLAN associated to network2
by br-int since port 2 is a tagged port.
Parent port deletion
++++++++++++++++++++
Deleting a port that is an active parent in a trunk is forbidden. If the parent port has
no trunk associated (it's a "normal" port), it can be deleted.
The OVS agent doesn't need to perform any action, the deletion will result in a removal
of the port data from the DB.
Deleting a port that is an active parent in a trunk is forbidden. If the parent
port has no trunk associated (it's a "normal" port), it can be deleted.
The OVS agent doesn't need to perform any action, the deletion will result in
a removal of the port data from the DB.
Trunk deletion
++++++++++++++
When Nova deletes a VM, it deletes the VM's corresponding Neutron ports only if they were
created by Nova when booting the VM. In the vlan-aware-vm case the parent port is passed to Nova, so
the port data will remain in the DB after the VM deletion. Nova will delete
the VIF of the VM (in the example tap1) as part of the VM termination. The OVS agent
will detect that deletion and notify the Neutron server that the parent port is down.
The OVS agent will clean up the corresponding subports as explained in the next paragraph.
When Nova deletes a VM, it deletes the VM's corresponding Neutron ports only if
they were created by Nova when booting the VM. In the vlan-aware-vm case the
parent port is passed to Nova, so the port data will remain in the DB after the
VM deletion. Nova will delete the VIF of the VM (in the example tap1) as part
of the VM termination. The OVS agent will detect that deletion and notify the
Neutron server that the parent port is down. The OVS agent will clean up the
corresponding subports as explained in the next paragraph.
The deletion of a trunk that is used by a VM is not allowed.
The trunk can be deleted (leaving the parent port intact) when the parent port is not
used by any VM. After the trunk is deleted, the parent port can also be deleted.
The trunk can be deleted (leaving the parent port intact) when the parent port
is not used by any VM. After the trunk is deleted, the parent port can also be
deleted.
Subport deletion
++++++++++++++++
Removing a subport that is associated with a parent port that was not used to boot any
VM is a no op from the OVS agent perspective.
When a subport associated with a parent port that was used to boot a VM is deleted,
the OVS agent will take care of removing the firewall bridge if using iptables firewall
and the port on br-int.
Removing a subport that is associated with a parent port that was not used to
boot any VM is a no-op from the OVS agent's perspective.
When a subport associated with a parent port that was used to boot a VM is
deleted, the OVS agent will take care of removing the firewall bridge if using
the iptables firewall, and the port on br-int.
Implementation Trunk Bridge (Option C)
@ -539,11 +543,11 @@ will process them as described in the next paragraph.
Subport creation
++++++++++++++++
If a subport is added to a parent port but no VM was booted using that parent port
yet, the agent won't process the subport (because at this point there's no node
associated with the parent port).
When a subport is added to a parent port that is used by a VM the OVS agent will
create a new patch port:
If a subport is added to a parent port but no VM was booted using that parent
port yet, the agent won't process the subport (because at this point there's
no node associated with the parent port).
When a subport is added to a parent port that is used by a VM the OVS agent
will create a new patch port:
::
@ -555,7 +559,8 @@ spt-subport-id, the trunk bridge side of the patch is tagged using VLAN 100.
We assume that the segmentation ID of the subport is 100.
spi-subport-id, the br-int side of the patch port is tagged with VLAN 5. We
assume that the subport is on network2 that on this host uses VLAN 5.
The OVS agent will set the subport ID in the external-ids of spt-subport-id and spi-subport-id.
The OVS agent will set the subport ID in the external-ids of spt-subport-id
and spi-subport-id.
*Inbound traffic from the VM point of view*
@ -577,34 +582,37 @@ will reach spi-subport-id, where it's tagged using VLAN 5.
Parent port deletion
++++++++++++++++++++
Deleting a port that is an active parent in a trunk is forbidden. If the parent port has
no trunk associated, it can be deleted. The OVS agent doesn't need to perform any action.
Deleting a port that is an active parent in a trunk is forbidden. If the parent
port has no trunk associated, it can be deleted. The OVS agent doesn't need to
perform any action.
Trunk deletion
++++++++++++++
When Nova deletes a VM, it deletes the VM's corresponding Neutron ports only if they were
created by Nova when booting the VM. In the vlan-aware-vm case the parent port is passed to Nova, so
the port data will remain in the DB after the VM deletion. Nova will delete
the port on the trunk bridge where the VM is plugged. The L2 agent
will detect that and delete the trunk bridge. It will notify the Neutron server that the parent
port is down.
When Nova deletes a VM, it deletes the VM's corresponding Neutron ports only if
they were created by Nova when booting the VM. In the vlan-aware-vm case the
parent port is passed to Nova, so the port data will remain in the DB after the
VM deletion. Nova will delete the port on the trunk bridge where the VM is
plugged. The L2 agent will detect that and delete the trunk bridge. It will
notify the Neutron server that the parent port is down.
The deletion of a trunk that is used by a VM is not allowed.
The trunk can be deleted (leaving the parent port intact) when the parent port is not
used by any VM. After the trunk is deleted, the parent port can also be deleted.
The trunk can be deleted (leaving the parent port intact) when the parent port
is not used by any VM. After the trunk is deleted, the parent port can also be
deleted.
Subport deletion
++++++++++++++++
The OVS agent will delete the patch port pair corresponding to the subport deleted.
The OVS agent will delete the patch port pair corresponding to the subport
deleted.
Agent resync
~~~~~~~~~~~~
During resync the agent should check that all the trunk and subports are
still valid. It will delete the stale trunk and subports using the procedure specified
in the previous paragraphs according to the implementation.
still valid. It will delete the stale trunk and subports using the procedure
specified in the previous paragraphs according to the implementation.
Local IP

View File

@ -239,9 +239,9 @@ SLAAC, NDP) for egress traffic, and allows ARP replies. Also identifies not
tracked connections which are processed later with information obtained from
conntrack. Notice the ``zone=NXM_NX_REG6[0..15]`` in ``actions`` when obtaining
information from conntrack. It says every port has its own conntrack zone
defined by the value in ``register 6`` (OVSDB port tag identifying the network).
It's there to avoid accepting established traffic that belongs to a different
port with the same conntrack parameters.
defined by the value in ``register 6`` (OVSDB port tag identifying the
network). It's there to avoid accepting established traffic that belongs to a
different port with the same conntrack parameters.
The very first rule in |table_71| is a rule removing conntrack information for
a use-case where a Neutron logical port is placed directly to the hypervisor.

View File

@ -109,7 +109,8 @@ Similar approach has been implemeneted for DHCP rescheduling `[4]`_.
The primary chassis gateway could be moved only to other, previously scheduled
gateway. Rebalancing of chassis occurs only if number of scheduled primary
chassis ports per each provider network hosted by given chassis is higher than
average number of hosted primary gateway ports per chassis per provider network.
average number of hosted primary gateway ports per chassis per provider
network.
This dependency is determined by formula:

View File

@ -10,7 +10,7 @@ manage affected security group rules. Thus, there is no need for an agent.
It is good to keep in mind that Openstack Security Groups (SG) and their rules
(SGR) map 1:1 into OVN's Port Groups (PG) and Access Control Lists (ACL):
.. code-block:: none
.. code-block::
Openstack Security Group <=> OVN Port Group
Openstack Security Group Rule <=> OVN ACL
@ -33,16 +33,16 @@ Meter Table
-----------
Meters are how network logging events get throttled, so they do not negatively
affect the control plane. Logged events are sent to the ovn-controller that runs
locally on each compute node. Thus, the throttle keeps ovn-controller from getting
overwhelmed. Note that the meters used for network logging do
affect the control plane. Logged events are sent to the ovn-controller that
runs locally on each compute node. Thus, the throttle keeps ovn-controller
from getting overwhelmed. Note that the meters used for network logging do
not rate-limit the datapath; they only affect the logs themselves.
With the addition of 'fair meters', multiple ACLs can refer to the same
meter without competing with each other for what logs get rate limited.
This attribute is a pre-requisite for this feature, as the design aspires
to keep the complexity associated with the management of meters outside Openstack.
The benefit of ACLs sharing a 'fair' meter is that a noisy neighbor (ACL)
will not consume all the available capacity set for the meter.
to keep the complexity associated with the management of meters outside
Openstack. The benefit of ACLs sharing a 'fair' meter is that a noisy
neighbor (ACL) will not consume all the available capacity set for the meter.
For more info on fair meters, see:
https://github.com/ovn-org/ovn/commit/880dca99eaf73db7e783999c29386d03c82093bf
@ -78,7 +78,7 @@ Moreover, there are a few attributes in each ACL that makes it able to
provide the networking logging feature. Let's use the example below
to point out the relevant fields:
.. code-block:: none
.. code-block:: bash
$ openstack network log create --resource-type security_group \
--resource ${SG} --event ACCEPT logme -f value -c ID
@ -112,23 +112,26 @@ to point out the relevant fields:
priority : 1002
severity : info
The first command creates a networking-log for a given SG. The second shows an SGR from that SG.
The third shell command is where we can see how the ACL with the meter information gets populated.
The first command creates a networking-log for a given SG. The second shows an
SGR from that SG. The third shell command is where we can see how the ACL with
the meter information gets populated.
These are the attributes pertinent to network logging:
* log: a boolean that dictates whether a log will be generated. Even if the NLE applies to the SGR
via its associated SG, this may be 'false' if the action is not a match. That would be the case
if the NLE specified "--event DROP", in this example.
* log: a boolean that dictates whether a log will be generated. Even if the
NLE applies to the SGR via its associated SG, this may be 'false' if the
action is not a match. That would be the case if the NLE specified
"--event DROP", in this example.
* meter: this is the name of the fair meter. It is the same for all ACLs.
* name: This is a string composed of the prefix "neutron-" and the id of the NLE. It will be part of
the generated logs.
* severity: this is the log severity that will be used by the ovn-controller. It is currently hard
coded in Neutron, but can be made configurable in future releases.
* name: This is a string composed of the prefix "neutron-" and the id of the
NLE. It will be part of the generated logs.
* severity: this is the log severity that will be used by the ovn-controller.
It is currently hard coded in Neutron, but can be made configurable in
future releases.
If we poked the SGR with packets that match its criteria, the ovn-controller local to where the ACLs
is enforced will log something that looks like this:
If we poked the SGR with packets that match its criteria, the ovn-controller
local to where the ACL is enforced will log something that looks like this:
.. code-block:: none
.. code-block:: bash
2021-02-16T11:59:00.640Z|00045|acl_log(ovn_pinctrl0)|INFO|
name="neutron-2e456c7f-154e-40a8-bb10-f88ba51b90b5",
@ -137,8 +140,8 @@ is enforced will log something that looks like this:
nw_src=10.0.0.12,nw_dst=10.0.0.11,nw_tos=0,nw_ecn=0,nw_ttl=64,icmp_type=8,
icmp_code=0
It is beyond the scope of this document to talk about what happens after the logs are generated
by ovn-controllers. The harvesting of files across compute nodes is something a project like
`Monasca`_ may be a good fit.
It is beyond the scope of this document to talk about what happens after the
logs are generated by ovn-controllers. The harvesting of log files across
compute nodes is something for which a project like `Monasca`_ may be a good
fit.
.. _`Monasca`: https://wiki.openstack.org/wiki/Monasca

View File

@ -14,7 +14,7 @@ load_balancer table for all mappings for a given FIP+protocol. All PFs
for the same FIP+protocol are kept as Virtual IP (VIP) mappings inside a
LB entry. See the diagram below for an example of how that looks like:
.. code-block:: none
.. code-block::
VIP:PORT = MEMBER1:MPORT1, MEMBER2:MPORT2

View File

@ -130,10 +130,10 @@ before returning it to the API client.
The neutron.policy API
----------------------
The ``neutron.policy`` module exposes a simple API whose main goal if to allow the
REST API controllers to implement the authorization workflow discussed in this
document. It is a bad practice to call the policy engine from within the plugin
layer, as this would make request authorization dependent on configured
The ``neutron.policy`` module exposes a simple API whose main goal is to allow
the REST API controllers to implement the authorization workflow discussed in
this document. It is a bad practice to call the policy engine from within the
plugin layer, as this would make request authorization dependent on configured
plugins, and therefore make API behaviour dependent on the plugin itself, which
defies Neutron's tenet of being backend agnostic.
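In practice this means the REST layer calls the policy engine itself, roughly
along these lines (a sketch; the action name, target fields and variables are
illustrative):

.. code-block:: python

    from neutron import policy

    # Inside an API controller, before the request reaches the plugin:
    policy.enforce(context, 'update_network',
                   target={'id': network_id,
                           'project_id': network_project_id})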
@ -360,7 +360,7 @@ projects. Each neutron related project should register the following two entry
points ``oslo.policy.policies`` and ``neutron.policies`` in ``setup.cfg`` like
below:
.. code-block:: none
.. code-block:: ini
oslo.policy.policies =
neutron = neutron.conf.policies:list_rules
@ -381,7 +381,7 @@ projects, so the second entry point is required.
The recommended entry point name is a repository name: For example,
'neutron-fwaas' for FWaaS and 'networking-sfc' for SFC:
.. code-block:: none
.. code-block:: ini
oslo.policy.policies =
neutron-fwaas = neutron_fwaas.policies:list_rules

View File

@ -82,9 +82,10 @@ QoS plugin implementation guide
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The neutron.extensions.qos.QoSPluginBase class uses method proxies for methods
relating to QoS policy rules. Each of these such methods is generic in the sense
that it is intended to handle any rule type. For example, QoSPluginBase has a
create_policy_rule method instead of both create_policy_dscp_marking_rule and
relating to QoS policy rules. Each of these methods is generic in the sense
that it is intended to handle any rule type. For example, QoSPluginBase has a
create_policy_rule method instead of both create_policy_dscp_marking_rule and
create_policy_bandwidth_limit_rule methods. The logic behind the proxies allows
a call to a plugin's create_policy_dscp_marking_rule to be handled by the
create_policy_rule method, which will receive a QosDscpMarkingRule object as an
@ -168,11 +169,13 @@ For QoS, the following neutron objects are implemented:
* QosPolicyDefault: defines a default QoS policy per project.
* QosBandwidthLimitRule: defines the instance bandwidth limit rule type,
characterized by a max kbps and a max burst kbits. This rule also has a
direction parameter to set the traffic direction, from the instance's point of view.
* QosDscpMarkingRule: defines the DSCP rule type, characterized by an even integer
between 0 and 56. These integers are the result of the bits in the DiffServ section
of the IP header, and only certain configurations are valid. As a result, the list
of valid DSCP rule types is: 0, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30, 32,
direction parameter to set the traffic direction, from the instance's point
of view.
* QosDscpMarkingRule: defines the DSCP rule type, characterized by an even
integer between 0 and 56. These integers are the result of the bits in the
DiffServ section of the IP header, and only certain configurations are valid.
As a result, the list of valid DSCP rule types is:
0, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30, 32,
34, 36, 38, 40, 46, 48, and 56.
* QosMinimumBandwidthRule: defines the minimum assured bandwidth rule type,
characterized by a min_kbps parameter. This rule also has a direction
@ -223,8 +226,8 @@ instantiated (and to suggest just that, the base rule class is marked as ABC).
QoS objects rely on some primitive database API functions that are added in:
* neutron_lib.db.api: those can be reused to fetch other models that do not have
corresponding versioned objects yet, if needed.
* neutron_lib.db.api: those can be reused to fetch other models that do not
have corresponding versioned objects yet, if needed.
* neutron.db.qos.api: contains database functions that are specific to QoS
models.
@ -342,9 +345,9 @@ The DSCP markings are in fact configured on the port by means of
openflow rules.
.. note::
As of Ussuri release, the QoS rules can be applied for direct ports with hardware
offload capability (switchdev), this requires Open vSwitch version 2.11.0 or newer
and Linux kernel based on kernel 5.4.0 or newer.
As of the Ussuri release, the QoS rules can be applied to direct ports with
hardware offload capability (switchdev); this requires Open vSwitch
version 2.11.0 or newer and a Linux kernel based on kernel 5.4.0 or newer.
SR-IOV
++++++
@ -380,10 +383,10 @@ For egress bandwidth limit rule:
The egress bandwidth limit is configured on the tap port by setting traffic
policing on tc ingress queueing discipline (qdisc). Details about ingress
qdisc can be found on `lartc how-to <http://lartc.org/howto/lartc.adv-qdisc.ingress.html>`__.
The reason why ingress qdisc is used to configure egress bandwidth limit is that
tc is working on traffic which is visible from "inside bridge" perspective. So
traffic incoming to bridge via tap interface is in fact outgoing from Neutron's
port.
The reason why the ingress qdisc is used to configure the egress bandwidth
limit is that tc works on traffic as seen from the bridge's perspective, so
traffic coming into the bridge via the tap interface is in fact outgoing
from Neutron's port.
This implementation is the same as what Open vSwitch is doing when
ingress_policing_rate and ingress_policing_burst are set for port.
@ -394,12 +397,12 @@ For ingress bandwidth limit rule:
* delete_tbf_bw_limit
The ingress bandwidth limit is configured on the tap port by setting a simple
`tc-tbf <http://linux.die.net/man/8/tc-tbf>`_ queueing discipline (qdisc) on the
port. It requires a value of HZ parameter configured in kernel on the host.
`tc-tbf <http://linux.die.net/man/8/tc-tbf>`_ queueing discipline (qdisc) on
the port. It requires the value of the HZ parameter configured in the kernel
on the host.
This value is necessary to calculate the minimal burst value which is set in
tc. Details about how it is calculated can be found
`here <http://unix.stackexchange.com/a/100797>`_. This solution is similar to Open
vSwitch implementation.
`here <http://unix.stackexchange.com/a/100797>`_.
This solution is similar to the Open vSwitch implementation.
The Linux bridge DSCP marking implementation relies on the
linuxbridge_extension_api to request access to the IptablesManager class
@ -408,17 +411,18 @@ and to manage chains in the ``mangle`` table in iptables.
QoS driver design
-----------------
QoS framework is flexible enough to support any third-party vendor. To integrate a
third party driver (that just wants to be aware of the QoS create/update/delete API
calls), one needs to implement 'neutron.services.qos.drivers.base', and register
QoS framework is flexible enough to support any third-party vendor. To
integrate a third party driver (that just wants to be aware of the QoS
create/update/delete API calls), one needs to implement
'neutron.services.qos.drivers.base' and register the driver during the core
plugin or mechanism driver load; see the
neutron.services.qos.drivers.openvswitch.driver register method for an
example.
.. note::
All the functionality MUST be implemented by the vendor, neutron's QoS framework
will just act as an interface to bypass the received QoS API request and help with
database persistence for the API operations.
All the functionality MUST be implemented by the vendor; neutron's QoS
framework only acts as an interface that passes the received QoS API
requests through and helps with database persistence for the API operations.
.. note::
L3 agent ``fip_qos`` extension does not have a driver implementation,
@ -441,7 +445,8 @@ On agent side (OVS):
On L3 agent side:
* For for floating IPs QoS support, add 'fip_qos' to extensions in [agent] section.
* For floating IP QoS support, add 'fip_qos' to extensions in the [agent]
section.
Testing strategy
@ -498,6 +503,7 @@ New functional tests for L3 agent floating IP rate limit:
API tests
~~~~~~~~~
API tests for basic CRUD operations for ports, networks, policies, and rules were added in:
API tests for basic CRUD operations for ports, networks, policies, and rules
were added in:
* neutron-tempest-plugin.api.test_qos

View File

@ -43,8 +43,8 @@ limits are currently not enforced on RPC interfaces listening on the AMQP
bus.
Plugin and ML2 drivers are not supposed to enforce quotas for resources they
manage. However, the ``subnet_allocation`` [1]_ extension is an exception and will
be discussed below.
manage. However, the ``subnet_allocation`` [1]_ extension is an exception and
will be discussed below.
The quota management and enforcement mechanisms discussed here apply to every
resource which has been registered with the Quota engine, regardless of
@ -69,12 +69,12 @@ configuration option ``quota_driver``.
The Quota API extension handles quota management, whereas the Quota Engine
component handles quota enforcement. This API extension is loaded like any
other extension. For this reason plugins must explicitly support it by including
"quotas" in the supported_extension_aliases attribute.
other extension. For this reason plugins must explicitly support it by
including "quotas" in the supported_extension_aliases attribute.
In the Quota API simple CRUD operations are used for managing project quotas.
Please note that the current behaviour when deleting a project quota is to reset
quota limits for that project to configuration defaults. The API
Please note that the current behaviour when deleting a project quota is to
reset quota limits for that project to configuration defaults. The API
extension does not validate the project identifier with the identity service.
In addition, the Quota Detail API extension complements the Quota API extension
@ -107,7 +107,8 @@ delete operations are implemented by the usual index, show, update and
delete methods. These methods simply call into the quota driver for either
fetching project quotas or updating them.
The ``_update_attributes`` method is called only once in the controller lifetime.
The ``_update_attributes`` method is called only once in the controller
lifetime.
This method dynamically updates Neutron's resource attribute map [4]_ so that
an attribute is added for every resource managed by the quota engine.
Request authorisation is performed in this controller, and only 'admin' users
@ -119,11 +120,11 @@ The driver operations dealing with quota management are:
* ``delete_tenant_quota``, which simply removes all entries from the 'quotas'
table for a given project identifier;
* ``update_quota_limit``, which adds or updates an entry in the 'quotas' project
for a given project identifier and a given resource name;
* ``_get_quotas``, which fetches limits for a set of resource and a given project
identifier
* ``_get_all_quotas``, which behaves like ``_get_quotas``, but for all projects.
* ``update_quota_limit``, which adds or updates an entry in the 'quotas'
table for a given project identifier and a given resource name;
* ``_get_quotas``, which fetches limits for a set of resources and a given
project identifier;
* ``_get_all_quotas``, which behaves like ``_get_quotas``, but for all
projects.
Resource Usage Info
@ -145,15 +146,16 @@ Neutron has two ways of tracking resource usage info:
``TrackedResource`` depends on one single database model (table) and the
resource count is done directly on this table only.
Another difference between ``CountableResource`` and ``TrackedResource`` is that the
former invokes a plugin method to count resources. ``CountableResource`` should be
Another difference between ``CountableResource`` and ``TrackedResource`` is
that the former invokes a plugin method to count resources.
``CountableResource`` should therefore be employed for plugins which do not
leverage the Neutron database.
The actual class that the Neutron quota engine will use is determined by the
``track_quota_usage`` variable in the quota configuration section. If ``True``,
``TrackedResource`` instances will be created, otherwise the quota engine will
use ``CountableResource`` instances.
Resource creation is performed by the ``create_resource_instance`` factory method
in the ``neutron.quota.resource`` module.
Resource creation is performed by the ``create_resource_instance`` factory
method in the ``neutron.quota.resource`` module.
DbQuotaDriver description
-------------------------
@ -164,9 +166,9 @@ executing queries to explicitly count objects will increase with the number of
records in the table. On the other hand, using ``TrackedResource`` will fetch a
single record, but has the drawback of having to execute an UPDATE statement
once the operation is completed.
Nevertheless, ``CountableResource`` instances do not simply perform a SELECT query
on the relevant table for a resource, but invoke a plugin method, which might
execute several statements and sometimes even interacts with the backend
Nevertheless, ``CountableResource`` instances do not simply perform a SELECT
query on the relevant table for a resource, but invoke a plugin method, which
might execute several statements and sometimes even interacts with the backend
before returning.
Resource usage tracking also becomes important for operational correctness
when coupled with the concept of resource reservation, discussed in another
@ -227,10 +229,10 @@ the chances of overcommiting resources over the quota limits are low. Neutron
does not enforce quota in such way that a quota limit violation could never
occur [5]_.
Regardless of whether ``CountableResource`` or ``TrackedResource`` is used, the quota
engine always invokes its ``count()`` method to retrieve resource usage.
Therefore, from the perspective of the Quota engine there is absolutely no
difference between ``CountableResource`` and ``TrackedResource``.
Regardless of whether ``CountableResource`` or ``TrackedResource`` is used,
the quota engine always invokes its ``count()`` method to retrieve resource
usage. Therefore, from the perspective of the Quota engine there is absolutely
no difference between ``CountableResource`` and ``TrackedResource``.
Quota Enforcement in DbQuotaDriver
----------------------------------
@ -266,13 +268,13 @@ In order to ensure correct operations, a row-level lock is acquired in
the transaction which creates the reservation. The lock is acquired when
reading usage data. In case of write-set certification failures,
which can occur in active/active clusters such as MySQL galera, the decorator
``neutron_lib.db.api.retry_db_errors`` will retry the transaction if a
DBDeadlock exception is raised.
While non-locking approaches are possible, it has been found that, since
a non-locking algorithm increases the chances of collision, the cost of
handling a ``DBDeadlock`` is still lower than the cost of retrying the
operation when a collision is detected. A study in this direction was conducted
for IP allocation operations, but the same principles apply here as well [7]_.
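A hedged sketch of how such a reservation routine is typically wrapped is
shown below; the reservation body is a placeholder, while ``retry_db_errors``
and the enginefacade writer context are the ``neutron_lib`` helpers referenced
above:

.. code-block:: python

    # Sketch only: the reservation logic is elided, but the decorator is the
    # one referenced above and retries the whole function on DBDeadlock.
    from neutron_lib.db import api as db_api


    @db_api.retry_db_errors
    def make_reservation(context, project_id, deltas):
        with db_api.CONTEXT_WRITER.using(context):
            # Read usage rows (acquiring the row-level lock), verify there is
            # headroom for the requested deltas, then persist the reservation.
            # A write-set certification failure on Galera surfaces here as a
            # DBDeadlock and causes the decorator to retry the transaction.
            ...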
Nevertheless, moving away from DB-level locks is something that must happen
for quota enforcement in the future.
@ -366,9 +368,9 @@ Please be aware of the following limitations of the quota enforcement engine:
in resource usage. Since the event mechanism monitors the data model class,
it is paramount for a correct quota enforcement, that resources are always
created and deleted using object relational mappings. For instance, deleting
a resource with a ``query.delete`` call will not trigger the event.
SQLAlchemy events should be considered as a temporary measure adopted as
Neutron lacks persistent API objects.
* As ``CountableResource`` instances do not track usage data, when making a
reservation no write-intent lock is acquired. Therefore the quota engine
with ``CountableResource`` is not concurrency-safe.
@ -113,8 +113,8 @@ Example Change
As an example minor API change, let's assume we want to add a new parameter to
my_remote_method_2. First, we add the argument on the server side. To be
backwards compatible, the new argument must have a default value set so that
the interface will still work even if the argument is not supplied. Also, the
interface's minor version number must be incremented. So, the new server side
code would look like this:
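The source document continues with the concrete example at this point; the
following is only a rough sketch of such a change, assuming an oslo.messaging
endpoint class (the class name, the other argument names and ``new_arg``
itself are illustrative):

.. code-block:: python

    # Rough sketch of a backwards-compatible minor version bump; names other
    # than my_remote_method_2 are illustrative, not Neutron code.
    import oslo_messaging


    class ServerAPI(object):
        # 1.0 - initial version
        # 1.1 - my_remote_method_2 gained the optional 'new_arg' parameter
        target = oslo_messaging.Target(version='1.1')

        def my_remote_method_2(self, context, arg1, arg2, new_arg=None):
            # Older clients never send 'new_arg', so the default must keep
            # the previous behaviour intact.
            ...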
@ -101,7 +101,8 @@ Resource push notifications
~~~~~~~~~~~~~~~~~~~~~~~~~~~
Agents will subscribe to the neutron-vo-<resource_type>-<version> fanout queue
which carries updated objects for the version they know about. The versions
they know about depend on the runtime Neutron versioned objects they started
with.
When the server upgrades, it should be able to instantly calculate a census of
agent versions per object (we will define a mechanism for this in a later
@ -257,26 +258,27 @@ Unsubscribing from resources
To unsubscribe registered callbacks:
* unsubscribe(callback, resource_type): unsubscribe from a specific
resource type.
* unsubscribe_all(): unsubscribe from all resources.
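As a hedged illustration of the consumer-side flow (the registry import path
and the callback signature are assumptions made for this example; the
``unsubscribe``/``unsubscribe_all`` signatures are the ones listed above):

.. code-block:: python

    # Sketch: registering and unregistering a consumer callback.  The import
    # path and callback signature are assumptions made for illustration.
    from neutron.api.rpc.callbacks.consumer import registry


    def handle_qos_policy(context, resource_type, updated_objects, event_type):
        for policy in updated_objects:
            ...  # react to the pushed QosPolicy objects


    registry.register(handle_qos_policy, 'QosPolicy')

    # Later: stop receiving updates for that resource type only ...
    registry.unsubscribe(handle_qos_policy, 'QosPolicy')
    # ... or drop every registered callback at once.
    registry.unsubscribe_all()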
Sending resource events
-----------------------
On the server side, resource updates could come from anywhere, a service
plugin, an extension, anything that updates, creates, or destroys the
resources and that is of any interest to subscribed agents.
A callback is expected to receive a list of resources. When resources in the
list belong to the same resource type, a single push RPC message is sent;
if the list contains objects of different resource types, resources of each
type are grouped and sent separately, one push RPC message per type. On the
receiver side, resources in a list always belong to the same type. In other
words, a server-side push of a list of heterogeneous objects will result in
N messages on the bus and N client-side callback invocations, where N is the number
of unique resource types in the given list, e.g. L(A, A, B, C, C, C) would be
fragmented into L1(A, A), L2(B), L3(C, C, C), and each list pushed separately.
Note: there is no guarantee in terms of order in which separate resource lists
will be delivered to consumers.
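The fragmentation rule can be expressed in a few lines of helper code; this is
an illustrative sketch, not the actual server-side implementation:

.. code-block:: python

    # Illustration of the per-type fragmentation described above: one push
    # per unique resource type, keeping objects of the same type together.
    import collections


    def fragment_by_type(resources):
        grouped = collections.defaultdict(list)
        for resource in resources:
            grouped[type(resource).__name__].append(resource)
        return list(grouped.values())

    # L(A, A, B, C, C, C) -> [[A, A], [B], [C, C, C]]; each sub-list would
    # then be pushed in its own RPC message.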
@ -30,7 +30,8 @@ https://wiki.openstack.org/wiki/Neutron/SecurityGroups
API Extension
-------------
The API extension is the 'front' end portion of the code, which handles
defining a `REST-ful API`_, which is used by projects.
.. _`REST-ful API`: https://opendev.org/openstack/neutron/src/neutron/extensions/securitygroup.py
@ -39,35 +40,47 @@ The API extension is the 'front' end portion of the code, which handles defining
Database API
------------
The Security Group API extension adds a number of
`methods to the database layer`_ of Neutron
.. _`methods to the database layer`: https://opendev.org/openstack/neutron/src/neutron/db/securitygroups_db.py
Agent RPC
---------
This portion of the code handles processing requests from projects, after they
have been stored in the database. It involves messaging all the L2 agents
running on the compute nodes, and modifying the IPTables rules on each
hypervisor.
* `Plugin RPC classes <https://opendev.org/openstack/neutron/src/neutron/db/securitygroups_rpc_base.py>`_
* `SecurityGroupServerRpcMixin <https://opendev.org/openstack/neutron/src/neutron/db/securitygroups_rpc_base.py>`_ - defines the RPC API that the plugin uses to communicate with the agents running on the compute nodes
* SecurityGroupServerRpcMixin - Defines the API methods used to fetch data
from the database, in order to return responses to agents via the RPC API
* `Agent RPC classes <https://opendev.org/openstack/neutron/src/neutron/api/rpc/handlers/securitygroups_rpc.py>`_
* The SecurityGroupServerRpcApi defines the API methods that can be called
by agents, back to the plugin that runs on the Neutron controller
* The SecurityGroupAgentRpcCallbackMixin defines methods that a plugin uses
to call back to an agent after performing an action called by an agent.
IPTables Driver
---------------
* ``prepare_port_filter`` takes a ``port`` argument, which is a ``dictionary``
object that contains information about the port - including the
``security_group_rules``
* ``prepare_port_filter`` appends the port to an internal dictionary,
``filtered_ports`` which is used to track the internal state.
* Each security group has a
`chain <http://www.thegeekstuff.com/2011/01/iptables-fundamentals/>`_
in Iptables.
* The ``IptablesFirewallDriver`` has a method to convert security group rules
into iptables statements.
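A skeletal sketch of that flow is shown below; it is deliberately simplified
and is not the real ``IptablesFirewallDriver``, which delegates to the
iptables manager and covers many more cases:

.. code-block:: python

    # Deliberately simplified sketch of the flow described above.

    class MiniFirewallDriver(object):
        def __init__(self):
            # Ports currently being filtered, used to track internal state.
            self.filtered_ports = {}

        def prepare_port_filter(self, port):
            # 'port' is a dict that includes 'security_group_rules'.
            self.filtered_ports[port['device']] = port
            for rule in port.get('security_group_rules', []):
                # Each rule becomes one or more iptables statements in the
                # per-port chain; here we only print the would-be statement.
                print(self._rule_to_iptables(rule))

        @staticmethod
        def _rule_to_iptables(rule):
            flag = '-i' if rule.get('direction') == 'ingress' else '-o'
            proto = rule.get('protocol') or 'all'
            return '-A sg-chain %s tap-device -p %s -j RETURN' % (flag, proto)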
@ -32,8 +32,8 @@ services. Among those of special interest:
#. neutron-server that provides API endpoints and serves as a single point of
access to the database. It usually runs on nodes called Controllers.
#. Layer2 agent that can utilize Open vSwitch, Linuxbridge or other vendor
specific technology to provide network segmentation and isolation for
project networks. The L2 agent should run on every node where it is deemed
responsible for wiring and securing virtual interfaces (usually both Compute
and Network nodes).
#. Layer3 agent that runs on Network node and provides East-West and
@ -26,9 +26,11 @@ SR-IOV Networking L2 Agent
SR-IOV (Single Root I/O Virtualization) is a specification that allows
a PCIe device to appear to be multiple separate physical PCIe devices.
SR-IOV works by introducing the idea of physical functions (PFs) and virtual
functions (VFs).
Physical functions (PFs) are full-featured PCIe functions.
Virtual functions (VFs) are “lightweight” functions that lack configuration
resources.
SR-IOV supports VLANs for L2 network isolation, other networking technologies
such as VXLAN/GRE may be supported in the future.
@ -37,11 +39,13 @@ SR-IOV NIC agent manages configuration of SR-IOV Virtual Functions that connect
VM instances running on the compute node to the public network.
In most common deployments, there are compute and a network nodes.
Compute node can support VM connectivity via SR-IOV enabled NIC. SR-IOV NIC
Agent manages Virtual Functions admin state. Quality of service is partially
implemented with the bandwidth limit and minimum bandwidth rules. In the future
it will manage additional settings, such as additional
quality of service rules, rate limit settings, spoofcheck and more.
The network node will usually be deployed with either Open vSwitch or Linux
Bridge to support network node functionality.
Further Reading
@ -35,7 +35,8 @@ Upgrade strategy
There are two general upgrade scenarios supported by Neutron:
#. All services are shut down, code upgraded, then all services are started
again.
#. Services are upgraded gradually, based on operator service windows.
The latter is the preferred way to upgrade an OpenStack cloud, since it allows
@ -32,7 +32,8 @@ plugin/driver repositories do it.
Neutron modules differ in their API stability a lot, and there is no part of it
that is explicitly marked to be consumed by other projects.
That said, there are modules that other projects should definitely avoid
relying on.
Breakages
@ -61,39 +62,44 @@ The changes are listed in reverse chronological order (newer at the top).
* change: Consume sslutils and wsgi modules from oslo.service.
- commit: Ibfdf07e665fcfcd093a0e31274e1a6116706aec2
- solution: switch using oslo_service.wsgi.Router; stop using
neutron.wsgi.Router.
- severity: Low (some out-of-tree plugins might be affected).
* change: oslo.service adopted.
- commit: 6e693fc91dd79cfbf181e3b015a1816d985ad02c
- solution: switch using oslo_service.* namespace; stop using ANY
neutron.openstack.* contents.
- severity: low (plugins must not rely on that subtree).
* change: oslo.utils.fileutils adopted.
- commit: I933d02aa48260069149d16caed02b020296b943a
- solution: switch using oslo_utils.fileutils module; stop using
neutron.openstack.fileutils module.
- severity: low (plugins must not rely on that subtree).
* change: Reuse caller's session in DB methods.
- commit: 47dd65cf986d712e9c6ca5dcf4420dfc44900b66
- solution: Add context to args and reuse.
- severity: High (mostly undetected, as 3rd party CIs run Tempest tests only).
* change: switches to oslo.log, removes neutron.openstack.common.log.
- commit: 22328baf1f60719fcaa5b0fbd91c0a3158d09c31
- solution: a) switch to oslo.log; b) copy log module into your tree and
use it (may not work due to conflicts between the module
and oslo.log configuration options).
- severity: High (most CI systems are affected).
* change: Implements reorganize-unit-test-tree spec.
- commit: 1105782e3914f601b8f4be64939816b1afe8fb54
- solution: Code affected needs to update existing unit tests to reflect
new locations.
- severity: High (mostly undetected, as 3rd party CIs run Tempest tests only).
* change: drop linux/ovs_lib compat layer.
@ -74,10 +74,10 @@ be converted to/from the
`legacy networking-ovn <https://review.opendev.org/#/q/project:openstack/networking-ovn>`__ and
`Neutron <https://review.opendev.org/#/q/project:openstack/neutron>`__ repositories.
The mapping of how the files are renamed is based on ``migrate_names.txt``,
which is located in the same directory where ``migrate_names.py`` is installed.
That behavior can be modified via the ``--mapfile`` option. More information on
how the map is parsed is provided in the header section of that file.
.. code-block:: console
@ -6,7 +6,8 @@ Deploying an OVN Development Environment with vagrant
The vagrant directory contains a set of vagrant configurations which will
help you deploy Neutron with the OVN driver for testing or development
purposes.
We provide a sparse multinode architecture with clear separation between
services. In the future we will include all-in-one and multi-gateway
@ -99,13 +99,13 @@ The workflow for the life an RFE in Launchpad is as follows:
* Risky implementations that may require complex and/or pervasive
changes to API and the logical model;
Low priority is to be chosen for everything else. RFEs without an
associated blueprint are effectively equivalent to low priority items.
Bear in mind that, even though staffing should take priorities into
account (i.e. by giving more resources to high priority items over low
priority ones), the open source reality is that they can both proceed at
their own pace and low priority items can indeed complete faster than high
priority ones, even though they are given fewer resources.
* Drafter: who is going to submit and iterate on the spec proposal; he/she
may be the RFE submitter.
@ -155,22 +155,23 @@ The workflow for the life an RFE in Launchpad is as follows:
will have to be deferred.
* In either case (a spec being required or not), once the discussion has
happened and there is positive consensus on the RFE, the report is
'approved', and its tag will move from `rfe-triaged` to `rfe-approved`.
* An RFE can be occasionally marked as 'rfe-postponed' if the team identifies
a dependency between the proposed RFE and other pending tasks that prevent
the RFE from being worked on immediately.
* Once an RFE is approved, it needs volunteers. Approved RFEs that do not have
an assignee but sound relatively simple or limited in scope (e.g. the
addition of a new API with no ramification in the plugin backends), should be
promoted during team meetings or the ML so that volunteers can pick them up
and get started with neutron development. The team will regularly scan
`rfe-approved` or `rfe-postponed` RFEs to see what their latest status is and
mark them incomplete if no assignees can be found, or they are no longer
relevant.
* As for setting the milestone (both for RFE bugs or blueprints), the current
milestone is always chosen, assuming that work will start as soon as the
feature is approved. Work that fails to complete by the defined milestone
will roll over automatically until it gets completed or abandoned.
* If the code fails to merge, the bug report may be marked as incomplete,
unassigned and untargeted, and it will be garbage collected by
the Launchpad Janitor if no-one takes over in time. Renewed interest in the
@ -178,27 +179,32 @@ The workflow for the life an RFE in Launchpad is as follows:
In summary:
+------------+------------------------------------------------------------+
| State      | Meaning                                                    |
+============+============================================================+
| New        | This is where all RFE's start, as filed by the community   |
+------------+------------------------------------------------------------+
| Incomplete | Drivers/LTs - Move to this state to mean,                  |
|            | "more information needed before proceeding"                |
+------------+------------------------------------------------------------+
| Confirmed  | Drivers/LTs - Move to this state to mean,                  |
|            | "yes, I see that you filed it"                             |
+------------+------------------------------------------------------------+
| Triaged    | Drivers/LTs - Move to this state to mean,                  |
|            | "discussion is ongoing"                                    |
+------------+------------------------------------------------------------+
| Won't Fix  | Drivers/LTs - Move to this state to reject an RFE          |
+------------+------------------------------------------------------------+
Once the triaging (discussion is complete) and the RFE is approved, the tag
goes from 'rfe' to 'rfe-approved', and at this point the bug report goes
through the usual state transition. Note that the importance will be set to
'wishlist', to reflect the fact that the bug report is indeed not a bug, but
a new feature or enhancement. This will also help RFEs that are not followed
up by a blueprint stand out in the Launchpad `milestone dashboards <https://launchpad.net/neutron/+milestones>`_.
The drivers team will be discussing the following bug reports during their
IRC meeting:
* `New RFE's <https://bugs.launchpad.net/neutron/+bugs?field.status%3Alist=NEW&field.tag=rfe>`_
* `Incomplete RFE's <https://bugs.launchpad.net/neutron/+bugs?field.status%3Alist=INCOMPLETE&field.tag=rfe>`_
@ -209,17 +215,18 @@ The drivers team will be discussing the following bug reports during their IRC m
RFE Submission Guidelines
-------------------------
Before we dive into the guidelines for writing a good RFE, it is worth
mentioning that depending on your level of engagement with the Neutron project
and your role (user, developer, deployer, operator, etc.), you are more than
welcome to have a preliminary discussion of a potential RFE by reaching out to
other people involved in the project. This usually happens by posting mails on
the relevant mailing lists
(e.g. `openstack-discuss <http://lists.openstack.org>`_ - include [neutron] in
the subject) or on #openstack-neutron IRC channel on OFTC. If current ongoing
code reviews are related to your feature, posting comments/questions on gerrit
may also be a way to engage. Some amount of interaction with Neutron developers
will give you an idea of the plausibility and form of your RFE before you
submit it. That said, this is not mandatory.
When you submit a bug report on https://bugs.launchpad.net/neutron/+filebug,
there are two fields that must be filled: 'summary' and 'further information'.
@ -229,14 +236,14 @@ RFE at once, or that you are having a hard time defining what you are trying to
solve at all.
The 'further information' section must be a description of what you would like
to see implemented in Neutron. The description should provide enough details
for a knowledgeable developer to understand what is the existing problem in the
current platform that needs to be addressed, or what is the enhancement that
would make the platform more capable, both from a functional and a
non-functional standpoint. To this aim it is important to describe 'why' you
believe the RFE should be accepted, and why Neutron would be a poorer
platform without it. The description should be self-contained, and no
external references should be necessary to further explain the RFE.
In other words, when you write an RFE you should ask yourself the following
questions:
@ -3,16 +3,17 @@
Code Reviews
============
Code reviews are a critical component of all OpenStack projects. Neutron
accepts patches from many diverse people with diverse backgrounds, employers,
and experience levels. Code reviews provide a way to enforce a level of
consistency across the project, and also allow for the careful onboarding
of contributions from new contributors.
Neutron Code Review Practices
-----------------------------
Neutron follows the `code review guidelines <https://wiki.openstack.org/wiki/ReviewChecklist>`_ as
set forth for all OpenStack projects. It is expected that all reviewers are
following the guidelines set forth on that page.
In addition to that, the following rules apply:
@ -88,8 +89,8 @@ In addition to that, the following rules are to follow:
scenario tests be added where it is appropriate.
Scenario tests should cover not only the base level of new functionality, but
also standard ways in which the functionality can be used. For example, if
the feature adds a new kind of networking (like e.g. trunk ports) then tests
should make sure that instances can use IPs provided by that networking,
can be migrated, etc.
@ -99,33 +100,37 @@ In addition to that, the following rules are to follow:
* It is usually enough for any "mechanical" changes, like e.g. translation
imports or imports of updated CI templates, to have only one +2 Code-Review
vote to be approved. If there is any uncertainty about a specific patch, it
is better to wait for review from another core reviewer before approving the
patch.
.. _spec-review-practices:
Neutron Spec Review Practices
-----------------------------
In addition to code reviews, Neutron also maintains a BP specification git
repository. Detailed instructions for the use of this repository are provided
`here <https://wiki.openstack.org/wiki/Blueprints>`_.
It is expected that Neutron core team members are actively reviewing
specifications which are pushed out for review to the specification repository.
In addition, there is a neutron-drivers team, composed of a
handful of Neutron core reviewers, who can approve and merge Neutron specs.
Some guidelines around this process are provided below:
* Once a specification has been pushed, it is expected that it will not be
approved for at least 3 days after a first Neutron core reviewer has reviewed
it. This allows for additional cores to review the specification.
* For blueprints which the core team deems of High or Critical importance,
core reviewers may be assigned based on their subject matter expertise.
* Specification priority will be set by the PTL with review by the core team
once the specification is approved.
Tracking Review Statistics
--------------------------
Stackalytics provides some nice interfaces to track review statistics. The
links are provided below. These statistics are used to track not only Neutron
core reviewer statistics, but also to track review statistics
for potential future core members.
* `30 day review stats <https://www.stackalytics.io/report/contribution?module=neutron-group&project_type=openstack&days=30>`_
@ -6,14 +6,17 @@ For new contributors, the following are useful onboarding information.
Contributing to Neutron
-----------------------
Work within Neutron is discussed on the openstack-discuss mailing list, as
well as in the #openstack-neutron IRC channel. While these are great channels
for engaging Neutron, the bulk of discussion of patches and code happens in
gerrit itself.
With regards to gerrit, code reviews are a great way to learn about the
project. There is also a list of
`low or wishlist <https://bugs.launchpad.net/neutron/+bugs?field.searchtext=&orderby=-importance&field.status%3Alist=NEW&field.status%3Alist=CONFIRMED&field.status%3Alist=TRIAGED&field.status%3Alist=INPROGRESS&field.status%3Alist=FIXCOMMITTED&field.status%3Alist=INCOMPLETE_WITH_RESPONSE&field.status%3Alist=INCOMPLETE_WITHOUT_RESPONSE&field.importance%3Alist=LOW&field.importance%3Alist=WISHLIST&assignee_option=any&field.assignee=&field.bug_reporter=&field.bug_commenter=&field.subscriber=&field.structural_subscriber=&field.tag=&field.tags_combinator=ANY&field.has_cve.used=&field.omit_dupes.used=&field.omit_dupes=on&field.affects_me.used=&field.has_patch.used=&field.has_branches.used=&field.has_branches=on&field.has_no_branches.used=&field.has_no_branches=on&field.has_blueprints.used=&field.has_blueprints=on&field.has_no_blueprints.used=&field.has_no_blueprints=on&search=Search>`_
priority bugs which are ideal for a new contributor to take on. If you haven't
done so, you should set up a Neutron development environment so you can
actually run the code. Devstack is the usual convenient way to set up such
an environment. See `devstack.org <http://devstack.org/>`_ or `NeutronDevstack <https://wiki.openstack.org/wiki/NeutronDevstack#Basic_Setup>`_
for more information on using Neutron with devstack.
@ -2,8 +2,8 @@
Gate Failure Triage
===================
This page provides guidelines for spotting and assessing neutron gate failures.
Some hints for triaging failures are also provided.
Spotting Gate Failures
----------------------
@ -15,34 +15,44 @@ This can be achieved using several tools:
For checking gate failures with opensearch please see `documentation <https://docs.openstack.org/project-team-guide/testing.html#checking-status-of-other-job-results>`_.
The following query will return failures for a specific job:
> build_status:FAILURE AND message:Finished AND
build_name:"check-tempest-dsvm-neutron" AND build_queue:"gate"
And divided by the total number of jobs executed:
> message:Finished AND build_name:"check-tempest-dsvm-neutron" AND
build_queue:"gate"
It will return the failure rate in the selected period for a given job. It is
important to remark that failures in the check queue might be misleading as
the problem causing the failure is most of the time in the patch being checked.
Therefore it is always advisable to work on failures that occurred in the gate
queue. However, these failures are a precious resource for assessing frequency
and determining root cause of failures which manifest in the gate queue.
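As a quick worked example of turning those two counts into a rate (the numbers
are made up for illustration):

.. code-block:: python

    # 12 failed runs out of 96 total runs in the selected window.
    failures, total = 12, 96
    failure_rate = 100.0 * failures / total
    print('%.1f%%' % failure_rate)  # 12.5% -- already above the 10% alert bar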
The step above will provide a quick outlook of where things stand. When the
failure rate rises above 10% for a job in 24 hours, it's time to be on alert.
25% is amber alert. 33% is red alert. Anything above 50% means that probably
somebody from the infra team has already a contract out on you. Whether you
are relaxed, in alert mode, or freaking out because you see a red dot on your
chest, it is always a good idea to check the elastic-recheck pages on a daily
basis.
Under the
`gate pipeline <http://status.openstack.org/elastic-recheck/gate.html>`_
tab, you can see gate failure rates for already known bugs. The bugs in this
page are ordered by decreasing failure rates (for the past 24 hours). If one
of the bugs affecting Neutron is among those on top of that list, you should
check that the corresponding bug is already assigned and somebody is working
on it. If not, and there is no good reason for that, make sure somebody gets
a crack at it as soon as possible. The other part of the
story is to check for `uncategorized <http://status.openstack.org/elastic-recheck/data/uncategorized.html>`_
failures. This is where failures for new (unknown) gate breaking bugs end up;
on the other hand, infra errors causing job failures also end up here. It is
the duty of the diligent Neutron developer to ensure the classification rate
for neutron jobs is as close as possible to 100%. To this aim, the diligent
Neutron developer should adopt the procedure outlined in the following
sections.
.. _troubleshooting-tempest-jobs:
@ -50,15 +60,19 @@ Troubleshooting Tempest jobs
----------------------------
1. Open logs for failed jobs and look for logs/testr_results.html.gz.
2. If that file is missing, check console.html and see where the job failed.
1. If there is a failure in devstack-gate-cleanup-host.txt it's likely to
be an infra issue.
2. If the failure is in devstacklog.txt it could be a devstack, neutron, or
infra issue.
3. However, most of the time the failure is in one of the tempest tests. Take
note of the error message and go to opensearch.
4. On opensearch, search for occurrences of this error message, and try to
identify the root cause for the failure (see below).
5. File a bug for this failure, and push an
:ref:`Elastic Recheck Query <elastic-recheck-query>` for it.
6. If you are confident with the area of this bug, and you have time, assign
it to yourself; otherwise look for an assignee or talk to the Neutron's
bug deputy to find an assignee.
Troubleshooting functional/fullstack job
----------------------------------------
@ -110,10 +124,10 @@ The difference is that in the logs of the Grenade job, there is always
of the Devstack's stack.sh script.
In the "logs/grenade.sh_log.txt" file there is a full log of the grenade.sh run
and you should always start checking failures from that file.
Logs of the Neutron services for "old" and "new" versions are in the same
files, like, for example, "logs/screen-q-svc.txt" for neutron-server logs.
You will find in that log when the service was restarted - that is the moment
when it was upgraded by Grenade and it is now running the new version.
Advanced Troubleshooting of Gate Jobs
-------------------------------------
@ -8,8 +8,8 @@ Neutron Core Reviewers
======================
The `Neutron Core Reviewer Team <https://review.opendev.org/#/admin/groups/38,members>`_
is responsible for many things related to Neutron. A lot of these things
include mundane tasks such as the following:
* Ensuring the bug count is low
* Curating the gate and triaging failures
@ -120,8 +120,9 @@ Some notes on the above:
Sub-project Lieutenants
~~~~~~~~~~~~~~~~~~~~~~~
Neutron also consists of several plugins, drivers, and agents that are
developed effectively as sub-projects within Neutron in their own git
repositories.
Lieutenants are also named for these sub-projects to identify a clear point of
contact and leader for that area. The Lieutenant is also responsible for
updating the core review team for the sub-project's repositories.
@ -221,10 +222,10 @@ have +2 rights to the following git repositories:
* `openstack/neutron-specs <https://opendev.org/openstack/neutron-specs/>`_
The Neutron specs core reviewer team is responsible for reviewing specs
targeted to all Neutron git repositories (Neutron + Advanced Services).
It is worth noting that specs reviewers have the following attributes, which
are potentially different from those of code reviewers:
* Broad understanding of cloud and networking technologies
* Broad understanding of core OpenStack projects and technologies
@ -240,11 +241,12 @@ Drivers Team
------------
The `drivers team <https://review.opendev.org/#/admin/groups/464,members>`_ is
the group of people who have full rights to the specs repo. This team, which
matches
`Launchpad Neutron Drivers team <https://launchpad.net/~neutron-drivers>`_, is
instituted to ensure a consistent architectural vision for the Neutron
project, and to continue to disaggregate and share the responsibilities of
the Neutron PTL. The team is in charge of reviewing and commenting on
:ref:`RFEs <request-for-feature-enhancement>`,
and working with specification contributors to provide guidance on the process
that govern contributions to the Neutron project as a whole. The team
@ -142,9 +142,9 @@ a patch which introduces for example:
#. requirement change,
#. API visible change,
The above list doesn't cover all possible cases. Those are only examples of
fixes which require a bump of the minor version number, but there can also be
other types of changes requiring the same.
Changes that require the minor version number to be bumped should always have a
release note added.
@ -81,11 +81,11 @@ systems.
A third party system can have its voting rights removed as well. If the
system becomes unstable (stops running, voting, or start providing inaccurate
results), the Neutron PTL or any core reviewer will make an attempt to contact
the owner and copy the openstack-discuss mailing list. If no response is
received within 2 days, the Neutron PTL will remove voting rights for the
third party CI system. If a response is received, the owner will work to
correct the issue. If the issue cannot be addressed in a reasonable amount of
time, the voting rights will be temporarily removed.
Log & Test Results Filesystem Layout
------------------------------------
@ -87,10 +87,10 @@ mature OpenStack projects:
using OpenStack CI (upstream) resources so that `Grafana <https://grafana.opendev.org/d/f913631585/neutron-failure-rate>`_
support is available. Access to CI resources and historical data by the
team is key to ensuring stability and robustness of a project.
In particular, it is of paramount importance to ensure that DB
models/migrations are tested functionally to prevent data inconsistency
issues or unexpected DB logic errors due to schema/models mismatch.
For more details, please look at the following resources:
* https://review.opendev.org/#/c/346091/
* https://review.opendev.org/#/c/346272/
@ -152,9 +152,9 @@ the PTL and Neutron team do release planning, and have the most time available
to discuss governance issues.
Projects part of the Neutron Stadium have typically the first milestone to get
their house in order, during which time reassessment happens; if removed
because of a substantial failure to meet the criteria, a project cannot reapply
within the same release cycle in which it was evicted.
The process for proposing a repo into openstack/ and under the Neutron
governance is to propose a patch to the openstack/governance repository.
@ -19,14 +19,14 @@ Neutron Stadium
================
This section contains information on policies and procedures for the so called
Neutron Stadium. The Neutron Stadium is the list of projects that show up in
the OpenStack `Governance Document <https://governance.openstack.org/tc/reference/projects/neutron.html>`_.
The list includes projects that the Neutron PTL and core team are directly
involved in, and manage on a day to day basis. To do so, the PTL and team
ensure that common practices and guidelines are followed throughout the
Stadium, for all aspects that pertain to software development, from inception,
to coding, testing, documentation and more.
The Stadium is not to be intended as a VIP club for OpenStack networking
projects, or an upper tier within OpenStack. It is simply the list of projects
@ -28,9 +28,9 @@ Neutron Jobs Running in Zuul CI
Tempest jobs running in Neutron CI
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In upstream Neutron CI there are various tempest and neutron-tempest-plugin
jobs running. Each of those jobs runs with a slightly different configuration
of Neutron services.
Below is a summary of those jobs.
::
@ -91,7 +91,8 @@ Grenade jobs running in Neutron CI
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In upstream Neutron CI there are various Grenade jobs running.
Each of those jobs runs with a slightly different configuration of Neutron
services.
Below is a summary of those jobs.
::
@ -109,8 +110,8 @@ Tempest jobs running in Neutron experimental CI
In upstream Neutron CI there is also a queue called ``experimental``. It
includes jobs which do not need to be run on every patch and/or jobs which
aren't stable enough to be run always.
Those jobs can be run by leaving a ``check experimental`` comment on the patch
in Gerrit.
Currently that queue contains the jobs listed below.
::
@ -54,43 +54,44 @@ such as what L2 agent to use or what type of routers to create.
* A name - That person has committed to work on an item
* Implicit - The code is executed, yet no assertions are made
+--------------------+------+------------+-----+-----------+----------+------+
| Area               | Unit | Functional | API | Fullstack | Scenario | Gate |
+====================+======+============+=====+===========+==========+======+
| DVR                | V    | L3-V OVS-X | V   | X         | X        | V    |
+--------------------+------+------------+-----+-----------+----------+------+
| L3 HA              | V    | V          | X   | 286087*   | X        | X    |
+--------------------+------+------------+-----+-----------+----------+------+
| L2pop              | V    | X          |     | Implicit  |          |      |
+--------------------+------+------------+-----+-----------+----------+------+
| DHCP HA            | V    |            |     |           |          |      |
+--------------------+------+------------+-----+-----------+----------+------+
| OVS ARP responder  | V    | X          |     | Implicit  |          |      |
+--------------------+------+------------+-----+-----------+----------+------+
| OVS agent          | V    | V          |     | V         |          | V    |
+--------------------+------+------------+-----+-----------+----------+------+
| OVN                | V    | V          |     |           |          | V    |
+--------------------+------+------------+-----+-----------+----------+------+
| Linux Bridge agent | V    | X          |     | V         |          | V    |
+--------------------+------+------------+-----+-----------+----------+------+
| Metering           | V    | X          | V   | X         |          |      |
+--------------------+------+------------+-----+-----------+----------+------+
| DHCP agent         | V    | V          |     |           |          | V    |
+--------------------+------+------------+-----+-----------+----------+------+
| rpc_workers        |      |            |     |           |          | X    |
+--------------------+------+------------+-----+-----------+----------+------+
| Ref IPAM driver    | V    |            |     |           |          | X    |
+--------------------+------+------------+-----+-----------+----------+------+
| MTU advertisement  | V    |            |     | X         |          |      |
+--------------------+------+------------+-----+-----------+----------+------+
| VLAN transparency  | V    |            | X   | X         |          |      |
+--------------------+------+------------+-----+-----------+----------+------+
| Prefix delegation  | V    | X*         |     | X         |          |      |
+--------------------+------+------------+-----+-----------+----------+------+
* Patch https://review.opendev.org/c/openstack/neutron/+/286087 was abandoned.
* Prefix delegation doesn't have functional tests for the dibbler and pd
layers, nor for the L3 agent changes. This has been an area of repeated
regressions.
* The functional job now compiles OVS 2.5 from source, enabling testing
features that we previously could not.
Missing Infrastructure
----------------------
@ -34,5 +34,5 @@ neutron-server neutron.conf file.
The plugin will inject a Deadlock exception on database flushes with a 1/50
probability and a delay of 1 second with a 1/200 probability when SQLAlchemy
objects are loaded into the persistent state from the DB. The goal is to ensure
the code is tolerant of these transient delays/failures that will be
experienced in busy production (and Galera) systems.
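A rough sketch of how such fault injection can be wired up with SQLAlchemy
session events is shown below; it is illustrative only and is not the plugin's
actual code, although the probabilities mirror the description above:

.. code-block:: python

    # Illustrative fault-injection sketch, not the actual plugin code.
    import random
    import time

    from oslo_db import exception as db_exc
    from sqlalchemy import event
    from sqlalchemy.orm import Session


    @event.listens_for(Session, 'before_flush')
    def _maybe_deadlock(session, flush_context, instances):
        # 1-in-50 chance of simulating a Galera write-set conflict.
        if random.randint(1, 50) == 1:
            raise db_exc.DBDeadlock()


    @event.listens_for(Session, 'loaded_as_persistent')
    def _maybe_delay(session, instance):
        # 1-in-200 chance of a one second delay when loading an object.
        if random.randint(1, 200) == 1:
            time.sleep(1)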

View File

@ -96,12 +96,12 @@ Neutron offers a Quality of Service API, initially offering bandwidth
capping at the port level. In the reference implementation, it does this by
utilizing an OVS feature.
neutron.tests.fullstack.test_qos.TestBwLimitQoSOvs.test_bw_limit_qos_policy_rule_lifecycle
is a positive example of how the fullstack testing infrastructure should be used.
It creates a network, subnet, QoS policy & rule and a port utilizing that policy.
It then asserts that the expected bandwidth limitation is present on the OVS
bridge connected to that port. The test is a true integration test, in the
sense that it invokes the API and then asserts that Neutron interacted with
the hypervisor appropriately.
is a positive example of how the fullstack testing infrastructure should be
used. It creates a network, subnet, QoS policy & rule and a port utilizing
that policy. It then asserts that the expected bandwidth limitation is present
on the OVS bridge connected to that port. The test is a true integration test,
in the sense that it invokes the API and then asserts that Neutron interacted
with the hypervisor appropriately.
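
The shape of such a test, heavily simplified and with illustrative helper
names (``safe_client``, the waiting helper and the base class below are
assumptions, not the exact fullstack API), looks roughly like this:

.. code-block:: python

    class TestBwLimitQoSOvs(base.BaseFullStackTestCase):  # base is illustrative

        def test_bw_limit_qos_policy_rule_lifecycle(self):
            # Create the resources through the REST API, as a user would.
            network = self.safe_client.create_network(self.tenant_id)
            self.safe_client.create_subnet(
                self.tenant_id, network['id'], '10.0.0.0/24')
            policy = self.safe_client.create_qos_policy(
                self.tenant_id, 'bw-limit', 'fullstack policy', shared=False)
            self.safe_client.create_bandwidth_limit_rule(
                self.tenant_id, policy['id'], limit=1000, burst=100)
            port = self.safe_client.create_port(
                self.tenant_id, network['id'], qos_policy_id=policy['id'])

            # Assert against the hypervisor side: the limit must show up on
            # the OVS port the VM is plugged into.
            self._wait_for_bw_limit_on_ovs_port(port['id'], 1000, 100)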
How to run fullstack tests locally?
+++++++++++++++++++++++++++++++++++
@ -141,9 +141,9 @@ done you should see a message like:
That means that all went well and you should be ready to run fullstack tests
locally.
Fullstack tests execute a custom dhclient-script. From kernel version 4.14 onward,
apparmor on certain distros could deny the execution of this script. To be sure,
check journalctl ::
Fullstack tests execute a custom dhclient-script. From kernel version 4.14
onward, apparmor on certain distros could deny the execution of this script.
To be sure, check journalctl ::
sudo journalctl | grep DENIED | grep fullstack-dhclient-script
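
If denials show up there, one possible workaround - assuming the
``apparmor-utils`` package is installed and the profile path matches your
distribution - is to put the dhclient profile into complain mode:

.. code-block:: console

    $ sudo aa-complain /etc/apparmor.d/sbin.dhclient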
@ -260,7 +260,8 @@ Each fullstack test is spawning its own, isolated environment with needed
services. So, for example, it can be ``neutron-server``, ``neutron-ovs-agent``
or ``neutron-dhcp-agent``. And often there is a need to check logs of some of
those processes. That is of course possible when running fullstack tests
locally. By default, logs are stored in ``/opt/stack/logs/dsvm-fullstack-logs``.
locally. By default, logs are stored in
``/opt/stack/logs/dsvm-fullstack-logs``.
The logs directory can be defined by the environment variable ``OS_LOG_PATH``.
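For example, to collect the logs in a custom location for a local run (the
path is arbitrary):

.. code-block:: console

    $ export OS_LOG_PATH=/tmp/fullstack-logs
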
In that directory there are directories with names matching names of the
tests, for example:
@ -299,8 +300,8 @@ Debugging fullstack failures in the gate
Sometimes there is a need to investigate the reason why a test failed in the gate.
After every ``neutron-fullstack`` job run, on the Zuul job page there are logs
available. In the directory ``controller/logs/dsvm-fullstack-logs`` you can find
exactly the same files with logs from each test case as mentioned above.
available. In the directory ``controller/logs/dsvm-fullstack-logs`` you can
find exactly the same files with logs from each test case as mentioned above.
You can also check, for example, the journal log from the node where the tests
were run. All those logs are available in the file

View File

@ -591,8 +591,8 @@ On the compute nodes, enable it as follows:
Troubleshooting
---------------
If you run into any problems, take a look at our :doc:`/admin/ovn/troubleshooting`
page.
If you run into any problems, take a look at our
:doc:`/admin/ovn/troubleshooting` page.
Additional Resources
--------------------

View File

@ -4,8 +4,8 @@
ML2 OVS with DevStack
=====================
This document describes how to test OpenStack Neutron with ML2 OpenvSwitch using
DevStack. We will start by describing how to test on a single host.
This document describes how to test OpenStack Neutron with ML2 OpenvSwitch
using DevStack. We will start by describing how to test on a single host.
Single Node Test Environment
----------------------------
@ -22,7 +22,8 @@ to use either CentOS 8 or the latest Ubuntu LTS.
$ git clone https://opendev.org/openstack/devstack.git
$ sudo ./devstack/tools/create-stack-user.sh
3. Switch to the ``stack`` user, copy Devstack to stack folder and clone Neutron.
3. Switch to the ``stack`` user, copy Devstack to the stack user's folder and
clone Neutron.
::

View File

@ -35,7 +35,8 @@ this is `neutron-tempest-plugin <https://opendev.org/openstack/neutron-tempest-p
neutron-tempest-plugin covers API and scenario tests not just for core Neutron
functionality, but for stadium projects as well.
For reference please read `Testing Neutron\'s related sections <testing.html#api-tests>`_
For reference please read
`Testing Neutron\'s related sections <testing.html#api-tests>`_
API Tests
~~~~~~~~~
@ -56,9 +57,10 @@ should be validated, and all interaction with the daemon should be via
a REST client.
The neutron-tempest-plugin/neutron_tempest_plugin directory was copied from the
Tempest project around the Kilo timeframe. At the time, there was an overlap of tests
between the Tempest and Neutron repositories. This overlap was then eliminated by carving
out a subset of resources that belong to Tempest, with the rest in Neutron.
Tempest project around the Kilo timeframe. At the time, there was an overlap
of tests between the Tempest and Neutron repositories. This overlap was then
eliminated by carving out a subset of resources that belong to Tempest, with
the rest in Neutron.
API tests that belong to Tempest deal with a subset of Neutron's resources:
@ -91,9 +93,10 @@ define a list of required extensions for a particular test class.
Scenario Tests
~~~~~~~~~~~~~~
Scenario tests (neutron-tempest-plugin/neutron_tempest_plugin/scenario), like API tests,
use the Tempest test infrastructure and have the same requirements. Guidelines for
writing a good scenario test may be found at the Tempest developer guide:
Scenario tests (neutron-tempest-plugin/neutron_tempest_plugin/scenario),
like API tests, use the Tempest test infrastructure and have the same
requirements. Guidelines for writing a good scenario test may be found in
the Tempest developer guide:
https://docs.openstack.org/tempest/latest/field_guide/scenario.html
Scenario tests, like API tests, are split between the Tempest and Neutron

View File

@ -37,9 +37,10 @@ This test compares models with the result of existing migrations. It is based on
<https://docs.openstack.org/oslo.db/latest/reference/api/oslo_db.sqlalchemy.test_migrations.html>`_
which is provided by oslo.db and was adapted for Neutron. It compares core
Neutron models and vendor specific models with migrations from Neutron core and
migrations from the driver/plugin repo. This test is functional - it runs against
MySQL and PostgreSQL dialects. The detailed description of this test can be
found in Neutron Database Layer section - :ref:`testing-database-migrations`.
migrations from the driver/plugin repo. This test is functional - it runs
against MySQL and PostgreSQL dialects. The detailed description of this test
can be found in Neutron Database Layer
section - :ref:`testing-database-migrations`.
Steps for implementing the test
-------------------------------
@ -84,9 +85,9 @@ names, which were moved out of Neutron: ::
Also, the test uses **VERSION_TABLE**, which is the name of the table in the
database that
contains revision id of head migration. It is preferred to keep this variable in
``networking_foo/db/migration/alembic_migrations/__init__.py`` so it will be easy
to use in test.
contains the revision id of the head migration. It is preferred to keep this
variable in ``networking_foo/db/migration/alembic_migrations/__init__.py`` so
it will be easy to use in the test.
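
For instance, the variable could simply live alongside the subproject's
alembic branches (the table name below is purely illustrative):

.. code-block:: python

    # networking_foo/db/migration/alembic_migrations/__init__.py
    VERSION_TABLE = 'alembic_version_foo'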
Create a module ``networking_foo/tests/functional/db/test_migrations.py``
with the following content: ::

View File

@ -3,7 +3,8 @@ Introduction
============
This document describes how features are listed in
:doc:`general_feature_support_matrix` and :doc:`provider_network_support_matrix`.
:doc:`general_feature_support_matrix` and
:doc:`provider_network_support_matrix`.
Goals
~~~~~

View File

@ -60,11 +60,13 @@ Only a single instance of the ``ovsdb-server`` and ``ovn-northd`` services
can operate in a deployment. However, deployment tools can implement
active/passive high-availability using a management tool that monitors
service health and automatically starts these services on another node after
failure of the primary node. See the :doc:`/ovn/faq/index` for more information.
failure of the primary node. See the :doc:`/ovn/faq/index` for more
information.
#. Install the ``ovn-central`` and ``openvswitch`` packages (RHEL/Fedora).
#. Install the ``ovn-central`` and ``openvswitch-common`` packages (Ubuntu/Debian).
#. Install the ``ovn-central`` and ``openvswitch-common`` packages
(Ubuntu/Debian).
#. Start the OVS service. The central OVS service starts the ``ovsdb-server``
service that manages OVN databases.

View File

@ -1,7 +1,7 @@
Edit the ``/etc/hosts`` file to contain the following:
.. path /etc/hosts
.. code-block:: none
.. code-block::
# controller
10.0.0.11 controller

View File

@ -22,17 +22,17 @@ at [1]_.
* DHCP service for instances
ML2/OVS adds packet filtering rules to every instance that allow DHCP queries
from instances to reach the DHCP agent. For OVN this traffic has to be explicitly
allowed by security group rules attached to the instance. Note that the default
security group does allow all outgoing traffic, so this only becomes relevant
when using custom security groups [6]_. Proposed patch is [7]_ but it
needs to be revived and updated.
from instances to reach the DHCP agent. For OVN this traffic has to be
explicitly allowed by security group rules attached to the instance. Note
that the default security group does allow all outgoing traffic, so this only
becomes relevant when using custom security groups [6]_. A patch was proposed
in [7]_, but it needs to be revived and updated.
* DNS resolution for instances
OVN cannot use the host's networking for DNS resolution, so Case 2b in [8]_ can
only be used when additional DHCP agents are deployed. For Case 2a a different
configuration option has to be used in ``ml2_conf.ini``::
OVN cannot use the host's networking for DNS resolution, so Case 2b in [8]_
can only be used when additional DHCP agents are deployed. For Case 2a a
different configuration option has to be used in ``ml2_conf.ini``::
[ovn]
dns_servers = 203.0.113.8, 198.51.100.53
@ -41,8 +41,8 @@ at [1]_.
responses from the configured DNS servers. This may lead to confusion in
debugging.
OVN can only answer queries that are sent via UDP, queries that use TCP will be
ignored by OVN and forwarded to the configured resolvers.
OVN can only answer queries that are sent via UDP; queries that use TCP will
be ignored by OVN and forwarded to the configured resolvers.
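
One way to observe this from a guest is to force the transport explicitly
with ``dig`` (the record name shown is illustrative):

.. code-block:: console

    $ dig +notcp +noedns vm1.example.org   # answered locally by OVN
    $ dig +tcp vm1.example.org             # forwarded to the configured resolvers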
OVN can only answer queries with no additional options being set (EDNS). Such
queries, depending on the OVN version, will either get broken responses or will

View File

@ -46,7 +46,8 @@ Usage
Examples
--------
If vm1 and vm2 only have one network interface and you want to trace between them:
If vm1 and vm2 only have one network interface and you want to trace between
them:
.. code-block:: console
@ -70,7 +71,8 @@ To add to the generated microflow, use -m. For example, for SSH:
$ sudo ml2ovn-trace --net net1 --from server=vm1 --to server=vm2 -m "tcp.dst==22"
To pass arbitrary (non microflow) arguments to ovn-trace, place them after '--':
To pass arbitrary (non microflow) arguments to ovn-trace, place them after
'--':
.. code-block:: console

View File

@ -6,8 +6,8 @@ Neutron Objects
Directory
=========
This directory is designed to contain all modules which have objects definitions
shipped with core Neutron. The files and directories located inside
This directory is designed to contain all modules which have object
definitions shipped with core Neutron. The files and directories located inside
of this directory should follow the guidelines below.
@ -17,6 +17,8 @@ Structure
The Neutron objects tree should have the following structure:
* The expected directory structure is flat, except for the ML2 plugins. All ML2
plugin objects should fall under the plugins subdirectory (i.e. plugins/ml2/gre_allocation).
* Module names should use singular forms for nouns (network.py, not networks.py).
plugin objects should fall under the plugins subdirectory
(e.g. plugins/ml2/gre_allocation).
* Module names should use singular forms for nouns
(network.py, not networks.py).

View File

@ -10,3 +10,4 @@ stestr>=1.0.0 # Apache-2.0
ddt>=1.0.1 # MIT
# Needed to run DB commands in virtualenvs
PyMySQL>=0.7.6 # MIT License
doc8>=0.6.0 # Apache-2.0

tox.ini
View File

@ -150,12 +150,27 @@ commands=
flake8
bash ./tools/coding-checks.sh --pylint '{posargs}'
neutron-db-manage --config-file neutron/tests/etc/neutron.conf check_migration
# RST linter - remove the ignores once files are updated
doc8 \
--ignore-path doc/source/admin/config-qos-min-pps.rst \
--ignore-path doc/source/admin/deploy-provider-verifynetworkoperation.txt \
--ignore-path doc/source/admin/deploy-selfservice-verifynetworkoperation.txt \
--ignore-path doc/source/admin/shared/deploy-ha-vrrp-initialnetworks.txt \
--ignore-path doc/source/admin/shared/deploy-ha-vrrp-verifynetworkoperation.txt \
--ignore-path doc/source/admin/shared/deploy-provider-initialnetworks.txt \
--ignore-path doc/source/configuration/metering-agent.rst \
--ignore-path doc/source/contributor/internals/images \
--ignore-path doc/source/contributor/policies/bugs.rst \
doc/source neutron CONTRIBUTING.rst README.rst TESTING.rst
{[testenv:genconfig]commands}
{[testenv:bashate]commands}
{[testenv:bandit]commands}
{[testenv:genpolicy]commands}
allowlist_externals = bash
[doc8]
max-line-length = 79
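# Illustrative note: once installed, doc8 can also be invoked directly, e.g.
#   doc8 --max-line-length 79 TESTING.rst doc/source
# but the canonical entry point is the pep8 tox environment above.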
[testenv:cover]
description =
Run unit tests and generate coverage report.