
Merge remote-tracking branch 'origin/master' into merge-branch

Change-Id: I9c29ad9564671ae5a2db35835bc4a30e75482cb2
tags/7.0.0.0rc1
Doug Wiegley 4 years ago
commit 2c5f44e1b3
100 changed files with 4211 additions and 1107 deletions
  1. README.rst (+4, -1)
  2. TESTING.rst (+4, -0)
  3. bin/neutron-rootwrap-xen-dom0 (+2, -1)
  4. devstack/lib/l2_agent (+13, -0)
  5. devstack/lib/ml2 (+13, -0)
  6. devstack/lib/qos (+20, -0)
  7. devstack/plugin.sh (+18, -0)
  8. devstack/settings (+3, -0)
  9. doc/dashboards/graphite.dashboard.html (+34, -0)
  10. doc/source/devref/alembic_migrations.rst (+313, -0)
  11. doc/source/devref/callbacks.rst (+18, -0)
  12. doc/source/devref/contribute.rst (+8, -6)
  13. doc/source/devref/db_layer.rst (+4, -143)
  14. doc/source/devref/fullstack_testing.rst (+16, -13)
  15. doc/source/devref/images/fullstack-multinode-simulation.png (BIN)
  16. doc/source/devref/images/fullstack_multinode_simulation.png (BIN)
  17. doc/source/devref/index.rst (+7, -0)
  18. doc/source/devref/layer3.rst (+5, -5)
  19. doc/source/devref/linuxbridge_agent.rst (+2, -2)
  20. doc/source/devref/openvswitch_agent.rst (+11, -1)
  21. doc/source/devref/quality_of_service.rst (+357, -0)
  22. doc/source/devref/quota.rst (+332, -0)
  23. doc/source/devref/rpc_callbacks.rst (+187, -0)
  24. doc/source/devref/security_group_api.rst (+3, -3)
  25. doc/source/devref/sub_project_guidelines.rst (+148, -0)
  26. doc/source/devref/sub_projects.rst (+40, -3)
  27. doc/source/devref/template_model_sync_test.rst (+157, -0)
  28. doc/source/devref/testing_coverage.rst (+114, -0)
  29. doc/source/man/neutron-server.rst (+4, -3)
  30. doc/source/policies/core-reviewers.rst (+17, -1)
  31. etc/dhcp_agent.ini (+10, -1)
  32. etc/l3_agent.ini (+5, -0)
  33. etc/neutron.conf (+24, -1)
  34. etc/neutron/plugins/ibm/sdnve_neutron_plugin.ini (+0, -50)
  35. etc/neutron/plugins/ml2/ml2_conf.ini (+20, -3)
  36. etc/neutron/plugins/ml2/ml2_conf_cisco.ini (+0, -157)
  37. etc/neutron/plugins/ml2/openvswitch_agent.ini (+10, -0)
  38. etc/neutron/plugins/nec/nec.ini (+0, -63)
  39. etc/neutron/plugins/plumgrid/plumgrid.ini (+0, -14)
  40. etc/neutron/plugins/vmware/nsx.ini (+0, -283)
  41. etc/neutron/plugins/vmware/policy/network-gateways.json (+0, -10)
  42. etc/neutron/plugins/vmware/policy/routers.json (+0, -7)
  43. etc/neutron/rootwrap.d/dibbler.filters (+16, -0)
  44. etc/neutron/rootwrap.d/nec-plugin.filters (+0, -12)
  45. etc/policy.json (+24, -4)
  46. etc/rootwrap.conf (+1, -1)
  47. neutron/agent/common/ovs_lib.py (+53, -6)
  48. neutron/agent/dhcp/config.py (+7, -1)
  49. neutron/agent/l2/__init__.py (+0, -0)
  50. neutron/agent/l2/agent_extension.py (+59, -0)
  51. neutron/agent/l2/extensions/__init__.py (+0, -0)
  52. neutron/agent/l2/extensions/manager.py (+85, -0)
  53. neutron/agent/l2/extensions/qos.py (+149, -0)
  54. neutron/agent/l3/agent.py (+27, -0)
  55. neutron/agent/l3/config.py (+7, -0)
  56. neutron/agent/l3/dvr_edge_router.py (+26, -11)
  57. neutron/agent/l3/dvr_fip_ns.py (+18, -9)
  58. neutron/agent/l3/dvr_local_router.py (+15, -13)
  59. neutron/agent/l3/dvr_router_base.py (+5, -1)
  60. neutron/agent/l3/fip_rule_priority_allocator.py (+53, -0)
  61. neutron/agent/l3/ha_router.py (+10, -0)
  62. neutron/agent/l3/router_info.py (+83, -22)
  63. neutron/agent/l3/router_processing_queue.py (+2, -0)
  64. neutron/agent/linux/dhcp.py (+119, -68)
  65. neutron/agent/linux/dibbler.py (+181, -0)
  66. neutron/agent/linux/external_process.py (+11, -6)
  67. neutron/agent/linux/interface.py (+45, -0)
  68. neutron/agent/linux/ip_conntrack.py (+3, -4)
  69. neutron/agent/linux/ip_lib.py (+5, -0)
  70. neutron/agent/linux/iptables_firewall.py (+85, -25)
  71. neutron/agent/linux/iptables_manager.py (+7, -0)
  72. neutron/agent/linux/pd.py (+356, -0)
  73. neutron/agent/linux/pd_driver.py (+65, -0)
  74. neutron/agent/linux/utils.py (+32, -14)
  75. neutron/agent/metadata/agent.py (+6, -3)
  76. neutron/agent/metadata/namespace_proxy.py (+1, -1)
  77. neutron/agent/ovsdb/api.py (+31, -6)
  78. neutron/agent/ovsdb/impl_idl.py (+8, -2)
  79. neutron/agent/ovsdb/impl_vsctl.py (+20, -4)
  80. neutron/agent/ovsdb/native/commands.py (+28, -1)
  81. neutron/agent/securitygroups_rpc.py (+0, -19)
  82. neutron/agent/windows/utils.py (+20, -2)
  83. neutron/api/api_common.py (+11, -1)
  84. neutron/api/extensions.py (+18, -8)
  85. neutron/api/rpc/agentnotifiers/l3_rpc_agent_api.py (+5, -4)
  86. neutron/api/rpc/callbacks/__init__.py (+0, -0)
  87. neutron/api/rpc/callbacks/consumer/__init__.py (+0, -0)
  88. neutron/api/rpc/callbacks/consumer/registry.py (+44, -0)
  89. neutron/api/rpc/callbacks/events.py (+8, -11)
  90. neutron/api/rpc/callbacks/exceptions.py (+8, -9)
  91. neutron/api/rpc/callbacks/producer/__init__.py (+0, -0)
  92. neutron/api/rpc/callbacks/producer/registry.py (+62, -0)
  93. neutron/api/rpc/callbacks/resource_manager.py (+139, -0)
  94. neutron/api/rpc/callbacks/resources.py (+49, -0)
  95. neutron/api/rpc/handlers/dhcp_rpc.py (+3, -1)
  96. neutron/api/rpc/handlers/dvr_rpc.py (+12, -4)
  97. neutron/api/rpc/handlers/l3_rpc.py (+13, -4)
  98. neutron/api/rpc/handlers/resources_rpc.py (+174, -0)
  99. neutron/api/v2/attributes.py (+68, -1)
  100. neutron/api/v2/base.py (+11, -58)

README.rst (+4, -1)

@@ -15,7 +15,10 @@ The latest and most in-depth documentation on how to use Neutron is
available at: <http://docs.openstack.org>. This includes:

Neutron Administrator Guide
http://docs.openstack.org/admin-guide-cloud/content/ch_networking.html
http://docs.openstack.org/admin-guide-cloud/networking.html

Networking Guide
http://docs.openstack.org/networking-guide/

Neutron API Reference:
http://docs.openstack.org/api/openstack-network/2.0/content/


TESTING.rst (+4, -0)

@@ -309,6 +309,10 @@ current unit tests coverage by running::

$ ./run_tests.sh -c

Since the coverage command can only show unit test coverage, a coverage
document is maintained that shows test coverage per area of code in:
doc/source/devref/testing_coverage.rst.

Debugging
---------



bin/neutron-rootwrap-xen-dom0 (+2, -1)

@@ -24,7 +24,8 @@ responsible determining whether a command is safe to execute.
from __future__ import print_function

from six.moves import configparser as ConfigParser
import json
from oslo_serialization import jsonutils as json

import os
import select
import sys


devstack/lib/l2_agent (+13, -0)

@@ -0,0 +1,13 @@
function plugin_agent_add_l2_agent_extension {
    local l2_agent_extension=$1
    if [[ -z "$L2_AGENT_EXTENSIONS" ]]; then
        L2_AGENT_EXTENSIONS=$l2_agent_extension
    elif [[ ! ,${L2_AGENT_EXTENSIONS}, =~ ,${l2_agent_extension}, ]]; then
        L2_AGENT_EXTENSIONS+=",$l2_agent_extension"
    fi
}


function configure_l2_agent {
    iniset /$Q_PLUGIN_CONF_FILE agent extensions "$L2_AGENT_EXTENSIONS"
}

devstack/lib/ml2 (+13, -0)

@@ -0,0 +1,13 @@
function enable_ml2_extension_driver {
    local extension_driver=$1
    if [[ -z "$Q_ML2_PLUGIN_EXT_DRIVERS" ]]; then
        Q_ML2_PLUGIN_EXT_DRIVERS=$extension_driver
    elif [[ ! ,${Q_ML2_PLUGIN_EXT_DRIVERS}, =~ ,${extension_driver}, ]]; then
        Q_ML2_PLUGIN_EXT_DRIVERS+=",$extension_driver"
    fi
}


function configure_qos_ml2 {
    enable_ml2_extension_driver "qos"
}

devstack/lib/qos (+20, -0)

@@ -0,0 +1,20 @@
function configure_qos_service_plugin {
    _neutron_service_plugin_class_add "qos"
}


function configure_qos_core_plugin {
    configure_qos_$Q_PLUGIN
}


function configure_qos_l2_agent {
    plugin_agent_add_l2_agent_extension "qos"
}


function configure_qos {
    configure_qos_service_plugin
    configure_qos_core_plugin
    configure_qos_l2_agent
}

devstack/plugin.sh (+18, -0)

@@ -0,0 +1,18 @@
LIBDIR=$DEST/neutron/devstack/lib

source $LIBDIR/l2_agent
source $LIBDIR/ml2
source $LIBDIR/qos


if [[ "$1" == "stack" && "$2" == "install" ]]; then
    if is_service_enabled q-qos; then
        configure_qos
    fi
fi

if [[ "$1" == "stack" && "$2" == "post-config" ]]; then
    if is_service_enabled q-agt; then
        configure_l2_agent
    fi
fi

devstack/settings (+3, -0)

@@ -0,0 +1,3 @@
L2_AGENT_EXTENSIONS=${L2_AGENT_EXTENSIONS:-}

enable_service q-qos

doc/dashboards/graphite.dashboard.html (+34, -0)

@@ -0,0 +1,34 @@

<h2>
Neutron Graphite Thumbnails - Click to see full size figure
</h2>
<table border="1">
<tr>
<td align="center">
Failure Percentage - Last 10 Days - DVR and Full Jobs<br>
<a href="http://graphite.openstack.org/render/?title=Failure Percentage - Last 10 Days - DVR and Full Jobs&from=-10days&height=500&until=now&width=1200&bgcolor=ffffff&fgcolor=000000&yMax=100&yMin=0&target=color%28alias%28movingAverage%28asPercent%28stats.zuul.pipeline.check.job.gate-tempest-dsvm-neutron-dvr-multinode-full.FAILURE,sum%28stats.zuul.pipeline.check.job.gate-tempest-dsvm-neutron-dvr-multinode-full.{SUCCESS,FAILURE}%29%29,%2736hours%27%29,%20%27gate-tempest-dsvm-neutron-dvr-multinode-full%27%29,%27orange%27%29&target=color%28alias%28movingAverage%28asPercent%28stats.zuul.pipeline.check.job.gate-tempest-dsvm-neutron-dvr.FAILURE,sum%28stats.zuul.pipeline.check.job.gate-tempest-dsvm-neutron-dvr.{SUCCESS,FAILURE}%29%29,%2736hours%27%29,%20%27gate-tempest-dsvm-neutron-dvr%27%29,%27blue%27%29&target=color%28alias%28movingAverage%28asPercent%28stats.zuul.pipeline.check.job.gate-tempest-dsvm-neutron-multinode-full.FAILURE,sum%28stats.zuul.pipeline.check.job.gate-tempest-dsvm-neutron-multinode-full.{SUCCESS,FAILURE}%29%29,%2736hours%27%29,%20%27gate-tempest-dsvm-neutron-multinode-full%27%29,%27green%27%29&target=color%28alias%28movingAverage%28asPercent%28stats.zuul.pipeline.check.job.gate-tempest-dsvm-neutron-full.FAILURE,sum%28stats.zuul.pipeline.check.job.gate-tempest-dsvm-neutron-full.{SUCCESS,FAILURE}%29%29,%2736hours%27%29,%20%27gate-tempest-dsvm-neutron-full%27%29,%27red%27%29">
<img src="http://graphite.openstack.org/render/?from=-10days&height=500&until=now&width=1200&bgcolor=ffffff&fgcolor=000000&yMax=100&yMin=0&target=color%28alias%28movingAverage%28asPercent%28stats.zuul.pipeline.check.job.gate-tempest-dsvm-neutron-dvr-multinode-full.FAILURE,sum%28stats.zuul.pipeline.check.job.gate-tempest-dsvm-neutron-dvr-multinode-full.{SUCCESS,FAILURE}%29%29,%2736hours%27%29,%20%27gate-tempest-dsvm-neutron-dvr-multinode-full%27%29,%27orange%27%29&target=color%28alias%28movingAverage%28asPercent%28stats.zuul.pipeline.check.job.gate-tempest-dsvm-neutron-dvr.FAILURE,sum%28stats.zuul.pipeline.check.job.gate-tempest-dsvm-neutron-dvr.{SUCCESS,FAILURE}%29%29,%2736hours%27%29,%20%27gate-tempest-dsvm-neutron-dvr%27%29,%27blue%27%29&target=color%28alias%28movingAverage%28asPercent%28stats.zuul.pipeline.check.job.gate-tempest-dsvm-neutron-multinode-full.FAILURE,sum%28stats.zuul.pipeline.check.job.gate-tempest-dsvm-neutron-multinode-full.{SUCCESS,FAILURE}%29%29,%2736hours%27%29,%20%27gate-tempest-dsvm-neutron-multinode-full%27%29,%27green%27%29&target=color%28alias%28movingAverage%28asPercent%28stats.zuul.pipeline.check.job.gate-tempest-dsvm-neutron-full.FAILURE,sum%28stats.zuul.pipeline.check.job.gate-tempest-dsvm-neutron-full.{SUCCESS,FAILURE}%29%29,%2736hours%27%29,%20%27gate-tempest-dsvm-neutron-full%27%29,%27red%27%29" width="400">
</a>
</td>
<td align="center">
Failure Percentage - Last 10 Days - Grenade, DSVM API/Functional/Fullstack<br>
<a href="http://graphite.openstack.org/render/?title=Failure Percentage - Last 10 Days - Grenade, DSVM API/Functional/Fullstack&from=-10days&height=500&until=now&width=1200&bgcolor=ffffff&fgcolor=000000&yMax=100&yMin=0&target=color%28alias%28movingAverage%28asPercent%28stats.zuul.pipeline.check.job.gate-grenade-dsvm-neutron.FAILURE,sum%28stats.zuul.pipeline.check.job.gate-grenade-dsvm-neutron.{SUCCESS,FAILURE}%29%29,%2736hours%27%29,%20%27gate-grenade-dsvm-neutron%27%29,%27orange%27%29&target=color%28alias%28movingAverage%28asPercent%28stats.zuul.pipeline.check.job.gate-neutron-dsvm-api.FAILURE,sum%28stats.zuul.pipeline.check.job.gate-neutron-dsvm-api.{SUCCESS,FAILURE}%29%29,%2736hours%27%29,%20%27gate-neutron-dsvm-api%27%29,%27blue%27%29&target=color%28alias%28movingAverage%28asPercent%28stats.zuul.pipeline.check.job.gate-neutron-dsvm-functional.FAILURE,sum%28stats.zuul.pipeline.check.job.gate-neutron-dsvm-functional.{SUCCESS,FAILURE}%29%29,%2736hours%27%29,%20%27gate-neutron-dsvm-functional%27%29,%27green%27%29&target=color%28alias%28movingAverage%28asPercent%28stats.zuul.pipeline.check.job.gate-neutron-dsvm-fullstack.FAILURE,sum%28stats.zuul.pipeline.check.job.gate-neutron-dsvm-fullstack.{SUCCESS,FAILURE}%29%29,%2736hours%27%29,%20%27gate-neutron-dsvm-fullstack%27%29,%27red%27%29">
<img src="http://graphite.openstack.org/render/?from=-10days&height=500&until=now&width=1200&bgcolor=ffffff&fgcolor=000000&yMax=100&yMin=0&target=color%28alias%28movingAverage%28asPercent%28stats.zuul.pipeline.check.job.gate-grenade-dsvm-neutron.FAILURE,sum%28stats.zuul.pipeline.check.job.gate-grenade-dsvm-neutron.{SUCCESS,FAILURE}%29%29,%2736hours%27%29,%20%27gate-grenade-dsvm-neutron%27%29,%27orange%27%29&target=color%28alias%28movingAverage%28asPercent%28stats.zuul.pipeline.check.job.gate-neutron-dsvm-api.FAILURE,sum%28stats.zuul.pipeline.check.job.gate-neutron-dsvm-api.{SUCCESS,FAILURE}%29%29,%2736hours%27%29,%20%27gate-neutron-dsvm-api%27%29,%27blue%27%29&target=color%28alias%28movingAverage%28asPercent%28stats.zuul.pipeline.check.job.gate-neutron-dsvm-functional.FAILURE,sum%28stats.zuul.pipeline.check.job.gate-neutron-dsvm-functional.{SUCCESS,FAILURE}%29%29,%2736hours%27%29,%20%27gate-neutron-dsvm-functional%27%29,%27green%27%29&target=color%28alias%28movingAverage%28asPercent%28stats.zuul.pipeline.check.job.gate-neutron-dsvm-fullstack.FAILURE,sum%28stats.zuul.pipeline.check.job.gate-neutron-dsvm-fullstack.{SUCCESS,FAILURE}%29%29,%2736hours%27%29,%20%27gate-neutron-dsvm-fullstack%27%29,%27red%27%29" width="400">
</a>
</td>
</tr>
<tr>
<td align="center">
Failure Percentage - Last 10 Days - Rally, LinuxBridge, LBaaS v1/v2<br>
<a href="http://graphite.openstack.org/render/?title=ailure Percentage - Last 10 Days - Rally, LinuxBridge, LBaaS v1/v2&from=-10days&height=500&until=now&width=1200&bgcolor=ffffff&fgcolor=000000&yMax=100&yMin=0&target=color%28alias%28movingAverage%28asPercent%28stats.zuul.pipeline.check.job.gate-rally-dsvm-neutron-neutron.FAILURE,sum%28stats.zuul.pipeline.check.job.gate-rally-dsvm-neutron-neutron.{SUCCESS,FAILURE}%29%29,%2736hours%27%29,%20%27gate-rally-dsvm-neutron-neutron%27%29,%27orange%27%29&target=color%28alias%28movingAverage%28asPercent%28stats.zuul.pipeline.check.job.gate-tempest-dsvm-neutron-linuxbridge.FAILURE,sum%28stats.zuul.pipeline.check.job.gate-tempest-dsvm-neutron-linuxbridge.{SUCCESS,FAILURE}%29%29,%2736hours%27%29,%20%27gate-tempest-dsvm-neutron-linuxbridge%27%29,%27blue%27%29&target=color%28alias%28movingAverage%28asPercent%28stats.zuul.pipeline.check.job.gate-neutron-lbaasv1-dsvm-api.FAILURE,sum%28stats.zuul.pipeline.check.job.gate-neutron-lbaasv1-dsvm-api.{SUCCESS,FAILURE}%29%29,%2736hours%27%29,%20%27gate-neutron-lbaasv1-dsvm-api%27%29,%27green%27%29&target=color%28alias%28movingAverage%28asPercent%28stats.zuul.pipeline.check.job.gate-neutron-lbaasv2-dsvm-api.FAILURE,sum%28stats.zuul.pipeline.check.job.gate-neutron-lbaasv2-dsvm-api.{SUCCESS,FAILURE}%29%29,%2736hours%27%29,%20%27gate-neutron-lbaasv2-dsvm-api%27%29,%27red%27%29">
<img src="http://graphite.openstack.org/render/?from=-10days&height=500&until=now&width=1200&bgcolor=ffffff&fgcolor=000000&yMax=100&yMin=0&target=color%28alias%28movingAverage%28asPercent%28stats.zuul.pipeline.check.job.gate-rally-dsvm-neutron-neutron.FAILURE,sum%28stats.zuul.pipeline.check.job.gate-rally-dsvm-neutron-neutron.{SUCCESS,FAILURE}%29%29,%2736hours%27%29,%20%27gate-rally-dsvm-neutron-neutron%27%29,%27orange%27%29&target=color%28alias%28movingAverage%28asPercent%28stats.zuul.pipeline.check.job.gate-tempest-dsvm-neutron-linuxbridge.FAILURE,sum%28stats.zuul.pipeline.check.job.gate-tempest-dsvm-neutron-linuxbridge.{SUCCESS,FAILURE}%29%29,%2736hours%27%29,%20%27gate-tempest-dsvm-neutron-linuxbridge%27%29,%27blue%27%29&target=color%28alias%28movingAverage%28asPercent%28stats.zuul.pipeline.check.job.gate-neutron-lbaasv1-dsvm-api.FAILURE,sum%28stats.zuul.pipeline.check.job.gate-neutron-lbaasv1-dsvm-api.{SUCCESS,FAILURE}%29%29,%2736hours%27%29,%20%27gate-neutron-lbaasv1-dsvm-api%27%29,%27green%27%29&target=color%28alias%28movingAverage%28asPercent%28stats.zuul.pipeline.check.job.gate-neutron-lbaasv2-dsvm-api.FAILURE,sum%28stats.zuul.pipeline.check.job.gate-neutron-lbaasv2-dsvm-api.{SUCCESS,FAILURE}%29%29,%2736hours%27%29,%20%27gate-neutron-lbaasv2-dsvm-api%27%29,%27red%27%29" width="400">
</a>
</td>
<td align="center">
Failure Percentage - Last 10 Days - Large Opts<br>
<a href="http://graphite.openstack.org/render/?title=Failure Percentage - Last 10 Days - Large Opts&from=-10days&height=500&until=now&width=1200&bgcolor=ffffff&fgcolor=000000&yMax=100&yMin=0&target=color%28alias%28movingAverage%28asPercent%28stats.zuul.pipeline.check.job.gate-tempest-dsvm-neutron-large-ops.FAILURE,sum%28stats.zuul.pipeline.check.job.gate-tempest-dsvm-neutron-large-ops.{SUCCESS,FAILURE}%29%29,%2736hours%27%29,%20%27gate-tempest-dsvm-neutron-large-ops%27%29,%27orange%27%29">
<img src="http://graphite.openstack.org/render/?from=-10days&height=500&until=now&width=1200&bgcolor=ffffff&fgcolor=000000&yMax=100&yMin=0&target=color%28alias%28movingAverage%28asPercent%28stats.zuul.pipeline.check.job.gate-tempest-dsvm-neutron-large-ops.FAILURE,sum%28stats.zuul.pipeline.check.job.gate-tempest-dsvm-neutron-large-ops.{SUCCESS,FAILURE}%29%29,%2736hours%27%29,%20%27gate-tempest-dsvm-neutron-large-ops%27%29,%27orange%27%29" width="400">
</a>
</td>
</tr>
</table>

doc/source/devref/alembic_migrations.rst (+313, -0)

@@ -0,0 +1,313 @@
..
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.


Convention for heading levels in Neutron devref:
======= Heading 0 (reserved for the title in a document)
------- Heading 1
~~~~~~~ Heading 2
+++++++ Heading 3
''''''' Heading 4
(Avoid deeper levels because they do not render well.)


Alembic Migrations
==================

Introduction
------------

The migrations in the alembic/versions contain the changes needed to migrate
from older Neutron releases to newer versions. A migration occurs by executing
a script that details the changes needed to upgrade the database. The migration
scripts are ordered so that multiple scripts can run sequentially to update the
database.


The Migration Wrapper
---------------------

The scripts are executed by Neutron's migration wrapper ``neutron-db-manage``
which uses the Alembic library to manage the migration. Pass the ``--help``
option to the wrapper for usage information.

The wrapper takes some options followed by some commands::

neutron-db-manage <options> <commands>

The wrapper needs to be provided with the database connection string, which is
usually provided in the ``neutron.conf`` configuration file in an installation.
The wrapper automatically reads from ``/etc/neutron/neutron.conf`` if it is
present. If the configuration is in a different location::

neutron-db-manage --config-file /path/to/neutron.conf <commands>

Multiple ``--config-file`` options can be passed if needed.

Instead of reading the DB connection from the configuration file(s) the
``--database-connection`` option can be used::

neutron-db-manage --database-connection mysql+pymysql://root:secret@127.0.0.1/neutron?charset=utf8 <commands>

For some commands the wrapper needs to know the entrypoint of the core plugin
for the installation. This can be read from the configuration file(s) or
specified using the ``--core_plugin`` option::

neutron-db-manage --core_plugin neutron.plugins.ml2.plugin.Ml2Plugin <commands>

When giving examples below of using the wrapper the options will not be shown.
It is assumed you will use the options that you need for your environment.

For new deployments you will start with an empty database. You then upgrade
to the latest database version via::

neutron-db-manage upgrade heads

For existing deployments the database will already be at some version. To
check the current database version::

neutron-db-manage current

After installing a new version of Neutron server, upgrading the database is
the same command::

neutron-db-manage upgrade heads

To create a script to run the migration offline::

neutron-db-manage upgrade heads --sql

To run the offline migration between specific migration versions::

neutron-db-manage upgrade <start version>:<end version> --sql

Upgrade the database incrementally::

neutron-db-manage upgrade --delta <# of revs>

**NOTE:** Database downgrade is not supported.


Migration Branches
------------------

Neutron makes use of alembic branches for two purposes.

1. Independent Sub-Project Tables
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Various `sub-projects <sub_projects.html>`_ can be installed with Neutron. Each
sub-project registers its own alembic branch which is responsible for migrating
the schemas of the tables owned by the sub-project.

The neutron-db-manage script detects which sub-projects have been installed by
enumerating the ``neutron.db.alembic_migrations`` entrypoints. For more details
see the `Entry Points section of Contributing extensions to Neutron
<contribute.html#entry-points>`_.

The neutron-db-manage script runs the given alembic command against all
installed sub-projects. (An exception is the ``revision`` command, which is
discussed in the `Developers`_ section below.)

2. Offline/Online Migrations
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Since Liberty, Neutron maintains two parallel alembic migration branches.

The first one, called 'expand', is used to store expansion-only migration
rules. Those rules are strictly additive and can be applied while
neutron-server is running. Examples of additive database schema changes are:
creating a new table, adding a new table column, adding a new index, etc.

The second branch, called 'contract', is used to store those migration rules
that are not safe to apply while neutron-server is running. Those include:
column or table removal, moving data from one part of the database into another
(renaming a column, transforming single table into multiple, etc.), introducing
or modifying constraints, etc.

The intent of the split is to allow invoking those safe migrations from
'expand' branch while neutron-server is running, reducing downtime needed to
upgrade the service.

For more details, see the `Expand and Contract Scripts`_ section below.


Developers
----------

A database migration script is required when you submit a change to Neutron or
a sub-project that alters the database model definition. The migration script
is a special python file that includes code to upgrade the database to match
the changes in the model definition. Alembic will execute these scripts in
order to provide a linear migration path between revisions. The
neutron-db-manage command can be used to generate migration scripts for you to
complete. The operations in the template are those supported by the Alembic
migration library.


Script Auto-generation
~~~~~~~~~~~~~~~~~~~~~~

::

neutron-db-manage revision -m "description of revision" --autogenerate

This generates a prepopulated template with the changes needed to match the
database state with the models. You should inspect the autogenerated template
to ensure that the proper models have been altered.
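
The generated file is an ordinary Alembic revision module that you edit and
commit. A minimal sketch of its shape (the revision identifiers, table and
column names below are made up purely for illustration)::

    # .../alembic_migrations/versions/123456abcdef_add_foo_flag.py

    from alembic import op
    import sqlalchemy as sa

    # Hypothetical revision identifiers filled in by the template.
    revision = '123456abcdef'
    down_revision = 'abcdef123456'


    def upgrade():
        # Example expansion-only change: add a nullable column.
        op.add_column('foo_table', sa.Column('flag', sa.Boolean(), nullable=True))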

In rare circumstances, you may want to start with an empty migration template
and manually author the changes necessary for an upgrade. You can create a
blank file via::

neutron-db-manage revision -m "description of revision"

The timeline on each alembic branch should remain linear and not interleave
with other branches, so that there is a clear path when upgrading. To verify
that alembic branches maintain linear timelines, you can run this command::

neutron-db-manage check_migration

If this command reports an error, you can troubleshoot by showing the migration
timelines using the ``history`` command::

neutron-db-manage history


Expand and Contract Scripts
~~~~~~~~~~~~~~~~~~~~~~~~~~~

Under the obsolete "branchless" design, a migration script indicates a
specific "version" of the schema and includes directives that apply all
necessary changes to the database at once. If we look for example at the
script ``2d2a8a565438_hierarchical_binding.py``, we will see::

# .../alembic_migrations/versions/2d2a8a565438_hierarchical_binding.py

def upgrade():

    # .. inspection code ...

    op.create_table(
        'ml2_port_binding_levels',
        sa.Column('port_id', sa.String(length=36), nullable=False),
        sa.Column('host', sa.String(length=255), nullable=False),
        # ... more columns ...
    )

    for table in port_binding_tables:
        op.execute((
            "INSERT INTO ml2_port_binding_levels "
            "SELECT port_id, host, 0 AS level, driver, segment AS segment_id "
            "FROM %s "
            "WHERE host <> '' "
            "AND driver <> '';"
        ) % table)

    op.drop_constraint(fk_name_dvr[0], 'ml2_dvr_port_bindings', 'foreignkey')
    op.drop_column('ml2_dvr_port_bindings', 'cap_port_filter')
    op.drop_column('ml2_dvr_port_bindings', 'segment')
    op.drop_column('ml2_dvr_port_bindings', 'driver')

    # ... more DROP instructions ...

The above script contains directives that are both under the "expand"
and "contract" categories, as well as some data migrations. The ``op.create_table``
directive is an "expand"; it may be run safely while the old version of the
application still runs, as the old code simply doesn't look for this table.
The ``op.drop_constraint`` and ``op.drop_column`` directives are
"contract" directives (the drop column more so than the drop constraint); running
at least the ``op.drop_column`` directives means that the old version of the
application will fail, as it will attempt to access these columns which no longer
exist.

The data migrations in this script are adding new
rows to the newly added ``ml2_port_binding_levels`` table.

Under the new migration script directory structure, the above script would be
stated as two scripts; an "expand" and a "contract" script::

# expansion operations
# .../alembic_migrations/versions/liberty/expand/2bde560fc638_hierarchical_binding.py

def upgrade():

    op.create_table(
        'ml2_port_binding_levels',
        sa.Column('port_id', sa.String(length=36), nullable=False),
        sa.Column('host', sa.String(length=255), nullable=False),
        # ... more columns ...
    )


# contraction operations
# .../alembic_migrations/versions/liberty/contract/4405aedc050e_hierarchical_binding.py

def upgrade():

    for table in port_binding_tables:
        op.execute((
            "INSERT INTO ml2_port_binding_levels "
            "SELECT port_id, host, 0 AS level, driver, segment AS segment_id "
            "FROM %s "
            "WHERE host <> '' "
            "AND driver <> '';"
        ) % table)

    op.drop_constraint(fk_name_dvr[0], 'ml2_dvr_port_bindings', 'foreignkey')
    op.drop_column('ml2_dvr_port_bindings', 'cap_port_filter')
    op.drop_column('ml2_dvr_port_bindings', 'segment')
    op.drop_column('ml2_dvr_port_bindings', 'driver')

    # ... more DROP instructions ...

The two scripts would be present in different subdirectories and also part of
entirely separate versioning streams. The "expand" operations are in the
"expand" script, and the "contract" operations are in the "contract" script.

For the time being, data migration rules also belong to the contract branch.
The expectation is that live data migrations will eventually move into
middleware that is aware of the different database schema elements to converge
on, but Neutron is not there yet.

Scripts that contain only expansion or contraction rules do not require a split
into two parts.

If a contraction script depends on a script from expansion stream, the
following directive should be added in the contraction script::

depends_on = ('<expansion-revision>',)


Applying database migration rules
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To apply just expansion rules, execute::

neutron-db-manage upgrade expand@head

After the first step is done, you can stop neutron-server, apply remaining
non-expansive migration rules, if any::

neutron-db-manage upgrade contract@head

and finally, start your neutron-server again.

If you are not interested in applying safe migration rules while the service is
running, you can still upgrade database the old way, by stopping the service,
and then applying all available rules::

neutron-db-manage upgrade head[s]

It will apply all the rules from both the expand and the contract branches, in
proper order.

doc/source/devref/callbacks.rst (+18, -0)

@@ -300,6 +300,14 @@ The output is:
FAQ
===

Can I use the callbacks registry to subscribe and notify non-core resources and events?

Short answer is yes. The callbacks module defines literals for what are considered core Neutron
resources and events. However, the ability to subscribe/notify is not limited to these as you
can use your own defined resources and/or events. Just make sure you use string literals, as
typos are common, and the registry does not provide any runtime validation. Therefore, make
sure you test your code!
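
As a quick sketch (the resource and event literals below are made up; only the
``subscribe``/``notify`` calls come from the callbacks registry described in
this document)::

    from neutron.callbacks import registry

    def callback(resource, event, trigger, **kwargs):
        print('%s notified %s for %s' % (trigger, event, resource))

    # 'my-resource' and 'my-event' are user-defined string literals, not
    # entries from neutron.callbacks.resources/events.
    registry.subscribe(callback, 'my-resource', 'my-event')
    registry.notify('my-resource', 'my-event', 'some-trigger')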

What is the relationship between Callbacks and Taskflow?

There is no overlap between Callbacks and Taskflow or mutual exclusion; as matter of fact they
@@ -315,6 +323,16 @@ Is there any ordering guarantee during notifications?
notified. Priorities can be a future extension, if a use case arises that require enforced
ordering.

How is the notifying object expected to interact with the subscribing objects?

The ``notify`` method implements a one-way communication paradigm: the notifier sends a message
without expecting a response back (in other words, it fires and forgets). However, due to the nature
of Python, the payload can be mutated by the subscribing objects, and this can lead to unexpected
behavior of your code, if you assume that this is the intentional design. Bear in mind that
passing by value using deepcopy was deliberately not chosen, for efficiency reasons. Having said
that, if you intend for the notifier object to expect a response, then the notifier itself would
need to act as a subscriber.

Is the registry thread-safe?

Short answer is no: it is not safe to make mutations while callbacks are being called (more


doc/source/devref/contribute.rst (+8, -6)

@@ -439,7 +439,7 @@ should take these steps to move the models for the tables out of tree.
third-party repo as is done in the neutron repo,
i.e. ``networking_foo/db/migration/alembic_migrations/versions/*.py``
#. Remove the models from the neutron repo.
#. Add the names of the removed tables to ``DRIVER_TABLES`` in
#. Add the names of the removed tables to ``REPO_FOO_TABLES`` in
``neutron/db/migration/alembic_migrations/external.py`` (this is used for
testing, see below).

@@ -452,7 +452,7 @@ DB Model/Migration Testing
~~~~~~~~~~~~~~~~~~~~~~~~~~

Here is a `template functional test
<https://bugs.launchpad.net/neutron/+bug/1470678>`_ (TODO:Ann) third-party
<http://docs.openstack.org/developer/neutron/devref/template_model_sync_test.html>`_ third-party
maintainers can use to develop tests for model-vs-migration sync in their
repos. It is recommended that each third-party CI sets up such a test, and runs
it regularly against Neutron master.
@@ -461,7 +461,7 @@ Liberty Steps
+++++++++++++

The model_sync test will be updated to ignore the models that have been moved
out of tree. A ``DRIVER_TABLES`` list will be maintained in
out of tree. ``REPO_FOO_TABLES`` lists will be maintained in
``neutron/db/migration/alembic_migrations/external.py``.


@@ -520,9 +520,11 @@ the installer to configure this item in the ``[default]`` section. For example::
interface_driver = networking_foo.agent.linux.interface.FooInterfaceDriver

**ToDo: Interface Driver port bindings.**
These are currently defined by the ``VIF_TYPES`` in
``neutron/extensions/portbindings.py``. We could make this config-driven
for agents. For Nova, selecting the VIF driver can be done outside of
``VIF_TYPE_*`` constants in ``neutron/extensions/portbindings.py`` should be
moved from neutron core to the repositories where their drivers are
implemented. We need to provide some config or hook mechanism for VIF types
to be registered by external interface drivers. For Nova, selecting the VIF
driver can be done outside of
Neutron (using the new `os-vif python library
<https://review.openstack.org/193668>`_?). Armando and Akihiro to discuss.



doc/source/devref/db_layer.rst (+4, -143)

@@ -23,150 +23,11 @@ should also be added in model. If default value in database is not needed,
business logic.


How we manage database migration rules
--------------------------------------
Database migrations
-------------------

Since Liberty, Neutron maintains two parallel alembic migration branches.

The first one, called 'expand', is used to store expansion-only migration
rules. Those rules are strictly additive and can be applied while
neutron-server is running. Examples of additive database schema changes are:
creating a new table, adding a new table column, adding a new index, etc.

The second branch, called 'contract', is used to store those migration rules
that are not safe to apply while neutron-server is running. Those include:
column or table removal, moving data from one part of the database into another
(renaming a column, transforming single table into multiple, etc.), introducing
or modifying constraints, etc.

The intent of the split is to allow invoking those safe migrations from
'expand' branch while neutron-server is running, reducing downtime needed to
upgrade the service.

To apply just expansion rules, execute:

- neutron-db-manage upgrade liberty_expand@head

After the first step is done, you can stop neutron-server, apply remaining
non-expansive migration rules, if any:

- neutron-db-manage upgrade liberty_contract@head

and finally, start your neutron-server again.

If you are not interested in applying safe migration rules while the service is
running, you can still upgrade database the old way, by stopping the service,
and then applying all available rules:

- neutron-db-manage upgrade head[s]

It will apply all the rules from both the expand and the contract branches, in
proper order.


Expand and Contract Scripts
---------------------------

The obsolete "branchless" design of a migration script included that it
indicates a specific "version" of the schema, and includes directives that
apply all necessary changes to the database at once. If we look for example at
the script ``2d2a8a565438_hierarchical_binding.py``, we will see::

# .../alembic_migrations/versions/2d2a8a565438_hierarchical_binding.py

def upgrade():

    # .. inspection code ...

    op.create_table(
        'ml2_port_binding_levels',
        sa.Column('port_id', sa.String(length=36), nullable=False),
        sa.Column('host', sa.String(length=255), nullable=False),
        # ... more columns ...
    )

    for table in port_binding_tables:
        op.execute((
            "INSERT INTO ml2_port_binding_levels "
            "SELECT port_id, host, 0 AS level, driver, segment AS segment_id "
            "FROM %s "
            "WHERE host <> '' "
            "AND driver <> '';"
        ) % table)

    op.drop_constraint(fk_name_dvr[0], 'ml2_dvr_port_bindings', 'foreignkey')
    op.drop_column('ml2_dvr_port_bindings', 'cap_port_filter')
    op.drop_column('ml2_dvr_port_bindings', 'segment')
    op.drop_column('ml2_dvr_port_bindings', 'driver')

    # ... more DROP instructions ...

The above script contains directives that are both under the "expand"
and "contract" categories, as well as some data migrations. the ``op.create_table``
directive is an "expand"; it may be run safely while the old version of the
application still runs, as the old code simply doesn't look for this table.
The ``op.drop_constraint`` and ``op.drop_column`` directives are
"contract" directives (the drop column moreso than the drop constraint); running
at least the ``op.drop_column`` directives means that the old version of the
application will fail, as it will attempt to access these columns which no longer
exist.

The data migrations in this script are adding new
rows to the newly added ``ml2_port_binding_levels`` table.

Under the new migration script directory structure, the above script would be
stated as two scripts; an "expand" and a "contract" script::

# expansion operations
# .../alembic_migrations/versions/liberty/expand/2bde560fc638_hierarchical_binding.py

def upgrade():

    op.create_table(
        'ml2_port_binding_levels',
        sa.Column('port_id', sa.String(length=36), nullable=False),
        sa.Column('host', sa.String(length=255), nullable=False),
        # ... more columns ...
    )


# contraction operations
# .../alembic_migrations/versions/liberty/contract/4405aedc050e_hierarchical_binding.py

def upgrade():

    for table in port_binding_tables:
        op.execute((
            "INSERT INTO ml2_port_binding_levels "
            "SELECT port_id, host, 0 AS level, driver, segment AS segment_id "
            "FROM %s "
            "WHERE host <> '' "
            "AND driver <> '';"
        ) % table)

    op.drop_constraint(fk_name_dvr[0], 'ml2_dvr_port_bindings', 'foreignkey')
    op.drop_column('ml2_dvr_port_bindings', 'cap_port_filter')
    op.drop_column('ml2_dvr_port_bindings', 'segment')
    op.drop_column('ml2_dvr_port_bindings', 'driver')

    # ... more DROP instructions ...

The two scripts would be present in different subdirectories and also part of
entirely separate versioning streams. The "expand" operations are in the
"expand" script, and the "contract" operations are in the "contract" script.

For the time being, data migration rules also belong to contract branch. There
is expectation that eventually live data migrations move into middleware that
will be aware about different database schema elements to converge on, but
Neutron is still not there.

Scripts that contain only expansion or contraction rules do not require a split
into two parts.

If a contraction script depends on a script from expansion stream, the
following directive should be added in the contraction script::

depends_on = ('<expansion-revision>',)
For details on the neutron-db-manage wrapper and alembic migrations, see
`Alembic Migrations <alembic_migrations.html>`_.


Tests to verify that database migrations and models are in sync


doc/source/devref/fullstack_testing.rst (+16, -13)

@@ -28,20 +28,23 @@ Why?
----

The idea behind "fullstack" testing is to fill a gap between unit + functional
tests and Tempest. Tempest tests are expensive to run, difficult to run in
a multi node environment, and are often very high level and provide little
indication to what is wrong, only that something is wrong. Developers further
benefit from full stack testing as it can sufficiently simulate a real
environment and provide a rapidly reproducible way to verify code as you're
still writing it.
tests and Tempest. Tempest tests are expensive to run, and operate only
through the REST API. So the explanation they can provide of what went wrong
is limited to what gets reported to an end user via the REST API, which is
often too high level.
Additionally, Tempest requires an OpenStack deployment to be run against, which
can be difficult to configure and set up. The full stack testing addresses
these issues by taking care of the deployment itself, according to the topology
that the test requires. Developers further benefit from full stack testing as
it can sufficiently simulate a real environment and provide a rapidly
reproducible way to verify code as you're still writing it.

How?
----

Full stack tests set up their own Neutron processes (Server & agents). They
assume a working Rabbit and MySQL server before the run starts. Instructions
on how to run fullstack tests on a VM are available at TESTING.rst:
http://git.openstack.org/cgit/openstack/neutron/tree/TESTING.rst
on how to run fullstack tests on a VM are available in our
`TESTING.rst. <development.environment.html#id2>`_

Each test defines its own topology (What and how many servers and agents should
be running).
@@ -52,10 +55,10 @@ through the API and then assert that a namespace was created for it.

Full stack tests run in the Neutron tree with Neutron resources alone. You
may use the Neutron API (The Neutron server is set to NOAUTH so that Keystone
is out of the picture). instances may be simulated with a helper class that
contains a container-like object in its own namespace and IP address. It has
helper methods to send different kinds of traffic. The "instance" may be
connected to br-int or br-ex, to simulate internal or external traffic.
is out of the picture). VMs may be simulated with a container-like class:
neutron.tests.fullstack.resources.machine.FakeFullstackMachine.
An example of its usage may be found at:
neutron/tests/fullstack/test_connectivity.py.

Full stack testing can simulate multi node testing by starting an agent
multiple times. Specifically, each node would have its own copy of the
@@ -63,7 +66,7 @@ OVS/DHCP/L3 agents, all configured with the same "host" value. Each OVS agent
is connected to its own pair of br-int/br-ex, and those bridges are then
interconnected.

.. image:: images/fullstack-multinode-simulation.png
.. image:: images/fullstack_multinode_simulation.png

When?
-----


doc/source/devref/images/fullstack-multinode-simulation.png (BIN, 611x608, 29 KiB)

doc/source/devref/images/fullstack_multinode_simulation.png (BIN, 618x635, 31 KiB)

doc/source/devref/index.rst (+7, -0)

@@ -43,7 +43,9 @@ Programming HowTos and Tutorials
contribute
neutron_api
sub_projects
sub_project_guidelines
client_command_extensions
alembic_migrations


Neutron Internals
@@ -53,12 +55,15 @@ Neutron Internals

services_and_agents
api_layer
quota
api_extensions
plugin-api
db_layer
rpc_api
rpc_callbacks
layer3
l2_agents
quality_of_service
advanced_services
oslo-incubator
callbacks
@@ -70,6 +75,8 @@ Testing
:maxdepth: 3

fullstack_testing
testing_coverage
template_model_sync_test

Module Reference
----------------


doc/source/devref/layer3.rst (+5, -5)

@@ -50,7 +50,7 @@ Neutron logical network setup
Neutron logical router setup
----------------------------

* http://docs.openstack.org/admin-guide-cloud/content/ch_networking.html#under_the_hood_openvswitch_scenario1_network
* http://docs.openstack.org/networking-guide/scenario_legacy_ovs.html


::
@@ -147,7 +147,7 @@ Neutron Routers are realized in OpenVSwitch
Finding the router in ip/ipconfig
---------------------------------

* http://docs.openstack.org/admin-guide-cloud/content/ch_networking.html
* http://docs.openstack.org/admin-guide-cloud/networking.html

The neutron-l3-agent uses the Linux IP stack and iptables to perform L3 forwarding and NAT.
In order to support multiple routers with potentially overlapping IP addresses, neutron-l3-agent
@@ -189,11 +189,11 @@ For example::
Provider Networking
-------------------

Neutron can also be configured to create `provider networks <http://docs.openstack.org/admin-guide-cloud/content/ch_networking.html#provider_terminology>`_
Neutron can also be configured to create `provider networks <http://docs.openstack.org/admin-guide-cloud/networking_adv-features.html#provider-networks>`_

Further Reading
---------------
* `Packet Pushers - Neutron Network Implementation on Linux <http://packetpushers.net/openstack-neutron-network-implementation-in-linux/>`_
* `OpenStack Cloud Administrator Guide <http://docs.openstack.org/admin-guide-cloud/content/ch_networking.html>`_
* `Packet Pushers - Neutron Network Implementation on Linux <http://packetpushers.net/openstack-quantum-network-implementation-in-linux/>`_
* `OpenStack Cloud Administrator Guide <http://docs.openstack.org/admin-guide-cloud/networking.html>`_
* `Neutron - Layer 3 API extension usage guide <http://docs.openstack.org/api/openstack-network/2.0/content/router_ext.html>`_
* `Darragh O'Reilly - The Quantum L3 router and floating IPs <http://techbackground.blogspot.com/2013/05/the-quantum-l3-router-and-floating-ips.html>`_

doc/source/devref/linuxbridge_agent.rst (+2, -2)

@@ -6,8 +6,8 @@ This Agent uses the `Linux Bridge
<http://www.linuxfoundation.org/collaborate/workgroups/networking/bridge>`_ to
provide L2 connectivity for VM instances running on the compute node to the
public network. A graphical illustration of the deployment can be found in
`OpenStack Admin Guide Linux Bridge
<http://docs.openstack.org/admin-guide-cloud/content/under_the_hood_linuxbridge.html>`_
`Networking Guide
<http://docs.openstack.org/networking-guide/scenario_legacy_lb.html>`_

In most common deployments, there is a compute and a network node. On both the
compute and the network node, the Linux Bridge Agent will manage virtual


doc/source/devref/openvswitch_agent.rst (+11, -1)

@@ -26,7 +26,6 @@ GRE Tunneling is documented in depth in the `Networking in too much
detail <http://openstack.redhat.com/Networking_in_too_much_detail>`_
by RedHat.


VXLAN Tunnels
-------------

@@ -35,6 +34,16 @@ at layer 2 into a UDP header.
More information can be found in `The VXLAN wiki page.
<http://en.wikipedia.org/wiki/Virtual_Extensible_LAN>`_

Geneve Tunnels
--------------

Geneve uses UDP as its transport protocol and is dynamic
in size using extensible option headers.
It is important to note that Geneve is currently supported only by
newer kernels (kernel >= 3.18, OVS version >= 2.4).
More information can be found in the `Geneve RFC document
<https://tools.ietf.org/html/draft-ietf-nvo3-geneve-00>`_.


Bridge Management
-----------------
@@ -71,6 +80,7 @@ future to support existing VLAN-tagged traffic (coming from NFV VMs
for instance) and/or to deal with potential QinQ support natively
available in the Open vSwitch.


Further Reading
---------------



doc/source/devref/quality_of_service.rst (+357, -0)

@@ -0,0 +1,357 @@
==================
Quality of Service
==================

The Quality of Service advanced service is designed as a service plugin. The
service is decoupled from the rest of Neutron code on multiple levels (see
below).

QoS extends core resources (ports, networks) without using mixins inherited
from plugins but through an ml2 extension driver.

Details about the DB models, API extension, and use cases can be found here:
`qos spec <http://specs.openstack.org/openstack/neutron-specs/specs/liberty/qos-api-extension.html>`_.

Service side design
===================
* neutron.extensions.qos:
base extension + API controller definition. Note that rules are subattributes
of policies and hence embedded into their URIs.

* neutron.services.qos.qos_plugin:
QoSPlugin, service plugin that implements 'qos' extension, receiving and
handling API calls to create/modify policies and rules.

* neutron.services.qos.notification_drivers.manager:
the manager that passes object notifications down to every enabled
notification driver.

* neutron.services.qos.notification_drivers.qos_base:
the interface class for pluggable notification drivers that are used to
update backends about new {create, update, delete} events on any rule or
policy change.

* neutron.services.qos.notification_drivers.message_queue:
MQ-based reference notification driver which updates agents via messaging
bus, using `RPC callbacks <rpc_callbacks.html>`_.

* neutron.core_extensions.base:
Contains an interface class to implement core resource (port/network)
extensions. Core resource extensions are then easily integrated into
interested plugins. We may need to have a core resource extension manager
that would utilize those extensions, to avoid plugin modifications for every
new core resource extension.

* neutron.core_extensions.qos:
Contains QoS core resource extension that conforms to the interface described
above.

* neutron.plugins.ml2.extensions.qos:
Contains ml2 extension driver that handles core resource updates by reusing
the core_extensions.qos module mentioned above. In the future, we would like
to see a plugin-agnostic core resource extension manager that could be
integrated into other plugins with ease.


Supported QoS rule types
------------------------

Any plugin or Ml2 mechanism driver can claim support for some QoS rule types by
providing a plugin/driver class property called 'supported_qos_rule_types' that
should return a list of strings that correspond to QoS rule types (for the list
of all rule types, see: neutron.extensions.qos.VALID_RULE_TYPES).

In the simplest case, the property can be a simple Python list defined on the
class.
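
A minimal sketch (the driver class name is hypothetical, and 'bandwidth_limit'
is assumed here to be one of the valid rule type strings)::

    # Illustrative only: 'FooMechanismDriver' is not a real Neutron class.
    class FooMechanismDriver(object):

        # Must be a subset of neutron.extensions.qos.VALID_RULE_TYPES.
        supported_qos_rule_types = ['bandwidth_limit']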

For Ml2 plugin, the list of supported QoS rule types is defined as a common
subset of rules supported by all active mechanism drivers.

Note: the list of supported rule types reported by core plugin is not enforced
when accessing QoS rule resources. This is mostly because then we would not be
able to create any rules while at least one ml2 driver in gate lacks support
for QoS (at the moment of writing, linuxbridge is such a driver).


Database models
---------------

QoS design defines the following two conceptual resources to apply QoS rules
for a port or a network:

* QoS policy
* QoS rule (type specific)

Each QoS policy contains zero or more QoS rules. A policy is then applied to a
network or a port, making all rules of the policy applied to the corresponding
Neutron resource (for a network, applying a policy means that the policy will
be applied to all ports that belong to it).

From the database point of view, the following objects are defined in the schema:

* QosPolicy: directly maps to the conceptual policy resource.
* QosNetworkPolicyBinding, QosPortPolicyBinding: defines attachment between a
Neutron resource and a QoS policy.
* QosBandwidthLimitRule: defines the only rule type available at the moment.


All database models are defined under:

* neutron.db.qos.models


QoS versioned objects
---------------------

There is a long history of passing database dictionaries directly into business
logic of Neutron. This path is not the one we wanted to take for the QoS effort, so
we've also introduced a new objects middleware to encapsulate the database logic
from the rest of the Neutron code that works with QoS resources. For this, we've
adopted oslo.versionedobjects library and introduced a new NeutronObject class
that is a base for all other objects that will belong to the middle layer.
There is an expectation that Neutron will evolve into using objects for all
resources it handles, though that part was obviously out of scope for the QoS
effort.

Every NeutronObject supports the following operations:

* get_by_id: returns specific object that is represented by the id passed as an
argument.
* get_objects: returns all objects of the type, potentially with a filter
applied.
* create/update/delete: usual persistence operations.

Base object class is defined in:

* neutron.objects.base

For QoS, new neutron objects were implemented:

* QosPolicy: directly maps to the conceptual policy resource, as defined above.
* QosBandwidthLimitRule: class that represents the only rule type supported by
initial QoS design.

Those are defined in:

* neutron.objects.qos.policy
* neutron.objects.qos.rule

For QosPolicy neutron object, the following public methods were implemented:

* get_network_policy/get_port_policy: returns a policy object that is attached
to the corresponding Neutron resource.
* attach_network/attach_port: attach a policy to the corresponding Neutron
resource.
* detach_network/detach_port: detach a policy from the corresponding Neutron
resource.

In addition to the fields that belong to QoS policy database object itself,
synthetic fields were added to the object that represent lists of rules that
belong to the policy. To get a list of all rules for a specific policy, a
consumer of the object can just access the corresponding attribute via:

* policy.rules
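
A rough usage sketch combining the methods above (``context`` and the network
id are placeholders to be supplied by the caller)::

    from neutron.objects.qos import policy as policy_object

    def dump_network_policy(context, network_id):
        # Fetch the policy attached to the network, if any.
        policy = policy_object.QosPolicy.get_network_policy(context, network_id)
        if policy is None:
            return
        # 'rules' is the synthetic field described above; all rules are
        # fetched eagerly together with the policy.
        for rule in policy.rules:
            print(rule)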

Implementation is done in a way that will allow adding a new rule list field
with little or no modifications in the policy object itself. This is achieved
by smart introspection of existing available rule object definitions and
automatic definition of those fields on the policy class.

Note that rules are loaded in a non lazy way, meaning they are all fetched from
the database on policy fetch.

For Qos<type>Rule objects, an extendable approach was taken to allow easy
addition of objects for new rule types. To accommodate this, fields common to
all types are put into a base class called QosRule that is then inherited into
type-specific rule implementations that, ideally, only define additional fields
and some other minor things.

Note that the QosRule base class is not registered with oslo.versionedobjects
registry, because it's not expected that 'generic' rules should be
instantiated (and to suggest just that, the base rule class is marked as ABC).

QoS objects rely on some primitive database API functions that are added in:

* neutron.db.api: those can be reused to fetch other models that do not have
corresponding versioned objects yet, if needed.
* neutron.db.qos.api: contains database functions that are specific to QoS
models.


RPC communication
-----------------
Details on RPC communication implemented in reference backend driver are
discussed in `a separate page <rpc_callbacks.html>`_.

One thing that should be mentioned here explicitly is that RPC callback
endpoints communicate using real versioned objects (as defined by serialization
for oslo.versionedobjects library), not vague json dictionaries. Meaning,
oslo.versionedobjects are on the wire and not just used internally inside a
component.

One more thing to note is that though RPC interface relies on versioned
objects, it does not yet rely on versioning features the oslo.versionedobjects
library provides. This is because Liberty is the first release where we start
using the RPC interface, so we have no way to get different versions in a
cluster. That said, the versioning strategy for QoS is thought through and
described in `the separate page <rpc_callbacks.html>`_.

There is an expectation that after RPC callbacks are introduced in Neutron, we
will be able to migrate propagation from server to agents for other resources
(e.g. security groups) to the new mechanism. This will need to wait until those
resources get proper NeutronObject implementations.

The flow of updates is as follows:

* if a port that is bound to the agent is attached to a QoS policy, then ML2
plugin detects the change by relying on ML2 QoS extension driver, and
notifies the agent about a port change. The agent proceeds with the
notification by calling to get_device_details() and getting the new port dict
that contains a new qos_policy_id. Each device details dict is passed into l2
agent extension manager that passes it down into every enabled extension,
including QoS. QoS extension sees that there is a new unknown QoS policy for
a port, so it uses ResourcesPullRpcApi to fetch the current state of the
policy (with all the rules included) from the server (a rough sketch of this
pull follows the list below). After that, the QoS extension applies the rules
by calling into the QoS driver that corresponds to the agent.
* on existing QoS policy update (it includes any policy or its rules change),
server pushes the new policy object state through ResourcesPushRpcApi
interface. The interface fans out the serialized (dehydrated) object to any
agent that is listening for QoS policy updates. If an agent has seen the
policy before (it is attached to one of the ports it maintains), then it
applies the updates to the port. Otherwise, the agent silently ignores the
update.
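
A rough sketch of the pull step from the first bullet (the resource type
constant and the ``pull()`` signature are assumptions based on the modules
added in this change, neutron.api.rpc.handlers.resources_rpc and
neutron.api.rpc.callbacks.resources)::

    from neutron.api.rpc.callbacks import resources
    from neutron.api.rpc.handlers import resources_rpc

    def fetch_policy(context, qos_policy_id):
        # Ask the server for the current, fully hydrated QoS policy object.
        pull_api = resources_rpc.ResourcesPullRpcApi()
        return pull_api.pull(context, resources.QOS_POLICY, qos_policy_id)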


Agent side design
=================

To ease code reusability between agents and to avoid the need to patch an agent
for each new core resource extension, pluggable L2 agent extensions were
introduced. They can be especially interesting to third parties that don't want
to maintain their code in the Neutron tree.

Extensions are meant to receive handle_port events, and do whatever they need
with them.

* neutron.agent.l2.agent_extension:
This module defines an abstract extension interface.

* neutron.agent.l2.extensions.manager:
This module contains a manager that allows to register multiple extensions,
and passes handle_port events down to all enabled extensions.

* neutron.agent.l2.extensions.qos:
  This module defines the QoS L2 agent extension. It receives handle_port and
  delete_port events and passes them down into the QoS agent backend driver
  (see below). The file also defines the QosAgentDriver interface. Note: each
  backend implements its own driver. The driver handles low level interaction
  with the underlying networking technology, while the QoS extension handles
  operations that are common to all agents. A minimal extension sketch is
  shown after this list.
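
The sketch referenced above; the exact abstract base class and method
signatures live in neutron.agent.l2.agent_extension and may differ slightly
from this illustration:

::

    # Illustrative only: a real extension subclasses the abstract
    # interface defined in neutron.agent.l2.agent_extension.
    class LoggingExtension(object):

        def initialize(self):
            # One-time setup: RPC clients, backend drivers, etc.
            pass

        def handle_port(self, context, port):
            # Called for every device details dict the agent processes.
            pass

        def delete_port(self, context, port):
            # Called when a port is removed from the agent.
            pass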


Agent backends
--------------

At the moment, QoS is supported by Open vSwitch and SR-IOV ml2 drivers.

Each agent backend defines a QoS driver that implements the QosAgentDriver
interface:

* Open vSwitch (QosOVSAgentDriver);
* SR-IOV (QosSRIOVAgentDriver).


Open vSwitch
~~~~~~~~~~~~

Open vSwitch implementation relies on the new ovs_lib OVSBridge functions:

* get_egress_bw_limit_for_port
* create_egress_bw_limit_for_port
* delete_egress_bw_limit_for_port

An egress bandwidth limit is effectively configured on the port by setting
the port Interface parameters ingress_policing_rate and
ingress_policing_burst.
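
A hedged usage sketch of these functions, assuming they take a port name, a
rate in kbps and a burst in kbps (the exact signatures should be checked
against neutron.agent.common.ovs_lib):

::

    from neutron.agent.common import ovs_lib

    br = ovs_lib.OVSBridge('br-int')
    # Limit egress traffic of 'tap0' to 1 Mbps with a 100 kbit burst.
    br.create_egress_bw_limit_for_port('tap0', 1000, 100)
    print(br.get_egress_bw_limit_for_port('tap0'))
    br.delete_egress_bw_limit_for_port('tap0')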

That approach is less flexible than linux-htb, queues and OVS QoS profiles,
which we may explore in the future, but which would need to be used in
combination with OpenFlow rules.

SR-IOV
~~~~~~

SR-IOV bandwidth limit implementation relies on the new pci_lib function:

* set_vf_max_rate

As the name of the function suggests, the limit is applied on a Virtual
Function (VF).

The ip link interface has the following limitation for bandwidth limits: it
uses Mbps as the unit of bandwidth measurement, not kbps, and does not support
floating point numbers. So if the limit is set to something less than 1000
kbps, it is set to 1 Mbps. If the limit does not divide evenly into 1000 kbps
chunks, the effective limit is rounded to the nearest integer Mbps value.
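
The resulting behaviour can be illustrated with a small sketch (this mirrors
the description above, not the actual pci_lib code):

::

    def effective_limit_mbps(limit_kbps):
        # ip link accepts whole Mbps only: anything below 1000 kbps becomes
        # 1 Mbps, other values are rounded to the nearest integer Mbps.
        return max(1, int(round(limit_kbps / 1000.0)))

    effective_limit_mbps(500)   # -> 1
    effective_limit_mbps(1400)  # -> 1
    effective_limit_mbps(1600)  # -> 2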


Configuration
=============

To enable the service, follow these steps (an example configuration is shown
below):

On server side:

* enable qos service in service_plugins;
* set the needed notification_drivers in [qos] section (message_queue is the default);
* for ml2, add 'qos' to extension_drivers in [ml2] section.

On agent side (OVS):

* add 'qos' to extensions in [agent] section.
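
A configuration sketch matching the steps above (file names and any values not
mentioned above are illustrative):

::

    # neutron.conf (server)
    [DEFAULT]
    service_plugins = qos

    [qos]
    notification_drivers = message_queue

    # ml2_conf.ini (server)
    [ml2]
    extension_drivers = qos

    # Open vSwitch agent configuration (agent)
    [agent]
    extensions = qos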


Testing strategy
================

All the code added or extended as part of the effort got reasonable unit test
coverage.


Neutron objects
---------------

Base unit test classes to validate neutron objects were implemented in a way
that allows code reuse when introducing a new object type.

There are two test classes that are utilized for that:

* BaseObjectIfaceTestCase: class to validate basic object operations (mostly
CRUD) with database layer isolated.
* BaseDbObjectTestCase: class to validate the same operations with models in
place and database layer unmocked.

Every new object implemented on top of one of those classes is expected to
either inherit existing test cases as is, or reimplement them, if it makes
sense in terms of how those objects are implemented. Specific test classes can
obviously extend the set of test cases as they see fit (e.g. you need to
define new test cases for any additional methods that you add to your object
implementations on top of the base semantics common to all neutron objects).


Functional tests
----------------

Additions to ovs_lib to set bandwidth limits on ports are covered in:

* neutron.tests.functional.agent.test_ovs_lib


API tests
---------

API tests for basic CRUD operations for ports, networks, policies, and rules were added in:

* neutron.tests.api.test_qos

+ 332
- 0
doc/source/devref/quota.rst View File

@@ -0,0 +1,332 @@
================================
Quota Management and Enforcement
================================

Most resources exposed by the Neutron API are subject to quota limits.
The Neutron API exposes an extension for managing such quotas. Quota limits are
enforced at the API layer, before the request is dispatched to the plugin.

Default values for quota limits are specified in neutron.conf. Admin users
can override those default values on a per-tenant basis. Limits are stored
in the Neutron database; if no limit is found for a given resource and tenant,
then the default value for that resource is used.
Configuration-based quota management, where every tenant gets the same quota
limit specified in the configuration file, has been deprecated as of the
Liberty release.

Please note that Neutron supports neither per-user quota limits nor quota
management for hierarchical multitenancy (as a matter of fact, Neutron does
not support hierarchical multitenancy at all). Also, quota limits are
currently not enforced on RPC interfaces listening on the AMQP bus.

Plugins and ML2 drivers are not supposed to enforce quotas for the resources
they manage. However, the subnet_allocation [#]_ extension is an exception and
is discussed below.

The quota management and enforcement mechanisms discussed here apply to every
resource which has been registered with the Quota engine, regardless of
whether such resource belongs to the core Neutron API or one of its extensions.

High Level View
---------------

There are two main components in the Neutron quota system:

* The Quota API extension;
* The Quota Engine.

Both components rely on a quota driver. The neutron codebase currently defines
two quota drivers:

* neutron.db.quota.driver.DbQuotaDriver
* neutron.quota.ConfDriver

The latter driver is, however, deprecated.

The Quota API extension handles quota management, whereas the Quota Engine
component handles quota enforcement. The API extension is loaded like any
other extension. For this reason plugins must explicitly support it by
including "quotas" in the supported_extension_aliases attribute.

In the Quota API simple CRUD operations are used for managing tenant quotas.
Please note that the current behaviour when deleting a tenant quota is to reset
quota limits for that tenant to configuration defaults. The API
extension does not validate the tenant identifier with the identity service.

Performing quota enforcement is the responsibility of the Quota Engine.
RESTful API controllers, before sending a request to the plugin, try to obtain
a reservation from the quota engine for the resources specified in the client
request. If the reservation is successful, the controller proceeds to dispatch
the operation to the plugin.

For a reservation to be successful, the total amount of resources requested,
plus the total amount of resources reserved, plus the total amount of resources
already stored in the database should not exceed the tenant's quota limit.

Finally, both quota management and enforcement rely on a "quota driver" [#]_,
whose task is basically to perform database operations.

Quota Management
----------------

The quota management component is fairly straightforward.

However, unlike the vast majority of Neutron extensions, it uses its own
controller class [#]_.
This class does not implement the POST operation. List, get, update, and
delete operations are implemented by the usual index, show, update and
delete methods. These methods simply call into the quota driver to either
fetch tenant quotas or update them.

The _update_attributes method is called only once in the controller lifetime.
This method dynamically updates Neutron's resource attribute map [#]_ so that
an attribute is added for every resource managed by the quota engine.
Request authorisation is performed in this controller, and only 'admin' users
are allowed to modify quotas for tenants. As the neutron policy engine is not
used, it is not possible to configure which users should be allowed to manage
quotas using policy.json.

The driver operations dealing with quota management are:

* delete_tenant_quota, which simply removes all entries from the 'quotas'
  table for a given tenant identifier;
* update_quota_limit, which adds or updates an entry in the 'quotas' table for
  a given tenant identifier and a given resource name;
* _get_quotas, which fetches limits for a set of resources and a given tenant
  identifier;
* _get_all_quotas, which behaves like _get_quotas, but for all tenants.


Resource Usage Info
-------------------

Neutron has two ways of tracking resource usage info:

* CountableResource, where resource usage is calculated every time quota
  limits are enforced, by counting rows in the resource table and reservations
  for that resource.
* TrackedResource, which instead relies on a specific table tracking usage
  data, and performs explicit counting only when the data in this table are
  not in sync with actual used and reserved resources.

Another difference between CountableResource and TrackedResource is that the
former invokes a plugin method to count resources. CountableResource should
therefore be employed for plugins which do not leverage the Neutron database.
The actual class that the Neutron quota engine will use is determined by the
track_quota_usage variable in the quota configuration section. If True,
TrackedResource instances will be created, otherwise the quota engine will
use CountableResource instances.
Resource creation is performed by the create_resource_instance factory method
in the neutron.quota.resource module.
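
Roughly, the choice made by the factory can be pictured as follows. This is an
illustrative, self-contained sketch, not the actual code in
neutron.quota.resource; the stand-in classes only show which inputs each
resource type needs.

::

    # Illustrative stand-ins for the classes discussed above; the real ones
    # live in neutron.quota.resource.
    class TrackedResource(object):
        def __init__(self, name, model_class):
            self.name, self.model_class = name, model_class

    class CountableResource(object):
        def __init__(self, name, count_callback):
            self.name, self.count = name, count_callback

    def create_resource_instance(name, model_class, count_callback,
                                 track_quota_usage):
        # track_quota_usage mirrors the configuration option of the same
        # name in the quota configuration section.
        if track_quota_usage:
            return TrackedResource(name, model_class)
        return CountableResource(name, count_callback)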

From a performance perspective, having a table tracking resource usage
has some advantages, albeit not fundamental. Indeed the time required for
executing queries to explicitly count objects will increase with the number of
records in the table. On the other hand, using TrackedResource will fetch a
single record, but has the drawback of having to execute an UPDATE statement
once the operation is completed.
Nevertheless, CountableResource instances do not simply perform a SELECT query
on the relevant table for a resource, but invoke a plugin method, which might
execute several statements and sometimes even interacts with the backend
before returning.
Resource usage tracking also becomes important for operational correctness
when coupled with the concept of resource reservation, discussed in another
section of this chapter.

Tracking quota usage is not as simple as updating a counter every time
resources are created or deleted.
Indeed a quota-limited resource in Neutron can be created in several ways.
While a RESTful API request is the most common one, resources can be created
by RPC handlers listening on the AMQP bus, such as those which create DHCP
ports, or by plugin operations, such as those which create router ports.

To this aim, TrackedResource instances are initialised with a reference to
the model class for the resource for which they track usage data. During
object initialisation, SQLAlchemy event handlers are installed for this class.
The event handler is executed after a record is inserted or deleted.
As a result, usage data for that resource will be marked as 'dirty' once
the operation completes, so that the next time usage data is requested,
it will be synchronised by counting resource usage from the database.
Even if this solution has some drawbacks, listed in the 'exceptions and
caveats' section, it is more reliable than solutions such as:

* Updating the usage counters with the new 'correct' value every time an
operation completes.
* Having a periodic task synchronising quota usage data with actual data in
the Neutron DB.

Finally, regardless of whether CountableResource or TrackedResource is used,
the quota engine always invokes its count() method to retrieve resource usage.
Therefore, from the perspective of the Quota engine there is absolutely no
difference between CountableResource and TrackedResource.

Quota Enforcement
-----------------

**NOTE: The reservation engine is currently not wired into the API controller
as issues have been discovered with multiple workers. For more information
see bug1468134_.**

.. _bug1468134: https://bugs.launchpad.net/neutron/+bug/1486134

Before dispatching a request to the plugin, the Neutron 'base' controller [#]_
attempts to make a reservation for requested resource(s).
Reservations are made by calling the make_reservation method in
neutron.quota.QuotaEngine.
The process of making a reservation is fairly straightforward:

* Get the current resource usages. This is achieved by invoking the count
  method on every requested resource, and then retrieving the amount of
  reserved resources.
* Fetch the current quota limits for the requested resources, by invoking the
  _get_tenant_quotas method.
* Fetch expired reservations for the selected resources. This amount will be
  subtracted from resource usage. As in most cases there won't be any expired
  reservation, this approach actually requires fewer DB operations than
  summing non-expired, reserved resources for each request.
* For each resource, calculate its headroom and verify that the requested
  amount of resource does not exceed the headroom.
* If the above is true for all resources, the reservation is saved in the DB;
  otherwise an OverQuotaLimit exception is raised. A simplified sketch of this
  check follows the list.
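
A simplified sketch of the check performed for each requested resource (this
illustrates the description above, not the actual QuotaEngine code; the
exception name is the one used in this document):

::

    class OverQuotaLimit(Exception):
        """Stand-in for the exception mentioned above."""

    def check_headroom(limit, used, reserved, expired, requested):
        # Expired reservations no longer consume quota, hence they are
        # subtracted from the reserved amount.
        headroom = limit - (used + reserved - expired)
        if requested > headroom:
            raise OverQuotaLimit()
        return headroom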

The quota engine is able to make a reservation for multiple resources.
However, it is worth noting that because of the current structure of the
Neutron API layer, there will not be any practical case in which a reservation
for multiple resources is made. For this reason, performance optimisations
which avoid repeating queries for every resource are not part of the current
implementation.

In order to ensure correct operations, a row-level lock is acquired in
the transaction which creates the reservation. The lock is acquired when
reading usage data. In case of write-set certification failures,
which can occur in active/active clusters such as MySQL Galera, the decorator
oslo_db.api.wrap_db_retry will retry the transaction if a DBDeadlock
exception is raised.
While non-locking approaches are possible, it has been found that, since
non-locking algorithms increase the chances of collision, the cost of
handling a DBDeadlock is still lower than the cost of retrying the operation
when a collision is detected. A study in this direction was conducted for
IP allocation operations, but the same principles apply here as well [#]_.
Nevertheless, moving away from DB-level locks is something that must happen
for quota enforcement in the future.

Committing and cancelling a reservation is as simple as deleting the
reservation itself. When a reservation is committed, the resources which
were reserved are now stored in the database, so the reservation itself
should be deleted. The Neutron quota engine simply removes the record when
cancelling a reservation (i.e. the request failed to complete), and also
marks quota usage info as dirty when the reservation is committed (i.e.
the request completed correctly).
Reservations are committed or cancelled by calling the commit_reservation and
cancel_reservation methods in neutron.quota.QuotaEngine, respectively.

Reservations are not perennial. Eternal reservations would eventually exhaust
tenants' quotas because they would never be removed when an API worker crashes
whilst in the middle of an operation.
Reservation expiration is currently set to 120 seconds, and is not
configurable, not yet at least. Expired reservations are not counted when
calculating resource usage. While creating a reservation, if any expired
reservation is found, all expired reservations for that tenant and resource
will be removed from the database, thus avoiding the build-up of expired
reservations.

Setting up Resource Tracking for a Plugin
------------------------------------------

By default plugins do not leverage resource tracking. Having the plugin
explicitly declare which resources should be tracked is a precise design
choice aimed at limiting as much as possible the chance of introducing
errors in existing plugins.

For this reason a plugin must declare which resources it intends to track.
This can be achieved using the tracked_resources decorator available in the
neutron.quota.resource_registry module.
The decorator should ideally be applied to the plugin's __init__ method.

The decorator accepts a list of keyword arguments as input. The name of each
argument must be a resource name, and its value must be a DB model class. For
example:

::

    @resource_registry.tracked_resources(network=models_v2.Network,
                                         port=models_v2.Port,
                                         subnet=models_v2.Subnet,
                                         subnetpool=models_v2.SubnetPool)

This will ensure that network, port, subnet and subnetpool resources are
tracked. In theory, it is possible to use this decorator multiple times, and
not exclusively on __init__ methods. However, this would eventually lead to
code readability and maintainability problems, so developers are strongly
encouraged to apply this decorator exclusively to the plugin's __init__
method (or any other method which is called by the plugin only once during
its initialization).

Notes for Implementors of RPC Interfaces and RESTful Controllers
-------------------------------------------------------------------------------

Neutron unfortunately does not have a layer which is called before dispatching
the operation to the plugin and which can be leveraged both from the RESTful
and the RPC over AMQP APIs. In particular, the RPC handlers call straight into
the plugin, without doing any request authorisation or quota enforcement.

Therefore RPC handlers must explicitly indicate if they are going to call the
plugin to create or delete any sort of resources. This is achieved in a simple
way, by ensuring modified resources are marked as dirty after the RPC handler
execution terminates. To this aim developers can use the mark_resources_dirty
decorator available in the module neutron.quota.resource_registry.

The decorator will scan the whole list of registered resources and, for those
resources for which items have been created or destroyed during the plugin
operation, store the dirty status of their usage trackers in the database.
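
A hedged sketch of how an RPC handler might use the decorator (the handler
class, method and arguments are illustrative, not an existing Neutron
handler):

::

    from neutron.quota import resource_registry


    class DhcpRpcHandlerSketch(object):

        def __init__(self, plugin):
            self.plugin = plugin

        @resource_registry.mark_resources_dirty
        def create_dhcp_port(self, context, port):
            # The plugin call below creates a port; thanks to the decorator,
            # the usage tracker for 'port' is marked dirty once this method
            # returns.
            return self.plugin.create_port(context, {'port': port})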

Exceptions and Caveats
-----------------------

Please be aware of the following limitations of the quota enforcement engine:

* Subnet allocation from subnet pools, in particular shared pools, is also
  subject to quota limit checks. However these checks are not enforced by the
  quota engine, but through a mechanism implemented in the
  neutron.ipam.subnetalloc module. This is because the quota engine is not
  able to satisfy the requirements for quotas on subnet allocation.
* The quota engine also provides a limit_check routine which enforces quota
  checks without creating reservations. This way of doing quota enforcement
  is extremely unreliable and is superseded by the reservation mechanism. It
  has not been removed to ensure off-tree plugins and extensions which
  leverage it are not broken.
* SQLAlchemy events might not be the most reliable way for detecting changes
  in resource usage. Since the event mechanism monitors the data model class,
  it is paramount for correct quota enforcement that resources are always
  created and deleted using object relational mappings. For instance, deleting
  a resource with a query.delete call will not trigger the event. SQLAlchemy
  events should be considered a temporary measure adopted because Neutron
  lacks persistent API objects.
* As CountableResource instances do not track usage data, no write-intent
  lock is acquired when making a reservation. Therefore the quota engine with
  CountableResource is not concurrency-safe.
* The mechanism for specifying which resources should have usage tracking
  enabled relies on the fact that the plugin is loaded before quota-limited
  resources are registered. For this reason it is not possible to validate
  whether a resource actually exists when enabling tracking for it. Developers
  should pay particular attention to ensuring resource names are correctly
  specified.
* The code assumes usage trackers are a trusted source of truth: if they
  report a usage counter and the dirty bit is not set, that counter is
  correct; if it is dirty, then the counter is surely out of sync.
  This is not very robust, as there might be issues upon restart when toggling
  the track_quota_usage configuration variable, since stale counters might be
  relied upon when making reservations. Also, the same situation might occur
  if a server crashes after the API operation is completed but before the
  reservation is committed, as the actual resource usage has changed but
  the corresponding usage tracker is not marked as dirty.

References
----------

.. [#] Subnet allocation extension: http://git.openstack.org/cgit/openstack/neutron/tree/neutron/extensions/subnetallocation.py
.. [#] DB Quota driver class: http://git.openstack.org/cgit/openstack/neutron/tree/neutron/db/quota_db.py#n33
.. [#] Quota API extension controller: http://git.openstack.org/cgit/openstack/neutron/tree/neutron/extensions/quotasv2.py#n40
.. [#] Neutron resource attribute map: http://git.openstack.org/cgit/openstack/neutron/tree/neutron/api/v2/attributes.py#n639
.. [#] Base controller class: http://git.openstack.org/cgit/openstack/neutron/tree/neutron/api/v2/base.py#n50
.. [#] http://lists.openstack.org/pipermail/openstack-dev/2015-February/057534.html

+ 187
- 0
doc/source/devref/rpc_callbacks.rst View File

@@ -0,0 +1,187 @@
=================================
Neutron Messaging Callback System
=================================

Neutron already has a `callback system <callbacks.html>`_ for in-process
resource callbacks where publishers and subscribers are able to publish and
subscribe for resource events.

This system is different, and is intended to be used for inter-process
callbacks, via the messaging fanout mechanisms.

In Neutron, agents may need to subscribe to specific resource details which
may change over time. The purpose of this messaging callback system is to
allow agent subscription to those resources without the need to extend or
modify existing RPC calls, or to create new RPC messages.

A few resources which can benefit from this system:

* QoS policies;
* Security Groups.

Using a remote publisher/subscriber pattern, the information about such
resources could be published using fanout messages to all interested nodes,
minimizing messaging requests from agents to the server, since the agents are
subscribed for their whole lifecycle (unless they unsubscribe).

Within an agent, there can be multiple subscriber callbacks for the same
resource events; resource updates are dispatched to the subscriber callbacks
from a single message. Any update comes in a single message, requiring only a
single oslo versioned objects deserialization on each receiving agent.

This publishing/subscription mechanism is highly dependent on the format
of the resources passed around. This is why the library only allows
versioned objects to be published and subscribed. Oslo versioned objects
allow object version down/up conversion [#vo_mkcompat]_ [#vo_mkcptests]_.

For the VO's versioning schema look here: [#vo_versioning]_

The versioned_objects serialization/deserialization with the
obj_to_primitive(target_version=..) and primitive_to_obj() methods
[#ov_serdes]_ is used internally to convert/retrieve objects before/after
messaging.

Considering rolling upgrades, there are several scenarios to look at:

* publisher (generally neutron-server or a service) and subscriber (agent)
  know the same version of the objects, so they serialize and deserialize
  without issues.

* publisher knows (and sends) an older version of the object; the subscriber
  will get the object updated to the latest version on arrival, before any
  callback is called.

* publisher sends a newer version of the object; the subscriber won't be able
  to deserialize the object. The upgrade strategy described below avoids this
  situation.

The strategy for upgrades will be: during upgrades, we pin neutron-server to a
compatible version for resource fanout updates, and the server sends both the
old and the newer version. The new agents process updates, taking the newer
version of the resource fanout updates. Once the whole system is upgraded, we
un-pin the compatible version fanout.

Serialized versioned objects look like::

    {'versioned_object.version': '1.0',
     'versioned_object.name': 'QoSPolicy',
     'versioned_object.data': {'rules': [
                                   {'versioned_object.version': '1.0',
                                    'versioned_object.name': 'QoSBandwidthLimitRule',
                                    'versioned_object.data': {'name': u'a'},
                                    'versioned_object.namespace': 'versionedobjects'}
                               ],
                               'uuid': u'abcde',
                               'name': u'aaa'},
     'versioned_object.namespace': 'versionedobjects'}

Topic names for every resource type RPC endpoint
================================================

::

    neutron-vo-<resource_class_name>-<version>

In the future, we may want to get oslo messaging to support subscribing to
topics dynamically; then we may want to use::

    neutron-vo-<resource_class_name>-<resource_id>-<version>

instead, or something equivalent which would allow fine enough granularity for
the receivers to only get the information that is interesting to them.
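
For example, the topic carrying version 1.0 updates of the QoSPolicy resource
would be named 'neutron-vo-QoSPolicy-1.0'. A tiny illustrative helper (not
necessarily how the actual code builds the name):

::

    def resource_topic(resource_class_name, version):
        return 'neutron-vo-%s-%s' % (resource_class_name, version)

    resource_topic('QoSPolicy', '1.0')  # -> 'neutron-vo-QoSPolicy-1.0'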

Subscribing to resources
========================

Imagine that you have agent A, which just got to handle a new port that has
an associated security group and QoS policy.

The agent code processing port updates may look like::

    from neutron.api.rpc.callbacks.consumer import registry
    from neutron.api.rpc.callbacks import events
    from neutron.api.rpc.callbacks import resources


    def process_resource_updates(resource_type, resource, event_type):
        # send to the right handler which will update any control plane
        # details related to the updated resource...
        pass


    def subscribe_resources():
        registry.subscribe(process_resource_updates, resources.SEC_GROUP)
        registry.subscribe(process_resource_updates, resources.QOS_POLICY)


    def port_update(port):
        # here we extract sg_id and qos_policy_id from port..
        sec_group = registry.pull(resources.SEC_GROUP, sg_id)
        qos_policy = registry.pull(resources.QOS_POLICY, qos_policy_id)


The relevant function is:

* subscribe(callback, resource_type): subscribes callback to a resource type.


The callback function will receive the following arguments:

* resource_type: the type of resource which is receiving the update.
* resource: the updated resource object (one of the supported versioned object
  types).
* event_type: one of CREATED, UPDATED, or DELETED; see
  neutron.api.rpc.callbacks.events for details.

With the current oslo_messaging support for dynamic topics on the receiver,
we cannot implement a per "resource type + resource id" topic: RabbitMQ seems
to handle tens of thousands of topics without suffering, but creating hundreds
of oslo_messaging receivers on different topics seems to crash.

We may want to look into that later, to avoid agents receiving resource
updates which are of no interest to them.

Unsubscribing from resources
============================

To unsubscribe registered callbacks, the following functions are available (a
short example follows):

* unsubscribe(callback, resource_type): unsubscribe from specific resource type.
* unsubscribe_all(): unsubscribe from all resources.
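
For example, an agent tearing down could do the following (illustrative;
process_resource_updates is the callback from the subscription example
above):

::

    from neutron.api.rpc.callbacks.consumer import registry
    from neutron.api.rpc.callbacks import resources

    # Stop receiving updates for a single resource type...
    registry.unsubscribe(process_resource_updates, resources.SEC_GROUP)

    # ...or drop every registered callback at once.
    registry.unsubscribe_all()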


Sending resource events
=======================

On the server side, resource updates could come from anywhere: a service
plugin, an extension, anything that updates, creates, or destroys the
resource and that is of any interest to subscribed agents.

The server/publisher side may look like::

    from neutron.api.rpc.callbacks.producer import registry
    from neutron.api.rpc.callbacks import events

    def create_qos_policy(...):
        policy = fetch_policy(...)
        update_the_db(...)
        registry.push(policy, events.CREATED)

    def update_qos_policy(...):
        policy = fetch_policy(...)
        update_the_db(...)
        registry.push(policy, events.UPDATED)

    def delete_qos_policy(...):
        policy = fetch_policy(...)
        update_the_db(...)
        registry.push(policy, events.DELETED)


References
==========
.. [#ov_serdes] https://github.com/openstack/oslo.versionedobjects/blob/master/oslo_versionedobjects/tests/test_objects.py#L621
.. [#vo_mkcompat] https://github.com/openstack/oslo.versionedobjects/blob/master/oslo_versionedobjects/base.py#L460
.. [#vo_mkcptests] https://github.com/openstack/oslo.versionedobjects/blob/master/oslo_versionedobjects/tests/test_objects.py#L111
.. [#vo_versioning] https://github.com/openstack/oslo.versionedobjects/blob/master/oslo_versionedobjects/base.py#L236

+ 3
- 3
doc/source/devref/security_group_api.rst View File

@@ -29,7 +29,7 @@ running on the compute nodes, and modifying the IPTables rules on each hyperviso

* `Plugin RPC classes <https://git.openstack.org/cgit/openstack/neutron/tree/neutron/db/securitygroups_rpc_base.py>`_

-* `SecurityGroupServerRpcMixin <https://git.openstack.org/cgit/openstack/neutron/tree/neutron/db/securitygroups_rpc_base.py#39>`_ - defines the RPC API that the plugin uses to communicate with the agents running on the compute nodes
+* `SecurityGroupServerRpcMixin <https://git.openstack.org/cgit/openstack/neutron/tree/neutron/db/securitygroups_rpc_base.py>`_ - defines the RPC API that the plugin uses to communicate with the agents running on the compute nodes
* SecurityGroupServerRpcMixin - Defines the API methods used to fetch data from the database, in order to return responses to agents via the RPC API

* `Agent RPC classes <https://git.openstack.org/cgit/openstack/neutron/tree/neutron/agent/securitygroups_rpc.py>`_
@@ -43,8 +43,8 @@ IPTables Driver

* ``prepare_port_filter`` takes a ``port`` argument, which is a ``dictionary`` object that contains information about the port - including the ``security_group_rules``

-* ``prepare_port_filter`` `appends the port to an internal dictionary <https://git.openstack.org/cgit/openstack/neutron/tree/neutron/agent/linux/iptables_firewall.py#L60>`_, ``filtered_ports`` which is used to track the internal state.
+* ``prepare_port_filter`` appends the port to an internal dictionary, ``filtered_ports`` which is used to track the internal state.

* Each security group has a `chain <http://www.thegeekstuff.com/2011/01/iptables-fundamentals/>`_ in Iptables.

-* The ``IptablesFirewallDriver`` has a method to `convert security group rules into iptables statements <https://git.openstack.org/cgit/openstack/neutron/tree/neutron/agent/linux/iptables_firewall.py#L248>`_
+* The ``IptablesFirewallDriver`` has a method to convert security group rules into iptables statements.

+ 148
- 0
doc/source/devref/sub_project_guidelines.rst View File

@@ -0,0 +1,148 @@
..
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.


Convention for heading levels in Neutron devref:
======= Heading 0 (reserved for the title in a document)
------- Heading 1
~~~~~~~ Heading 2
+++++++ Heading 3
''''''' Heading 4
(Avoid deeper levels because they do not render well.)


Sub-Project Guidelines
======================

This document provides guidance for those who maintain projects that consume
main neutron or neutron advanced services repositories as a dependency. It is
not meant to describe projects that are not tightly coupled with Neutron code.

Code Reuse
----------

At all times, avoid using any Neutron symbols that are explicitly marked as
private (those have an underscore at the start of their names).

Oslo Incubator
~~~~~~~~~~~~~~

Don't ever reuse neutron code that comes from oslo-incubator in your
subprojects. For the neutron repository, that code is usually located under
the following path: neutron.openstack.common.*

If you need any oslo-incubator code in your repository, copy it into your
repository from oslo-incubator and then use it from there.

The Neutron team does not maintain any backwards compatibility strategy for
that code subtree and can break anyone who relies on it at any time.

Requirements
------------

Neutron dependency
~~~~~~~~~~~~~~~~~~

Subprojects usually depend on neutron repositories, using the -e git://...
schema to define such a dependency. The dependency *must not* be present in
requirements lists though, and instead belongs in the tox.ini deps section.
This is because future pbr library releases do not guarantee that
-e git://... dependencies will work.

You may still put some versioned neutron dependency in your requirements list
to indicate the dependency for anyone who packages your subproject.

Explicit dependencies
~~~~~~~~~~~~~~~~~~~~~

Each neutron project maintains its own lists of requirements. Subprojects that
depend on neutron while directly using some of those libraries that neutron
maintains as its dependencies must not rely on the fact that neutron will pull
the needed dependencies for them. Direct library usage requires that this
library is mentioned in requirements lists of the subproject.

The reason to duplicate those dependencies is that neutron team does not stick
to any backwards compatibility strategy in regards to requirements lists, and
is free to drop any of those dependencies at any time, breaking anyone who
could rely on those libraries to be pulled by neutron itself.

Automated requirements updates
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

At all times, subprojects that use neutron as a dependency should make sure
their dependencies do not conflict with neutron's ones.

Core neutron projects maintain their requirements lists by utilizing a
so-called proposal bot. To keep your subproject in sync with neutron, it is
highly recommended that you register your project in
openstack/requirements:projects.txt file to enable the bot to update
requirements for you.

Once a subproject opts into global requirements synchronization, it should
enable check-requirements jobs in project-config. For example, see `this patch
<https://review.openstack.org/#/c/215671/>`_.

Stable branches
---------------

Stable branches for libraries should be created at the same time as the
corresponding neutron stable branches. This is to avoid situations where a
postponed cut-off results in a stable branch that contains patches that
belong to the next release. This would require reverting patches, and that is
something you should avoid.

Make sure your neutron dependency uses corresponding stable branch for neutron,
not master.

Note that to keep requirements in sync with core neutron repositories in stable
branches, you should make sure that your project is registered in
openstack/requirements:projects.txt *for the branch in question*.

Subproject stable branches are supervised by horizontal `neutron-stable-maint
team <https://review.openstack.org/#/admin/groups/539,members>`_.

More info on stable branch process can be found on `the following page
<https://wiki.openstack.org/wiki/StableBranch>`_.

Releases
--------

It is suggested that sub-projects release new tarballs on PyPI from time to
time, especially for stable branches. It will make the life of packagers and
other consumers of your code easier.

It is highly suggested that you do not strip pieces of the source tree (tests,
executables, tools) before releasing on PyPI: those missing pieces may be
needed to validate the package, or make the packaging easier or more complete.
As a rule of thumb, don't strip anything from the source tree unless absolutely
necessary.

Sub-Project Release Process
~~~~~~~~~~~~~~~~~~~~~~~~~~~

To release a sub-project, follow these steps:

* Only members of the `neutron-release
<https://review.openstack.org/#/admin/groups/150,members>`_ gerrit group can
do releases. Make sure you talk to a member of neutron-release to perform
your release.
* For projects which have not moved to post-versioning, we need to push an
alpha tag to avoid pbr complaining. The neutron-release group will handle
this.
* Modify setup.cfg to remove the version (if you have one), which moves your
project to post-versioning, similar to all the other Neutron projects. You
can skip this step if you don't have a version in setup.cfg.
* Have neutron-release push the tag to gerrit.
* Have neutron-release `tag the release
  <http://docs.openstack.org/infra/manual/drivers.html#tagging-a-release>`_,
  which will release the code to PyPI.

+ 40
- 3
doc/source/devref/sub_projects.rst View File

@@ -67,6 +67,9 @@ working on testing.
By being included, the project accepts oversight by the TC as a part of
being in OpenStack, and also accepts oversight by the Neutron PTL.

It is also assumed the respective review teams will make sure their projects
stay in line with `current best practices <sub_project_guidelines.html>`_.

Inclusion Criteria
------------------

@@ -100,6 +103,10 @@ repo but are summarized here to describe the functionality they provide.