Remove BPs not in Juno.

A second pass at removing specs which did not land in Juno.

Change-Id: Ie4d4ce1f91b4a6754d2c2709ce882dc03a614094
Kyle Mestery
2014-09-09 14:12:01 +00:00
parent ed6fc9a0d8
commit 77f8c806a4
22 changed files with 0 additions and 4896 deletions


@@ -1,298 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
============================
Agent child processes status
============================
https://blueprints.launchpad.net/neutron/+spec/agent-child-processes-status
Neutron agents spawn child processes which run unmonitored; if anything
happens to those children, Neutron takes no action, failing to provide those
services reliably.
We propose monitoring those processes and taking a configurable action,
making Neutron more resilient to external failures.
Problem description
===================
When a ns-metadata-proxy dies inside an l3-agent [#liveness_bug]_,
subnets served by this ns-metadata-proxy will have no metadata until a
change is made to the router, which triggers a recheck of the metadata proxy
liveness.
The same thing happens with the dhcp-agent [#dhcp_agent_bug]_ and also
in the lbaas and vpnaas agents.
This is a long-known bug, generally triggered by bugs in dnsmasq or the
ns-metadata-proxy, and it is especially critical on big clouds and in HA
environments.
Proposed change
===============
I propose to monitor the spawned children by periodically checking the
neutron.agent.linux.external_process.ProcessManager class's .active
attribute, spawning a pool of green threads to avoid excessive locking.
This is the same approach that Brian Haley started in [#check_metadata]_.
If a process that should be active is not, the event will be logged, and we
could take any of the following admin-configured actions, in the order
specified in the configuration.
* Respawn the process.
* Notify the process manager [#oslo_service_status]_.
* Exit the agent (for use when an HA service manager is taking care of the
agent and will respawn it, optionally on a different host).
Examples of configurations could be:
* Log (implicit) and respawn
::
check_child_processes = True
check_child_processes_actions = respawn
check_child_processes_period = 60
* Log (implicit) and notify
::
check_child_processes = True
check_child_processes_actions = notify
check_child_processes_period = 60
* Log (implicit), notify, respawn
::
check_child_processes = True
check_child_processes_actions = notify, respawn
check_child_processes_period = 60
* Log (implicit), notify, exit
::
check_child_processes = True
check_child_processes_actions = notify, exit
check_child_processes_period = 60
This feature will be disabled by default, and the default
action will be 'respawn'.
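The following is a minimal, hedged sketch of the monitoring loop described
above, using eventlet green threads and a semaphore to keep check cycles from
overlapping. The ProcessMonitor class, its constructor arguments and the
action handling are illustrative assumptions drawn from this spec, not
existing Neutron code; pm.active and pm.enable() are assumed to behave like
the ProcessManager interface mentioned above.
::
import eventlet

class ProcessMonitor(object):
    """Illustrative monitor for agent child processes (not Neutron code)."""

    def __init__(self, process_managers, actions, period=60):
        self._process_managers = process_managers  # ProcessManager instances
        self._actions = actions                    # e.g. ['notify', 'respawn']
        self._period = period
        self._check_sem = eventlet.semaphore.Semaphore()

    def start(self):
        eventlet.spawn(self._periodic_check)

    def _periodic_check(self):
        while True:
            eventlet.sleep(self._period)
            # The semaphore prevents overlapping check cycles.
            if not self._check_sem.acquire(blocking=False):
                continue
            try:
                pool = eventlet.GreenPool()
                for pm in self._process_managers:
                    pool.spawn_n(self._check_one, pm)
                pool.waitall()
            finally:
                self._check_sem.release()

    def _check_one(self, pm):
        if pm.active:
            return
        # Log (implicit) and run the configured actions in order.
        for action in self._actions:
            if action == 'respawn':
                pm.enable()        # assumed to respawn with the stored config
            elif action == 'exit':
                raise SystemExit(1)
            # 'notify' would call the oslo service status driver once
            # available; omitted here.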
Alternatives
------------
* Use popen to start services in the foreground and wait on SIGCHLD
instead of polling. This would make it impossible to reattach to the
children after an agent exit or restart, because the parent detaches from
the child and cannot reattach when the agent restarts (short of using
ptrace, which would be too hackish). This is a POSIX limitation.
In our design, when an agent exits, all the underlying children
stay alive, detached from the parent, and continue to run
to make sure there is no service disruption during upgrades.
When the agent starts again, it will check /var/neutron/{$resource}/
for the pid of the child that serves each resource and its
configuration, and make sure that it is running (or restart it
otherwise). This is the point at which we cannot re-attach, or
wait [#waitpid]_ on a specific non-child PID [#waitpid_non_child]_.
* Use an intermediate daemon to start long-running processes and
monitor them via SIGCHLD as a workaround for the problems in the first
alternative. This is very similar to the soon-to-be-available
functionality in the oslo rootwrap daemon, but the rootwrap daemon will
not support long-running processes yet. The problem with this alternative
is the case where the intermediate process manager dies or gets killed;
in that case we lose control over the spawned children (which we would be
monitoring via SIGCHLD).
* Instead of periodically checking all children, spread the load
in several batches over time. That would be a more complicated
implementation, which could be addressed in a second
round or as a last work item if the initial implementation doesn't
perform as expected for a high number of resources (routers, dhcp
services, lbaas, ...).
* Initially, the notification part was planned to be implemented
within neutron itself, but the design has been modularized in
oslo with drivers for different service manager types (systemd, init.d,
upstart, ...).
Data model impact
-----------------
None
REST API impact
---------------
None
Security impact
---------------
None
Notifications impact
--------------------
None
Other end user impact
---------------------
None
Performance Impact
------------------
Some extra periodic load will be added by checking the underlying
children. Locking of other green threads will be diminished by starting
a green thread pool for checking the children. A semaphore will be introduced
to prevent several check cycles from starting concurrently.
As there were concerns about polling /proc/$pid/cmdline, I implemented a
simplistic benchmark:
::
i = 10000000
while i > 0:
    f = open('/proc/8125/cmdline', 'r')
    f.readlines()
    i = i - 1
Please note that the cmdline file is served by kernel functions from memory
and does not cause any disk I/O.
::
root@ns316109:~# time python test.py
real 0m59.836s
user 0m23.681s
sys 0m35.679s
That means ~170,000 reads/s using 1 core at 100% CPU on a 7400 bogomips
machine. If we had to check 1000 child processes we would need 1000/170000 =
0.0059 seconds, plus the overhead of the intermediate method calls and the
spawning of greenthreads.
I believe ~6 ms of CPU usage to check 1000 children is rather acceptable;
in any case, the check interval is tunable, and the feature is disabled by
default, to let deployers balance the performance impact against the
failure-detection latency.
Polling isn't ideal, but neither are the alternatives, and
we need a solution for this problem, especially for HA environments.
Other deployer impact
---------------------
None
Developer impact
----------------
None
Implementation
==============
Assignee(s)
-----------
* https://launchpad.net/~mangelajo
* https://launchpad.net/~brian-haley
Adding brian-haley as I am taking a few of his ideas and partly reusing
his work from [#check_metadata]_.
Work Items
----------
* Implement in the l3-agent, modularizing for reuse in other agents, and
implement functional testing.
* Implement in dhcp-agent, refactor to use external_process to
avoid code duplication.
* Implement in lbaas-agent
* Implement in vpnaas-agent
* Implement in any other agents that spawn children.
* Implement the notify action once that's accepted and implemented
in oslo.
Dependencies
============
The notify action depends on the implementation of [#oslo_service_status]_,
but all the other features/actions can be accomplished without it.
Testing
=======
Unit testing won't be enough, and Tempest is not capable of running arbitrary
ssh commands on hosts to kill processes remotely for this test.
We propose using functional testing to validate the proposed functionality,
e.g.:
* Create a parent process (e.g. l3-agent) responsible for launching/monitoring
a child process (e.g. neutron-ns-metadata-proxy).
* Kill the child process.
* Check that the configured actions are successfully performed (e.g. logging
and respawn) within a reasonable interval.
Documentation Impact
====================
The new configuration options will have to be documented per agent.
These are the proposed defaults:
::
check_child_processes = False
check_child_processes_actions = respawn
check_child_processes_period = 60
References
==========
.. [#dhcp_agent_bug] Dhcp agent dying children bug:
https://bugs.launchpad.net/neutron/+bug/1257524
.. [#liveness_bug] L3 agent dying children bug:
https://bugs.launchpad.net/neutron/+bug/1257775
.. [#check_metadata] Brian Haley's implementation for l3 agent
https://review.openstack.org/#/c/59997/
.. [#oslo_service_status] Oslo service manager status notification spec
http://docs-draft.openstack.org/48/97748/3/check/gate-oslo-specs-docs/ef96358/doc/build/html/specs/juno/service-status-interface.html
.. [#oslo_sn_review] Oslo spec review
https://review.openstack.org/#/c/97748/
.. [#old_agent_service_status] Old agent service status blueprint
https://blueprints.launchpad.net/neutron/+spec/agent-service-status
.. [#waitpid] http://linux.die.net/man/2/waitpid
.. [#waitpid_non_child] http://stackoverflow.com/questions/1058047/wait-for-any-process-to-finish


@@ -1,119 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
============================================================
Big Switch - Support for External Attachment Point Extension
============================================================
https://blueprints.launchpad.net/neutron/+spec/bsn-external-attachment
Add support for the external attachment point extension to the Big Switch
plugin so it can provision arbitrary ports in the fabric to be members of
Neutron networks.
Problem description
===================
Neutron lacked a way to express attachments of physical ports into Neutron
networks. For this, the external attachment point specification was
approved. [1] However, that spec only covers ML2 deployments, so the extension
will need to be included in the Big Switch plugin for users of the plugin to
gain this feature.
Proposed change
===============
Include the extension mixin in the Big Switch plugin and add the appropriate
REST calls to the backend controller to provide network access to these
external attachment points.
Alternatives
------------
N/A
Data model impact
-----------------
The Big Switch plugin will need to be included in the DB migration for the
external attachment tables.
REST API impact
---------------
The Big Switch plugin will support the external attachment REST endpoints.
Security impact
---------------
N/A
Notifications impact
--------------------
N/A
Other end user impact
---------------------
N/A
Performance Impact
------------------
N/A
Other deployer impact
---------------------
N/A
Developer impact
----------------
N/A
Implementation
==============
Assignee(s)
-----------
Primary assignee:
kevinbenton
Work Items
----------
* Include the mixin and create the methods to bind the vlan_switch type via
a REST call to the backend controller
* Add unit tests
Dependencies
============
Implementation of the extension. [1]
Testing
=======
At first this will be covered by regular unit tests and the integration test
for the OVS gateway mode. An additional 3rd party CI test will be set up
to exercise the custom provisioning code.
Documentation Impact
====================
Add mention of support of this feature for the Big Switch plugin.
References
==========
1. https://github.com/openstack/neutron-specs/blob/master/specs/juno/neutron-external-attachment-points.rst


@@ -1,146 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
============================
Big Switch - Cleanup modules
============================
https://blueprints.launchpad.net/neutron/+spec/bsn-module-cleanup
The Big Switch plugin and ML2 driver have shared code that lives inside the
Big Switch plugin module. This creates strange side effects when the plugin is
imported from the ML2 driver, because the database modules are loaded as well.
The Big Switch modules need to be separated out better to prevent these cases
and to cleanly express what is shared between the ML2 driver and the plugin.
Problem description
===================
Importing the Big Switch plugin from the Big Switch ML2 driver requires some
ugly hacks since it causes all of the Big Switch plugin DB modules to be
imported.[1] This very tight integration makes updating the ML2 driver or the
plugin a delicate process because we have to make sure one doesn't break the
other.
Proposed change
===============
The shared code paths should be removed from the Big Switch plugin module and
placed into their own. Some of the shared methods should also be refactored to
make the two use cases easier to see and customize.
Sections to be moved to a new module:
- The entire NeutronRestProxyV2Base class.[2]
Shared methods to cleanup/refactor:
- The async_port_create method.[3] The name is misleading because it's not
asynchronous. It just has the ability to be called asynchronously because it
handles failures.
- The _extend_port_dict_binding method.[4] It needs special logic to
determine whether it is being called from the driver or the plugin. There
should be a separate function for each caller's conditional parts, while the
base method contains the shared logic (see the sketch after these lists).
- Rename _get_all_subnets_json_for_network since it's not really returning
JSON.[5]
- Move imports out of methods.[9][10]
Other:
- Remove the version code since it's not used any longer.[6][7]
- Move the router rule DB module into the DB folder.[8]
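As a purely illustrative sketch of the _extend_port_dict_binding split
suggested above, the shared logic could live in a base class while each
caller overrides only its conditional part. The class and helper names below
are assumptions for illustration, not the actual Big Switch code.
::
class PortBindingBase(object):
    """Shared logic used by both the plugin and the ML2 driver."""

    def _extend_port_dict_binding(self, context, port):
        # Shared, caller-independent work happens here.
        port['binding:vif_type'] = self._get_vif_type(context, port)
        return port

    def _get_vif_type(self, context, port):
        raise NotImplementedError()


class PluginPortBinding(PortBindingBase):
    def _get_vif_type(self, context, port):
        # Conditional logic specific to the monolithic plugin.
        return 'ivs'


class DriverPortBinding(PortBindingBase):
    def _get_vif_type(self, context, port):
        # Conditional logic specific to the ML2 mechanism driver.
        return 'ovs'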
Alternatives
------------
Let the code be ugly
Data model impact
-----------------
N/A
REST API impact
---------------
N/A
Security impact
---------------
N/A
Notifications impact
--------------------
N/A
Other end user impact
---------------------
N/A
Performance Impact
------------------
N/A
Other deployer impact
---------------------
N/A
Developer impact
----------------
N/A
Implementation
==============
Assignee(s)
-----------
Primary assignee:
kevinbenton
Work Items
----------
* Separate code into modules
* Refactor shared methods between ML2 and plugin to make demarcation clear
Dependencies
============
N/A
Testing
=======
Existing tests should cover this refactor.
Documentation Impact
====================
N/A
References
==========
1. https://github.com/openstack/neutron/commit/1997cc97f14b95251ad568820160405e34a39801#diff-101742a6f187560f6c12441b8dbfb816
2. https://github.com/openstack/neutron/blob/c82e6a4d33c2a8dc51260fab4ad0cb87805b49de/neutron/plugins/bigswitch/plugin.py#L159
3. https://github.com/openstack/neutron/blob/c82e6a4d33c2a8dc51260fab4ad0cb87805b49de/neutron/plugins/bigswitch/plugin.py#L388
4. https://github.com/openstack/neutron/blob/c82e6a4d33c2a8dc51260fab4ad0cb87805b49de/neutron/plugins/bigswitch/plugin.py#L349
5. https://github.com/openstack/neutron/blob/c82e6a4d33c2a8dc51260fab4ad0cb87805b49de/neutron/plugins/bigswitch/plugin.py#L254
6. https://github.com/openstack/neutron/blob/c82e6a4d33c2a8dc51260fab4ad0cb87805b49de/neutron/plugins/bigswitch/version.py
7. https://github.com/openstack/neutron/blob/c82e6a4d33c2a8dc51260fab4ad0cb87805b49de/neutron/plugins/bigswitch/vcsversion.py
8. https://github.com/openstack/neutron/blob/c82e6a4d33c2a8dc51260fab4ad0cb87805b49de/neutron/plugins/bigswitch/routerrule_db.py
9. https://github.com/openstack/neutron/blob/c82e6a4d33c2a8dc51260fab4ad0cb87805b49de/neutron/plugins/bigswitch/db/porttracker_db.py#L25
10. https://github.com/openstack/neutron/blob/c82e6a4d33c2a8dc51260fab4ad0cb87805b49de/neutron/plugins/bigswitch/db/porttracker_db.py#L37


@@ -1,124 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
==========================================
Big Switch - Support for Provider Networks
==========================================
https://blueprints.launchpad.net/neutron/+spec/bsn-provider-net
Add support for provider networks to the Big Switch plugin to make connecting
vswitches outside of the control of the fabric possible.
Problem description
===================
When the backend controller controls all of the physical and virtual switches,
the chosen segmentation ID is irrelevant to Neutron. Since this was the only
supported model for the Big Switch plugin, it didn't need the provider net
extension to specify segmentation IDs for each network.
However, the plugin now needs to support heterogeneous environments containing
vswitches under the control of the controller and standalone vswitches
controlled by a neutron agent. To support these latter switches, a segmentation
ID is required so the agent understands how to configure the VLAN translation
to the physical network.
Proposed change
===============
Implement the provider network extension in the plugin to populate the
segmentation ID for VLAN networks.
Alternatives
------------
N/A. This VLAN information is not available to neutron unless it selects it on
its own rather than leaving it up to the fabric.
Data model impact
-----------------
N/A
REST API impact
---------------
Port and network responses for admins will contain segmentation information.
Security impact
---------------
N/A
Notifications impact
--------------------
N/A
Other end user impact
---------------------
N/A
Performance Impact
------------------
N/A
Other deployer impact
---------------------
N/A
Developer impact
----------------
N/A
Implementation
==============
Assignee(s)
-----------
Primary assignee:
kevinbenton
Work Items
----------
* Implement extension in Big Switch plugin
* Add unit tests to ensure segmentation ID is present
Dependencies
============
The bsn-ovs-plugin-agent spec will depend on this support being added.[1]
Testing
=======
Unit tests will cover the addition of this extension. After the dependent
blueprint is implemented, the 3rd party CI system will exercise this code.
Documentation Impact
====================
N/A
References
==========
1. https://blueprints.launchpad.net/neutron/+spec/bsn-ovs-plugin-agent


@@ -1,247 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
==========================================================
Cisco Dynamic Fabric Automation (DFA) ML2 Mechanism Driver
==========================================================
Launchpad blueprint:
https://blueprints.launchpad.net/neutron/+spec/ml2-mechnism-driver-for-cisco-dfa
The purpose of this blueprint is to build an ML2 mechanism driver for DFA.
Problem description
===================
Cisco Dynamic Fabric Automation (DFA) helps enable network automation and
provisioning, to simplify both physical server and virtual machine
deployments and moves across the fabric. It uses network-admin-defined
profile templates for physical and VM projects.
When a server admin provisions VMs and physical servers, instances of
network policies are automatically created and applied to the network leaf
switch. As VMs move across the fabric, the network policy is automatically
applied to the leaf switch. More details on Cisco DFA can be found in [1].
The following sections describe the proposed change in Neutron, a new
ML2 mechanism driver, which makes it possible to use OpenStack in such a
topology.
This diagram depicts an OpenStack deployment in a Cisco DFA topology.
::
                        XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
                      XX  +---------+       +---------+  XX
                      X   | spine 1 | ..... | spine n |   X
                      X   +---------+       +---------+   X
   +-----------+      X                                   X
   | DCNM      |      X           SWITCH FABRIC           X
   | (Data     |      X   +--------+                      X
   |  Center   +------X---+ Leaf i |                      X
   |  Network  |      X   +--------+                      X
   |  Manager) |      X                                   X
   +-----+-----+      XX  +--------+        +--------+  XX
         |              XX| Leaf x |XXXXXXXX| Leaf z |XX
         | REST API       +---+----+        +---+----+
         |                    |                  |
   +-----+--------------+  +--+------------+  +--+------------+
   | OpenStack          |  | OVS           |  | OVS           |
   | Controller Node    |  | LLDPad (VDP)  |  | LLDPad (VDP)  |
   | +----------------+ |  |               |  |               |
   | | Cisco DFA      | |  | OpenStack     |  | OpenStack     |
   | | Mechanism      | |  | Compute       |  | Compute       |
   | | Driver         | |  | node 1        |  | node n        |
   | +----------------+ |  +---------------+  +---------------+
   +--------------------+
As shown in the diagram above, each OpenStack compute node is connected
to a leaf switch, and the controller node is connected to DCNM (Data Center
Network Manager). DCNM is responsible for provisioning, monitoring and
troubleshooting of data center network infrastructures [2].
Proposed change
===============
The requirements for the ML2 mechanism driver to support Cisco DFA are as
follows:
1. DCNM exchanges information with the OpenStack controller node using a REST
API. To support this, a new client should be created to handle the exchange
(a sketch follows this list).
2. When a project is created/deleted, the same needs to be done on the DCNM.
3. A new extension, dfa:cfg_profile_id, needs to be added to the network
resource. A configuration profile can be thought of as extended network
attributes of the switch fabric.
4. Periodic requests should be sent to DCNM to get all supported
configuration profiles and save them in a database.
5. The network information, such as subnet, segmentation ID, and configuration
profile, should be sent to DCNM when a network is created, deleted or updated.
6. The mechanism driver needs to exchange information with the OVS neutron
agent when a VM is launched. Currently only the plugin can make RPC calls to
the OVS neutron agent; the mechanism driver needs to set up RPC to exchange
this information with the OVS agent.
7. The DFA topology represents a network-based overlay and it supports more
than 4k segments, so a new type driver is needed to handle this type of
network; details are explained in [4].
8. The data path (OVS flows) should be programmed on compute nodes where
instances are launched. The VLAN for the flow is obtained from an external
process, LLDPad, running on the compute node. More details of this process are
explained in [4].
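A hedged sketch of the REST client mentioned in item 1 is shown below; the
endpoint path, payload fields and authentication scheme are illustrative
assumptions only, not the actual DCNM API.
::
import requests


class DFARestClient(object):
    """Hypothetical client for the DCNM REST API (for illustration only)."""

    def __init__(self, dcnm_ip, user, password):
        self._base = 'http://%s/rest' % dcnm_ip
        self._auth = (user, password)

    def create_network(self, tenant_name, network):
        # Send subnet, segmentation ID and config profile to DCNM
        # (field names are assumptions).
        payload = {
            'organizationName': tenant_name,
            'segmentId': network['segmentation_id'],
            'profileName': network['config_profile'],
            'subnet': network['cidr'],
        }
        resp = requests.post('%s/networks' % self._base,
                             json=payload, auth=self._auth, timeout=10)
        resp.raise_for_status()
        return resp.json()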
Alternatives
------------
None
Data model impact
-----------------
* Configuration profiles - keeps the configuration profiles supported by DCNM.
* Configuration profile and network binding - keeps the mapping between a
network and its config profile.
* Project - keeps the mapping between project name and project ID.
REST API impact
---------------
None
Security impact
---------------
None
Notifications impact
--------------------
None
Other end user impact
---------------------
1. As mentioned before, when a network is created, the list of configuration
profiles needs to be displayed. From the Horizon GUI or CLI, the information
can be requested using the list_config_profiles API, which will be added to
python-neutronclient.
2. Configuration parameters regarding DCNM (such as its IP address) should be
added to the mechanism driver config file.
Performance Impact
------------------
There are two options to query configuration profiles from DCNM: periodic and
on demand.
An on-demand request may cause a performance issue on create_network, as the
reply to the request has to be processed and that also includes database
access. On the other hand, with the periodic approach the information may not
be available for the duration of the polling interval. For performance
reasons, a periodic task will query and process the information.
There are create/update/delete_<resource>_post/precommit methods in the ML2
mechanism driver. All database access for the DFA mechanism driver is done
in the precommit methods, and the postcommit methods handle the DCNM requests.
Other deployer impact
---------------------
1. New configuration options for DCNM, namely its IP address and credentials.
2. Enabling notifications in keystone.conf.
3. Adding a new config parameter to ml2_conf.ini to define RPC parameters
(i.e. the topic name) for the neutron OVS agent and the mechanism driver.
Developer impact
----------------
None
Implementation
==============
Assignee(s)
-----------
Primary assignee:
Nader Lahouti (nlahouti)
Work Items
----------
1. Change the setup.cfg to introduce 'cisco_dfa' as mechanism driver.
2. Event notifications should be enabled in keystone.conf.
The mechanism driver relies on notifications for project events (create/delete).
The cisco_dfa mechanism driver listens to these events and, after processing
them, sends a request to the DCNM to create/delete the project.
'cisco_dfa' keeps the project info in a local database, as it will be used
when sending delete requests to DCNM.
3. Spawn a periodic task that sends requests to DCNM. The replies contain
configuration profiles, which will be saved in a database.
If the connection to DCNM fails, the cached information is invalidated.
4. Define new extension to network resource for configuration profile. The
extensions will be added to supported aliases in the cisco_dfa mechanism
driver class.
NOTE: ML2 plugin currently does not support extension in the mechanism driver.
A new blueprint is opened to address this issue [5].
5. When an instance is created, cisco_dfa needs to send information (such as
the instance name) to the external process (i.e. LLDPad) on the compute nodes.
For that purpose, RPC is needed to call an API on the ovs_neutron_agent side.
The API then passes the information to LLDPad through a shared library
(provided by LLDPad).
Dependencies
============
1. Changes in ovs_neutron_agent to program the flow [4].
2. Need implementation of RPC for mechanism driver in ovs_neutron_agent.
3. Support for extensions in ML2 mechanism drivers [5]
Testing
=======
A setup as shown at the beginning of this page is needed.
It is not mandatory to have physical switches for that topology;
the whole setup can be deployed using virtual switches (i.e. the physical
switch fabric can be replaced by virtual switches).
Unit tests are provided for each module added to the mechanism driver.
Functional testing with Tempest will be provided. The third-party Cisco DFA CI
report will be provided to validate this ML2 mechanism driver.
Documentation Impact
====================
Describe cisco DFA mechanism driver and configuration details.
References
==========
[1] http://www.cisco.com/go/dfa
http://www.cisco.com/c/en/us/solutions/collateral/data-center-virtualization/unified-fabric/white_paper_c11-728337.html
http://www.cisco.com/c/en/us/solutions/data-center-virtualization/unified-fabric/dynamic_fabric_automation.html#~Overview
[2] http://www.cisco.com/c/en/us/products/cloud-systems-management/prime-data-center-network-manager/index.html
[3] https://blueprints.launchpad.net/horizon/+spec/horizon-cisco-dfa-support
[4] https://blueprints.launchpad.net/neutron/+spec/vdp-network-overlay
[5] https://blueprints.launchpad.net/neutron/+spec/neutron-ml2-mechanismdriver-extensions
http://summit.openstack.org/cfp/details/240


@@ -1,125 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
============================================================
Add the capability to sync neutron resources to the N1KV VSM
============================================================
https://blueprints.launchpad.net/neutron/+spec/cisco-n1kv-full-sync
The purpose of this blueprint is to add support to synchronize the state of
neutron database with Cisco N1KV controller (VSM).
Problem description
===================
Today if there is any inconsistency in the state of neutron and the VSM
databases, there is no way to push all the neutron configuration back into the
VSM.
Proposed change
===============
The proposed change is to introduce support for state synchronization between
neutron and VSM in the N1KV plugin.
Creates and updates of resources are rolled back in neutron if an error is
encountered on the VSM. In case the VSM loses its config, a sync is triggered
from the neutron plugin. The sync compares the resources present in neutron DB
with those in the VSM. It issues creates or deletes as appropriate in order to
synchronize the state of the VSM with that of the neutron DB.
Deletes cannot be rolled back in neutron. If a resource delete fails on the VSM
due to connection failures, neutron will attempt to synchronize that resource
periodically.
The full sync will be triggered based on a boolean config parameter
i.e. enable_sync_on_start. If "enable_sync_on_start" is True, neutron will
attempt to synchronize its state with that of VSM. If "enable_sync_on_start"
is set to False, neutron will not attempt any state sync.
This blueprint will introduce a bare minimum capability of synchronizing
resources. It does not cover out-of-sync detection logic.
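The sync pass could look roughly like the sketch below; the resource
dictionaries and the VSM client methods are assumptions used for illustration,
not the actual N1KV plugin code.
::
def sync_resource(resource_name, neutron_items, vsm_items, vsm_client):
    """Make the VSM match the neutron DB for one resource type.

    neutron_items/vsm_items map resource id -> resource dict.
    """
    neutron_ids = set(neutron_items)
    vsm_ids = set(vsm_items)
    # Create on the VSM anything that exists only in the neutron DB.
    for res_id in neutron_ids - vsm_ids:
        vsm_client.create(resource_name, neutron_items[res_id])
    # Delete from the VSM anything neutron no longer knows about.
    for res_id in vsm_ids - neutron_ids:
        vsm_client.delete(resource_name, res_id)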
Alternatives
------------
None
Data model impact
-----------------
None
REST API impact
---------------
None
Security impact
---------------
None
Notifications impact
--------------------
None
Other end user impact
---------------------
None
Performance Impact
------------------
None
Other deployer impact
---------------------
None
Developer impact
----------------
None
Implementation
==============
Assignee(s)
-----------
Primary assignee:
abhraut
Work Items
----------
* Add logic to the plugin module to push database state to the VSM.
* Add configuration parameter in cisco_plugins.ini
Dependencies
============
None
Testing
=======
Unit tests will be provided.
Documentation Impact
====================
Update documentation to reflect the new configuration parameter.
References
==========
None


@@ -1,158 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
==========================================================
Enhance floating IP router lookup to consider extra routes
==========================================================
https://blueprints.launchpad.net/neutron/+spec/floating-ip-extra-route
Allow assigning a floating IP to a port that is not directly connected to the
router (a port on the ``internal-net`` in the diagram below).
Example setup::
   +----------+  +-------+  +----------------+  +-------+  +------------+
   |public-net+--+router1+--+intermediate-net+--+router2+--+internal-net|
   +----------+  +-------+  +----------------+  +-------+  +------------+
Problem description
===================
When a floating IP is associated with an internal address, there is a
verification that the internal address is reachable from the router. Currently
it only considers internal addresses that are directly connected to the router,
and does not allow association to internal addresses that are reachable by
routes over intermediate networks.
An example use case would be a setup where a compute instance implements a
multi-homed gateway and functions as a VPN or firewall gateway protecting an
internal network. In this setup, the port of an ``internal-net`` server to
which we wish to assign a floating IP will not be considered reachable, even
if an extra route to it exists on ``router1``.
Proposed change
===============
Extend the verification to allow the floating IP association if the internal
address belongs to a network that is reachable by an extra route, over other
gateway devices (such as routers, or multi-homed compute instances).
Iterate over routers belonging to the same tenant as the ``internal-net``,
and select routers that have an extra route that matches the fixed IP on the
``internal-net`` port which is the target of the floating IP assignment. Sort
the candidate routers according to the specificity of the route (most specific
route is first). Use the router with the most specific route for the floating
IP assignment.
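A minimal sketch of the candidate-router selection is shown below, assuming
the router dictionaries carry their extra routes under a ``routes`` key; it
illustrates the lookup order only and is not the actual DB mixin code.
::
import netaddr


def find_routers_via_routes_for_floatingip(internal_ip, tenant_routers):
    """Return tenant routers whose extra routes reach internal_ip,
    most specific route first."""
    candidates = []
    for router in tenant_routers:
        for route in router.get('routes', []):
            cidr = netaddr.IPNetwork(route['destination'])
            if netaddr.IPAddress(internal_ip) in cidr:
                candidates.append((cidr.prefixlen, router))
    # Longest prefix (most specific route) wins.
    candidates.sort(key=lambda item: item[0], reverse=True)
    return [router for _prefixlen, router in candidates]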
Alternatives
------------
The proposal above takes a minimalist approach that trusts the tenant to ensure
that there exists a path between the router and the internal port (the target
of the floating IP association). As discussed in [#third]_, a follow-on
enhancement can add such validation.
Also, the tenant is trusted to maintain the extra route for the life of the
floating IP association. A future enhancement can add a validation, before an
extra route deletion, that no floating IP association becomes
unreachable without the deleted route.
Data model impact
-----------------
None.
REST API impact
---------------
None.
Security impact
---------------
None. There is validation that the router that is selected for the floating IP
assignment belongs to the same tenant as the internal port that is the target
of the floating IP assignment.
Notifications impact
--------------------
None.
Other end user impact
---------------------
None.
Performance Impact
------------------
The performance impact is expected to be minimal.
The added router verification code runs only over routers that belong to the
same tenant as the ``internal-net`` port. For each such router that has an
extra route that matches the internal address, there will be an extra query to
validate that the router is indeed on the ``public-net``.
Other deployer impact
---------------------
None. The change enables floating IP assignment in network topologies, where it
was not possible in the past. As such, it will not affect existing deployments.
Developer impact
----------------
Plugins that override L3_NAT_db_mixin._get_router_for_floatingip() should be
modified to call L3_NAT_db_mixin._find_routers_via_routes_for_floatingip() and
iterate over the resulting list of routers. The iteration should apply any
plugin specific requirements and select an appropriate router from the list.
(The blueprint implementation already contains this change for
neutron/plugins/nec/nec_router.py).
Implementation
==============
Assignee(s)
-----------
Primary assignee:
https://launchpad.net/~ofer-w
Other contributors:
https://launchpad.net/~zegman
Work Items
----------
Enhance floating IP router lookup: https://review.openstack.org/#/c/55987/
Dependencies
============
None.
Testing
=======
A new unit test: neutron.tests.unit.test_extension_extraroute:
ExtraRouteDBTestCaseBase.test_floatingip_reachable_by_route() will be added to
define the topology in the example setup diagram above. It will test the
association of a floating IP to the internal network, including tenant ID match
and multiple fixed IPs for the internal network.
Documentation Impact
====================
None.
References
==========
.. [#first] http://lists.openstack.org/pipermail/openstack-dev/2013-December/021579.html
.. [#second] http://lists.openstack.org/pipermail/openstack-dev/2014-January/025940.html
.. [#third] http://lists.openstack.org/pipermail/openstack-dev/2014-February/026194.html


@@ -1,118 +0,0 @@
=============================================
FloatingIP Extension support for Nuage Plugin
=============================================
https://blueprints.launchpad.net/neutron/+spec/floatingip-ext-for-nuage-plugin
Adding floating IP extension support to the existing Nuage Networks plugin.
Problem description
===================
The current Nuage plugin does not support Neutron's floatingip extension.
Nuage's VSP supports this feature, and support for the extension needs
to be added in the plugin code.
Proposed change
===============
Adding extension support code in Nuage plugin.
Alternatives
------------
None
Data model impact
-----------------
The existing floatingip tables in neutron will be used.
On top of that, two Nuage-specific tables will be added.
The schema could look like::
class FloatingIPPoolMapping(model_base.BASEV2):
    __tablename__ = "floatingip_pool_mapping"
    fip_pool_id = Column(String(36), primary_key=True)
    net_id = Column(String(36),
                    ForeignKey('networks.id', ondelete="CASCADE"))
    router_id = Column(String(36),
                       ForeignKey('routers.id', ondelete="CASCADE"))

class FloatingIPMapping(model_base.BASEV2):
    __tablename__ = 'floatingip_mapping'
    fip_id = Column(String(36),
                    ForeignKey('floatingips.id', ondelete="CASCADE"),
                    primary_key=True)
    router_id = Column(String(36))
    nuage_fip_id = Column(String(36))
nuage_fip_id = Column(String(36))
REST API impact
---------------
None
Security impact
---------------
None
Notifications impact
--------------------
None
Other end user impact
---------------------
None
Performance Impact
------------------
None
Other deployer impact
---------------------
None
Developer impact
----------------
None
Implementation
==============
It is a straightforward floatingip extension support where the
neutron resources are mapped into VSP.
When a user creates an external network and gives it a subnet,
a floatingip pool will be created.
When a floatingip is created, it is created in VSP as well.
Associating and disassociating a floatingip with a port works
in the same way.
CRUD operations on floatingips will be supported in the normal fashion.
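A hedged sketch of this mapping is shown below; the VSP client, its methods
and the mapping helper are assumptions used for illustration, not the actual
Nuage plugin API.
::
class NuageFloatingIPMixin(object):
    """Illustrative create path for a floatingip mirrored into the VSP."""

    def __init__(self, fip_db, vsp_client):
        self._db = fip_db        # existing neutron floatingip DB mixin
        self._vsp = vsp_client   # REST client towards the VSP (assumed)

    def create_floatingip(self, context, floatingip):
        # Create the floating IP in the neutron DB first ...
        fip = self._db.create_floatingip(context, floatingip)
        # ... then mirror it into the VSP and record the nuage_fip_id
        # in the floatingip_mapping table described above.
        nuage_fip_id = self._vsp.create_floatingip(fip)
        self._db.add_fip_mapping(context, fip['id'], nuage_fip_id)
        return fip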
Assignee(s)
-----------
Ronak Shah
Primary assignee:
ronak-malav-shah
Other contributors:
Work Items
----------
Extension code in Nuage plugin
Nuage Unit tests addition
Nuage CI coverage addition
Dependencies
============
None
Testing
=======
Unit test coverage for the floating IP extension within the Nuage unit tests.
The Nuage CI will be modified to start running tests for this extension.
Documentation Impact
====================
None
References
==========
None


@@ -1,196 +0,0 @@
=======================================
Freescale FWaaS Plugin
=======================================
Launchpad blueprint:
https://blueprints.launchpad.net/neutron/+spec/freescale-fwaas-plugin
Problem description
===================
The CRD (Cloud Resource Discovery) service is designed to support Freescale
silicon in data center environments. Like Neutron, it uses keystone
authentication for all RESTful calls.
Neutron firewall information (like rules, policies and firewalls) is required
by the CRD service to manage firewall deployment in virtual network appliances
and OpenFlow controller apps.
In order to send this information to CRD from neutron, a new FWaaS plugin is
required.
The Freescale FWaaS plugin proxies RESTful calls (formatted for the CRD
service) from Neutron to the CRD service.
It supports the Cloud Resource Discovery (CRD) service by updating the
firewall-related data (rules, policies and firewalls) in the CRD database.
The CRD service manages the creation of firewalls on network nodes, virtual
network appliances and OpenFlow controller network applications.
Proposed change
===============
The basic work flow is as shown below.
::
+-------------------------------+
| |
| Neutron Service |
| |
| +--------------------------+
| | |
| | Freescale Firewall |
| | Service Plugin |
| | |
+----+-----------+--------------+
|
| ReST API
|
|
+---------v-----------+
| |
| CRD Service |
| |
+---------------------+
The Freescale firewall service plugin sends the firewall-related data to the
CRD server.
The plug-in implements the CRUD operations on the following entities:
* Firewall Rules
* Firewall Policies
* Firewall
The plug-in uses the existing firewall database to store the firewall
data.
Whether a firewall is created on a network node, a virtual network appliance
or an OpenFlow controller app is decided by the CRD service.
Sequence flow of events for create_firewall is as follows:
::
create_firewall
{
neutron -> fsl_fw_plugin
fsl_fw_plugin -> crd_service
fsl_fw_plugin <-- crd_service
neutron <-- fsl_fw_plugin
}
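A hedged sketch of the proxying behaviour shown in the sequence above follows;
the CRD client and its post_firewall() method are assumptions for
illustration, not the actual Freescale plugin API.
::
class FSLFirewallPlugin(object):
    """Illustrative create path proxying firewall data to the CRD service."""

    def __init__(self, firewall_db, crd_client):
        self._db = firewall_db   # existing firewall database mixin
        self._crd = crd_client   # REST client for the CRD service (assumed)

    def create_firewall(self, context, firewall):
        # Store the firewall in the existing neutron firewall tables first ...
        fw = self._db.create_firewall(context, firewall)
        # ... then forward the formatted data to the CRD service, which
        # decides where the firewall is actually instantiated.
        self._crd.post_firewall(context, fw)
        return fw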
Alternatives
------------
None
Data model impact
-----------------
No existing models are changed, and no new models are created.
REST API impact
---------------
None.
Security impact
---------------
None.
Notifications impact
--------------------
None.
Other end user impact
---------------------
None.
Performance Impact
------------------
Negligible or None
Other deployer impact
---------------------
This change only affects deployments where the neutron 'service_plugins'
option is configured with 'fsl_firewall'.
In the [DEFAULT] section of /etc/neutron/neutron.conf, modify the
'service_plugins' attribute as follows:
::
[DEFAULT]
service_plugins = fsl_firewall
Update /etc/neutron/plugins/services/firewall/fsl_firewall.ini, as below.
::
[fsl_fwaas]
crd_auth_strategy = keystone
crd_url = http://127.0.0.1:9797
crd_auth_url = http://127.0.0.1:5000/v2.0/
crd_tenant_name = service
crd_password = <-service-password->
crd_user_name = <-service-username->
CRD service must be running in the Controller.
Developer impact
----------------
None.
Implementation
==============
Assignee(s)
-----------
Primary assignee:
trinath-somanchi
Other contributors:
None
Work Items
----------
- Freescale firewall service plugin (fsl_firewall_plugin.py)
Dependencies
============
None
Testing
=======
* Complete unit test coverage of the code is included.
* For tempest test coverage, third party testing is provided.
* The Freescale CI reports on all changes affecting this Plugin.
* The testing is run in a setup with an OpenStack deployment (devstack)
connected to an active CRD server.
Documentation Impact
====================
Usage and Configuration details.
References
==========
None


@@ -1,173 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
=============================================
FWaaS Implementation for Cisco Virtual Router
=============================================
https://blueprints.launchpad.net/neutron/+spec/fwaas-cisco
Problem description
===================
The Cisco Virtual Router implementation (CSR1kv) also supports the Firewall
Service in addition to Routing. The CSR1kv backend allows a firewall to be
applied on any of its interfaces for a specific direction of traffic. This
blueprint targets neutron support for this use case.
Proposed change
===============
Support for the plugin and agent/driver for the CSR1kv firewall is being
proposed in this blueprint. There are no changes to any of the resources from
the reference implementation. The OpenStack resources are translated to the
backend implementation, and the mapping to the backend resources is maintained.
The implementation is targeted as a Service Plugin and will be refactored to
align with the Flavor Framework post Juno. Also, given that the Service
Insertion BP [3] is under discussion, the initial implementation will be done
using vendor extension attributes to capture the insertion points (router
interface and direction) of the service in as simple a form as possible. This
will be refactored to align with the community post Juno.
Supporting the CSR1kv requires:
* Additional vendor attributes to specify firewall insertion points (neutron
port corresponding to router interface and associated direction). Supported
as vendor extension attributes as a simple model that will be refactored to
adopt the Service Insertion BP when available. The "extraroute" approach
will be taken to add the needed attributes of port and direction without
any changes to the client.
* Introduce new table to track insertion points of a firewall resource in the
vendor plugin.
* Interaction with the CSR1kv Routing Service Plugin[1] which is limited to
querying for the hosting VM and some validation for the attached interface.
* Add validators for the attribute extensions to conform to vendor
implementation constraints.
* Agent support for Firewall built on Cisco Config Agent[2] as a service agent
to handle messaging with the plugin along with the messaging interfaces
(firewall dict, plugin API and agent API) mostly along the lines of the
reference implementation.
* Agent to backend communication using existing vendor REST communication
library.
Alternatives
------------
The ideal approach is to base it on the flavor framework and service insertion
BP's. But given that these are WIP, this is being proposed as a Service Plugin
which will be refactored to align with the community model when available.
Data model impact
-----------------
There are no changes planned to existing Firewall resources (FirewallRule,
FirewallPolicy and Firewalls). The insertion point attributes are tracked
by introducing a new table, CiscoFirewallAssociation (a sketch follows the
list below):
* firewall_id - uuid of logical firewall resource
* port_id - uuid of neutron port corresponding to router interface
* direction - direction of traffic on the port_id to apply the firewall on;
can be one of:
- ingress
- egress
- bidirectional
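A possible shape for this table, following the usual neutron model
conventions, is sketched below; the table name, foreign keys and column sizes
are assumptions, not the final schema.
::
import sqlalchemy as sa

from neutron.db import model_base


class CiscoFirewallAssociation(model_base.BASEV2):
    """Tracks which router port/direction a firewall is applied to."""
    __tablename__ = 'cisco_firewall_associations'

    firewall_id = sa.Column(sa.String(36),
                            sa.ForeignKey('firewalls.id', ondelete='CASCADE'),
                            primary_key=True)
    port_id = sa.Column(sa.String(36),
                        sa.ForeignKey('ports.id', ondelete='CASCADE'))
    # One of: ingress, egress, bidirectional
    direction = sa.Column(sa.String(16))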
REST API impact
---------------
No new REST API is introduced.
Security impact
---------------
None.
Notifications impact
--------------------
None to existing. New topic for messaging between the plugin and agent.
Other end user impact
---------------------
None.
Performance Impact
------------------
None.
Other deployer impact
---------------------
The deployer will have to enable the CSR1kv Routing Service Plugin and the
Cisco Config Agent in addition to the CSR1kv Firewall Service Plugin being
proposed here. There is no impact on the community implementation when this is
not enabled. The agent/backend driver is derived from the Service Plugin (and
eventually from the flavor), and this is messaged to the Config Agent, avoiding
the need for a separate .ini file.
Developer impact
----------------
None.
Implementation
==============
The figure below is a representation of the CSR1kv components and
interactions. The CSR1kv Routing Service Plugin [1] and the Cisco Config
Agent [2] are addressed in separate BPs. The work targeted
here comprises the two items suffixed with a '*' and their interfaces to the
existing components.
::
     Neutron Server
   +----------------------------------+        +------------------+
   |  +---------------------+         |        | Cisco Cfg Agent  |
   |  |   CSR1kv Routing    |         |        |  +------------+  |
   |  |   Service Plugin    |         |        |  | CSR1kv     |  |
   |  |                     |         |   +--->|  | Firewall   |  |
   |  +---------------------+         |   |    |  | Agent*     |  |
   |            ^                     |   |    |  +-----+------+  |
   |            |                     |   |    +--------|---------+
   |            v                     |   |             | Cisco device-
   |  +---------------------+         |   |             | specific REST i/f
   |  |   CSR1kv Firewall   |<--------+---+             v
   |  |   Service Plugin*   |         |             +--------+
   |  |                     |         |             | CSR1kv |
   |  +---------------------+         |             |   VM   |
   |                                  |             +--------+
   +----------------------------------+
Assignee(s)
-----------
Primary assignee: skandasw
Other contributors: yanping
Work Items
----------
Service Plugin with vendor extension attributes for the Firewall Resource.
API & DB changes for the vendor specific extensions.
Cisco CSR1kv FWaaS service agent addition to the Cisco config Agent[2].
Dependencies
============
https://blueprints.launchpad.net/neutron/+spec/cisco-routing-service-vm
https://blueprints.launchpad.net/neutron/+spec/cisco-config-agent
Testing
=======
Unit tests, Tempest API tests and support for Vendor CI framework will be
addressed. Scenario tests will be attempted based on the tests available
for the reference implementation.
Documentation Impact
====================
Will require new documentation in Cisco sections.
References
==========
[1]https://blueprints.launchpad.net/neutron/+spec/cisco-routing-service-vm
[2]https://blueprints.launchpad.net/neutron/+spec/cisco-config-agent
[3]https://blueprints.launchpad.net/neutron/+spec/service-base-class-and-insertion


@@ -1,268 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
==========================================
Huawei ML2 mechanism driver
==========================================
https://blueprints.launchpad.net/neutron/+spec/huawei-ml2-mechanism-driver
* HW-SDN MD : Huawei SDN Mechanism Driver
* HW-SDN CR : Huawei SDN Controller
The purpose of this blueprint is to build an ML2 mechanism driver for the
Huawei software defined network (SDN) controller, which proxies RESTful calls
(formatted for the Huawei SDN controller) from the ML2 plugin of Neutron to
the Huawei SDN controller.
The Huawei SDN controller enables network automation and provisioning to
simplify virtual machine deployments and moves on a larger layer 2 network.
When a cloud administrator provisions VMs, instances of network flow rules are
automatically created and applied to the Open vSwitch (OVS) hosted on
each compute node. As VMs move across the compute nodes, the network flow
rules are automatically applied to each OVS.
Problem description
===================
The Huawei SDN controller requires information about OpenStack Neutron
networks and ports to manage virtual network appliances and OVS flow rules.
In order to receive such information from the neutron service, a new ML2
mechanism driver is needed to post the postcommit data to the Huawei SDN
controller.
The following sections describe the proposed change in Neutron, a new ML2
mechanism driver, which makes it possible to use OpenStack in a Huawei SDN
topology. The following diagram depicts an OpenStack deployment in a Huawei
SDN topology.
Huawei SDN Topology::
+-----------------------+ +----------------+
| | | |
| OpenStack | | |
| Controller | | |
| Node | | |
| | | |
| +---------------------+ | Huawei SDN |
| |Huawei SDN mechanism | REST API | controller |
| |driver |--------------| |
| | | | |
+-+--------+-----+------+ +--+----------+--+
| | | |
| | | |
| +--------------+ | |
| | | |
+----------+---------+ +---+---------+------+ |
| | | | |
| OVS | | OVS | |
+--------------------+ ---- +--------------------+ |
| OpenStack compute | | OpenStack compute | |
| node 1 | | node n | |
+----------+---------+ +--------------------+ |
| |
| |
+-----------------------------------------+
As shown in the diagram above, each OpenStack compute node is connected
to Huawei SDN controller, which is responsible for provisioning, monitoring
and troubleshooting of cloud network infrastructures. The Neutron API requests
will be proxied to SDN controller, then network topology information can be
built. When a VM instance starts to communicate with another, the first packet
will be pushed to Huawei SDN controller by OVS, then the flow rules will be
calculated and applied to related compute nodes by SDN controller. Finally,
OVS follows the rules to forward packets to the destination instance.
Proposed change
===============
The requirements for the ML2 mechanism driver to support the Huawei SDN
controller are as follows:
1. The SDN controller exchanges information with the OpenStack controller node
using a REST API. To support this, we need a specific client.
2. The OpenStack controller (Neutron configured with the ML2 plugin) must be
configured with the SDN controller access credentials.
3. The network, subnet and port information should be sent to the
SDN controller when a network or port is created, updated, or deleted.
4. The SDN controller address should be set on OVS. The SDN controller will
detect port changes and calculate flow tables based on the network information
sent from the OpenStack controller. These flow tables will be applied to the
OVS on the related compute nodes.
The Huawei mechanism driver handles the following postcommit operations:
Network create/update/delete
Subnet create/update/delete
Port create/delete
Supported network types include vlan and vxlan.
The Huawei SDN mechanism driver handles VM port binding within the mechanism
driver.
The 'bind_port' function verifies the supported network types (vlan, vxlan)
and calls context.set_binding with the binding details.
The Huawei SDN controller manages the flows required on OVS, so we don't have
an extra agent.
The sequence flow of events for create_network is as follows:
::
create_network
{
neutron -> ML2_plugin
ML2_plugin -> HW-SDN-MD
HW-SDN-MD -> HW-SDN-CR
HW-SDN-MD <-- HW-SDN-CR
ML2_plugin <-- HW-SDN-MD
neutron <-- ML2_plugin
}
The port binding task is handled within the mechanism driver, so the OVS
mechanism driver is not required when this mechanism driver is enabled.
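A hedged sketch of the postcommit proxying and bind_port handling described
above follows; the REST client, its endpoint and the exact binding details
are assumptions for illustration, not the final driver code.
::
import requests

from neutron.plugins.ml2 import driver_api as api


class HuaweiRestClient(object):
    """Minimal hypothetical client for the SDN controller REST API."""

    def __init__(self, host='128.100.1.7', port=8080):
        self._base = 'http://%s:%s' % (host, port)

    def create_network(self, network):
        requests.post('%s/networks' % self._base, json=network, timeout=10)


class HuaweiMechanismDriver(api.MechanismDriver):

    def initialize(self):
        self._client = HuaweiRestClient()

    def create_network_postcommit(self, context):
        # Forward the committed network to the SDN controller.
        self._client.create_network(context.current)

    def bind_port(self, context):
        for segment in context.network.network_segments:
            if segment[api.NETWORK_TYPE] in ('vlan', 'vxlan'):
                # 'ovs' corresponds to portbindings.VIF_TYPE_OVS.
                context.set_binding(segment[api.ID], 'ovs',
                                    {'port_filter': True})
                return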
Alternatives
------------
None
Data model impact
-----------------
None
REST API impact
---------------
None
Security impact
---------------
Recently, a feature enabling OVS secure mode was added to the OVS agent.
The Huawei SDN controller doesn't rely on the OVS agent, but secure mode will
be enabled when deploying the Huawei SDN controller and OVS.
Notifications impact
--------------------
None
Other end user impact
---------------------
This change doesn't take immediate effect.
1. Configuration parameters regarding the SDN controller (such as its IP
address) should be added to the mechanism driver configuration file.
Update /etc/neutron/plugins/ml2/ml2_conf_huawei.ini as follows:
::
[ml2_Huawei]
nos_host = 128.100.1.7
nos_port = 8080
2. An SDN controller account should be created for OpenStack to access; this
account should also be added to the mechanism driver configuration file.
Update /etc/neutron/plugins/ml2/ml2_conf_huawei.ini as follows:
::
[ml2_Huawei]
nos_username = admin
nos_password = my_password
Performance Impact
------------------
There are create/update/delete_<resource>_postcommit functions in the ML2
mechanism driver that proxy those requests to the SDN controller. All of those
operations require database access in the SDN controller, which may impact
Neutron API performance slightly.
Other deployer impact
---------------------
This change doesn't take immediate effect.
1. Add new configuration options for the SDN controller, namely its IP address
and credentials.
Update /etc/neutron/plugins/ml2/ml2_conf_huawei.ini as follows:
::
[ml2_Huawei]
nos_host = 128.100.1.7
nos_port = 8080
nos_username = admin
nos_password = my_password
2. Configure the parameters of the ml2_type_vxlan section in ml2_conf.ini,
setting vni_ranges for the vxlan network segment IDs and vxlan_group for
multicast.
Update /etc/neutron/plugins/ml2/ml2_conf.ini as follows:
::
[ml2_type_vxlan]
vni_ranges = 1001:2000
vxlan_group = 239.1.1.1
Developer impact
----------------
None
Implementation
==============
Assignee(s)
-----------
Primary assignee:
yangxurong
Work Items
----------
1. Change setup.cfg to introduce 'huawei' as a mechanism driver.
2. A REST client for the SDN controller should be developed first.
3. The mechanism driver should implement create/update/delete_resource_postcommit.
4. Test the connection between two new instances on different subnets.
Dependencies
============
None
Testing
=======
1. The whole setup can be deployed using OVS, and the SDN controller can be
deployed in a VM.
2. Unit tests are provided for each module added to the mechanism driver.
3. Functional testing with Tempest will be provided. The third-party Huawei CI
report will be provided to validate this ML2 mechanism driver.
Documentation Impact
====================
Huawei SDN mechanism driver description and configuration details will be added.
References
==========
https://review.openstack.org/#/c/68148/


@@ -1,343 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
==========================================
ML2: Hierarchical Port Binding
==========================================
Launchpad blueprint:
https://blueprints.launchpad.net/neutron/+spec/ml2-hierarchical-port-binding
This blueprint extends ML2 port binding to support hierarchical
network topologies.
Problem description
===================
The ML2 plugin does not adequately support hierarchical network
topologies. A hierarchical network might have different network
segment types (VLAN, VXLAN, GRE, proprietary fabric, ...) at different
levels, and might be made up of one or more top-level static network
segments along with dynamically allocated network segments at lower
levels. For example, traffic between ToR and core switches could
encapsulate virtual networks using VXLAN segments, while traffic
between ToR switches and compute nodes would use dynamically allocated
VLAN segments.
::
                    +-------------+
                    |             |
                    | Core Switch |
                    |             |
                    +---+-----+---+
                  VXLAN |     | VXLAN
            +-----------+     +------------+
            |                              |
     +------+-----+                 +------+-----+
     |            |                 |            |
     | ToR Switch |                 | ToR Switch |
     |            |                 |            |
     +---+---+----+                 +---+----+---+
    VLAN |   | VLAN                VLAN |    | VLAN
    +----+   +----+                +----+    +------+
    |             |                |                |
+---+-----+  +----+----+      +----+----+      +----+----+
|         |  |         |      |         |      |         |
| Compute |  | Compute |      | Compute |      | Compute |
| Node    |  | Node    |      | Node    |      | Node    |
|         |  |         |      |         |      |         |
+---------+  +---------+      +---------+      +---------+
Dynamically allocating segments at lower levels of the hierarchy is
particularly important in allowing neutron deployments to scale beyond
the 4K limit per physical network for VLANs. VLAN allocations can be
managed at lower levels of the hierarchy, allowing many more than 4K
virtual networks to exist and be accessible to compute nodes as VLANs,
as long as each link from ToR switch to compute node needs no more
than 4K VLANs simultaneously.
Note that the diagram above shows static VXLAN segments connecting ToR
and core switches, but this most likely isn't the current 'vxlan'
network_type where tunnel endpoints are managed via RPCs between the
neutron server and L2 agents. It would instead be a network_type
specific to both the encapsulation format and the way tunnel endpoints
are managed among the switches. Each network_type value should
identify a well-defined standard or proprietary protocol, enabling
interoperability where desired.
Proposed change
===============
ML2 will support hierarchical network topologies by binding ports to
mechanism drivers and network segments at each level of the
hierarchy. For example, one mechanism driver might bind to a static
VXLAN segment of the network, causing a ToR switch to bridge that
network to a dynamically allocated VLAN on the link(s) to the compute
node(s) connected to that switch, while a second mechanism driver,
such as the existing OVS or HyperV driver, would bind the compute node
to that dynamic VLAN.
Supporting hierarchical network topologies impacts the ML2 driver APIs
and the configuration of deployments that use hierarchical networks,
but does not change any REST APIs in any way.
A new function and property will be added to the PortContext class in
the driver API to enable hierarchical port binding.
::
class PortContext(object):

    # ...

    @abc.abstractmethod
    def continue_binding(self, segment_id, next_segments_to_bind):
        pass

    @abc.abstractproperty
    def segments_to_bind(self):
        pass
The new continue_binding() method can be called from within a
mechanism driver's bind_port() method as an alternative to the
existing set_binding() method. As is currently the case, if a
mechanism driver can complete a binding, it calls
PortContext.set_binding(segment_id, vif_type, vif_details, status). If
a mechanism driver can only partially establish a binding, it will
instead call continue_binding(segment_id, next_segments_to_bind).
As with set_binding(), the segment_id passed to continue_binding()
indicates the segment that this driver is binding to. The next_segments_to_bind
parameter specifies the set of network segments that can be used by
the next stage of binding for the port. It will typically contain a
dynamically allocated segment that the next driver can use to complete
the binding.
Currently, mechanism drivers try to bind using the segments from the
PortContext.network.network_segments property. These are the network's
static segments. The new PortContext.segments_to_bind property should
now be used instead by all drivers. For the initial stage of binding,
it will contain the same segments as
PortContext.network.network_segments. But for subsequent stages, it
will contain the segment(s) passed to PortContext.continue_binding()
by the previous stage driver as next_segments_to_bind.
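For illustration, the following is a minimal sketch of a partial binding as it
could be written by a switch-level driver. The class name, the helper
_allocate_dynamic_vlan_segment() and the values it returns are assumptions for
illustration, not part of this specification::

    # Hypothetical sketch; helper names and returned values are assumptions.
    from neutron.plugins.ml2 import driver_api as api


    class FabricMechanismDriver(api.MechanismDriver):
        """Hypothetical driver that partially binds to a fabric VXLAN segment."""

        def initialize(self):
            pass

        def bind_port(self, context):
            for segment in context.segments_to_bind:
                if segment[api.NETWORK_TYPE] == 'vxlan':
                    # Bind this level to the static VXLAN segment and hand a
                    # dynamically allocated VLAN segment to the next stage
                    # (e.g. the openvswitch or hyperv driver).
                    next_segment = self._allocate_dynamic_vlan_segment(context)
                    context.continue_binding(segment[api.ID], [next_segment])
                    return

        def _allocate_dynamic_vlan_segment(self, context):
            # Placeholder: a real driver would ask the type manager for a
            # dynamic VLAN segment scoped to the ToR switch.
            return {api.NETWORK_TYPE: 'vlan',
                    api.PHYSICAL_NETWORK: 'tor1',
                    api.SEGMENTATION_ID: 100}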
The ML2 plugin currently tries to bind using all registered mechanism
drivers in the order they are specified in the mechanism_drivers
config variable. To support hierarchical binding, a new
port_binding_drivers configuration variable is added that specifies
the sets of drivers that can be used to establish port bindings.
::

    port_binding_drivers = [openvswitch, hyperv, myfabric+{openvswitch|hyperv}]
With this example, ML2 will first try binding using just the
openvswitch mechanism driver. If that fails it will try the hyperv
mechanism driver. If neither of these can bind, it will try to bind
using the myfabric driver. If the myfabric driver partially binds
(calling PortContext.continue_binding()), then ML2 will try to
complete the binding using the openvswitch driver, and if that can't
complete it, the hyperv driver.
When port_binding_drivers has the default empty value, the
mechanism_drivers value is used instead, with each registered driver
treated as a separate single-item chain.
Alternatives
------------
We originally considered supporting dynamic VLANs by allowing the
(single) bound mechanism driver to provide arbitrary data to be
returned to the L2 agent via the get_device_details RPC. This approach
was ruled out because it requires a single mechanism driver to support
both the ToR switch and the specific L2 agent on the compute
node. This would require separate drivers for each possible
combination of ToR mechanism (switch) and compute node mechanism (L2
agent). The approach described in this specification avoids this
combinatorial explosion by binding separate mechanism drivers at each
level.
Data model impact
-----------------
Whether the data model is impacted remains to be determined during
implementation. If the ML2 plugin only needs to store the details of
the final stage of the binding, then no change should be needed. But
if it needs to store details of all levels, the ml2_port_bindings
table schema will need to be modified.
If we need to persist all levels of the binding, we must store the
driver name and the bound segment ID for each level, but the vif_type
and vif_details only apply to the lowest level. The driver and segment
columns in the ml2_port_bindings table currently each store a single
string, and we need to make sure DB migration preserves already
established bindings. We could add columns to store a list of
additional binding levels, store partial binding data in a separate
table, or possibly redefine the current strings' content to be
comma-separated lists of items.
REST API impact
---------------
No REST API changes are proposed in this specification.
Using the existing providernet and multiprovidernet API extensions,
only the top-level static segments of a network are accessible. There
is no current need to expose dynamic segments through REST APIs. The
portbindings extension could potentially be modified in the future to
expose more information about multi-level bindings if needed.
As mechanism drivers for specific network fabric technologies are
developed, new network_type values may be defined that will be visible
through the providernet and multiprovidernet extensions. But no new
network_type values are being introduced through this specific BP.
Security impact
---------------
None.
Notifications impact
--------------------
None.
Other end user impact
---------------------
None.
Performance Impact
------------------
None.
Other deployer impact
---------------------
No change is required when deploying non-hierarchical network
topologies. To support hierarchical network topologies, the new
port_binding_drivers configuration variable will need to be set to
specify the allowable binding driver chains, and all relevant drivers
will need to be listed in the mechanism_drivers and type_drivers
configuration variables. Additionally, when VLANs are used for the
host-level bindings, L2 agent configurations will be impacted as
described below. Since multi-level networks will, at least initially, involve
proprietary switches, vendor-specific documentation and deployment tools will
need to assist the administrator.
Mechanism drivers determine whether they can bind to a network segment
by looking at the network_type and any other relevant information. For
example, if network_type is 'flat' or 'vlan', the L2 agent mechanism
drivers look at the physical_network and, using agents_db info, make
sure the L2 agent on that host has a mapping for the segment's
physical_network. This is how the existing mechanism drivers work, and
this will not be changed by this BP.
With hierarchical port binding, where a ToR switch is using dynamic
VLAN segments and the hosts connected to it are using a standard L2
agent, the L2 agents on the hosts will be configured with a mapping
for a physical_network name that corresponds to the scope at which the
switch assigns dynamic VLANs.
If dynamic VLAN segments are assigned at the switch scope, then each
ToR switch should have a unique corresponding physical_network
name. The switch's mechanism driver will use this physical_network
name in the dynamic segments it creates as partial bindings. The L2
agents on the hosts connected to that switch must have a (bridge or
interface) mapping for that same physical_network name, allowing any
of the normal L2 agent mechanism drivers to complete the binding.
If dynamic VLAN segments are instead assigned at the switch port
scope, then each switch port would have a corresponding unique
physical_network name, and only the host connected to that port should
have a mapping for that physical_network.
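For example, if the switch connected to a host is represented by the
(hypothetical) physical_network name 'tor1', the OVS agent on that host might
carry a mapping such as::

    [ovs]
    bridge_mappings = tor1:br-tor1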
Developer impact
----------------
Mechanism drivers that support hierarchical bindings will use the
additional driver API call(s). Other drivers will only need a very
minor update to use PortContext.segments_to_bind in place of
PortContext.network.network_segments.
Implementation
==============
Assignee(s)
-----------
Primary assignee:
rkukura
Other contributors:
asomya
Work Items
----------
This specification should not require much code change, so it can
probably be implemented as a single patch that contains:
1. Update ML2 DB schema to support multi-level bindings, including
migration (if necessary).
2. Update ML2 driver API.
3. Implement multi-level binding logic.
4. Add unit test for multi-level binding.
5. Update existing drivers to use PortContext.segments_to_bind
If it does turn out that details of each partial binding need to be
persistent, that might be implemented as a separate patch.
Dependencies
============
Usage, and possibly testing, depend on implementation of portions of
https://blueprints.launchpad.net/neutron/+spec/ml2-type-driver-refactor,
in order to support dynamic segment allocation.
Testing
=======
A new unit test will cover the added port binding functionality. No
new tempest tests are needed. Third party CI will cover specific
mechanism drivers that support dynamic segment allocation.
Documentation Impact
====================
The new configuration variables and ML2 driver API changes will need
to be documented. Configuration for specific mechanism drivers
supporting multi-level binding will be documented by those drivers'
vendors.
References
==========

View File

@@ -1,219 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
================================================
ML2 Mechanism Driver for Cisco Nexus1000V switch
================================================
URL of your launchpad blueprint:
https://blueprints.launchpad.net/neutron/+spec/ml2-n1kv-mechanism-driver
The purpose of this blueprint is to add support for the Cisco Nexus1000V switch
in OpenStack Neutron as an ML2 mechanism driver.
Problem description
===================
Cisco Nexus 1000V for KVM is a distributed virtual switch that works with
the Linux Kernel-based virtual machine (KVM) open source hypervisor.
* Virtual Supervisor Module (VSM): Controller of the Cisco Nexus1000V
distributed virtual switch based on Cisco NX-OS software.
* Virtual Ethernet Module (VEM): Data plane software installed on the Compute
nodes. All VEMs are controlled by VSM.
* Network Profiles: Container for one or more networks. VLAN
type of network profiles will be supported in the initial version.
* Policy Profiles: Policy profiles are the primary mechanism by which network
policy is defined and applied to VM ports in a Nexus 1000V system.
* VM Network: VM Network refers to a combination of network-segment and
policy-profile. It maintains a count of ports that use the above
combination.
This proposed mechanism driver will interact with the VSM via REST APIs to
dynamically configure and manage networking for instances created via
OpenStack.
Proposed change
===============
The diagram below provides a high level overview of the interactions between
Cisco Nexus1000V switch and OpenStack components.
Flows::
+--------------------------+
| Neutron Server |
| with ML2 Plugin |
| |
| +------------+
| | N1KV |
| | Mechanism | +--------------------+
+-------| | Driver | | |
| | +-+--+---------+ REST API | Cisco N1KV |
| +---+ | N1KV Client +-----------------+ |
| | +-----------+--------------+ | Virtual |
| | | Supervisor |
| | | Module |
| | +--------------------------+ | |
| | | N1KV VEM +-----------------+ |
| | +--------------------------+ +-+------------------+
| | | | |
| +---+ Compute 1 | |
| | | |
| +--------------------------+ |
| |
| |
| +--------------------------+ |
| | N1KV VEM +-------------------+
| +--------------------------+
| | |
+-------+ Compute 2 |
| |
+--------------------------+
The Cisco Nexus1000V mechanism driver will handle all the postcommit events
for network, subnets and ports. This data will be used to configure the VSM.
VSM and VEM will be responsible for port bring up on the compute nodes.
The mechanism driver will initialize a default network profile and a policy
profile on the VSM. All networks will have a binding to this default
network profile. All ports will have a binding to this default policy profile.
Dynamic binding of network and policy profiles will be implemented in the
future once extensions are supported for mechanism drivers.
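As a rough sketch of how the postcommit handling and the N1KV client could fit
together (the VSM REST path, payload format and class layout below are
assumptions for illustration, not the actual implementation), using the
requests library listed under Dependencies::

    # Hypothetical sketch; the VSM REST path and payload format are assumptions.
    import requests


    class N1kvClient(object):
        def __init__(self, vsm_ip, username, password):
            self.base_url = 'https://%s/api/n1k' % vsm_ip  # assumed path
            self.auth = (username, password)

        def create_network_segment(self, network):
            resp = requests.post(self.base_url + '/network-segment',
                                 json={'name': network['name'],
                                       'id': network['id']},
                                 auth=self.auth, timeout=10)
            resp.raise_for_status()


    class N1kvMechanismDriver(object):
        def __init__(self, client):
            self.client = client

        def create_network_postcommit(self, context):
            # Push the newly committed network to the VSM.
            self.client.create_network_segment(context.current)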
Alternatives
------------
None.
Data model impact
-----------------
This mechanism driver introduces the following new tables, which are specific
to the N1KV mechanism driver.
* NetworkProfile: Stores network profile name and type.
* N1kvNetworkBinding: Stores the binding between network profile and
network.
* PolicyProfile: Stores policy profile name and UUID.
* N1kvPortBinding: Stores the binding between policy profile and
port.
* VmNetwork: Stores port count for each VM Network.
A database migration is included to create the tables for these models.
No existing models are changed.
REST API impact
---------------
None.
Security impact
---------------
None.
Notifications impact
--------------------
None.
Other end user impact
---------------------
None.
Performance Impact
------------------
The performance of Cisco N1KV ML2 mechanism driver will depend on the
responsiveness of the VSM.
Other deployer impact
---------------------
The deployer must provide the following in order to be able to connect to a
VSM.
* IP address of VSM.
* Admin credentials (username and password) to log into VSM.
* Add "cisco_n1kv" as a ML2 driver.
* Add the "vlan" type driver.
* Add the "vxlan" type driver.
These should be provided in:
/opt/stack/neutron/etc/neutron/plugins/ml2/ml2_conf_cisco.ini
Example:
[ml2_cisco_n1kv]
# N1KV Format.
# [N1KV:<IP address of VSM>]
# username=<credential username>
# password=<credential password>
Developer impact
----------------
None.
Implementation
==============
Assignee(s)
-----------
Abhishek Raut <abhraut>
Sourabh Patwardhan <sopatwar>
Work Items
----------
Work Items can be roughly divided into the following tasks:
* Mechanism driver to handle network/subnet/port CRUD requests.
* N1KV Client to perform HTTP requests to the VSM.
* Unit test cases to test the mechanism driver and client code.
* Tempest test cases to perform functional testing.
Dependencies
============
The following third-party library is used:
* requests: Requests is a Python library for making HTTP requests, which is
well documented at http://docs.python-requests.org/en/latest/
Link to code -> https://github.com/kennethreitz/requests
Testing
=======
Unit test coverage of the code will be provided.
Third party testing will be provided. The Cisco CI will report on all changes
affecting this mechanism driver. The testing will run on a setup with an
OpenStack deployment connected to a VSM and VEM.
Documentation Impact
====================
Configuration details for this mechanism driver.
References
==========
http://www.cisco.com/go/nexus1000v

View File

@@ -1,221 +0,0 @@
================================================
ML2 Type drivers refactor to allow extensibility
================================================
Blueprint URL:
https://blueprints.launchpad.net/neutron/+spec/ml2-type-driver-refactor
This blueprint aims to refactor the ML2 type driver architecture
so that type drivers are more self-sufficient, allowing developers
to author custom type drivers.
Flows
The flows represented here are only a partial view of the events relevant to this patch.
Current Flow:
.. seqdiag::

    seqdiag {
        API -> Neutron [label = "POST create_network"];
        Neutron -> ML2_Plugin [label = "create_network"];
        ML2_Plugin -> Type_Manager [label = "create_network"];
        Type_Manager -> VLAN_Driver [label = "allocate_tenant_segment"];
        Type_Manager -> VxLAN_Driver [label = "allocate_tenant_segment"];
        Type_Manager <-- VLAN_Driver [label = "segment"];
        Type_Manager <-- VxLAN_Driver [label = "segment"];
        ML2_Plugin <-- Type_Manager [label = "segments"];
        Neutron <-- ML2_Plugin [label = "Network dictionary"];
        API <-- Neutron [label = "Network dictionary"];
    }
Problem description
===================
Currently the ML2 segmentation is managed in multiple places within
the architecture:
* The plugin invokes the type manager with tenant/provider segment calls.
* The manager in turn invokes the type driver to reserve an available/specified
segment for the network.
* The segment_id, if reserved, is returned all the way to the plugin, which then
stores it inside a DB.
* The type driver itself also has tables of its own where it stores
segment details.
This model works well for generic segmentation types like VLAN, GRE, etc.
But it also makes network types that deviate from the norm, such as dynamic
network segments, nearly impossible to implement.
Use Case
A sample use case for this feature is an overlay network with VTEPs in the
hardware layer instead of at the soft switch edge. To overcome the 4K VLAN
limit, a single network can have different segmentation IDs depending on its
location in the network. The assumption that a network has fixed segmentation
IDs across the deployment does not hold for this use case, where external
controllers are capable of managing this kind of segmentation.
Proposed change
===============
* Move segmentation logic to the type managers and type drivers, send network
dictionary to the type drivers to store network information. This involves
moving the segmentation management bits from the ML2 plugin to the type
manager and the type drivers will now require the network dictionary to be
passed when allocating or reserving a segment.
* Add a new optional method called allocate_dynamic_segment to be invoked from
the mechanism drivers and consumed by the type drivers if required. This call
will typically be invoked by the mechanism driver in its bind_port method,
but it can also be invoked at any other time by the mechanism driver(s)
(see the sketch after this list).
* Store provider information in the DB tables, this is just a boolean that
distinguishes between a provider network and a non-provider network. This
information is required by many vendor plugins that wish to differentiate
between a provider and non-provider network.
* Agent interactions for the dynamic segments will be handled by Bob Kukura's
patch: https://blueprints.launchpad.net/neutron/+spec/ml2-dynamic-segment
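A minimal sketch of the new call path follows; the method signatures and
helper names are assumptions made for illustration, not the final interface::

    # Hypothetical sketch; signatures and helper names are assumptions.
    import random  # stand-in for a real allocation table lookup


    class NetworkOverlayTypeDriver(object):
        """Hypothetical type driver exposing dynamic segment allocation."""

        def allocate_dynamic_segment(self, session, network_id, physical_network):
            # A real driver would reserve a row in its allocation table here;
            # the random choice below is only a placeholder.
            vlan_id = random.randint(2, 4094)
            return {'network_type': 'vlan',
                    'physical_network': physical_network,
                    'segmentation_id': vlan_id,
                    'network_id': network_id}


    class FabricMechanismDriver(object):
        """Hypothetical consumer calling allocate_dynamic_segment at bind time."""

        def __init__(self, type_manager):
            self.type_manager = type_manager

        def bind_port(self, context):
            segment = self.type_manager.allocate_dynamic_segment(
                context.session, context.current['id'], 'physnet1')
            # The driver would then program its fabric with the returned segment.
            return segment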
Proposed Flows:
.. seqdiag::

    seqdiag {
        API -> Neutron [label = "POST create_network"];
        Neutron -> ML2_Plugin [label = "create_network"];
        ML2_Plugin -> Type_Manager [label = "create_network"];
        Type_Manager -> Type_Driver [label = "allocate_segment(static)"];
        Type_Manager <-- Type_Driver [label = "static segment"];
        ML2_Plugin <-- Type_Manager [label = "segments(static)"];
        Neutron <-- ML2_Plugin [label = "Network dictionary"];
        API <-- Neutron [label = "Network dictionary"];
    }

.. seqdiag::

    seqdiag {
        ML2_Plugin -> Mech_Manager [label = "bind_port"];
        Mech_Manager -> Mech_Driver [label = "bind_port"];
        Mech_Driver -> Type_Manager [label = "allocate_segment(dynamic)"];
        Type_Manager -> Type_Driver [label = "allocate_segment(dynamic)"];
        Type_Manager <-- Type_Driver [label = "segment(dynamic)"];
        Mech_Driver <-- Type_Manager [label = "segments(dynamic)"];
        Mech_Manager <-- Mech_Driver [label = "Bindings(dynamic)"];
        ML2_Plugin <-- Mech_Manager [label = "Bindings(dynamic)"];
    }
Alternatives
------------
The alternative is to override the get_device_details rpc method and send
the RPC call from the agent to the type or mechanism driver and serve a vlan
out of a pool. This approach will require extensive changes to the rpc
interactions to send the calls directly to the type or mechanism drivers
for them to override. Also, having mechanism drivers do segment management
will break the ml2 model of clean separation between type and mechanism
drivers.
Data model impact
-----------------
The data model for all the type drivers will change to accommodate network IDs.
This translates to one extra column in the database for each type driver to
store the network UUID. Existing models for ml2_network_segments will be
modified to only store the type driver type and not the complete segment.
The patch will be accompanied by an Alembic migration to upgrade/downgrade all
type driver and ML2 DB models.
NetworkSegment:
  Removed Fields:
    network_type = sa.Column(sa.String(32), nullable=False)
    physical_network = sa.Column(sa.String(64))
    segmentation_id = sa.Column(sa.Integer)
  Added Fields:
    segment_type = sa.Column(sa.String(32), nullable=False)
DB-based type drivers (vlan/vxlan/gre):
  Added Fields:
    network_id = sa.Column(sa.String(255), nullable=True)
    provider_network = sa.Column(sa.Boolean, default=False)
As part of the Alembic migration script, data will also be migrated
out of the NetworkSegment tables to the respective type driver tables
and vice versa for upgrades/downgrades.
As of today there is no method of knowing whether an existing segment is a
provider segment or an allocated tenant segment in ML2, so all migrated
networks will appear as tenant segments.
REST API impact
---------------
None.
Security impact
---------------
None.
Notifications impact
--------------------
None.
Other end user impact
---------------------
None.
Performance Impact
------------------
Minor performance impact as extra information will be sent to the type drivers
to store in their database.
Other deployer impact
---------------------
None.
Developer impact
----------------
No API impact. Developers developing new type drivers will have to
manage all segmentation within the type driver itself.
Implementation
==============
Assignee(s)
-----------
Arvind Somya <asomya>
Work Items
----------
* Modify ML2 plugin.py to remove all segmentation management.
* Modify ML2 network_segments model to only store type drivers for
each network.
* Move segment processing to type manager.
* Modify type manager to handle getting segments.
* Modify type manager to send network dictionary to the type drivers.
* Refit type driver models to store network IDs.
* Implement dynamic allocation method in base plugin.
* Implement dynamic allocation method in all existing type drivers.
* Modify existing unit tests for the new architecture.
* Add unit tests for dynamic allocation.
Dependencies
============
None.
Testing
=======
Complete unit test coverage for the ML2 plugin and all refitted type drivers.
Documentation Impact
====================
None.
References
==========
None.

View File

@@ -1,248 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
==========================================
ML2 Mechanism Driver for Cisco UCS Manager
==========================================
URL of your launchpad blueprint:
https://blueprints.launchpad.net/neutron/+spec/ml2-ucs-manager-mechanism-driver
The purpose of this blueprint is to add support for the Cisco UCS Manager to
OpenStack Neutron. This support for the Cisco UCS Manager is implemented as an
ML2 mechanism driver.
Problem description
===================
This section includes a brief introduction to the SR-IOV and Cisco VM-FEX
technologies in addition to a detailed description of the problem:
1. This mechanism driver needs to be able to configure Cisco VM-FEX on specific
Cisco NICs on the UCS by communicating with a UCS Manager.
2. Cisco VM-FEX technology is based on SR-IOV technology which allows a single
PCIe physical device (PF - physical function) to be divided into multiple
logical devices (VF - virtual functions). For more details on Cisco VM-Fex
technology, please refer to: http://www.cisco.com/c/en/us/solutions/data-center-virtualization/data-center-virtual-machine-fabric-extender-vm-fex/index.html
3. With SR-IOV and Cisco VM-FEX, a VM's port can be configured in either the
"direct" or "macvtap" mode. In the "direct" mode, the VM's port is connected
directly to the VF; in the "macvtap" mode, it is connected to a macvtap device
on the host.
In both these modes, the VM's traffic completely bypasses the hypervisor,
sending and receiving traffic directly to and from the vNIC and thus the
upstream switch. This results in a significant increase in throughput on the VM
and frees up CPU resources on the host OS to handle more VMs. Due to this
direct connection with the upstream switch, the "direct" mode does not support
live migration of the VMs that it is attached to.
4. Cisco VM-FEX technology is based on the 802.1qbh and works on top of the
SR-IOV technology using the concept of port profiles. Port profiles are
configuration entities that specify additional config that needs to be applied
on the VF. This config includes the vlan-id, QoS (not applicable in Openstack
for now) and the mode (direct/macvtap).
5. This mechanism driver needs to configure port profiles on the UCS Manager
and pass this port profile to Nova so that it can insert it into the VM's
domain XML file.
6. This mechanism driver is responsible for creating, updating and deleting
port profiles in the UCS Manager and maintaining the same info in a local DB.
7. This mechanism driver also needs to support SR-IOV capable Intel NICs on the
UCS servers.
8. Security groups cannot be applied on SR-IOV and VM-FEX ports. Further work
needs to be done to handle security groups gracefully for these ports. It will
be taken up in a different BP in the next iteration. (Since the VFs appear as
interfaces on the upstream switch, ACLs can be applied on the VFs at the
upstream switch.)
9. Multi-segment networks are also currently not supported by this BP. It will
be added as part of a different BP in the next iteration.
Proposed change
===============
1. This ML2 mechanism driver communicates to the UCS manager via UCS Python SDK.
2. There is no need for a L2 agent to be running on the compute host to
configure the SR-IOV and VM-FEX ports.
3. The ML2 mechanism driver takes care of binding the port if the pci vendor
info in the binding:profile portbinding attribute matches the pci device
attributes of the devices it can handle.
4. The mechanism driver also expects to get the physical network information
as part of port binding:profile at the time of bind port.
5. When a neutron port is being bound to a VM, the mechanism driver
uses the segmentation-id associated with the network to determine if a new
port profile needs to be created. According to the DB maintained by the ML2
driver, if a port profile with that vlan_id already exists, then it re-uses
this port profile for the neutron port being created.
6. If the ML2 driver determines that an existing port profile cannot be re-used
it tries to create a new port profile on the UCS manager using the vlan_id
from the network. Since a port profile is a vendor-specific entity, we did not
want to expose it to the cloud admin or the tenant. So, port profiles are
created and maintained completely behind the scenes by the ML2 driver.
7. Port profiles created by this mechanism driver will have the name
"OS-PP-<vlan-id>" (a condensed sketch of this reuse logic follows this list).
The process of creating a port profile on the UCS Manager involves:
A. Connecting to the UCS manager and starting a new transaction
B. Creating a Fabric Vlan managed object corresponding to the vlan-id
C. Creating a vNIC Port Profile managed object that is associated with
the above Fabric Vlan managed object.
D. Creating a Profile Client managed object that corresponds to the vNIC
Port Profile managed object.
E. Ending the current transaction and disconnecting from the UCS Manager
8. Once the above entities are created on the UCS manager, the ML2 driver
populates the profile:vif_details portbindings attribute with the profile_id
(name of the port profile). Nova then uses Neutron V2 API to grab the
profile_id and populates the VM's domain XML. After the VM is successfully
launched by libvirt, the configuration of VM-FEX is complete.
9. In the case of NICs that support SR-IOV and not VM-FEX (for example, the
Intel NIC), the portbinding profile:vif_details attribute is populated with
the vlan_id. This vlan_id is then written into the VM's domain XML file by
Nova's generic vif driver.
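A condensed sketch of the port-profile reuse logic in points 5-7 above; the DB
helpers and UCSM client calls shown are assumptions, not the actual driver
code::

    # Hypothetical sketch; the DB helpers and UCSM client are assumptions.
    PORT_PROFILE_NAME = 'OS-PP-%d'


    def get_or_create_port_profile(driver, vlan_id):
        profile = driver.db.get_port_profile_by_vlan(vlan_id)
        if profile:
            # A profile already exists for this VLAN, so re-use it.
            return profile.profile_id
        profile_id = PORT_PROFILE_NAME % vlan_id
        # Create the Fabric VLAN, vNIC Port Profile and Profile Client managed
        # objects on the UCS Manager (steps A-E above), then record the mapping.
        driver.ucsm_client.create_port_profile(profile_id, vlan_id)
        driver.db.add_port_profile(profile_id, vlan_id)
        return profile_id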
Alternatives
------------
None.
Data model impact
-----------------
One new data model is created by this driver to keep track of port profiles
created by OpenStack. Other port profiles that this driver does not manage can
exist on the UCS Manager.
PortProfile: Tracks the vlan-id of the port associated with a given port
profile.
  __tablename__ = 'ml2_ucsm_port_profiles'
  profile_id = sa.Column(sa.String(64), nullable=False, primary_key=True)
  vlan_id = sa.Column(sa.Integer(), nullable=False)
The profile_id to port_id mapping is kept track of via ml2_port_bindings table
where the profile_id is stored in vif_details.
REST API impact
---------------
None.
Security impact
---------------
The connection to the XML API layer on the UCS Manager is via HTTP/HTTPS.
Traffic from the VM completely bypasses the host, so no security groups are
enforced when this mechanism driver binds the port.
Notifications impact
--------------------
None.
Other end user impact
---------------------
No user impact. As mentioned earlier, we did not want to expose the port
profile_id to the user since this is a vendor-specific entity. Instead,
allocation of port profiles is managed internally within the driver.
Performance Impact
------------------
The ML2 driver code would have to conditionally communicate with the UCS Manager
to configure, update or delete port profiles and associated configuration. These
tasks would have a performance impact on Neutron's responsiveness to a command
that affects port config.
Other deployer impact
---------------------
The deployer must provide the following in order to be able to connect to a UCS
Manager and add support for SR-IOV ports.
1. IP address of UCS Manager.
2. Admin Username and password to log into UCS Manager.
3. Add "cisco_ucsm" as a ML2 driver to handle SR-IOV port configuration.
4. Add the "vlan" type driver. Currently, this mechansim driver supports only
the VLAN type driver.
These should be provided in:
/opt/stack/neutron/etc/neutron/plugins/ml2/ml2_conf_cisco.ini.
Example:
[ml2_cisco_ucsm]
# Hostname for UCS Manager
# ucsm_ip=1.1.1.1
# Username for the UCS Manager
# ucsm_username=username
# Password for the UCS Manager
# ucsm_password=password
The deployer should also install the Cisco UCS Python SDK for this ML2 mechanism
driver to connect and configure the UCS Manager. The SDK and install
instructions can be found at: https://github.com/CiscoUcs/UcsPythonSDK.
Developer impact
----------------
None.
Implementation
==============
Assignee(s)
-----------
Sandhya Dasu <sadasu>
Work Items
----------
Work Items can be roughly divided into the following tasks:
1. Mechanism driver handles port create, update and delete requests.
2. Network driver handles communication with UCS manager. This communication is
triggered by an operation performed on a port by the mechanism driver.
3. Unit test cases to test the mechanism driver and network driver code.
4. Tempest test cases to perform end-to-end and functional testing of 1 and 2.
Dependencies
============
1. This mechanism driver depends on third party UCS python SDK (located at :
https://github.com/CiscoUcs/UcsPythonSDK) to communicate with the UCS Manager.
2. For SR-IOV ports to be actually scheduled and assigned to a VM, some non-vendor specific Nova code is required. This effort is tracked via:
https://blueprints.launchpad.net/nova/+spec/pci-passthrough-sriov
Testing
=======
Third party tempest testing will be provided for this mechanism driver.
Cisco CI will start reporting on all changes affecting this driver. The third
party tempest tests will run on a setup which runs Openstack (devstack) code on
a multi-node setup that is connected to a UCS Manager system.
Documentation Impact
====================
Details of configuring this mechanism driver.
References
==========
1. Here is the link to the larger discussion around PCI passthrough ports:
https://wiki.openstack.org/wiki/Meetings/Passthrough
2. Useful links on VM-FEX - http://www.cisco.com/c/en/us/solutions/data-center-virtualization/data-center-virtual-machine-fabric-extender-vm-fex/index.html and
https://www.youtube.com/watch?v=8uCU9ghxJKg

View File

@@ -1,117 +0,0 @@
================================================================================
NetScaler LBaaS V2 Driver
================================================================================
NetScaler ADC driver for LBaaS v2 model
https://blueprints.launchpad.net/neutron/+spec/netscaler-lbaas-driver
Problem description
===================
A driver for the Neutron LBaaS plugin that uses Citrix NetScaler load
balancing devices to provide Neutron LBaaS functionality in OpenStack, based
on the LBaaS V2 model described in
https://review.openstack.org/#/c/89903/.
Proposed change
===============
The driver will implement the interfaces according to the driver interfaces
mentioned in the spec https://review.openstack.org/100690
for the blueprint
https://blueprints.launchpad.net/neutron/+spec/lbaas-objmodel-driver-changes
The following managers will be implemented (a skeleton sketch follows this list):
* LoadBalancerManager
* ListenerManager
* PoolManager
* MemberManager
* HealthMonitorManager
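A bare skeleton of the driver shape; the class and method names below are
assumptions based on the referenced interface spec, not final code::

    # Hypothetical sketch; class and method names are assumptions.
    class NetScalerLoadBalancerManager(object):
        """Handles LoadBalancer CRUD by driving the NetScaler control API."""

        def __init__(self, driver):
            self.driver = driver

        def create(self, context, load_balancer):
            # Push the new load balancer configuration to the NetScaler device.
            pass

        def update(self, context, old_load_balancer, load_balancer):
            pass

        def delete(self, context, load_balancer):
            pass


    # ListenerManager, PoolManager, MemberManager and HealthMonitorManager
    # would follow the same create/update/delete pattern.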
Alternatives
------------
None.
Data model impact
-----------------
None.
REST API impact
---------------
None.
Security impact
---------------
None.
Notifications impact
--------------------
None.
Other end user impact
---------------------
None.
Performance Impact
------------------
None.
Other deployer impact
---------------------
None.
Developer impact
----------------
None.
Implementation
==============
Assignee(s)
-----------
Primary assignee: https://launchpad.net/~vijay-venkatachalam
Work Items
----------
* NetScaler driver code
* Unit tests
* Voting CI
Dependencies
============
* https://review.openstack.org/#/c/101084/
* https://review.openstack.org/#/c/105610/
Testing
=======
* Unit tests
* NetScaler QA
* NetScaler CI
Documentation Impact
====================
None.
References
==========
None.

View File

@@ -1,404 +0,0 @@
====================================
External Attachment Points Extension
====================================
Launchpad blueprint:
https://blueprints.launchpad.net/neutron/+spec/neutron-external-attachment-points
This blueprint introduces the concept of external attachment points into Neutron
via an extension so external devices (e.g. baremetal workloads) can gain access
to Neutron networks. External attachment points will specify an attachment ID to
locate the port in the physical infrastructure (e.g. Switch1/Port5) as well as the
neutron network that it should be a member of. These can be referenced in port creation
requests from externally managed devices such as ironic instances.
Problem description
===================
There is no well-defined way to connect devices not managed by OpenStack
directly into a Neutron network. Even if everything is manually configured
to setup connectivity, the neutron DHCP agent will not issue addresses to
the device attached to the physical port since it doesn't have a corresponding
Neutron port. A neutron port can be created to match the MAC of an
external device to allow DHCP, but there is nothing to automate the process
of configuring the network attachment point of that device to put it in the
correct VLAN/VXLAN/etc.
Proposed change
===============
To integrate these external devices into Neutron networks, this blueprint
introduces the external attachment point extension. It will add a new mixin
class that will handle the database records for the external attachment points.
The actual configuration of the network devices the attachment points reside on
will be dependent on the plugin. The reference implementation will include an
OVS-based attachment gateway.
The new resource type this extension will introduce is the external attachment
point. It is responsible for capturing the external attachment identification
information (e.g. Switch1/Port5).
External attachment points are created by admins and then assigned to a tenant for use.
The tenant can then assign the external attachment point to any Neutron networks that
he/she can create Neutron ports on. When it is assigned to a network, the backend will
provision the infrastructure so that attachment point is a member of the Neutron network.
A port creation request may reference an external attachment ID. This will prevent the
external attachment from being deleted or assigned to a different network while any Neutron
ports are associated with it.
Relational Model
----------------
::

    +-----------------+
    | Neutron Network |
    +-----------------+
            | 1
            |
            | M
    +---------------------------+
    | External Attachment Point |
    +---------------------------+
Example CLI
-----------
Create external attachment point referencing a switch port (admin-only).
::

    neutron external-attachment-point-create --attachment_type 'vlan_switch' --attachment_id switch_id=00:12:34:43:21:00,port=5,vlan_tag=none

Assign external attachment point to a tenant (admin-only).
::

    neutron external-attachment-point-update <external_attachment_point_id> --tenant-id <tenant_id>

Assign external attachment point to neutron network (admin-or-owner).
::

    neutron external-attachment-point-update <external_attachment_point_id> --network-id <network_id>

Create a neutron port referencing an external attachment point to prevent the attachment point from being deleted/re-assigned.
::

    neutron port-create --external-attachment-point-id <external_attachment_point_id>
Use Cases
---------
*Ironic*
Ironic will be responsible for knowing the MAC address and switch port of each
instance it manages. This will either be through manual configuration or through
LLDP messages received from the switch. Using this information, it will create
and manage the life cycle of the external attachment point associated with each
server. Since this process is managed by Ironic and since Neutron ports can't be
assigned to a different network after creation, the external attachment object
never needs to be assigned to the tenant.
*L2 Gateway*
This is a generic layer 2 attachment into the Neutron network. This could be
a link to an access point, switch, server or any arbitrary set of devices that
need to share a broadcast domain with a Neutron network. In this workflow an
admin would create the attachment point with the switch info and either assign
it to a neutron network directly or assign it to a tenant who would assign
it to one of his/her networks as necessary.
Alternatives
------------
There isn't a very good alternative right now. To attach an external device to
a Neutron network, the admin has to lookup the VLAN (or other segment
identifier) for that network and then manually associate a port with that VLAN.
The end device then has to be configured with a static IP in an exclusion range
on the Neutron subnet, or a neutron port with the matching MAC has to be
created manually.
Tenants then cannot assign the physical device to a different network,
or see any indication that the device is actually attached to the network.
Data model impact
-----------------
For plugins that leverage this extension, it will add the following two tables:
external attachment points and external attachment point port bindings (a model
sketch appears at the end of this section).
The external attachment point table will contain the following fields:
* id - standard object UUID
* name - optional name
* description - optional description of the attached devices
(e.g. number of attached servers or a description of the L2 neighbors
on other side)
* attachment_id - an identifier for the backend to identify the port
(e.g. Switch1:Port5).
* attachment_type - a type that will control how the attachment_id should be
formatted (e.g. 'vlan_switch' might require a switch_id, port_num, and an optional_vlan_tag).
These will be enumerated by the backend in a manner that allows plugins to
add new types.
* network_id - the ID of the neutron network that the attachment point should be a member of
* status - indicates status of backend attachment configuration operations (BUILD, ERROR, ACTIVE)
* status_description - provides details of failure in the case of the ERROR state
* tenant_id - the owner of the attachment point
The external attachment point port binding table will contain the following fields:
* port_id - foreign-key reference to associated neutron port
* external_attachment_point_id - foreign-key reference to external attachment point
This will have no impact on the existing data model. Neutron ports associated with
external attachment points can be deleted through the normal neutron port API.
Three attachment_type formats will be included.

* vlan_switch

  * switch_id - hostname, MAC address, IP, etc. that identifies the switch on the network
  * switch_port - port identifier on the switch (e.g. ethernet7)
  * vlan_tag - 'untagged' or a VLAN 1-4095

* ovs_gateway

  * host_id - hostname of the node running Open vSwitch
  * interface_name - name of the interface to attach to the network

* bonded_port_group

  * ports - a list of port objects that can be of any attachment_type
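To make the table descriptions above concrete, the following is a sketch of
what the two models could look like in SQLAlchemy; column types and lengths
are assumptions, not the final schema::

    # Hypothetical sketch; column types/lengths are assumptions.
    import sqlalchemy as sa
    from neutron.db import model_base


    class ExternalAttachmentPoint(model_base.BASEV2):
        __tablename__ = 'external_attachment_points'
        id = sa.Column(sa.String(36), primary_key=True)
        tenant_id = sa.Column(sa.String(255))
        name = sa.Column(sa.String(255))
        description = sa.Column(sa.String(255))
        attachment_id = sa.Column(sa.String(255), nullable=False)
        attachment_type = sa.Column(sa.String(64), nullable=False)
        network_id = sa.Column(sa.String(36),
                               sa.ForeignKey('networks.id'), nullable=True)
        status = sa.Column(sa.String(16))
        status_description = sa.Column(sa.String(255))


    class ExternalAttachmentPointPortBinding(model_base.BASEV2):
        __tablename__ = 'external_attachment_point_port_bindings'
        port_id = sa.Column(sa.String(36), sa.ForeignKey('ports.id'),
                            primary_key=True)
        external_attachment_point_id = sa.Column(
            sa.String(36), sa.ForeignKey('external_attachment_points.id'),
            nullable=False)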
REST API impact
---------------
The following is the API exposed for external attachment points.
.. code-block:: python

    RESOURCE_ATTRIBUTE_MAP = {
        'external_attachment_points': {
            'id': {'allow_post': False, 'allow_put': False,
                   'enforce_policy': True,
                   'validate': {'type:uuid': None},
                   'is_visible': True, 'primary_key': True},
            'tenant_id': {'allow_post': True, 'allow_put': True,
                          'required_by_policy': True,
                          'is_visible': True},
            'name': {'allow_post': True, 'allow_put': True,
                     'enforce_policy': True,
                     'validate': {'type:string': None},
                     'is_visible': True, 'default': ''},
            'description': {'allow_post': True, 'allow_put': True,
                            'enforce_policy': True,
                            'validate': {'type:string': None},
                            'is_visible': True, 'default': ''},
            # the attachment_id format will be enforced in the mixin
            # depending on the attachment_type
            'attachment_id': {'allow_post': True, 'allow_put': False,
                              'enforce_policy': True,
                              'default': False,
                              'validate': {'type:dict': None},
                              'is_visible': True,
                              'required_by_policy': True},
            'attachment_type': {'allow_post': True, 'allow_put': False,
                                'enforce_policy': True,
                                'default': False,
                                'validate': {'type:string': None},
                                'is_visible': True,
                                'required_by_policy': True},
            'network_id': {'allow_post': True, 'allow_put': True,
                           'required_by_policy': True,
                           'is_visible': True},
            'ports': {'allow_post': False, 'allow_put': False,
                      'required_by_policy': False,
                      'is_visible': True},
            'status': {'allow_post': False, 'allow_put': False,
                       'required_by_policy': False, 'is_visible': True},
            'status_description': {'allow_post': False, 'allow_put': False,
                                   'required_by_policy': False,
                                   'is_visible': True}
        },
        'ports': {
            'external_attachment_id': {'allow_post': True, 'allow_put': True,
                                       'is_visible': True, 'default': None,
                                       'validate': {'type:uuid': None}},
        }
    }
The following is the default policy for external attachment points.
.. code-block:: javascript

    {
        "create_external_attachment_point": "rule:admin_only",
        "delete_external_attachment_point": "rule:admin_only",
        "update_external_attachment_point:tenant_id": "rule:admin_only",
        "get_external_attachment_point": "rule:admin_or_owner",
        "update_external_attachment_point": "rule:admin_or_owner"
    }
Security impact
---------------
There should be no security impact to Neutron. However, these ports will
not have security group support so users won't have a way of applying
firewall rules to them.
Notifications impact
--------------------
N/A
Other end user impact
---------------------
The neutron command-line client will be updated with the new external
attachment point CRUD commands.
Performance Impact
------------------
None to plugins that don't use this extension.
For plugins that use this extension it will be limited since most of
this code is only called during external attachment CRUD operations.
Other deployer impact
---------------------
The level of configuration required to use this will depend highly
on the chosen backend. Backends that already have full network control
may not require any additional configuration. Others may require lists
of objects to specify associations between configuration credentials
and network hardware.
Developer impact
----------------
If plugin developers want to use this, they will need to enable the extension
and use the mixin module.
Implementation
==============
Assignee(s)
-----------
Primary assignee:
kevinbenton
Other contributors:
kanzhe-jiang
Work Items
----------
* Complete DB mixin model and extension API attributes
* Update python neutron client to support the new external attachment point commands
* Implement extension in ML2 in a multi-driver compatible way
* Implement an experimental OVS-based network gateway reference backend
(allows physical or virtual ports to be used as external attachment points)
Dependencies
============
N/A
Testing
=======
Unit tests will be included to exercise all of the new DB code and the API.
Tempest tests will leverage the reference OVS-based network gateway implementation.
Documentation Impact
====================
New admin and tenant workflows need to be documented for this extension.
It should not impact any other documentation.
References
==========
The following are notes from the mailing list[1] regarding Ironic use, but they
are not all requirements that will be fulfilled in the initial implementation:
* Bare metal instances are created through Nova API with specifying networking
requirements similarly to virtual instances. Having a mixed environment with
some instances running in VMs while others in bare metal nodes is a possible
scenario. In both cases, networking endpoints are represented as Neutron
ports from the user perspective.
* In case of multi-tenancy with bare metal nodes, network access control
(VLAN isolation) must be secured by adjacent EOR/TOR switches
* It is highly desirable to keep existing Nova workflow, mainly common for
virtual and bare metal instances:
* Instance creation is requested on Nova API, then Nova schedules the
Ironic node
* Nova calls Neutron to create ports in accordance with the user
requirements. However, the node is not yet deployed by that time and
networking is not to be "activated" at that point.
* Nova calls Ironic for "spawning" the instance. The node must be connected
to the provisioning network during the deployment.
* On completion of the deployment phase, "user" ports created in step 2 are
to be activated by Ironic calling Neutron.
* It is a realistic use case that a bare metal node is connected with multiple
NICs to the physical network, therefore the requested Neutron ports need to
be mapped to physical NICs (Ironic view) - attachment points (Neutron view)
* It is a realistic use case that multiple Neutron ports need to be mapped to
the same NIC / attachment, e.g. when the bare metal node needs to be
connected to many VLANs. In that case, the attachment point needs to be
configured to trunking (C-tagging) mode, and C-tag per tenant network needs
to be exposed to the user. NOTE(kevinbenton): this exact workflow will not
be supported in this initial patch because attachment points are a 1-1
mapping to a Neutron network.
* In the simplest case, port-to-attachment point mapping logic could be
placed into Ironic. Mapping logic is the logic that needs to decide which
NIC/attachment point to select for a particular Neutron port within a
specific tenant network. In that case, Ironic can signal the requested
attachment point to Neutron.
* In the long-term, it is highly desirable to prepare the "Neutron port" to
"attachment point" mapping logic for specific situations:
* Different NICs are connected to isolated physical networks, mapping to
consider network topology / accessibility
* User wants to apply anti-affinity rules on Neutron ports, i.e. requesting
ports that are connected to physically different switches for resiliency
* Mapping logic to consider network metrics, interface speed, and uplink
utilization. These aspects argue for placing the mapping logic into Neutron,
which has the necessary network visibility (rather than Ironic)
* In some cases, Ironic node is configured to boot from a particular NIC in
network boot mode. In such cases, Ironic shall be able to inform the mapping
logic that a specific Neutron port (boot network) must be placed to that
particular NIC/attachment point.
* It is highly desirable to support a way for automating NIC-to-attachment
point detection for Ironic nodes. For that purpose, Ironic agent could
monitor the link for LLDP messages from the switch and register an
attachment point with the detected LLDP data at Ironic. Attachment point
discovery could happen either at node discovery time, or before deployment.
1. http://lists.openstack.org/pipermail/openstack-dev/2014-May/thread.html#35298

View File

@@ -1,336 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
===============================================
Support for VDP in Neutron for Network Overlays
===============================================
https://blueprints.launchpad.net/neutron/+spec/vdp-network-overlay
Problem description
===================
A very common requirement in today's data centers is the need to support
more than 4K segments. OpenStack achieves this by using host-based
tunneling with protocols like GRE and VXLAN. A topology more common in
Massively Scalable Data Centers (MSDCs) is one where the compute nodes are
connected to external switches. We will refer to all the external switches
and the other inter-connected switches as a 'Fabric'. This is shown below:
asciiflows::

        +---------------------------------------------------+
        |                   SWITCH FABRIC                    |
        |    +---------+                 +---------+         |
        |    | spine 1 |     . . .       | spine n |         |
        |    +---------+                 +---------+         |
        |                                                    |
        |  +--------+     +----------+       +----------+    |
        |  | Leaf i |     | Leaf x   |       | Leaf z   |    |
        |  +--------+     | +------+ |       | +------+ |    |
        |                 | | VDP  | |       | | VDP  | |    |
        +-----------------+-+------+-+-------+-+------+-+----+
                                |                  |
   +--------------+    +--------+---------+  +-----+------------+
   | OpenStack    |    |       OVS        |  |       OVS        |
   | Controller   |    | +--------------+ |  | +--------------+ |
   | Node         |    | | LLDPad(VDP)  | |  | | LLDPad(VDP)  | |
   |              |    | +--------------+ |  | +--------------+ |
   |              |    | OpenStack        |  | OpenStack        |
   |              |    | Compute node 1   |  | Compute node n   |
   +------+-------+    +--------+---------+  +--------+---------+
          |                     |                     |
          +---------------------+---------------------+
In such topologies, the fabric can support more than 4K segments.
Tunneling starts at the Fabric or more precisely the leaf switch
connected to the compute nodes. This is called the Network based Overlay.
The compute nodes send a regular dot1q frame to the fabric. The
fabric does the appropriate encapsulation of the frame and associates
the port, VLAN to a segment (> 4K) that is known in the fabric. This
fabric segment can be anything based on the overlay technology used
at the fabric. For VXLAN, it can be VNI. For TRILL, it can be the FGL
value and for FabricPath it can be the internal qinq tag. So,
the VLAN tag used by the compute nodes can only be of local significance,
that is, it's understood only between the compute nodes and the
immediately connected switch of the fabric. The immediate question is
how the fabric knows to associate a particular VLAN from a VM to a
segment. An obvious way is to configure this statically in the switches
beforehand. But, a more useful and interesting way is to do this
automatically. The fabric using 'some' method knows the dot1q tag that
will be used by the launched instance and assigns a segment and
configures itself.
There are many solutions floating around to achieve this.
The method that is of interest in this document is by using VSI
Discovery Protocol (VDP), which is a part of IEEE 802.1QBG standard [1].
The purpose of VDP is to automatically signal to the network when a vNIC is
created, destroyed or modified. The network can then appropriately provision
itself, which alleviates the need for manual provisioning. The basic flow
involving VDP is given below. Please refer the specification link of the
blueprint for the diagram.
1. In the first step, the network admin creates a network. (non-VDP
specific)
2. In the second step, the server admin launches a VM, assigning its
vNIC to the appropriate network that was already created by the network
admin. (non-VDP specific).
3. When a VM is launched in a server, the VDP protocol running in the
server signals the UP event of a vNIC to the connected switch passing the
parameters of the VM. The parameters of interest are the information
associated with a vNIC (like the MAC address, UUID etc) and the network
ID associated with the vNIC.
4. The switch can then provision itself. The network ID is the common
entity here. The switch can either contact the central database for the
information associated with the Network ID or these information can be
configured statically in the switch. (again, non-VDP specific).
In Openstack, the first two steps are done at the Controller. The third
step is done as follows:
The Network ID (step 3 above) that we will be using will be the Segment ID
that was returned when the Network was created. In Openstack, when VLAN
type driver is used, it does the VLAN translation. The internal VLAN is
translated to an external VLAN for frames going out and the reverse is
done for the frames coming in. What we need is a similar mechanism, but
the twist here is that the external VLAN is not pre-configured, but is
given by the VDP protocol. When a VM is launched, its information will
be sent to the VDP protocol running in the compute. Each compute will
be running a VDP daemon. The VDP protocol will signal the information
to the fabric (Leaf switch) and the switch will return the VLAN to be
used by the VM. Any frames from the VM then needs to be tagged
with the VLAN sent by VDP so that the fabric can associate it with the
right fabric segment.
To summarize the requirements:
1. Creation of more than 4K networks in Openstack. With the current model,
one cannot use a type driver of VLAN because of the 4K limitation. With
today's model, we need to use either GRE or VXLAN.
2. If a type driver of VXLAN or GRE is used that may mean host based
overlay. Frames will be tunneled at the server and not at the network.
3. Also, the programming of flows should be done after communicating the
vNIC information to lldpad. lldpad communicates with the leaf switches and
returns the VLAN to be used by this VM, as described earlier.
Proposed change
===============
The following changes can be considered as a solution to support Network
based overlays using VDP. External components can still communicate with
LLDPad.
1. Create a new type driver for network-based overlays. This will be very
similar to the existing VLAN type driver but without the 4K range check. The
range can be made configurable.
This type driver will be called "network_overlay" type driver.
2. In the computes (ovs_neutron_agent.py), the following is needed:
2.a. An external bridge (br-ethx) is also needed for this model, so no
change is required in the init. 'enable_tunneling' will be set to false.
2.b. A configuration parameter is required that specifies whether the
network overlay uses a VDP-based mechanism to get the VLAN.
2.c. Another condition needs to be added in the places where
provisioning/reclaiming of the local VLAN and adding/deleting of the
flows is done. The change will be to communicate with VDP and program
the flows using the VLAN returned by VDP.
This is a sample code change::

    def port_bound(...):
        ...
        if net_uuid not in self.local_vlan_map:
            self.provision_local_vlan(net_uuid, network_type,
                                      physical_network, segmentation_id)
        else:
            if (network_type == constants.TYPE_NETWORK_OVERLAY and
                    self.vdp_enabled()):
                self.send_vdp_assoc(...)

    def provision_local_vlan(...):
        ...
        elif network_type == constants.TYPE_FLAT:
            ...
        elif network_type == constants.TYPE_VLAN:
            ...
        elif network_type == constants.TYPE_NETWORK_OVERLAY:
            if self.vdp_enabled():
                self.send_vdp_assoc(...)
                br.add_flow(...)  # using the VLAN returned by the previous step
            else:
                ...

    def reclaim_local_vlan(self, net_uuid):
        ...
        elif network_type == constants.TYPE_FLAT:
            ...
        elif network_type == constants.TYPE_VLAN:
            ...
        elif network_type == constants.TYPE_NETWORK_OVERLAY:
            if self.vdp_enabled():
                self.send_vdp_disassoc(...)
                br.delete_flows(...)
            else:
                ...
3. Save the assigned VDP vlan and other useful information (such as
segmentation_id, compute node name and port id) into the ML2 database for
troubleshooting and debugging purposes. A new RPC method from OVS neutron
agent to neutron server will be added.
Alternatives
------------
VDP Protocol runs between the server hosting the VM's (also running Openstack
agent) and the connected switch. Mechanism driver support is only present at
the Neutron Server. Current type drivers of VXLAN, GRE use host-based overlays.
Type driver of VLAN has the 4K limitation. So a new type driver is required.
Refer [4] for more detailed information.
Duplicating the OVS Neutron agent code for VDP alone is also an alternative.
But, this makes it tough to manage and was also not recommended.
The approach mentioned in this spec and explained in detail in [4]
requires minimal changes to the existing infrastructure and serves the
needs without impacting other areas.
Data model impact
-----------------
A new database table for the network overlay type driver will be created. It
contains the segment ID, physical network, and an allocated flag.
REST API impact
---------------
None
Security impact
---------------
None
Notifications impact
--------------------
None
Other end user impact
---------------------
None
Performance Impact
------------------
None
Other deployer impact
---------------------
Adding configuration parameters in the config file (an illustrative fragment
follows this list) for:
1. network_overlay type driver - This will have a parameter that shows whether
VDP is used.
2. VDP - This has all the VDP-related configuration (such as vsiidtype) [1][2]
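An illustrative ml2_conf.ini fragment (the section and option names below are
assumptions for illustration, not final)::

    [ml2]
    type_drivers = network_overlay
    tenant_network_types = network_overlay

    [ml2_type_network_overlay]
    # Assumed option names for illustration only.
    network_overlay_ranges = physnet1:2:16000
    vdp_enabled = True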
Developer impact
----------------
None
Implementation
==============
Assignee(s)
-----------
Primary assignee:
Nader Lahouti (nlahouti)
Other contributors:
Paddu Krishnan (padkrish)
Work Items
----------
* Add a new type driver for network-based overlays. This will be very
similar to the existing VLAN type driver but without the 4K range check. The
range can be made configurable.
* In the computes (ovs_neutron_agent.py), add functionality for network
overlays using VDP.
Dependencies
============
VDP running as part of the lldpad daemon [2].
Testing
=======
Testing requires the setup shown at the beginning of this page. It is not
mandatory to have physical switches for that topology: the whole setup can be
deployed using virtual switches (i.e. the physical switch fabric can be
replaced by virtual switches).
The regular type driver tests apply here. Integration with VDP is
required for programming the flows.
An implementation of VDP is needed in the switches (physical or virtual).
Documentation Impact
====================
Changes in the OVS neutron agent and configuration details.
References
==========
[1] [802.1QBG-D2.2] IEEE 802.1Qbg/D2.2
[2] http://www.open-lldp.org
[3] https://blueprints.launchpad.net/neutron/+spec/vdp-network-overlay
[4] https://docs.google.com/document/d/1jZ63WU2LJArFuQjpcp54NPgugVSaWNQgy-KV8AQEOoo/edit?pli=1


@@ -1,416 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
========================================
Service group and Service object support
========================================
Launchpad blueprint:
https://blueprints.launchpad.net/neutron/+spec/fwaas-customized-service
In traditional firewall design, a service is used to define a type of traffic
in a firewall. This blueprint creates an extension that allows firewall
administrators to create customized service objects. The customized service
objects can be grouped together to form a service group object.
Problem description
=======================
1. In FWaaS, an administrator can use a port range and protocol inside firewall
rules to define the traffic type, but there is no flexible way to let a user specify
more than one type of traffic in the same rule. To support different traffic types with
the same source address, destination address and action, different rules need to be
created. This makes the process of defining firewall rules hard to scale.
2. Most vendors' (e.g. PAN, Juniper) security policies are implemented based on
services and do not configure protocol and port on the policy directly. We should
support the same in the firewall rule for easier integration.
3. Some vendors also support special attributes for different traffic types. One
usage is to allow different session idle timeout values per traffic type. We cannot
support this today.
Proposed change
==================
We propose to add a new extension with two resources. One is called service group
and one is called service object.
An administrator can use a service object to define a specific type of traffic. The
service objects are grouped into a service group that can be used across Neutron
features, with the initial usage being in firewall rules exposed by the FWaaS feature.
Each service object can be defined with a timeout value that overrides the default
session idle timeout value. In the FWaaS reference implementation, this timeout
will be implemented by using the timeout option in iptables connection tracking.
When used with firewall rules, service groups may be reused among
different firewall rules in the same tenant.
To simplify the initial implementation, the following features are not supported in
the first implementation:
1. sharing a service object among service groups.
2. the case where the cloud provider administrator creates a service
group for tenants to use.
We should be able to add these two features later with backward-compatible APIs.
A service group can only be deleted if no other objects reference it; in the
current case the referencing objects will be firewall rules. A service object can
only be deleted if no service groups reference it.
Since most firewall vendors only support using service groups or service objects to
configure firewall policy, and do not allow users to configure the protocol and port
on the security policy directly, we could potentially deprecate the protocol and port
options from the firewall rule resources, but we will delay this decision until later
releases.
Even though this document targets the firewall rule as the user of the service
group, the service group could also be useful when configuring security groups or
group policy. In the developer session of the Hong Kong OpenStack summit, some
developers suggested making the service group a global resource. Based on this
suggestion, we will make the service group and service object an independent
extension module, so that its usage is not limited to FWaaS.
Naming:
The names "service group" and "service object" were selected because many
firewall vendors use the same names for the same feature in their firewall products.
Also, traditionally on UNIX systems, the supported protocols and ports are listed in
the /etc/services file.
Use case:
An administrator wants to create firewall rules to allow all H.323 traffic to server
2.2.2.2. H.323 traffic can arrive on the following protocol/port pairs::

    tcp/1720
    tcp/1503
    tcp/389
    tcp/522
    tcp/1731
    udp/1719

Without a service group, the administrator has to create 6 different rules, and apart
from protocol and port, every other field in each rule duplicates the same fields in
the other rules. Basically, the administrator needs to create separate rules for each
type of traffic that can hit server 2.2.2.2, and many of those rules are duplicates.
With service groups, the administrator only needs to create a few service groups and
there will not be any duplication among firewall rules. For the current use case, the
administrator only needs to create one service group and one firewall rule. This
reduces the number of firewall rules.
Here is an example of using a service group in a firewall rule:

Assume a tenant has two servers that provide certain web services on ports 8080,
80 and 8000. We can create two firewall rules to permit traffic from any source IP
address to these two servers. The service provided on port 8000 has a very short idle
timeout (10 seconds); the services provided on ports 8080 and 80 have the default
idle timeout.

::

    neutron service-group-create tcp-http-services

This creates a service group named tcp-http-services.

::

    neutron service-object-create --protocol tcp --destination-port 8080 http_object_8080 \
        --service-group tcp-http-services

This creates a service object named http_object_8080 in the tcp-http-services group.

::

    neutron service-object-create --protocol tcp --destination-port 80 http_object_80 \
        --service-group tcp-http-services

This creates a service object named http_object_80 in the tcp-http-services group.

::

    neutron service-object-create --protocol tcp --destination-port 8000 --timeout 10 \
        http_object_8000 --service-group tcp-http-services

This creates a service object named http_object_8000 in the tcp-http-services group;
the service idle timeout for this object is 10 seconds. It implies that a firewall
session created by this type of traffic has an idle timeout of 10 seconds (compared
to the default timeout of 1800 seconds).

::

    neutron firewall-rule-create --destination-ip-address 10.0.2.1 --service-group \
        tcp-http-services --action permit
    neutron firewall-rule-create --destination-ip-address 11.0.2.1 --service-group \
        tcp-http-services --action permit

These two rules permit traffic from any IP address to servers 10.0.2.1 and 11.0.2.1
that matches any service defined in the service group tcp-http-services.
In the current reference design, when the firewall rules get pushed to the firewall
agent, the agent checks whether a rule references a service group (by the service
group id); if it does, the agent queries the service group content from the plugin
and expands the firewall rule into iptables based on the content of the service
group. Since the firewall policy is pushed with all the rules together, it is better
to have the agent query the service group contents so that the policy push message
does not become too big. When deleting rules, the agent does the same so that it can
recover the original iptables rules and delete them.
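
The following is a minimal sketch of that expansion, assuming plain dicts for
the rule and the service objects; the key names are assumptions for this
example and not part of the API defined below.

.. code-block:: python

    # Illustrative sketch only; not the reference agent implementation.
    def expand_rule_with_service_group(rule, service_group):
        """Expand a firewall rule referencing a service group into one
        concrete rule per service object, ready to be rendered into
        iptables rules."""
        expanded = []
        for svc in service_group['service_objects']:
            new_rule = dict(rule)
            new_rule.pop('service_groups', None)
            new_rule['protocol'] = svc['protocol']
            new_rule['source_port'] = svc.get('source_port')
            new_rule['destination_port'] = svc.get('destination_port')
            # A per-service timeout, if present, would be applied via the
            # conntrack timeout option when the iptables rule is built.
            new_rule['timeout'] = svc.get('timeout')
            expanded.append(new_rule)
        return expanded
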
Note:
1. A firewall rule can also be configured with a protocol and port range; in the
current reference design, we will not allow a service group, protocol and port range
to be configured together.
2. Later, we can use ipset in the firewall reference implementation; that would make
it much easier to apply service groups.
Alternatives
------------
Without a service group, the administrator can create a separate rule for each type
of traffic. The issue with this method is high overhead: it may create far too many
rules, each duplicating the same resources.
Also, most firewall vendors have a service-group-like concept in their policy
definition. Adding the notion of a service group to the firewall rule simplifies the
integration path for firewall vendors.
Data model impact
-----------------
Firewall rules:
+-------------------+------------+-----------------+-----------+------+-------------------------+
| Attribute name | Type | Default Value | Required | CRUD | Description |
+-------------------+------------+-----------------+-----------+------+-------------------------+
| service_groups | List | empty | N | CRU | List of service groups |
+-------------------+------------+-----------------+-----------+------+-------------------------+
Service group:
+-------------------+------------+-----------------+-----------+------+-------------------------+
| Attribute name | Type | Default Value | Required | CRUD | Description |
+-------------------+------------+-----------------+-----------+------+-------------------------+
| id | uuid | generated | Y | R | |
+-------------------+------------+-----------------+-----------+------+-------------------------+
| name | String | empty | N | CRU |Name of service group |
+-------------------+------------+-----------------+-----------+------+-------------------------+
| description | String | empty | N | CRU | |
+-------------------+------------+-----------------+-----------+------+-------------------------+
| tenant id | uuid | empty | Y | R |Id of tenant that creates|
| | | | | |service group |
+-------------------+------------+-----------------+-----------+------+-------------------------+
| service objects | list | empty list | N | R |List of service objects |
+-------------------+------------+-----------------+-----------+------+-------------------------+
Service object:
+----------------------+----------------+-----------------+-----------+------+--------------------------+
| Attribute name | Type | Default Value | Required | CRUD |Description |
+----------------------+----------------+-----------------+-----------+------+--------------------------+
| id | uuid | generated | Y | R | |
+----------------------+----------------+-----------------+-----------+------+--------------------------+
| name | String | empty | N | CRU |Service object name |
+----------------------+----------------+-----------------+-----------+------+--------------------------+
| protocol | string | empty | Y | CR |'tcp','udp','icmp','any' |
| | | | | | or protocol id (0-255) |
+----------------------+----------------+-----------------+-----------+------+--------------------------+
| source_port | integer or str | empty | N | CR |This could be either a |
| | | | | |single port (integer or |
| | | | | |string) or a range(string)|
| | | | | |in the form "p1:p2" |
| | | | | |where(0<=p1<=p2 <=65535) |
+----------------------+----------------+-----------------+-----------+------+--------------------------+
| destination_port |integer or str | empty | N | CR | Same as source_port |
+----------------------+----------------+-----------------+-----------+------+--------------------------+
| icmp_code | char | empty | N | CR | ICMP code number |
+----------------------+----------------+-----------------+-----------+------+--------------------------+
| icmp_type | char | empty | N | CR | ICMP type number |
+----------------------+----------------+-----------------+-----------+------+--------------------------+
| timeout | short | empty | N | CR | idle timeout in seconds |
+----------------------+----------------+-----------------+-----------+------+--------------------------+
| tenant_id | uuid | empty | Y | R | |
+----------------------+----------------+-----------------+-----------+------+--------------------------+
New CLIs:
service-group-create
service-group-delete
service-group-list
service-group-show
service-group-update
service-object-create
service-object-delete
service-object-list
service-object-show
service-object-update
REST API impact
---------------
The new resources:
.. code-block:: python

    from neutron.api.v2 import attributes as attr

    RESOURCE_ATTRIBUTE_MAP = {
        'service_groups': {
            'id': {'allow_post': False, 'allow_put': False,
                   'validate': {'type:uuid': None},
                   'is_visible': True,
                   'primary_key': True},
            'name': {'allow_post': True, 'allow_put': True,
                     'is_visible': True, 'default': '',
                     'validate': {'type:name_not_default': None}},
            'description': {'allow_post': True, 'allow_put': True,
                            'is_visible': True, 'default': ''},
            'tenant_id': {'allow_post': True, 'allow_put': False,
                          'required_by_policy': True,
                          'is_visible': True},
            'service_objects': {'allow_post': False, 'allow_put': False,
                                'convert_to': attr.convert_none_to_empty_list,
                                'is_visible': True},
        },
        'service_objects': {
            'id': {'allow_post': False, 'allow_put': False,
                   'validate': {'type:uuid': None},
                   'is_visible': True, 'primary_key': True},
            'name': {'allow_post': True, 'allow_put': True,
                     'is_visible': True, 'default': '',
                     'validate': {'type:name_not_default': None}},
            'protocol': {'allow_post': True, 'allow_put': False,
                         'is_visible': True, 'default': None,
                         'convert_to': _convert_protocol},
            'source_port': {'allow_post': True, 'allow_put': False,
                            'validate': {'type:service_port_range': None},
                            'convert_to': _convert_port_to_string,
                            'default': None, 'is_visible': True},
            'destination_port': {'allow_post': True, 'allow_put': False,
                                 'validate': {'type:service_port_range': None},
                                 'convert_to': _convert_port_to_string,
                                 'default': None, 'is_visible': True},
            'icmp_code': {'allow_post': True, 'allow_put': False,
                          'validate': {'type:icmp_code': None},
                          'convert_to': _convert_icmp_code,
                          'default': None, 'is_visible': True},
            'icmp_type': {'allow_post': True, 'allow_put': False,
                          'validate': {'type:icmp_type': None},
                          'convert_to': _convert_icmp_type,
                          'default': None, 'is_visible': True},
            'timeout': {'allow_post': True, 'allow_put': False,
                        'validate': {'type:range': [0, 65535]},
                        'convert_to': attr.convert_to_int,
                        'default': 0, 'is_visible': True},
            'tenant_id': {'allow_post': True, 'allow_put': False,
                          'required_by_policy': True,
                          'is_visible': True},
        },
    }

    # New attribute added to the existing firewall_rules resource:
    RESOURCE_ATTRIBUTE_MAP = {
        'firewall_rules': {
            'service_groups': {'allow_post': True, 'allow_put': True,
                               'convert_to': attr.convert_none_to_empty_list,
                               'default': None, 'is_visible': True},
        }
    }
+---------------+----------------------------+----------------------+
|Object |URI |Type |
+---------------+----------------------------+----------------------+
|service group |/service-groups |GET |
+---------------+----------------------------+----------------------+
|service group |/service-groups |POST |
+---------------+----------------------------+----------------------+
|service group |/service-groups/{id} |GET |
+---------------+----------------------------+----------------------+
|service group |/service-groups/{id} |PUT |
+---------------+----------------------------+----------------------+
|service group |/service-groups/{id} |DELETE |
+---------------+----------------------------+----------------------+
|service object |/service-objects |GET |
+---------------+----------------------------+----------------------+
|service object |/service-objects |POST |
+---------------+----------------------------+----------------------+
|service object |/service-objects/{id} |GET |
+---------------+----------------------------+----------------------+
|service object |/service-objects/{id} |PUT |
+---------------+----------------------------+----------------------+
|service object |/service-objects/{id} |DELETE |
+---------------+----------------------------+----------------------+
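
For illustration, request bodies for the create operations could look like the
following sketch. The exact wire format is not defined by the table above; in
particular, how a service object is associated with its group is not spelled
out, so the service_group_id key below is an assumption.

.. code-block:: python

    # Illustrative request bodies only; not taken from an implementation.
    service_group_body = {
        'service_group': {
            'name': 'tcp-http-services',
            'description': 'HTTP front-end services',
        }
    }

    service_object_body = {
        'service_object': {
            'name': 'http_object_8000',
            'protocol': 'tcp',
            'destination_port': '8000',
            'timeout': 10,
            # Assumed association key; see note above.
            'service_group_id': '<service-group-uuid>',
        }
    }
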
Security impact
---------------
* Does this change touch sensitive data such as tokens, keys, or user data?
No
* Does this change alter the API in a way that may impact security, such as
a new way to access sensitive information or a new way to login?
No
* Does this change involve cryptography or hashing?
No
* Does this change require the use of sudo or any elevated privileges?
No
* Does this change involve using or parsing user-provided data? This could
be directly at the API level or indirectly such as changes to a cache layer.
Yes
* Can this change enable a resource exhaustion attack, such as allowing a
single API interaction to consume significant server resources? Some examples
of this include launching subprocesses for each connection, or entity
expansion attacks in XML.
No
Notifications impact
--------------------
None
Other end user impact
---------------------
None
Performance Impact
------------------
None
Other deployer impact
---------------------
None
Developer impact
----------------
None
Implementation
==============
Assignee(s)
-----------
Yi Sun beyounn@gmail.com
Vishnu Badveli breddy@varmour.com
Work Items
------------
* API and database
* Reference implementation
* python-neutronclient
Dependencies
============
None
Testing
=======
Both unit tests and tempest tests will be required.
Documentation Impact
====================
Documentation for both administrators and end users will have to be
provided. Administrators will need to know how to configure the service
group by using the service group API and python-neutronclient.
References
==========
None


@@ -1,162 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
=========================================================
Allow the external IP address of a router to be specified
=========================================================
https://blueprints.launchpad.net/neutron/+spec/specify-router-ext-ip
There currently is no way to specify the IP address given to a
router on its external port. The API also does not return the
IP address that is assigned to the router. This is a problem
if the router is running a service (e.g. VPNaaS) that requires
the address to be known. This blueprint allows external IPs to
be set (admin-only by default) and allows the IP to be read.
Problem description
===================
The current router API doesn't allow any control over the IP
address given to the external interface on router objects. It
also blocks tenants from reading the IP address it is given.
This makes it difficult for tenants in scenarios where the IP
address needs to be known or needs to be set to a known address.
For example, if the router is running VPNaaS, the tenant can't
get the address required to issue to clients. Or, even if the address
is already known to clients, there is no way to delete the router,
move it to another project, and request the same address.
Proposed change
===============
Allow the external IP to be specified for a router in the
external_gateway_info passed to router_update. By default, this
will be restricted by policy.json to an admin-only operation.
Include the external IP addresses in the get_router response so
tenants can see the addresses.
The format of this will be the standard fixed_ips format used
when specifying an IP address for a normal port so it offers
the flexibility of specifying a subnet_id instead of an IP directly.
The fixed_ips format will also handle use cases where the router
has multiple external addresses. For example, a logical
router may be implemented in a distributed fashion with multiple external
addresses to distribute the source NAT traffic. In the current reference
implementation, it will just raise an exception if a user tries to add
more than one.
Requested addresses will be permitted to be any address inside any of the
subnets associated with the external network except for the gateway addresses.
They will not be affected by allocation pool ranges.
If an address is already in use, the API will return a BadRequest
error (HTTP 400).
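
For illustration, a router_update request body under this proposal might look
like the following sketch (values are placeholders)::

    # Illustrative only; UUIDs and the IP address are placeholders.
    body = {
        'router': {
            'external_gateway_info': {
                'network_id': '<external-network-uuid>',
                'external_fixed_ips': [
                    {'subnet_id': '<external-subnet-uuid>',
                     'ip_address': '203.0.113.10'}
                ]
            }
        }
    }
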
Alternatives
------------
N/A
Data model impact
-----------------
N/A
REST API impact
---------------
Adds a new external_fixed_ips field to the external_gateway_info dict
in the router object.
+-------------------+--------+----------+----------+------------------+--------------+
|Attribute |Type |Access |Default |Validation/ |Description |
|Name | | |Value |Conversion | |
+===================+========+==========+==========+==================+==============+
|external_fixed_ips |fixed_ip|RO, owner |generated |Same as fixed_ips |External IP |
| |format |RW, admin | |field validation |addresses |
| |for | | |for normal ports. | |
| |ports | | | | |
+-------------------+--------+----------+----------+------------------+--------------+
Security impact
---------------
N/A
Notifications impact
--------------------
N/A
Other end user impact
---------------------
N/A
Performance Impact
------------------
N/A
Other deployer impact
---------------------
N/A
Developer impact
----------------
N/A
Implementation
==============
Assignee(s)
-----------
kevinbenton
Work Items
----------
Make the changes to the L3 db code, API, and policy.
Update neutronclient
Dependencies
============
N/A
Testing
=======
Unit tests should be adequate since there will be no new behavior outside
of the IP address assignment, which is well contained in the neutron code.
Documentation Impact
====================
Indicate that tenants can see their router's external IP and that
admins can specify router IPs.
References
==========
https://bugs.launchpad.net/neutron/+bug/1255142
https://bugs.launchpad.net/neutron/+bug/1188427


@@ -1,199 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
===========================================
Clean up resources when a tenant is deleted
===========================================
https://blueprints.launchpad.net/neutron/+spec/tenant-delete
Problem description
===================
OpenStack projects currently do not delete tenant resources when a tenant
is deleted. For example, a user registers with a public cloud, and a script
auto generates a Keystone user and tenant, along with a Neutron router and
a tenant network. Later, the user changes his mind and removes his subscription
from the service. An admin deletes the tenant, yet the router and network
remain, with a tenant id of a tenant that no longer exists. This issue can
cause ballooning databases and operational issues for long-standing clouds.
Proposed change
===============
1) Expose an admin CLI tool that either accepts a tenant-id and deletes its
resources, or finds all tenants with leftover resources and deletes them.
It does this by listing the tenants in Keystone and deleting any resources
that do not belong to those tenants. The tool will support generating a JSON
document that details all to-be-deleted tenants and their resources,
as well as a command to delete said tenants.
2) The ability to configure Neutron to listen and react to Keystone tenant
deletion events, by sending a delete on each orphaned resource.
The Big Picture
---------------
Solving the issue in Neutron alone is only the beginning. The goal of this
blueprint is to implement the needed changes in Neutron, so the implementation
may serve as a reference for future work and discussion for an OpenStack-
wide solution.
I aim to lead a discussion about this issue in the K summit with leads
from other projects.
Alternatives
------------
Should the events be reacted upon? According to the principle of least surprise,
I think an admin would expect tenant resources to be deleted when he deletes a
tenant, and not have to invoke a CLI tool manually, or set up a cron job.
Currently Keystone does not emit notifications by default, which poses a
challenge. I'll try to change the default in Keystone, as well as make Neutron
listen and react by default.
How would an OpenStack-wide solution work? Would each project expose its
own CLI utility, which an admin would have to invoke one by one? Or would
projects simply listen to Keystone events? Another alternative would be
that each project would implement an API. I argue that it introduces higher
coupling for no significant gain.
Data model impact
-----------------
None
REST API impact
---------------
None
Security impact
---------------
Neutron will now be willing to accept RPC messages in a well known format,
deleting user data.
Notifications impact
--------------------
Configuration changes to listen to the same exchange and topic as Keystone
notifications.
Other end user impact
---------------------
See documentation section.
Performance Impact
------------------
Deleting all tenant resources is a costly operation and should probably
be turned off for scale purposes. A cron job can be set up to perform cleanup
during off hours.
Other deployer impact
---------------------
| Keystone needs to be configured to emit notifications via (Non-default):
| notification_driver = messaging
| The following keys need to match in keystone.conf in the general section,
and neutron.conf in the [identity] section:
| control_exchange = openstack
| notification_topics = notifications
They match by default.
The new neutron-delete-tenant executable may be configured as a cron job.
Developer impact
----------------
None
Implementation
==============
Assignee(s)
-----------
Assaf Muller
Work Items
----------
1) Implement a CLI tool that detects "orphaned" tenants and deletes all of
their resources.
2) Make Neutron listen to Keystone notifications.
3) Implement a callback that accepts a tenant id and deletes all of its
resources. This will be implemented as a new service plugin. It will reuse
code from step 1.
Note that Neutron has intra-dependencies. For example, you cannot delete
a network with active ports. This means that resources need to be deleted
in a specific order. Shared resources need to be dealt with in a special manner:
if multiple tenants need to be deleted (i.e. more than one orphaned tenant),
the solution is to delete resources breadth first, not tenant by tenant,
i.e. first delete all ports of all tenants, then all networks.
An issue remains if the admin tries to delete a tenant with a shared network
without first deleting another tenant with ports belonging to the shared
network. The non-shared resources will be successfully deleted but the shared
network will not. At this point the first tenant will have a network which
was not cleaned up and has no reason to be deleted, unless at some point in the
future the active ports are deleted, at which point the CLI tool may be used with
the --automatic flag. I'm fine with this deficiency.
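
A minimal sketch of this breadth-first ordering, using python-neutronclient
calls and ignoring details such as router interface and gateway detachment,
could look like::

    # Illustrative sketch only; a real tool must also detach router
    # interfaces/gateways and deal with shared networks as described above.
    from neutronclient.v2_0 import client as neutron_client

    DELETION_ORDER = [
        ('list_floatingips', 'delete_floatingip', 'floatingips'),
        ('list_ports', 'delete_port', 'ports'),
        ('list_routers', 'delete_router', 'routers'),
        ('list_subnets', 'delete_subnet', 'subnets'),
        ('list_networks', 'delete_network', 'networks'),
        ('list_security_groups', 'delete_security_group', 'security_groups'),
    ]


    def delete_orphaned_resources(neutron, orphaned_tenant_ids):
        """Delete resources of all orphaned tenants one resource type at a
        time (breadth first), so dependencies shared between tenants are
        released before their parents."""
        for list_call, delete_call, collection in DELETION_ORDER:
            for item in getattr(neutron, list_call)()[collection]:
                if item.get('tenant_id') in orphaned_tenant_ids:
                    getattr(neutron, delete_call)(item['id'])

    # neutron = neutron_client.Client(username=..., password=...,
    #                                 tenant_name=..., auth_url=...)
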
Dependencies
============
I want to make Keystone emit notifications by default.
Testing
=======
I think unit tests are not the correct level to test this feature.
What we need is both functional and integration tests.
Functional tests are currently proposed as "unit" tests: like a significant
number of currently implemented unit tests, these are in fact functional tests,
making actual API calls to an in-memory DB. This is the current proposal
for the Neutron side of things: make API calls to create resources under a
certain tenant, import the Identity service plugin and "delete" the tenant.
At this point assertions will be made to verify that the previously created
resources are now gone.
Maru has been leading an effort [1] to move these types of tests to the
functional tree and CI job, as well as make calls directly to the plugins and
not via the API. There remains an open question whether this blueprint should
depend on Maru's, placing more focus on Maru's efforts, which are missing a
framework that enables calls to the plugins for all types of resources.
What would still be missing is coverage for the Keystone to Neutron
notifications, and this would obviously belong to Tempest. This would
require a configuration change to the gate, as Keystone does not emit
notifications by default.
Finally, I propose that the --automatic functionality of the CLI tool would
be tested in Neutron, by mocking out the call to Keystone.
Reminder: The CLI tool makes an API call to get a Keystone tenant list, then
goes through the resources in Neutron, deleting orphaned resources.
The mock would return a predetermined list of tenants. Note: I propose
to test the 'automatic' functionality of the Identity service plugin directly,
not via the CLI tool.
Documentation Impact
====================
There will be new configuration options in neutron.conf, as well as a new
CLI tool to delete left over resources.
| neutron.conf:
| [identity]
| control_exchange = openstack
| notification_topics = notifications
| CLI tool:
| neutron-delete-tenant --tenant-id=<tenant_id> --automatic --generate --delete
| --tenant-id will delete all resources owned by <tenant_id>.
| --generate produces a JSON report of all tenants that no longer exist in
| Keystone, and their resources. The report will be placed in $STATE_PATH/
| identity.
| --delete will delete all resources listed in $STATE_PATH/identity.
| --automatic performs --generate and then --delete.
| --tenant-id is mutually exclusive with the other three options.
References
==========
[1] https://blueprints.launchpad.net/neutron/+spec/neutron-testing-refactor


@@ -1,259 +0,0 @@
..
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
=============================================
Brocade Neutron L3 Plugin for Vyatta vRouter
=============================================
Launchpad blueprint:
https://blueprints.launchpad.net/neutron/+spec/l3-plugin-brocade-vyatta-vrouter
This blueprint is for implementing an L3 service plugin for the Brocade Vyatta
vRouter appliance.
The Brocade Neutron L3 plugin for the Vyatta vRouter supports CRUD operations on
the vRouter, adding/removing interfaces from the vRouter, and floating IPs for VMs.
It performs vRouter VM lifecycle management by calling Nova APIs during the
Create and Delete Router calls. Once the vRouter VM is up, the L3 plugin connects
to the REST API endpoint exposed by the vRouter VM to perform the appropriate
configuration. The L3 plugin supports adding/removing router interfaces by
attaching/detaching neutron ports to/from the vRouter VM using the Nova API.
Basic workflow is as shown below:
::

  +---------------------------+
  |                           |
  |      Neutron Server       |
  |                           |
  | +-----------------------+ |        +---------------+
  | | L3 Plugin for Brocade | |        |               |
  | | Vyatta vRouter        +---------->  Nova API     |
  | |                       | |        |               |
  +-+----------+------------+-+        +-------+-------+
               |                               |
               |                               |
               | REST API                      |
               |                               |
       +-------V----------+                    |
       |                  |                    |
       |  Brocade Vyatta  |                    |
       |  vRouter VM      <--------------------+
       |                  |
       +------------------+

Problem description
===================
Cloud service providers want to use Brocade Vyatta vRouter as a tenant virtual
router in their OpenStack cloud. In order to perform the vRouter VM lifecycle
management and required configurations, a new Neutron L3 plugin for Brocade
Vyatta vRouter is required.
Proposed change
===============
Brocade Vyatta vRouter L3 plugin implements the below operations:
- Create/Update/Delete Routers
- Configure/Clear External Gateway
- Create/Delete Router-interfaces
- Create/Delete Floating-IPs
During tenant router creation, the L3 plugin will invoke the Nova API using the
admin tenant credentials specified in the plugin configuration file (more
details in the deployer impact section). The Nova API is invoked to provision a
Vyatta vRouter VM on demand in the admin tenant (service VM tenant), using the
tenant-id, image-id, management network name and flavor-id specified in the
plugin configuration file. The Vyatta vRouter VM's UUID is used while creating
the Neutron router so that the router's UUID is the same as the VM's UUID.
During vRouter VM creation, we will poll the status of the VM synchronously.
Only when it becomes 'Active' do we create the neutron router and declare the
router creation process successful. Once the vRouter VM is up, the L3 plugin
will use the REST API to configure the router name and administration state.
If the L3 plugin encounters an error from the Nova API during vRouter VM
creation, or while using the REST API to communicate with the vRouter VM,
router creation will fail and an appropriate error message is returned to the
user.
When an external gateway is configured, the L3 plugin will create a neutron port
in the external network and attach the port to the vRouter VM using the Nova API.
The Vyatta vRouter image will recognize the hot-plugged interface. Once the port
is attached, the L3 plugin will use the REST API to configure the interface IP
address on the Ethernet interface. It will also create SNAT rules for all the
private subnets configured on router interfaces using the REST API. The SNAT
rules and the external gateway port will be deleted when the external gateway
configuration is removed. If the L3 plugin encounters an error from the Nova API
during port attachment, or while using the REST API to communicate with the
vRouter VM, the external gateway configuration will fail and an appropriate
error message is returned to the user.
While adding a router interface, the L3 plugin will create a neutron port in the
tenant network and attach the port to the vRouter VM using the Nova API. The
Vyatta vRouter image will recognize the hot-plugged interface. Once the port is
attached, the L3 plugin will use the REST API to configure the subnet on the
Ethernet interface. It will also create an SNAT rule for the router interface
subnet using the REST API if an external gateway is configured on the router.
If the L3 plugin encounters an error from the Nova API during port attachment,
or while using the REST API to communicate with the vRouter VM, the router
interface addition will fail and an appropriate error message is returned to
the user.
While deleting a router interface, the L3 plugin will remove the Ethernet
interface IP address configuration and the SNAT rule (if configured because of
an external gateway) using the REST API, then detach and delete the neutron port
in the tenant network. If the L3 plugin encounters an error from the Nova API
during port detachment, or while using the REST API to communicate with the
vRouter VM, the router interface deletion will fail and an appropriate error
message is returned to the user.
When floating IPs are configured, the L3 plugin will create SNAT and DNAT rules
for the translation between floating IPs and private network IPs using the REST
API. The SNAT and DNAT rules will be deleted when the floating IPs are
disassociated. If the L3 plugin encounters an error while using the REST API to
communicate with the vRouter VM, the floating IP configuration will fail and an
appropriate error message is returned to the user.
While deleting the router, the L3 plugin will first validate that no router
interfaces or external gateway configuration exist. It will then delete the
router VM using the Nova API and delete the Neutron router.
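
For illustration only, the on-demand vRouter VM boot during router creation
might look like the following sketch; the function and dictionary key names are
assumptions (e.g. the management network is referenced here by a hypothetical
ID rather than the configured name), not the plugin code under review::

    # Illustrative sketch; not the actual plugin implementation.
    import time

    from novaclient.v1_1 import client as nova_client


    def boot_vrouter_vm(cfg, router_name, timeout=300, interval=5):
        """Boot a Vyatta vRouter VM in the service tenant and wait for it
        to become ACTIVE before the Neutron router is created."""
        nova = nova_client.Client(cfg['tenant_admin_name'],
                                  cfg['tenant_admin_password'],
                                  cfg['tenant_id'],
                                  cfg['keystone_url'])
        server = nova.servers.create(
            name=router_name,
            image=cfg['image_id'],
            flavor=cfg['flavor'],
            nics=[{'net-id': cfg['management_net_id']}])  # hypothetical key
        deadline = time.time() + timeout
        while time.time() < deadline:
            server = nova.servers.get(server.id)
            if server.status == 'ACTIVE':
                # The Neutron router is then created with this VM's UUID.
                return server
            if server.status == 'ERROR':
                raise RuntimeError('vRouter VM failed to boot')
            time.sleep(interval)
        raise RuntimeError('Timed out waiting for the vRouter VM')
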
Alternatives
------------
None
Data model impact
-----------------
None
REST API impact
---------------
None
Security impact
---------------
None
Notifications impact
--------------------
None
Other end user impact
---------------------
While creating the Neutron router, the end user has to wait for the vRouter VM
to be up (as it is spawned on demand). This can take around 20 seconds.
Performance Impact
------------------
None
Other deployer impact
---------------------
1. Edit Neutron configuration file /etc/neutron/neutron.conf to specify
Vyatta vRouter L3 plugin:
service_plugins =
neutron.plugins.brocade.vyatta.vrouter_neutron_plugin.VyattaVRouterPlugin
2. Import the Brocade Vyatta vRouter image using the below glance command:
glance image-create --name "Vyatta vRouter" --is-public true
--disk-format qcow2 --file ./vyatta_l3_plugin/image/vyatta_vrouter.qcow2
--container-format bare
3. Note the provider management network name. This needs to be specified in
the plugin configuration.
4. Configure the L3 plugin configuration file
/etc/neutron/plugins/brocade/vyatta/vrouter.ini with the below parameters:
# Tenant admin name
tenant_admin_name = admin
# Tenant admin password
tenant_admin_password = devstack
# Admin or service VM Tenant-id
tenant_id = <UUID of the admin or service VM tenant>
# Keystone URL. Example: http://<Controller node>:5000/v2.0/
keystone_url = http://10.18.160.5:5000/v2.0
# Vyatta vRouter Image id. Image should be imported using Glance
image_id = <UUID>
# vRouter VM Flavor-id (Small)
flavor = 2
# vRouter Management network name
management_network = management
Once configured, the L3 plugin will be invoked for CRUD operations on
tenant routers, adding/removing router interfaces, and floating IP support.
Developer impact
----------------
None
Implementation
==============
Assignee(s)
-----------
Primary assignee:
natarajk
Other contributors:
None
Work Items
----------
Brocade Vyatta vRouter L3 plugin source code files:
vrouter_neutron_plugin.py - Implements L3 API and calls the vRouter driver.
vrouter_driver.py - Uses Nova API for vRouter VM provisioning and
vRouter REST API for configuration.
Code is available for review:
https://review.openstack.org/#/c/102336/
Dependencies
============
None
Testing
=======
- Complete Unit testing coverage of the code will be included.
- For tempest test coverage, 3rd party testing will be provided (Brocade CI).
- Brocade CI will report on all changes affecting this plugin.
- Testing is done using devstack and Vyatta vRouter.
Documentation Impact
====================
Will require new documentation in Brocade sections.
References
==========
None