Merge remote-tracking branch 'origin/master' into merge-branch

Change-Id: I42fb51c428c5ddf39501b9dc1792add62206f4d6
Kyle Mestery 2015-07-31 13:41:15 +00:00
commit 9badcd249d
275 changed files with 9334 additions and 6564 deletions


@@ -65,7 +65,7 @@ do whatever they are supposed to do. In a callback-less world this would work li
# A gets hold of the references of B and C
# A calls B
# A calls C
-B->my_random_method_for_knowning_about_router_created()
+B->my_random_method_for_knowing_about_router_created()
C->my_random_very_difficult_to_remember_method_about_router_created()

If B and/or C change, things become sour. In a callback-based world, things become a lot
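The callback-based alternative the text describes can be sketched as a tiny registry in Python (an illustrative sketch only; names such as ``subscribe``/``notify`` are assumptions here, not Neutron's actual ``neutron.callbacks`` API):

```python
# Minimal callback registry sketch: B and C subscribe to an event;
# A only notifies the registry and never needs to know B's or C's methods.
_callbacks = {}

def subscribe(event, callback):
    # register interest in an event
    _callbacks.setdefault(event, []).append(callback)

def notify(event, **kwargs):
    # fire every callback registered for the event, in subscription order
    for cb in _callbacks.get(event, []):
        cb(**kwargs)

received = []
subscribe('router.created', lambda **kw: received.append(('B', kw['router_id'])))
subscribe('router.created', lambda **kw: received.append(('C', kw['router_id'])))

notify('router.created', router_id='r1')  # A's only responsibility
```

A only announces the event; B and C decide for themselves what to run, so renaming their internal methods no longer breaks A.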


@@ -0,0 +1,9 @@
=================================
Client command extension support
=================================
The client command extension adds support for extending the neutron client while
considering ease of creation.
The full document can be found in the python-neutronclient repository:
http://docs.openstack.org/developer/python-neutronclient/devref/client_command_extensions.html


@ -1,14 +1,11 @@
Contributing new extensions to Neutron Contributing new extensions to Neutron
====================================== ======================================
**NOTE!** .. note:: **Third-party plugins/drivers which do not start decomposition in
--------- Liberty will be marked as deprecated and removed before the Mitaka-3
milestone.**
**Third-party plugins/drivers which do not start decomposition in Liberty will Read on for details ...
be marked as deprecated, and they will be removed before the Mxxx-3
milestone.**
Read on for details ...
Introduction Introduction
@ -46,7 +43,7 @@ by allowing third-party code to exist entirely out of tree. Further extension
mechanisms have been provided to better support external plugins and drivers mechanisms have been provided to better support external plugins and drivers
that alter the API and/or the data model. that alter the API and/or the data model.
In the Mxxx cycle we will **require** all third-party code to be moved out of In the Mitaka cycle we will **require** all third-party code to be moved out of
the neutron tree completely. the neutron tree completely.
'Outside the tree' can be anything that is publicly available: it may be a repo 'Outside the tree' can be anything that is publicly available: it may be a repo


@@ -23,6 +23,152 @@ should also be added in model. If default value in database is not needed,
business logic.
How we manage database migration rules
--------------------------------------
Since Liberty, Neutron maintains two parallel alembic migration branches.
The first one, called 'expand', is used to store expansion-only migration
rules. Those rules are strictly additive and can be applied while
neutron-server is running. Examples of additive database schema changes are:
creating a new table, adding a new table column, adding a new index, etc.
The second branch, called 'contract', is used to store those migration rules
that are not safe to apply while neutron-server is running. Those include:
column or table removal, moving data from one part of the database into another
(renaming a column, transforming a single table into multiple, etc.), and
introducing or modifying constraints.
The intent of the split is to allow invoking those safe migrations from
'expand' branch while neutron-server is running, reducing downtime needed to
upgrade the service.
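Why additive changes are safe to apply under a running server can be seen with a small self-contained sketch (illustrative only, using SQLite here; Neutron itself drives these rules through alembic against its own database):

```python
import sqlite3

# "Old" application code only knows about the original column.
conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE routers (id TEXT)")
conn.execute("INSERT INTO routers VALUES ('r1')")

# Expand-style migration: purely additive, applied while old code runs.
conn.execute("ALTER TABLE routers ADD COLUMN description TEXT")

# The old query still works unchanged after the expansion.
rows = conn.execute("SELECT id FROM routers").fetchall()
# rows == [('r1',)]

# A contract-style change (e.g. dropping 'id') would break that query,
# which is why such rules wait until the old server is stopped.
```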
To apply just expansion rules, execute:
- neutron-db-manage upgrade liberty_expand@head
After the first step is done, you can stop neutron-server and apply the
remaining non-expansive migration rules, if any:
- neutron-db-manage upgrade liberty_contract@head
and finally, start your neutron-server again.
If you are not interested in applying safe migration rules while the service is
running, you can still upgrade the database the old way: stop the service, and
then apply all available rules:
- neutron-db-manage upgrade head[s]
It will apply all the rules from both the expand and the contract branches, in
proper order.
Expand and Contract Scripts
---------------------------
Under the obsolete "branchless" design, a migration script indicates a specific
"version" of the schema and includes directives that apply all necessary
changes to the database at once. If we look for example at the script
``2d2a8a565438_hierarchical_binding.py``, we will see::
    # .../alembic_migrations/versions/2d2a8a565438_hierarchical_binding.py

    def upgrade():

        # .. inspection code ...

        op.create_table(
            'ml2_port_binding_levels',
            sa.Column('port_id', sa.String(length=36), nullable=False),
            sa.Column('host', sa.String(length=255), nullable=False),
            # ... more columns ...
        )

        for table in port_binding_tables:
            op.execute((
                "INSERT INTO ml2_port_binding_levels "
                "SELECT port_id, host, 0 AS level, driver, segment AS segment_id "
                "FROM %s "
                "WHERE host <> '' "
                "AND driver <> '';"
            ) % table)

        op.drop_constraint(fk_name_dvr[0], 'ml2_dvr_port_bindings', 'foreignkey')
        op.drop_column('ml2_dvr_port_bindings', 'cap_port_filter')
        op.drop_column('ml2_dvr_port_bindings', 'segment')
        op.drop_column('ml2_dvr_port_bindings', 'driver')

        # ... more DROP instructions ...
The above script contains directives that fall under both the "expand" and
"contract" categories, as well as some data migrations. The ``op.create_table``
directive is an "expand"; it may be run safely while the old version of the
application still runs, as the old code simply doesn't look for this table.
The ``op.drop_constraint`` and ``op.drop_column`` directives are "contract"
directives (the drop column more so than the drop constraint); running at
least the ``op.drop_column`` directives means that the old version of the
application will fail, as it will attempt to access columns which no longer
exist.
The data migrations in this script add rows to the newly created
``ml2_port_binding_levels`` table.

Under the new migration script directory structure, the above script would be
split into two scripts: an "expand" and a "contract" script::
    # expansion operations
    # .../alembic_migrations/versions/liberty/expand/2bde560fc638_hierarchical_binding.py

    def upgrade():
        op.create_table(
            'ml2_port_binding_levels',
            sa.Column('port_id', sa.String(length=36), nullable=False),
            sa.Column('host', sa.String(length=255), nullable=False),
            # ... more columns ...
        )


    # contraction operations
    # .../alembic_migrations/versions/liberty/contract/4405aedc050e_hierarchical_binding.py

    def upgrade():
        for table in port_binding_tables:
            op.execute((
                "INSERT INTO ml2_port_binding_levels "
                "SELECT port_id, host, 0 AS level, driver, segment AS segment_id "
                "FROM %s "
                "WHERE host <> '' "
                "AND driver <> '';"
            ) % table)

        op.drop_constraint(fk_name_dvr[0], 'ml2_dvr_port_bindings', 'foreignkey')
        op.drop_column('ml2_dvr_port_bindings', 'cap_port_filter')
        op.drop_column('ml2_dvr_port_bindings', 'segment')
        op.drop_column('ml2_dvr_port_bindings', 'driver')

        # ... more DROP instructions ...
The two scripts would be present in different subdirectories and also part of
entirely separate versioning streams. The "expand" operations are in the
"expand" script, and the "contract" operations are in the "contract" script.
For the time being, data migration rules also belong to the contract branch.
The expectation is that live data migrations will eventually move into
middleware that is aware of the different database schema elements to converge
on, but Neutron is not there yet.
Scripts that contain only expansion or contraction rules do not require a split
into two parts.
If a contraction script depends on a script from the expansion stream, the
following directive should be added to the contraction script::

    depends_on = ('<expansion-revision>',)
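Putting it together, a contract script that waits for an expansion rule might look like the following skeleton (the ``down_revision`` value is a placeholder; the revision IDs reuse the hierarchical-binding example above):

```python
# .../versions/liberty/contract/4405aedc050e_hierarchical_binding.py

# alembic revision identifiers
revision = '4405aedc050e'
down_revision = '<previous-contract-revision>'  # placeholder

# ensure alembic orders this script after the expand-branch revision
depends_on = ('2bde560fc638',)


def upgrade():
    # contract-only directives (op.drop_column, op.drop_constraint, ...)
    pass
```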
Tests to verify that database migrations and models are in sync
---------------------------------------------------------------


@@ -0,0 +1,74 @@
Keep DNS Nameserver Order Consistency In Neutron
================================================
In Neutron subnets, DNS nameservers are given priority when created or updated.
This means that if you create a subnet with multiple DNS servers, their order
is retained, and guests will receive the DNS servers in the order in which you
supplied them when the subnet was created. The same applies to update
operations on subnets that add, remove, or update DNS servers.
Get Subnet Details Info
-----------------------
::
changzhi@stack:~/devstack$ neutron subnet-list
+--------------------------------------+------+-------------+--------------------------------------------+
| id                                   | name | cidr        | allocation_pools                           |
+--------------------------------------+------+-------------+--------------------------------------------+
| 1a2d261b-b233-3ab9-902e-88576a82afa6 | | 10.0.0.0/24 | {"start": "10.0.0.2", "end": "10.0.0.254"} |
+--------------------------------------+------+-------------+--------------------------------------------+
changzhi@stack:~/devstack$ neutron subnet-show 1a2d261b-b233-3ab9-902e-88576a82afa6
+------------------+--------------------------------------------+
| Field            | Value                                      |
+------------------+--------------------------------------------+
| allocation_pools | {"start": "10.0.0.2", "end": "10.0.0.254"} |
| cidr             | 10.0.0.0/24                                |
| dns_nameservers  | 1.1.1.1                                    |
|                  | 2.2.2.2                                    |
|                  | 3.3.3.3                                    |
| enable_dhcp      | True                                       |
| gateway_ip       | 10.0.0.1                                   |
| host_routes      |                                            |
| id               | 1a2d261b-b233-3ab9-902e-88576a82afa6       |
| ip_version       | 4                                          |
| name             |                                            |
| network_id       | a404518c-800d-2353-9193-57dbb42ac5ee       |
| tenant_id        | 3868290ab10f417390acbb754160dbb2           |
+------------------+--------------------------------------------+
Update Subnet DNS Nameservers
-----------------------------
::
neutron subnet-update 1a2d261b-b233-3ab9-902e-88576a82afa6 \
--dns_nameservers list=true 3.3.3.3 2.2.2.2 1.1.1.1
changzhi@stack:~/devstack$ neutron subnet-show 1a2d261b-b233-3ab9-902e-88576a82afa6
+------------------+--------------------------------------------+
| Field            | Value                                      |
+------------------+--------------------------------------------+
| allocation_pools | {"start": "10.0.0.2", "end": "10.0.0.254"} |
| cidr             | 10.0.0.0/24                                |
| dns_nameservers  | 3.3.3.3                                    |
|                  | 2.2.2.2                                    |
|                  | 1.1.1.1                                    |
| enable_dhcp      | True                                       |
| gateway_ip       | 10.0.0.1                                   |
| host_routes      |                                            |
| id               | 1a2d261b-b233-3ab9-902e-88576a82afa6       |
| ip_version       | 4                                          |
| name             |                                            |
| network_id       | a404518c-800d-2353-9193-57dbb42ac5ee       |
| tenant_id        | 3868290ab10f417390acbb754160dbb2           |
+------------------+--------------------------------------------+
As shown in the above output, the order of the DNS nameservers has been updated.
New virtual machines deployed to this subnet will receive the DNS nameservers
in this new priority order. Existing virtual machines that have already been
deployed will not be immediately affected by changing the DNS nameserver order
on the neutron subnet. Virtual machines that are configured to get their IP
address via DHCP will detect the DNS nameserver order change
when their DHCP lease expires or when the virtual machine is restarted.
Existing virtual machines configured with a static IP address will never
detect the updated DNS nameserver order.


@@ -29,11 +29,11 @@ Since the test runs on the machine itself, full stack testing enables
through the API and then assert that a namespace was created for it.

Full stack tests run in the Neutron tree with Neutron resources alone. You
-may use the Neutron API (Keystone is set to NOAUTH so that it's out of the
-picture). VMs may be simulated with a helper class that contains a container-
-like object in its own namespace and IP address. It has helper methods to send
-different kinds of traffic. The "VM" may be connected to br-int or br-ex,
-to simulate internal or external traffic.
+may use the Neutron API (the Neutron server is set to NOAUTH so that Keystone
+is out of the picture). Instances may be simulated with a helper class that
+contains a container-like object in its own namespace and IP address. It has
+helper methods to send different kinds of traffic. The "instance" may be
+connected to br-int or br-ex, to simulate internal or external traffic.

Full stack testing can simulate multi node testing by starting an agent
multiple times. Specifically, each node would have its own copy of the
@@ -84,9 +84,12 @@ Long Term Goals
* Currently we configure the OVS agent with VLANs segmentation (Only because
  it's easier). This allows us to validate most functionality, but we might
  need to support tunneling somehow.
-* How do advanced services use the full stack testing infrastructure? I'd
-  assume we treat all of the infrastructure classes as a publicly consumed
-  API and have the XaaS repos import and use them.
+* How will advanced services use the full stack testing infrastructure? Full
+  stack test infrastructure classes are expected to change quite a bit over
+  the coming months. This means that other repositories that import these
+  classes may break from time to time, or they can copy the classes into
+  their own repositories instead. Since changes to the full stack testing
+  infrastructure are a given, XaaS repositories should copy it rather than
+  import it directly.
* Currently we configure the Neutron server with the ML2 plugin and the OVS
  mechanism driver. We may modularize the topology configuration further to
  allow to rerun full stack tests against different Neutron plugins or ML2


@@ -34,6 +34,7 @@ Programming HowTos and Tutorials
   contribute
   neutron_api
   sub_projects
+   client_command_extensions

Neutron Internals
@@ -52,6 +53,7 @@ Neutron Internals
   advanced_services
   oslo-incubator
   callbacks
+   dns_order

Testing
-------


@@ -5,3 +5,4 @@ L2 Agent Networking
   openvswitch_agent
   linuxbridge_agent
+   sriov_nic_agent


@@ -2,7 +2,7 @@ Neutron public API
==================

Neutron main tree serves as a library for multiple subprojects that rely on
-different modules from neutron.* namespace to accomodate their needs.
+different modules from neutron.* namespace to accommodate their needs.
Specifically, advanced service repositories and open source or vendor
plugin/driver repositories do it.
@@ -33,3 +33,34 @@ incompatible changes that could or are known to trigger those breakages.
  - commit: 6e693fc91dd79cfbf181e3b015a1816d985ad02c
  - solution: switch using oslo_service.* namespace; stop using ANY neutron.openstack.* contents.
  - severity: low (plugins must not rely on that subtree).
* change: oslo.utils.fileutils adopted.
- commit: I933d02aa48260069149d16caed02b020296b943a
- solution: switch using oslo_utils.fileutils module; stop using neutron.openstack.fileutils module.
- severity: low (plugins must not rely on that subtree).
* change: Reuse caller's session in DB methods.
- commit: 47dd65cf986d712e9c6ca5dcf4420dfc44900b66
- solution: Add context to args and reuse.
- severity: High (mostly undetected, because 3rd party CI run Tempest tests only).
* change: switches to oslo.log, removes neutron.openstack.common.log.
- commit: 22328baf1f60719fcaa5b0fbd91c0a3158d09c31
- solution: a) switch to oslo.log; b) copy log module into your tree and use it
(may not work due to conflicts between the module and oslo.log configuration options).
- severity: High (most CI systems are affected).
* change: Implements reorganize-unit-test-tree spec.
- commit: 1105782e3914f601b8f4be64939816b1afe8fb54
- solution: Code affected need to update existing unit tests to reflect new locations.
- severity: High (mostly undetected, because 3rd party CI run Tempest tests only).
* change: drop linux/ovs_lib compat layer.
- commit: 3bbf473b49457c4afbfc23fd9f59be8aa08a257d
- solution: switch to using neutron/agent/common/ovs_lib.py.
- severity: High (most CI systems are affected).


@@ -0,0 +1,27 @@
======================================
L2 Networking with SR-IOV enabled NICs
======================================
SR-IOV (Single Root I/O Virtualization) is a specification that allows
a PCIe device to appear to be multiple separate physical PCIe devices.
SR-IOV works by introducing the idea of physical functions (PFs) and virtual functions (VFs).
Physical functions (PFs) are full-featured PCIe functions.
Virtual functions (VFs) are “lightweight” functions that lack configuration resources.
SR-IOV supports VLANs for L2 network isolation; other networking technologies
such as VXLAN/GRE may be supported in the future.

The SR-IOV NIC agent manages the configuration of the SR-IOV Virtual Functions
that connect VM instances running on the compute node to the public network.

In the most common deployments there are compute nodes and a network node.
A compute node can support VM connectivity via an SR-IOV enabled NIC. The
SR-IOV NIC agent manages the admin state of the Virtual Functions; in the
future it will manage additional settings, such as quality of service, rate
limiting, spoof checking and more. The network node will usually be deployed
with either Open vSwitch or Linux Bridge to provide the network node
functionality.
Further Reading
---------------
* `Nir Yechiel - SR-IOV Networking Part I: Understanding the Basics <http://redhatstackblog.redhat.com/2015/03/05/red-hat-enterprise-linux-openstack-platform-6-sr-iov-networking-part-i-understanding-the-basics/>`_
* `SR-IOV Passthrough For Networking <https://wiki.openstack.org/wiki/SR-IOV-Passthrough-For-Networking/>`_


@@ -7,10 +7,10 @@ part of the overall Neutron project.
Inclusion Process
-----------------

-The process for proposing the move of a repo into openstack/ and under
-the Neutron project is to propose a patch to the openstack/governance
-repository. For example, to propose moving networking-foo, one
-would add the following entry under Neutron in reference/projects.yaml::
+The process for proposing a repo into openstack/ and under the Neutron
+project is to propose a patch to the openstack/governance repository.
+For example, to propose networking-foo, one would add the following entry
+under Neutron in reference/projects.yaml::

    - repo: openstack/networking-foo
      tags:
@@ -28,6 +28,11 @@ repositories are within the existing approved scope of the project.
http://git.openstack.org/cgit/openstack/governance/commit/?id=321a020cbcaada01976478ea9f677ebb4df7bd6d

+In order to create a project, in case one does not exist, follow the steps
+explained in:
+
+http://docs.openstack.org/infra/manual/creators.html
+
Responsibilities
----------------
@@ -86,14 +91,14 @@ repo but are summarized here to describe the functionality they provide.
+-------------------------------+-----------------------+
| networking-edge-vpn_          | vpn                   |
+-------------------------------+-----------------------+
+| networking-fujitsu_           | ml2                   |
++-------------------------------+-----------------------+
| networking-hyperv_            | ml2                   |
+-------------------------------+-----------------------+
| networking-ibm_               | ml2,l3                |
+-------------------------------+-----------------------+
| networking-l2gw_              | l2                    |
+-------------------------------+-----------------------+
-| networking-metaplugin_        | core                  |
-+-------------------------------+-----------------------+
| networking-midonet_           | core,lb               |
+-------------------------------+-----------------------+
| networking-mlnx_              | ml2                   |
@@ -205,6 +210,15 @@ Edge VPN
* Git: https://git.openstack.org/cgit/stackforge/networking-edge-vpn
* Launchpad: https://launchpad.net/edge-vpn

+.. _networking-fujitsu:
+
+FUJITSU
+-------
+
+* Git: https://git.openstack.org/cgit/openstack/networking-fujitsu
+* Launchpad: https://launchpad.net/networking-fujitsu
+* PyPI: https://pypi.python.org/pypi/networking-fujitsu
+
.. _networking-hyperv:

Hyper-V
@@ -239,13 +253,6 @@ L2 Gateway
* Git: https://git.openstack.org/cgit/openstack/networking-l2gw
* Launchpad: https://launchpad.net/networking-l2gw

-.. _networking-metaplugin:
-
-Metaplugin
-----------
-
-* Git: https://github.com/ntt-sic/networking-metaplugin
-
.. _networking-midonet:

MidoNet


@@ -13,7 +13,7 @@ triaging. The bug czar is expected to communicate with the various Neutron teams
been triaged. In addition, the bug czar should be reporting "High" and "Critical" priority bugs
to both the PTL and the core reviewer team during each weekly Neutron meeting.

-The current Neutron bug czar is Eugene Nikanorov (IRC nick enikanorov).
+The current Neutron bug czar is Kyle Mestery (IRC nick mestery).

Plugin and Driver Repositories
------------------------------


@@ -100,9 +100,14 @@ updating the core review team for the sub-project's repositories.
| Area                   | Lieutenant                | IRC nick             |
+========================+===========================+======================+
| dragonflow             | Eran Gampel               | gampel               |
+|                        | Gal Sagie                 | gsagie               |
+------------------------+---------------------------+----------------------+
| networking-l2gw        | Sukhdev Kapur             | sukhdev              |
+------------------------+---------------------------+----------------------+
+| networking-midonet     | Ryu Ishimoto              | ryu_ishimoto         |
+|                        | Jaume Devesa              | devvesa              |
+|                        | YAMAMOTO Takashi          | yamamoto             |
++------------------------+---------------------------+----------------------+
| networking-odl         | Flavio Fernandes          | flaviof              |
|                        | Kyle Mestery              | mestery              |
+------------------------+---------------------------+----------------------+
@@ -110,6 +115,10 @@ updating the core review team for the sub-project's repositories.
+------------------------+---------------------------+----------------------+
| networking-ovn         | Russell Bryant            | russellb             |
+------------------------+---------------------------+----------------------+
+| networking-plumgrid    | Fawad Khaliq              | fawadkhaliq          |
++------------------------+---------------------------+----------------------+
+| networking-sfc         | Cathy Zhang               | cathy                |
++------------------------+---------------------------+----------------------+
| networking-vshpere     | Vivekanandan Narasimhan   | viveknarasimhan      |
+------------------------+---------------------------+----------------------+
| octavia                | German Eichberger         | xgerman              |


@@ -45,7 +45,7 @@ admin_password = %SERVICE_PASSWORD%
# Location of Metadata Proxy UNIX domain socket
# metadata_proxy_socket = $state_path/metadata_proxy

-# Metadata Proxy UNIX domain socket mode, 3 values allowed:
+# Metadata Proxy UNIX domain socket mode, 4 values allowed:
# 'deduce': deduce mode from metadata_proxy_user/group values,
# 'user': set metadata proxy socket mode to 0o644, to use when
# metadata_proxy_user is agent effective user or root,


@@ -593,7 +593,7 @@
[quotas]
# Default driver to use for quota checks
-# quota_driver = neutron.db.quota_db.DbQuotaDriver
+# quota_driver = neutron.db.quota.driver.DbQuotaDriver

# Resource name(s) that are supported in quota features
# This option is deprecated for removal in the M release, please refrain from using it


@@ -1,31 +0,0 @@
# Config file for Metaplugin
[meta]
# Comma separated list of flavor:neutron_plugin for plugins to load.
# Extension method is searched in the list order and the first one is used.
plugin_list = 'ml2:neutron.plugins.ml2.plugin.Ml2Plugin,nvp:neutron.plugins.vmware.plugin.NsxPluginV2'
# Comma separated list of flavor:neutron_plugin for L3 service plugins
# to load.
# This is intended for specifying L2 plugins which support L3 functions.
# If you use a router service plugin, set this blank.
l3_plugin_list =
# Default flavor to use, when flavor:network is not specified at network
# creation.
default_flavor = 'nvp'
# Default L3 flavor to use, when flavor:router is not specified at router
# creation.
# Ignored if 'l3_plugin_list' is blank.
default_l3_flavor =
# Comma separated list of supported extension aliases.
supported_extension_aliases = 'provider,binding,agent,dhcp_agent_scheduler'
# Comma separated list of method:flavor to select specific plugin for a method.
# This has priority over method search order based on 'plugin_list'.
extension_map = 'get_port_stats:nvp'
# Specifies flavor for plugin to handle 'q-plugin' RPC requests.
rpc_flavor = 'ml2'


@@ -137,76 +137,6 @@
# mcast_ranges =
# Example: mcast_ranges = 224.0.0.1:224.0.0.3,224.0.1.1:224.0.1.
[ml2_cisco_apic]
# Hostname:port list of APIC controllers
# apic_hosts = 1.1.1.1:80, 1.1.1.2:8080, 1.1.1.3:80
# Username for the APIC controller
# apic_username = user
# Password for the APIC controller
# apic_password = password
# Whether use SSl for connecting to the APIC controller or not
# apic_use_ssl = True
# How to map names to APIC: use_uuid or use_name
# apic_name_mapping = use_name
# Names for APIC objects used by Neutron
# Note: When deploying multiple clouds against one APIC,
# these names must be unique between the clouds.
# apic_vmm_domain = openstack
# apic_vlan_ns_name = openstack_ns
# apic_node_profile = openstack_profile
# apic_entity_profile = openstack_entity
# apic_function_profile = openstack_function
# apic_app_profile_name = openstack_app
# Agent timers for State reporting and topology discovery
# apic_sync_interval = 30
# apic_agent_report_interval = 30
# apic_agent_poll_interval = 2
# Specify your network topology.
# This section indicates how your compute nodes are connected to the fabric's
# switches and ports. The format is as follows:
#
# [apic_switch:<swich_id_from_the_apic>]
# <compute_host>,<compute_host> = <switchport_the_host(s)_are_connected_to>
#
# You can have multiple sections, one for each switch in your fabric that is
# participating in OpenStack. e.g.
#
# [apic_switch:17]
# ubuntu,ubuntu1 = 1/10
# ubuntu2,ubuntu3 = 1/11
#
# [apic_switch:18]
# ubuntu5,ubuntu6 = 1/1
# ubuntu7,ubuntu8 = 1/2
# Describe external connectivity.
# In this section you can specify the external network configuration in order
# for the plugin to be able to teach the fabric how to route the internal
# traffic to the outside world. The external connectivity configuration
# format is as follows:
#
# [apic_external_network:<externalNetworkName>]
# switch = <switch_id_from_the_apic>
# port = <switchport_the_external_router_is_connected_to>
# encap = <encapsulation>
# cidr_exposed = <cidr_exposed_to_the_external_router>
# gateway_ip = <ip_of_the_external_gateway>
#
# An example follows:
# [apic_external_network:network_ext]
# switch=203
# port=1/34
# encap=vlan-100
# cidr_exposed=10.10.40.2/16
# gateway_ip=10.10.40.1
[ml2_cisco_ucsm]
# Cisco UCS Manager IP address


@@ -1,17 +0,0 @@
# neutron-rootwrap command filters for nodes on which neutron is
# expected to control network
#
# This file should be owned by (and only-writeable by) the root user
# format seems to be
# cmd-name: filter-name, raw-command, user, args
[Filters]
# cisco-apic filters
lldpctl: CommandFilter, lldpctl, root
# ip_lib filters
ip: IpFilter, ip, root
find: RegExpFilter, find, root, find, /sys/class/net, -maxdepth, 1, -type, l, -printf, %.*
ip_exec: IpNetnsExecFilter, ip, root


@@ -163,5 +163,16 @@
"get_service_provider": "rule:regular_user",
"get_lsn": "rule:admin_only",
-"create_lsn": "rule:admin_only"
+"create_lsn": "rule:admin_only",
"create_flavor": "rule:admin_only",
"update_flavor": "rule:admin_only",
"delete_flavor": "rule:admin_only",
"get_flavors": "rule:regular_user",
"get_flavor": "rule:regular_user",
"create_service_profile": "rule:admin_only",
"update_service_profile": "rule:admin_only",
"delete_service_profile": "rule:admin_only",
"get_service_profiles": "rule:admin_only",
"get_service_profile": "rule:admin_only"
}


@@ -304,7 +304,12 @@ class OVSBridge(BaseOVS):
                  ('options', {'peer': remote_name})]
         return self.add_port(local_name, *attrs)

+    def get_iface_name_list(self):
+        # get the interface name list for this bridge
+        return self.ovsdb.list_ifaces(self.br_name).execute(check_error=True)
+
     def get_port_name_list(self):
+        # get the port name list for this bridge
         return self.ovsdb.list_ports(self.br_name).execute(check_error=True)

     def get_port_stats(self, port_name):
@@ -557,7 +562,7 @@ class DeferredOVSBridge(object):
                              key=operator.itemgetter(0))
         itemgetter_1 = operator.itemgetter(1)
         for action, action_flow_list in grouped:
-            flows = map(itemgetter_1, action_flow_list)
+            flows = list(map(itemgetter_1, action_flow_list))
             self.br.do_action_flows(action, flows)

     def __enter__(self):
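The `flows = list(map(...))` change above is a Python 3 compatibility fix: there `map` returns a lazy, one-shot iterator, so code that iterates the result more than once (or checks its length) silently breaks. A minimal standalone sketch, with illustrative data not taken from Neutron:

```python
import operator

# Under Python 3, map() returns a lazy, one-shot iterator rather than a list.
action_flow_list = [('add', {'in_port': 1}), ('add', {'in_port': 2})]
itemgetter_1 = operator.itemgetter(1)

lazy = map(itemgetter_1, action_flow_list)
flows = list(map(itemgetter_1, action_flow_list))

first_pass = list(lazy)
second_pass = list(lazy)  # the iterator is already exhausted here
print(first_pass, second_pass, flows)
```

Wrapping the `map` in `list()` keeps the Python 2 semantics the caller relied on.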
@ -15,10 +15,33 @@
import os import os
from oslo_log import log as logging
from oslo_utils import importutils
from neutron.i18n import _LE
if os.name == 'nt': if os.name == 'nt':
from neutron.agent.windows import utils from neutron.agent.windows import utils
else: else:
from neutron.agent.linux import utils from neutron.agent.linux import utils
LOG = logging.getLogger(__name__)
execute = utils.execute execute = utils.execute
def load_interface_driver(conf):
if not conf.interface_driver:
LOG.error(_LE('An interface driver must be specified'))
raise SystemExit(1)
try:
return importutils.import_object(conf.interface_driver, conf)
except ImportError as e:
LOG.error(_LE("Error importing interface driver "
"'%(driver)s': %(inner)s"),
{'driver': conf.interface_driver,
'inner': e})
raise SystemExit(1)
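The new `load_interface_driver` helper centralizes the import-and-instantiate pattern used by the agents. Its core mechanism, resolving a dotted path to a class and calling it, can be sketched with the stdlib alone; the dotted path below is an arbitrary stdlib class, not a real Neutron driver:

```python
import importlib

def import_object(import_str, *args, **kwargs):
    # Same idea as oslo_utils.importutils.import_object: split the dotted
    # path, import the module, fetch the attribute, instantiate it.
    module_name, _, class_name = import_str.rpartition('.')
    module = importlib.import_module(module_name)
    return getattr(module, class_name)(*args, **kwargs)

# Example: instantiate collections.OrderedDict from its dotted path.
obj = import_object('collections.OrderedDict', [('a', 1)])
print(obj)
```

Centralizing this also means the error handling (log and `SystemExit`) is written once instead of per agent.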

@@ -26,7 +26,6 @@ from oslo_utils import importutils
 from neutron.agent.linux import dhcp
 from neutron.agent.linux import external_process
-from neutron.agent.linux import utils as linux_utils
 from neutron.agent.metadata import driver as metadata_driver
 from neutron.agent import rpc as agent_rpc
 from neutron.common import constants
@@ -63,7 +62,7 @@ class DhcpAgent(manager.Manager):
             ctx, self.conf.use_namespaces)
         # create dhcp dir to store dhcp info
         dhcp_dir = os.path.dirname("/%s/dhcp/" % self.conf.state_path)
-        linux_utils.ensure_dir(dhcp_dir)
+        utils.ensure_dir(dhcp_dir)
         self.dhcp_version = self.dhcp_driver_cls.check_version()
         self._populate_networks_cache()
         self._process_monitor = external_process.ProcessMonitor(
@@ -19,6 +19,10 @@ import contextlib

 import six

+INGRESS_DIRECTION = 'ingress'
+EGRESS_DIRECTION = 'egress'
+

 @six.add_metaclass(abc.ABCMeta)
 class FirewallDriver(object):
     """Firewall Driver base class.
@@ -21,9 +21,9 @@ import oslo_messaging
 from oslo_service import loopingcall
 from oslo_service import periodic_task
 from oslo_utils import excutils
-from oslo_utils import importutils
 from oslo_utils import timeutils

+from neutron.agent.common import utils as common_utils
 from neutron.agent.l3 import dvr
 from neutron.agent.l3 import dvr_edge_router as dvr_router
 from neutron.agent.l3 import dvr_local_router as dvr_local_router
@@ -165,15 +165,7 @@ class L3NATAgent(firewall_l3_agent.FWaaSL3AgentRpcCallback,
             config=self.conf,
             resource_type='router')

-        try:
-            self.driver = importutils.import_object(
-                self.conf.interface_driver,
-                self.conf
-            )
-        except Exception:
-            LOG.error(_LE("Error importing interface driver "
-                          "'%s'"), self.conf.interface_driver)
-            raise SystemExit(1)
+        self.driver = common_utils.load_interface_driver(self.conf)

         self.context = n_context.get_admin_context_without_session()
         self.plugin_rpc = L3PluginApi(topics.L3PLUGIN, host)
@@ -84,9 +84,11 @@ OPTS = [
     cfg.StrOpt('metadata_access_mark',
                default='0x1',
                help=_('Iptables mangle mark used to mark metadata valid '
-                      'requests')),
+                      'requests. This mark will be masked with 0xffff so '
+                      'that only the lower 16 bits will be used.')),
     cfg.StrOpt('external_ingress_mark',
                default='0x2',
                help=_('Iptables mangle mark used to mark ingress from '
-                      'external network')),
+                      'external network. This mark will be masked with '
+                      '0xffff so that only the lower 16 bits will be used.')),
 ]
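The new help text documents that these marks are masked with 0xffff, i.e. the rules only own the lower 16 bits of the packet mark and leave the upper bits free for other users of the mark. The mask arithmetic can be sketched as plain bit operations (this mirrors iptables `--set-xmark value/mask` semantics, zero the masked bits then XOR in the value; the sample marks are made up):

```python
ROUTER_MARK_MASK = 0xffff  # only the lower 16 bits are owned by these rules

def apply_xmark(existing_mark, value, mask=ROUTER_MARK_MASK):
    # --set-xmark value/mask: zero out the bits given by mask,
    # then XOR the value into the packet mark.
    return (existing_mark & ~mask) ^ value

# A mark set by some other agent in the upper 16 bits survives untouched.
print(hex(apply_xmark(0x5a0000, 0x2)))
```

This is why masking matters: without `/0xffff`, matching or setting the whole 32-bit mark would clobber any unrelated marking scheme on the same packets.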
@@ -28,13 +28,13 @@ class DvrEdgeRouter(dvr_local_router.DvrLocalRouter):
     def __init__(self, agent, host, *args, **kwargs):
         super(DvrEdgeRouter, self).__init__(agent, host, *args, **kwargs)
         self.snat_namespace = None
+        self.snat_iptables_manager = None

     def external_gateway_added(self, ex_gw_port, interface_name):
         super(DvrEdgeRouter, self).external_gateway_added(
             ex_gw_port, interface_name)
         if self._is_this_snat_host():
-            snat_ports = self.get_snat_interfaces()
-            self._create_dvr_gateway(ex_gw_port, interface_name, snat_ports)
+            self._create_dvr_gateway(ex_gw_port, interface_name)

     def external_gateway_updated(self, ex_gw_port, interface_name):
         if not self._is_this_snat_host():
@@ -70,8 +70,7 @@ class DvrEdgeRouter(dvr_local_router.DvrLocalRouter):
         if not self._is_this_snat_host():
             return

-        snat_ports = self.get_snat_interfaces()
-        sn_port = self._map_internal_interfaces(port, snat_ports)
+        sn_port = self.get_snat_port_for_internal_port(port)
         if not sn_port:
             return

@@ -92,7 +91,7 @@ class DvrEdgeRouter(dvr_local_router.DvrLocalRouter):
         if not self.ex_gw_port:
             return

-        sn_port = self._map_internal_interfaces(port, self.snat_ports)
+        sn_port = self.get_snat_port_for_internal_port(port)
         if not sn_port:
             return

@@ -108,12 +107,11 @@ class DvrEdgeRouter(dvr_local_router.DvrLocalRouter):
             self.driver.unplug(snat_interface, namespace=ns_name,
                                prefix=prefix)

-    def _create_dvr_gateway(self, ex_gw_port, gw_interface_name,
-                            snat_ports):
+    def _create_dvr_gateway(self, ex_gw_port, gw_interface_name):
         """Create SNAT namespace."""
         snat_ns = self.create_snat_namespace()
         # connect snat_ports to br_int from SNAT namespace
-        for port in snat_ports:
+        for port in self.get_snat_interfaces():
             # create interface_name
             interface_name = self.get_snat_int_device_name(port['id'])
             self._internal_network_added(
@@ -145,4 +143,26 @@ class DvrEdgeRouter(dvr_local_router.DvrLocalRouter):
         return long_name[:self.driver.DEV_NAME_LEN]

     def _is_this_snat_host(self):
-        return self.get_gw_port_host() == self.host
+        host = self.router.get('gw_port_host')
+        if not host:
+            LOG.debug("gw_port_host missing from router: %s",
+                      self.router['id'])
+        return host == self.host
+
+    def _handle_router_snat_rules(self, ex_gw_port, interface_name):
+        if not self._is_this_snat_host():
+            return
+        if not self.get_ex_gw_port():
+            return
+
+        if not self.snat_iptables_manager:
+            LOG.debug("DVR router: no snat rules to be handled")
+            return
+
+        with self.snat_iptables_manager.defer_apply():
+            self._empty_snat_chains(self.snat_iptables_manager)
+
+            # NOTE DVR doesn't add the jump to float snat like the super class.
+            self._add_snat_rules(ex_gw_port, self.snat_iptables_manager,
+                                 interface_name)
@@ -19,7 +19,7 @@ from oslo_log import log as logging
 from oslo_utils import excutils

 from neutron.agent.l3 import dvr_fip_ns
-from neutron.agent.l3 import router_info as router
+from neutron.agent.l3 import dvr_router_base
 from neutron.agent.linux import ip_lib
 from neutron.common import constants as l3_constants
 from neutron.common import exceptions
@@ -31,15 +31,11 @@ LOG = logging.getLogger(__name__)
 MASK_30 = 0x3fffffff


-class DvrLocalRouter(router.RouterInfo):
+class DvrLocalRouter(dvr_router_base.DvrRouterBase):
     def __init__(self, agent, host, *args, **kwargs):
-        super(DvrLocalRouter, self).__init__(*args, **kwargs)
+        super(DvrLocalRouter, self).__init__(agent, host, *args, **kwargs)

-        self.agent = agent
-        self.host = host
         self.floating_ips_dict = {}
-        self.snat_iptables_manager = None
         # Linklocal subnet for router and floating IP namespace link
         self.rtr_fip_subnet = None
         self.dist_fip_count = None
@@ -50,9 +46,6 @@ class DvrLocalRouter(router.RouterInfo):
         floating_ips = super(DvrLocalRouter, self).get_floating_ips()
         return [i for i in floating_ips if i['host'] == self.host]

-    def get_snat_interfaces(self):
-        return self.router.get(l3_constants.SNAT_ROUTER_INTF_KEY, [])
-
     def _handle_fip_nat_rules(self, interface_name, action):
         """Configures NAT rules for Floating IPs for DVR.

@@ -201,17 +194,6 @@ class DvrLocalRouter(router.RouterInfo):
                                    subnet_id,
                                    'add')

-    def _map_internal_interfaces(self, int_port, snat_ports):
-        """Return the SNAT port for the given internal interface port."""
-        fixed_ip = int_port['fixed_ips'][0]
-        subnet_id = fixed_ip['subnet_id']
-        match_port = [p for p in snat_ports if
-                      p['fixed_ips'][0]['subnet_id'] == subnet_id]
-        if match_port:
-            return match_port[0]
-        else:
-            LOG.error(_LE('DVR: no map match_port found!'))
-
     @staticmethod
     def _get_snat_idx(ip_cidr):
         """Generate index for DVR snat rules and route tables.
@@ -291,13 +273,6 @@ class DvrLocalRouter(router.RouterInfo):
         """Removes rules and routes for SNAT redirection."""
         self._snat_redirect_modify(gateway, sn_port, sn_int, is_add=False)

-    def get_gw_port_host(self):
-        host = self.router.get('gw_port_host')
-        if not host:
-            LOG.debug("gw_port_host missing from router: %s",
-                      self.router['id'])
-        return host
-
     def internal_network_added(self, port):
         super(DvrLocalRouter, self).internal_network_added(port)

@@ -313,8 +288,7 @@ class DvrLocalRouter(router.RouterInfo):
         if not ex_gw_port:
             return

-        snat_ports = self.get_snat_interfaces()
-        sn_port = self._map_internal_interfaces(port, snat_ports)
+        sn_port = self.get_snat_port_for_internal_port(port)
         if not sn_port:
             return

@@ -325,7 +299,7 @@ class DvrLocalRouter(router.RouterInfo):
         if not self.ex_gw_port:
             return

-        sn_port = self._map_internal_interfaces(port, self.snat_ports)
+        sn_port = self.get_snat_port_for_internal_port(port)
         if not sn_port:
             return

@@ -355,14 +329,13 @@ class DvrLocalRouter(router.RouterInfo):
         ip_wrapr = ip_lib.IPWrapper(namespace=self.ns_name)
         ip_wrapr.netns.execute(['sysctl', '-w',
                                'net.ipv4.conf.all.send_redirects=0'])
-        snat_ports = self.get_snat_interfaces()
         for p in self.internal_ports:
-            gateway = self._map_internal_interfaces(p, snat_ports)
+            gateway = self.get_snat_port_for_internal_port(p)
             id_name = self.get_internal_device_name(p['id'])
             if gateway:
                 self._snat_redirect_add(gateway, p, id_name)

-        for port in snat_ports:
+        for port in self.get_snat_interfaces():
             for ip in port['fixed_ips']:
                 self._update_arp_entry(ip['ip_address'],
                                        port['mac_address'],
@@ -379,35 +352,13 @@ class DvrLocalRouter(router.RouterInfo):
             to_fip_interface_name = (
                 self.get_external_device_interface_name(ex_gw_port))
             self.process_floating_ip_addresses(to_fip_interface_name)
-        snat_ports = self.get_snat_interfaces()
         for p in self.internal_ports:
-            gateway = self._map_internal_interfaces(p, snat_ports)
+            gateway = self.get_snat_port_for_internal_port(p)
             internal_interface = self.get_internal_device_name(p['id'])
             self._snat_redirect_remove(gateway, p, internal_interface)

-    def _handle_router_snat_rules(self, ex_gw_port,
-                                  interface_name, action):
-        if not self.snat_iptables_manager:
-            LOG.debug("DVR router: no snat rules to be handled")
-            return
-
-        with self.snat_iptables_manager.defer_apply():
-            self._empty_snat_chains(self.snat_iptables_manager)
-
-            # NOTE DVR doesn't add the jump to float snat like the super class.
-            self._add_snat_rules(ex_gw_port, self.snat_iptables_manager,
-                                 interface_name, action)
-
-    def perform_snat_action(self, snat_callback, *args):
-        # NOTE DVR skips this step in a few cases...
-        if not self.get_ex_gw_port():
-            return
-        if self.get_gw_port_host() != self.host:
-            return
-        super(DvrLocalRouter,
-              self).perform_snat_action(snat_callback, *args)
+    def _handle_router_snat_rules(self, ex_gw_port, interface_name):
+        pass

     def process_external(self, agent):
         ex_gw_port = self.get_ex_gw_port()
@@ -0,0 +1,42 @@
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+from oslo_log import log as logging
+
+from neutron.agent.l3 import router_info as router
+from neutron.common import constants as l3_constants
+from neutron.i18n import _LE
+
+LOG = logging.getLogger(__name__)
+
+
+class DvrRouterBase(router.RouterInfo):
+    def __init__(self, agent, host, *args, **kwargs):
+        super(DvrRouterBase, self).__init__(*args, **kwargs)
+
+        self.agent = agent
+        self.host = host
+
+    def get_snat_interfaces(self):
+        return self.router.get(l3_constants.SNAT_ROUTER_INTF_KEY, [])
+
+    def get_snat_port_for_internal_port(self, int_port):
+        """Return the SNAT port for the given internal interface port."""
+        snat_ports = self.get_snat_interfaces()
+        fixed_ip = int_port['fixed_ips'][0]
+        subnet_id = fixed_ip['subnet_id']
+        match_port = [p for p in snat_ports
+                      if p['fixed_ips'][0]['subnet_id'] == subnet_id]
+        if match_port:
+            return match_port[0]
+        else:
+            LOG.error(_LE('DVR: no map match_port found!'))
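The `get_snat_port_for_internal_port` helper added here picks the SNAT port whose first fixed IP lives on the same subnet as the internal port. The matching logic is self-contained enough to sketch with plain dicts; the port data below is made up but mirrors the `fixed_ips`/`subnet_id` shape used above:

```python
def get_snat_port_for_internal_port(int_port, snat_ports):
    # Match on the subnet_id of the first fixed IP, as in the method above.
    subnet_id = int_port['fixed_ips'][0]['subnet_id']
    match = [p for p in snat_ports
             if p['fixed_ips'][0]['subnet_id'] == subnet_id]
    return match[0] if match else None

snat_ports = [
    {'id': 'snat-a', 'fixed_ips': [{'subnet_id': 'subnet-1'}]},
    {'id': 'snat-b', 'fixed_ips': [{'subnet_id': 'subnet-2'}]},
]
int_port = {'id': 'port-x', 'fixed_ips': [{'subnet_id': 'subnet-2'}]}
print(get_snat_port_for_internal_port(int_port, snat_ports))
```

Hoisting this into the shared base class is what lets both the edge and local DVR router classes drop their private `_map_internal_interfaces` copies.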
@@ -22,6 +22,7 @@ import webob

 from neutron.agent.linux import keepalived
 from neutron.agent.linux import utils as agent_utils
+from neutron.common import utils as common_utils
 from neutron.i18n import _LI
 from neutron.notifiers import batch_notifier

@@ -157,4 +158,4 @@ class AgentMixin(object):
     def _init_ha_conf_path(self):
         ha_full_path = os.path.dirname("/%s/" % self.conf.ha_confs_path)
-        agent_utils.ensure_dir(ha_full_path)
+        common_utils.ensure_dir(ha_full_path)
@@ -200,6 +200,15 @@ class HaRouter(router.RouterInfo):
         if enable_ra_on_gw:
             self.driver.configure_ipv6_ra(self.ns_name, interface_name)

+    def _add_extra_subnet_onlink_routes(self, ex_gw_port, interface_name):
+        extra_subnets = ex_gw_port.get('extra_subnets', [])
+        instance = self._get_keepalived_instance()
+        onlink_route_cidrs = set(s['cidr'] for s in extra_subnets)
+        instance.virtual_routes.extra_subnets = [
+            keepalived.KeepalivedVirtualRoute(
+                onlink_route_cidr, None, interface_name, scope='link') for
+            onlink_route_cidr in onlink_route_cidrs]
+
     def _should_delete_ipv6_lladdr(self, ipv6_lladdr):
         """Only the master should have any IP addresses configured.
         Let keepalived manage IPv6 link local addresses, the same way we let
@@ -235,6 +244,7 @@ class HaRouter(router.RouterInfo):
         for ip_cidr in common_utils.fixed_ip_cidrs(ex_gw_port['fixed_ips']):
             self._add_vip(ip_cidr, interface_name)
         self._add_default_gw_virtual_route(ex_gw_port, interface_name)
+        self._add_extra_subnet_onlink_routes(ex_gw_port, interface_name)

     def add_floating_ip(self, fip, interface_name, device):
         fip_ip = fip['floating_ip_address']
@@ -353,6 +363,7 @@ class HaRouter(router.RouterInfo):
         if self.ha_port:
             self.enable_keepalived()

+    @common_utils.synchronized('enable_radvd')
     def enable_radvd(self, internal_ports=None):
         if (self.keepalived_manager.get_process().active and
                 self.ha_state == 'master'):
@@ -12,6 +12,7 @@

 from oslo_log import log as logging

+from neutron.agent.l3 import dvr_fip_ns
 from neutron.agent.l3 import dvr_snat_ns
 from neutron.agent.l3 import namespaces
 from neutron.agent.linux import external_process
@@ -42,6 +43,12 @@ class NamespaceManager(object):
     agent restarts gracefully.
     """

+    ns_prefix_to_class_map = {
+        namespaces.NS_PREFIX: namespaces.RouterNamespace,
+        dvr_snat_ns.SNAT_NS_PREFIX: dvr_snat_ns.SnatNamespace,
+        dvr_fip_ns.FIP_NS_PREFIX: dvr_fip_ns.FipNamespace,
+    }
+
     def __init__(self, agent_conf, driver, clean_stale, metadata_driver=None):
         """Initialize the NamespaceManager.

@@ -95,7 +102,7 @@ class NamespaceManager(object):
         :returns: tuple with prefix and id or None if no prefix matches
         """
         prefix = namespaces.get_prefix_from_ns_name(ns_name)
-        if prefix in (namespaces.NS_PREFIX, dvr_snat_ns.SNAT_NS_PREFIX):
+        if prefix in self.ns_prefix_to_class_map:
             identifier = namespaces.get_id_from_ns_name(ns_name)
             return (prefix, identifier)

@@ -123,10 +130,7 @@ class NamespaceManager(object):
             self._cleanup(ns_prefix, ns_id)

     def _cleanup(self, ns_prefix, ns_id):
-        if ns_prefix == namespaces.NS_PREFIX:
-            ns_class = namespaces.RouterNamespace
-        else:
-            ns_class = dvr_snat_ns.SnatNamespace
+        ns_class = self.ns_prefix_to_class_map[ns_prefix]
         ns = ns_class(ns_id, self.agent_conf, self.driver, use_ipv6=False)
         try:
             if self.metadata_driver:
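The `ns_prefix_to_class_map` refactor above replaces an if/else chain with dict dispatch, which is what made supporting the FIP namespace a one-line addition. The pattern in isolation, with stub classes and assumed prefix strings standing in for the Neutron namespace classes:

```python
# Stub classes standing in for RouterNamespace / SnatNamespace / FipNamespace.
class RouterNamespace(object):
    pass

class SnatNamespace(object):
    pass

class FipNamespace(object):
    pass

# Prefix values are illustrative; the real constants live in the
# namespaces / dvr_snat_ns / dvr_fip_ns modules.
ns_prefix_to_class_map = {
    'qrouter-': RouterNamespace,
    'snat-': SnatNamespace,
    'fip-': FipNamespace,
}

def cleanup_class_for(ns_name):
    # Membership in the map doubles as "is this namespace one of ours?"
    for prefix, cls in ns_prefix_to_class_map.items():
        if ns_name.startswith(prefix):
            return cls
    return None

print(cleanup_class_for('fip-1234'))
```

A new namespace type now only needs a new map entry; both the ownership check and `_cleanup` pick it up automatically.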
@@ -30,7 +30,6 @@ LOG = logging.getLogger(__name__)

 INTERNAL_DEV_PREFIX = namespaces.INTERNAL_DEV_PREFIX
 EXTERNAL_DEV_PREFIX = namespaces.EXTERNAL_DEV_PREFIX
-EXTERNAL_INGRESS_MARK_MASK = '0xffffffff'

 FLOATINGIP_STATUS_NOCHANGE = object()

@@ -45,7 +44,6 @@ class RouterInfo(object):
         self.router_id = router_id
         self.ex_gw_port = None
         self._snat_enabled = None
-        self._snat_action = None
         self.internal_ports = []
         self.floating_ips = set()
         # Invoke the setter for establishing initial SNAT action
@@ -97,13 +95,6 @@ class RouterInfo(object):
             return
         # enable_snat by default if it wasn't specified by plugin
         self._snat_enabled = self._router.get('enable_snat', True)
-        # Set a SNAT action for the router
-        if self._router.get('gw_port'):
-            self._snat_action = ('add_rules' if self._snat_enabled
-                                 else 'remove_rules')
-        elif self.ex_gw_port:
-            # Gateway port was removed, remove rules
-            self._snat_action = 'remove_rules'

     @property
     def is_ha(self):
@@ -119,14 +110,6 @@ class RouterInfo(object):
     def get_external_device_interface_name(self, ex_gw_port):
         return self.get_external_device_name(ex_gw_port['id'])

-    def perform_snat_action(self, snat_callback, *args):
-        # Process SNAT rules for attached subnets
-        if self._snat_action:
-            snat_callback(self._router.get('gw_port'),
-                          *args,
-                          action=self._snat_action)
-            self._snat_action = None
-
     def _update_routing_table(self, operation, route):
         cmd = ['ip', 'route', operation, 'to', route['destination'],
                'via', route['nexthop']]
@@ -534,27 +517,38 @@ class RouterInfo(object):
                                prefix=EXTERNAL_DEV_PREFIX)

         # Process SNAT rules for external gateway
-        self.perform_snat_action(self._handle_router_snat_rules,
-                                 interface_name)
+        gw_port = self._router.get('gw_port')
+        self._handle_router_snat_rules(gw_port, interface_name)

     def external_gateway_nat_rules(self, ex_gw_ip, interface_name):
-        mark = self.agent_conf.external_ingress_mark
-        rules = [('POSTROUTING', '! -i %(interface_name)s '
-                  '! -o %(interface_name)s -m conntrack ! '
-                  '--ctstate DNAT -j ACCEPT' %
-                  {'interface_name': interface_name}),
-                 ('snat', '-o %s -j SNAT --to-source %s' %
-                  (interface_name, ex_gw_ip)),
-                 ('snat', '-m mark ! --mark %s '
-                  '-m conntrack --ctstate DNAT '
-                  '-j SNAT --to-source %s' % (mark, ex_gw_ip))]
-        return rules
+        dont_snat_traffic_to_internal_ports_if_not_to_floating_ip = (
+            'POSTROUTING', '! -i %(interface_name)s '
+                           '! -o %(interface_name)s -m conntrack ! '
+                           '--ctstate DNAT -j ACCEPT' %
+                           {'interface_name': interface_name})
+
+        snat_normal_external_traffic = (
+            'snat', '-o %s -j SNAT --to-source %s' %
+                    (interface_name, ex_gw_ip))
+
+        # Makes replies come back through the router to reverse DNAT
+        ext_in_mark = self.agent_conf.external_ingress_mark
+        snat_internal_traffic_to_floating_ip = (
+            'snat', '-m mark ! --mark %s/%s '
+                    '-m conntrack --ctstate DNAT '
+                    '-j SNAT --to-source %s'
+                    % (ext_in_mark, l3_constants.ROUTER_MARK_MASK, ex_gw_ip))
+
+        return [dont_snat_traffic_to_internal_ports_if_not_to_floating_ip,
+                snat_normal_external_traffic,
+                snat_internal_traffic_to_floating_ip]

     def external_gateway_mangle_rules(self, interface_name):
         mark = self.agent_conf.external_ingress_mark
-        rules = [('mark', '-i %s -j MARK --set-xmark %s/%s' %
-                  (interface_name, mark, EXTERNAL_INGRESS_MARK_MASK))]
-        return rules
+        mark_packets_entering_external_gateway_port = (
+            'mark', '-i %s -j MARK --set-xmark %s/%s' %
+                    (interface_name, mark, l3_constants.ROUTER_MARK_MASK))
+        return [mark_packets_entering_external_gateway_port]

     def _empty_snat_chains(self, iptables_manager):
         iptables_manager.ipv4['nat'].empty_chain('POSTROUTING')
@@ -562,8 +556,8 @@ class RouterInfo(object):
         iptables_manager.ipv4['mangle'].empty_chain('mark')

     def _add_snat_rules(self, ex_gw_port, iptables_manager,
-                        interface_name, action):
-        if action == 'add_rules' and ex_gw_port:
+                        interface_name):
+        if self._snat_enabled and ex_gw_port:
             # ex_gw_port should not be None in this case
             # NAT rules are added only if ex_gw_port has an IPv4 address
             for ip_addr in ex_gw_port['fixed_ips']:
@@ -578,25 +572,22 @@ class RouterInfo(object):
                 iptables_manager.ipv4['mangle'].add_rule(*rule)
                 break

-    def _handle_router_snat_rules(self, ex_gw_port,
-                                  interface_name, action):
+    def _handle_router_snat_rules(self, ex_gw_port, interface_name):
         self._empty_snat_chains(self.iptables_manager)

         self.iptables_manager.ipv4['nat'].add_rule('snat', '-j $float-snat')

         self._add_snat_rules(ex_gw_port,
                              self.iptables_manager,
-                             interface_name,
-                             action)
+                             interface_name)

     def process_external(self, agent):
+        fip_statuses = {}
         existing_floating_ips = self.floating_ips
         try:
             with self.iptables_manager.defer_apply():
                 ex_gw_port = self.get_ex_gw_port()
                 self._process_external_gateway(ex_gw_port)
+                # TODO(Carl) Return after setting existing_floating_ips and
+                # still call update_fip_statuses?
                 if not ex_gw_port:
                     return

@@ -614,8 +605,9 @@ class RouterInfo(object):
             # All floating IPs must be put in error state
             LOG.exception(e)
             fip_statuses = self.put_fips_in_error_state()
-
-        agent.update_fip_statuses(self, existing_floating_ips, fip_statuses)
+        finally:
+            agent.update_fip_statuses(
+                self, existing_floating_ips, fip_statuses)

     @common_utils.exception_logger()
     def process(self, agent):
@@ -633,6 +625,5 @@ class RouterInfo(object):

         # Update ex_gw_port and enable_snat on the router info cache
         self.ex_gw_port = self.get_ex_gw_port()
-        self.snat_ports = self.router.get(
-            l3_constants.SNAT_ROUTER_INTF_KEY, [])
+        # TODO(Carl) FWaaS uses this. Why is it set after processing is done?
         self.enable_snat = self.router.get('enable_snat')
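The rewritten `external_gateway_nat_rules` above is behavior-preserving apart from the mask change: it unpacks the old anonymous three-element list into named rule tuples. The string construction can be checked in isolation; config access is replaced with plain parameters and the sample values are placeholders:

```python
def external_gateway_nat_rules(ex_gw_ip, interface_name,
                               ext_in_mark=0x2, mark_mask=0xffff):
    # Mirrors the rule construction in the diff above, with the agent
    # config object replaced by function parameters.
    dont_snat_if_not_to_floating_ip = (
        'POSTROUTING', '! -i %(ifc)s ! -o %(ifc)s -m conntrack ! '
                       '--ctstate DNAT -j ACCEPT' % {'ifc': interface_name})
    snat_normal_external_traffic = (
        'snat', '-o %s -j SNAT --to-source %s' % (interface_name, ex_gw_ip))
    snat_internal_traffic_to_floating_ip = (
        'snat', '-m mark ! --mark %s/%s '
                '-m conntrack --ctstate DNAT -j SNAT --to-source %s'
                % (ext_in_mark, mark_mask, ex_gw_ip))
    return [dont_snat_if_not_to_floating_ip,
            snat_normal_external_traffic,
            snat_internal_traffic_to_floating_ip]

rules = external_gateway_nat_rules('203.0.113.5', 'qg-abc')
for chain, rule in rules:
    print(chain, rule)
```

Naming each tuple documents intent that the old list left implicit, and the `/mask` suffix on the mark match is the functional change this refactor carries along.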
@@ -181,7 +181,10 @@ class AsyncProcess(object):
         """Kill the async process and respawn if necessary."""
         LOG.debug('Halting async process [%s] in response to an error.',
                   self.cmd)
-        respawning = self.respawn_interval >= 0
+        if self.respawn_interval is not None and self.respawn_interval >= 0:
+            respawning = True
+        else:
+            respawning = False
         self._kill(respawning=respawning)
         if respawning:
             eventlet.sleep(self.respawn_interval)
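The `respawn_interval` change guards against comparing `None` with an integer, which Python 3 rejects with a `TypeError` while Python 2 silently allowed it. A compact demonstration of the fixed pattern:

```python
def should_respawn(respawn_interval):
    # Old code was `respawn_interval >= 0`, which raises TypeError on
    # Python 3 when respawn_interval is None (respawning disabled).
    if respawn_interval is not None and respawn_interval >= 0:
        return True
    return False

print(should_respawn(0), should_respawn(5),
      should_respawn(-1), should_respawn(None))
```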
@@ -39,3 +39,9 @@ class BridgeDevice(ip_lib.IPDevice):
     def delif(self, interface):
         return self._brctl(['delif', self.name, interface])
+
+    def setfd(self, fd):
+        return self._brctl(['setfd', self.name, str(fd)])
+
+    def disable_stp(self):
+        return self._brctl(['stp', self.name, 'off'])
@@ -31,6 +31,16 @@ LOG = logging.getLogger(__name__)
 DEVNULL = object()
 
+# Note: We can't use sys.std*.fileno() here.  sys.std* objects may be
+# random file-like objects that may not match the true system std* fds
+# - and indeed may not even have a file descriptor at all (eg: test
+# fixtures that monkey patch fixtures.StringStream onto sys.stdout).
+# Below we always want the _real_ well-known 0,1,2 Unix fds during
+# os.dup2 manipulation.
+STDIN_FILENO = 0
+STDOUT_FILENO = 1
+STDERR_FILENO = 2
+
 
 def setuid(user_id_or_name):
     try:
@@ -121,8 +131,7 @@ class Pidfile(object):
         return self.pidfile
 
     def unlock(self):
-        if not not fcntl.flock(self.fd, fcntl.LOCK_UN):
-            raise IOError(_('Unable to unlock pid file'))
+        fcntl.flock(self.fd, fcntl.LOCK_UN)
 
     def write(self, pid):
         os.ftruncate(self.fd, 0)
@@ -160,11 +169,13 @@ class Daemon(object):
     def __init__(self, pidfile, stdin=DEVNULL, stdout=DEVNULL,
                  stderr=DEVNULL, procname='python', uuid=None,
                  user=None, group=None, watch_log=True):
+        """Note: pidfile may be None."""
         self.stdin = stdin
         self.stdout = stdout
         self.stderr = stderr
         self.procname = procname
-        self.pidfile = Pidfile(pidfile, procname, uuid)
+        self.pidfile = (Pidfile(pidfile, procname, uuid)
+                        if pidfile is not None else None)
         self.user = user
         self.group = group
         self.watch_log = watch_log
@@ -180,6 +191,16 @@ class Daemon(object):
 
     def daemonize(self):
         """Daemonize process by doing Stevens double fork."""
+
+        # flush any buffered data before fork/dup2.
+        if self.stdout is not DEVNULL:
+            self.stdout.flush()
+        if self.stderr is not DEVNULL:
+            self.stderr.flush()
+        # sys.std* may not match STD{OUT,ERR}_FILENO.  Tough.
+        for f in (sys.stdout, sys.stderr):
+            f.flush()
+
         # fork first time
         self._fork()
@@ -192,23 +213,23 @@ class Daemon(object):
         self._fork()
 
         # redirect standard file descriptors
-        sys.stdout.flush()
-        sys.stderr.flush()
-        devnull = open(os.devnull, 'w+')
-        stdin = devnull if self.stdin is DEVNULL else self.stdin
-        stdout = devnull if self.stdout is DEVNULL else self.stdout
-        stderr = devnull if self.stderr is DEVNULL else self.stderr
-        os.dup2(stdin.fileno(), sys.stdin.fileno())
-        os.dup2(stdout.fileno(), sys.stdout.fileno())
-        os.dup2(stderr.fileno(), sys.stderr.fileno())
+        with open(os.devnull, 'w+') as devnull:
+            stdin = devnull if self.stdin is DEVNULL else self.stdin
+            stdout = devnull if self.stdout is DEVNULL else self.stdout
+            stderr = devnull if self.stderr is DEVNULL else self.stderr
+            os.dup2(stdin.fileno(), STDIN_FILENO)
+            os.dup2(stdout.fileno(), STDOUT_FILENO)
+            os.dup2(stderr.fileno(), STDERR_FILENO)
 
-        # write pidfile
-        atexit.register(self.delete_pid)
-        signal.signal(signal.SIGTERM, self.handle_sigterm)
-        self.pidfile.write(os.getpid())
+        if self.pidfile is not None:
+            # write pidfile
+            atexit.register(self.delete_pid)
+            signal.signal(signal.SIGTERM, self.handle_sigterm)
+            self.pidfile.write(os.getpid())
 
     def delete_pid(self):
-        os.remove(str(self.pidfile))
+        if self.pidfile is not None:
+            os.remove(str(self.pidfile))
 
     def handle_sigterm(self, signum, frame):
         sys.exit(0)
@@ -216,7 +237,7 @@ class Daemon(object):
 
     def start(self):
         """Start the daemon."""
-        if self.pidfile.is_running():
+        if self.pidfile is not None and self.pidfile.is_running():
             self.pidfile.unlock()
             LOG.error(_LE('Pidfile %s already exist. Daemon already '
                           'running?'), self.pidfile)
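The daemonize hunks above stop using `sys.std*.fileno()` and instead dup2() onto the fixed descriptors 0/1/2, since `sys.std*` may be monkey-patched file-like objects with no real fd. A minimal standalone sketch of that redirection step (the helper name here is illustrative, not the module's API):

```python
import os

# Well-known Unix fds; sys.std* may be replaced by objects with no real
# descriptor, which is why the patch dup2()s onto these constants instead
# of sys.std*.fileno().
STDIN_FILENO = 0
STDOUT_FILENO = 1
STDERR_FILENO = 2


def redirect_stdio_to(path):
    """Point fds 0-2 at `path` (e.g. os.devnull), as daemonize() now does."""
    with open(path, 'w+') as target:
        for fd in (STDIN_FILENO, STDOUT_FILENO, STDERR_FILENO):
            os.dup2(target.fileno(), fd)
```

After the `with` block the three well-known fds still reference the target file, because `os.dup2` duplicates the underlying descriptor before `target` is closed.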


@@ -23,10 +23,10 @@ import time
 import netaddr
 from oslo_config import cfg
 from oslo_log import log as logging
-from oslo_utils import importutils
 from oslo_utils import uuidutils
 import six
 
+from neutron.agent.common import utils as common_utils
 from neutron.agent.linux import external_process
 from neutron.agent.linux import ip_lib
 from neutron.agent.linux import iptables_manager
@@ -36,7 +36,7 @@ from neutron.common import exceptions
 from neutron.common import ipv6_utils
 from neutron.common import utils as commonutils
 from neutron.extensions import extra_dhcp_opt as edo_ext
-from neutron.i18n import _LE, _LI, _LW
+from neutron.i18n import _LI, _LW
 
 LOG = logging.getLogger(__name__)
@@ -174,7 +174,7 @@ class DhcpLocalProcess(DhcpBase):
                                               version, plugin)
         self.confs_dir = self.get_confs_dir(conf)
         self.network_conf_dir = os.path.join(self.confs_dir, network.id)
-        utils.ensure_dir(self.network_conf_dir)
+        commonutils.ensure_dir(self.network_conf_dir)
 
     @staticmethod
     def get_confs_dir(conf):
@@ -199,7 +199,7 @@ class DhcpLocalProcess(DhcpBase):
         if self.active:
             self.restart()
         elif self._enable_dhcp():
-            utils.ensure_dir(self.network_conf_dir)
+            commonutils.ensure_dir(self.network_conf_dir)
             interface_name = self.device_manager.setup(self.network)
             self.interface_name = interface_name
             self.spawn_process()
@@ -657,14 +657,23 @@ class Dnsmasq(DhcpLocalProcess):
         old_leases = self._read_hosts_file_leases(filename)
 
         new_leases = set()
+        dhcp_port_exists = False
+        dhcp_port_on_this_host = self.device_manager.get_device_id(
+            self.network)
         for port in self.network.ports:
             client_id = self._get_client_id(port)
             for alloc in port.fixed_ips:
                 new_leases.add((alloc.ip_address, port.mac_address, client_id))
+            if port.device_id == dhcp_port_on_this_host:
+                dhcp_port_exists = True
 
         for ip, mac, client_id in old_leases - new_leases:
             self._release_lease(mac, ip, client_id)
 
+        if not dhcp_port_exists:
+            self.device_manager.driver.unplug(
+                self.interface_name, namespace=self.network.namespace)
+
     def _output_addn_hosts_file(self):
         """Writes a dnsmasq compatible additional hosts file.
@@ -919,18 +928,7 @@ class DeviceManager(object):
 
     def __init__(self, conf, plugin):
         self.conf = conf
         self.plugin = plugin
-        if not conf.interface_driver:
-            LOG.error(_LE('An interface driver must be specified'))
-            raise SystemExit(1)
-        try:
-            self.driver = importutils.import_object(
-                conf.interface_driver, conf)
-        except Exception as e:
-            LOG.error(_LE("Error importing interface driver '%(driver)s': "
-                          "%(inner)s"),
-                      {'driver': conf.interface_driver,
-                       'inner': e})
-            raise SystemExit(1)
+        self.driver = common_utils.load_interface_driver(conf)
 
     def get_interface_name(self, network, port):
         """Return interface(device) name for use by the DHCP process."""
@@ -1058,9 +1056,18 @@ class DeviceManager(object):
 
         return dhcp_port
 
+    def _update_dhcp_port(self, network, port):
+        for index in range(len(network.ports)):
+            if network.ports[index].id == port.id:
+                network.ports[index] = port
+                break
+        else:
+            network.ports.append(port)
+
     def setup(self, network):
         """Create and initialize a device for network's DHCP on this host."""
         port = self.setup_dhcp_port(network)
+        self._update_dhcp_port(network, port)
         interface_name = self.get_interface_name(network, port)
 
         if ip_lib.ensure_device_is_ready(interface_name,
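The new `_update_dhcp_port` helper relies on Python's `for ... else` clause: the `else` body runs only when the loop finishes without hitting `break`. A standalone sketch of the same replace-or-append pattern (plain dicts stand in for the port objects):

```python
def upsert_port(ports, port):
    """Replace the entry with a matching id, else append (for/else idiom)."""
    for index in range(len(ports)):
        if ports[index]['id'] == port['id']:
            ports[index] = port
            break
    else:
        # no break hit: the port id was not present, so add it
        ports.append(port)
```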


@@ -21,12 +21,13 @@ import eventlet
 from oslo_concurrency import lockutils
 from oslo_config import cfg
 from oslo_log import log as logging
+from oslo_utils import fileutils
 
 from neutron.agent.common import config as agent_cfg
 from neutron.agent.linux import ip_lib
 from neutron.agent.linux import utils
+from neutron.common import utils as common_utils
 from neutron.i18n import _LE
-from neutron.openstack.common import fileutils
 
 LOG = logging.getLogger(__name__)
@@ -78,7 +79,7 @@ class ProcessManager(MonitoredProcess):
         self.service_pid_fname = 'pid'
         self.service = 'default-service'
 
-        utils.ensure_dir(os.path.dirname(self.get_pid_file_name()))
+        common_utils.ensure_dir(os.path.dirname(self.get_pid_file_name()))
 
     def enable(self, cmd_callback=None, reload_cfg=False):
         if not self.active:

@@ -18,7 +18,6 @@ import abc
 import netaddr
 from oslo_config import cfg
 from oslo_log import log as logging
-from oslo_utils import importutils
 import six
 
 from neutron.agent.common import ovs_lib
@@ -26,7 +25,6 @@ from neutron.agent.linux import ip_lib
 from neutron.agent.linux import utils
 from neutron.common import constants as n_const
 from neutron.common import exceptions
-from neutron.extensions import flavor
 from neutron.i18n import _LE, _LI
@@ -41,29 +39,6 @@ OPTS = [
                 help=_('Uses veth for an interface or not')),
     cfg.IntOpt('network_device_mtu',
                help=_('MTU setting for device.')),
-    cfg.StrOpt('meta_flavor_driver_mappings',
-               help=_('Mapping between flavor and LinuxInterfaceDriver. '
-                      'It is specific to MetaInterfaceDriver used with '
-                      'admin_user, admin_password, admin_tenant_name, '
-                      'admin_url, auth_strategy, auth_region and '
-                      'endpoint_type.')),
-    cfg.StrOpt('admin_user',
-               help=_("Admin username")),
-    cfg.StrOpt('admin_password',
-               help=_("Admin password"),
-               secret=True),
-    cfg.StrOpt('admin_tenant_name',
-               help=_("Admin tenant name")),
-    cfg.StrOpt('auth_url',
-               help=_("Authentication URL")),
-    cfg.StrOpt('auth_strategy', default='keystone',
-               help=_("The type of authentication to use")),
-    cfg.StrOpt('auth_region',
-               help=_("Authentication region")),
-    cfg.StrOpt('endpoint_type',
-               default='publicURL',
-               help=_("Network service endpoint type to pull from "
-                      "the keystone catalog")),
 ]
@@ -420,63 +395,3 @@ class BridgeInterfaceDriver(LinuxInterfaceDriver):
         except RuntimeError:
             LOG.error(_LE("Failed unplugging interface '%s'"),
                       device_name)
-
-
-class MetaInterfaceDriver(LinuxInterfaceDriver):
-    def __init__(self, conf):
-        super(MetaInterfaceDriver, self).__init__(conf)
-        from neutronclient.v2_0 import client
-        self.neutron = client.Client(
-            username=self.conf.admin_user,
-            password=self.conf.admin_password,
-            tenant_name=self.conf.admin_tenant_name,
-            auth_url=self.conf.auth_url,
-            auth_strategy=self.conf.auth_strategy,
-            region_name=self.conf.auth_region,
-            endpoint_type=self.conf.endpoint_type
-        )
-        self.flavor_driver_map = {}
-        for net_flavor, driver_name in [
-                driver_set.split(':')
-                for driver_set in
-                self.conf.meta_flavor_driver_mappings.split(',')]:
-            self.flavor_driver_map[net_flavor] = self._load_driver(driver_name)
-
-    def _get_flavor_by_network_id(self, network_id):
-        network = self.neutron.show_network(network_id)
-        return network['network'][flavor.FLAVOR_NETWORK]
-
-    def _get_driver_by_network_id(self, network_id):
-        net_flavor = self._get_flavor_by_network_id(network_id)
-        return self.flavor_driver_map[net_flavor]
-
-    def _set_device_plugin_tag(self, network_id, device_name, namespace=None):
-        plugin_tag = self._get_flavor_by_network_id(network_id)
-        device = ip_lib.IPDevice(device_name, namespace=namespace)
-        device.link.set_alias(plugin_tag)
-
-    def _get_device_plugin_tag(self, device_name, namespace=None):
-        device = ip_lib.IPDevice(device_name, namespace=namespace)
-        return device.link.alias
-
-    def get_device_name(self, port):
-        driver = self._get_driver_by_network_id(port.network_id)
-        return driver.get_device_name(port)
-
-    def plug_new(self, network_id, port_id, device_name, mac_address,
-                 bridge=None, namespace=None, prefix=None):
-        driver = self._get_driver_by_network_id(network_id)
-        ret = driver.plug(network_id, port_id, device_name, mac_address,
-                          bridge=bridge, namespace=namespace, prefix=prefix)
-        self._set_device_plugin_tag(network_id, device_name, namespace)
-        return ret
-
-    def unplug(self, device_name, bridge=None, namespace=None, prefix=None):
-        plugin_tag = self._get_device_plugin_tag(device_name, namespace)
-        driver = self.flavor_driver_map[plugin_tag]
-        return driver.unplug(device_name, bridge, namespace, prefix)
-
-    def _load_driver(self, driver_provider):
-        LOG.debug("Driver location: %s", driver_provider)
-        plugin_klass = importutils.import_class(driver_provider)
-        return plugin_klass(self.conf)


@@ -348,10 +348,10 @@ class IpLinkCommand(IpDeviceCommandBase):
         self._as_root([], ('set', self.name, 'mtu', mtu_size))
 
     def set_up(self):
-        self._as_root([], ('set', self.name, 'up'))
+        return self._as_root([], ('set', self.name, 'up'))
 
     def set_down(self):
-        self._as_root([], ('set', self.name, 'down'))
+        return self._as_root([], ('set', self.name, 'down'))
 
     def set_netns(self, namespace):
         self._as_root([], ('set', self.name, 'netns', namespace))
@@ -489,6 +489,17 @@ class IpAddrCommand(IpDeviceCommandBase):
 class IpRouteCommand(IpDeviceCommandBase):
     COMMAND = 'route'
 
+    def __init__(self, parent, table=None):
+        super(IpRouteCommand, self).__init__(parent)
+        self._table = table
+
+    def table(self, table):
+        """Return an instance of IpRouteCommand which works on given table"""
+        return IpRouteCommand(self._parent, table)
+
+    def _table_args(self):
+        return ['table', self._table] if self._table else []
+
     def add_gateway(self, gateway, metric=None, table=None):
         ip_version = get_ip_version(gateway)
         args = ['replace', 'default', 'via', gateway]
@@ -497,6 +508,8 @@ class IpRouteCommand(IpDeviceCommandBase):
         args += ['dev', self.name]
         if table:
             args += ['table', table]
+        else:
+            args += self._table_args()
         self._as_root([ip_version], tuple(args))
 
     def delete_gateway(self, gateway, table=None):
@@ -506,6 +519,8 @@ class IpRouteCommand(IpDeviceCommandBase):
                 'dev', self.name]
         if table:
             args += ['table', table]
+        else:
+            args += self._table_args()
         try:
             self._as_root([ip_version], tuple(args))
         except RuntimeError as rte:
@@ -517,10 +532,9 @@ class IpRouteCommand(IpDeviceCommandBase):
 
     def list_onlink_routes(self, ip_version):
         def iterate_routes():
-            output = self._run([ip_version],
-                               ('list',
-                                'dev', self.name,
-                                'scope', 'link'))
+            args = ['list', 'dev', self.name, 'scope', 'link']
+            args += self._table_args()
+            output = self._run([ip_version], tuple(args))
             for line in output.split('\n'):
                 line = line.strip()
                 if line and not line.count('src'):
@@ -530,22 +544,21 @@ class IpRouteCommand(IpDeviceCommandBase):
 
     def add_onlink_route(self, cidr):
         ip_version = get_ip_version(cidr)
-        self._as_root([ip_version],
-                      ('replace', cidr,
-                       'dev', self.name,
-                       'scope', 'link'))
+        args = ['replace', cidr, 'dev', self.name, 'scope', 'link']
+        args += self._table_args()
+        self._as_root([ip_version], tuple(args))
 
     def delete_onlink_route(self, cidr):
         ip_version = get_ip_version(cidr)
-        self._as_root([ip_version],
-                      ('del', cidr,
-                       'dev', self.name,
-                       'scope', 'link'))
+        args = ['del', cidr, 'dev', self.name, 'scope', 'link']
+        args += self._table_args()
+        self._as_root([ip_version], tuple(args))
 
     def get_gateway(self, scope=None, filters=None, ip_version=None):
         options = [ip_version] if ip_version else []
 
         args = ['list', 'dev', self.name]
+        args += self._table_args()
         if filters:
             args += filters
@@ -739,16 +752,22 @@ def device_exists_with_ips_and_mac(device_name, ip_cidrs, mac, namespace=None):
     return True
 
 
-def get_routing_table(namespace=None):
+def get_routing_table(ip_version, namespace=None):
     """Return a list of dictionaries, each representing a route.
 
+    @param ip_version: the routes of version to return, for example 4
+    @param namespace
+    @return: a list of dictionaries, each representing a route.
     The dictionary format is: {'destination': cidr,
                               'nexthop': ip,
-                              'device': device_name}
+                              'device': device_name,
+                              'scope': scope}
     """
     ip_wrapper = IPWrapper(namespace=namespace)
-    table = ip_wrapper.netns.execute(['ip', 'route'], check_exit_code=True)
+    table = ip_wrapper.netns.execute(
+        ['ip', '-%s' % ip_version, 'route'],
+        check_exit_code=True)
 
     routes = []
     # Example for route_lines:
@@ -765,7 +784,8 @@ def get_routing_table(namespace=None):
         data = dict(route[i:i + 2] for i in range(1, len(route), 2))
         routes.append({'destination': network,
                        'nexthop': data.get('via'),
-                       'device': data.get('dev'),
+                       'device': data.get('dev'),
+                       'scope': data.get('scope')})
     return routes
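`get_routing_table` parses `ip route` output by taking the first token as the destination and treating the remaining tokens as key/value pairs, which the `dict(route[i:i + 2] ...)` slice pairs up. A self-contained sketch of that parsing step, including the new `scope` key (the function name here is illustrative):

```python
def parse_route_line(line):
    """Parse one `ip route` output line into the dict shape used above."""
    route = line.split()
    network = route[0]
    # tokens after the destination alternate key, value: via X dev Y ...
    data = dict(route[i:i + 2] for i in range(1, len(route), 2))
    return {'destination': network,
            'nexthop': data.get('via'),
            'device': data.get('dev'),
            'scope': data.get('scope')}
```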


@@ -32,24 +32,22 @@ from neutron.i18n import _LI
 LOG = logging.getLogger(__name__)
 SG_CHAIN = 'sg-chain'
-INGRESS_DIRECTION = 'ingress'
-EGRESS_DIRECTION = 'egress'
 SPOOF_FILTER = 'spoof-filter'
-CHAIN_NAME_PREFIX = {INGRESS_DIRECTION: 'i',
-                     EGRESS_DIRECTION: 'o',
+CHAIN_NAME_PREFIX = {firewall.INGRESS_DIRECTION: 'i',
+                     firewall.EGRESS_DIRECTION: 'o',
                      SPOOF_FILTER: 's'}
-DIRECTION_IP_PREFIX = {'ingress': 'source_ip_prefix',
-                       'egress': 'dest_ip_prefix'}
-IPSET_DIRECTION = {INGRESS_DIRECTION: 'src',
-                   EGRESS_DIRECTION: 'dst'}
+DIRECTION_IP_PREFIX = {firewall.INGRESS_DIRECTION: 'source_ip_prefix',
+                       firewall.EGRESS_DIRECTION: 'dest_ip_prefix'}
+IPSET_DIRECTION = {firewall.INGRESS_DIRECTION: 'src',
+                   firewall.EGRESS_DIRECTION: 'dst'}
 LINUX_DEV_LEN = 14
 comment_rule = iptables_manager.comment_rule
 
 
 class IptablesFirewallDriver(firewall.FirewallDriver):
     """Driver which enforces security groups through iptables rules."""
-    IPTABLES_DIRECTION = {INGRESS_DIRECTION: 'physdev-out',
-                          EGRESS_DIRECTION: 'physdev-in'}
+    IPTABLES_DIRECTION = {firewall.INGRESS_DIRECTION: 'physdev-out',
+                          firewall.EGRESS_DIRECTION: 'physdev-in'}
 
     def __init__(self, namespace=None):
         self.iptables = iptables_manager.IptablesManager(
@@ -180,14 +178,14 @@ class IptablesFirewallDriver(firewall.FirewallDriver):
     def _setup_chains_apply(self, ports, unfiltered_ports):
         self._add_chain_by_name_v4v6(SG_CHAIN)
         for port in ports.values():
-            self._setup_chain(port, INGRESS_DIRECTION)
-            self._setup_chain(port, EGRESS_DIRECTION)
+            self._setup_chain(port, firewall.INGRESS_DIRECTION)
+            self._setup_chain(port, firewall.EGRESS_DIRECTION)
             self.iptables.ipv4['filter'].add_rule(SG_CHAIN, '-j ACCEPT')
             self.iptables.ipv6['filter'].add_rule(SG_CHAIN, '-j ACCEPT')
 
         for port in unfiltered_ports.values():
-            self._add_accept_rule_port_sec(port, INGRESS_DIRECTION)
-            self._add_accept_rule_port_sec(port, EGRESS_DIRECTION)
+            self._add_accept_rule_port_sec(port, firewall.INGRESS_DIRECTION)
+            self._add_accept_rule_port_sec(port, firewall.EGRESS_DIRECTION)
 
     def _remove_chains(self):
         """Remove ingress and egress chain for a port."""
@@ -197,12 +195,12 @@ class IptablesFirewallDriver(firewall.FirewallDriver):
 
     def _remove_chains_apply(self, ports, unfiltered_ports):
         for port in ports.values():
-            self._remove_chain(port, INGRESS_DIRECTION)
-            self._remove_chain(port, EGRESS_DIRECTION)
+            self._remove_chain(port, firewall.INGRESS_DIRECTION)
+            self._remove_chain(port, firewall.EGRESS_DIRECTION)
             self._remove_chain(port, SPOOF_FILTER)
         for port in unfiltered_ports.values():
-            self._remove_rule_port_sec(port, INGRESS_DIRECTION)
-            self._remove_rule_port_sec(port, EGRESS_DIRECTION)
+            self._remove_rule_port_sec(port, firewall.INGRESS_DIRECTION)
+            self._remove_rule_port_sec(port, firewall.EGRESS_DIRECTION)
         self._remove_chain_by_name_v4v6(SG_CHAIN)
 
     def _setup_chain(self, port, DIRECTION):
@@ -263,7 +261,7 @@ class IptablesFirewallDriver(firewall.FirewallDriver):
         else:
             self._remove_rule_from_chain_v4v6('FORWARD', jump_rule, jump_rule)
 
-        if direction == EGRESS_DIRECTION:
+        if direction == firewall.EGRESS_DIRECTION:
             jump_rule = ['-m physdev --%s %s --physdev-is-bridged '
                          '-j ACCEPT' % (self.IPTABLES_DIRECTION[direction],
                                         device)]
@@ -300,7 +298,7 @@ class IptablesFirewallDriver(firewall.FirewallDriver):
         self._add_rules_to_chain_v4v6(SG_CHAIN, jump_rule, jump_rule,
                                       comment=ic.SG_TO_VM_SG)
 
-        if direction == EGRESS_DIRECTION:
+        if direction == firewall.EGRESS_DIRECTION:
             self._add_rules_to_chain_v4v6('INPUT', jump_rule, jump_rule,
                                           comment=ic.INPUT_TO_SG)
@@ -358,7 +356,7 @@ class IptablesFirewallDriver(firewall.FirewallDriver):
         ipv6_rules += [comment_rule('-p icmpv6 -j RETURN',
                                     comment=ic.IPV6_ICMP_ALLOW)]
         ipv6_rules += [comment_rule('-p udp -m udp --sport 546 --dport 547 '
-                                    '-j RETURN', comment=None)]
+                                    '-j RETURN', comment=ic.DHCP_CLIENT)]
         mac_ipv4_pairs = []
         mac_ipv6_pairs = []
@@ -386,7 +384,7 @@ class IptablesFirewallDriver(firewall.FirewallDriver):
         ipv4_rules += [comment_rule('-p udp -m udp --sport 67 --dport 68 '
                                     '-j DROP', comment=ic.DHCP_SPOOF)]
         ipv6_rules += [comment_rule('-p udp -m udp --sport 547 --dport 546 '
-                                    '-j DROP', comment=None)]
+                                    '-j DROP', comment=ic.DHCP_SPOOF)]
 
     def _accept_inbound_icmpv6(self):
         # Allow multicast listener, neighbor solicitation and
@@ -458,11 +456,11 @@ class IptablesFirewallDriver(firewall.FirewallDriver):
         ipv4_iptables_rules = []
         ipv6_iptables_rules = []
 
         # include fixed egress/ingress rules
-        if direction == EGRESS_DIRECTION:
+        if direction == firewall.EGRESS_DIRECTION:
             self._add_fixed_egress_rules(port,
                                          ipv4_iptables_rules,
                                          ipv6_iptables_rules)
-        elif direction == INGRESS_DIRECTION:
+        elif direction == firewall.INGRESS_DIRECTION:
             ipv6_iptables_rules += self._accept_inbound_icmpv6()
 
         # include IPv4 and IPv6 iptable rules from security group
         ipv4_iptables_rules += self._convert_sgr_to_iptables_rules(
@@ -568,7 +566,7 @@ class IptablesFirewallDriver(firewall.FirewallDriver):
 
     def _port_arg(self, direction, protocol, port_range_min, port_range_max):
         if (protocol not in ['udp', 'tcp', 'icmp', 'icmpv6']
-            or not port_range_min):
+                or port_range_min is None):
             return []
 
         if protocol in ['icmp', 'icmpv6']:
@@ -717,7 +715,7 @@ class OVSHybridIptablesFirewallDriver(IptablesFirewallDriver):
         return ('qvb' + port['device'])[:LINUX_DEV_LEN]
 
     def _get_jump_rule(self, port, direction):
-        if direction == INGRESS_DIRECTION:
+        if direction == firewall.INGRESS_DIRECTION:
             device = self._get_br_device_name(port)
         else:
             device = self._get_device_name(port)
@@ -740,11 +738,13 @@ class OVSHybridIptablesFirewallDriver(IptablesFirewallDriver):
     def _add_chain(self, port, direction):
         super(OVSHybridIptablesFirewallDriver, self)._add_chain(port,
                                                                 direction)
-        if direction in [INGRESS_DIRECTION, EGRESS_DIRECTION]:
+        if direction in [firewall.INGRESS_DIRECTION,
+                         firewall.EGRESS_DIRECTION]:
            self._add_raw_chain_rules(port, direction)
 
     def _remove_chain(self, port, direction):
         super(OVSHybridIptablesFirewallDriver, self)._remove_chain(port,
                                                                    direction)
-        if direction in [INGRESS_DIRECTION, EGRESS_DIRECTION]:
+        if direction in [firewall.INGRESS_DIRECTION,
+                         firewall.EGRESS_DIRECTION]:
            self._remove_raw_chain_rules(port, direction)
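The `_port_arg` change replaces `not port_range_min` with `port_range_min is None`, because 0 is a valid but falsy value (for ICMP, type 0 is echo reply), so the old truthiness test wrongly skipped it. A minimal illustration of the fixed predicate (hypothetical helper name, not the driver's API):

```python
def port_arg_needed(protocol, port_range_min):
    """True when a --dport/--icmp-type argument should be emitted.

    `is None` keeps 0 (a legitimate ICMP type) in play, where the old
    `not port_range_min` check dropped it.
    """
    if (protocol not in ['udp', 'tcp', 'icmp', 'icmpv6']
            or port_range_min is None):
        return False
    return True
```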


@@ -23,6 +23,7 @@ from oslo_log import log as logging
 from neutron.agent.linux import external_process
 from neutron.agent.linux import utils
 from neutron.common import exceptions
+from neutron.common import utils as common_utils
 
 VALID_STATES = ['MASTER', 'BACKUP']
 VALID_AUTH_TYPES = ['AH', 'PASS']
@@ -31,7 +32,8 @@ PRIMARY_VIP_RANGE_SIZE = 24
 # TODO(amuller): Use L3 agent constant when new constants module is introduced.
 FIP_LL_SUBNET = '169.254.30.0/23'
 KEEPALIVED_SERVICE_NAME = 'keepalived'
+GARP_MASTER_REPEAT = 5
+GARP_MASTER_REFRESH = 10
 
 LOG = logging.getLogger(__name__)
@@ -95,15 +97,21 @@ class KeepalivedVipAddress(object):
 class KeepalivedVirtualRoute(object):
     """A virtual route entry of a keepalived configuration."""
 
-    def __init__(self, destination, nexthop, interface_name=None):
+    def __init__(self, destination, nexthop, interface_name=None,
+                 scope=None):
         self.destination = destination
         self.nexthop = nexthop
         self.interface_name = interface_name
+        self.scope = scope
 
     def build_config(self):
-        output = '%s via %s' % (self.destination, self.nexthop)
+        output = self.destination
+        if self.nexthop:
+            output += ' via %s' % self.nexthop
         if self.interface_name:
             output += ' dev %s' % self.interface_name
+        if self.scope:
+            output += ' scope %s' % self.scope
         return output
@@ -111,6 +119,7 @@ class KeepalivedInstanceRoutes(object):
     def __init__(self):
         self.gateway_routes = []
         self.extra_routes = []
+        self.extra_subnets = []
 
     def remove_routes_on_interface(self, interface_name):
         self.gateway_routes = [gw_rt for gw_rt in self.gateway_routes
@@ -118,10 +127,12 @@ class KeepalivedInstanceRoutes(object):
         # NOTE(amuller): extra_routes are initialized from the router's
         # 'routes' attribute. These routes do not have an interface
         # parameter and so cannot be removed via an interface_name lookup.
+        self.extra_subnets = [route for route in self.extra_subnets if
+                              route.interface_name != interface_name]
 
     @property
     def routes(self):
-        return self.gateway_routes + self.extra_routes
+        return self.gateway_routes + self.extra_routes + self.extra_subnets
 
     def __len__(self):
         return len(self.routes)
@@ -138,7 +149,9 @@ class KeepalivedInstance(object):
 
     def __init__(self, state, interface, vrouter_id, ha_cidrs,
                  priority=HA_DEFAULT_PRIORITY, advert_int=None,
-                 mcast_src_ip=None, nopreempt=False):
+                 mcast_src_ip=None, nopreempt=False,
+                 garp_master_repeat=GARP_MASTER_REPEAT,
+                 garp_master_refresh=GARP_MASTER_REFRESH):
         self.name = 'VR_%s' % vrouter_id
 
         if state not in VALID_STATES:
@@ -151,6 +164,8 @@ class KeepalivedInstance(object):
         self.nopreempt = nopreempt
         self.advert_int = advert_int
         self.mcast_src_ip = mcast_src_ip
+        self.garp_master_repeat = garp_master_repeat
+        self.garp_master_refresh = garp_master_refresh
         self.track_interfaces = []
         self.vips = []
         self.virtual_routes = KeepalivedInstanceRoutes()
@@ -244,7 +259,9 @@ class KeepalivedInstance(object):
                   '    state %s' % self.state,
                   '    interface %s' % self.interface,
                   '    virtual_router_id %s' % self.vrouter_id,
-                  '    priority %s' % self.priority]
+                  '    priority %s' % self.priority,
+                  '    garp_master_repeat %s' % self.garp_master_repeat,
+                  '    garp_master_refresh %s' % self.garp_master_refresh]
 
         if self.nopreempt:
             config.append('    nopreempt')
@@ -331,7 +348,7 @@ class KeepalivedManager(object):
     def get_full_config_file_path(self, filename, ensure_conf_dir=True):
         conf_dir = self.get_conf_dir()
         if ensure_conf_dir:
-            utils.ensure_dir(conf_dir)
+            common_utils.ensure_dir(conf_dir)
         return os.path.join(conf_dir, filename)
 
     def _output_config_file(self):
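The extended `KeepalivedVirtualRoute.build_config` now emits an `ip route`-style entry in which `via`, `dev` and `scope` are each optional, so connected-subnet routes (no nexthop) can be rendered too. A trimmed, self-contained sketch of that builder:

```python
class VirtualRoute(object):
    """Minimal sketch of the keepalived virtual_routes entry built above."""

    def __init__(self, destination, nexthop=None, interface_name=None,
                 scope=None):
        self.destination = destination
        self.nexthop = nexthop
        self.interface_name = interface_name
        self.scope = scope

    def build_config(self):
        # destination is mandatory; each qualifier is appended only if set
        output = self.destination
        if self.nexthop:
            output += ' via %s' % self.nexthop
        if self.interface_name:
            output += ' dev %s' % self.interface_name
        if self.scope:
            output += ' scope %s' % self.scope
        return output
```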


@@ -13,7 +13,6 @@
 #    License for the specific language governing permissions and limitations
 #    under the License.
 
-import errno
 import fcntl
 import glob
 import grp
@@ -25,6 +24,7 @@ import struct
 import tempfile
 import threading
 
+from debtcollector import removals
 import eventlet
 from eventlet.green import subprocess
 from eventlet import greenthread
@@ -79,7 +79,7 @@ def create_process(cmd, run_as_root=False, addl_env=None):
     The return value will be a tuple of the process object and the
     list of command arguments used to create it.
     """
-    cmd = map(str, addl_env_args(addl_env) + cmd)
+    cmd = list(map(str, addl_env_args(addl_env) + cmd))
     if run_as_root:
         cmd = shlex.split(config.get_root_helper(cfg.CONF)) + cmd
     LOG.debug("Running command: %s", cmd)
@@ -92,7 +92,7 @@ def create_process(cmd, run_as_root=False, addl_env=None):
 
 def execute_rootwrap_daemon(cmd, process_input, addl_env):
-    cmd = map(str, addl_env_args(addl_env) + cmd)
+    cmd = list(map(str, addl_env_args(addl_env) + cmd))
     # NOTE(twilson) oslo_rootwrap.daemon will raise on filter match
     # errors, whereas oslo_rootwrap.cmd converts them to return codes.
     # In practice, no neutron code should be trying to execute something that
@ -189,14 +189,9 @@ def find_child_pids(pid):
return [x.strip() for x in raw_pids.split('\n') if x.strip()] return [x.strip() for x in raw_pids.split('\n') if x.strip()]
def ensure_dir(dir_path): @removals.remove(message='Use neutron.common.utils.ensure_dir instead.')
"""Ensure a directory with 755 permissions mode.""" def ensure_dir(*args, **kwargs):
try: return utils.ensure_dir(*args, **kwargs)
os.makedirs(dir_path, 0o755)
except OSError as e:
# If the directory already existed, don't raise the error.
if e.errno != errno.EEXIST:
raise
def _get_conf_base(cfg_root, uuid, ensure_conf_dir): def _get_conf_base(cfg_root, uuid, ensure_conf_dir):
@ -205,7 +200,7 @@ def _get_conf_base(cfg_root, uuid, ensure_conf_dir):
conf_dir = os.path.abspath(os.path.normpath(cfg_root)) conf_dir = os.path.abspath(os.path.normpath(cfg_root))
conf_base = os.path.join(conf_dir, uuid) conf_base = os.path.join(conf_dir, uuid)
if ensure_conf_dir: if ensure_conf_dir:
ensure_dir(conf_dir) utils.ensure_dir(conf_dir)
return conf_base return conf_base
@ -338,7 +333,7 @@ def ensure_directory_exists_without_file(path):
if not os.path.exists(path): if not os.path.exists(path):
ctxt.reraise = False ctxt.reraise = False
else: else:
ensure_dir(dirname) utils.ensure_dir(dirname)
def is_effective_user(user_id_or_name): def is_effective_user(user_id_or_name):
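The `@removals.remove` decorator keeps the old entry point alive while steering callers to the new location. A stdlib-only sketch of the same pattern (a hypothetical stand-in for debtcollector, not its implementation): warn once per call, then delegate.

```python
import functools
import warnings

def removed(message):
    """Minimal stand-in for debtcollector.removals.remove:
    emit a DeprecationWarning, then delegate to the wrapped function."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            warnings.warn(message, DeprecationWarning, stacklevel=2)
            return func(*args, **kwargs)
        return wrapper
    return decorator

def new_ensure_dir(path):
    # stand-in for neutron.common.utils.ensure_dir
    return 'ensured %s' % path

@removed('Use neutron.common.utils.ensure_dir instead.')
def ensure_dir(path):
    return new_ensure_dir(path)

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter('always')
    result = ensure_dir('/tmp/x')
print(result)
```

Callers keep working unchanged; the warning gives them a release cycle to migrate before the shim is deleted.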


@@ -87,20 +87,24 @@ class MetadataProxyHandler(object):
             self.use_rpc = True

     def _get_neutron_client(self):
-        qclient = client.Client(
-            username=self.conf.admin_user,
-            password=self.conf.admin_password,
-            tenant_name=self.conf.admin_tenant_name,
-            auth_url=self.conf.auth_url,
-            auth_strategy=self.conf.auth_strategy,
-            region_name=self.conf.auth_region,
-            token=self.auth_info.get('auth_token'),
-            insecure=self.conf.auth_insecure,
-            ca_cert=self.conf.auth_ca_cert,
-            endpoint_url=self.auth_info.get('endpoint_url'),
-            endpoint_type=self.conf.endpoint_type
-        )
-        return qclient
+        params = {
+            'username': self.conf.admin_user,
+            'password': self.conf.admin_password,
+            'tenant_name': self.conf.admin_tenant_name,
+            'auth_url': self.conf.auth_url,
+            'auth_strategy': self.conf.auth_strategy,
+            'region_name': self.conf.auth_region,
+            'token': self.auth_info.get('auth_token'),
+            'insecure': self.conf.auth_insecure,
+            'ca_cert': self.conf.auth_ca_cert,
+        }
+        if self.conf.endpoint_url:
+            params['endpoint_url'] = self.conf.endpoint_url
+        else:
+            params['endpoint_url'] = self.auth_info.get('endpoint_url')
+            params['endpoint_type'] = self.conf.endpoint_type
+        return client.Client(**params)

     @webob.dec.wsgify(RequestClass=webob.Request)
     def __call__(self, req):
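The refactor builds the client kwargs in a dict so the precedence rule is explicit: a statically configured `endpoint_url` wins, otherwise the one learned during token authentication is used together with `endpoint_type`. A minimal sketch of just that rule, with plain dicts standing in for the handler's `conf` and `auth_info` attributes:

```python
# Sketch of the endpoint precedence logic; 'conf' and 'auth_info' are
# plain-dict stand-ins for the handler attributes in the diff above.
def build_client_params(conf, auth_info):
    params = {'token': auth_info.get('auth_token')}
    if conf.get('endpoint_url'):
        params['endpoint_url'] = conf['endpoint_url']
    else:
        params['endpoint_url'] = auth_info.get('endpoint_url')
        params['endpoint_type'] = conf.get('endpoint_type')
    return params

print(build_client_params({'endpoint_url': 'http://neutron:9696'},
                          {'auth_token': 'tok',
                           'endpoint_url': 'http://from-catalog'}))
```

Note that `endpoint_type` is only passed in the catalog path; with an explicit URL there is no catalog lookup to qualify.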


@@ -74,6 +74,10 @@ METADATA_PROXY_HANDLER_OPTS = [
                default='adminURL',
                help=_("Network service endpoint type to pull from "
                       "the keystone catalog")),
+    cfg.StrOpt('endpoint_url',
+               default=None,
+               help=_("Neutron endpoint URL, if not set will use endpoint "
+                      "from the keystone catalog along with endpoint_type")),
     cfg.StrOpt('nova_metadata_ip', default='127.0.0.1',
                help=_("IP address used by Nova metadata server.")),
     cfg.IntOpt('nova_metadata_port',
@@ -109,7 +113,7 @@ UNIX_DOMAIN_METADATA_PROXY_OPTS = [
     cfg.StrOpt('metadata_proxy_socket_mode',
                default=DEDUCE_MODE,
                choices=SOCKET_MODES,
-               help=_("Metadata Proxy UNIX domain socket mode, 3 values "
+               help=_("Metadata Proxy UNIX domain socket mode, 4 values "
                       "allowed: "
                       "'deduce': deduce mode from metadata_proxy_user/group "
                       "values, "


@@ -24,12 +24,12 @@ from neutron.agent.linux import utils
 from neutron.callbacks import events
 from neutron.callbacks import registry
 from neutron.callbacks import resources
+from neutron.common import constants
 from neutron.common import exceptions

 LOG = logging.getLogger(__name__)

 # Access with redirection to metadata proxy iptables mark mask
-METADATA_ACCESS_MARK_MASK = '0xffffffff'
 METADATA_SERVICE_NAME = 'metadata-proxy'
@@ -45,7 +45,8 @@ class MetadataDriver(object):
     @classmethod
     def metadata_filter_rules(cls, port, mark):
-        return [('INPUT', '-m mark --mark %s -j ACCEPT' % mark),
+        return [('INPUT', '-m mark --mark %s/%s -j ACCEPT' %
+                 (mark, constants.ROUTER_MARK_MASK)),
                 ('INPUT', '-p tcp -m tcp --dport %s '
                  '-j DROP' % port)]
@@ -55,7 +56,7 @@ class MetadataDriver(object):
                  '-p tcp -m tcp --dport 80 '
                  '-j MARK --set-xmark %(value)s/%(mask)s' %
                  {'value': mark,
-                  'mask': METADATA_ACCESS_MARK_MASK})]
+                  'mask': constants.ROUTER_MARK_MASK})]

     @classmethod
     def metadata_nat_rules(cls, port):
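Matching the mark under a shared mask instead of `0xffffffff` means only the bits reserved for router marks are compared, so other agents can use the remaining mark bits. A small sketch of the rule formatting — the mask value here is illustrative, not the actual `ROUTER_MARK_MASK` constant:

```python
# Sketch of the mark/mask rule strings; ROUTER_MARK_MASK's real value
# lives in neutron.common.constants, '0xffff' is just an illustration.
ROUTER_MARK_MASK = '0xffff'

def metadata_filter_rules(port, mark):
    return [('INPUT', '-m mark --mark %s/%s -j ACCEPT' %
             (mark, ROUTER_MARK_MASK)),
            ('INPUT', '-p tcp -m tcp --dport %s -j DROP' % port)]

print(metadata_filter_rules(9697, '0x1'))
```

iptables interprets `--mark 0x1/0xffff` as "packet mark ANDed with 0xffff equals 0x1", which is what allows several independent marking schemes to coexist on one netfilter mark field.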


@@ -308,13 +308,22 @@ class API(object):
     @abc.abstractmethod
     def list_ports(self, bridge):
-        """Create a command to list the names of porsts on a bridge
+        """Create a command to list the names of ports on a bridge

         :param bridge: The name of the bridge
         :type bridge: string
         :returns: :class:`Command` with list of port names result
         """

+    @abc.abstractmethod
+    def list_ifaces(self, bridge):
+        """Create a command to list the names of interfaces on a bridge
+
+        :param bridge: The name of the bridge
+        :type bridge: string
+        :returns: :class:`Command` with list of interfaces names result
+        """
+

 def val_to_py(val):
     """Convert a json ovsdb return value to native python object"""


@@ -157,8 +157,7 @@ class OvsdbIdl(api.API):
         return cmd.PortToBridgeCommand(self, name)

     def iface_to_br(self, name):
-        # For our purposes, ports and interfaces always have the same name
-        return cmd.PortToBridgeCommand(self, name)
+        return cmd.InterfaceToBridgeCommand(self, name)

     def list_br(self):
         return cmd.ListBridgesCommand(self)
@@ -204,3 +203,6 @@ class OvsdbIdl(api.API):

     def list_ports(self, bridge):
         return cmd.ListPortsCommand(self, bridge)
+
+    def list_ifaces(self, bridge):
+        return cmd.ListIfacesCommand(self, bridge)


@@ -241,6 +241,9 @@ class OvsdbVsctl(ovsdb.API):
     def list_ports(self, bridge):
         return MultiLineCommand(self.context, 'list-ports', args=[bridge])

+    def list_ifaces(self, bridge):
+        return MultiLineCommand(self.context, 'list-ifaces', args=[bridge])
+

 def _set_colval_args(*col_values):
     args = []


@@ -332,6 +332,17 @@ class ListPortsCommand(BaseCommand):
         self.result = [p.name for p in br.ports if p.name != self.bridge]


+class ListIfacesCommand(BaseCommand):
+    def __init__(self, api, bridge):
+        super(ListIfacesCommand, self).__init__(api)
+        self.bridge = bridge
+
+    def run_idl(self, txn):
+        br = idlutils.row_by_value(self.api.idl, 'Bridge', 'name', self.bridge)
+        self.result = [i.name for p in br.ports if p.name != self.bridge
+                       for i in p.interfaces]
+
+
 class PortToBridgeCommand(BaseCommand):
     def __init__(self, api, name):
         super(PortToBridgeCommand, self).__init__(api)
@@ -340,7 +351,7 @@ class PortToBridgeCommand(BaseCommand):
     def run_idl(self, txn):
         # TODO(twilson) This is expensive!
         # This traversal of all ports could be eliminated by caching the bridge
-        # name on the Port's (or Interface's for iface_to_br) external_id field
+        # name on the Port's external_id field
         # In fact, if we did that, the only place that uses to_br functions
         # could just add the external_id field to the conditions passed to find
         port = idlutils.row_by_value(self.api.idl, 'Port', 'name', self.name)
@@ -348,45 +359,62 @@ class PortToBridgeCommand(BaseCommand):
         self.result = next(br.name for br in bridges if port in br.ports)


+class InterfaceToBridgeCommand(BaseCommand):
+    def __init__(self, api, name):
+        super(InterfaceToBridgeCommand, self).__init__(api)
+        self.name = name
+
+    def run_idl(self, txn):
+        interface = idlutils.row_by_value(self.api.idl, 'Interface', 'name',
+                                          self.name)
+        ports = self.api._tables['Port'].rows.values()
+        pname = next(
+            port for port in ports if interface in port.interfaces)
+        bridges = self.api._tables['Bridge'].rows.values()
+        self.result = next(br.name for br in bridges if pname in br.ports)
+
+
 class DbListCommand(BaseCommand):
     def __init__(self, api, table, records, columns, if_exists):
         super(DbListCommand, self).__init__(api)
-        self.requested_info = {'records': records, 'columns': columns,
-                               'table': table}
-        self.table = self.api._tables[table]
-        self.columns = columns or self.table.columns.keys() + ['_uuid']
+        self.table = table
+        self.columns = columns
         self.if_exists = if_exists
-        if records:
-            self.records = []
-            for record in records:
+        self.records = records
+
+    def run_idl(self, txn):
+        table_schema = self.api._tables[self.table]
+        columns = self.columns or table_schema.columns.keys() + ['_uuid']
+        if self.records:
+            row_uuids = []
+            for record in self.records:
                 try:
-                    self.records.append(idlutils.row_by_record(
-                        self.api.idl, table, record).uuid)
+                    row_uuids.append(idlutils.row_by_record(
+                        self.api.idl, self.table, record).uuid)
                 except idlutils.RowNotFound:
                     if self.if_exists:
                         continue
-                    raise
+                    # NOTE(kevinbenton): this is converted to a RuntimeError
+                    # for compat with the vsctl version. It might make more
+                    # sense to change this to a RowNotFoundError in the future.
+                    raise RuntimeError(_LE(
+                        "Row doesn't exist in the DB. Request info: "
+                        "Table=%(table)s. Columns=%(columns)s. "
+                        "Records=%(records)s.") % {
+                            "table": self.table,
+                            "columns": self.columns,
+                            "records": self.records,
+                        })
         else:
-            self.records = self.table.rows.keys()
-
-    def run_idl(self, txn):
-        try:
-            self.result = [
-                {
-                    c: idlutils.get_column_value(self.table.rows[uuid], c)
-                    for c in self.columns
-                    if not self.if_exists or uuid in self.table.rows
-                }
-                for uuid in self.records
-            ]
-        except KeyError:
-            # NOTE(kevinbenton): this is converted to a RuntimeError for compat
-            # with the vsctl version. It might make more sense to change this
-            # to a RowNotFoundError in the future.
-            raise RuntimeError(_LE(
-                "Row removed from DB during listing. Request info: "
-                "Table=%(table)s. Columns=%(columns)s. "
-                "Records=%(records)s.") % self.requested_info)
+            row_uuids = table_schema.rows.keys()
+        self.result = [
+            {
+                c: idlutils.get_column_value(table_schema.rows[uuid], c)
+                for c in columns
+            }
+            for uuid in row_uuids
+        ]


 class DbFindCommand(BaseCommand):
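The heart of `ListIfacesCommand` is the nested comprehension: a bridge's ports may each carry several interfaces (a bonded port, for instance), so ports are flattened into interface names while the bridge's internal port is skipped. A pure-Python sketch with made-up row data:

```python
# Sketch of ListIfacesCommand's traversal with fake OVSDB rows.
class Row(object):
    def __init__(self, name, interfaces=()):
        self.name = name
        self.interfaces = list(interfaces)

bridge_name = 'br-int'
ports = [Row('br-int'),                           # internal port, skipped
         Row('bond0', [Row('eth0'), Row('eth1')]),  # bond: two interfaces
         Row('tap1', [Row('tap1')])]

ifaces = [i.name for p in ports if p.name != bridge_name
          for i in p.interfaces]
print(ifaces)  # ['eth0', 'eth1', 'tap1']
```

This is also why `iface_to_br` can no longer reuse `PortToBridgeCommand`: with bonds, an interface name need not match its port name, so the new `InterfaceToBridgeCommand` resolves interface → port → bridge explicitly.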


@@ -13,11 +13,11 @@
 # License for the specific language governing permissions and limitations
 # under the License.

+from datetime import datetime
 import itertools

 from oslo_log import log as logging
 import oslo_messaging
-from oslo_utils import timeutils
 from oslo_utils import uuidutils

 from neutron.common import constants
@@ -80,7 +80,7 @@ class PluginReportStateAPI(object):
             agent_state['uuid'] = uuidutils.generate_uuid()
         kwargs = {
             'agent_state': {'agent_state': agent_state},
-            'time': timeutils.strtime(),
+            'time': datetime.utcnow().isoformat(),
         }
         method = cctxt.call if use_call else cctxt.cast
         return method(context, 'report_state', **kwargs)
@@ -95,6 +95,8 @@ class PluginApi(object):
         return value to include fixed_ips and device_owner for
         the device port
         1.4 - tunnel_sync rpc signature upgrade to obtain 'host'
+        1.5 - Support update_device_list and
+              get_devices_details_list_and_failed_devices
     '''
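The `timeutils.strtime()` helper was deprecated in oslo.utils; a plain ISO 8601 UTC timestamp carries the same information and needs no library:

```python
# Replacement for the deprecated oslo timeutils.strtime():
# a standard ISO 8601 timestamp in UTC.
from datetime import datetime

stamp = datetime.utcnow().isoformat()
print(stamp)  # e.g. '2015-07-31T13:41:15.123456'
```

Any consumer can parse it back with standard tooling, which is what makes it a safe wire format for the `report_state` payload.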
     def __init__(self, topic):
@@ -123,6 +125,26 @@ class PluginApi(object):
         ]
         return res

+    def get_devices_details_list_and_failed_devices(self, context, devices,
+                                                    agent_id, host=None):
+        """Get devices details and the list of devices that failed.
+
+        This method returns the devices details. If an error is thrown when
+        retrieving the devices details, the device is put in a list of
+        failed devices.
+        """
+        try:
+            cctxt = self.client.prepare(version='1.5')
+            res = cctxt.call(
+                context,
+                'get_devices_details_list_and_failed_devices',
+                devices=devices, agent_id=agent_id, host=host)
+        except oslo_messaging.UnsupportedVersion:
+            #TODO(rossella_s): Remove this failback logic in M
+            res = self._device_list_rpc_call_with_failed_dev(
+                self.get_device_details, context, agent_id, host, devices)
+        return res
+
     def update_device_down(self, context, device, agent_id, host=None):
         cctxt = self.client.prepare()
         return cctxt.call(context, 'update_device_down', device=device,
@@ -133,6 +155,41 @@ class PluginApi(object):
         return cctxt.call(context, 'update_device_up', device=device,
                           agent_id=agent_id, host=host)

+    def _device_list_rpc_call_with_failed_dev(self, rpc_call, context,
+                                              agent_id, host, devices):
+        succeeded_devices = []
+        failed_devices = []
+        for device in devices:
+            try:
+                rpc_device = rpc_call(context, device, agent_id, host)
+            except Exception:
+                failed_devices.append(device)
+            else:
+                # update_device_up doesn't return the device
+                succeeded_dev = rpc_device or device
+                succeeded_devices.append(succeeded_dev)
+        return {'devices': succeeded_devices, 'failed_devices': failed_devices}
+
+    def update_device_list(self, context, devices_up, devices_down,
+                           agent_id, host):
+        try:
+            cctxt = self.client.prepare(version='1.5')
+            res = cctxt.call(context, 'update_device_list',
+                             devices_up=devices_up, devices_down=devices_down,
+                             agent_id=agent_id, host=host)
+        except oslo_messaging.UnsupportedVersion:
+            #TODO(rossella_s): Remove this failback logic in M
+            dev_up = self._device_list_rpc_call_with_failed_dev(
+                self.update_device_up, context, agent_id, host, devices_up)
+            dev_down = self._device_list_rpc_call_with_failed_dev(
+                self.update_device_down, context, agent_id, host, devices_down)
+            res = {'devices_up': dev_up.get('devices'),
+                   'failed_devices_up': dev_up.get('failed_devices'),
+                   'devices_down': dev_down.get('devices'),
+                   'failed_devices_down': dev_down.get('failed_devices')}
+        return res
+
     def tunnel_sync(self, context, tunnel_ip, tunnel_type=None, host=None):
         try:
             cctxt = self.client.prepare(version='1.4')
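The new methods all follow one backward-compatibility pattern: try the batched RPC at the new version, and if the server rejects the version, degrade to per-device calls while collecting failures. A self-contained sketch of that shape — the exception and client classes here are stand-ins, not oslo.messaging's:

```python
# Sketch of the version-fallback pattern. UnsupportedVersion and
# FakeClient are stand-ins for oslo.messaging's client machinery.
class UnsupportedVersion(Exception):
    pass

class FakeClient(object):
    def call_batched(self, devices):
        raise UnsupportedVersion()  # simulate an old (pre-1.5) server

def get_details(client, devices):
    try:
        return {'devices': client.call_batched(devices),
                'failed_devices': []}
    except UnsupportedVersion:
        # fall back to one RPC per device, tracking failures
        succeeded, failed = [], []
        for dev in devices:
            try:
                succeeded.append('details-%s' % dev)  # per-device call
            except Exception:
                failed.append(dev)
        return {'devices': succeeded, 'failed_devices': failed}

print(get_details(FakeClient(), ['a', 'b']))
```

The agent thus works against both old and new servers, and the fallback path is slated for removal once the old server side is gone (the TODO in the diff).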


@@ -25,7 +25,7 @@ LOG = logging.getLogger(__name__)

 def create_process(cmd, addl_env=None):
-    cmd = map(str, cmd)
+    cmd = list(map(str, cmd))
     LOG.debug("Running command: %s", cmd)
     env = os.environ.copy()
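The repeated `map` → `list(map(...))` changes in this merge are Python 3 groundwork: under Python 3, `map()` returns a lazy iterator, so logging it prints `<map object ...>` and it can be consumed only once.

```python
# Why list(map(...)): Python 3's map() is a lazy, one-shot iterator.
lazy = map(str, [1, 2, 3])
strict = list(map(str, [1, 2, 3]))

print(strict)                   # ['1', '2', '3']
print(isinstance(lazy, list))   # False on Python 3
```

Wrapping in `list()` restores the Python 2 semantics the surrounding code (debug logging, indexing, repeated iteration) relies on.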


@@ -452,10 +452,7 @@ class ExtensionManager(object):
             try:
                 extended_attrs = ext.get_extended_resources(version)
                 for res, resource_attrs in six.iteritems(extended_attrs):
-                    if attr_map.get(res, None):
-                        attr_map[res].update(resource_attrs)
-                    else:
-                        attr_map[res] = resource_attrs
+                    attr_map.setdefault(res, {}).update(resource_attrs)
             except AttributeError:
                 LOG.exception(_LE("Error fetching extended attributes for "
                                   "extension '%s'"), ext.get_name())
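`dict.setdefault` collapses the four-line branch: it fetches the existing per-resource dict or installs a fresh empty one, then the attributes are merged in place. A minimal illustration with made-up resource names:

```python
# dict.setdefault(key, default) returns the existing value if present,
# else inserts and returns the default -- ideal for merge-into-nested-dict.
attr_map = {'networks': {'name': {}}}
extended = {'networks': {'mtu': {}}, 'ports': {'qos': {}}}

for res, resource_attrs in extended.items():
    attr_map.setdefault(res, {}).update(resource_attrs)

print(sorted(attr_map['networks']))  # ['mtu', 'name']
print('ports' in attr_map)           # True
```

Besides being shorter, this avoids a subtle difference of the old code: `attr_map.get(res, None)` treated an existing-but-empty dict as absent and replaced it with the extension's own dict (aliasing it), whereas `setdefault(...).update(...)` always merges into a dict owned by `attr_map`.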


@@ -29,7 +29,7 @@ from neutron.common import utils
 from neutron.extensions import portbindings
 from neutron.i18n import _LW
 from neutron import manager
+from neutron.quota import resource_registry

 LOG = logging.getLogger(__name__)
@@ -203,6 +203,7 @@ class DhcpRpcCallback(object):
         LOG.warning(_LW('Updating lease expiration is now deprecated. Issued '
                         'from host %s.'), host)

+    @resource_registry.mark_resources_dirty
     def create_dhcp_port(self, context, **kwargs):
         """Create and return dhcp port information.


@@ -180,9 +180,8 @@ def _validate_mac_address(data, valid_values=None):

 def _validate_mac_address_or_none(data, valid_values=None):
-    if data is None:
-        return
-    return _validate_mac_address(data, valid_values)
+    if data is not None:
+        return _validate_mac_address(data, valid_values)


 def _validate_ip_address(data, valid_values=None):
@@ -308,9 +307,8 @@ def _validate_hostroutes(data, valid_values=None):

 def _validate_ip_address_or_none(data, valid_values=None):
-    if data is None:
-        return None
-    return _validate_ip_address(data, valid_values)
+    if data is not None:
+        return _validate_ip_address(data, valid_values)


 def _validate_subnet(data, valid_values=None):
@@ -348,9 +346,8 @@ def _validate_subnet_list(data, valid_values=None):

 def _validate_subnet_or_none(data, valid_values=None):
-    if data is None:
-        return
-    return _validate_subnet(data, valid_values)
+    if data is not None:
+        return _validate_subnet(data, valid_values)


 def _validate_regex(data, valid_values=None):
@@ -366,9 +363,8 @@ def _validate_regex(data, valid_values=None):

 def _validate_regex_or_none(data, valid_values=None):
-    if data is None:
-        return
-    return _validate_regex(data, valid_values)
+    if data is not None:
+        return _validate_regex(data, valid_values)


 def _validate_uuid(data, valid_values=None):
@@ -578,7 +574,7 @@ def convert_none_to_empty_dict(value):
 def convert_to_list(data):
     if data is None:
         return []
-    elif hasattr(data, '__iter__'):
+    elif hasattr(data, '__iter__') and not isinstance(data, six.string_types):
         return list(data)
     else:
         return [data]
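The `isinstance` guard on `convert_to_list` matters because strings also define `__iter__` (always on Python 3), so without it a bare string would explode into a list of characters. A sketch of the fixed behavior, using `str` where the diff uses `six.string_types` (they are equivalent on Python 3):

```python
# Sketch of convert_to_list; str stands in for six.string_types (Py3).
def convert_to_list(data):
    if data is None:
        return []
    elif hasattr(data, '__iter__') and not isinstance(data, str):
        return list(data)
    else:
        return [data]

print(convert_to_list('abc'))       # ['abc'], not ['a', 'b', 'c']
print(convert_to_list(('a', 'b')))  # ['a', 'b']
print(convert_to_list(None))        # []
print(convert_to_list(42))          # [42]
```

On Python 2 `str` lacked `__iter__`, which is why the bug was latent until the Python 3 effort; the explicit string check makes the function behave the same on both.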


@@ -17,7 +17,6 @@ import copy

 import netaddr
 from oslo_config import cfg
-from oslo_db import api as oslo_db_api
 from oslo_log import log as logging
 from oslo_policy import policy as oslo_policy
 from oslo_utils import excutils
@@ -35,6 +34,7 @@ from neutron.db import api as db_api
 from neutron.i18n import _LE, _LI
 from neutron import policy
 from neutron import quota
+from neutron.quota import resource_registry

 LOG = logging.getLogger(__name__)
@@ -187,6 +187,7 @@ class Controller(object):

     def __getattr__(self, name):
         if name in self._member_actions:
+            @db_api.retry_db_errors
             def _handle_action(request, id, **kwargs):
                 arg_list = [request.context, id]
                 # Ensure policy engine is initialized
@@ -197,7 +198,7 @@ class Controller(object):
                 except oslo_policy.PolicyNotAuthorized:
                     msg = _('The resource could not be found.')
                     raise webob.exc.HTTPNotFound(msg)
-                body = kwargs.pop('body', None)
+                body = copy.deepcopy(kwargs.pop('body', None))
                 # Explicit comparison with None to distinguish from {}
                 if body is not None:
                     arg_list.append(body)
@@ -207,7 +208,15 @@ class Controller(object):
                     name,
                     resource,
                     pluralized=self._collection)
-                return getattr(self._plugin, name)(*arg_list, **kwargs)
+                ret_value = getattr(self._plugin, name)(*arg_list, **kwargs)
+                # It is simply impossible to predict whether one of this
+                # actions alters resource usage. For instance a tenant port
+                # is created when a router interface is added. Therefore it is
+                # important to mark as dirty resources whose counters have
+                # been altered by this operation
+                resource_registry.set_resources_dirty(request.context)
+                return ret_value
             return _handle_action
         else:
             raise AttributeError()
@@ -280,6 +289,9 @@ class Controller(object):
         pagination_links = pagination_helper.get_links(obj_list)
         if pagination_links:
             collection[self._collection + "_links"] = pagination_links
+        # Synchronize usage trackers, if needed
+        resource_registry.resync_resource(
+            request.context, self._resource, request.context.tenant_id)
         return collection

     def _item(self, request, id, do_authz=False, field_list=None,
@@ -383,8 +395,7 @@ class Controller(object):
     # We need a way for ensuring that if it has been created,
     # it is then deleted
-    @oslo_db_api.wrap_db_retry(max_retries=db_api.MAX_RETRIES,
-                               retry_on_deadlock=True)
+    @db_api.retry_db_errors
     def create(self, request, body=None, **kwargs):
         """Creates a new instance of the requested entity."""
         parent_id = kwargs.get(self._parent_id_name)
@@ -414,11 +425,13 @@ class Controller(object):
                     action,
                     item[self._resource],
                     pluralized=self._collection)
+                if 'tenant_id' not in item[self._resource]:
+                    # no tenant_id - no quota check
+                    continue
                 try:
                     tenant_id = item[self._resource]['tenant_id']
                     count = quota.QUOTAS.count(request.context, self._resource,
-                                               self._plugin, self._collection,
-                                               tenant_id)
+                                               self._plugin, tenant_id)
                     if bulk:
                         delta = deltas.get(tenant_id, 0) + 1
                         deltas[tenant_id] = delta
@@ -434,6 +447,12 @@ class Controller(object):
                            **kwargs)

         def notify(create_result):
+            # Ensure usage trackers for all resources affected by this API
+            # operation are marked as dirty
+            # TODO(salv-orlando): This operation will happen in a single
+            # transaction with reservation commit once that is implemented
+            resource_registry.set_resources_dirty(request.context)
             notifier_method = self._resource + '.create.end'
             self._notifier.info(request.context,
                                 notifier_method,
@@ -470,8 +489,7 @@ class Controller(object):
             return notify({self._resource: self._view(request.context,
                                                       obj)})

-    @oslo_db_api.wrap_db_retry(max_retries=db_api.MAX_RETRIES,
-                               retry_on_deadlock=True)
+    @db_api.retry_db_errors
     def delete(self, request, id, **kwargs):
         """Deletes the specified entity."""
         self._notifier.info(request.context,
@@ -496,6 +514,9 @@ class Controller(object):
         obj_deleter = getattr(self._plugin, action)
         obj_deleter(request.context, id, **kwargs)
+        # A delete operation usually alters resource usage, so mark affected
+        # usage trackers as dirty
+        resource_registry.set_resources_dirty(request.context)
         notifier_method = self._resource + '.delete.end'
         self._notifier.info(request.context,
                             notifier_method,
@@ -506,8 +527,7 @@ class Controller(object):
                              result,
                              notifier_method)

-    @oslo_db_api.wrap_db_retry(max_retries=db_api.MAX_RETRIES,
-                               retry_on_deadlock=True)
+    @db_api.retry_db_errors
     def update(self, request, id, body=None, **kwargs):
         """Updates the specified entity's attributes."""
         parent_id = kwargs.get(self._parent_id_name)
@@ -561,6 +581,12 @@ class Controller(object):
         if parent_id:
             kwargs[self._parent_id_name] = parent_id
         obj = obj_updater(request.context, id, **kwargs)
+        # Usually an update operation does not alter resource usage, but as
+        # there might be side effects it might be worth checking for changes
+        # in resource usage here as well (e.g: a tenant port is created when a
+        # router interface is added)
+        resource_registry.set_resources_dirty(request.context)
         result = {self._resource: self._view(request.context, obj)}
         notifier_method = self._resource + '.update.end'
         self._notifier.info(request.context, notifier_method, result)
@@ -571,8 +597,7 @@ class Controller(object):
         return result

     @staticmethod
-    def _populate_tenant_id(context, res_dict, is_create):
+    def _populate_tenant_id(context, res_dict, attr_info, is_create):
         if (('tenant_id' in res_dict and
              res_dict['tenant_id'] != context.tenant_id and
              not context.is_admin)):
@@ -583,9 +608,9 @@ class Controller(object):
         if is_create and 'tenant_id' not in res_dict:
             if context.tenant_id:
                 res_dict['tenant_id'] = context.tenant_id
-            else:
+            elif 'tenant_id' in attr_info:
                 msg = _("Running without keystone AuthN requires "
-                        " that tenant_id is specified")
+                        "that tenant_id is specified")
                 raise webob.exc.HTTPBadRequest(msg)

     @staticmethod
@@ -627,7 +652,7 @@ class Controller(object):
             msg = _("Unable to find '%s' in request body") % resource
             raise webob.exc.HTTPBadRequest(msg)

-        Controller._populate_tenant_id(context, res_dict, is_create)
+        Controller._populate_tenant_id(context, res_dict, attr_info, is_create)
         Controller._verify_attributes(res_dict, attr_info)

         if is_create:  # POST
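The `copy.deepcopy` on the member-action body pairs with the new `@db_api.retry_db_errors` decorator: once a handler can run more than once, a plugin that mutates the request body in place would see its own first-attempt mutations on the retry. A minimal illustration of the hazard and the fix:

```python
# Why the body is deep-copied before dispatch: a retried handler must
# see the request body exactly as the client sent it.
import copy

def handler(body):
    body['_processed'] = True   # a plugin that mutates its input
    return body

original = {'router': {'name': 'r1'}}

first = handler(copy.deepcopy(original))          # first attempt
second = handler(copy.deepcopy(original))         # retry after a DB error
print('_processed' in original)  # False: the original is untouched
```

Without the copies, `original` would already carry `_processed` on the retry, and nested dicts would be shared between attempts; `deepcopy` gives each attempt a pristine, independent body.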


@@ -102,7 +102,11 @@ def Resource(controller, faults=None, deserializers=None, serializers=None):
                 raise mapped_exc(**kwargs)
             except webob.exc.HTTPException as e:
                 type_, value, tb = sys.exc_info()
-                LOG.exception(_LE('%s failed'), action)
+                if hasattr(e, 'code') and 400 <= e.code < 500:
+                    LOG.info(_LI('%(action)s failed (client error): %(exc)s'),
+                             {'action': action, 'exc': e})
+                else:
+                    LOG.exception(_LE('%s failed'), action)
                 translate(e, language)
                 value.body = serializer.serialize(
                     {'NeutronError': get_exception_data(e)})
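The change keeps expected client mistakes (4xx) out of the error log: they are reported at INFO without a traceback, while server-side failures still get the full `LOG.exception` treatment. A small sketch of the dispatch:

```python
# Sketch of the 4xx-vs-other log policy from the diff above.
import logging

log = logging.getLogger('sketch')

def log_failure(action, exc, code):
    if 400 <= code < 500:
        # expected client error: no traceback, lower severity
        log.info('%s failed (client error): %s', action, exc)
        return 'info'
    # unexpected failure: keep full exception logging
    log.exception('%s failed', action)
    return 'exception'

print(log_failure('create', ValueError('bad request'), 404))  # info
```

Operationally this matters because a noisy tenant issuing bad requests no longer floods the server log with stack traces that look like genuine bugs.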


@@ -20,7 +20,7 @@ from neutron.api import extensions
 from neutron.api.v2 import base
 from neutron import manager
 from neutron.plugins.common import constants
-from neutron import quota
+from neutron.quota import resource_registry

 LOG = logging.getLogger(__name__)
@@ -80,7 +80,7 @@ def build_resource_info(plural_mappings, resource_map, which_service,
         if translate_name:
             collection_name = collection_name.replace('_', '-')
         if register_quota:
-            quota.QUOTAS.register_resource_by_name(resource_name)
+            resource_registry.register_resource_by_name(resource_name)
         member_actions = action_map.get(resource_name, {})
         controller = base.create_resource(
             collection_name, resource_name, plugin, params,

View File

@@ -27,7 +27,7 @@ from neutron.api.v2 import attributes
 from neutron.api.v2 import base
 from neutron import manager
 from neutron import policy
-from neutron import quota
+from neutron.quota import resource_registry
 from neutron import wsgi
@@ -106,7 +106,7 @@ class APIRouter(wsgi.Router):
             _map_resource(RESOURCES[resource], resource,
                           attributes.RESOURCE_ATTRIBUTE_MAP.get(
                               RESOURCES[resource], dict()))
-            quota.QUOTAS.register_resource_by_name(resource)
+            resource_registry.register_resource_by_name(resource)
         for resource in SUB_RESOURCES:
             _map_resource(SUB_RESOURCES[resource]['collection_name'], resource,

View File

@@ -14,16 +14,24 @@
 # under the License.

 import re
+import shutil
+import tempfile

 import netaddr
+from oslo_config import cfg
 from oslo_log import log as logging
 from oslo_utils import uuidutils
 import six

 from neutron.agent.common import ovs_lib
+from neutron.agent.l3 import ha_router
+from neutron.agent.l3 import namespaces
+from neutron.agent.linux import external_process
 from neutron.agent.linux import ip_lib
 from neutron.agent.linux import ip_link_support
+from neutron.agent.linux import keepalived
 from neutron.agent.linux import utils as agent_utils
+from neutron.common import constants as n_consts
 from neutron.common import utils
 from neutron.i18n import _LE
 from neutron.plugins.common import constants as const
@@ -166,6 +174,124 @@ def dnsmasq_version_supported():
     return True
class KeepalivedIPv6Test(object):
def __init__(self, ha_port, gw_port, gw_vip, default_gw):
self.ha_port = ha_port
self.gw_port = gw_port
self.gw_vip = gw_vip
self.default_gw = default_gw
self.manager = None
self.config = None
self.config_path = None
self.nsname = "keepalivedtest-" + uuidutils.generate_uuid()
self.pm = external_process.ProcessMonitor(cfg.CONF, 'router')
self.orig_interval = cfg.CONF.AGENT.check_child_processes_interval
def configure(self):
config = keepalived.KeepalivedConf()
instance1 = keepalived.KeepalivedInstance('MASTER', self.ha_port, 1,
['169.254.192.0/18'],
advert_int=5)
instance1.track_interfaces.append(self.ha_port)
# Configure keepalived with an IPv6 address (gw_vip) on gw_port.
vip_addr1 = keepalived.KeepalivedVipAddress(self.gw_vip, self.gw_port)
instance1.vips.append(vip_addr1)
# Configure keepalived with an IPv6 default route on gw_port.
gateway_route = keepalived.KeepalivedVirtualRoute(n_consts.IPv6_ANY,
self.default_gw,
self.gw_port)
instance1.virtual_routes.gateway_routes = [gateway_route]
config.add_instance(instance1)
self.config = config
def start_keepalived_process(self):
# Disable process monitoring for Keepalived process.
cfg.CONF.set_override('check_child_processes_interval', 0, 'AGENT')
# Create a temp directory to store keepalived configuration.
self.config_path = tempfile.mkdtemp()
# Instantiate keepalived manager with the IPv6 configuration.
self.manager = keepalived.KeepalivedManager('router1', self.config,
namespace=self.nsname, process_monitor=self.pm,
conf_path=self.config_path)
self.manager.spawn()
def verify_ipv6_address_assignment(self, gw_dev):
process = self.manager.get_process()
agent_utils.wait_until_true(lambda: process.active)
def _gw_vip_assigned():
iface_ip = gw_dev.addr.list(ip_version=6, scope='global')
if iface_ip:
return self.gw_vip == iface_ip[0]['cidr']
agent_utils.wait_until_true(_gw_vip_assigned)
def __enter__(self):
ip_lib.IPWrapper().netns.add(self.nsname)
return self
def __exit__(self, exc_type, exc_value, exc_tb):
self.pm.stop()
if self.manager:
self.manager.disable()
if self.config_path:
shutil.rmtree(self.config_path, ignore_errors=True)
ip_lib.IPWrapper().netns.delete(self.nsname)
cfg.CONF.set_override('check_child_processes_interval',
self.orig_interval, 'AGENT')
def keepalived_ipv6_supported():
"""Check if keepalived supports IPv6 functionality.
Validation is done as follows.
1. Create a namespace.
2. Create OVS bridge with two ports (ha_port and gw_port)
3. Move the ovs ports to the namespace.
4. Spawn keepalived process inside the namespace with IPv6 configuration.
5. Verify if IPv6 address is assigned to gw_port.
6. Verify if IPv6 default route is configured by keepalived.
"""
random_str = utils.get_random_string(6)
br_name = "ka-test-" + random_str
ha_port = ha_router.HA_DEV_PREFIX + random_str
gw_port = namespaces.INTERNAL_DEV_PREFIX + random_str
gw_vip = 'fdf8:f53b:82e4::10/64'
expected_default_gw = 'fe80:f816::1'
with ovs_lib.OVSBridge(br_name) as br:
with KeepalivedIPv6Test(ha_port, gw_port, gw_vip,
expected_default_gw) as ka:
br.add_port(ha_port, ('type', 'internal'))
br.add_port(gw_port, ('type', 'internal'))
ha_dev = ip_lib.IPDevice(ha_port)
gw_dev = ip_lib.IPDevice(gw_port)
ha_dev.link.set_netns(ka.nsname)
gw_dev.link.set_netns(ka.nsname)
ha_dev.link.set_up()
gw_dev.link.set_up()
ka.configure()
ka.start_keepalived_process()
ka.verify_ipv6_address_assignment(gw_dev)
default_gw = gw_dev.route.get_gateway(ip_version=6)
if default_gw:
default_gw = default_gw['gateway']
return expected_default_gw == default_gw
 def ovsdb_native_supported():
     # Running the test should ensure we are configured for OVSDB native
     try:

View File

@@ -21,6 +21,7 @@ from oslo_log import log as logging
 from neutron.agent import dhcp_agent
 from neutron.cmd.sanity import checks
 from neutron.common import config
+from neutron.db import l3_hamode_db
 from neutron.i18n import _LE, _LW
@@ -35,6 +36,7 @@ cfg.CONF.import_group('ml2', 'neutron.plugins.ml2.config')
 cfg.CONF.import_group('ml2_sriov',
                       'neutron.plugins.ml2.drivers.mech_sriov.mech_driver')
 dhcp_agent.register_options()
+cfg.CONF.register_opts(l3_hamode_db.L3_HA_OPTS)


 class BoolOptCallback(cfg.BoolOpt):
@@ -105,6 +107,15 @@ def check_dnsmasq_version():
     return result


+def check_keepalived_ipv6_support():
+    result = checks.keepalived_ipv6_supported()
+    if not result:
+        LOG.error(_LE('The installed version of keepalived does not support '
+                      'IPv6. Please update to at least version 1.2.10 for '
+                      'IPv6 support.'))
+    return result
+
+
 def check_nova_notify():
     result = checks.nova_notify_supported()
     if not result:
@@ -181,6 +192,8 @@ OPTS = [
                     help=_('Check ovsdb native interface support')),
     BoolOptCallback('ebtables_installed', check_ebtables,
                     help=_('Check ebtables installation')),
+    BoolOptCallback('keepalived_ipv6_support', check_keepalived_ipv6_support,
+                    help=_('Check keepalived IPv6 support')),
 ]
@@ -214,6 +227,8 @@ def enable_tests_from_config():
         cfg.CONF.set_override('dnsmasq_version', True)
     if cfg.CONF.OVS.ovsdb_interface == 'native':
         cfg.CONF.set_override('ovsdb_native', True)
+    if cfg.CONF.l3_ha:
+        cfg.CONF.set_override('keepalived_ipv6_support', True)
 def all_tests_passed():

View File

@@ -45,6 +45,9 @@ DEVICE_OWNER_LOADBALANCERV2 = "neutron:LOADBALANCERV2"
 # DEVICE_OWNER_ROUTER_HA_INTF is a special case and so is not included.
 ROUTER_INTERFACE_OWNERS = (DEVICE_OWNER_ROUTER_INTF,
                            DEVICE_OWNER_DVR_INTERFACE)
+ROUTER_INTERFACE_OWNERS_SNAT = (DEVICE_OWNER_ROUTER_INTF,
+                                DEVICE_OWNER_DVR_INTERFACE,
+                                DEVICE_OWNER_ROUTER_SNAT)

 L3_AGENT_MODE_DVR = 'dvr'
 L3_AGENT_MODE_DVR_SNAT = 'dvr_snat'
 L3_AGENT_MODE_LEGACY = 'legacy'
@@ -178,3 +181,5 @@ RPC_NAMESPACE_STATE = None
 # Default network MTU value when not configured
 DEFAULT_NETWORK_MTU = 0
+
+ROUTER_MARK_MASK = "0xffff"

View File

@@ -69,6 +69,10 @@ class ServiceUnavailable(NeutronException):
     message = _("The service is unavailable")


+class NotSupported(NeutronException):
+    message = _('Not supported: %(msg)s')
+
+
 class AdminRequired(NotAuthorized):
     message = _("User does not have admin privileges: %(reason)s")

View File

@@ -13,26 +13,11 @@
 # under the License.

 """Log helper functions."""
-import functools

-from oslo_log import log as logging
+from oslo_log import helpers
 from oslo_log import versionutils


-@versionutils.deprecated(as_of=versionutils.deprecated.LIBERTY,
-                         in_favor_of='oslo_log.helpers.log_method_call')
-def log(method):
-    """Decorator helping to log method calls."""
-    LOG = logging.getLogger(method.__module__)
-
-    @functools.wraps(method)
-    def wrapper(*args, **kwargs):
-        instance = args[0]
-        data = {"class_name": "%s.%s" % (instance.__class__.__module__,
-                                         instance.__class__.__name__),
-                "method_name": method.__name__,
-                "args": args[1:], "kwargs": kwargs}
-        LOG.debug('%(class_name)s method %(method_name)s'
-                  ' called with arguments %(args)s %(kwargs)s', data)
-        return method(*args, **kwargs)
-    return wrapper
+log = versionutils.deprecated(
+    as_of=versionutils.deprecated.LIBERTY,
+    in_favor_of='oslo_log.helpers.log_method_call')(helpers.log_method_call)

View File

@@ -19,6 +19,7 @@
 """Utilities and helper functions."""

 import datetime
+import errno
 import functools
 import hashlib
 import logging as std_logging
@@ -172,6 +173,16 @@ def find_config_file(options, config_file):
             return cfg_file


+def ensure_dir(dir_path):
+    """Ensure a directory with 755 permissions mode."""
+    try:
+        os.makedirs(dir_path, 0o755)
+    except OSError as e:
+        # If the directory already existed, don't raise the error.
+        if e.errno != errno.EEXIST:
+            raise
+
+
 def _subprocess_setup():
     # Python installs a SIGPIPE handler by default. This is usually not what
     # non-Python subprocesses expect.
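The new `ensure_dir` helper is idempotent: a second call on an existing path swallows the EEXIST error instead of raising. A stdlib-only demonstration:

```python
import errno
import os
import tempfile

def ensure_dir(dir_path):
    """Create dir_path (mode 0755), tolerating an existing directory."""
    try:
        os.makedirs(dir_path, 0o755)
    except OSError as e:
        # If the directory already existed, don't raise the error.
        if e.errno != errno.EEXIST:
            raise

path = os.path.join(tempfile.mkdtemp(), 'conf', 'dhcp')
ensure_dir(path)   # creates intermediate directories too
ensure_dir(path)   # no-op: EEXIST is swallowed
```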

View File

@@ -39,7 +39,8 @@ class ContextBase(oslo_context.RequestContext):
     @removals.removed_kwarg('read_deleted')
     def __init__(self, user_id, tenant_id, is_admin=None, roles=None,
                  timestamp=None, request_id=None, tenant_name=None,
-                 user_name=None, overwrite=True, auth_token=None, **kwargs):
+                 user_name=None, overwrite=True, auth_token=None,
+                 is_advsvc=None, **kwargs):
         """Object initialization.

         :param overwrite: Set to False to ensure that the greenthread local
@@ -60,7 +61,9 @@ class ContextBase(oslo_context.RequestContext):
             timestamp = datetime.datetime.utcnow()
         self.timestamp = timestamp
         self.roles = roles or []
-        self.is_advsvc = self.is_admin or policy.check_is_advsvc(self)
+        self.is_advsvc = is_advsvc
+        if self.is_advsvc is None:
+            self.is_advsvc = self.is_admin or policy.check_is_advsvc(self)
         if self.is_admin is None:
             self.is_admin = policy.check_is_admin(self)

View File

@@ -17,6 +17,7 @@ import contextlib
 import six

 from oslo_config import cfg
+from oslo_db import api as oslo_db_api
 from oslo_db import exception as os_db_exception
 from oslo_db.sqlalchemy import session
 from sqlalchemy import exc
@@ -26,6 +27,8 @@ from sqlalchemy import orm
 _FACADE = None

 MAX_RETRIES = 10
+retry_db_errors = oslo_db_api.wrap_db_retry(max_retries=MAX_RETRIES,
+                                            retry_on_deadlock=True)


 def _create_facade_lazily():
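`oslo_db.api.wrap_db_retry` returns a decorator that re-invokes the wrapped function when a deadlock exception escapes, up to `max_retries` times. A simplified stdlib-only sketch of that behavior (the `DBDeadlock` class here is a stand-in for oslo_db's real exception, not its implementation):

```python
import functools

class DBDeadlock(Exception):
    """Stand-in for oslo_db.exception.DBDeadlock."""

def wrap_db_retry(max_retries=10):
    def decorator(f):
        @functools.wraps(f)
        def wrapper(*args, **kwargs):
            for attempt in range(max_retries + 1):
                try:
                    return f(*args, **kwargs)
                except DBDeadlock:
                    # out of attempts: let the deadlock propagate
                    if attempt == max_retries:
                        raise
        return wrapper
    return decorator

attempts = {'n': 0}

@wrap_db_retry(max_retries=10)
def flaky_db_op():
    attempts['n'] += 1
    if attempts['n'] < 3:
        raise DBDeadlock()
    return 'committed'
```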

View File

@@ -16,6 +16,8 @@
 import weakref

 import six
+from sqlalchemy import and_
+from sqlalchemy import or_
 from sqlalchemy import sql

 from neutron.common import exceptions as n_exc
@@ -98,7 +100,15 @@ class CommonDbMixin(object):
         # define basic filter condition for model query
         query_filter = None
         if self.model_query_scope(context, model):
-            if hasattr(model, 'shared'):
+            if hasattr(model, 'rbac_entries'):
+                rbac_model, join_params = self._get_rbac_query_params(model)
+                query = query.outerjoin(*join_params)
+                query_filter = (
+                    (model.tenant_id == context.tenant_id) |
+                    ((rbac_model.action == 'access_as_shared') &
+                     ((rbac_model.target_tenant == context.tenant_id) |
+                      (rbac_model.target_tenant == '*'))))
+            elif hasattr(model, 'shared'):
                 query_filter = ((model.tenant_id == context.tenant_id) |
                                 (model.shared == sql.true()))
             else:
@@ -145,15 +155,47 @@ class CommonDbMixin(object):
         query = self._model_query(context, model)
         return query.filter(model.id == id).one()

-    def _apply_filters_to_query(self, query, model, filters):
+    @staticmethod
+    def _get_rbac_query_params(model):
+        """Return the class and join params for the rbac relationship."""
+        try:
+            cls = model.rbac_entries.property.mapper.class_
+            return (cls, (cls, ))
+        except AttributeError:
+            # an association proxy is being used (e.g. subnets
+            # depends on network's rbac entries)
+            rbac_model = (model.rbac_entries.target_class.
+                          rbac_entries.property.mapper.class_)
+            return (rbac_model, model.rbac_entries.attr)
+
+    def _apply_filters_to_query(self, query, model, filters, context=None):
         if filters:
             for key, value in six.iteritems(filters):
                 column = getattr(model, key, None)
-                if column:
+                # NOTE(kevinbenton): if column is a hybrid property that
+                # references another expression, attempting to convert to
+                # a boolean will fail so we must compare to None.
+                # See "An Important Expression Language Gotcha" in:
+                # docs.sqlalchemy.org/en/rel_0_9/changelog/migration_06.html
+                if column is not None:
                     if not value:
                         query = query.filter(sql.false())
                         return query
                     query = query.filter(column.in_(value))
+                elif key == 'shared' and hasattr(model, 'rbac_entries'):
+                    # translate a filter on shared into a query against the
+                    # object's rbac entries
+                    rbac, join_params = self._get_rbac_query_params(model)
+                    query = query.outerjoin(*join_params, aliased=True)
+                    matches = [rbac.target_tenant == '*']
+                    if context:
+                        matches.append(rbac.target_tenant == context.tenant_id)
+                    is_shared = and_(
+                        ~rbac.object_id.is_(None),
+                        rbac.action == 'access_as_shared',
+                        or_(*matches)
+                    )
+                    query = query.filter(is_shared if value[0] else ~is_shared)
             for _nam, hooks in six.iteritems(self._model_query_hooks.get(model,
                                                                          {})):
                 result_filter = hooks.get('result_filters', None)
@@ -181,7 +223,8 @@ class CommonDbMixin(object):
                         sorts=None, limit=None, marker_obj=None,
                         page_reverse=False):
         collection = self._model_query(context, model)
-        collection = self._apply_filters_to_query(collection, model, filters)
+        collection = self._apply_filters_to_query(collection, model, filters,
+                                                  context)
         if limit and page_reverse and sorts:
             sorts = [(s[0], not s[1]) for s in sorts]
         collection = sqlalchemyutils.paginate_query(collection, model, limit,

View File

@@ -13,6 +13,8 @@
 # License for the specific language governing permissions and limitations
 # under the License.

+import functools
+
 from oslo_config import cfg
 from oslo_log import log as logging
 from sqlalchemy.orm import exc
@@ -72,7 +74,7 @@ class DbBasePluginCommon(common_db_mixin.CommonDbMixin):
             )
             context.session.add(allocated)

-    def _make_subnet_dict(self, subnet, fields=None):
+    def _make_subnet_dict(self, subnet, fields=None, context=None):
         res = {'id': subnet['id'],
                'name': subnet['name'],
                'tenant_id': subnet['tenant_id'],
@@ -92,8 +94,10 @@ class DbBasePluginCommon(common_db_mixin.CommonDbMixin):
                'host_routes': [{'destination': route['destination'],
                                 'nexthop': route['nexthop']}
                                for route in subnet['routes']],
-               'shared': subnet['shared']
                }
+        # The shared attribute for a subnet is the same as its parent network
+        res['shared'] = self._make_network_dict(subnet.networks,
+                                                context=context)['shared']
         # Call auxiliary extend functions, if any
         self._apply_dict_extend_functions(attributes.SUBNETS, res, subnet)
         return self._fields(res, fields)
@@ -168,7 +172,8 @@ class DbBasePluginCommon(common_db_mixin.CommonDbMixin):

     def _get_dns_by_subnet(self, context, subnet_id):
         dns_qry = context.session.query(models_v2.DNSNameServer)
-        return dns_qry.filter_by(subnet_id=subnet_id).all()
+        return dns_qry.filter_by(subnet_id=subnet_id).order_by(
+            models_v2.DNSNameServer.order).all()

     def _get_route_by_subnet(self, context, subnet_id):
         route_qry = context.session.query(models_v2.SubnetRoute)
@@ -196,8 +201,10 @@ class DbBasePluginCommon(common_db_mixin.CommonDbMixin):
                     sorts=None, limit=None, marker=None,
                     page_reverse=False):
         marker_obj = self._get_marker_obj(context, 'subnet', limit, marker)
+        make_subnet_dict = functools.partial(self._make_subnet_dict,
+                                             context=context)
         return self._get_collection(context, models_v2.Subnet,
-                                    self._make_subnet_dict,
+                                    make_subnet_dict,
                                     filters=filters, fields=fields,
                                     sorts=sorts,
                                     limit=limit,
@@ -205,16 +212,25 @@ class DbBasePluginCommon(common_db_mixin.CommonDbMixin):
                                     page_reverse=page_reverse)

     def _make_network_dict(self, network, fields=None,
-                           process_extensions=True):
+                           process_extensions=True, context=None):
         res = {'id': network['id'],
                'name': network['name'],
                'tenant_id': network['tenant_id'],
                'admin_state_up': network['admin_state_up'],
                'mtu': network.get('mtu', constants.DEFAULT_NETWORK_MTU),
                'status': network['status'],
-               'shared': network['shared'],
                'subnets': [subnet['id']
                            for subnet in network['subnets']]}
+        # The shared attribute for a network now reflects if the network
+        # is shared to the calling tenant via an RBAC entry.
+        shared = False
+        matches = ('*',) + ((context.tenant_id,) if context else ())
+        for entry in network.rbac_entries:
+            if (entry.action == 'access_as_shared' and
+                    entry.target_tenant in matches):
+                shared = True
+                break
+        res['shared'] = shared
         # TODO(pritesh): Move vlan_transparent to the extension module.
         # vlan_transparent here is only added if the vlantransparent
         # extension is enabled.
@@ -227,8 +243,7 @@ class DbBasePluginCommon(common_db_mixin.CommonDbMixin):
                 attributes.NETWORKS, res, network)
         return self._fields(res, fields)

-    def _make_subnet_args(self, shared, detail,
-                          subnet, subnetpool_id):
+    def _make_subnet_args(self, detail, subnet, subnetpool_id):
         gateway_ip = str(detail.gateway_ip) if detail.gateway_ip else None
         args = {'tenant_id': detail.tenant_id,
                 'id': detail.subnet_id,
@@ -238,8 +253,7 @@ class DbBasePluginCommon(common_db_mixin.CommonDbMixin):
                 'cidr': str(detail.subnet_cidr),
                 'subnetpool_id': subnetpool_id,
                 'enable_dhcp': subnet['enable_dhcp'],
-                'gateway_ip': gateway_ip,
-                'shared': shared}
+                'gateway_ip': gateway_ip}
         if subnet['ip_version'] == 6 and subnet['enable_dhcp']:
             if attributes.is_attr_set(subnet['ipv6_ra_mode']):
                 args['ipv6_ra_mode'] = subnet['ipv6_ra_mode']
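The RBAC-based `shared` computation above boils down to a small predicate; a pure-Python sketch (entries modeled as `(action, target_tenant)` pairs rather than the real SQLAlchemy objects):

```python
def network_is_shared(rbac_entries, tenant_id=None):
    """True if any 'access_as_shared' entry targets this tenant
    or the wildcard tenant '*'."""
    matches = ('*',) + ((tenant_id,) if tenant_id else ())
    return any(action == 'access_as_shared' and target in matches
               for action, target in rbac_entries)
```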

View File

@@ -13,6 +13,8 @@
 # License for the specific language governing permissions and limitations
 # under the License.

+import functools
+
 import netaddr
 from oslo_config import cfg
 from oslo_db import exception as db_exc
@@ -34,7 +36,9 @@ from neutron import context as ctx
 from neutron.db import api as db_api
 from neutron.db import db_base_plugin_common
 from neutron.db import ipam_non_pluggable_backend
+from neutron.db import ipam_pluggable_backend
 from neutron.db import models_v2
+from neutron.db import rbac_db_models as rbac_db
 from neutron.db import sqlalchemyutils
 from neutron.extensions import l3
 from neutron.i18n import _LE, _LI
@@ -98,7 +102,10 @@ class NeutronDbPluginV2(db_base_plugin_common.DbBasePluginCommon,
             self.nova_notifier.record_port_status_changed)

     def set_ipam_backend(self):
-        self.ipam = ipam_non_pluggable_backend.IpamNonPluggableBackend()
+        if cfg.CONF.ipam_driver:
+            self.ipam = ipam_pluggable_backend.IpamPluggableBackend()
+        else:
+            self.ipam = ipam_non_pluggable_backend.IpamNonPluggableBackend()

     def _validate_host_route(self, route, ip_version):
         try:
@@ -235,7 +242,6 @@ class NeutronDbPluginV2(db_base_plugin_common.DbBasePluginCommon,
                 'name': n['name'],
                 'admin_state_up': n['admin_state_up'],
                 'mtu': n.get('mtu', constants.DEFAULT_NETWORK_MTU),
-                'shared': n['shared'],
                 'status': n.get('status', constants.NET_STATUS_ACTIVE)}
         # TODO(pritesh): Move vlan_transparent to the extension module.
         # vlan_transparent here is only added if the vlantransparent
@@ -244,8 +250,14 @@ class NeutronDbPluginV2(db_base_plugin_common.DbBasePluginCommon,
                 attributes.ATTR_NOT_SPECIFIED):
             args['vlan_transparent'] = n['vlan_transparent']
         network = models_v2.Network(**args)
+        if n['shared']:
+            entry = rbac_db.NetworkRBAC(
+                network=network, action='access_as_shared',
+                target_tenant='*', tenant_id=network['tenant_id'])
+            context.session.add(entry)
         context.session.add(network)
-        return self._make_network_dict(network, process_extensions=False)
+        return self._make_network_dict(network, process_extensions=False,
+                                       context=context)

     def update_network(self, context, id, network):
         n = network['network']
@@ -253,13 +265,25 @@ class NeutronDbPluginV2(db_base_plugin_common.DbBasePluginCommon,
             network = self._get_network(context, id)
             # validate 'shared' parameter
             if 'shared' in n:
+                entry = None
+                for item in network.rbac_entries:
+                    if (item.action == 'access_as_shared' and
+                            item.target_tenant == '*'):
+                        entry = item
+                        break
+                setattr(network, 'shared', True if entry else False)
                 self._validate_shared_update(context, id, network, n)
+                update_shared = n.pop('shared')
+                if update_shared and not entry:
+                    entry = rbac_db.NetworkRBAC(
+                        network=network, action='access_as_shared',
+                        target_tenant='*', tenant_id=network['tenant_id'])
+                    context.session.add(entry)
+                elif not update_shared and entry:
+                    context.session.delete(entry)
+                    context.session.expire(network, ['rbac_entries'])
             network.update(n)
-            # also update shared in all the subnets for this network
-            subnets = self._get_subnets_by_network(context, id)
-            for subnet in subnets:
-                subnet['shared'] = network['shared']
-            return self._make_network_dict(network)
+            return self._make_network_dict(network, context=context)

     def delete_network(self, context, id):
         with context.session.begin(subtransactions=True):
@@ -285,14 +309,16 @@ class NeutronDbPluginV2(db_base_plugin_common.DbBasePluginCommon,

     def get_network(self, context, id, fields=None):
         network = self._get_network(context, id)
-        return self._make_network_dict(network, fields)
+        return self._make_network_dict(network, fields, context=context)

     def get_networks(self, context, filters=None, fields=None,
                      sorts=None, limit=None, marker=None,
                      page_reverse=False):
         marker_obj = self._get_marker_obj(context, 'network', limit, marker)
+        make_network_dict = functools.partial(self._make_network_dict,
+                                              context=context)
         return self._get_collection(context, models_v2.Network,
-                                    self._make_network_dict,
+                                    make_network_dict,
                                     filters=filters, fields=fields,
                                     sorts=sorts,
                                     limit=limit,
@@ -448,10 +474,10 @@ class NeutronDbPluginV2(db_base_plugin_common.DbBasePluginCommon,
         with context.session.begin(subtransactions=True):
             network = self._get_network(context, s["network_id"])
-            subnet = self.ipam.allocate_subnet(context,
-                                               network,
-                                               s,
-                                               subnetpool_id)
+            subnet, ipam_subnet = self.ipam.allocate_subnet(context,
+                                                            network,
+                                                            s,
+                                                            subnetpool_id)
         if hasattr(network, 'external') and network.external:
             self._update_router_gw_ports(context,
                                          network,
@@ -459,8 +485,9 @@ class NeutronDbPluginV2(db_base_plugin_common.DbBasePluginCommon,
         # If this subnet supports auto-addressing, then update any
         # internal ports on the network with addresses for this subnet.
         if ipv6_utils.is_auto_address_subnet(subnet):
-            self.ipam.add_auto_addrs_on_network_ports(context, subnet)
-        return self._make_subnet_dict(subnet)
+            self.ipam.add_auto_addrs_on_network_ports(context, subnet,
+                                                      ipam_subnet)
+        return self._make_subnet_dict(subnet, context=context)

     def _get_subnetpool_id(self, subnet):
         """Returns the subnetpool id for this request
@@ -539,22 +566,25 @@ class NeutronDbPluginV2(db_base_plugin_common.DbBasePluginCommon,
         s['ip_version'] = db_subnet.ip_version
         s['cidr'] = db_subnet.cidr
         s['id'] = db_subnet.id
+        s['tenant_id'] = db_subnet.tenant_id
         self._validate_subnet(context, s, cur_subnet=db_subnet)
+        db_pools = [netaddr.IPRange(p['first_ip'], p['last_ip'])
+                    for p in db_subnet.allocation_pools]
+
+        range_pools = None
+        if s.get('allocation_pools') is not None:
+            # Convert allocation pools to IPRange to simplify future checks
+            range_pools = self.ipam.pools_to_ip_range(s['allocation_pools'])
+            s['allocation_pools'] = range_pools
+
         if s.get('gateway_ip') is not None:
-            if s.get('allocation_pools') is not None:
-                allocation_pools = [{'start': p['start'], 'end': p['end']}
-                                    for p in s['allocation_pools']]
-            else:
-                allocation_pools = [{'start': p['first_ip'],
-                                     'end': p['last_ip']}
-                                    for p in db_subnet.allocation_pools]
-            self.ipam.validate_gw_out_of_pools(s["gateway_ip"],
-                                               allocation_pools)
+            pools = range_pools if range_pools is not None else db_pools
+            self.ipam.validate_gw_out_of_pools(s["gateway_ip"], pools)

         with context.session.begin(subtransactions=True):
-            subnet, changes = self.ipam.update_db_subnet(context, id, s)
-            result = self._make_subnet_dict(subnet)
+            subnet, changes = self.ipam.update_db_subnet(context, id, s,
+                                                         db_pools)
+            result = self._make_subnet_dict(subnet, context=context)
             # Keep up with fields that changed
             result.update(changes)
         return result
@@ -612,7 +642,8 @@ class NeutronDbPluginV2(db_base_plugin_common.DbBasePluginCommon,
in_(AUTO_DELETE_PORT_OWNERS))) in_(AUTO_DELETE_PORT_OWNERS)))
network_ports = qry_network_ports.all() network_ports = qry_network_ports.all()
if network_ports: if network_ports:
map(context.session.delete, network_ports) for port in network_ports:
context.session.delete(port)
# Check if there are more IP allocations, unless # Check if there are more IP allocations, unless
# is_auto_address_subnet is True. In that case the check is # is_auto_address_subnet is True. In that case the check is
# unnecessary. This additional check not only would be wasteful # unnecessary. This additional check not only would be wasteful
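The change from `map(context.session.delete, network_ports)` to an explicit loop matters under Python 3, where `map()` returns a lazy iterator, so the deletes would never run unless the iterator were consumed. A minimal standalone sketch of the difference (list appends stand in for session deletes):

```python
deleted = []
items = ['port-a', 'port-b']

# Python 3: map() only builds a lazy iterator; no side effects happen here.
map(deleted.append, items)
assert deleted == []

# The explicit for loop actually performs the side effects.
for item in items:
    deleted.append(item)
assert deleted == ['port-a', 'port-b']
```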
@@ -631,10 +662,13 @@ class NeutronDbPluginV2(db_base_plugin_common.DbBasePluginCommon,
raise n_exc.SubnetInUse(subnet_id=id) raise n_exc.SubnetInUse(subnet_id=id)
context.session.delete(subnet) context.session.delete(subnet)
# Delete related ipam subnet manually,
# since there is no FK relationship
self.ipam.delete_subnet(context, id)
def get_subnet(self, context, id, fields=None): def get_subnet(self, context, id, fields=None):
subnet = self._get_subnet(context, id) subnet = self._get_subnet(context, id)
return self._make_subnet_dict(subnet, fields) return self._make_subnet_dict(subnet, fields, context=context)
def get_subnets(self, context, filters=None, fields=None, def get_subnets(self, context, filters=None, fields=None,
sorts=None, limit=None, marker=None, sorts=None, limit=None, marker=None,
@@ -914,7 +948,7 @@ class NeutronDbPluginV2(db_base_plugin_common.DbBasePluginCommon,
if subnet_ids: if subnet_ids:
query = query.filter(IPAllocation.subnet_id.in_(subnet_ids)) query = query.filter(IPAllocation.subnet_id.in_(subnet_ids))
query = self._apply_filters_to_query(query, Port, filters) query = self._apply_filters_to_query(query, Port, filters, context)
if limit and page_reverse and sorts: if limit and page_reverse and sorts:
sorts = [(s[0], not s[1]) for s in sorts] sorts = [(s[0], not s[1]) for s in sorts]
query = sqlalchemyutils.paginate_query(query, Port, limit, query = sqlalchemyutils.paginate_query(query, Port, limit,

View File

@@ -35,8 +35,15 @@ LOG = logging.getLogger(__name__)
dvr_mac_address_opts = [ dvr_mac_address_opts = [
cfg.StrOpt('dvr_base_mac', cfg.StrOpt('dvr_base_mac',
default="fa:16:3f:00:00:00", default="fa:16:3f:00:00:00",
help=_('The base mac address used for unique ' help=_("The base mac address used for unique "
'DVR instances by Neutron')), "DVR instances by Neutron. The first 3 octets will "
"remain unchanged. If the 4th octet is not 00, it will "
"also be used. The others will be randomly generated. "
"The 'dvr_base_mac' *must* be different from "
"'base_mac' to avoid mixing them up with MACs "
"allocated for tenant ports. A 4 octet example would be "
"dvr_base_mac = fa:16:3f:4f:00:00. The default is 3 "
"octets")),
] ]
cfg.CONF.register_opts(dvr_mac_address_opts) cfg.CONF.register_opts(dvr_mac_address_opts)
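The expanded help text can be summarized with a hypothetical neutron.conf fragment (the addresses are illustrative values, not defaults mandated by this change):

```ini
[DEFAULT]
# Must differ from dvr_base_mac so tenant-port MACs and DVR MACs never collide.
base_mac = fa:16:3e:00:00:00
# 3-octet form: the last three octets are generated randomly.
dvr_base_mac = fa:16:3f:00:00:00
# 4-octet form: the 4th octet is fixed as well.
# dvr_base_mac = fa:16:3f:4f:00:00
```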

neutron/db/flavors_db.py (new file, 356 lines)
View File

@@ -0,0 +1,356 @@
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_log import log as logging
from oslo_serialization import jsonutils
from oslo_utils import importutils
from oslo_utils import uuidutils
import sqlalchemy as sa
from sqlalchemy import orm
from sqlalchemy.orm import exc as sa_exc
from neutron.common import exceptions as qexception
from neutron.db import common_db_mixin
from neutron.db import model_base
from neutron.db import models_v2
from neutron.plugins.common import constants
LOG = logging.getLogger(__name__)
# Flavor Exceptions
class FlavorNotFound(qexception.NotFound):
message = _("Flavor %(flavor_id)s could not be found")
class FlavorInUse(qexception.InUse):
message = _("Flavor %(flavor_id)s is used by some service instance")
class ServiceProfileNotFound(qexception.NotFound):
message = _("Service Profile %(sp_id)s could not be found")
class ServiceProfileInUse(qexception.InUse):
message = _("Service Profile %(sp_id)s is used by some service instance")
class FlavorServiceProfileBindingExists(qexception.Conflict):
message = _("Service Profile %(sp_id)s is already associated "
"with flavor %(fl_id)s")
class FlavorServiceProfileBindingNotFound(qexception.NotFound):
message = _("Service Profile %(sp_id)s is not associated "
"with flavor %(fl_id)s")
class DummyCorePlugin(object):
pass
class DummyServicePlugin(object):
def driver_loaded(self, driver, service_profile):
pass
def get_plugin_type(self):
return constants.DUMMY
def get_plugin_description(self):
return "Dummy service plugin, aware of flavors"
class DummyServiceDriver(object):
@staticmethod
def get_service_type():
return constants.DUMMY
def __init__(self, plugin):
pass
class Flavor(model_base.BASEV2, models_v2.HasId):
name = sa.Column(sa.String(255))
description = sa.Column(sa.String(1024))
enabled = sa.Column(sa.Boolean, nullable=False, default=True,
server_default=sa.sql.true())
# Make it True for multi-type flavors
service_type = sa.Column(sa.String(36), nullable=True)
service_profiles = orm.relationship("FlavorServiceProfileBinding",
cascade="all, delete-orphan")
class ServiceProfile(model_base.BASEV2, models_v2.HasId):
description = sa.Column(sa.String(1024))
driver = sa.Column(sa.String(1024), nullable=False)
enabled = sa.Column(sa.Boolean, nullable=False, default=True,
server_default=sa.sql.true())
metainfo = sa.Column(sa.String(4096))
flavors = orm.relationship("FlavorServiceProfileBinding")
class FlavorServiceProfileBinding(model_base.BASEV2):
flavor_id = sa.Column(sa.String(36),
sa.ForeignKey("flavors.id",
ondelete="CASCADE"),
nullable=False, primary_key=True)
flavor = orm.relationship(Flavor)
service_profile_id = sa.Column(sa.String(36),
sa.ForeignKey("serviceprofiles.id",
ondelete="CASCADE"),
nullable=False, primary_key=True)
service_profile = orm.relationship(ServiceProfile)
class FlavorManager(common_db_mixin.CommonDbMixin):
"""Class to support flavors and service profiles."""
supported_extension_aliases = ["flavors"]
def __init__(self, manager=None):
# manager=None is the unit-test case, where FlavorManager is
# loaded as a core plugin
self.manager = manager
def get_plugin_name(self):
return constants.FLAVORS
def get_plugin_type(self):
return constants.FLAVORS
def get_plugin_description(self):
return "Neutron Flavors and Service Profiles manager plugin"
def _get_flavor(self, context, flavor_id):
try:
return self._get_by_id(context, Flavor, flavor_id)
except sa_exc.NoResultFound:
raise FlavorNotFound(flavor_id=flavor_id)
def _get_service_profile(self, context, sp_id):
try:
return self._get_by_id(context, ServiceProfile, sp_id)
except sa_exc.NoResultFound:
raise ServiceProfileNotFound(sp_id=sp_id)
def _make_flavor_dict(self, flavor_db, fields=None):
res = {'id': flavor_db['id'],
'name': flavor_db['name'],
'description': flavor_db['description'],
'enabled': flavor_db['enabled'],
'service_profiles': []}
if flavor_db.service_profiles:
res['service_profiles'] = [sp['service_profile_id']
for sp in flavor_db.service_profiles]
return self._fields(res, fields)
def _make_service_profile_dict(self, sp_db, fields=None):
res = {'id': sp_db['id'],
'description': sp_db['description'],
'driver': sp_db['driver'],
'enabled': sp_db['enabled'],
'metainfo': sp_db['metainfo']}
if sp_db.flavors:
res['flavors'] = [fl['flavor_id']
for fl in sp_db.flavors]
return self._fields(res, fields)
def _ensure_flavor_not_in_use(self, context, flavor_id):
"""Checks that flavor is not associated with service instance."""
# Future TODO(enikanorov): check that there is no binding to
# instances. To be addressed once a flavor-aware
# driver is available
pass
def _ensure_service_profile_not_in_use(self, context, sp_id):
# Future TODO(enikanorov): check that there is no binding to instances
# and no binding to flavors; to be addressed in the future
fl = (context.session.query(FlavorServiceProfileBinding).
filter_by(service_profile_id=sp_id).first())
if fl:
raise ServiceProfileInUse(sp_id=sp_id)
def create_flavor(self, context, flavor):
fl = flavor['flavor']
with context.session.begin(subtransactions=True):
fl_db = Flavor(id=uuidutils.generate_uuid(),
name=fl['name'],
description=fl['description'],
enabled=fl['enabled'])
context.session.add(fl_db)
return self._make_flavor_dict(fl_db)
def update_flavor(self, context, flavor_id, flavor):
fl = flavor['flavor']
with context.session.begin(subtransactions=True):
self._ensure_flavor_not_in_use(context, flavor_id)
fl_db = self._get_flavor(context, flavor_id)
fl_db.update(fl)
return self._make_flavor_dict(fl_db)
def get_flavor(self, context, flavor_id, fields=None):
fl = self._get_flavor(context, flavor_id)
return self._make_flavor_dict(fl, fields)
def delete_flavor(self, context, flavor_id):
with context.session.begin(subtransactions=True):
self._ensure_flavor_not_in_use(context, flavor_id)
fl_db = self._get_flavor(context, flavor_id)
context.session.delete(fl_db)
def get_flavors(self, context, filters=None, fields=None,
sorts=None, limit=None, marker=None, page_reverse=False):
return self._get_collection(context, Flavor, self._make_flavor_dict,
filters=filters, fields=fields,
sorts=sorts, limit=limit,
marker_obj=marker,
page_reverse=page_reverse)
def create_flavor_service_profile(self, context,
service_profile, flavor_id):
sp = service_profile['service_profile']
with context.session.begin(subtransactions=True):
bind_qry = context.session.query(FlavorServiceProfileBinding)
binding = bind_qry.filter_by(service_profile_id=sp['id'],
flavor_id=flavor_id).first()
if binding:
raise FlavorServiceProfileBindingExists(
sp_id=sp['id'], fl_id=flavor_id)
binding = FlavorServiceProfileBinding(
service_profile_id=sp['id'],
flavor_id=flavor_id)
context.session.add(binding)
fl_db = self._get_flavor(context, flavor_id)
sps = [x['service_profile_id'] for x in fl_db.service_profiles]
return sps
def delete_flavor_service_profile(self, context,
service_profile_id, flavor_id):
with context.session.begin(subtransactions=True):
binding = (context.session.query(FlavorServiceProfileBinding).
filter_by(service_profile_id=service_profile_id,
flavor_id=flavor_id).first())
if not binding:
raise FlavorServiceProfileBindingNotFound(
sp_id=service_profile_id, fl_id=flavor_id)
context.session.delete(binding)
def get_flavor_service_profile(self, context,
service_profile_id, flavor_id, fields=None):
with context.session.begin(subtransactions=True):
binding = (context.session.query(FlavorServiceProfileBinding).
filter_by(service_profile_id=service_profile_id,
flavor_id=flavor_id).first())
if not binding:
raise FlavorServiceProfileBindingNotFound(
sp_id=service_profile_id, fl_id=flavor_id)
res = {'service_profile_id': service_profile_id,
'flavor_id': flavor_id}
return self._fields(res, fields)
def _load_dummy_driver(self, driver):
driver = DummyServiceDriver
driver_klass = driver
return driver_klass
def _load_driver(self, profile):
driver_klass = importutils.import_class(profile.driver)
return driver_klass
def create_service_profile(self, context, service_profile):
sp = service_profile['service_profile']
with context.session.begin(subtransactions=True):
driver_klass = self._load_dummy_driver(sp['driver'])
# 'get_service_type' must be a static method so it can't be changed
svc_type = DummyServiceDriver.get_service_type()
sp_db = ServiceProfile(id=uuidutils.generate_uuid(),
description=sp['description'],
driver=svc_type,
enabled=sp['enabled'],
metainfo=jsonutils.dumps(sp['metainfo']))
context.session.add(sp_db)
try:
# driver_klass = self._load_dummy_driver(sp_db)
# Future TODO(madhu_ak): kept commented out; load the dummy driver
# until a flavor-aware driver is available
# plugin = self.manager.get_service_plugins()[svc_type]
# plugin.driver_loaded(driver_klass(plugin), sp_db)
# svc_type = DummyServiceDriver.get_service_type()
# plugin = self.manager.get_service_plugins()[svc_type]
# plugin = FlavorManager(manager.NeutronManager().get_instance())
# plugin = DummyServicePlugin.get_plugin_type(svc_type)
plugin = DummyServicePlugin()
plugin.driver_loaded(driver_klass(svc_type), sp_db)
except Exception:
# Future TODO(enikanorov): raise proper exception
self.delete_service_profile(context, sp_db['id'])
raise
return self._make_service_profile_dict(sp_db)
def unit_create_service_profile(self, context, service_profile):
# Note: Triggered by unit tests pointing to dummy driver
sp = service_profile['service_profile']
with context.session.begin(subtransactions=True):
sp_db = ServiceProfile(id=uuidutils.generate_uuid(),
description=sp['description'],
driver=sp['driver'],
enabled=sp['enabled'],
metainfo=sp['metainfo'])
context.session.add(sp_db)
try:
driver_klass = self._load_driver(sp_db)
# requires get_service_type to be a static method
svc_type = driver_klass.get_service_type()
plugin = self.manager.get_service_plugins()[svc_type]
plugin.driver_loaded(driver_klass(plugin), sp_db)
except Exception:
# Future TODO(enikanorov): raise proper exception
self.delete_service_profile(context, sp_db['id'])
raise
return self._make_service_profile_dict(sp_db)
def update_service_profile(self, context,
service_profile_id, service_profile):
sp = service_profile['service_profile']
with context.session.begin(subtransactions=True):
self._ensure_service_profile_not_in_use(context,
service_profile_id)
sp_db = self._get_service_profile(context, service_profile_id)
sp_db.update(sp)
return self._make_service_profile_dict(sp_db)
def get_service_profile(self, context, sp_id, fields=None):
sp_db = self._get_service_profile(context, sp_id)
return self._make_service_profile_dict(sp_db, fields)
def delete_service_profile(self, context, sp_id):
with context.session.begin(subtransactions=True):
self._ensure_service_profile_not_in_use(context, sp_id)
sp_db = self._get_service_profile(context, sp_id)
context.session.delete(sp_db)
def get_service_profiles(self, context, filters=None, fields=None,
sorts=None, limit=None, marker=None,
page_reverse=False):
return self._get_collection(context, ServiceProfile,
self._make_service_profile_dict,
filters=filters, fields=fields,
sorts=sorts, limit=limit,
marker_obj=marker,
page_reverse=page_reverse)

View File

@@ -52,6 +52,24 @@ class IpamBackendMixin(db_base_plugin_common.DbBasePluginCommon):
return str(netaddr.IPNetwork(cidr_net).network + 1) return str(netaddr.IPNetwork(cidr_net).network + 1)
return subnet.get('gateway_ip') return subnet.get('gateway_ip')
@staticmethod
def pools_to_ip_range(ip_pools):
ip_range_pools = []
for ip_pool in ip_pools:
try:
ip_range_pools.append(netaddr.IPRange(ip_pool['start'],
ip_pool['end']))
except netaddr.AddrFormatError:
LOG.info(_LI("Found invalid IP address in pool: "
"%(start)s - %(end)s:"),
{'start': ip_pool['start'],
'end': ip_pool['end']})
raise n_exc.InvalidAllocationPool(pool=ip_pool)
return ip_range_pools
def delete_subnet(self, context, subnet_id):
pass
def validate_pools_with_subnetpool(self, subnet): def validate_pools_with_subnetpool(self, subnet):
"""Verifies that allocation pools are set correctly """Verifies that allocation pools are set correctly
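The new `pools_to_ip_range` helper converts the API-level `{'start': ..., 'end': ...}` dicts into `netaddr.IPRange` objects, raising `InvalidAllocationPool` on malformed addresses. A rough standalone equivalent, using the stdlib `ipaddress` module in place of `netaddr` purely for illustration (the names here are not Neutron APIs):

```python
import ipaddress

def pools_to_ranges(ip_pools):
    """Convert allocation-pool dicts to (start, end) address pairs,
    rejecting addresses that do not parse (the AddrFormatError path)."""
    ranges = []
    for pool in ip_pools:
        try:
            start = ipaddress.ip_address(pool['start'])
            end = ipaddress.ip_address(pool['end'])
        except ValueError:
            # Mirrors raising InvalidAllocationPool in Neutron.
            raise ValueError('invalid allocation pool: %r' % (pool,))
        ranges.append((start, end))
    return ranges
```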
@@ -120,42 +138,43 @@ class IpamBackendMixin(db_base_plugin_common.DbBasePluginCommon):
def _update_subnet_dns_nameservers(self, context, id, s): def _update_subnet_dns_nameservers(self, context, id, s):
old_dns_list = self._get_dns_by_subnet(context, id) old_dns_list = self._get_dns_by_subnet(context, id)
new_dns_addr_set = set(s["dns_nameservers"]) new_dns_addr_list = s["dns_nameservers"]
old_dns_addr_set = set([dns['address']
for dns in old_dns_list])
new_dns = list(new_dns_addr_set) # NOTE(changzhi) delete all dns nameservers from db
for dns_addr in old_dns_addr_set - new_dns_addr_set: # when update subnet's DNS nameservers. And store new
for dns in old_dns_list: # nameservers with order one by one.
if dns['address'] == dns_addr: for dns in old_dns_list:
context.session.delete(dns) context.session.delete(dns)
for dns_addr in new_dns_addr_set - old_dns_addr_set:
for order, server in enumerate(new_dns_addr_list):
dns = models_v2.DNSNameServer( dns = models_v2.DNSNameServer(
address=dns_addr, address=server,
order=order,
subnet_id=id) subnet_id=id)
context.session.add(dns) context.session.add(dns)
del s["dns_nameservers"] del s["dns_nameservers"]
return new_dns return new_dns_addr_list
def _update_subnet_allocation_pools(self, context, subnet_id, s): def _update_subnet_allocation_pools(self, context, subnet_id, s):
context.session.query(models_v2.IPAllocationPool).filter_by( context.session.query(models_v2.IPAllocationPool).filter_by(
subnet_id=subnet_id).delete() subnet_id=subnet_id).delete()
new_pools = [models_v2.IPAllocationPool(first_ip=p['start'], pools = ((netaddr.IPAddress(p.first, p.version).format(),
last_ip=p['end'], netaddr.IPAddress(p.last, p.version).format())
for p in s['allocation_pools'])
new_pools = [models_v2.IPAllocationPool(first_ip=p[0],
last_ip=p[1],
subnet_id=subnet_id) subnet_id=subnet_id)
for p in s['allocation_pools']] for p in pools]
context.session.add_all(new_pools) context.session.add_all(new_pools)
# Call static method with self to redefine in child # Call static method with self to redefine in child
# (non-pluggable backend) # (non-pluggable backend)
self._rebuild_availability_ranges(context, [s]) self._rebuild_availability_ranges(context, [s])
# Gather new pools for result: # Gather new pools for result
result_pools = [{'start': pool['start'], result_pools = [{'start': p[0], 'end': p[1]} for p in pools]
'end': pool['end']}
for pool in s['allocation_pools']]
del s['allocation_pools'] del s['allocation_pools']
return result_pools return result_pools
def update_db_subnet(self, context, subnet_id, s): def update_db_subnet(self, context, subnet_id, s, oldpools):
changes = {} changes = {}
if "dns_nameservers" in s: if "dns_nameservers" in s:
changes['dns_nameservers'] = ( changes['dns_nameservers'] = (
@@ -239,38 +258,23 @@ class IpamBackendMixin(db_base_plugin_common.DbBasePluginCommon):
LOG.debug("Performing IP validity checks on allocation pools") LOG.debug("Performing IP validity checks on allocation pools")
ip_sets = [] ip_sets = []
for ip_pool in ip_pools: for ip_pool in ip_pools:
try: start_ip = netaddr.IPAddress(ip_pool.first, ip_pool.version)
start_ip = netaddr.IPAddress(ip_pool['start']) end_ip = netaddr.IPAddress(ip_pool.last, ip_pool.version)
end_ip = netaddr.IPAddress(ip_pool['end'])
except netaddr.AddrFormatError:
LOG.info(_LI("Found invalid IP address in pool: "
"%(start)s - %(end)s:"),
{'start': ip_pool['start'],
'end': ip_pool['end']})
raise n_exc.InvalidAllocationPool(pool=ip_pool)
if (start_ip.version != subnet.version or if (start_ip.version != subnet.version or
end_ip.version != subnet.version): end_ip.version != subnet.version):
LOG.info(_LI("Specified IP addresses do not match " LOG.info(_LI("Specified IP addresses do not match "
"the subnet IP version")) "the subnet IP version"))
raise n_exc.InvalidAllocationPool(pool=ip_pool) raise n_exc.InvalidAllocationPool(pool=ip_pool)
if end_ip < start_ip:
LOG.info(_LI("Start IP (%(start)s) is greater than end IP "
"(%(end)s)"),
{'start': ip_pool['start'], 'end': ip_pool['end']})
raise n_exc.InvalidAllocationPool(pool=ip_pool)
if start_ip < subnet_first_ip or end_ip > subnet_last_ip: if start_ip < subnet_first_ip or end_ip > subnet_last_ip:
LOG.info(_LI("Found pool larger than subnet " LOG.info(_LI("Found pool larger than subnet "
"CIDR:%(start)s - %(end)s"), "CIDR:%(start)s - %(end)s"),
{'start': ip_pool['start'], {'start': start_ip, 'end': end_ip})
'end': ip_pool['end']})
raise n_exc.OutOfBoundsAllocationPool( raise n_exc.OutOfBoundsAllocationPool(
pool=ip_pool, pool=ip_pool,
subnet_cidr=subnet_cidr) subnet_cidr=subnet_cidr)
# Valid allocation pool # Valid allocation pool
# Create an IPSet for it for easily verifying overlaps # Create an IPSet for it for easily verifying overlaps
ip_sets.append(netaddr.IPSet(netaddr.IPRange( ip_sets.append(netaddr.IPSet(ip_pool.cidrs()))
ip_pool['start'],
ip_pool['end']).cidrs()))
LOG.debug("Checking for overlaps among allocation pools " LOG.debug("Checking for overlaps among allocation pools "
"and gateway ip") "and gateway ip")
@@ -291,22 +295,54 @@ class IpamBackendMixin(db_base_plugin_common.DbBasePluginCommon):
pool_2=r_range, pool_2=r_range,
subnet_cidr=subnet_cidr) subnet_cidr=subnet_cidr)
def _validate_max_ips_per_port(self, fixed_ip_list):
if len(fixed_ip_list) > cfg.CONF.max_fixed_ips_per_port:
msg = _('Exceeded maximum amount of fixed ips per port')
raise n_exc.InvalidInput(error_message=msg)
def _get_subnet_for_fixed_ip(self, context, fixed, network_id):
if 'subnet_id' in fixed:
subnet = self._get_subnet(context, fixed['subnet_id'])
if subnet['network_id'] != network_id:
msg = (_("Failed to create port on network %(network_id)s"
", because fixed_ips included invalid subnet "
"%(subnet_id)s") %
{'network_id': network_id,
'subnet_id': fixed['subnet_id']})
raise n_exc.InvalidInput(error_message=msg)
# Ensure that the IP is valid on the subnet
if ('ip_address' in fixed and
not ipam_utils.check_subnet_ip(subnet['cidr'],
fixed['ip_address'])):
raise n_exc.InvalidIpForSubnet(ip_address=fixed['ip_address'])
return subnet
if 'ip_address' not in fixed:
msg = _('IP allocation requires subnet_id or ip_address')
raise n_exc.InvalidInput(error_message=msg)
filter = {'network_id': [network_id]}
subnets = self._get_subnets(context, filters=filter)
for subnet in subnets:
if ipam_utils.check_subnet_ip(subnet['cidr'],
fixed['ip_address']):
return subnet
raise n_exc.InvalidIpForNetwork(ip_address=fixed['ip_address'])
def _prepare_allocation_pools(self, allocation_pools, cidr, gateway_ip): def _prepare_allocation_pools(self, allocation_pools, cidr, gateway_ip):
"""Returns allocation pools represented as list of IPRanges""" """Returns allocation pools represented as list of IPRanges"""
if not attributes.is_attr_set(allocation_pools): if not attributes.is_attr_set(allocation_pools):
return ipam_utils.generate_pools(cidr, gateway_ip) return ipam_utils.generate_pools(cidr, gateway_ip)
self._validate_allocation_pools(allocation_pools, cidr) ip_range_pools = self.pools_to_ip_range(allocation_pools)
self._validate_allocation_pools(ip_range_pools, cidr)
if gateway_ip: if gateway_ip:
self.validate_gw_out_of_pools(gateway_ip, allocation_pools) self.validate_gw_out_of_pools(gateway_ip, ip_range_pools)
return [netaddr.IPRange(p['start'], p['end']) return ip_range_pools
for p in allocation_pools]
def validate_gw_out_of_pools(self, gateway_ip, pools): def validate_gw_out_of_pools(self, gateway_ip, pools):
for allocation_pool in pools: for pool_range in pools:
pool_range = netaddr.IPRange(
allocation_pool['start'],
allocation_pool['end'])
if netaddr.IPAddress(gateway_ip) in pool_range: if netaddr.IPAddress(gateway_ip) in pool_range:
raise n_exc.GatewayConflictWithAllocationPools( raise n_exc.GatewayConflictWithAllocationPools(
pool=pool_range, pool=pool_range,
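With this change `validate_gw_out_of_pools` receives `IPRange` objects directly instead of rebuilding them from dicts on every call. The containment check it performs can be sketched with the stdlib `ipaddress` module; the function name and pool representation here are illustrative, not Neutron APIs:

```python
import ipaddress

def gateway_in_pools(gateway_ip, pools):
    """Return True if the gateway falls inside any (start, end) allocation
    pool, the condition that triggers GatewayConflictWithAllocationPools."""
    gw = ipaddress.ip_address(gateway_ip)
    return any(ipaddress.ip_address(start) <= gw <= ipaddress.ip_address(end)
               for start, end in pools)
```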
@@ -373,7 +409,7 @@ class IpamBackendMixin(db_base_plugin_common.DbBasePluginCommon):
enable_eagerloads(False).filter_by(id=port_id)) enable_eagerloads(False).filter_by(id=port_id))
if not context.is_admin: if not context.is_admin:
query = query.filter_by(tenant_id=context.tenant_id) query = query.filter_by(tenant_id=context.tenant_id)
query.delete() context.session.delete(query.first())
def _save_subnet(self, context, def _save_subnet(self, context,
network, network,
@@ -388,11 +424,15 @@ class IpamBackendMixin(db_base_plugin_common.DbBasePluginCommon):
subnet = models_v2.Subnet(**subnet_args) subnet = models_v2.Subnet(**subnet_args)
context.session.add(subnet) context.session.add(subnet)
# NOTE(changzhi) Store DNS nameservers with order into DB one
# by one when create subnet with DNS nameservers
if attributes.is_attr_set(dns_nameservers): if attributes.is_attr_set(dns_nameservers):
for addr in dns_nameservers: for order, server in enumerate(dns_nameservers):
ns = models_v2.DNSNameServer(address=addr, dns = models_v2.DNSNameServer(
subnet_id=subnet.id) address=server,
context.session.add(ns) order=order,
subnet_id=subnet.id)
context.session.add(dns)
if attributes.is_attr_set(host_routes): if attributes.is_attr_set(host_routes):
for rt in host_routes: for rt in host_routes:
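The delete-all-then-reinsert approach above preserves the client-supplied nameserver order by storing an explicit order column, rather than relying on set iteration order. The pattern, sketched without the SQLAlchemy machinery (dicts stand in for DNSNameServer rows):

```python
def dns_rows(subnet_id, nameservers):
    # Each address keeps its list position in an explicit 'order' field,
    # so later reads can sort deterministically.
    return [{'address': addr, 'order': idx, 'subnet_id': subnet_id}
            for idx, addr in enumerate(nameservers)]

rows = dns_rows('subnet-1', ['8.8.8.8', '8.8.4.4'])
```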

View File

@@ -14,7 +14,6 @@
# under the License. # under the License.
import netaddr import netaddr
from oslo_config import cfg
from oslo_db import exception as db_exc from oslo_db import exception as db_exc
from oslo_log import log as logging from oslo_log import log as logging
from sqlalchemy import and_ from sqlalchemy import and_
@@ -29,7 +28,6 @@ from neutron.db import ipam_backend_mixin
from neutron.db import models_v2 from neutron.db import models_v2
from neutron.ipam import requests as ipam_req from neutron.ipam import requests as ipam_req
from neutron.ipam import subnet_alloc from neutron.ipam import subnet_alloc
from neutron.ipam import utils as ipam_utils
LOG = logging.getLogger(__name__) LOG = logging.getLogger(__name__)
@@ -242,49 +240,17 @@ class IpamNonPluggableBackend(ipam_backend_mixin.IpamBackendMixin):
""" """
fixed_ip_set = [] fixed_ip_set = []
for fixed in fixed_ips: for fixed in fixed_ips:
found = False subnet = self._get_subnet_for_fixed_ip(context, fixed, network_id)
if 'subnet_id' not in fixed:
if 'ip_address' not in fixed:
msg = _('IP allocation requires subnet_id or ip_address')
raise n_exc.InvalidInput(error_message=msg)
filter = {'network_id': [network_id]}
subnets = self._get_subnets(context, filters=filter)
for subnet in subnets:
if ipam_utils.check_subnet_ip(subnet['cidr'],
fixed['ip_address']):
found = True
subnet_id = subnet['id']
break
if not found:
raise n_exc.InvalidIpForNetwork(
ip_address=fixed['ip_address'])
else:
subnet = self._get_subnet(context, fixed['subnet_id'])
if subnet['network_id'] != network_id:
msg = (_("Failed to create port on network %(network_id)s"
", because fixed_ips included invalid subnet "
"%(subnet_id)s") %
{'network_id': network_id,
'subnet_id': fixed['subnet_id']})
raise n_exc.InvalidInput(error_message=msg)
subnet_id = subnet['id']
is_auto_addr_subnet = ipv6_utils.is_auto_address_subnet(subnet) is_auto_addr_subnet = ipv6_utils.is_auto_address_subnet(subnet)
if 'ip_address' in fixed: if 'ip_address' in fixed:
# Ensure that the IP's are unique # Ensure that the IP's are unique
if not IpamNonPluggableBackend._check_unique_ip( if not IpamNonPluggableBackend._check_unique_ip(
context, network_id, context, network_id,
subnet_id, fixed['ip_address']): subnet['id'], fixed['ip_address']):
raise n_exc.IpAddressInUse(net_id=network_id, raise n_exc.IpAddressInUse(net_id=network_id,
ip_address=fixed['ip_address']) ip_address=fixed['ip_address'])
# Ensure that the IP is valid on the subnet
if (not found and
not ipam_utils.check_subnet_ip(subnet['cidr'],
fixed['ip_address'])):
raise n_exc.InvalidIpForSubnet(
ip_address=fixed['ip_address'])
if (is_auto_addr_subnet and if (is_auto_addr_subnet and
device_owner not in device_owner not in
constants.ROUTER_INTERFACE_OWNERS): constants.ROUTER_INTERFACE_OWNERS):
@@ -292,23 +258,20 @@ class IpamNonPluggableBackend(ipam_backend_mixin.IpamBackendMixin):
"assigned to a port on subnet %(id)s since the " "assigned to a port on subnet %(id)s since the "
"subnet is configured for automatic addresses") % "subnet is configured for automatic addresses") %
{'address': fixed['ip_address'], {'address': fixed['ip_address'],
'id': subnet_id}) 'id': subnet['id']})
raise n_exc.InvalidInput(error_message=msg) raise n_exc.InvalidInput(error_message=msg)
fixed_ip_set.append({'subnet_id': subnet_id, fixed_ip_set.append({'subnet_id': subnet['id'],
'ip_address': fixed['ip_address']}) 'ip_address': fixed['ip_address']})
else: else:
# A scan for auto-address subnets on the network is done # A scan for auto-address subnets on the network is done
# separately so that all such subnets (not just those # separately so that all such subnets (not just those
# listed explicitly here by subnet ID) are associated # listed explicitly here by subnet ID) are associated
# with the port. # with the port.
if (device_owner in constants.ROUTER_INTERFACE_OWNERS or if (device_owner in constants.ROUTER_INTERFACE_OWNERS_SNAT or
device_owner == constants.DEVICE_OWNER_ROUTER_SNAT or
not is_auto_addr_subnet): not is_auto_addr_subnet):
fixed_ip_set.append({'subnet_id': subnet_id}) fixed_ip_set.append({'subnet_id': subnet['id']})
if len(fixed_ip_set) > cfg.CONF.max_fixed_ips_per_port: self._validate_max_ips_per_port(fixed_ip_set)
msg = _('Exceeded maximum amount of fixed ips per port')
raise n_exc.InvalidInput(error_message=msg)
return fixed_ip_set return fixed_ip_set
def _allocate_fixed_ips(self, context, fixed_ips, mac_address): def _allocate_fixed_ips(self, context, fixed_ips, mac_address):
@@ -382,8 +345,7 @@ class IpamNonPluggableBackend(ipam_backend_mixin.IpamBackendMixin):
net_id_filter = {'network_id': [p['network_id']]} net_id_filter = {'network_id': [p['network_id']]}
subnets = self._get_subnets(context, filters=net_id_filter) subnets = self._get_subnets(context, filters=net_id_filter)
is_router_port = ( is_router_port = (
p['device_owner'] in constants.ROUTER_INTERFACE_OWNERS or p['device_owner'] in constants.ROUTER_INTERFACE_OWNERS_SNAT)
p['device_owner'] == constants.DEVICE_OWNER_ROUTER_SNAT)
fixed_configured = p['fixed_ips'] is not attributes.ATTR_NOT_SPECIFIED fixed_configured = p['fixed_ips'] is not attributes.ATTR_NOT_SPECIFIED
if fixed_configured: if fixed_configured:
@@ -431,17 +393,16 @@ class IpamNonPluggableBackend(ipam_backend_mixin.IpamBackendMixin):
return ips return ips
def add_auto_addrs_on_network_ports(self, context, subnet): def add_auto_addrs_on_network_ports(self, context, subnet, ipam_subnet):
"""For an auto-address subnet, add addrs for ports on the net.""" """For an auto-address subnet, add addrs for ports on the net."""
with context.session.begin(subtransactions=True): with context.session.begin(subtransactions=True):
network_id = subnet['network_id'] network_id = subnet['network_id']
port_qry = context.session.query(models_v2.Port) port_qry = context.session.query(models_v2.Port)
for port in port_qry.filter( ports = port_qry.filter(
and_(models_v2.Port.network_id == network_id, and_(models_v2.Port.network_id == network_id,
models_v2.Port.device_owner !=
constants.DEVICE_OWNER_ROUTER_SNAT,
~models_v2.Port.device_owner.in_( ~models_v2.Port.device_owner.in_(
constants.ROUTER_INTERFACE_OWNERS))): constants.ROUTER_INTERFACE_OWNERS_SNAT)))
for port in ports:
ip_address = self._calculate_ipv6_eui64_addr( ip_address = self._calculate_ipv6_eui64_addr(
context, subnet, port['mac_address']) context, subnet, port['mac_address'])
allocated = models_v2.IPAllocation(network_id=network_id, allocated = models_v2.IPAllocation(network_id=network_id,
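`_calculate_ipv6_eui64_addr` derives each port's auto-address from the subnet prefix and the port MAC. The underlying EUI-64 construction (flip the universal/local bit, splice `ff:fe` into the middle of the MAC) can be sketched with the stdlib `ipaddress` module; the helper name is illustrative, not Neutron's:

```python
import ipaddress

def eui64_address(prefix, mac):
    """Build an IPv6 address from a /64 prefix and a MAC (RFC 4291 EUI-64)."""
    octets = [int(part, 16) for part in mac.split(':')]
    octets[0] ^= 0x02                                   # flip universal/local bit
    eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]      # splice ff:fe in the middle
    iid = int.from_bytes(bytes(eui64), 'big')
    net = ipaddress.IPv6Network(prefix)
    return str(net.network_address + iid)

print(eui64_address('2001:db8::/64', 'fa:16:3e:aa:bb:cc'))
```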
@@ -499,11 +460,12 @@ class IpamNonPluggableBackend(ipam_backend_mixin.IpamBackendMixin):
subnet = self._save_subnet(context, subnet = self._save_subnet(context,
network, network,
self._make_subnet_args( self._make_subnet_args(
network.shared,
subnet_request, subnet_request,
subnet, subnet,
subnetpool_id), subnetpool_id),
subnet['dns_nameservers'], subnet['dns_nameservers'],
subnet['host_routes'], subnet['host_routes'],
subnet_request) subnet_request)
return subnet # ipam_subnet is not expected to be allocated for non pluggable ipam,
# so just return None for it (second element in returned tuple)
return subnet, None

View File

@@ -0,0 +1,451 @@
# Copyright (c) 2015 Infoblox Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import netaddr
from oslo_db import exception as db_exc
from oslo_log import log as logging
from oslo_utils import excutils
from sqlalchemy import and_
from neutron.api.v2 import attributes
from neutron.common import constants
from neutron.common import exceptions as n_exc
from neutron.common import ipv6_utils
from neutron.db import ipam_backend_mixin
from neutron.db import models_v2
from neutron.i18n import _LE
from neutron.ipam import driver
from neutron.ipam import exceptions as ipam_exc
from neutron.ipam import requests as ipam_req
LOG = logging.getLogger(__name__)
class IpamPluggableBackend(ipam_backend_mixin.IpamBackendMixin):
def _get_failed_ips(self, all_ips, success_ips):
ips_list = (ip_dict['ip_address'] for ip_dict in success_ips)
return (ip_dict['ip_address'] for ip_dict in all_ips
if ip_dict['ip_address'] not in ips_list)
def _ipam_deallocate_ips(self, context, ipam_driver, port, ips,
revert_on_fail=True):
"""Deallocate set of ips over IPAM.
If any single ip deallocation fails, tries to allocate deallocated
ip addresses with fixed ip request
"""
deallocated = []
try:
for ip in ips:
try:
ipam_subnet = ipam_driver.get_subnet(ip['subnet_id'])
ipam_subnet.deallocate(ip['ip_address'])
deallocated.append(ip)
except n_exc.SubnetNotFound:
LOG.debug("Subnet was not found on ip deallocation: %s",
ip)
except Exception:
with excutils.save_and_reraise_exception():
LOG.debug("An exception occurred during IP deallocation.")
if revert_on_fail and deallocated:
LOG.debug("Reverting deallocation")
self._ipam_allocate_ips(context, ipam_driver, port,
deallocated, revert_on_fail=False)
elif not revert_on_fail and ips:
addresses = ', '.join(self._get_failed_ips(ips,
deallocated))
LOG.error(_LE("IP deallocation failed on "
"external system for %s"), addresses)
return deallocated
def _ipam_try_allocate_ip(self, context, ipam_driver, port, ip_dict):
factory = ipam_driver.get_address_request_factory()
ip_request = factory.get_request(context, port, ip_dict)
ipam_subnet = ipam_driver.get_subnet(ip_dict['subnet_id'])
return ipam_subnet.allocate(ip_request)
def _ipam_allocate_single_ip(self, context, ipam_driver, port, subnets):
"""Allocates single ip from set of subnets
Raises n_exc.IpAddressGenerationFailure if allocation failed for
all subnets.
"""
for subnet in subnets:
try:
return [self._ipam_try_allocate_ip(context, ipam_driver,
port, subnet),
subnet]
except ipam_exc.IpAddressGenerationFailure:
continue
raise n_exc.IpAddressGenerationFailure(
net_id=port['network_id'])
def _ipam_allocate_ips(self, context, ipam_driver, port, ips,
revert_on_fail=True):
"""Allocate set of ips over IPAM.
If any single ip allocation fails, tries to deallocate all
allocated ip addresses.
"""
allocated = []
# we need to start with entries that asked for a specific IP in case
# those IPs happen to be next in the line for allocation for ones that
# didn't ask for a specific IP
ips.sort(key=lambda x: 'ip_address' not in x)
try:
for ip in ips:
# By default IP info is dict, used to allocate single ip
# from single subnet.
# IP info can be list, used to allocate single ip from
# multiple subnets (i.e. first successful ip allocation
# is returned)
ip_list = [ip] if isinstance(ip, dict) else ip
ip_address, ip_subnet = self._ipam_allocate_single_ip(
context, ipam_driver, port, ip_list)
allocated.append({'ip_address': ip_address,
'subnet_id': ip_subnet['subnet_id']})
except Exception:
with excutils.save_and_reraise_exception():
LOG.debug("An exception occurred during IP allocation.")
if revert_on_fail and allocated:
LOG.debug("Reverting allocation")
self._ipam_deallocate_ips(context, ipam_driver, port,
allocated, revert_on_fail=False)
elif not revert_on_fail and ips:
addresses = ', '.join(self._get_failed_ips(ips,
allocated))
LOG.error(_LE("IP allocation failed on "
"external system for %s"), addresses)
return allocated
def _ipam_update_allocation_pools(self, context, ipam_driver, subnet):
self._validate_allocation_pools(subnet['allocation_pools'],
subnet['cidr'])
factory = ipam_driver.get_subnet_request_factory()
subnet_request = factory.get_request(context, subnet, None)
ipam_driver.update_subnet(subnet_request)
def delete_subnet(self, context, subnet_id):
ipam_driver = driver.Pool.get_instance(None, context)
ipam_driver.remove_subnet(subnet_id)
def allocate_ips_for_port_and_store(self, context, port, port_id):
network_id = port['port']['network_id']
ips = []
try:
ips = self._allocate_ips_for_port(context, port)
for ip in ips:
ip_address = ip['ip_address']
subnet_id = ip['subnet_id']
IpamPluggableBackend._store_ip_allocation(
context, ip_address, network_id,
subnet_id, port_id)
except Exception:
with excutils.save_and_reraise_exception():
if ips:
LOG.debug("An exception occurred during port creation. "
"Reverting IP allocation")
ipam_driver = driver.Pool.get_instance(None, context)
self._ipam_deallocate_ips(context, ipam_driver,
port['port'], ips,
revert_on_fail=False)
def _allocate_ips_for_port(self, context, port):
"""Allocate IP addresses for the port. IPAM version.
If port['fixed_ips'] is set to 'ATTR_NOT_SPECIFIED', allocate IP
addresses for the port. If port['fixed_ips'] contains an IP address or
a subnet_id then allocate an IP address accordingly.
"""
p = port['port']
ips = []
v6_stateless = []
net_id_filter = {'network_id': [p['network_id']]}
subnets = self._get_subnets(context, filters=net_id_filter)
is_router_port = (
p['device_owner'] in constants.ROUTER_INTERFACE_OWNERS_SNAT)
fixed_configured = p['fixed_ips'] is not attributes.ATTR_NOT_SPECIFIED
if fixed_configured:
ips = self._test_fixed_ips_for_port(context,
p["network_id"],
p['fixed_ips'],
p['device_owner'])
# For ports that are not router ports, implicitly include all
# auto-address subnets for address association.
if not is_router_port:
v6_stateless += [subnet for subnet in subnets
if ipv6_utils.is_auto_address_subnet(subnet)]
else:
# Split into v4, v6 stateless and v6 stateful subnets
v4 = []
v6_stateful = []
for subnet in subnets:
if subnet['ip_version'] == 4:
v4.append(subnet)
else:
if ipv6_utils.is_auto_address_subnet(subnet):
if not is_router_port:
v6_stateless.append(subnet)
else:
v6_stateful.append(subnet)
version_subnets = [v4, v6_stateful]
for subnets in version_subnets:
if subnets:
ips.append([{'subnet_id': s['id']}
for s in subnets])
for subnet in v6_stateless:
# IP addresses for IPv6 SLAAC and DHCPv6-stateless subnets
# are implicitly included.
ips.append({'subnet_id': subnet['id'],
'subnet_cidr': subnet['cidr'],
'eui64_address': True,
'mac': p['mac_address']})
ipam_driver = driver.Pool.get_instance(None, context)
return self._ipam_allocate_ips(context, ipam_driver, p, ips)
def _test_fixed_ips_for_port(self, context, network_id, fixed_ips,
device_owner):
"""Test fixed IPs for port.
Check that configured subnets are valid prior to allocating any
IPs. Include the subnet_id in the result if only an IP address is
configured.
:raises: InvalidInput, IpAddressInUse, InvalidIpForNetwork,
InvalidIpForSubnet
"""
fixed_ip_list = []
for fixed in fixed_ips:
subnet = self._get_subnet_for_fixed_ip(context, fixed, network_id)
is_auto_addr_subnet = ipv6_utils.is_auto_address_subnet(subnet)
if 'ip_address' in fixed:
if (is_auto_addr_subnet and device_owner not in
constants.ROUTER_INTERFACE_OWNERS):
msg = (_("IPv6 address %(address)s can not be directly "
"assigned to a port on subnet %(id)s since the "
"subnet is configured for automatic addresses") %
{'address': fixed['ip_address'],
'id': subnet['id']})
raise n_exc.InvalidInput(error_message=msg)
fixed_ip_list.append({'subnet_id': subnet['id'],
'ip_address': fixed['ip_address']})
else:
# A scan for auto-address subnets on the network is done
# separately so that all such subnets (not just those
# listed explicitly here by subnet ID) are associated
# with the port.
if (device_owner in constants.ROUTER_INTERFACE_OWNERS_SNAT or
not is_auto_addr_subnet):
fixed_ip_list.append({'subnet_id': subnet['id']})
self._validate_max_ips_per_port(fixed_ip_list)
return fixed_ip_list
def _update_ips_for_port(self, context, port,
original_ips, new_ips, mac):
"""Add or remove IPs from the port. IPAM version"""
added = []
removed = []
changes = self._get_changed_ips_for_port(
context, original_ips, new_ips, port['device_owner'])
# Check if the IP's to add are OK
to_add = self._test_fixed_ips_for_port(
context, port['network_id'], changes.add,
port['device_owner'])
ipam_driver = driver.Pool.get_instance(None, context)
if changes.remove:
removed = self._ipam_deallocate_ips(context, ipam_driver, port,
changes.remove)
if to_add:
added = self._ipam_allocate_ips(context, ipam_driver,
port, to_add)
return self.Changes(add=added,
original=changes.original,
remove=removed)
def save_allocation_pools(self, context, subnet, allocation_pools):
for pool in allocation_pools:
first_ip = str(netaddr.IPAddress(pool.first, pool.version))
last_ip = str(netaddr.IPAddress(pool.last, pool.version))
ip_pool = models_v2.IPAllocationPool(subnet=subnet,
first_ip=first_ip,
last_ip=last_ip)
context.session.add(ip_pool)
def update_port_with_ips(self, context, db_port, new_port, new_mac):
changes = self.Changes(add=[], original=[], remove=[])
if 'fixed_ips' in new_port:
original = self._make_port_dict(db_port,
process_extensions=False)
changes = self._update_ips_for_port(context,
db_port,
original["fixed_ips"],
new_port['fixed_ips'],
new_mac)
try:
# Check if the IPs need to be updated
network_id = db_port['network_id']
for ip in changes.add:
self._store_ip_allocation(
context, ip['ip_address'], network_id,
ip['subnet_id'], db_port.id)
for ip in changes.remove:
self._delete_ip_allocation(context, network_id,
ip['subnet_id'], ip['ip_address'])
self._update_db_port(context, db_port, new_port, network_id,
new_mac)
except Exception:
with excutils.save_and_reraise_exception():
if 'fixed_ips' in new_port:
LOG.debug("An exception occurred during port update.")
ipam_driver = driver.Pool.get_instance(None, context)
if changes.add:
LOG.debug("Reverting IP allocation.")
self._ipam_deallocate_ips(context, ipam_driver,
db_port, changes.add,
revert_on_fail=False)
if changes.remove:
LOG.debug("Reverting IP deallocation.")
self._ipam_allocate_ips(context, ipam_driver,
db_port, changes.remove,
revert_on_fail=False)
return changes
def delete_port(self, context, id):
# Get fixed_ips list before port deletion
port = self._get_port(context, id)
ipam_driver = driver.Pool.get_instance(None, context)
super(IpamPluggableBackend, self).delete_port(context, id)
# Deallocate ips via IPAM only after the port is deleted locally,
# so there is no need to roll back actions on the remote server
# if the local port deletion fails
self._ipam_deallocate_ips(context, ipam_driver, port,
port['fixed_ips'])
def update_db_subnet(self, context, id, s, old_pools):
ipam_driver = driver.Pool.get_instance(None, context)
if "allocation_pools" in s:
self._ipam_update_allocation_pools(context, ipam_driver, s)
try:
subnet, changes = super(IpamPluggableBackend,
self).update_db_subnet(context, id,
s, old_pools)
except Exception:
with excutils.save_and_reraise_exception():
if "allocation_pools" in s and old_pools:
LOG.error(
_LE("An exception occurred during subnet update. "
"Reverting allocation pool changes"))
s['allocation_pools'] = old_pools
self._ipam_update_allocation_pools(context, ipam_driver, s)
return subnet, changes
def add_auto_addrs_on_network_ports(self, context, subnet, ipam_subnet):
"""For an auto-address subnet, add addrs for ports on the net."""
with context.session.begin(subtransactions=True):
network_id = subnet['network_id']
port_qry = context.session.query(models_v2.Port)
ports = port_qry.filter(
and_(models_v2.Port.network_id == network_id,
~models_v2.Port.device_owner.in_(
constants.ROUTER_INTERFACE_OWNERS_SNAT)))
for port in ports:
ip_request = ipam_req.AutomaticAddressRequest(
prefix=subnet['cidr'],
mac=port['mac_address'])
ip_address = ipam_subnet.allocate(ip_request)
allocated = models_v2.IPAllocation(network_id=network_id,
port_id=port['id'],
ip_address=ip_address,
subnet_id=subnet['id'])
try:
# Do the insertion of each IP allocation entry within
# the context of a nested transaction, so that the entry
# is rolled back independently of other entries whenever
# the corresponding port has been deleted.
with context.session.begin_nested():
context.session.add(allocated)
except db_exc.DBReferenceError:
LOG.debug("Port %s was deleted while updating it with an "
"IPv6 auto-address. Ignoring.", port['id'])
LOG.debug("Reverting IP allocation for %s", ip_address)
# Do not fail if reverting allocation was unsuccessful
try:
ipam_subnet.deallocate(ip_address)
except Exception:
LOG.debug("Reverting IP allocation failed for %s",
ip_address)
def allocate_subnet(self, context, network, subnet, subnetpool_id):
subnetpool = None
if subnetpool_id:
subnetpool = self._get_subnetpool(context, subnetpool_id)
self._validate_ip_version_with_subnetpool(subnet, subnetpool)
# gateway_ip and allocation pools should be validated or generated
# only for specific request
if subnet['cidr'] is not attributes.ATTR_NOT_SPECIFIED:
subnet['gateway_ip'] = self._gateway_ip_str(subnet,
subnet['cidr'])
subnet['allocation_pools'] = self._prepare_allocation_pools(
subnet['allocation_pools'],
subnet['cidr'],
subnet['gateway_ip'])
ipam_driver = driver.Pool.get_instance(subnetpool, context)
subnet_factory = ipam_driver.get_subnet_request_factory()
subnet_request = subnet_factory.get_request(context, subnet,
subnetpool)
ipam_subnet = ipam_driver.allocate_subnet(subnet_request)
# get updated details with actually allocated subnet
subnet_request = ipam_subnet.get_details()
try:
subnet = self._save_subnet(context,
network,
self._make_subnet_args(
subnet_request,
subnet,
subnetpool_id),
subnet['dns_nameservers'],
subnet['host_routes'],
subnet_request)
except Exception:
# Note(pbondar): Third-party ipam servers can't rely
# on transaction rollback, so explicit rollback call needed.
# IPAM part rolled back in exception handling
# and subnet part is rolled back by transaction rollback.
with excutils.save_and_reraise_exception():
LOG.debug("An exception occurred during subnet creation. "
"Reverting subnet allocation.")
self.delete_subnet(context, subnet_request.subnet_id)
return subnet, ipam_subnet
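Stripped of the Neutron plumbing, the allocate-or-roll-back contract that `_ipam_allocate_ips` enforces above (serve explicit-IP requests first, undo partial work on failure) can be sketched with a toy subnet. `ToySubnet` and its behavior are illustrative stand-ins, not the real IPAM driver interface:

```python
class ToySubnet:
    """Stand-in for an IPAM driver subnet (not the real interface)."""

    def __init__(self, free):
        self.free = set(free)
        self.used = set()

    def allocate(self, request):
        # A request with an explicit 'ip_address' must get exactly that IP;
        # otherwise pick any free address.
        ip = request.get('ip_address') or next(iter(sorted(self.free)), None)
        if ip is None or ip not in self.free:
            raise RuntimeError('allocation failed')
        self.free.discard(ip)
        self.used.add(ip)
        return ip

    def deallocate(self, ip):
        self.used.discard(ip)
        self.free.add(ip)


def allocate_ips(subnet, requests):
    """Allocate every request or roll back, like _ipam_allocate_ips."""
    # Serve explicit-IP requests first so an automatic allocation does
    # not grab an address that a later request asked for by name.
    requests = sorted(requests, key=lambda r: 'ip_address' not in r)
    allocated = []
    try:
        for request in requests:
            allocated.append(subnet.allocate(request))
    except Exception:
        # Revert partial work before re-raising, mirroring the
        # revert_on_fail branch in the backend above.
        for ip in allocated:
            subnet.deallocate(ip)
        raise
    return allocated
```

For example, with `ToySubnet(['10.0.0.2', '10.0.0.3'])` and requests `[{}, {'ip_address': '10.0.0.3'}]`, the explicit request is served first, so the automatic one falls back to `10.0.0.2`.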

View File

@@ -35,6 +35,7 @@ from neutron.db import model_base
 from neutron.extensions import l3agentscheduler
 from neutron.i18n import _LE, _LI, _LW
 from neutron import manager
+from neutron.plugins.common import constants as service_constants

 LOG = logging.getLogger(__name__)
@@ -182,7 +183,9 @@ class L3AgentSchedulerDbMixin(l3agentscheduler.L3AgentSchedulerPluginBase,
             return False
         if router.get('distributed'):
             return False
-        # non-dvr case: centralized router is already bound to some agent
+        if router.get('ha'):
+            return True
+        # legacy router case: router is already bound to some agent
         raise l3agentscheduler.RouterHostedByL3Agent(
             router_id=router_id,
             agent_id=bindings[0].l3_agent_id)
@@ -193,7 +196,15 @@ class L3AgentSchedulerDbMixin(l3agentscheduler.L3AgentSchedulerPluginBase,
         agent_id = agent['id']
         if self.router_scheduler:
             try:
-                self.router_scheduler.bind_router(context, router_id, agent)
+                if router.get('ha'):
+                    plugin = manager.NeutronManager.get_service_plugins().get(
+                        service_constants.L3_ROUTER_NAT)
+                    self.router_scheduler.create_ha_port_and_bind(
+                        plugin, context, router['id'],
+                        router['tenant_id'], agent)
+                else:
+                    self.router_scheduler.bind_router(
+                        context, router_id, agent)
             except db_exc.DBError:
                 raise l3agentscheduler.RouterSchedulingFailed(
                     router_id=router_id, agent_id=agent_id)
@@ -223,6 +234,13 @@ class L3AgentSchedulerDbMixin(l3agentscheduler.L3AgentSchedulerPluginBase,
         """
         agent = self._get_agent(context, agent_id)
         self._unbind_router(context, router_id, agent_id)
+        router = self.get_router(context, router_id)
+        if router.get('ha'):
+            plugin = manager.NeutronManager.get_service_plugins().get(
+                service_constants.L3_ROUTER_NAT)
+            plugin.delete_ha_interfaces_on_host(context, router_id, agent.host)
         l3_notifier = self.agent_notifiers.get(constants.AGENT_TYPE_L3)
         if l3_notifier:
             l3_notifier.router_removed_from_agent(

View File

@@ -808,6 +808,10 @@ class L3_NAT_dbonly_mixin(l3.RouterPluginBase):
                 external_network_id=external_network_id,
                 port_id=internal_port['id'])

+    def _port_ipv4_fixed_ips(self, port):
+        return [ip for ip in port['fixed_ips']
+                if netaddr.IPAddress(ip['ip_address']).version == 4]
+
     def _internal_fip_assoc_data(self, context, fip):
         """Retrieve internal port data for floating IP.
@@ -833,6 +837,18 @@ class L3_NAT_dbonly_mixin(l3.RouterPluginBase):
         internal_subnet_id = None
         if fip.get('fixed_ip_address'):
             internal_ip_address = fip['fixed_ip_address']
+            if netaddr.IPAddress(internal_ip_address).version != 4:
+                if 'id' in fip:
+                    data = {'floatingip_id': fip['id'],
+                            'internal_ip': internal_ip_address}
+                    msg = (_('Floating IP %(floatingip_id)s is associated '
+                             'with non-IPv4 address %(internal_ip)s and '
+                             'therefore cannot be bound.') % data)
+                else:
+                    msg = (_('Cannot create floating IP and bind it to %s, '
+                             'since that is not an IPv4 address.') %
+                           internal_ip_address)
+                raise n_exc.BadRequest(resource='floatingip', msg=msg)
             for ip in internal_port['fixed_ips']:
                 if ip['ip_address'] == internal_ip_address:
                     internal_subnet_id = ip['subnet_id']
@@ -842,18 +858,18 @@ class L3_NAT_dbonly_mixin(l3.RouterPluginBase):
                          'address': internal_ip_address})
                 raise n_exc.BadRequest(resource='floatingip', msg=msg)
         else:
-            ips = [ip['ip_address'] for ip in internal_port['fixed_ips']]
-            if not ips:
+            ipv4_fixed_ips = self._port_ipv4_fixed_ips(internal_port)
+            if not ipv4_fixed_ips:
                 msg = (_('Cannot add floating IP to port %s that has '
-                         'no fixed IP addresses') % internal_port['id'])
+                         'no fixed IPv4 addresses') % internal_port['id'])
                 raise n_exc.BadRequest(resource='floatingip', msg=msg)
-            if len(ips) > 1:
-                msg = (_('Port %s has multiple fixed IPs. Must provide'
-                         ' a specific IP when assigning a floating IP') %
-                       internal_port['id'])
+            if len(ipv4_fixed_ips) > 1:
+                msg = (_('Port %s has multiple fixed IPv4 addresses. Must '
+                         'provide a specific IPv4 address when assigning a '
+                         'floating IP') % internal_port['id'])
                 raise n_exc.BadRequest(resource='floatingip', msg=msg)
-            internal_ip_address = internal_port['fixed_ips'][0]['ip_address']
-            internal_subnet_id = internal_port['fixed_ips'][0]['subnet_id']
+            internal_ip_address = ipv4_fixed_ips[0]['ip_address']
+            internal_subnet_id = ipv4_fixed_ips[0]['subnet_id']
         return internal_port, internal_subnet_id, internal_ip_address

     def get_assoc_data(self, context, fip, floating_network_id):
@@ -909,6 +925,10 @@ class L3_NAT_dbonly_mixin(l3.RouterPluginBase):
                      'router_id': router_id,
                      'last_known_router_id': previous_router_id})

+    def _is_ipv4_network(self, context, net_id):
+        net = self._core_plugin._get_network(context, net_id)
+        return any(s.ip_version == 4 for s in net.subnets)
+
     def create_floatingip(self, context, floatingip,
             initial_status=l3_constants.FLOATINGIP_STATUS_ACTIVE):
@@ -920,6 +940,10 @@ class L3_NAT_dbonly_mixin(l3.RouterPluginBase):
             msg = _("Network %s is not a valid external network") % f_net_id
             raise n_exc.BadRequest(resource='floatingip', msg=msg)

+        if not self._is_ipv4_network(context, f_net_id):
+            msg = _("Network %s does not contain any IPv4 subnet") % f_net_id
+            raise n_exc.BadRequest(resource='floatingip', msg=msg)
+
         with context.session.begin(subtransactions=True):
             # This external port is never exposed to the tenant.
             # it is used purely for internal system and admin use when
@@ -942,11 +966,12 @@ class L3_NAT_dbonly_mixin(l3.RouterPluginBase):
             external_port = self._core_plugin.create_port(context.elevated(),
                                                           {'port': port})

-            # Ensure IP addresses are allocated on external port
-            if not external_port['fixed_ips']:
+            # Ensure IPv4 addresses are allocated on external port
+            external_ipv4_ips = self._port_ipv4_fixed_ips(external_port)
+            if not external_ipv4_ips:
                 raise n_exc.ExternalIpAddressExhausted(net_id=f_net_id)

-            floating_fixed_ip = external_port['fixed_ips'][0]
+            floating_fixed_ip = external_ipv4_ips[0]
             floating_ip_address = floating_fixed_ip['ip_address']
             floatingip_db = FloatingIP(
                 id=fip_id,
@@ -1241,7 +1266,7 @@ class L3_NAT_dbonly_mixin(l3.RouterPluginBase):
         routers_dict = dict((router['id'], router) for router in routers)
         self._process_floating_ips(context, routers_dict, floating_ips)
         self._process_interfaces(routers_dict, interfaces)
-        return routers_dict.values()
+        return list(routers_dict.values())


 class L3RpcNotifierMixin(object):
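The `_port_ipv4_fixed_ips` helper introduced in the hunk above simply filters a port's `fixed_ips` by address family. A standalone sketch of the same check, using the stdlib `ipaddress` module in place of netaddr (the port dict below is illustrative):

```python
import ipaddress


def port_ipv4_fixed_ips(port):
    """Return only the IPv4 entries from a port's fixed_ips list."""
    return [ip for ip in port['fixed_ips']
            if ipaddress.ip_address(ip['ip_address']).version == 4]


# A dual-stack port: one IPv4 and one IPv6 fixed IP.
port = {'fixed_ips': [
    {'ip_address': '192.0.2.10', 'subnet_id': 'v4-subnet'},
    {'ip_address': '2001:db8::10', 'subnet_id': 'v6-subnet'},
]}
```

With this input, only the `192.0.2.10` entry survives the filter, which is why a floating IP can never end up bound to the IPv6 address.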

View File

@@ -87,7 +87,8 @@ class L3_NAT_with_dvr_db_mixin(l3_db.L3_NAT_db_mixin,
                 router_res.get('distributed') is False):
             LOG.info(_LI("Centralizing distributed router %s "
                          "is not supported"), router_db['id'])
-            raise NotImplementedError()
+            raise n_exc.NotSupported(msg=_("Migration from distributed router "
+                                           "to centralized"))
         elif (not router_db.extra_attributes.distributed and
               router_res.get('distributed')):
             # Notify advanced services of the imminent state transition
@@ -311,6 +312,13 @@ class L3_NAT_with_dvr_db_mixin(l3_db.L3_NAT_db_mixin,
                 context, router_interface_info, 'add')
         return router_interface_info

+    def _port_has_ipv6_address(self, port):
+        """Overridden to return False if DVR SNAT port."""
+        if port['device_owner'] == DEVICE_OWNER_DVR_SNAT:
+            return False
+        return super(L3_NAT_with_dvr_db_mixin,
+                     self)._port_has_ipv6_address(port)
+
     def remove_router_interface(self, context, router_id, interface_info):
         remove_by_port, remove_by_subnet = (
             self._validate_interface_info(interface_info, for_removal=True)
@@ -528,7 +536,7 @@ class L3_NAT_with_dvr_db_mixin(l3_db.L3_NAT_db_mixin,
                                                 filters=device_filter)
             for p in ports:
                 if self._get_vm_port_hostid(context, p['id'], p) == host_id:
-                    self._core_plugin._delete_port(context, p['id'])
+                    self._core_plugin.ipam.delete_port(context, p['id'])
                     return

     def create_fip_agent_gw_port_if_not_exists(

View File

@@ -336,6 +336,15 @@ class L3_HA_NAT_db_mixin(l3_dvr_db.L3_NAT_with_dvr_db_mixin):
             self._core_plugin.delete_port(admin_ctx, port['id'],
                                           l3_port_check=False)

+    def delete_ha_interfaces_on_host(self, context, router_id, host):
+        admin_ctx = context.elevated()
+        port_ids = (binding.port_id for binding
+                    in self.get_ha_router_port_bindings(admin_ctx,
+                                                        [router_id], host))
+        for port_id in port_ids:
+            self._core_plugin.delete_port(admin_ctx, port_id,
+                                          l3_port_check=False)
+
     def _notify_ha_interfaces_updated(self, context, router_id):
         self.l3_rpc_notifier.routers_updated(
             context, [router_id], shuffle_agents=True)
@@ -461,7 +470,7 @@ class L3_HA_NAT_db_mixin(l3_dvr_db.L3_NAT_with_dvr_db_mixin):
             if interface:
                 self._populate_subnets_for_ports(context, [interface])
-        return routers_dict.values()
+        return list(routers_dict.values())

     def get_ha_sync_data_for_host(self, context, host=None, router_ids=None,
                                   active=None):

View File

@@ -234,7 +234,7 @@ class MeteringDbMixin(metering.MeteringPluginBase,
             routers_dict[router['id']] = router_dict
-        return routers_dict.values()
+        return list(routers_dict.values())

     def get_sync_data_for_rule(self, context, rule):
         label = context.session.query(MeteringLabel).get(
@@ -253,7 +253,7 @@ class MeteringDbMixin(metering.MeteringPluginBase,
             router_dict[constants.METERING_LABEL_KEY].append(data)
             routers_dict[router['id']] = router_dict
-        return routers_dict.values()
+        return list(routers_dict.values())

     def get_sync_data_metering(self, context, label_id=None, router_ids=None):
         labels = context.session.query(MeteringLabel)
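The several `return list(routers_dict.values())` changes in this commit are Python 3 groundwork: on Python 3, `dict.values()` returns a live view rather than a list, so callers that index the result, or mutate the dict afterwards, see different behavior. A minimal illustration:

```python
routers_dict = {'r1': {'id': 'r1'}, 'r2': {'id': 'r2'}}

view = routers_dict.values()             # dynamic view on Python 3
sync_data = list(routers_dict.values())  # independent list snapshot

del routers_dict['r2']

print(len(view))       # -> 1: the view shrank with the dict
print(len(sync_data))  # -> 2: the snapshot did not
print(sync_data[0])    # lists support indexing; views do not
```

Wrapping the return value in `list()` keeps the API contract (a real list) identical under both Python 2 and 3.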

View File

@@ -24,4 +24,19 @@ LBAAS_TABLES = ['vips', 'sessionpersistences', 'pools', 'healthmonitors',
 FWAAS_TABLES = ['firewall_rules', 'firewalls', 'firewall_policies']

-TABLES = (FWAAS_TABLES + LBAAS_TABLES + VPNAAS_TABLES)
+DRIVER_TABLES = [
+    # Models moved to openstack/networking-cisco
+    'cisco_ml2_apic_contracts',
+    'cisco_ml2_apic_names',
+    'cisco_ml2_apic_host_links',
+    'cisco_ml2_n1kv_policy_profiles',
+    'cisco_ml2_n1kv_network_profiles',
+    'cisco_ml2_n1kv_port_bindings',
+    'cisco_ml2_n1kv_network_bindings',
+    'cisco_ml2_n1kv_vxlan_allocations',
+    'cisco_ml2_n1kv_vlan_allocations',
+    'cisco_ml2_n1kv_profile_bindings',
+    # Add your tables with moved models here^. Please end with a comma.
+]
+
+TABLES = (FWAAS_TABLES + LBAAS_TABLES + VPNAAS_TABLES + DRIVER_TABLES)

View File

@@ -24,6 +24,9 @@ Create Date: ${create_date}
 # revision identifiers, used by Alembic.
 revision = ${repr(up_revision)}
 down_revision = ${repr(down_revision)}
+% if branch_labels:
+branch_labels = ${repr(branch_labels)}
+% endif

 from alembic import op
 import sqlalchemy as sa
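The template hunk above makes a generated migration emit a `branch_labels` line only when Alembic passes one, i.e. for the first revision of a new branch. The effect of the Mako conditional can be sketched in plain Python; the revision identifiers are taken from elsewhere in this commit and the helper itself is illustrative:

```python
def render_header(up_revision, down_revision, branch_labels=None):
    """Mimic what the amended Mako header template emits (sketch only)."""
    lines = [
        "# revision identifiers, used by Alembic.",
        "revision = {!r}".format(up_revision),
        "down_revision = {!r}".format(down_revision),
    ]
    # The `% if branch_labels:` guard adds the line only for branch roots.
    if branch_labels:
        lines.append("branch_labels = {!r}".format(branch_labels))
    return "\n".join(lines)
```

An ordinary revision renders without the extra line, while a branch root such as the `liberty_contract` script gets `branch_labels = ('liberty_contract',)`.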

View File

@@ -1 +0,0 @@
-52c5312f6baf

View File

@@ -0,0 +1,3 @@
+1c844d1677f7
+45f955889773
+kilo

View File

@@ -1,5 +1,4 @@
-# Copyright 2012, Nachi Ueno, NTT MCL, Inc.
-# All Rights Reserved.
+# Copyright 2015 OpenStack Foundation
 #
 # Licensed under the Apache License, Version 2.0 (the "License"); you may
 # not use this file except in compliance with the License. You may obtain
@@ -12,8 +11,23 @@
 # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
 # License for the specific language governing permissions and limitations
 # under the License.
-#
-from metaplugin.plugin import meta_neutron_plugin
-
-MetaPluginV2 = meta_neutron_plugin.MetaPluginV2
+
+"""Metaplugin removal
+
+Revision ID: 2a16083502f3
+Revises: 5498d17be016
+Create Date: 2015-06-16 09:11:10.488566
+
+"""
+
+# revision identifiers, used by Alembic.
+revision = '2a16083502f3'
+down_revision = '5498d17be016'
+
+from alembic import op
+
+
+def upgrade():
+    op.drop_table('networkflavors')
+    op.drop_table('routerflavors')

View File

@@ -1,6 +1,3 @@
-# Copyright 2015 Cisco Systems, Inc.
-# All rights reserved.
-#
 # Licensed under the Apache License, Version 2.0 (the "License"); you may
 # not use this file except in compliance with the License. You may obtain
 # a copy of the License at
@@ -12,13 +9,22 @@
 # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
 # License for the specific language governing permissions and limitations
 # under the License.
-#
-"""
-ML2 Mechanism Driver for Cisco Nexus1000V distributed virtual switches.
-"""
-from networking_cisco.plugins.ml2.drivers.cisco.n1kv import mech_cisco_n1kv
-
-
-class N1KVMechanismDriver(mech_cisco_n1kv.N1KVMechanismDriver):
-    pass
+
+"""Initial no-op Liberty contract rule.
+
+Revision ID: 30018084ec99
+Revises: None
+Create Date: 2015-06-22 00:00:00.000000
+
+"""
+
+# revision identifiers, used by Alembic.
+revision = '30018084ec99'
+down_revision = None
+depends_on = ('kilo',)
+branch_labels = ('liberty_contract',)
+
+
+def upgrade():
+    pass

View File

@ -0,0 +1,69 @@
# Copyright 2015 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
"""network_rbac
Revision ID: 4ffceebfada
Revises: 30018084ec99
Create Date: 2015-06-14 13:12:04.012457
"""
# revision identifiers, used by Alembic.
revision = '4ffceebfada'
down_revision = '30018084ec99'
depends_on = ('8675309a5c4f',)
from alembic import op
from oslo_utils import uuidutils
import sqlalchemy as sa
# A simple model of the networks table with only the fields needed for
# the migration.
network = sa.Table('networks', sa.MetaData(),
sa.Column('id', sa.String(length=36), nullable=False),
sa.Column('tenant_id', sa.String(length=255)),
sa.Column('shared', sa.Boolean(), nullable=False))
networkrbacs = sa.Table(
'networkrbacs', sa.MetaData(),
sa.Column('id', sa.String(length=36), nullable=False),
sa.Column('object_id', sa.String(length=36), nullable=False),
sa.Column('tenant_id', sa.String(length=255), nullable=True,
index=True),
sa.Column('target_tenant', sa.String(length=255), nullable=False),
sa.Column('action', sa.String(length=255), nullable=False))
def upgrade():
op.bulk_insert(networkrbacs, get_values())
op.drop_column('networks', 'shared')
# the shared column on subnets was just an internal representation of the
# shared status of the network it was related to. This is now handled by
# other logic so we just drop it.
op.drop_column('subnets', 'shared')
def get_values():
session = sa.orm.Session(bind=op.get_bind())
values = []
for row in session.query(network).filter(network.c.shared).all():
values.append({'id': uuidutils.generate_uuid(), 'object_id': row[0],
'tenant_id': row[1], 'target_tenant': '*',
'action': 'access_as_shared'})
# this commit appears to be necessary to allow further operations
session.commit()
return values
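As a side note, the shared-network-to-RBAC conversion performed by get_values() above can be sketched in isolation. This is a minimal standalone sketch, not the migration itself: the helper name and plain-tuple input are hypothetical, standing in for the SQLAlchemy rows the real migration reads.

```python
import uuid

def shared_networks_to_rbac(rows):
    # Each (id, tenant_id, shared) network row with shared=True becomes one
    # wildcard 'access_as_shared' RBAC entry, mirroring get_values().
    values = []
    for net_id, tenant_id, shared in rows:
        if shared:
            values.append({'id': str(uuid.uuid4()),
                           'object_id': net_id,
                           'tenant_id': tenant_id,
                           'target_tenant': '*',
                           'action': 'access_as_shared'})
    return values

entries = shared_networks_to_rbac([('net-1', 'tenant-a', True),
                                   ('net-2', 'tenant-b', False)])
```

Non-shared networks produce no entry, which is why the migration can safely drop the `shared` column afterwards.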

View File

@ -0,0 +1,37 @@
# Copyright 2015 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
"""Drop legacy OVS and LB plugin tables
Revision ID: 5498d17be016
Revises: 4ffceebfada
Create Date: 2015-06-25 14:08:30.984419
"""
# revision identifiers, used by Alembic.
revision = '5498d17be016'
down_revision = '4ffceebfada'
from alembic import op
def upgrade():
op.drop_table('ovs_network_bindings')
op.drop_table('ovs_vlan_allocations')
op.drop_table('network_bindings')
op.drop_table('ovs_tunnel_allocations')
op.drop_table('network_states')
op.drop_table('ovs_tunnel_endpoints')

View File

@ -0,0 +1,35 @@
# Copyright 2015 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
"""add order to dnsnameservers
Revision ID: 1c844d1677f7
Revises: 2a16083502f3
Create Date: 2015-07-21 22:59:03.383850
"""
# revision identifiers, used by Alembic.
revision = '1c844d1677f7'
down_revision = '2a16083502f3'
from alembic import op
import sqlalchemy as sa
def upgrade():
op.add_column('dnsnameservers',
sa.Column('order', sa.Integer(),
server_default='0', nullable=False))

View File

@ -0,0 +1,62 @@
# Copyright 2014-2015 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
"""Flavor framework
Revision ID: 313373c0ffee
Revises: 52c5312f6baf
Create Date: 2014-07-17 03:00:00.00
"""
# revision identifiers, used by Alembic.
revision = '313373c0ffee'
down_revision = '52c5312f6baf'
from alembic import op
import sqlalchemy as sa
def upgrade():
op.create_table(
'flavors',
sa.Column('id', sa.String(36)),
sa.Column('name', sa.String(255)),
sa.Column('description', sa.String(1024)),
sa.Column('enabled', sa.Boolean, nullable=False,
server_default=sa.sql.true()),
sa.Column('service_type', sa.String(36), nullable=True),
sa.PrimaryKeyConstraint('id')
)
op.create_table(
'serviceprofiles',
sa.Column('id', sa.String(36)),
sa.Column('description', sa.String(1024)),
sa.Column('driver', sa.String(1024), nullable=False),
sa.Column('enabled', sa.Boolean, nullable=False,
server_default=sa.sql.true()),
sa.Column('metainfo', sa.String(4096)),
sa.PrimaryKeyConstraint('id')
)
op.create_table(
'flavorserviceprofilebindings',
sa.Column('service_profile_id', sa.String(36), nullable=False),
sa.Column('flavor_id', sa.String(36), nullable=False),
sa.ForeignKeyConstraint(['service_profile_id'],
['serviceprofiles.id']),
sa.ForeignKeyConstraint(['flavor_id'], ['flavors.id']),
sa.PrimaryKeyConstraint('service_profile_id', 'flavor_id')
)
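The three tables above form a classic many-to-many layout: `flavorserviceprofilebindings` links flavors to service profiles via a composite primary key. A hypothetical in-memory picture (names and driver values invented for illustration only):

```python
# In-memory stand-in for the three tables created by this migration.
flavors = {'f1': {'name': 'gold', 'service_type': 'LOADBALANCER'}}
service_profiles = {'p1': {'driver': 'driver_a'},
                    'p2': {'driver': 'driver_b'}}
# Rows of flavorserviceprofilebindings: (service_profile_id, flavor_id)
bindings = {('p1', 'f1'), ('p2', 'f1')}

def profiles_for_flavor(flavor_id):
    # Equivalent of joining the binding table against serviceprofiles
    # on the composite primary key.
    return sorted(pid for (pid, fid) in bindings if fid == flavor_id)
```

The composite primary key on `(service_profile_id, flavor_id)` guarantees the same profile cannot be bound to the same flavor twice.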

View File

@ -23,7 +23,10 @@ Create Date: 2015-04-19 14:59:15.102609
 # revision identifiers, used by Alembic.
 revision = '354db87e3225'
-down_revision = 'kilo'
+down_revision = None
+branch_labels = ('liberty_expand',)
+depends_on = ('kilo',)
 from alembic import op
 import sqlalchemy as sa

View File

@ -0,0 +1,45 @@
# Copyright 2015 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
"""quota_usage
Revision ID: 45f955889773
Revises: 8675309a5c4f
Create Date: 2015-04-17 08:09:37.611546
"""
# revision identifiers, used by Alembic.
revision = '45f955889773'
down_revision = '8675309a5c4f'
from alembic import op
import sqlalchemy as sa
from sqlalchemy import sql
def upgrade():
op.create_table(
'quotausages',
sa.Column('tenant_id', sa.String(length=255),
nullable=False, primary_key=True, index=True),
sa.Column('resource', sa.String(length=255),
nullable=False, primary_key=True, index=True),
sa.Column('dirty', sa.Boolean(), nullable=False,
server_default=sql.false()),
sa.Column('in_use', sa.Integer(), nullable=False,
server_default='0'),
sa.Column('reserved', sa.Integer(), nullable=False,
server_default='0'))

View File

@ -0,0 +1,47 @@
# Copyright 2015 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
"""network_rbac
Revision ID: 8675309a5c4f
Revises: 313373c0ffee
Create Date: 2015-06-14 13:12:04.012457
"""
# revision identifiers, used by Alembic.
revision = '8675309a5c4f'
down_revision = '313373c0ffee'
from alembic import op
import sqlalchemy as sa
def upgrade():
op.create_table(
'networkrbacs',
sa.Column('id', sa.String(length=36), nullable=False),
sa.Column('object_id', sa.String(length=36), nullable=False),
sa.Column('tenant_id', sa.String(length=255), nullable=True,
index=True),
sa.Column('target_tenant', sa.String(length=255), nullable=False),
sa.Column('action', sa.String(length=255), nullable=False),
sa.ForeignKeyConstraint(['object_id'],
['networks.id'],
ondelete='CASCADE'),
sa.PrimaryKeyConstraint('id'),
sa.UniqueConstraint(
'action', 'object_id', 'target_tenant',
name='uniq_networkrbacs0tenant_target0object_id0action'))

View File

@ -24,11 +24,18 @@ from oslo_config import cfg
 from oslo_utils import importutils
 from neutron.common import repos
+from neutron.common import utils
+# TODO(ihrachyshka): maintain separate HEAD files per branch
 HEAD_FILENAME = 'HEAD'
+HEADS_FILENAME = 'HEADS'
+CURRENT_RELEASE = "liberty"
+MIGRATION_BRANCHES = ('expand', 'contract')
 mods = repos.NeutronModules()
-VALID_SERVICES = map(mods.alembic_name, mods.installed_list())
+VALID_SERVICES = list(map(mods.alembic_name, mods.installed_list()))
 _core_opts = [
@ -41,7 +48,10 @@ _core_opts = [
     cfg.StrOpt('service',
                choices=VALID_SERVICES,
                help=_("The advanced service to execute the command against. "
-                      "Can be one of '%s'.") % "', '".join(VALID_SERVICES))
+                      "Can be one of '%s'.") % "', '".join(VALID_SERVICES)),
+    cfg.BoolOpt('split_branches',
+                default=False,
+                help=_("Enforce using split branches file structure."))
 ]
 _quota_opts = [
@ -76,7 +86,7 @@ def do_alembic_command(config, cmd, *args, **kwargs):
 def do_check_migration(config, cmd):
     do_alembic_command(config, 'branches')
-    validate_head_file(config)
+    validate_heads_file(config)
 def add_alembic_subparser(sub, cmd):
@ -101,6 +111,10 @@ def do_upgrade(config, cmd):
            raise SystemExit(_('Negative delta (downgrade) not supported'))
        revision = '%s+%d' % (revision, delta)
+    # leave branchless 'head' revision request backward compatible by applying
+    # all heads in all available branches.
+    if revision == 'head':
+        revision = 'heads'
     if not CONF.command.sql:
         run_sanity_checks(config, revision)
     do_alembic_command(config, cmd, revision, sql=CONF.command.sql)
@ -116,35 +130,83 @@ def do_stamp(config, cmd):
sql=CONF.command.sql) sql=CONF.command.sql)
def _get_branch_label(branch):
'''Get the latest branch label corresponding to release cycle.'''
return '%s_%s' % (CURRENT_RELEASE, branch)
def _get_branch_head(branch):
'''Get the latest @head specification for a branch.'''
return '%s@head' % _get_branch_label(branch)
def do_revision(config, cmd): def do_revision(config, cmd):
do_alembic_command(config, cmd, '''Generate new revision files, one per branch.'''
message=CONF.command.message, addn_kwargs = {
autogenerate=CONF.command.autogenerate, 'message': CONF.command.message,
sql=CONF.command.sql) 'autogenerate': CONF.command.autogenerate,
update_head_file(config) 'sql': CONF.command.sql,
}
if _use_separate_migration_branches(CONF):
for branch in MIGRATION_BRANCHES:
version_path = _get_version_branch_path(CONF, branch)
addn_kwargs['version_path'] = version_path
def validate_head_file(config): if not os.path.exists(version_path):
script = alembic_script.ScriptDirectory.from_config(config) # Bootstrap initial directory structure
if len(script.get_heads()) > 1: utils.ensure_dir(version_path)
alembic_util.err(_('Timeline branches unable to generate timeline')) # Each new release stream of migrations is detached from
# previous migration chains
addn_kwargs['head'] = 'base'
# Mark the very first revision in the new branch with its label
addn_kwargs['branch_label'] = _get_branch_label(branch)
# TODO(ihrachyshka): ideally, we would also add depends_on here
# to refer to the head of the previous release stream. But
# alembic API does not support it yet.
else:
addn_kwargs['head'] = _get_branch_head(branch)
head_path = os.path.join(script.versions, HEAD_FILENAME) do_alembic_command(config, cmd, **addn_kwargs)
if (os.path.isfile(head_path) and
open(head_path).read().strip() == script.get_current_head()):
return
else: else:
alembic_util.err(_('HEAD file does not match migration timeline head')) do_alembic_command(config, cmd, **addn_kwargs)
update_heads_file(config)
def update_head_file(config): def _get_sorted_heads(script):
'''Get the list of heads for all branches, sorted.'''
heads = script.get_heads()
# +1 stands for the core 'kilo' branch, the one that didn't have branches
if len(heads) > len(MIGRATION_BRANCHES) + 1:
alembic_util.err(_('No new branches are allowed except: %s') %
' '.join(MIGRATION_BRANCHES))
return sorted(heads)
def validate_heads_file(config):
'''Check that HEADS file contains the latest heads for each branch.'''
script = alembic_script.ScriptDirectory.from_config(config) script = alembic_script.ScriptDirectory.from_config(config)
if len(script.get_heads()) > 1: expected_heads = _get_sorted_heads(script)
alembic_util.err(_('Timeline branches unable to generate timeline')) heads_path = _get_active_head_file_path(CONF)
try:
with open(heads_path) as file_:
observed_heads = file_.read().split()
if observed_heads == expected_heads:
return
except IOError:
pass
alembic_util.err(
_('HEADS file does not match migration timeline heads, expected: %s')
% ', '.join(expected_heads))
head_path = os.path.join(script.versions, HEAD_FILENAME)
with open(head_path, 'w+') as f: def update_heads_file(config):
f.write(script.get_current_head()) '''Update HEADS file with the latest branch heads.'''
script = alembic_script.ScriptDirectory.from_config(config)
heads = _get_sorted_heads(script)
heads_path = _get_active_head_file_path(CONF)
with open(heads_path, 'w+') as f:
f.write('\n'.join(heads))
def add_command_parsers(subparsers): def add_command_parsers(subparsers):
@ -191,6 +253,72 @@ command_opt = cfg.SubCommandOpt('command',
 CONF.register_cli_opt(command_opt)
+def _get_neutron_service_base(neutron_config):
+    '''Return base python namespace name for a service.'''
+    if neutron_config.service:
+        validate_service_installed(neutron_config.service)
+        return "neutron_%s" % neutron_config.service
+    return "neutron"
+def _get_root_versions_dir(neutron_config):
+    '''Return root directory that contains all migration rules.'''
+    service_base = _get_neutron_service_base(neutron_config)
+    root_module = importutils.import_module(service_base)
+    return os.path.join(
+        os.path.dirname(root_module.__file__),
+        'db/migration/alembic_migrations/versions')
+def _get_head_file_path(neutron_config):
+    '''Return the path of the file that contains single head.'''
+    return os.path.join(
+        _get_root_versions_dir(neutron_config),
+        HEAD_FILENAME)
+def _get_heads_file_path(neutron_config):
+    '''Return the path of the file that contains all latest heads, sorted.'''
+    return os.path.join(
+        _get_root_versions_dir(neutron_config),
+        HEADS_FILENAME)
+def _get_active_head_file_path(neutron_config):
+    '''Return the path of the file that contains latest head(s), depending on
+    whether multiple branches are used.
+    '''
+    if _use_separate_migration_branches(neutron_config):
+        return _get_heads_file_path(neutron_config)
+    return _get_head_file_path(neutron_config)
+def _get_version_branch_path(neutron_config, branch=None):
+    version_path = _get_root_versions_dir(neutron_config)
+    if branch:
+        return os.path.join(version_path, CURRENT_RELEASE, branch)
+    return version_path
+def _use_separate_migration_branches(neutron_config):
+    '''Detect whether split migration branches should be used.'''
+    return (neutron_config.split_branches or
+            # Use HEADS file to indicate the new, split migration world
+            os.path.exists(_get_heads_file_path(neutron_config)))
+def _set_version_locations(config):
+    '''Make alembic see all revisions in all migration branches.'''
+    version_paths = []
+    version_paths.append(_get_version_branch_path(CONF))
+    if _use_separate_migration_branches(CONF):
+        for branch in MIGRATION_BRANCHES:
+            version_paths.append(_get_version_branch_path(CONF, branch))
+    config.set_main_option('version_locations', ' '.join(version_paths))
 def validate_service_installed(service):
     if not importutils.try_import('neutron_%s' % service):
         alembic_util.err(_('Package neutron-%s not installed') % service)
@ -198,18 +326,14 @@ def validate_service_installed(service):
 def get_script_location(neutron_config):
     location = '%s.db.migration:alembic_migrations'
-    if neutron_config.service:
-        validate_service_installed(neutron_config.service)
-        base = "neutron_%s" % neutron_config.service
-    else:
-        base = "neutron"
-    return location % base
+    return location % _get_neutron_service_base(neutron_config)
 def get_alembic_config():
     config = alembic_config.Config(os.path.join(os.path.dirname(__file__),
                                                 'alembic.ini'))
     config.set_main_option('script_location', get_script_location(CONF))
+    _set_version_locations(config)
     return config
@ -217,7 +341,11 @@ def run_sanity_checks(config, revision):
     script_dir = alembic_script.ScriptDirectory.from_config(config)
     def check_sanity(rev, context):
-        for script in script_dir.iterate_revisions(revision, rev):
+        # TODO(ihrachyshka): here we use internal API for alembic; we may need
+        # alembic to expose implicit_base= argument into public
+        # iterate_revisions() call
+        for script in script_dir.revision_map.iterate_revisions(
+                revision, rev, implicit_base=True):
             if hasattr(script.module, 'check_sanity'):
                 script.module.check_sanity(context.connection)
         return []
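The HEADS-file check introduced in this change reduces to a simple comparison: the file's whitespace-separated contents must equal the sorted list of alembic branch heads. A minimal standalone sketch of that comparison (helper name hypothetical, not part of the patch):

```python
def heads_file_matches(file_text, branch_heads):
    # Mirrors validate_heads_file(): the HEADS file is read, split on
    # whitespace, and compared against the sorted branch head revisions.
    return file_text.split() == sorted(branch_heads)
```

Sorting on both sides makes the check independent of the order in which alembic reports the heads.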

View File

@ -1,515 +0,0 @@
# Copyright (c) 2014 Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
This script will migrate the database of an openvswitch, linuxbridge or
Hyper-V plugin so that it can be used with the ml2 plugin.
Known Limitations:
- THIS SCRIPT IS DESTRUCTIVE! Make sure to backup your
Neutron database before running this script, in case anything goes
wrong.
- It will be necessary to upgrade the database to the target release
via neutron-db-manage before attempting to migrate to ml2.
Initially, only the icehouse release is supported.
- This script does not automate configuration migration.
Example usage:
python -m neutron.db.migration.migrate_to_ml2 openvswitch \
mysql+pymysql://login:pass@127.0.0.1/neutron
Note that migration of tunneling state will only be attempted if the
--tunnel-type parameter is provided.
To manually test migration from ovs to ml2 with devstack:
- stack with Q_PLUGIN=openvswitch
- boot an instance and validate connectivity
- stop the neutron service and all agents
- run the neutron-migrate-to-ml2 script
- update /etc/neutron/neutron.conf as follows:
core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin
- Create /etc/neutron/plugins/ml2/ml2_conf.ini and ensure that:
- ml2.mechanism_drivers includes 'openvswitch'
- ovs.local_ip is set correctly
- database.connection is set correctly
- Start the neutron service with the ml2 config file created in
the previous step in place of the openvswitch config file
- Start all the agents
- verify that the booted instance still has connectivity
- boot a second instance and validate connectivity
"""
import argparse
from oslo_db.sqlalchemy import session
from oslo_utils import uuidutils
import sqlalchemy as sa
from neutron.extensions import portbindings
from neutron.plugins.common import constants as p_const
# Migration targets
LINUXBRIDGE = 'linuxbridge'
OPENVSWITCH = 'openvswitch'
HYPERV = 'hyperv'
# Releases
ICEHOUSE = 'icehouse'
JUNO = 'juno'
SUPPORTED_SCHEMA_VERSIONS = [ICEHOUSE, JUNO]
def check_db_schema_version(engine, metadata):
"""Check that current version of the db schema is supported."""
version_table = sa.Table(
'alembic_version', metadata, autoload=True, autoload_with=engine)
versions = [v[0] for v in engine.execute(version_table.select())]
if not versions:
raise ValueError(_("Missing version in alembic_versions table"))
elif len(versions) > 1:
raise ValueError(_("Multiple versions in alembic_versions table: %s")
% versions)
current_version = versions[0]
if current_version not in SUPPORTED_SCHEMA_VERSIONS:
raise SystemError(_("Unsupported database schema %(current)s. "
"Please migrate your database to one of following "
"versions: %(supported)s")
% {'current': current_version,
'supported': ', '.join(SUPPORTED_SCHEMA_VERSIONS)}
)
# Duplicated from
# neutron.plugins.ml2.drivers.linuxbridge.agent.common.constants to
# avoid having any dependency on the linuxbridge plugin being
# installed.
def interpret_vlan_id(vlan_id):
"""Return (network_type, segmentation_id) tuple for encoded vlan_id."""
FLAT_VLAN_ID = -1
LOCAL_VLAN_ID = -2
if vlan_id == LOCAL_VLAN_ID:
return (p_const.TYPE_LOCAL, None)
elif vlan_id == FLAT_VLAN_ID:
return (p_const.TYPE_FLAT, None)
else:
return (p_const.TYPE_VLAN, vlan_id)
class BaseMigrateToMl2(object):
def __init__(self, vif_type, driver_type, segment_table_name,
vlan_allocation_table_name, old_tables):
self.vif_type = vif_type
self.driver_type = driver_type
self.segment_table_name = segment_table_name
self.vlan_allocation_table_name = vlan_allocation_table_name
self.old_tables = old_tables
def __call__(self, connection_url, save_tables=False, tunnel_type=None,
vxlan_udp_port=None):
engine = session.create_engine(connection_url)
metadata = sa.MetaData()
check_db_schema_version(engine, metadata)
if hasattr(self, 'define_ml2_tables'):
self.define_ml2_tables(metadata)
# Autoload the ports table to ensure that foreign keys to it and
# the network table can be created for the new tables.
sa.Table('ports', metadata, autoload=True, autoload_with=engine)
metadata.create_all(engine)
self.migrate_network_segments(engine, metadata)
if tunnel_type:
self.migrate_tunnels(engine, tunnel_type, vxlan_udp_port)
self.migrate_vlan_allocations(engine)
self.migrate_port_bindings(engine, metadata)
if hasattr(self, 'drop_old_tables'):
self.drop_old_tables(engine, save_tables)
def migrate_segment_dict(self, binding):
binding['id'] = uuidutils.generate_uuid()
def migrate_network_segments(self, engine, metadata):
# Migrating network segments requires loading the data to python
# so that a uuid can be generated for each segment.
source_table = sa.Table(self.segment_table_name, metadata,
autoload=True, autoload_with=engine)
source_segments = engine.execute(source_table.select())
ml2_segments = [dict(x) for x in source_segments]
for segment in ml2_segments:
self.migrate_segment_dict(segment)
if ml2_segments:
ml2_network_segments = metadata.tables['ml2_network_segments']
engine.execute(ml2_network_segments.insert(), ml2_segments)
def migrate_tunnels(self, engine, tunnel_type, vxlan_udp_port=None):
"""Override this method to perform plugin-specific tunnel migration."""
pass
def migrate_vlan_allocations(self, engine):
engine.execute(("""
INSERT INTO ml2_vlan_allocations
SELECT physical_network, vlan_id, allocated
FROM %(source_table)s
WHERE allocated = TRUE
""") % {'source_table': self.vlan_allocation_table_name})
def get_port_segment_map(self, engine):
"""Retrieve a mapping of port id to segment id.
The monolithic plugins only support a single segment per
network, so the segment id can be uniquely identified by
the network associated with a given port.
"""
port_segments = engine.execute("""
SELECT ports_network.port_id, ml2_network_segments.id AS segment_id
FROM ml2_network_segments, (
SELECT portbindingports.port_id, ports.network_id
FROM portbindingports, ports
WHERE portbindingports.port_id = ports.id
) AS ports_network
WHERE ml2_network_segments.network_id = ports_network.network_id
""")
return dict(x for x in port_segments)
def migrate_port_bindings(self, engine, metadata):
port_segment_map = self.get_port_segment_map(engine)
port_binding_ports = sa.Table('portbindingports', metadata,
autoload=True, autoload_with=engine)
source_bindings = engine.execute(port_binding_ports.select())
ml2_bindings = [dict(x) for x in source_bindings]
for binding in ml2_bindings:
binding['vif_type'] = self.vif_type
binding['driver'] = self.driver_type
segment = port_segment_map.get(binding['port_id'])
if segment:
binding['segment'] = segment
if ml2_bindings:
ml2_port_bindings = metadata.tables['ml2_port_bindings']
engine.execute(ml2_port_bindings.insert(), ml2_bindings)
class BaseMigrateToMl2_IcehouseMixin(object):
"""A mixin to ensure ml2 database schema state for Icehouse.
This classes the missing tables for Icehouse schema revisions. In Juno,
the schema state has been healed, so we do not need to run these.
"""
def drop_old_tables(self, engine, save_tables=False):
if save_tables:
return
old_tables = self.old_tables + [self.vlan_allocation_table_name,
self.segment_table_name]
for table_name in old_tables:
engine.execute('DROP TABLE %s' % table_name)
def define_ml2_tables(self, metadata):
sa.Table(
'arista_provisioned_nets', metadata,
sa.Column('tenant_id', sa.String(length=255), nullable=True),
sa.Column('id', sa.String(length=36), nullable=False),
sa.Column('network_id', sa.String(length=36), nullable=True),
sa.Column('segmentation_id', sa.Integer(),
autoincrement=False, nullable=True),
sa.PrimaryKeyConstraint('id'),
)
sa.Table(
'arista_provisioned_vms', metadata,
sa.Column('tenant_id', sa.String(length=255), nullable=True),
sa.Column('id', sa.String(length=36), nullable=False),
sa.Column('vm_id', sa.String(length=255), nullable=True),
sa.Column('host_id', sa.String(length=255), nullable=True),
sa.Column('port_id', sa.String(length=36), nullable=True),
sa.Column('network_id', sa.String(length=36), nullable=True),
sa.PrimaryKeyConstraint('id'),
)
sa.Table(
'arista_provisioned_tenants', metadata,
sa.Column('tenant_id', sa.String(length=255), nullable=True),
sa.Column('id', sa.String(length=36), nullable=False),
sa.PrimaryKeyConstraint('id'),
)
sa.Table(
'cisco_ml2_nexusport_bindings', metadata,
sa.Column('binding_id', sa.Integer(), nullable=False),
sa.Column('port_id', sa.String(length=255), nullable=True),
sa.Column('vlan_id', sa.Integer(), autoincrement=False,
nullable=False),
sa.Column('switch_ip', sa.String(length=255), nullable=True),
sa.Column('instance_id', sa.String(length=255), nullable=True),
sa.PrimaryKeyConstraint('binding_id'),
)
sa.Table(
'cisco_ml2_credentials', metadata,
sa.Column('credential_id', sa.String(length=255), nullable=True),
sa.Column('tenant_id', sa.String(length=255), nullable=False),
sa.Column('credential_name', sa.String(length=255),
nullable=False),
sa.Column('user_name', sa.String(length=255), nullable=True),
sa.Column('password', sa.String(length=255), nullable=True),
sa.PrimaryKeyConstraint('tenant_id', 'credential_name'),
)
sa.Table(
'ml2_flat_allocations', metadata,
sa.Column('physical_network', sa.String(length=64),
nullable=False),
sa.PrimaryKeyConstraint('physical_network'),
)
sa.Table(
'ml2_gre_allocations', metadata,
sa.Column('gre_id', sa.Integer, nullable=False,
autoincrement=False),
sa.Column('allocated', sa.Boolean, nullable=False),
sa.PrimaryKeyConstraint('gre_id'),
)
sa.Table(
'ml2_gre_endpoints', metadata,
sa.Column('ip_address', sa.String(length=64)),
sa.PrimaryKeyConstraint('ip_address'),
)
sa.Table(
'ml2_network_segments', metadata,
sa.Column('id', sa.String(length=36), nullable=False),
sa.Column('network_id', sa.String(length=36), nullable=False),
sa.Column('network_type', sa.String(length=32), nullable=False),
sa.Column('physical_network', sa.String(length=64), nullable=True),
sa.Column('segmentation_id', sa.Integer(), nullable=True),
sa.ForeignKeyConstraint(['network_id'], ['networks.id'],
ondelete='CASCADE'),
sa.PrimaryKeyConstraint('id'),
)
sa.Table(
'ml2_port_bindings', metadata,
sa.Column('port_id', sa.String(length=36), nullable=False),
sa.Column('host', sa.String(length=255), nullable=False),
sa.Column('vif_type', sa.String(length=64), nullable=False),
sa.Column('driver', sa.String(length=64), nullable=True),
sa.Column('segment', sa.String(length=36), nullable=True),
sa.Column('vnic_type', sa.String(length=64), nullable=False,
server_default='normal'),
sa.Column('vif_details', sa.String(4095), nullable=False,
server_default=''),
sa.Column('profile', sa.String(4095), nullable=False,
server_default=''),
sa.ForeignKeyConstraint(['port_id'], ['ports.id'],
ondelete='CASCADE'),
sa.ForeignKeyConstraint(['segment'], ['ml2_network_segments.id'],
ondelete='SET NULL'),
sa.PrimaryKeyConstraint('port_id'),
)
sa.Table(
'ml2_vlan_allocations', metadata,
sa.Column('physical_network', sa.String(length=64),
nullable=False),
sa.Column('vlan_id', sa.Integer(), autoincrement=False,
nullable=False),
sa.Column('allocated', sa.Boolean(), autoincrement=False,
nullable=False),
sa.PrimaryKeyConstraint('physical_network', 'vlan_id'),
)
sa.Table(
'ml2_vxlan_allocations', metadata,
sa.Column('vxlan_vni', sa.Integer, nullable=False,
autoincrement=False),
sa.Column('allocated', sa.Boolean, nullable=False),
sa.PrimaryKeyConstraint('vxlan_vni'),
)
sa.Table(
'ml2_vxlan_endpoints', metadata,
sa.Column('ip_address', sa.String(length=64)),
sa.Column('udp_port', sa.Integer(), nullable=False,
autoincrement=False),
sa.PrimaryKeyConstraint('ip_address', 'udp_port'),
)
class MigrateLinuxBridgeToMl2_Juno(BaseMigrateToMl2):
def __init__(self):
super(MigrateLinuxBridgeToMl2_Juno, self).__init__(
vif_type=portbindings.VIF_TYPE_BRIDGE,
driver_type=LINUXBRIDGE,
segment_table_name='network_bindings',
vlan_allocation_table_name='network_states',
old_tables=['portbindingports'])
def migrate_segment_dict(self, binding):
super(MigrateLinuxBridgeToMl2_Juno, self).migrate_segment_dict(
binding)
vlan_id = binding.pop('vlan_id')
network_type, segmentation_id = interpret_vlan_id(vlan_id)
binding['network_type'] = network_type
binding['segmentation_id'] = segmentation_id
class MigrateHyperVPluginToMl2_Juno(BaseMigrateToMl2):
def __init__(self):
super(MigrateHyperVPluginToMl2_Juno, self).__init__(
vif_type=portbindings.VIF_TYPE_HYPERV,
driver_type=HYPERV,
segment_table_name='hyperv_network_bindings',
vlan_allocation_table_name='hyperv_vlan_allocations',
old_tables=['portbindingports'])

    def migrate_segment_dict(self, binding):
super(MigrateHyperVPluginToMl2_Juno, self).migrate_segment_dict(
binding)
# the 'hyperv_network_bindings' table has the column
# 'segmentation_id' instead of 'vlan_id'.
vlan_id = binding.pop('segmentation_id')
network_type, segmentation_id = interpret_vlan_id(vlan_id)
binding['network_type'] = network_type
binding['segmentation_id'] = segmentation_id


class MigrateOpenvswitchToMl2_Juno(BaseMigrateToMl2):
def __init__(self):
super(MigrateOpenvswitchToMl2_Juno, self).__init__(
vif_type=portbindings.VIF_TYPE_OVS,
driver_type=OPENVSWITCH,
segment_table_name='ovs_network_bindings',
vlan_allocation_table_name='ovs_vlan_allocations',
old_tables=[
'ovs_tunnel_allocations',
'ovs_tunnel_endpoints',
'portbindingports',
])

    def migrate_tunnels(self, engine, tunnel_type, vxlan_udp_port=None):
if tunnel_type == p_const.TYPE_GRE:
engine.execute("""
INSERT INTO ml2_gre_allocations
SELECT tunnel_id as gre_id, allocated
FROM ovs_tunnel_allocations
WHERE allocated = TRUE
""")
engine.execute("""
INSERT INTO ml2_gre_endpoints
SELECT ip_address
FROM ovs_tunnel_endpoints
""")
elif tunnel_type == p_const.TYPE_VXLAN:
if not vxlan_udp_port:
vxlan_udp_port = p_const.VXLAN_UDP_PORT
engine.execute("""
INSERT INTO ml2_vxlan_allocations
SELECT tunnel_id as vxlan_vni, allocated
FROM ovs_tunnel_allocations
WHERE allocated = TRUE
""")
engine.execute(sa.text("""
INSERT INTO ml2_vxlan_endpoints
SELECT ip_address, :udp_port as udp_port
FROM ovs_tunnel_endpoints
"""), udp_port=vxlan_udp_port)
else:
raise ValueError(_('Unknown tunnel type: %s') % tunnel_type)


class MigrateLinuxBridgeToMl2_Icehouse(MigrateLinuxBridgeToMl2_Juno,
BaseMigrateToMl2_IcehouseMixin):
pass


class MigrateOpenvswitchToMl2_Icehouse(MigrateOpenvswitchToMl2_Juno,
BaseMigrateToMl2_IcehouseMixin):
pass


class MigrateHyperVPluginToMl2_Icehouse(MigrateHyperVPluginToMl2_Juno,
BaseMigrateToMl2_IcehouseMixin):
pass


migrate_map = {
ICEHOUSE: {
OPENVSWITCH: MigrateOpenvswitchToMl2_Icehouse,
LINUXBRIDGE: MigrateLinuxBridgeToMl2_Icehouse,
HYPERV: MigrateHyperVPluginToMl2_Icehouse,
},
JUNO: {
OPENVSWITCH: MigrateOpenvswitchToMl2_Juno,
LINUXBRIDGE: MigrateLinuxBridgeToMl2_Juno,
HYPERV: MigrateHyperVPluginToMl2_Juno,
},
}


def main():
parser = argparse.ArgumentParser()
parser.add_argument('plugin', choices=[OPENVSWITCH, LINUXBRIDGE, HYPERV],
help=_('The plugin type whose database will be '
'migrated'))
parser.add_argument('connection',
help=_('The connection url for the target db'))
parser.add_argument('--tunnel-type', choices=[p_const.TYPE_GRE,
p_const.TYPE_VXLAN],
help=_('The %s tunnel type to migrate from') %
OPENVSWITCH)
parser.add_argument('--vxlan-udp-port', default=None, type=int,
help=_('The UDP port to use for VXLAN tunnels.'))
parser.add_argument('--release', default=JUNO, choices=[ICEHOUSE, JUNO])
parser.add_argument('--save-tables', default=False, action='store_true',
help=_("Retain the old plugin's tables"))
#TODO(marun) Provide a verbose option
args = parser.parse_args()
if args.plugin in [LINUXBRIDGE, HYPERV] and (args.tunnel_type or
args.vxlan_udp_port):
msg = _('Tunnel args (tunnel-type and vxlan-udp-port) are not valid '
'for the %s plugin')
parser.error(msg % args.plugin)
try:
migrate_func = migrate_map[args.release][args.plugin]()
except KeyError:
msg = _('Support for migrating %(plugin)s for release '
'%(release)s is not yet implemented')
parser.error(msg % {'plugin': args.plugin, 'release': args.release})
else:
migrate_func(args.connection, args.save_tables, args.tunnel_type,
args.vxlan_udp_port)


if __name__ == '__main__':
main()
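The release/plugin dispatch in ``main()`` above can be sketched in isolation. The class below is a stand-in for the real migrator, and the connection URL is a placeholder; only the two-level ``migrate_map`` lookup pattern is taken from the script.

```python
# Minimal sketch of the two-level migrate_map dispatch used by main():
# release -> plugin -> migrator class. The migrator here is a stand-in.
JUNO = 'juno'
OPENVSWITCH = 'openvswitch'


class FakeMigrateOpenvswitchToMl2(object):
    def __call__(self, connection, save_tables, tunnel_type, vxlan_udp_port):
        # The real class would run the SQL migration against `connection`.
        return 'migrated %s via %s' % (connection, tunnel_type)


migrate_map = {JUNO: {OPENVSWITCH: FakeMigrateOpenvswitchToMl2}}


def dispatch(release, plugin, connection):
    try:
        # Instantiate first, exactly as main() does, so an unknown
        # release/plugin pair fails before any work is attempted.
        migrate_func = migrate_map[release][plugin]()
    except KeyError:
        raise ValueError('Support for migrating %s for release %s is not '
                         'yet implemented' % (plugin, release))
    return migrate_func(connection, False, 'vxlan', 4789)


print(dispatch(JUNO, OPENVSWITCH, 'sqlite://'))  # → migrated sqlite:// via vxlan
```

An unsupported combination raises before any database work starts, mirroring the `parser.error()` path in the script.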


@@ -21,6 +21,7 @@ Based on this comparison database can be healed with healing migration.
 """
+from neutron.db import address_scope_db  # noqa
 from neutron.db import agents_db  # noqa
 from neutron.db import agentschedulers_db  # noqa
 from neutron.db import allowedaddresspairs_db  # noqa
@@ -28,6 +29,7 @@ from neutron.db import dvr_mac_db  # noqa
 from neutron.db import external_net_db  # noqa
 from neutron.db import extradhcpopt_db  # noqa
 from neutron.db import extraroute_db  # noqa
+from neutron.db import flavors_db  # noqa
 from neutron.db import l3_agentschedulers_db  # noqa
 from neutron.db import l3_attrs_db  # noqa
 from neutron.db import l3_db  # noqa
@@ -39,7 +41,8 @@ from neutron.db import model_base
 from neutron.db import models_v2  # noqa
 from neutron.db import portbindings_db  # noqa
 from neutron.db import portsecurity_db  # noqa
-from neutron.db import quota_db  # noqa
+from neutron.db.quota import models  # noqa
+from neutron.db import rbac_db_models  # noqa
 from neutron.db import securitygroups_db  # noqa
 from neutron.db import servicetype_db  # noqa
 from neutron.ipam.drivers.neutrondb_ipam import db_models  # noqa
@@ -49,18 +52,12 @@ from neutron.plugins.brocade.db import models as brocade_models  # noqa
 from neutron.plugins.cisco.db.l3 import l3_models  # noqa
 from neutron.plugins.cisco.db import n1kv_models_v2  # noqa
 from neutron.plugins.cisco.db import network_models_v2  # noqa
-from neutron.plugins.metaplugin import meta_models_v2  # noqa
 from neutron.plugins.ml2.drivers.arista import db  # noqa
 from neutron.plugins.ml2.drivers.brocade.db import (  # noqa
     models as ml2_brocade_models)
-from neutron.plugins.ml2.drivers.cisco.apic import apic_model  # noqa
-from neutron.plugins.ml2.drivers.cisco.n1kv import n1kv_models  # noqa
 from neutron.plugins.ml2.drivers.cisco.nexus import (  # noqa
     nexus_models_v2 as ml2_nexus_models_v2)
 from neutron.plugins.ml2.drivers.cisco.ucsm import ucsm_model  # noqa
-from neutron.plugins.ml2.drivers.linuxbridge.agent import (  # noqa
-    l2network_models_v2)
-from neutron.plugins.ml2.drivers.openvswitch.agent import ovs_models_v2  # noqa
 from neutron.plugins.ml2.drivers import type_flat  # noqa
 from neutron.plugins.ml2.drivers import type_gre  # noqa
 from neutron.plugins.ml2.drivers import type_vlan  # noqa


@@ -15,6 +15,7 @@
 from oslo_utils import uuidutils
 import sqlalchemy as sa
+from sqlalchemy.ext.associationproxy import association_proxy
 from sqlalchemy import orm
 
 from neutron.api.v2 import attributes as attr
@@ -132,7 +133,8 @@ class Port(model_base.BASEV2, HasId, HasTenant):
     name = sa.Column(sa.String(attr.NAME_MAX_LEN))
     network_id = sa.Column(sa.String(36), sa.ForeignKey("networks.id"),
                            nullable=False)
-    fixed_ips = orm.relationship(IPAllocation, backref='port', lazy='joined')
+    fixed_ips = orm.relationship(IPAllocation, backref='port', lazy='joined',
+                                 passive_deletes='all')
     mac_address = sa.Column(sa.String(32), nullable=False)
     admin_state_up = sa.Column(sa.Boolean(), nullable=False)
     status = sa.Column(sa.String(16), nullable=False)
@@ -177,6 +179,7 @@ class DNSNameServer(model_base.BASEV2):
                           sa.ForeignKey('subnets.id',
                                         ondelete="CASCADE"),
                           primary_key=True)
+    order = sa.Column(sa.Integer, nullable=False, server_default='0')
 
 
 class Subnet(model_base.BASEV2, HasId, HasTenant):
@@ -200,12 +203,12 @@ class Subnet(model_base.BASEV2, HasId, HasTenant):
     dns_nameservers = orm.relationship(DNSNameServer,
                                        backref='subnet',
                                        cascade='all, delete, delete-orphan',
+                                       order_by=DNSNameServer.order,
                                        lazy='joined')
     routes = orm.relationship(SubnetRoute,
                               backref='subnet',
                               cascade='all, delete, delete-orphan',
                               lazy='joined')
-    shared = sa.Column(sa.Boolean)
     ipv6_ra_mode = sa.Column(sa.Enum(constants.IPV6_SLAAC,
                                      constants.DHCPV6_STATEFUL,
                                      constants.DHCPV6_STATELESS,
@@ -214,6 +217,7 @@ class Subnet(model_base.BASEV2, HasId, HasTenant):
                                          constants.DHCPV6_STATEFUL,
                                          constants.DHCPV6_STATELESS,
                                          name='ipv6_address_modes'), nullable=True)
+    rbac_entries = association_proxy('networks', 'rbac_entries')
 
 
 class SubnetPoolPrefix(model_base.BASEV2):
@@ -251,10 +255,13 @@ class Network(model_base.BASEV2, HasId, HasTenant):
     name = sa.Column(sa.String(attr.NAME_MAX_LEN))
     ports = orm.relationship(Port, backref='networks')
-    subnets = orm.relationship(Subnet, backref='networks',
-                               lazy="joined")
+    subnets = orm.relationship(
+        Subnet, backref=orm.backref('networks', lazy='joined'),
+        lazy="joined")
     status = sa.Column(sa.String(16))
     admin_state_up = sa.Column(sa.Boolean)
-    shared = sa.Column(sa.Boolean)
     mtu = sa.Column(sa.Integer, nullable=True)
     vlan_transparent = sa.Column(sa.Boolean, nullable=True)
+    rbac_entries = orm.relationship("NetworkRBAC", backref='network',
+                                    lazy='joined',
+                                    cascade='all, delete, delete-orphan')

neutron/db/quota/api.py (new file)
@@ -0,0 +1,159 @@
# Copyright (c) 2015 OpenStack Foundation. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import collections

from neutron.db import common_db_mixin as common_db_api
from neutron.db.quota import models as quota_models


class QuotaUsageInfo(collections.namedtuple(
'QuotaUsageInfo', ['resource', 'tenant_id', 'used', 'reserved', 'dirty'])):

    @property
def total(self):
"""Total resource usage (reserved and used)."""
return self.reserved + self.used


def get_quota_usage_by_resource_and_tenant(context, resource, tenant_id,
lock_for_update=False):
"""Return usage info for a given resource and tenant.
:param context: Request context
:param resource: Name of the resource
:param tenant_id: Tenant identifier
:param lock_for_update: if True sets a write-intent lock on the query
:returns: a QuotaUsageInfo instance
"""
query = common_db_api.model_query(context, quota_models.QuotaUsage)
query = query.filter_by(resource=resource, tenant_id=tenant_id)
if lock_for_update:
query = query.with_lockmode('update')
result = query.first()
if not result:
return
return QuotaUsageInfo(result.resource,
result.tenant_id,
result.in_use,
result.reserved,
result.dirty)


def get_quota_usage_by_resource(context, resource):
query = common_db_api.model_query(context, quota_models.QuotaUsage)
query = query.filter_by(resource=resource)
return [QuotaUsageInfo(item.resource,
item.tenant_id,
item.in_use,
item.reserved,
item.dirty) for item in query]


def get_quota_usage_by_tenant_id(context, tenant_id):
query = common_db_api.model_query(context, quota_models.QuotaUsage)
query = query.filter_by(tenant_id=tenant_id)
return [QuotaUsageInfo(item.resource,
item.tenant_id,
item.in_use,
item.reserved,
item.dirty) for item in query]


def set_quota_usage(context, resource, tenant_id,
in_use=None, reserved=None, delta=False):
"""Set resource quota usage.
:param context: instance of neutron context with db session
:param resource: name of the resource for which usage is being set
:param tenant_id: identifier of the tenant for which quota usage is
being set
:param in_use: integer specifying the new quantity of used resources,
or a delta to apply to current used resource
:param reserved: integer specifying the new quantity of reserved resources,
or a delta to apply to current reserved resources
:param delta: Specififies whether in_use or reserved are absolute numbers
or deltas (default to False)
"""
query = common_db_api.model_query(context, quota_models.QuotaUsage)
query = query.filter_by(resource=resource).filter_by(tenant_id=tenant_id)
usage_data = query.first()
with context.session.begin(subtransactions=True):
if not usage_data:
# Must create entry
usage_data = quota_models.QuotaUsage(
resource=resource,
tenant_id=tenant_id)
context.session.add(usage_data)
# Perform explicit comparison with None as 0 is a valid value
if in_use is not None:
if delta:
in_use = usage_data.in_use + in_use
usage_data.in_use = in_use
if reserved is not None:
if delta:
reserved = usage_data.reserved + reserved
usage_data.reserved = reserved
# After an explicit update the dirty bit should always be reset
usage_data.dirty = False
return QuotaUsageInfo(usage_data.resource,
usage_data.tenant_id,
usage_data.in_use,
usage_data.reserved,
usage_data.dirty)
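The `in_use`/`reserved` handling above mixes absolute updates and deltas, and always clears the dirty bit afterwards. A standalone sketch of just that arithmetic, with `UsageRecord` as a stand-in for the `QuotaUsage` model row:

```python
# Stand-alone sketch of the delta semantics in set_quota_usage(): when
# delta=True the values are added to the current counters instead of
# replacing them. UsageRecord is a stand-in for the QuotaUsage model row.
class UsageRecord(object):
    def __init__(self, in_use=0, reserved=0, dirty=True):
        self.in_use = in_use
        self.reserved = reserved
        self.dirty = dirty


def apply_usage(record, in_use=None, reserved=None, delta=False):
    # Explicit comparison with None, as 0 is a valid absolute value.
    if in_use is not None:
        record.in_use = record.in_use + in_use if delta else in_use
    if reserved is not None:
        record.reserved = record.reserved + reserved if delta else reserved
    # After an explicit update the dirty bit is always reset.
    record.dirty = False
    return record


row = UsageRecord(in_use=10, reserved=3)
apply_usage(row, in_use=2, delta=True)       # 10 + 2
apply_usage(row, reserved=0)                 # absolute reset to 0
print(row.in_use, row.reserved, row.dirty)   # → 12 0 False
```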


def set_quota_usage_dirty(context, resource, tenant_id, dirty=True):
"""Set quota usage dirty bit for a given resource and tenant.
:param resource: a resource for which quota usage if tracked
:param tenant_id: tenant identifier
:param dirty: the desired value for the dirty bit (defaults to True)
:returns: 1 if the quota usage data were updated, 0 otherwise.
"""
query = common_db_api.model_query(context, quota_models.QuotaUsage)
query = query.filter_by(resource=resource).filter_by(tenant_id=tenant_id)
return query.update({'dirty': dirty})


def set_resources_quota_usage_dirty(context, resources, tenant_id, dirty=True):
"""Set quota usage dirty bit for a given tenant and multiple resources.
:param resources: list of resource for which the dirty bit is going
to be set
:param tenant_id: tenant identifier
:param dirty: the desired value for the dirty bit (defaults to True)
:returns: the number of records for which the bit was actually set.
"""
query = common_db_api.model_query(context, quota_models.QuotaUsage)
query = query.filter_by(tenant_id=tenant_id)
if resources:
query = query.filter(quota_models.QuotaUsage.resource.in_(resources))
# synchronize_session=False needed because of the IN condition
return query.update({'dirty': dirty}, synchronize_session=False)


def set_all_quota_usage_dirty(context, resource, dirty=True):
"""Set the dirty bit on quota usage for all tenants.
:param resource: the resource for which the dirty bit should be set
:returns: the number of tenants for which the dirty bit was
actually updated
"""
query = common_db_api.model_query(context, quota_models.QuotaUsage)
query = query.filter_by(resource=resource)
return query.update({'dirty': dirty})
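As a quick illustration of the `QuotaUsageInfo` value object defined at the top of this module: its `total` property simply sums the used and reserved counters. The snippet below is a self-contained copy of that class (no database or neutron imports needed), with example values chosen for illustration:

```python
import collections


# Self-contained copy of the QuotaUsageInfo namedtuple from
# neutron/db/quota/api.py, for illustration only.
class QuotaUsageInfo(collections.namedtuple(
        'QuotaUsageInfo',
        ['resource', 'tenant_id', 'used', 'reserved', 'dirty'])):

    @property
    def total(self):
        """Total resource usage (reserved and used)."""
        return self.reserved + self.used


usage = QuotaUsageInfo('port', 'tenant-a', used=5, reserved=2, dirty=False)
print(usage.total)  # → 7
```

Being a namedtuple, the object is immutable; a fresh instance is built from each `QuotaUsage` row rather than mutated in place.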

Some files were not shown because too many files have changed in this diff.