diff --git a/doc/source/devref/callbacks.rst b/doc/source/devref/callbacks.rst index 4c6e6e71f24..71c85f80edb 100644 --- a/doc/source/devref/callbacks.rst +++ b/doc/source/devref/callbacks.rst @@ -65,7 +65,7 @@ do whatever they are supposed to do. In a callback-less world this would work li # A gets hold of the references of B and C # A calls B # A calls C - B->my_random_method_for_knowning_about_router_created() + B->my_random_method_for_knowing_about_router_created() C->my_random_very_difficult_to_remember_method_about_router_created() If B and/or C change, things become sour. In a callback-based world, things become a lot diff --git a/doc/source/devref/client_command_extensions.rst b/doc/source/devref/client_command_extensions.rst new file mode 100644 index 00000000000..f2bb8d83027 --- /dev/null +++ b/doc/source/devref/client_command_extensions.rst @@ -0,0 +1,9 @@ +================================= +Client command extension support +================================= + +The client command extension adds support for extending the neutron client while +considering ease of creation. + +The full document can be found in the python-neutronclient repository: +http://docs.openstack.org/developer/python-neutronclient/devref/client_command_extensions.html \ No newline at end of file diff --git a/doc/source/devref/contribute.rst b/doc/source/devref/contribute.rst index a39d011a30a..bf4f6ba5d74 100644 --- a/doc/source/devref/contribute.rst +++ b/doc/source/devref/contribute.rst @@ -1,14 +1,11 @@ Contributing new extensions to Neutron ====================================== -**NOTE!** ---------- +.. note:: **Third-party plugins/drivers which do not start decomposition in + Liberty will be marked as deprecated and removed before the Mitaka-3 + milestone.** -**Third-party plugins/drivers which do not start decomposition in Liberty will -be marked as deprecated, and they will be removed before the Mxxx-3 -milestone.** - -Read on for details ... + Read on for details ... 
Introduction @@ -46,7 +43,7 @@ by allowing third-party code to exist entirely out of tree. Further extension mechanisms have been provided to better support external plugins and drivers that alter the API and/or the data model. -In the Mxxx cycle we will **require** all third-party code to be moved out of +In the Mitaka cycle we will **require** all third-party code to be moved out of the neutron tree completely. 'Outside the tree' can be anything that is publicly available: it may be a repo diff --git a/doc/source/devref/db_layer.rst b/doc/source/devref/db_layer.rst index a240f1d630f..2b6ded3fa05 100644 --- a/doc/source/devref/db_layer.rst +++ b/doc/source/devref/db_layer.rst @@ -23,6 +23,152 @@ should also be added in model. If default value in database is not needed, business logic. +How we manage database migration rules +-------------------------------------- + +Since Liberty, Neutron maintains two parallel alembic migration branches. + +The first one, called 'expand', is used to store expansion-only migration +rules. Those rules are strictly additive and can be applied while +neutron-server is running. Examples of additive database schema changes are: +creating a new table, adding a new table column, adding a new index, etc. + +The second branch, called 'contract', is used to store those migration rules +that are not safe to apply while neutron-server is running. Those include: +column or table removal, moving data from one part of the database into another +(renaming a column, transforming single table into multiple, etc.), introducing +or modifying constraints, etc. + +The intent of the split is to allow invoking those safe migrations from +'expand' branch while neutron-server is running, reducing downtime needed to +upgrade the service. 
+ +To apply just expansion rules, execute: + +- neutron-db-manage upgrade liberty_expand@head + +After the first step is done, you can stop neutron-server, apply the remaining +non-expansive migration rules, if any: + +- neutron-db-manage upgrade liberty_contract@head + +and finally, start your neutron-server again. + +If you are not interested in applying safe migration rules while the service is +running, you can still upgrade the database the old way, by stopping the service +and then applying all available rules: + +- neutron-db-manage upgrade head[s] + +It will apply all the rules from both the expand and the contract branches, in +the proper order. + + +Expand and Contract Scripts +--------------------------- + +Under the obsolete "branchless" design, a migration script indicated a specific +"version" of the schema and included directives that apply all necessary +changes to the database at once. If we look, for example, at the script +``2d2a8a565438_hierarchical_binding.py``, we will see:: + + # .../alembic_migrations/versions/2d2a8a565438_hierarchical_binding.py + + def upgrade(): + + # .. inspection code ... + + op.create_table( + 'ml2_port_binding_levels', + sa.Column('port_id', sa.String(length=36), nullable=False), + sa.Column('host', sa.String(length=255), nullable=False), + # ... more columns ... + ) + + for table in port_binding_tables: + op.execute(( + "INSERT INTO ml2_port_binding_levels " + "SELECT port_id, host, 0 AS level, driver, segment AS segment_id " + "FROM %s " + "WHERE host <> '' " + "AND driver <> '';" + ) % table) + + op.drop_constraint(fk_name_dvr[0], 'ml2_dvr_port_bindings', 'foreignkey') + op.drop_column('ml2_dvr_port_bindings', 'cap_port_filter') + op.drop_column('ml2_dvr_port_bindings', 'segment') + op.drop_column('ml2_dvr_port_bindings', 'driver') + + # ... more DROP instructions ... + +The above script contains directives from both the "expand" and "contract" +categories, as well as some data migrations.
The ``op.create_table`` +directive is an "expand"; it may be run safely while the old version of the +application still runs, as the old code simply doesn't look for this table. +The ``op.drop_constraint`` and ``op.drop_column`` directives are +"contract" directives (the drop column more so than the drop constraint); +running at least the ``op.drop_column`` directives means that the old version +of the application will fail, as it will attempt to access columns that no +longer exist. + +The data migrations in this script add new rows to the newly +added ``ml2_port_binding_levels`` table. + +Under the new migration script directory structure, the above script would be +split into two scripts; an "expand" and a "contract" script:: + + # expansion operations + # .../alembic_migrations/versions/liberty/expand/2bde560fc638_hierarchical_binding.py + + def upgrade(): + + op.create_table( + 'ml2_port_binding_levels', + sa.Column('port_id', sa.String(length=36), nullable=False), + sa.Column('host', sa.String(length=255), nullable=False), + # ... more columns ... + ) + + + # contraction operations + # .../alembic_migrations/versions/liberty/contract/4405aedc050e_hierarchical_binding.py + + def upgrade(): + + for table in port_binding_tables: + op.execute(( + "INSERT INTO ml2_port_binding_levels " + "SELECT port_id, host, 0 AS level, driver, segment AS segment_id " + "FROM %s " + "WHERE host <> '' " + "AND driver <> '';" + ) % table) + + op.drop_constraint(fk_name_dvr[0], 'ml2_dvr_port_bindings', 'foreignkey') + op.drop_column('ml2_dvr_port_bindings', 'cap_port_filter') + op.drop_column('ml2_dvr_port_bindings', 'segment') + op.drop_column('ml2_dvr_port_bindings', 'driver') + + # ... more DROP instructions ... + +The two scripts live in different subdirectories and belong to entirely +separate versioning streams. The "expand" operations are in the "expand" +script, and the "contract" operations are in the "contract" script.
+ +For the time being, data migration rules also belong to the contract branch. +There is an expectation that live data migrations will eventually move into +middleware that is aware of the different database schema elements to converge +on, but Neutron is not there yet. + +Scripts that contain only expansion or contraction rules do not require a split +into two parts. + +If a contraction script depends on a script from the expansion stream, the +following directive should be added in the contraction script:: + + depends_on = ('',) + + Tests to verify that database migrations and models are in sync --------------------------------------------------------------- diff --git a/doc/source/devref/dns_order.rst b/doc/source/devref/dns_order.rst new file mode 100644 index 00000000000..bb8397081c6 --- /dev/null +++ b/doc/source/devref/dns_order.rst @@ -0,0 +1,74 @@ +Keep DNS Nameserver Order Consistency In Neutron +================================================ + +In Neutron subnets, DNS nameservers are given priority when created or updated. +This means that if you create a subnet with multiple DNS servers, their order +will be retained and guests will receive them in the order in which they were +created. The same applies to update operations on subnets that add, remove, or +update DNS servers.
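Guests typically consume these servers through resolv.conf, where resolvers are tried top to bottom, so preserving the order matters. A minimal sketch of the idea (``render_resolv_conf`` is a hypothetical helper, not part of Neutron):

```python
def render_resolv_conf(dns_nameservers):
    """Render subnet DNS servers in priority order, as a guest would see them."""
    return "\n".join("nameserver %s" % ip for ip in dns_nameservers)


# The first server listed is the first one the guest resolver tries.
print(render_resolv_conf(["3.3.3.3", "2.2.2.2", "1.1.1.1"]))
```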
+ +Get Subnet Details Info +----------------------- +:: + + changzhi@stack:~/devstack$ neutron subnet-list + +--------------------------------------+------+-------------+--------------------------------------------+ + | id | name | cidr | allocation_pools | + +--------------------------------------+------+-------------+--------------------------------------------+ + | 1a2d261b-b233-3ab9-902e-88576a82afa6 | | 10.0.0.0/24 | {"start": "10.0.0.2", "end": "10.0.0.254"} | + +--------------------------------------+------+-------------+--------------------------------------------+ + + changzhi@stack:~/devstack$ neutron subnet-show 1a2d261b-b233-3ab9-902e-88576a82afa6 + +------------------+--------------------------------------------+ + | Field | Value | + +------------------+--------------------------------------------+ + | allocation_pools | {"start": "10.0.0.2", "end": "10.0.0.254"} | + | cidr | 10.0.0.0/24 | + | dns_nameservers | 1.1.1.1 | + | | 2.2.2.2 | + | | 3.3.3.3 | + | enable_dhcp | True | + | gateway_ip | 10.0.0.1 | + | host_routes | | + | id | 1a2d261b-b233-3ab9-902e-88576a82afa6 | + | ip_version | 4 | + | name | | + | network_id | a404518c-800d-2353-9193-57dbb42ac5ee | + | tenant_id | 3868290ab10f417390acbb754160dbb2 | + +------------------+--------------------------------------------+ + +Update Subnet DNS Nameservers +----------------------------- +:: + + neutron subnet-update 1a2d261b-b233-3ab9-902e-88576a82afa6 \ + --dns_nameservers list=true 3.3.3.3 2.2.2.2 1.1.1.1 + + changzhi@stack:~/devstack$ neutron subnet-show 1a2d261b-b233-3ab9-902e-88576a82afa6 + +------------------+--------------------------------------------+ + | Field | Value | + +------------------+--------------------------------------------+ + | allocation_pools | {"start": "10.0.0.2", "end": "10.0.0.254"} | + | cidr | 10.0.0.0/24 | + | dns_nameservers | 3.3.3.3 | + | | 2.2.2.2 | + | | 1.1.1.1 | + | enable_dhcp | True | + | gateway_ip | 10.0.0.1 | + | host_routes | | + | id |
1a2d261b-b233-3ab9-902e-88576a82afa6 | + | ip_version | 4 | + | name | | + | network_id | a404518c-800d-2353-9193-57dbb42ac5ee | + | tenant_id | 3868290ab10f417390acbb754160dbb2 | + +------------------+--------------------------------------------+ + +As shown in the above output, the order of the DNS nameservers has been updated. +New virtual machines deployed to this subnet will receive the DNS nameservers +in this new priority order. Existing virtual machines that have already been +deployed will not be immediately affected by changing the DNS nameserver order +on the neutron subnet. Virtual machines that are configured to get their IP +address via DHCP will detect the DNS nameserver order change +when their DHCP lease expires or when the virtual machine is restarted. +Existing virtual machines configured with a static IP address will never +detect the updated DNS nameserver order. diff --git a/doc/source/devref/fullstack_testing.rst b/doc/source/devref/fullstack_testing.rst index b761e9bf401..b8ed1dfe267 100644 --- a/doc/source/devref/fullstack_testing.rst +++ b/doc/source/devref/fullstack_testing.rst @@ -29,11 +29,11 @@ Since the test runs on the machine itself, full stack testing enables through the API and then assert that a namespace was created for it. Full stack tests run in the Neutron tree with Neutron resources alone. You -may use the Neutron API (Keystone is set to NOAUTH so that it's out of the -picture). VMs may be simulated with a helper class that contains a container- -like object in its own namespace and IP address. It has helper methods to send -different kinds of traffic. The "VM" may be connected to br-int or br-ex, -to simulate internal or external traffic. +may use the Neutron API (the Neutron server is set to NOAUTH so that Keystone +is out of the picture). Instances may be simulated with a helper class that +contains a container-like object in its own namespace and IP address. It has +helper methods to send different kinds of traffic.
The "instance" may be +connected to br-int or br-ex, to simulate internal or external traffic. Full stack testing can simulate multi node testing by starting an agent multiple times. Specifically, each node would have its own copy of the @@ -84,9 +84,12 @@ Long Term Goals * Currently we configure the OVS agent with VLANs segmentation (Only because it's easier). This allows us to validate most functionality, but we might need to support tunneling somehow. -* How do advanced services use the full stack testing infrastructure? I'd - assume we treat all of the infrastructure classes as a publicly consumed - API and have the XaaS repos import and use them. +* How will advanced services use the full stack testing infrastructure? Full + stack test infrastructure classes are expected to change quite a bit over + the coming months. This means that other repositories that import these + classes may break from time to time, or may copy them into their own + repositories instead. Since changes to the full stack testing infrastructure + are a given, XaaS repositories should copy it rather than import it directly. * Currently we configure the Neutron server with the ML2 plugin and the OVS mechanism driver.
We may modularize the topology configuration further to allow to rerun full stack tests against different Neutron plugins or ML2 diff --git a/doc/source/devref/index.rst b/doc/source/devref/index.rst index d54d442697d..7ed6143c62c 100644 --- a/doc/source/devref/index.rst +++ b/doc/source/devref/index.rst @@ -34,6 +34,7 @@ Programming HowTos and Tutorials contribute neutron_api sub_projects + client_command_extensions Neutron Internals @@ -52,6 +53,7 @@ Neutron Internals advanced_services oslo-incubator callbacks + dns_order Testing ------- diff --git a/doc/source/devref/l2_agents.rst b/doc/source/devref/l2_agents.rst index 83786dabe02..daa3b2a0047 100644 --- a/doc/source/devref/l2_agents.rst +++ b/doc/source/devref/l2_agents.rst @@ -5,3 +5,4 @@ L2 Agent Networking openvswitch_agent linuxbridge_agent + sriov_nic_agent diff --git a/doc/source/devref/neutron_api.rst b/doc/source/devref/neutron_api.rst index 6479b6d8b79..46492f0fccb 100644 --- a/doc/source/devref/neutron_api.rst +++ b/doc/source/devref/neutron_api.rst @@ -2,7 +2,7 @@ Neutron public API ================== Neutron main tree serves as a library for multiple subprojects that rely on -different modules from neutron.* namespace to accomodate their needs. +different modules from neutron.* namespace to accommodate their needs. Specifically, advanced service repositories and open source or vendor plugin/driver repositories do it. @@ -33,3 +33,34 @@ incompatible changes that could or are known to trigger those breakages. - commit: 6e693fc91dd79cfbf181e3b015a1816d985ad02c - solution: switch using oslo_service.* namespace; stop using ANY neutron.openstack.* contents. - severity: low (plugins must not rely on that subtree). + +* change: oslo.utils.fileutils adopted. + + - commit: I933d02aa48260069149d16caed02b020296b943a + - solution: switch to using oslo_utils.fileutils module; stop using neutron.openstack.fileutils module. + - severity: low (plugins must not rely on that subtree).
+ +* change: Reuse caller's session in DB methods. + + - commit: 47dd65cf986d712e9c6ca5dcf4420dfc44900b66 + - solution: Add context to args and reuse. + - severity: High (mostly undetected, because 3rd party CIs run Tempest tests only). + +* change: switches to oslo.log, removes neutron.openstack.common.log. + + - commit: 22328baf1f60719fcaa5b0fbd91c0a3158d09c31 + - solution: a) switch to oslo.log; b) copy log module into your tree and use it + (may not work due to conflicts between the module and oslo.log configuration options). + - severity: High (most CI systems are affected). + +* change: Implements reorganize-unit-test-tree spec. + + - commit: 1105782e3914f601b8f4be64939816b1afe8fb54 + - solution: Affected code needs to update existing unit tests to reflect new locations. + - severity: High (mostly undetected, because 3rd party CIs run Tempest tests only). + +* change: drop linux/ovs_lib compat layer. + + - commit: 3bbf473b49457c4afbfc23fd9f59be8aa08a257d + - solution: switch to using neutron/agent/common/ovs_lib.py. + - severity: High (most CI systems are affected). diff --git a/doc/source/devref/sriov_nic_agent.rst b/doc/source/devref/sriov_nic_agent.rst new file mode 100644 index 00000000000..a316877b071 --- /dev/null +++ b/doc/source/devref/sriov_nic_agent.rst @@ -0,0 +1,27 @@ +====================================== +L2 Networking with SR-IOV enabled NICs +====================================== +SR-IOV (Single Root I/O Virtualization) is a specification that allows +a PCIe device to appear to be multiple separate physical PCIe devices. +SR-IOV works by introducing the idea of physical functions (PFs) and virtual functions (VFs). +Physical functions (PFs) are full-featured PCIe functions. +Virtual functions (VFs) are “lightweight” functions that lack configuration resources. + +SR-IOV supports VLANs for L2 network isolation; other networking technologies +such as VXLAN/GRE may be supported in the future.
+ +The SR-IOV NIC agent manages the configuration of SR-IOV Virtual Functions that +connect VM instances running on the compute node to the public network. + +In most common deployments, there are compute and network nodes. +A compute node can support VM connectivity via an SR-IOV enabled NIC. The SR-IOV NIC agent +manages the Virtual Functions' admin state. In the future it will manage additional settings, +such as quality of service, rate limit settings, spoofcheck and more. +The network node will usually be deployed with either Open vSwitch or Linux Bridge to support network node functionality. + + +Further Reading +--------------- + +* `Nir Yechiel - SR-IOV Networking – Part I: Understanding the Basics `_ +* `SR-IOV Passthrough For Networking `_ diff --git a/doc/source/devref/sub_projects.rst b/doc/source/devref/sub_projects.rst index 28cc0c6c6c7..de9db8e5380 100644 --- a/doc/source/devref/sub_projects.rst +++ b/doc/source/devref/sub_projects.rst @@ -7,10 +7,10 @@ part of the overall Neutron project. Inclusion Process ----------------- -The process for proposing the move of a repo into openstack/ and under -the Neutron project is to propose a patch to the openstack/governance -repository. For example, to propose moving networking-foo, one -would add the following entry under Neutron in reference/projects.yaml:: +The process for proposing a repo into openstack/ and under the Neutron +project is to propose a patch to the openstack/governance repository. +For example, to propose networking-foo, one would add the following entry +under Neutron in reference/projects.yaml:: - repo: openstack/networking-foo tags: @@ -28,6 +28,11 @@ repositories are within the existing approved scope of the project.
http://git.openstack.org/cgit/openstack/governance/commit/?id=321a020cbcaada01976478ea9f677ebb4df7bd6d +To create the project, if it does not already exist, follow the steps +explained in: + + http://docs.openstack.org/infra/manual/creators.html + Responsibilities ---------------- @@ -86,14 +91,14 @@ repo but are summarized here to describe the functionality they provide. +-------------------------------+-----------------------+ | networking-edge-vpn_ | vpn | +-------------------------------+-----------------------+ +| networking-fujitsu_ | ml2 | ++-------------------------------+-----------------------+ | networking-hyperv_ | ml2 | +-------------------------------+-----------------------+ | networking-ibm_ | ml2,l3 | +-------------------------------+-----------------------+ | networking-l2gw_ | l2 | +-------------------------------+-----------------------+ -| networking-metaplugin_ | core | -+-------------------------------+-----------------------+ | networking-midonet_ | core,lb | +-------------------------------+-----------------------+ | networking-mlnx_ | ml2 | @@ -205,6 +210,15 @@ Edge VPN * Git: https://git.openstack.org/cgit/stackforge/networking-edge-vpn * Launchpad: https://launchpad.net/edge-vpn +.. _networking-fujitsu: + +FUJITSU +------- + +* Git: https://git.openstack.org/cgit/openstack/networking-fujitsu +* Launchpad: https://launchpad.net/networking-fujitsu +* PyPI: https://pypi.python.org/pypi/networking-fujitsu + .. _networking-hyperv: Hyper-V @@ -239,13 +253,6 @@ L2 Gateway * Git: https://git.openstack.org/cgit/openstack/networking-l2gw * Launchpad: https://launchpad.net/networking-l2gw -.. _networking-metaplugin: - -Metaplugin ---------- - -* Git: https://github.com/ntt-sic/networking-metaplugin - .. _networking-midonet: MidoNet diff --git a/doc/source/policies/bugs.rst b/doc/source/policies/bugs.rst index d7d5f23d050..5c1aaf238b9 100644 --- a/doc/source/policies/bugs.rst +++ b/doc/source/policies/bugs.rst @@ -13,7 +13,7 @@ triaging.
The bug czar is expected to communicate with the various Neutron teams been triaged. In addition, the bug czar should be reporting "High" and "Critical" priority bugs to both the PTL and the core reviewer team during each weekly Neutron meeting. -The current Neutron bug czar is Eugene Nikanorov (IRC nick enikanorov). +The current Neutron bug czar is Kyle Mestery (IRC nick mestery). Plugin and Driver Repositories ------------------------------ diff --git a/doc/source/policies/core-reviewers.rst b/doc/source/policies/core-reviewers.rst index 0c690aa3ae9..d6a54d05fdf 100644 --- a/doc/source/policies/core-reviewers.rst +++ b/doc/source/policies/core-reviewers.rst @@ -100,9 +100,14 @@ updating the core review team for the sub-project's repositories. | Area | Lieutenant | IRC nick | +========================+===========================+======================+ | dragonflow | Eran Gampel | gampel | +| | Gal Sagie | gsagie | +------------------------+---------------------------+----------------------+ | networking-l2gw | Sukhdev Kapur | sukhdev | +------------------------+---------------------------+----------------------+ +| networking-midonet | Ryu Ishimoto | ryu_ishimoto | +| | Jaume Devesa | devvesa | +| | YAMAMOTO Takashi | yamamoto | ++------------------------+---------------------------+----------------------+ | networking-odl | Flavio Fernandes | flaviof | | | Kyle Mestery | mestery | +------------------------+---------------------------+----------------------+ @@ -110,6 +115,10 @@ updating the core review team for the sub-project's repositories. 
+------------------------+---------------------------+----------------------+ | networking-ovn | Russell Bryant | russellb | +------------------------+---------------------------+----------------------+ +| networking-plumgrid | Fawad Khaliq | fawadkhaliq | ++------------------------+---------------------------+----------------------+ +| networking-sfc | Cathy Zhang | cathy | ++------------------------+---------------------------+----------------------+ | networking-vshpere | Vivekanandan Narasimhan | viveknarasimhan | +------------------------+---------------------------+----------------------+ | octavia | German Eichberger | xgerman | diff --git a/etc/metadata_agent.ini b/etc/metadata_agent.ini index ca31c7fe976..e436069e5f9 100644 --- a/etc/metadata_agent.ini +++ b/etc/metadata_agent.ini @@ -45,7 +45,7 @@ admin_password = %SERVICE_PASSWORD% # Location of Metadata Proxy UNIX domain socket # metadata_proxy_socket = $state_path/metadata_proxy -# Metadata Proxy UNIX domain socket mode, 3 values allowed: +# Metadata Proxy UNIX domain socket mode, 4 values allowed: # 'deduce': deduce mode from metadata_proxy_user/group values, # 'user': set metadata proxy socket mode to 0o644, to use when # metadata_proxy_user is agent effective user or root, diff --git a/etc/neutron.conf b/etc/neutron.conf index f5a6da62767..ca3baa9cf32 100755 --- a/etc/neutron.conf +++ b/etc/neutron.conf @@ -593,7 +593,7 @@ [quotas] # Default driver to use for quota checks -# quota_driver = neutron.db.quota_db.DbQuotaDriver +# quota_driver = neutron.db.quota.driver.DbQuotaDriver # Resource name(s) that are supported in quota features # This option is deprecated for removal in the M release, please refrain from using it diff --git a/etc/neutron/plugins/metaplugin/metaplugin.ini b/etc/neutron/plugins/metaplugin/metaplugin.ini deleted file mode 100644 index 2b9bfa5ea35..00000000000 --- a/etc/neutron/plugins/metaplugin/metaplugin.ini +++ /dev/null @@ -1,31 +0,0 @@ -# Config file for Metaplugin - -[meta] 
-# Comma separated list of flavor:neutron_plugin for plugins to load. -# Extension method is searched in the list order and the first one is used. -plugin_list = 'ml2:neutron.plugins.ml2.plugin.Ml2Plugin,nvp:neutron.plugins.vmware.plugin.NsxPluginV2' - -# Comma separated list of flavor:neutron_plugin for L3 service plugins -# to load. -# This is intended for specifying L2 plugins which support L3 functions. -# If you use a router service plugin, set this blank. -l3_plugin_list = - -# Default flavor to use, when flavor:network is not specified at network -# creation. -default_flavor = 'nvp' - -# Default L3 flavor to use, when flavor:router is not specified at router -# creation. -# Ignored if 'l3_plugin_list' is blank. -default_l3_flavor = - -# Comma separated list of supported extension aliases. -supported_extension_aliases = 'provider,binding,agent,dhcp_agent_scheduler' - -# Comma separated list of method:flavor to select specific plugin for a method. -# This has priority over method search order based on 'plugin_list'. -extension_map = 'get_port_stats:nvp' - -# Specifies flavor for plugin to handle 'q-plugin' RPC requests. -rpc_flavor = 'ml2' diff --git a/etc/neutron/plugins/ml2/ml2_conf_cisco.ini b/etc/neutron/plugins/ml2/ml2_conf_cisco.ini index 699b2ec3724..7900047ad2b 100644 --- a/etc/neutron/plugins/ml2/ml2_conf_cisco.ini +++ b/etc/neutron/plugins/ml2/ml2_conf_cisco.ini @@ -137,76 +137,6 @@ # mcast_ranges = # Example: mcast_ranges = 224.0.0.1:224.0.0.3,224.0.1.1:224.0.1. 
-[ml2_cisco_apic] - -# Hostname:port list of APIC controllers -# apic_hosts = 1.1.1.1:80, 1.1.1.2:8080, 1.1.1.3:80 - -# Username for the APIC controller -# apic_username = user - -# Password for the APIC controller -# apic_password = password - -# Whether use SSl for connecting to the APIC controller or not -# apic_use_ssl = True - -# How to map names to APIC: use_uuid or use_name -# apic_name_mapping = use_name - -# Names for APIC objects used by Neutron -# Note: When deploying multiple clouds against one APIC, -# these names must be unique between the clouds. -# apic_vmm_domain = openstack -# apic_vlan_ns_name = openstack_ns -# apic_node_profile = openstack_profile -# apic_entity_profile = openstack_entity -# apic_function_profile = openstack_function -# apic_app_profile_name = openstack_app -# Agent timers for State reporting and topology discovery -# apic_sync_interval = 30 -# apic_agent_report_interval = 30 -# apic_agent_poll_interval = 2 - -# Specify your network topology. -# This section indicates how your compute nodes are connected to the fabric's -# switches and ports. The format is as follows: -# -# [apic_switch:] -# , = -# -# You can have multiple sections, one for each switch in your fabric that is -# participating in OpenStack. e.g. -# -# [apic_switch:17] -# ubuntu,ubuntu1 = 1/10 -# ubuntu2,ubuntu3 = 1/11 -# -# [apic_switch:18] -# ubuntu5,ubuntu6 = 1/1 -# ubuntu7,ubuntu8 = 1/2 - -# Describe external connectivity. -# In this section you can specify the external network configuration in order -# for the plugin to be able to teach the fabric how to route the internal -# traffic to the outside world. 
The external connectivity configuration -# format is as follows: -# -# [apic_external_network:] -# switch = -# port = -# encap = -# cidr_exposed = -# gateway_ip = -# -# An example follows: -# [apic_external_network:network_ext] -# switch=203 -# port=1/34 -# encap=vlan-100 -# cidr_exposed=10.10.40.2/16 -# gateway_ip=10.10.40.1 - [ml2_cisco_ucsm] # Cisco UCS Manager IP address diff --git a/etc/neutron/rootwrap.d/cisco-apic.filters b/etc/neutron/rootwrap.d/cisco-apic.filters deleted file mode 100644 index a74a3602d0b..00000000000 --- a/etc/neutron/rootwrap.d/cisco-apic.filters +++ /dev/null @@ -1,17 +0,0 @@ -# neutron-rootwrap command filters for nodes on which neutron is -# expected to control network -# -# This file should be owned by (and only-writeable by) the root user - -# format seems to be -# cmd-name: filter-name, raw-command, user, args - -[Filters] - -# cisco-apic filters -lldpctl: CommandFilter, lldpctl, root - -# ip_lib filters -ip: IpFilter, ip, root -find: RegExpFilter, find, root, find, /sys/class/net, -maxdepth, 1, -type, l, -printf, %.* -ip_exec: IpNetnsExecFilter, ip, root diff --git a/etc/policy.json b/etc/policy.json index eaf6d685ffe..72756bdb630 100644 --- a/etc/policy.json +++ b/etc/policy.json @@ -163,5 +163,16 @@ "get_service_provider": "rule:regular_user", "get_lsn": "rule:admin_only", - "create_lsn": "rule:admin_only" + "create_lsn": "rule:admin_only", + + "create_flavor": "rule:admin_only", + "update_flavor": "rule:admin_only", + "delete_flavor": "rule:admin_only", + "get_flavors": "rule:regular_user", + "get_flavor": "rule:regular_user", + "create_service_profile": "rule:admin_only", + "update_service_profile": "rule:admin_only", + "delete_service_profile": "rule:admin_only", + "get_service_profiles": "rule:admin_only", + "get_service_profile": "rule:admin_only" } diff --git a/neutron/agent/common/ovs_lib.py b/neutron/agent/common/ovs_lib.py index 49c7a6e9c19..de8cf3dd83a 100644 --- a/neutron/agent/common/ovs_lib.py +++ 
b/neutron/agent/common/ovs_lib.py @@ -304,7 +304,12 @@ class OVSBridge(BaseOVS): ('options', {'peer': remote_name})] return self.add_port(local_name, *attrs) + def get_iface_name_list(self): + # get the interface name list for this bridge + return self.ovsdb.list_ifaces(self.br_name).execute(check_error=True) + def get_port_name_list(self): + # get the port name list for this bridge return self.ovsdb.list_ports(self.br_name).execute(check_error=True) def get_port_stats(self, port_name): @@ -557,7 +562,7 @@ class DeferredOVSBridge(object): key=operator.itemgetter(0)) itemgetter_1 = operator.itemgetter(1) for action, action_flow_list in grouped: - flows = map(itemgetter_1, action_flow_list) + flows = list(map(itemgetter_1, action_flow_list)) self.br.do_action_flows(action, flows) def __enter__(self): diff --git a/neutron/agent/common/utils.py b/neutron/agent/common/utils.py index 2eddabd6db7..2b50da21704 100644 --- a/neutron/agent/common/utils.py +++ b/neutron/agent/common/utils.py @@ -15,10 +15,33 @@ import os +from oslo_log import log as logging +from oslo_utils import importutils + +from neutron.i18n import _LE + if os.name == 'nt': from neutron.agent.windows import utils else: from neutron.agent.linux import utils + +LOG = logging.getLogger(__name__) + + execute = utils.execute + + +def load_interface_driver(conf): + if not conf.interface_driver: + LOG.error(_LE('An interface driver must be specified')) + raise SystemExit(1) + try: + return importutils.import_object(conf.interface_driver, conf) + except ImportError as e: + LOG.error(_LE("Error importing interface driver " + "'%(driver)s': %(inner)s"), + {'driver': conf.interface_driver, + 'inner': e}) + raise SystemExit(1) diff --git a/neutron/agent/dhcp/agent.py b/neutron/agent/dhcp/agent.py index 4d78d6428b4..d1294db6ac0 100644 --- a/neutron/agent/dhcp/agent.py +++ b/neutron/agent/dhcp/agent.py @@ -26,7 +26,6 @@ from oslo_utils import importutils from neutron.agent.linux import dhcp from neutron.agent.linux 
import external_process -from neutron.agent.linux import utils as linux_utils from neutron.agent.metadata import driver as metadata_driver from neutron.agent import rpc as agent_rpc from neutron.common import constants @@ -63,7 +62,7 @@ class DhcpAgent(manager.Manager): ctx, self.conf.use_namespaces) # create dhcp dir to store dhcp info dhcp_dir = os.path.dirname("/%s/dhcp/" % self.conf.state_path) - linux_utils.ensure_dir(dhcp_dir) + utils.ensure_dir(dhcp_dir) self.dhcp_version = self.dhcp_driver_cls.check_version() self._populate_networks_cache() self._process_monitor = external_process.ProcessMonitor( diff --git a/neutron/agent/firewall.py b/neutron/agent/firewall.py index 8ce8e7b16bf..afb0f18f59e 100644 --- a/neutron/agent/firewall.py +++ b/neutron/agent/firewall.py @@ -19,6 +19,10 @@ import contextlib import six +INGRESS_DIRECTION = 'ingress' +EGRESS_DIRECTION = 'egress' + + @six.add_metaclass(abc.ABCMeta) class FirewallDriver(object): """Firewall Driver base class. diff --git a/neutron/agent/l3/agent.py b/neutron/agent/l3/agent.py index e3379bfae65..23906bc3d73 100644 --- a/neutron/agent/l3/agent.py +++ b/neutron/agent/l3/agent.py @@ -21,9 +21,9 @@ import oslo_messaging from oslo_service import loopingcall from oslo_service import periodic_task from oslo_utils import excutils -from oslo_utils import importutils from oslo_utils import timeutils +from neutron.agent.common import utils as common_utils from neutron.agent.l3 import dvr from neutron.agent.l3 import dvr_edge_router as dvr_router from neutron.agent.l3 import dvr_local_router as dvr_local_router @@ -165,15 +165,7 @@ class L3NATAgent(firewall_l3_agent.FWaaSL3AgentRpcCallback, config=self.conf, resource_type='router') - try: - self.driver = importutils.import_object( - self.conf.interface_driver, - self.conf - ) - except Exception: - LOG.error(_LE("Error importing interface driver " - "'%s'"), self.conf.interface_driver) - raise SystemExit(1) + self.driver = common_utils.load_interface_driver(self.conf) 
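The hunk above replaces the L3 agent's inline `importutils.import_object` block with the new shared `neutron.agent.common.utils.load_interface_driver(conf)` helper (the DHCP `DeviceManager` later in this diff switches to it too). A dependency-free sketch of the pattern, with a plain dict standing in for the oslo.config object and `importlib` standing in for `importutils.import_object` — both simplifications, not the real neutron code:

```python
import importlib


def load_interface_driver(conf):
    # Sketch of the shared helper introduced in this diff: fail fast with
    # SystemExit(1) when no driver is configured, otherwise import the
    # dotted "module.Class" path and instantiate it with the config, as
    # importutils.import_object does in the real implementation.
    driver_path = conf.get('interface_driver')
    if not driver_path:
        raise SystemExit(1)
    module_name, _, class_name = driver_path.rpartition('.')
    try:
        driver_cls = getattr(importlib.import_module(module_name), class_name)
    except (ImportError, AttributeError, ValueError):
        raise SystemExit(1)
    return driver_cls(conf)
```

Centralizing this removes the near-duplicate import/error-handling blocks from both agents, so a bad `interface_driver` setting fails the same way everywhere.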
self.context = n_context.get_admin_context_without_session() self.plugin_rpc = L3PluginApi(topics.L3PLUGIN, host) diff --git a/neutron/agent/l3/config.py b/neutron/agent/l3/config.py index e98147765e2..edb5c5c90f1 100644 --- a/neutron/agent/l3/config.py +++ b/neutron/agent/l3/config.py @@ -84,9 +84,11 @@ OPTS = [ cfg.StrOpt('metadata_access_mark', default='0x1', help=_('Iptables mangle mark used to mark metadata valid ' - 'requests')), + 'requests. This mark will be masked with 0xffff so ' + 'that only the lower 16 bits will be used.')), cfg.StrOpt('external_ingress_mark', default='0x2', help=_('Iptables mangle mark used to mark ingress from ' - 'external network')), + 'external network. This mark will be masked with ' + '0xffff so that only the lower 16 bits will be used.')), ] diff --git a/neutron/agent/l3/dvr_edge_router.py b/neutron/agent/l3/dvr_edge_router.py index 167df080a11..b68af5cdecf 100644 --- a/neutron/agent/l3/dvr_edge_router.py +++ b/neutron/agent/l3/dvr_edge_router.py @@ -28,13 +28,13 @@ class DvrEdgeRouter(dvr_local_router.DvrLocalRouter): def __init__(self, agent, host, *args, **kwargs): super(DvrEdgeRouter, self).__init__(agent, host, *args, **kwargs) self.snat_namespace = None + self.snat_iptables_manager = None def external_gateway_added(self, ex_gw_port, interface_name): super(DvrEdgeRouter, self).external_gateway_added( ex_gw_port, interface_name) if self._is_this_snat_host(): - snat_ports = self.get_snat_interfaces() - self._create_dvr_gateway(ex_gw_port, interface_name, snat_ports) + self._create_dvr_gateway(ex_gw_port, interface_name) def external_gateway_updated(self, ex_gw_port, interface_name): if not self._is_this_snat_host(): @@ -70,8 +70,7 @@ class DvrEdgeRouter(dvr_local_router.DvrLocalRouter): if not self._is_this_snat_host(): return - snat_ports = self.get_snat_interfaces() - sn_port = self._map_internal_interfaces(port, snat_ports) + sn_port = self.get_snat_port_for_internal_port(port) if not sn_port: return @@ -92,7 +91,7 @@ 
class DvrEdgeRouter(dvr_local_router.DvrLocalRouter): if not self.ex_gw_port: return - sn_port = self._map_internal_interfaces(port, self.snat_ports) + sn_port = self.get_snat_port_for_internal_port(port) if not sn_port: return @@ -108,12 +107,11 @@ class DvrEdgeRouter(dvr_local_router.DvrLocalRouter): self.driver.unplug(snat_interface, namespace=ns_name, prefix=prefix) - def _create_dvr_gateway(self, ex_gw_port, gw_interface_name, - snat_ports): + def _create_dvr_gateway(self, ex_gw_port, gw_interface_name): """Create SNAT namespace.""" snat_ns = self.create_snat_namespace() # connect snat_ports to br_int from SNAT namespace - for port in snat_ports: + for port in self.get_snat_interfaces(): # create interface_name interface_name = self.get_snat_int_device_name(port['id']) self._internal_network_added( @@ -145,4 +143,26 @@ class DvrEdgeRouter(dvr_local_router.DvrLocalRouter): return long_name[:self.driver.DEV_NAME_LEN] def _is_this_snat_host(self): - return self.get_gw_port_host() == self.host + host = self.router.get('gw_port_host') + if not host: + LOG.debug("gw_port_host missing from router: %s", + self.router['id']) + return host == self.host + + def _handle_router_snat_rules(self, ex_gw_port, interface_name): + if not self._is_this_snat_host(): + return + if not self.get_ex_gw_port(): + return + + if not self.snat_iptables_manager: + LOG.debug("DVR router: no snat rules to be handled") + return + + with self.snat_iptables_manager.defer_apply(): + self._empty_snat_chains(self.snat_iptables_manager) + + # NOTE DVR doesn't add the jump to float snat like the super class. 
+ + self._add_snat_rules(ex_gw_port, self.snat_iptables_manager, + interface_name) diff --git a/neutron/agent/l3/dvr_local_router.py b/neutron/agent/l3/dvr_local_router.py index e306c7c91f0..805b31ed7f7 100755 --- a/neutron/agent/l3/dvr_local_router.py +++ b/neutron/agent/l3/dvr_local_router.py @@ -19,7 +19,7 @@ from oslo_log import log as logging from oslo_utils import excutils from neutron.agent.l3 import dvr_fip_ns -from neutron.agent.l3 import router_info as router +from neutron.agent.l3 import dvr_router_base from neutron.agent.linux import ip_lib from neutron.common import constants as l3_constants from neutron.common import exceptions @@ -31,15 +31,11 @@ LOG = logging.getLogger(__name__) MASK_30 = 0x3fffffff -class DvrLocalRouter(router.RouterInfo): +class DvrLocalRouter(dvr_router_base.DvrRouterBase): def __init__(self, agent, host, *args, **kwargs): - super(DvrLocalRouter, self).__init__(*args, **kwargs) - - self.agent = agent - self.host = host + super(DvrLocalRouter, self).__init__(agent, host, *args, **kwargs) self.floating_ips_dict = {} - self.snat_iptables_manager = None # Linklocal subnet for router and floating IP namespace link self.rtr_fip_subnet = None self.dist_fip_count = None @@ -50,9 +46,6 @@ class DvrLocalRouter(router.RouterInfo): floating_ips = super(DvrLocalRouter, self).get_floating_ips() return [i for i in floating_ips if i['host'] == self.host] - def get_snat_interfaces(self): - return self.router.get(l3_constants.SNAT_ROUTER_INTF_KEY, []) - def _handle_fip_nat_rules(self, interface_name, action): """Configures NAT rules for Floating IPs for DVR. 
@@ -201,17 +194,6 @@ class DvrLocalRouter(router.RouterInfo): subnet_id, 'add') - def _map_internal_interfaces(self, int_port, snat_ports): - """Return the SNAT port for the given internal interface port.""" - fixed_ip = int_port['fixed_ips'][0] - subnet_id = fixed_ip['subnet_id'] - match_port = [p for p in snat_ports if - p['fixed_ips'][0]['subnet_id'] == subnet_id] - if match_port: - return match_port[0] - else: - LOG.error(_LE('DVR: no map match_port found!')) - @staticmethod def _get_snat_idx(ip_cidr): """Generate index for DVR snat rules and route tables. @@ -291,13 +273,6 @@ class DvrLocalRouter(router.RouterInfo): """Removes rules and routes for SNAT redirection.""" self._snat_redirect_modify(gateway, sn_port, sn_int, is_add=False) - def get_gw_port_host(self): - host = self.router.get('gw_port_host') - if not host: - LOG.debug("gw_port_host missing from router: %s", - self.router['id']) - return host - def internal_network_added(self, port): super(DvrLocalRouter, self).internal_network_added(port) @@ -313,8 +288,7 @@ class DvrLocalRouter(router.RouterInfo): if not ex_gw_port: return - snat_ports = self.get_snat_interfaces() - sn_port = self._map_internal_interfaces(port, snat_ports) + sn_port = self.get_snat_port_for_internal_port(port) if not sn_port: return @@ -325,7 +299,7 @@ class DvrLocalRouter(router.RouterInfo): if not self.ex_gw_port: return - sn_port = self._map_internal_interfaces(port, self.snat_ports) + sn_port = self.get_snat_port_for_internal_port(port) if not sn_port: return @@ -355,14 +329,13 @@ class DvrLocalRouter(router.RouterInfo): ip_wrapr = ip_lib.IPWrapper(namespace=self.ns_name) ip_wrapr.netns.execute(['sysctl', '-w', 'net.ipv4.conf.all.send_redirects=0']) - snat_ports = self.get_snat_interfaces() for p in self.internal_ports: - gateway = self._map_internal_interfaces(p, snat_ports) + gateway = self.get_snat_port_for_internal_port(p) id_name = self.get_internal_device_name(p['id']) if gateway: self._snat_redirect_add(gateway, p, 
id_name) - for port in snat_ports: + for port in self.get_snat_interfaces(): for ip in port['fixed_ips']: self._update_arp_entry(ip['ip_address'], port['mac_address'], @@ -379,35 +352,13 @@ class DvrLocalRouter(router.RouterInfo): to_fip_interface_name = ( self.get_external_device_interface_name(ex_gw_port)) self.process_floating_ip_addresses(to_fip_interface_name) - snat_ports = self.get_snat_interfaces() for p in self.internal_ports: - gateway = self._map_internal_interfaces(p, snat_ports) + gateway = self.get_snat_port_for_internal_port(p) internal_interface = self.get_internal_device_name(p['id']) self._snat_redirect_remove(gateway, p, internal_interface) - def _handle_router_snat_rules(self, ex_gw_port, - interface_name, action): - if not self.snat_iptables_manager: - LOG.debug("DVR router: no snat rules to be handled") - return - - with self.snat_iptables_manager.defer_apply(): - self._empty_snat_chains(self.snat_iptables_manager) - - # NOTE DVR doesn't add the jump to float snat like the super class. - - self._add_snat_rules(ex_gw_port, self.snat_iptables_manager, - interface_name, action) - - def perform_snat_action(self, snat_callback, *args): - # NOTE DVR skips this step in a few cases... - if not self.get_ex_gw_port(): - return - if self.get_gw_port_host() != self.host: - return - - super(DvrLocalRouter, - self).perform_snat_action(snat_callback, *args) + def _handle_router_snat_rules(self, ex_gw_port, interface_name): + pass def process_external(self, agent): ex_gw_port = self.get_ex_gw_port() diff --git a/neutron/agent/l3/dvr_router_base.py b/neutron/agent/l3/dvr_router_base.py new file mode 100644 index 00000000000..0c872c4c345 --- /dev/null +++ b/neutron/agent/l3/dvr_router_base.py @@ -0,0 +1,42 @@ +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. 
You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. + +from oslo_log import log as logging + +from neutron.agent.l3 import router_info as router +from neutron.common import constants as l3_constants +from neutron.i18n import _LE + +LOG = logging.getLogger(__name__) + + +class DvrRouterBase(router.RouterInfo): + def __init__(self, agent, host, *args, **kwargs): + super(DvrRouterBase, self).__init__(*args, **kwargs) + + self.agent = agent + self.host = host + + def get_snat_interfaces(self): + return self.router.get(l3_constants.SNAT_ROUTER_INTF_KEY, []) + + def get_snat_port_for_internal_port(self, int_port): + """Return the SNAT port for the given internal interface port.""" + snat_ports = self.get_snat_interfaces() + fixed_ip = int_port['fixed_ips'][0] + subnet_id = fixed_ip['subnet_id'] + match_port = [p for p in snat_ports + if p['fixed_ips'][0]['subnet_id'] == subnet_id] + if match_port: + return match_port[0] + else: + LOG.error(_LE('DVR: no map match_port found!')) diff --git a/neutron/agent/l3/ha.py b/neutron/agent/l3/ha.py index 95f3fd76355..76363c9afba 100644 --- a/neutron/agent/l3/ha.py +++ b/neutron/agent/l3/ha.py @@ -22,6 +22,7 @@ import webob from neutron.agent.linux import keepalived from neutron.agent.linux import utils as agent_utils +from neutron.common import utils as common_utils from neutron.i18n import _LI from neutron.notifiers import batch_notifier @@ -157,4 +158,4 @@ class AgentMixin(object): def _init_ha_conf_path(self): ha_full_path = os.path.dirname("/%s/" % self.conf.ha_confs_path) - agent_utils.ensure_dir(ha_full_path) + common_utils.ensure_dir(ha_full_path) diff --git 
a/neutron/agent/l3/ha_router.py b/neutron/agent/l3/ha_router.py index 31a2c18dc58..e7b7b5020af 100644 --- a/neutron/agent/l3/ha_router.py +++ b/neutron/agent/l3/ha_router.py @@ -200,6 +200,15 @@ class HaRouter(router.RouterInfo): if enable_ra_on_gw: self.driver.configure_ipv6_ra(self.ns_name, interface_name) + def _add_extra_subnet_onlink_routes(self, ex_gw_port, interface_name): + extra_subnets = ex_gw_port.get('extra_subnets', []) + instance = self._get_keepalived_instance() + onlink_route_cidrs = set(s['cidr'] for s in extra_subnets) + instance.virtual_routes.extra_subnets = [ + keepalived.KeepalivedVirtualRoute( + onlink_route_cidr, None, interface_name, scope='link') for + onlink_route_cidr in onlink_route_cidrs] + def _should_delete_ipv6_lladdr(self, ipv6_lladdr): """Only the master should have any IP addresses configured. Let keepalived manage IPv6 link local addresses, the same way we let @@ -235,6 +244,7 @@ class HaRouter(router.RouterInfo): for ip_cidr in common_utils.fixed_ip_cidrs(ex_gw_port['fixed_ips']): self._add_vip(ip_cidr, interface_name) self._add_default_gw_virtual_route(ex_gw_port, interface_name) + self._add_extra_subnet_onlink_routes(ex_gw_port, interface_name) def add_floating_ip(self, fip, interface_name, device): fip_ip = fip['floating_ip_address'] @@ -353,6 +363,7 @@ class HaRouter(router.RouterInfo): if self.ha_port: self.enable_keepalived() + @common_utils.synchronized('enable_radvd') def enable_radvd(self, internal_ports=None): if (self.keepalived_manager.get_process().active and self.ha_state == 'master'): diff --git a/neutron/agent/l3/namespace_manager.py b/neutron/agent/l3/namespace_manager.py index 51464e4e5bd..31df5d22747 100644 --- a/neutron/agent/l3/namespace_manager.py +++ b/neutron/agent/l3/namespace_manager.py @@ -12,6 +12,7 @@ from oslo_log import log as logging +from neutron.agent.l3 import dvr_fip_ns from neutron.agent.l3 import dvr_snat_ns from neutron.agent.l3 import namespaces from neutron.agent.linux import 
external_process @@ -42,6 +43,12 @@ class NamespaceManager(object): agent restarts gracefully. """ + ns_prefix_to_class_map = { + namespaces.NS_PREFIX: namespaces.RouterNamespace, + dvr_snat_ns.SNAT_NS_PREFIX: dvr_snat_ns.SnatNamespace, + dvr_fip_ns.FIP_NS_PREFIX: dvr_fip_ns.FipNamespace, + } + def __init__(self, agent_conf, driver, clean_stale, metadata_driver=None): """Initialize the NamespaceManager. @@ -95,7 +102,7 @@ class NamespaceManager(object): :returns: tuple with prefix and id or None if no prefix matches """ prefix = namespaces.get_prefix_from_ns_name(ns_name) - if prefix in (namespaces.NS_PREFIX, dvr_snat_ns.SNAT_NS_PREFIX): + if prefix in self.ns_prefix_to_class_map: identifier = namespaces.get_id_from_ns_name(ns_name) return (prefix, identifier) @@ -123,10 +130,7 @@ class NamespaceManager(object): self._cleanup(ns_prefix, ns_id) def _cleanup(self, ns_prefix, ns_id): - if ns_prefix == namespaces.NS_PREFIX: - ns_class = namespaces.RouterNamespace - else: - ns_class = dvr_snat_ns.SnatNamespace + ns_class = self.ns_prefix_to_class_map[ns_prefix] ns = ns_class(ns_id, self.agent_conf, self.driver, use_ipv6=False) try: if self.metadata_driver: diff --git a/neutron/agent/l3/router_info.py b/neutron/agent/l3/router_info.py index 978f2f8c8a3..8b25f0a6a33 100644 --- a/neutron/agent/l3/router_info.py +++ b/neutron/agent/l3/router_info.py @@ -30,7 +30,6 @@ LOG = logging.getLogger(__name__) INTERNAL_DEV_PREFIX = namespaces.INTERNAL_DEV_PREFIX EXTERNAL_DEV_PREFIX = namespaces.EXTERNAL_DEV_PREFIX -EXTERNAL_INGRESS_MARK_MASK = '0xffffffff' FLOATINGIP_STATUS_NOCHANGE = object() @@ -45,7 +44,6 @@ class RouterInfo(object): self.router_id = router_id self.ex_gw_port = None self._snat_enabled = None - self._snat_action = None self.internal_ports = [] self.floating_ips = set() # Invoke the setter for establishing initial SNAT action @@ -97,13 +95,6 @@ class RouterInfo(object): return # enable_snat by default if it wasn't specified by plugin self._snat_enabled = 
self._router.get('enable_snat', True) - # Set a SNAT action for the router - if self._router.get('gw_port'): - self._snat_action = ('add_rules' if self._snat_enabled - else 'remove_rules') - elif self.ex_gw_port: - # Gateway port was removed, remove rules - self._snat_action = 'remove_rules' @property def is_ha(self): @@ -119,14 +110,6 @@ class RouterInfo(object): def get_external_device_interface_name(self, ex_gw_port): return self.get_external_device_name(ex_gw_port['id']) - def perform_snat_action(self, snat_callback, *args): - # Process SNAT rules for attached subnets - if self._snat_action: - snat_callback(self._router.get('gw_port'), - *args, - action=self._snat_action) - self._snat_action = None - def _update_routing_table(self, operation, route): cmd = ['ip', 'route', operation, 'to', route['destination'], 'via', route['nexthop']] @@ -534,27 +517,38 @@ class RouterInfo(object): prefix=EXTERNAL_DEV_PREFIX) # Process SNAT rules for external gateway - self.perform_snat_action(self._handle_router_snat_rules, - interface_name) + gw_port = self._router.get('gw_port') + self._handle_router_snat_rules(gw_port, interface_name) def external_gateway_nat_rules(self, ex_gw_ip, interface_name): - mark = self.agent_conf.external_ingress_mark - rules = [('POSTROUTING', '! -i %(interface_name)s ' - '! -o %(interface_name)s -m conntrack ! ' - '--ctstate DNAT -j ACCEPT' % - {'interface_name': interface_name}), - ('snat', '-o %s -j SNAT --to-source %s' % - (interface_name, ex_gw_ip)), - ('snat', '-m mark ! --mark %s ' - '-m conntrack --ctstate DNAT ' - '-j SNAT --to-source %s' % (mark, ex_gw_ip))] - return rules + dont_snat_traffic_to_internal_ports_if_not_to_floating_ip = ( + 'POSTROUTING', '! -i %(interface_name)s ' + '! -o %(interface_name)s -m conntrack ! 
' + '--ctstate DNAT -j ACCEPT' % + {'interface_name': interface_name}) + + snat_normal_external_traffic = ( + 'snat', '-o %s -j SNAT --to-source %s' % + (interface_name, ex_gw_ip)) + + # Makes replies come back through the router to reverse DNAT + ext_in_mark = self.agent_conf.external_ingress_mark + snat_internal_traffic_to_floating_ip = ( + 'snat', '-m mark ! --mark %s/%s ' + '-m conntrack --ctstate DNAT ' + '-j SNAT --to-source %s' + % (ext_in_mark, l3_constants.ROUTER_MARK_MASK, ex_gw_ip)) + + return [dont_snat_traffic_to_internal_ports_if_not_to_floating_ip, + snat_normal_external_traffic, + snat_internal_traffic_to_floating_ip] def external_gateway_mangle_rules(self, interface_name): mark = self.agent_conf.external_ingress_mark - rules = [('mark', '-i %s -j MARK --set-xmark %s/%s' % - (interface_name, mark, EXTERNAL_INGRESS_MARK_MASK))] - return rules + mark_packets_entering_external_gateway_port = ( + 'mark', '-i %s -j MARK --set-xmark %s/%s' % + (interface_name, mark, l3_constants.ROUTER_MARK_MASK)) + return [mark_packets_entering_external_gateway_port] def _empty_snat_chains(self, iptables_manager): iptables_manager.ipv4['nat'].empty_chain('POSTROUTING') @@ -562,8 +556,8 @@ class RouterInfo(object): iptables_manager.ipv4['mangle'].empty_chain('mark') def _add_snat_rules(self, ex_gw_port, iptables_manager, - interface_name, action): - if action == 'add_rules' and ex_gw_port: + interface_name): + if self._snat_enabled and ex_gw_port: # ex_gw_port should not be None in this case # NAT rules are added only if ex_gw_port has an IPv4 address for ip_addr in ex_gw_port['fixed_ips']: @@ -578,25 +572,22 @@ class RouterInfo(object): iptables_manager.ipv4['mangle'].add_rule(*rule) break - def _handle_router_snat_rules(self, ex_gw_port, - interface_name, action): + def _handle_router_snat_rules(self, ex_gw_port, interface_name): self._empty_snat_chains(self.iptables_manager) self.iptables_manager.ipv4['nat'].add_rule('snat', '-j $float-snat') 
self._add_snat_rules(ex_gw_port, self.iptables_manager, - interface_name, - action) + interface_name) def process_external(self, agent): + fip_statuses = {} existing_floating_ips = self.floating_ips try: with self.iptables_manager.defer_apply(): ex_gw_port = self.get_ex_gw_port() self._process_external_gateway(ex_gw_port) - # TODO(Carl) Return after setting existing_floating_ips and - # still call update_fip_statuses? if not ex_gw_port: return @@ -614,8 +605,9 @@ class RouterInfo(object): # All floating IPs must be put in error state LOG.exception(e) fip_statuses = self.put_fips_in_error_state() - - agent.update_fip_statuses(self, existing_floating_ips, fip_statuses) + finally: + agent.update_fip_statuses( + self, existing_floating_ips, fip_statuses) @common_utils.exception_logger() def process(self, agent): @@ -633,6 +625,5 @@ class RouterInfo(object): # Update ex_gw_port and enable_snat on the router info cache self.ex_gw_port = self.get_ex_gw_port() - self.snat_ports = self.router.get( - l3_constants.SNAT_ROUTER_INTF_KEY, []) + # TODO(Carl) FWaaS uses this. Why is it set after processing is done? 
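The `process_external` change above initializes `fip_statuses = {}` before the `try` block and moves `agent.update_fip_statuses(...)` into a `finally:` clause, resolving the old TODO: previously the early `return` on a missing gateway port skipped the status update entirely. A minimal control-flow sketch (the router/agent objects are reduced to a flag and a callback for illustration):

```python
def process_external(has_gateway, report):
    # Mirrors the patched RouterInfo.process_external(): fip_statuses is
    # bound before the try block, and the report call sits in a finally
    # clause, so it runs even when the early "no external gateway port"
    # return is taken.
    fip_statuses = {}
    try:
        if not has_gateway:
            return
        fip_statuses = {'fip-1': 'ACTIVE'}
    finally:
        report(fip_statuses)


reported = []
process_external(False, reported.append)  # early return still reports {}
process_external(True, reported.append)
assert reported == [{}, {'fip-1': 'ACTIVE'}]
```

Because `finally` executes even on `return`, the agent now always hears back about floating-IP statuses, including the empty-dict case.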
self.enable_snat = self.router.get('enable_snat') diff --git a/neutron/agent/linux/async_process.py b/neutron/agent/linux/async_process.py index e9879a0696a..cd25ffb0951 100644 --- a/neutron/agent/linux/async_process.py +++ b/neutron/agent/linux/async_process.py @@ -181,7 +181,10 @@ class AsyncProcess(object): """Kill the async process and respawn if necessary.""" LOG.debug('Halting async process [%s] in response to an error.', self.cmd) - respawning = self.respawn_interval >= 0 + if self.respawn_interval is not None and self.respawn_interval >= 0: + respawning = True + else: + respawning = False self._kill(respawning=respawning) if respawning: eventlet.sleep(self.respawn_interval) diff --git a/neutron/agent/linux/bridge_lib.py b/neutron/agent/linux/bridge_lib.py index 2bbc9f2cefa..e8176510f8f 100644 --- a/neutron/agent/linux/bridge_lib.py +++ b/neutron/agent/linux/bridge_lib.py @@ -39,3 +39,9 @@ class BridgeDevice(ip_lib.IPDevice): def delif(self, interface): return self._brctl(['delif', self.name, interface]) + + def setfd(self, fd): + return self._brctl(['setfd', self.name, str(fd)]) + + def disable_stp(self): + return self._brctl(['stp', self.name, 'off']) diff --git a/neutron/agent/linux/daemon.py b/neutron/agent/linux/daemon.py index b4c7853b54a..7f786e24158 100644 --- a/neutron/agent/linux/daemon.py +++ b/neutron/agent/linux/daemon.py @@ -31,6 +31,16 @@ LOG = logging.getLogger(__name__) DEVNULL = object() +# Note: We can't use sys.std*.fileno() here. sys.std* objects may be +# random file-like objects that may not match the true system std* fds +# - and indeed may not even have a file descriptor at all (eg: test +# fixtures that monkey patch fixtures.StringStream onto sys.stdout). +# Below we always want the _real_ well-known 0,1,2 Unix fds during +# os.dup2 manipulation. 
+STDIN_FILENO = 0 +STDOUT_FILENO = 1 +STDERR_FILENO = 2 + def setuid(user_id_or_name): try: @@ -121,8 +131,7 @@ class Pidfile(object): return self.pidfile def unlock(self): - if not not fcntl.flock(self.fd, fcntl.LOCK_UN): - raise IOError(_('Unable to unlock pid file')) + fcntl.flock(self.fd, fcntl.LOCK_UN) def write(self, pid): os.ftruncate(self.fd, 0) @@ -160,11 +169,13 @@ class Daemon(object): def __init__(self, pidfile, stdin=DEVNULL, stdout=DEVNULL, stderr=DEVNULL, procname='python', uuid=None, user=None, group=None, watch_log=True): + """Note: pidfile may be None.""" self.stdin = stdin self.stdout = stdout self.stderr = stderr self.procname = procname - self.pidfile = Pidfile(pidfile, procname, uuid) + self.pidfile = (Pidfile(pidfile, procname, uuid) + if pidfile is not None else None) self.user = user self.group = group self.watch_log = watch_log @@ -180,6 +191,16 @@ class Daemon(object): def daemonize(self): """Daemonize process by doing Stevens double fork.""" + + # flush any buffered data before fork/dup2. + if self.stdout is not DEVNULL: + self.stdout.flush() + if self.stderr is not DEVNULL: + self.stderr.flush() + # sys.std* may not match STD{OUT,ERR}_FILENO. Tough. 
+ for f in (sys.stdout, sys.stderr): + f.flush() + # fork first time self._fork() @@ -192,23 +213,23 @@ class Daemon(object): self._fork() # redirect standard file descriptors - sys.stdout.flush() - sys.stderr.flush() - devnull = open(os.devnull, 'w+') - stdin = devnull if self.stdin is DEVNULL else self.stdin - stdout = devnull if self.stdout is DEVNULL else self.stdout - stderr = devnull if self.stderr is DEVNULL else self.stderr - os.dup2(stdin.fileno(), sys.stdin.fileno()) - os.dup2(stdout.fileno(), sys.stdout.fileno()) - os.dup2(stderr.fileno(), sys.stderr.fileno()) + with open(os.devnull, 'w+') as devnull: + stdin = devnull if self.stdin is DEVNULL else self.stdin + stdout = devnull if self.stdout is DEVNULL else self.stdout + stderr = devnull if self.stderr is DEVNULL else self.stderr + os.dup2(stdin.fileno(), STDIN_FILENO) + os.dup2(stdout.fileno(), STDOUT_FILENO) + os.dup2(stderr.fileno(), STDERR_FILENO) - # write pidfile - atexit.register(self.delete_pid) - signal.signal(signal.SIGTERM, self.handle_sigterm) - self.pidfile.write(os.getpid()) + if self.pidfile is not None: + # write pidfile + atexit.register(self.delete_pid) + signal.signal(signal.SIGTERM, self.handle_sigterm) + self.pidfile.write(os.getpid()) def delete_pid(self): - os.remove(str(self.pidfile)) + if self.pidfile is not None: + os.remove(str(self.pidfile)) def handle_sigterm(self, signum, frame): sys.exit(0) @@ -216,7 +237,7 @@ class Daemon(object): def start(self): """Start the daemon.""" - if self.pidfile.is_running(): + if self.pidfile is not None and self.pidfile.is_running(): self.pidfile.unlock() LOG.error(_LE('Pidfile %s already exist. 
Daemon already ' 'running?'), self.pidfile) diff --git a/neutron/agent/linux/dhcp.py b/neutron/agent/linux/dhcp.py index d5d748dec4f..efdf12fa3f8 100644 --- a/neutron/agent/linux/dhcp.py +++ b/neutron/agent/linux/dhcp.py @@ -23,10 +23,10 @@ import time import netaddr from oslo_config import cfg from oslo_log import log as logging -from oslo_utils import importutils from oslo_utils import uuidutils import six +from neutron.agent.common import utils as common_utils from neutron.agent.linux import external_process from neutron.agent.linux import ip_lib from neutron.agent.linux import iptables_manager @@ -36,7 +36,7 @@ from neutron.common import exceptions from neutron.common import ipv6_utils from neutron.common import utils as commonutils from neutron.extensions import extra_dhcp_opt as edo_ext -from neutron.i18n import _LE, _LI, _LW +from neutron.i18n import _LI, _LW LOG = logging.getLogger(__name__) @@ -174,7 +174,7 @@ class DhcpLocalProcess(DhcpBase): version, plugin) self.confs_dir = self.get_confs_dir(conf) self.network_conf_dir = os.path.join(self.confs_dir, network.id) - utils.ensure_dir(self.network_conf_dir) + commonutils.ensure_dir(self.network_conf_dir) @staticmethod def get_confs_dir(conf): @@ -199,7 +199,7 @@ class DhcpLocalProcess(DhcpBase): if self.active: self.restart() elif self._enable_dhcp(): - utils.ensure_dir(self.network_conf_dir) + commonutils.ensure_dir(self.network_conf_dir) interface_name = self.device_manager.setup(self.network) self.interface_name = interface_name self.spawn_process() @@ -657,14 +657,23 @@ class Dnsmasq(DhcpLocalProcess): old_leases = self._read_hosts_file_leases(filename) new_leases = set() + dhcp_port_exists = False + dhcp_port_on_this_host = self.device_manager.get_device_id( + self.network) for port in self.network.ports: client_id = self._get_client_id(port) for alloc in port.fixed_ips: new_leases.add((alloc.ip_address, port.mac_address, client_id)) + if port.device_id == dhcp_port_on_this_host: + dhcp_port_exists = 
True for ip, mac, client_id in old_leases - new_leases: self._release_lease(mac, ip, client_id) + if not dhcp_port_exists: + self.device_manager.driver.unplug( + self.interface_name, namespace=self.network.namespace) + def _output_addn_hosts_file(self): """Writes a dnsmasq compatible additional hosts file. @@ -919,18 +928,7 @@ class DeviceManager(object): def __init__(self, conf, plugin): self.conf = conf self.plugin = plugin - if not conf.interface_driver: - LOG.error(_LE('An interface driver must be specified')) - raise SystemExit(1) - try: - self.driver = importutils.import_object( - conf.interface_driver, conf) - except Exception as e: - LOG.error(_LE("Error importing interface driver '%(driver)s': " - "%(inner)s"), - {'driver': conf.interface_driver, - 'inner': e}) - raise SystemExit(1) + self.driver = common_utils.load_interface_driver(conf) def get_interface_name(self, network, port): """Return interface(device) name for use by the DHCP process.""" @@ -1058,9 +1056,18 @@ class DeviceManager(object): return dhcp_port + def _update_dhcp_port(self, network, port): + for index in range(len(network.ports)): + if network.ports[index].id == port.id: + network.ports[index] = port + break + else: + network.ports.append(port) + def setup(self, network): """Create and initialize a device for network's DHCP on this host.""" port = self.setup_dhcp_port(network) + self._update_dhcp_port(network, port) interface_name = self.get_interface_name(network, port) if ip_lib.ensure_device_is_ready(interface_name, diff --git a/neutron/agent/linux/external_process.py b/neutron/agent/linux/external_process.py index f3ac93a7f09..4cf287218df 100644 --- a/neutron/agent/linux/external_process.py +++ b/neutron/agent/linux/external_process.py @@ -21,12 +21,13 @@ import eventlet from oslo_concurrency import lockutils from oslo_config import cfg from oslo_log import log as logging +from oslo_utils import fileutils from neutron.agent.common import config as agent_cfg from neutron.agent.linux 
import ip_lib from neutron.agent.linux import utils +from neutron.common import utils as common_utils from neutron.i18n import _LE -from neutron.openstack.common import fileutils LOG = logging.getLogger(__name__) @@ -78,7 +79,7 @@ class ProcessManager(MonitoredProcess): self.service_pid_fname = 'pid' self.service = 'default-service' - utils.ensure_dir(os.path.dirname(self.get_pid_file_name())) + common_utils.ensure_dir(os.path.dirname(self.get_pid_file_name())) def enable(self, cmd_callback=None, reload_cfg=False): if not self.active: diff --git a/neutron/agent/linux/interface.py b/neutron/agent/linux/interface.py index cd7f9c6903d..9207503e7ac 100644 --- a/neutron/agent/linux/interface.py +++ b/neutron/agent/linux/interface.py @@ -18,7 +18,6 @@ import abc import netaddr from oslo_config import cfg from oslo_log import log as logging -from oslo_utils import importutils import six from neutron.agent.common import ovs_lib @@ -26,7 +25,6 @@ from neutron.agent.linux import ip_lib from neutron.agent.linux import utils from neutron.common import constants as n_const from neutron.common import exceptions -from neutron.extensions import flavor from neutron.i18n import _LE, _LI @@ -41,29 +39,6 @@ OPTS = [ help=_('Uses veth for an interface or not')), cfg.IntOpt('network_device_mtu', help=_('MTU setting for device.')), - cfg.StrOpt('meta_flavor_driver_mappings', - help=_('Mapping between flavor and LinuxInterfaceDriver. 
' - 'It is specific to MetaInterfaceDriver used with ' - 'admin_user, admin_password, admin_tenant_name, ' - 'admin_url, auth_strategy, auth_region and ' - 'endpoint_type.')), - cfg.StrOpt('admin_user', - help=_("Admin username")), - cfg.StrOpt('admin_password', - help=_("Admin password"), - secret=True), - cfg.StrOpt('admin_tenant_name', - help=_("Admin tenant name")), - cfg.StrOpt('auth_url', - help=_("Authentication URL")), - cfg.StrOpt('auth_strategy', default='keystone', - help=_("The type of authentication to use")), - cfg.StrOpt('auth_region', - help=_("Authentication region")), - cfg.StrOpt('endpoint_type', - default='publicURL', - help=_("Network service endpoint type to pull from " - "the keystone catalog")), ] @@ -420,63 +395,3 @@ class BridgeInterfaceDriver(LinuxInterfaceDriver): except RuntimeError: LOG.error(_LE("Failed unplugging interface '%s'"), device_name) - - -class MetaInterfaceDriver(LinuxInterfaceDriver): - def __init__(self, conf): - super(MetaInterfaceDriver, self).__init__(conf) - from neutronclient.v2_0 import client - self.neutron = client.Client( - username=self.conf.admin_user, - password=self.conf.admin_password, - tenant_name=self.conf.admin_tenant_name, - auth_url=self.conf.auth_url, - auth_strategy=self.conf.auth_strategy, - region_name=self.conf.auth_region, - endpoint_type=self.conf.endpoint_type - ) - self.flavor_driver_map = {} - for net_flavor, driver_name in [ - driver_set.split(':') - for driver_set in - self.conf.meta_flavor_driver_mappings.split(',')]: - self.flavor_driver_map[net_flavor] = self._load_driver(driver_name) - - def _get_flavor_by_network_id(self, network_id): - network = self.neutron.show_network(network_id) - return network['network'][flavor.FLAVOR_NETWORK] - - def _get_driver_by_network_id(self, network_id): - net_flavor = self._get_flavor_by_network_id(network_id) - return self.flavor_driver_map[net_flavor] - - def _set_device_plugin_tag(self, network_id, device_name, namespace=None): - plugin_tag = 
self._get_flavor_by_network_id(network_id) - device = ip_lib.IPDevice(device_name, namespace=namespace) - device.link.set_alias(plugin_tag) - - def _get_device_plugin_tag(self, device_name, namespace=None): - device = ip_lib.IPDevice(device_name, namespace=namespace) - return device.link.alias - - def get_device_name(self, port): - driver = self._get_driver_by_network_id(port.network_id) - return driver.get_device_name(port) - - def plug_new(self, network_id, port_id, device_name, mac_address, - bridge=None, namespace=None, prefix=None): - driver = self._get_driver_by_network_id(network_id) - ret = driver.plug(network_id, port_id, device_name, mac_address, - bridge=bridge, namespace=namespace, prefix=prefix) - self._set_device_plugin_tag(network_id, device_name, namespace) - return ret - - def unplug(self, device_name, bridge=None, namespace=None, prefix=None): - plugin_tag = self._get_device_plugin_tag(device_name, namespace) - driver = self.flavor_driver_map[plugin_tag] - return driver.unplug(device_name, bridge, namespace, prefix) - - def _load_driver(self, driver_provider): - LOG.debug("Driver location: %s", driver_provider) - plugin_klass = importutils.import_class(driver_provider) - return plugin_klass(self.conf) diff --git a/neutron/agent/linux/ip_lib.py b/neutron/agent/linux/ip_lib.py index 36d2b09523b..e3268b7aad3 100644 --- a/neutron/agent/linux/ip_lib.py +++ b/neutron/agent/linux/ip_lib.py @@ -348,10 +348,10 @@ class IpLinkCommand(IpDeviceCommandBase): self._as_root([], ('set', self.name, 'mtu', mtu_size)) def set_up(self): - self._as_root([], ('set', self.name, 'up')) + return self._as_root([], ('set', self.name, 'up')) def set_down(self): - self._as_root([], ('set', self.name, 'down')) + return self._as_root([], ('set', self.name, 'down')) def set_netns(self, namespace): self._as_root([], ('set', self.name, 'netns', namespace)) @@ -489,6 +489,17 @@ class IpAddrCommand(IpDeviceCommandBase): class IpRouteCommand(IpDeviceCommandBase): COMMAND = 'route' + 
def __init__(self, parent, table=None): + super(IpRouteCommand, self).__init__(parent) + self._table = table + + def table(self, table): + """Return an instance of IpRouteCommand which works on the given table""" + return IpRouteCommand(self._parent, table) + + def _table_args(self): + return ['table', self._table] if self._table else [] + def add_gateway(self, gateway, metric=None, table=None): ip_version = get_ip_version(gateway) args = ['replace', 'default', 'via', gateway] @@ -497,6 +508,8 @@ class IpRouteCommand(IpDeviceCommandBase): args += ['dev', self.name] if table: args += ['table', table] + else: + args += self._table_args() self._as_root([ip_version], tuple(args)) def delete_gateway(self, gateway, table=None): @@ -506,6 +519,8 @@ class IpRouteCommand(IpDeviceCommandBase): 'dev', self.name] if table: args += ['table', table] + else: + args += self._table_args() try: self._as_root([ip_version], tuple(args)) except RuntimeError as rte: @@ -517,10 +532,9 @@ class IpRouteCommand(IpDeviceCommandBase): def list_onlink_routes(self, ip_version): def iterate_routes(): - output = self._run([ip_version], - ('list', - 'dev', self.name, - 'scope', 'link')) + args = ['list', 'dev', self.name, 'scope', 'link'] + args += self._table_args() + output = self._run([ip_version], tuple(args)) for line in output.split('\n'): line = line.strip() if line and not line.count('src'): @@ -530,22 +544,21 @@ class IpRouteCommand(IpDeviceCommandBase): def add_onlink_route(self, cidr): ip_version = get_ip_version(cidr) - self._as_root([ip_version], - ('replace', cidr, - 'dev', self.name, - 'scope', 'link')) + args = ['replace', cidr, 'dev', self.name, 'scope', 'link'] + args += self._table_args() + self._as_root([ip_version], tuple(args)) def delete_onlink_route(self, cidr): ip_version = get_ip_version(cidr) - self._as_root([ip_version], - ('del', cidr, - 'dev', self.name, - 'scope', 'link')) + args = ['del', cidr, 'dev', self.name, 'scope', 'link'] + args += self._table_args() +
self._as_root([ip_version], tuple(args)) def get_gateway(self, scope=None, filters=None, ip_version=None): options = [ip_version] if ip_version else [] args = ['list', 'dev', self.name] + args += self._table_args() if filters: args += filters @@ -739,16 +752,22 @@ def device_exists_with_ips_and_mac(device_name, ip_cidrs, mac, namespace=None): return True -def get_routing_table(namespace=None): +def get_routing_table(ip_version, namespace=None): """Return a list of dictionaries, each representing a route. + @param ip_version: the IP version of the routes to return, for example 4 + @param namespace: the namespace to read the routing table from + @return: a list of dictionaries, each representing a route. The dictionary format is: {'destination': cidr, 'nexthop': ip, - 'device': device_name} + 'device': device_name, + 'scope': scope} """ ip_wrapper = IPWrapper(namespace=namespace) - table = ip_wrapper.netns.execute(['ip', 'route'], check_exit_code=True) + table = ip_wrapper.netns.execute( + ['ip', '-%s' % ip_version, 'route'], + check_exit_code=True) routes = [] # Example for route_lines: @@ -765,7 +784,8 @@ data = dict(route[i:i + 2] for i in range(1, len(route), 2)) routes.append({'destination': network, 'nexthop': data.get('via'), - 'device': data.get('dev')}) + 'device': data.get('dev'), + 'scope': data.get('scope')}) return routes diff --git a/neutron/agent/linux/iptables_firewall.py b/neutron/agent/linux/iptables_firewall.py index ff12802e163..014873370d3 100644 --- a/neutron/agent/linux/iptables_firewall.py +++ b/neutron/agent/linux/iptables_firewall.py @@ -32,24 +32,22 @@ from neutron.i18n import _LI LOG = logging.getLogger(__name__) SG_CHAIN = 'sg-chain' -INGRESS_DIRECTION = 'ingress' -EGRESS_DIRECTION = 'egress' SPOOF_FILTER = 'spoof-filter' -CHAIN_NAME_PREFIX = {INGRESS_DIRECTION: 'i', - EGRESS_DIRECTION: 'o', +CHAIN_NAME_PREFIX = {firewall.INGRESS_DIRECTION: 'i', + firewall.EGRESS_DIRECTION: 'o', SPOOF_FILTER: 's'} -DIRECTION_IP_PREFIX = {'ingress': 'source_ip_prefix',
- 'egress': 'dest_ip_prefix'} -IPSET_DIRECTION = {INGRESS_DIRECTION: 'src', - EGRESS_DIRECTION: 'dst'} +DIRECTION_IP_PREFIX = {firewall.INGRESS_DIRECTION: 'source_ip_prefix', + firewall.EGRESS_DIRECTION: 'dest_ip_prefix'} +IPSET_DIRECTION = {firewall.INGRESS_DIRECTION: 'src', + firewall.EGRESS_DIRECTION: 'dst'} LINUX_DEV_LEN = 14 comment_rule = iptables_manager.comment_rule class IptablesFirewallDriver(firewall.FirewallDriver): """Driver which enforces security groups through iptables rules.""" - IPTABLES_DIRECTION = {INGRESS_DIRECTION: 'physdev-out', - EGRESS_DIRECTION: 'physdev-in'} + IPTABLES_DIRECTION = {firewall.INGRESS_DIRECTION: 'physdev-out', + firewall.EGRESS_DIRECTION: 'physdev-in'} def __init__(self, namespace=None): self.iptables = iptables_manager.IptablesManager( @@ -180,14 +178,14 @@ class IptablesFirewallDriver(firewall.FirewallDriver): def _setup_chains_apply(self, ports, unfiltered_ports): self._add_chain_by_name_v4v6(SG_CHAIN) for port in ports.values(): - self._setup_chain(port, INGRESS_DIRECTION) - self._setup_chain(port, EGRESS_DIRECTION) + self._setup_chain(port, firewall.INGRESS_DIRECTION) + self._setup_chain(port, firewall.EGRESS_DIRECTION) self.iptables.ipv4['filter'].add_rule(SG_CHAIN, '-j ACCEPT') self.iptables.ipv6['filter'].add_rule(SG_CHAIN, '-j ACCEPT') for port in unfiltered_ports.values(): - self._add_accept_rule_port_sec(port, INGRESS_DIRECTION) - self._add_accept_rule_port_sec(port, EGRESS_DIRECTION) + self._add_accept_rule_port_sec(port, firewall.INGRESS_DIRECTION) + self._add_accept_rule_port_sec(port, firewall.EGRESS_DIRECTION) def _remove_chains(self): """Remove ingress and egress chain for a port.""" @@ -197,12 +195,12 @@ class IptablesFirewallDriver(firewall.FirewallDriver): def _remove_chains_apply(self, ports, unfiltered_ports): for port in ports.values(): - self._remove_chain(port, INGRESS_DIRECTION) - self._remove_chain(port, EGRESS_DIRECTION) + self._remove_chain(port, firewall.INGRESS_DIRECTION) + 
self._remove_chain(port, firewall.EGRESS_DIRECTION) self._remove_chain(port, SPOOF_FILTER) for port in unfiltered_ports.values(): - self._remove_rule_port_sec(port, INGRESS_DIRECTION) - self._remove_rule_port_sec(port, EGRESS_DIRECTION) + self._remove_rule_port_sec(port, firewall.INGRESS_DIRECTION) + self._remove_rule_port_sec(port, firewall.EGRESS_DIRECTION) self._remove_chain_by_name_v4v6(SG_CHAIN) def _setup_chain(self, port, DIRECTION): @@ -263,7 +261,7 @@ class IptablesFirewallDriver(firewall.FirewallDriver): else: self._remove_rule_from_chain_v4v6('FORWARD', jump_rule, jump_rule) - if direction == EGRESS_DIRECTION: + if direction == firewall.EGRESS_DIRECTION: jump_rule = ['-m physdev --%s %s --physdev-is-bridged ' '-j ACCEPT' % (self.IPTABLES_DIRECTION[direction], device)] @@ -300,7 +298,7 @@ class IptablesFirewallDriver(firewall.FirewallDriver): self._add_rules_to_chain_v4v6(SG_CHAIN, jump_rule, jump_rule, comment=ic.SG_TO_VM_SG) - if direction == EGRESS_DIRECTION: + if direction == firewall.EGRESS_DIRECTION: self._add_rules_to_chain_v4v6('INPUT', jump_rule, jump_rule, comment=ic.INPUT_TO_SG) @@ -358,7 +356,7 @@ class IptablesFirewallDriver(firewall.FirewallDriver): ipv6_rules += [comment_rule('-p icmpv6 -j RETURN', comment=ic.IPV6_ICMP_ALLOW)] ipv6_rules += [comment_rule('-p udp -m udp --sport 546 --dport 547 ' - '-j RETURN', comment=None)] + '-j RETURN', comment=ic.DHCP_CLIENT)] mac_ipv4_pairs = [] mac_ipv6_pairs = [] @@ -386,7 +384,7 @@ class IptablesFirewallDriver(firewall.FirewallDriver): ipv4_rules += [comment_rule('-p udp -m udp --sport 67 --dport 68 ' '-j DROP', comment=ic.DHCP_SPOOF)] ipv6_rules += [comment_rule('-p udp -m udp --sport 547 --dport 546 ' - '-j DROP', comment=None)] + '-j DROP', comment=ic.DHCP_SPOOF)] def _accept_inbound_icmpv6(self): # Allow multicast listener, neighbor solicitation and @@ -458,11 +456,11 @@ class IptablesFirewallDriver(firewall.FirewallDriver): ipv4_iptables_rules = [] ipv6_iptables_rules = [] # include fixed 
egress/ingress rules - if direction == EGRESS_DIRECTION: + if direction == firewall.EGRESS_DIRECTION: self._add_fixed_egress_rules(port, ipv4_iptables_rules, ipv6_iptables_rules) - elif direction == INGRESS_DIRECTION: + elif direction == firewall.INGRESS_DIRECTION: ipv6_iptables_rules += self._accept_inbound_icmpv6() # include IPv4 and IPv6 iptable rules from security group ipv4_iptables_rules += self._convert_sgr_to_iptables_rules( @@ -568,7 +566,7 @@ class IptablesFirewallDriver(firewall.FirewallDriver): def _port_arg(self, direction, protocol, port_range_min, port_range_max): if (protocol not in ['udp', 'tcp', 'icmp', 'icmpv6'] - or not port_range_min): + or port_range_min is None): return [] if protocol in ['icmp', 'icmpv6']: @@ -717,7 +715,7 @@ class OVSHybridIptablesFirewallDriver(IptablesFirewallDriver): return ('qvb' + port['device'])[:LINUX_DEV_LEN] def _get_jump_rule(self, port, direction): - if direction == INGRESS_DIRECTION: + if direction == firewall.INGRESS_DIRECTION: device = self._get_br_device_name(port) else: device = self._get_device_name(port) @@ -740,11 +738,13 @@ class OVSHybridIptablesFirewallDriver(IptablesFirewallDriver): def _add_chain(self, port, direction): super(OVSHybridIptablesFirewallDriver, self)._add_chain(port, direction) - if direction in [INGRESS_DIRECTION, EGRESS_DIRECTION]: + if direction in [firewall.INGRESS_DIRECTION, + firewall.EGRESS_DIRECTION]: self._add_raw_chain_rules(port, direction) def _remove_chain(self, port, direction): super(OVSHybridIptablesFirewallDriver, self)._remove_chain(port, direction) - if direction in [INGRESS_DIRECTION, EGRESS_DIRECTION]: + if direction in [firewall.INGRESS_DIRECTION, + firewall.EGRESS_DIRECTION]: self._remove_raw_chain_rules(port, direction) diff --git a/neutron/agent/linux/keepalived.py b/neutron/agent/linux/keepalived.py index f856def6d46..61e583dac7c 100644 --- a/neutron/agent/linux/keepalived.py +++ b/neutron/agent/linux/keepalived.py @@ -23,6 +23,7 @@ from oslo_log import log as 
logging from neutron.agent.linux import external_process from neutron.agent.linux import utils from neutron.common import exceptions +from neutron.common import utils as common_utils VALID_STATES = ['MASTER', 'BACKUP'] VALID_AUTH_TYPES = ['AH', 'PASS'] @@ -31,7 +32,8 @@ PRIMARY_VIP_RANGE_SIZE = 24 # TODO(amuller): Use L3 agent constant when new constants module is introduced. FIP_LL_SUBNET = '169.254.30.0/23' KEEPALIVED_SERVICE_NAME = 'keepalived' - +GARP_MASTER_REPEAT = 5 +GARP_MASTER_REFRESH = 10 LOG = logging.getLogger(__name__) @@ -95,15 +97,21 @@ class KeepalivedVipAddress(object): class KeepalivedVirtualRoute(object): """A virtual route entry of a keepalived configuration.""" - def __init__(self, destination, nexthop, interface_name=None): + def __init__(self, destination, nexthop, interface_name=None, + scope=None): self.destination = destination self.nexthop = nexthop self.interface_name = interface_name + self.scope = scope def build_config(self): - output = '%s via %s' % (self.destination, self.nexthop) + output = self.destination + if self.nexthop: + output += ' via %s' % self.nexthop if self.interface_name: output += ' dev %s' % self.interface_name + if self.scope: + output += ' scope %s' % self.scope return output @@ -111,6 +119,7 @@ class KeepalivedInstanceRoutes(object): def __init__(self): self.gateway_routes = [] self.extra_routes = [] + self.extra_subnets = [] def remove_routes_on_interface(self, interface_name): self.gateway_routes = [gw_rt for gw_rt in self.gateway_routes @@ -118,10 +127,12 @@ class KeepalivedInstanceRoutes(object): # NOTE(amuller): extra_routes are initialized from the router's # 'routes' attribute. These routes do not have an interface # parameter and so cannot be removed via an interface_name lookup. 
+ self.extra_subnets = [route for route in self.extra_subnets if + route.interface_name != interface_name] @property def routes(self): - return self.gateway_routes + self.extra_routes + return self.gateway_routes + self.extra_routes + self.extra_subnets def __len__(self): return len(self.routes) @@ -138,7 +149,9 @@ class KeepalivedInstance(object): def __init__(self, state, interface, vrouter_id, ha_cidrs, priority=HA_DEFAULT_PRIORITY, advert_int=None, - mcast_src_ip=None, nopreempt=False): + mcast_src_ip=None, nopreempt=False, + garp_master_repeat=GARP_MASTER_REPEAT, + garp_master_refresh=GARP_MASTER_REFRESH): self.name = 'VR_%s' % vrouter_id if state not in VALID_STATES: @@ -151,6 +164,8 @@ class KeepalivedInstance(object): self.nopreempt = nopreempt self.advert_int = advert_int self.mcast_src_ip = mcast_src_ip + self.garp_master_repeat = garp_master_repeat + self.garp_master_refresh = garp_master_refresh self.track_interfaces = [] self.vips = [] self.virtual_routes = KeepalivedInstanceRoutes() @@ -244,7 +259,9 @@ class KeepalivedInstance(object): ' state %s' % self.state, ' interface %s' % self.interface, ' virtual_router_id %s' % self.vrouter_id, - ' priority %s' % self.priority] + ' priority %s' % self.priority, + ' garp_master_repeat %s' % self.garp_master_repeat, + ' garp_master_refresh %s' % self.garp_master_refresh] if self.nopreempt: config.append(' nopreempt') @@ -331,7 +348,7 @@ class KeepalivedManager(object): def get_full_config_file_path(self, filename, ensure_conf_dir=True): conf_dir = self.get_conf_dir() if ensure_conf_dir: - utils.ensure_dir(conf_dir) + common_utils.ensure_dir(conf_dir) return os.path.join(conf_dir, filename) def _output_config_file(self): diff --git a/neutron/agent/linux/utils.py b/neutron/agent/linux/utils.py index b646aa8f428..30d3f5cc0ff 100644 --- a/neutron/agent/linux/utils.py +++ b/neutron/agent/linux/utils.py @@ -13,7 +13,6 @@ # License for the specific language governing permissions and limitations # under the License. 
-import errno import fcntl import glob import grp @@ -25,6 +24,7 @@ import struct import tempfile import threading +from debtcollector import removals import eventlet from eventlet.green import subprocess from eventlet import greenthread @@ -79,7 +79,7 @@ def create_process(cmd, run_as_root=False, addl_env=None): The return value will be a tuple of the process object and the list of command arguments used to create it. """ - cmd = map(str, addl_env_args(addl_env) + cmd) + cmd = list(map(str, addl_env_args(addl_env) + cmd)) if run_as_root: cmd = shlex.split(config.get_root_helper(cfg.CONF)) + cmd LOG.debug("Running command: %s", cmd) @@ -92,7 +92,7 @@ def create_process(cmd, run_as_root=False, addl_env=None): def execute_rootwrap_daemon(cmd, process_input, addl_env): - cmd = map(str, addl_env_args(addl_env) + cmd) + cmd = list(map(str, addl_env_args(addl_env) + cmd)) # NOTE(twilson) oslo_rootwrap.daemon will raise on filter match # errors, whereas oslo_rootwrap.cmd converts them to return codes. # In practice, no neutron code should be trying to execute something that @@ -189,14 +189,9 @@ def find_child_pids(pid): return [x.strip() for x in raw_pids.split('\n') if x.strip()] -def ensure_dir(dir_path): - """Ensure a directory with 755 permissions mode.""" - try: - os.makedirs(dir_path, 0o755) - except OSError as e: - # If the directory already existed, don't raise the error. 
- if e.errno != errno.EEXIST: - raise +@removals.remove(message='Use neutron.common.utils.ensure_dir instead.') +def ensure_dir(*args, **kwargs): + return utils.ensure_dir(*args, **kwargs) def _get_conf_base(cfg_root, uuid, ensure_conf_dir): @@ -205,7 +200,7 @@ def _get_conf_base(cfg_root, uuid, ensure_conf_dir): conf_dir = os.path.abspath(os.path.normpath(cfg_root)) conf_base = os.path.join(conf_dir, uuid) if ensure_conf_dir: - ensure_dir(conf_dir) + utils.ensure_dir(conf_dir) return conf_base @@ -338,7 +333,7 @@ def ensure_directory_exists_without_file(path): if not os.path.exists(path): ctxt.reraise = False else: - ensure_dir(dirname) + utils.ensure_dir(dirname) def is_effective_user(user_id_or_name): diff --git a/neutron/agent/metadata/agent.py b/neutron/agent/metadata/agent.py index 9e764c126fc..60a571f087c 100644 --- a/neutron/agent/metadata/agent.py +++ b/neutron/agent/metadata/agent.py @@ -87,20 +87,24 @@ class MetadataProxyHandler(object): self.use_rpc = True def _get_neutron_client(self): - qclient = client.Client( - username=self.conf.admin_user, - password=self.conf.admin_password, - tenant_name=self.conf.admin_tenant_name, - auth_url=self.conf.auth_url, - auth_strategy=self.conf.auth_strategy, - region_name=self.conf.auth_region, - token=self.auth_info.get('auth_token'), - insecure=self.conf.auth_insecure, - ca_cert=self.conf.auth_ca_cert, - endpoint_url=self.auth_info.get('endpoint_url'), - endpoint_type=self.conf.endpoint_type - ) - return qclient + params = { + 'username': self.conf.admin_user, + 'password': self.conf.admin_password, + 'tenant_name': self.conf.admin_tenant_name, + 'auth_url': self.conf.auth_url, + 'auth_strategy': self.conf.auth_strategy, + 'region_name': self.conf.auth_region, + 'token': self.auth_info.get('auth_token'), + 'insecure': self.conf.auth_insecure, + 'ca_cert': self.conf.auth_ca_cert, + } + if self.conf.endpoint_url: + params['endpoint_url'] = self.conf.endpoint_url + else: + params['endpoint_url'] = 
self.auth_info.get('endpoint_url') + params['endpoint_type'] = self.conf.endpoint_type + + return client.Client(**params) @webob.dec.wsgify(RequestClass=webob.Request) def __call__(self, req): diff --git a/neutron/agent/metadata/config.py b/neutron/agent/metadata/config.py index 2a5706e8e4f..6c6cadba612 100644 --- a/neutron/agent/metadata/config.py +++ b/neutron/agent/metadata/config.py @@ -74,6 +74,10 @@ METADATA_PROXY_HANDLER_OPTS = [ default='adminURL', help=_("Network service endpoint type to pull from " "the keystone catalog")), + cfg.StrOpt('endpoint_url', + default=None, + help=_("Neutron endpoint URL; if not set, the endpoint " + "from the keystone catalog is used along with " + "endpoint_type")), cfg.StrOpt('nova_metadata_ip', default='127.0.0.1', help=_("IP address used by Nova metadata server.")), cfg.IntOpt('nova_metadata_port', @@ -109,7 +113,7 @@ UNIX_DOMAIN_METADATA_PROXY_OPTS = [ cfg.StrOpt('metadata_proxy_socket_mode', default=DEDUCE_MODE, choices=SOCKET_MODES, - help=_("Metadata Proxy UNIX domain socket mode, 3 values " + help=_("Metadata Proxy UNIX domain socket mode, 4 values " "allowed: " "'deduce': deduce mode from metadata_proxy_user/group " "values, " diff --git a/neutron/agent/metadata/driver.py b/neutron/agent/metadata/driver.py index 94e2a309240..338a78c94d2 100644 --- a/neutron/agent/metadata/driver.py +++ b/neutron/agent/metadata/driver.py @@ -24,12 +24,12 @@ from neutron.agent.linux import utils from neutron.callbacks import events from neutron.callbacks import registry from neutron.callbacks import resources +from neutron.common import constants from neutron.common import exceptions LOG = logging.getLogger(__name__) # Access with redirection to metadata proxy iptables mark mask -METADATA_ACCESS_MARK_MASK = '0xffffffff' METADATA_SERVICE_NAME = 'metadata-proxy' @@ -45,7 +45,8 @@ class MetadataDriver(object): @classmethod def metadata_filter_rules(cls, port, mark): - return [('INPUT', '-m
mark --mark %s/%s -j ACCEPT' % + (mark, constants.ROUTER_MARK_MASK)), ('INPUT', '-p tcp -m tcp --dport %s ' '-j DROP' % port)] @@ -55,7 +56,7 @@ class MetadataDriver(object): '-p tcp -m tcp --dport 80 ' '-j MARK --set-xmark %(value)s/%(mask)s' % {'value': mark, - 'mask': METADATA_ACCESS_MARK_MASK})] + 'mask': constants.ROUTER_MARK_MASK})] @classmethod def metadata_nat_rules(cls, port): diff --git a/neutron/agent/ovsdb/api.py b/neutron/agent/ovsdb/api.py index e696f8e85d6..58fb135f552 100644 --- a/neutron/agent/ovsdb/api.py +++ b/neutron/agent/ovsdb/api.py @@ -308,13 +308,22 @@ class API(object): @abc.abstractmethod def list_ports(self, bridge): - """Create a command to list the names of porsts on a bridge + """Create a command to list the names of ports on a bridge :param bridge: The name of the bridge :type bridge: string :returns: :class:`Command` with list of port names result """ + @abc.abstractmethod + def list_ifaces(self, bridge): + """Create a command to list the names of interfaces on a bridge + + :param bridge: The name of the bridge + :type bridge: string + :returns: :class:`Command` with list of interfaces names result + """ + def val_to_py(val): """Convert a json ovsdb return value to native python object""" diff --git a/neutron/agent/ovsdb/impl_idl.py b/neutron/agent/ovsdb/impl_idl.py index 5b15472874d..4edb407c366 100644 --- a/neutron/agent/ovsdb/impl_idl.py +++ b/neutron/agent/ovsdb/impl_idl.py @@ -157,8 +157,7 @@ class OvsdbIdl(api.API): return cmd.PortToBridgeCommand(self, name) def iface_to_br(self, name): - # For our purposes, ports and interfaces always have the same name - return cmd.PortToBridgeCommand(self, name) + return cmd.InterfaceToBridgeCommand(self, name) def list_br(self): return cmd.ListBridgesCommand(self) @@ -204,3 +203,6 @@ class OvsdbIdl(api.API): def list_ports(self, bridge): return cmd.ListPortsCommand(self, bridge) + + def list_ifaces(self, bridge): + return cmd.ListIfacesCommand(self, bridge) diff --git 
a/neutron/agent/ovsdb/impl_vsctl.py b/neutron/agent/ovsdb/impl_vsctl.py index 15f52529b52..aa00922979f 100644 --- a/neutron/agent/ovsdb/impl_vsctl.py +++ b/neutron/agent/ovsdb/impl_vsctl.py @@ -241,6 +241,9 @@ class OvsdbVsctl(ovsdb.API): def list_ports(self, bridge): return MultiLineCommand(self.context, 'list-ports', args=[bridge]) + def list_ifaces(self, bridge): + return MultiLineCommand(self.context, 'list-ifaces', args=[bridge]) + def _set_colval_args(*col_values): args = [] diff --git a/neutron/agent/ovsdb/native/commands.py b/neutron/agent/ovsdb/native/commands.py index 973c4cac1f4..0ae9dd9c296 100644 --- a/neutron/agent/ovsdb/native/commands.py +++ b/neutron/agent/ovsdb/native/commands.py @@ -332,6 +332,17 @@ class ListPortsCommand(BaseCommand): self.result = [p.name for p in br.ports if p.name != self.bridge] +class ListIfacesCommand(BaseCommand): + def __init__(self, api, bridge): + super(ListIfacesCommand, self).__init__(api) + self.bridge = bridge + + def run_idl(self, txn): + br = idlutils.row_by_value(self.api.idl, 'Bridge', 'name', self.bridge) + self.result = [i.name for p in br.ports if p.name != self.bridge + for i in p.interfaces] + + class PortToBridgeCommand(BaseCommand): def __init__(self, api, name): super(PortToBridgeCommand, self).__init__(api) @@ -340,7 +351,7 @@ class PortToBridgeCommand(BaseCommand): def run_idl(self, txn): # TODO(twilson) This is expensive! 
# This traversal of all ports could be eliminated by caching the bridge - # name on the Port's (or Interface's for iface_to_br) external_id field + # name on the Port's external_id field # In fact, if we did that, the only place that uses to_br functions # could just add the external_id field to the conditions passed to find port = idlutils.row_by_value(self.api.idl, 'Port', 'name', self.name) @@ -348,45 +359,62 @@ class PortToBridgeCommand(BaseCommand): self.result = next(br.name for br in bridges if port in br.ports) +class InterfaceToBridgeCommand(BaseCommand): + def __init__(self, api, name): + super(InterfaceToBridgeCommand, self).__init__(api) + self.name = name + + def run_idl(self, txn): + interface = idlutils.row_by_value(self.api.idl, 'Interface', 'name', + self.name) + ports = self.api._tables['Port'].rows.values() + pname = next( + port for port in ports if interface in port.interfaces) + + bridges = self.api._tables['Bridge'].rows.values() + self.result = next(br.name for br in bridges if pname in br.ports) + + class DbListCommand(BaseCommand): def __init__(self, api, table, records, columns, if_exists): super(DbListCommand, self).__init__(api) - self.requested_info = {'records': records, 'columns': columns, - 'table': table} - self.table = self.api._tables[table] - self.columns = columns or self.table.columns.keys() + ['_uuid'] + self.table = table + self.columns = columns self.if_exists = if_exists - if records: - self.records = [] - for record in records: + self.records = records + + def run_idl(self, txn): + table_schema = self.api._tables[self.table] + columns = self.columns or table_schema.columns.keys() + ['_uuid'] + if self.records: + row_uuids = [] + for record in self.records: try: - self.records.append(idlutils.row_by_record( - self.api.idl, table, record).uuid) + row_uuids.append(idlutils.row_by_record( + self.api.idl, self.table, record).uuid) except idlutils.RowNotFound: if self.if_exists: continue - raise + # NOTE(kevinbenton): this is 
converted to a RuntimeError + # for compat with the vsctl version. It might make more + # sense to change this to a RowNotFoundError in the future. + raise RuntimeError(_LE( + "Row doesn't exist in the DB. Request info: " + "Table=%(table)s. Columns=%(columns)s. " + "Records=%(records)s.") % { + "table": self.table, + "columns": self.columns, + "records": self.records, + }) else: - self.records = self.table.rows.keys() - - def run_idl(self, txn): - try: - self.result = [ - { - c: idlutils.get_column_value(self.table.rows[uuid], c) - for c in self.columns - if not self.if_exists or uuid in self.table.rows - } - for uuid in self.records - ] - except KeyError: - # NOTE(kevinbenton): this is converted to a RuntimeError for compat - # with the vsctl version. It might make more sense to change this - # to a RowNotFoundError in the future. - raise RuntimeError(_LE( - "Row removed from DB during listing. Request info: " - "Table=%(table)s. Columns=%(columns)s. " - "Records=%(records)s.") % self.requested_info) + row_uuids = table_schema.rows.keys() + self.result = [ + { + c: idlutils.get_column_value(table_schema.rows[uuid], c) + for c in columns + } + for uuid in row_uuids + ] class DbFindCommand(BaseCommand): diff --git a/neutron/agent/rpc.py b/neutron/agent/rpc.py index 11bf79784c5..a920ba16bbb 100644 --- a/neutron/agent/rpc.py +++ b/neutron/agent/rpc.py @@ -13,11 +13,11 @@ # License for the specific language governing permissions and limitations # under the License. 
+from datetime import datetime import itertools from oslo_log import log as logging import oslo_messaging -from oslo_utils import timeutils from oslo_utils import uuidutils from neutron.common import constants @@ -80,7 +80,7 @@ class PluginReportStateAPI(object): agent_state['uuid'] = uuidutils.generate_uuid() kwargs = { 'agent_state': {'agent_state': agent_state}, - 'time': timeutils.strtime(), + 'time': datetime.utcnow().isoformat(), } method = cctxt.call if use_call else cctxt.cast return method(context, 'report_state', **kwargs) @@ -95,6 +95,8 @@ class PluginApi(object): return value to include fixed_ips and device_owner for the device port 1.4 - tunnel_sync rpc signature upgrade to obtain 'host' + 1.5 - Support update_device_list and + get_devices_details_list_and_failed_devices ''' def __init__(self, topic): @@ -123,6 +125,26 @@ class PluginApi(object): ] return res + def get_devices_details_list_and_failed_devices(self, context, devices, + agent_id, host=None): + """Get devices details and the list of devices that failed. + + This method returns the devices details. If an error is thrown when + retrieving the devices details, the device is put in a list of + failed devices. 
+ """ + try: + cctxt = self.client.prepare(version='1.5') + res = cctxt.call( + context, + 'get_devices_details_list_and_failed_devices', + devices=devices, agent_id=agent_id, host=host) + except oslo_messaging.UnsupportedVersion: + #TODO(rossella_s): Remove this fallback logic in M + res = self._device_list_rpc_call_with_failed_dev( + self.get_device_details, context, agent_id, host, devices) + return res + def update_device_down(self, context, device, agent_id, host=None): cctxt = self.client.prepare() return cctxt.call(context, 'update_device_down', device=device, @@ -133,6 +155,41 @@ class PluginApi(object): return cctxt.call(context, 'update_device_up', device=device, agent_id=agent_id, host=host) + def _device_list_rpc_call_with_failed_dev(self, rpc_call, context, + agent_id, host, devices): + succeeded_devices = [] + failed_devices = [] + for device in devices: + try: + rpc_device = rpc_call(context, device, agent_id, host) + except Exception: + failed_devices.append(device) + else: + # update_device_up doesn't return the device + succeeded_dev = rpc_device or device + succeeded_devices.append(succeeded_dev) + return {'devices': succeeded_devices, 'failed_devices': failed_devices} + + def update_device_list(self, context, devices_up, devices_down, + agent_id, host): + try: + cctxt = self.client.prepare(version='1.5') + res = cctxt.call(context, 'update_device_list', + devices_up=devices_up, devices_down=devices_down, + agent_id=agent_id, host=host) + except oslo_messaging.UnsupportedVersion: + #TODO(rossella_s): Remove this fallback logic in M + dev_up = self._device_list_rpc_call_with_failed_dev( + self.update_device_up, context, agent_id, host, devices_up) + dev_down = self._device_list_rpc_call_with_failed_dev( + self.update_device_down, context, agent_id, host, devices_down) + + res = {'devices_up': dev_up.get('devices'), + 'failed_devices_up': dev_up.get('failed_devices'), + 'devices_down': dev_down.get('devices'), + 'failed_devices_down':
dev_down.get('failed_devices')} + return res + def tunnel_sync(self, context, tunnel_ip, tunnel_type=None, host=None): try: cctxt = self.client.prepare(version='1.4') diff --git a/neutron/agent/windows/utils.py b/neutron/agent/windows/utils.py index 8c878395a30..5221534a63b 100644 --- a/neutron/agent/windows/utils.py +++ b/neutron/agent/windows/utils.py @@ -25,7 +25,7 @@ LOG = logging.getLogger(__name__) def create_process(cmd, addl_env=None): - cmd = map(str, cmd) + cmd = list(map(str, cmd)) LOG.debug("Running command: %s", cmd) env = os.environ.copy() diff --git a/neutron/api/extensions.py b/neutron/api/extensions.py index 01bdf45129f..8eb0f9070c9 100644 --- a/neutron/api/extensions.py +++ b/neutron/api/extensions.py @@ -452,10 +452,7 @@ class ExtensionManager(object): try: extended_attrs = ext.get_extended_resources(version) for res, resource_attrs in six.iteritems(extended_attrs): - if attr_map.get(res, None): - attr_map[res].update(resource_attrs) - else: - attr_map[res] = resource_attrs + attr_map.setdefault(res, {}).update(resource_attrs) except AttributeError: LOG.exception(_LE("Error fetching extended attributes for " "extension '%s'"), ext.get_name()) diff --git a/neutron/api/rpc/handlers/dhcp_rpc.py b/neutron/api/rpc/handlers/dhcp_rpc.py index 7d97b7c5226..07438334a38 100644 --- a/neutron/api/rpc/handlers/dhcp_rpc.py +++ b/neutron/api/rpc/handlers/dhcp_rpc.py @@ -29,7 +29,7 @@ from neutron.common import utils from neutron.extensions import portbindings from neutron.i18n import _LW from neutron import manager - +from neutron.quota import resource_registry LOG = logging.getLogger(__name__) @@ -203,6 +203,7 @@ class DhcpRpcCallback(object): LOG.warning(_LW('Updating lease expiration is now deprecated. Issued ' 'from host %s.'), host) + @resource_registry.mark_resources_dirty def create_dhcp_port(self, context, **kwargs): """Create and return dhcp port information. 
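The `attr_map.setdefault(res, {}).update(resource_attrs)` rewrite in the extensions.py hunk above collapses the old get()/update()/else-assign branch into one expression. A standalone sketch with invented resource and attribute names (not Neutron's real attribute maps):

```python
# Merge per-resource attribute maps, illustrating the setdefault idiom:
# setdefault creates the inner dict on first sight and returns the
# existing one afterwards, so one expression handles both branches.
attr_map = {'networks': {'name': {'allow_post': True}}}
extended = {
    'networks': {'mtu': {'allow_post': False}},          # merged into existing
    'ports': {'binding:host_id': {'allow_post': True}},  # created fresh
}

for res, resource_attrs in extended.items():
    attr_map.setdefault(res, {}).update(resource_attrs)
```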
diff --git a/neutron/api/v2/attributes.py b/neutron/api/v2/attributes.py index 64a45e89105..ce51c3035d0 100644 --- a/neutron/api/v2/attributes.py +++ b/neutron/api/v2/attributes.py @@ -180,9 +180,8 @@ def _validate_mac_address(data, valid_values=None): def _validate_mac_address_or_none(data, valid_values=None): - if data is None: - return - return _validate_mac_address(data, valid_values) + if data is not None: + return _validate_mac_address(data, valid_values) def _validate_ip_address(data, valid_values=None): @@ -308,9 +307,8 @@ def _validate_hostroutes(data, valid_values=None): def _validate_ip_address_or_none(data, valid_values=None): - if data is None: - return None - return _validate_ip_address(data, valid_values) + if data is not None: + return _validate_ip_address(data, valid_values) def _validate_subnet(data, valid_values=None): @@ -348,9 +346,8 @@ def _validate_subnet_list(data, valid_values=None): def _validate_subnet_or_none(data, valid_values=None): - if data is None: - return - return _validate_subnet(data, valid_values) + if data is not None: + return _validate_subnet(data, valid_values) def _validate_regex(data, valid_values=None): @@ -366,9 +363,8 @@ def _validate_regex(data, valid_values=None): def _validate_regex_or_none(data, valid_values=None): - if data is None: - return - return _validate_regex(data, valid_values) + if data is not None: + return _validate_regex(data, valid_values) def _validate_uuid(data, valid_values=None): @@ -578,7 +574,7 @@ def convert_none_to_empty_dict(value): def convert_to_list(data): if data is None: return [] - elif hasattr(data, '__iter__'): + elif hasattr(data, '__iter__') and not isinstance(data, six.string_types): return list(data) else: return [data] diff --git a/neutron/api/v2/base.py b/neutron/api/v2/base.py index 48dea6bf6d0..cd591b4f9ea 100644 --- a/neutron/api/v2/base.py +++ b/neutron/api/v2/base.py @@ -17,7 +17,6 @@ import copy import netaddr from oslo_config import cfg -from oslo_db import api as 
oslo_db_api from oslo_log import log as logging from oslo_policy import policy as oslo_policy from oslo_utils import excutils @@ -35,6 +34,7 @@ from neutron.db import api as db_api from neutron.i18n import _LE, _LI from neutron import policy from neutron import quota +from neutron.quota import resource_registry LOG = logging.getLogger(__name__) @@ -187,6 +187,7 @@ class Controller(object): def __getattr__(self, name): if name in self._member_actions: + @db_api.retry_db_errors def _handle_action(request, id, **kwargs): arg_list = [request.context, id] # Ensure policy engine is initialized @@ -197,7 +198,7 @@ class Controller(object): except oslo_policy.PolicyNotAuthorized: msg = _('The resource could not be found.') raise webob.exc.HTTPNotFound(msg) - body = kwargs.pop('body', None) + body = copy.deepcopy(kwargs.pop('body', None)) # Explicit comparison with None to distinguish from {} if body is not None: arg_list.append(body) @@ -207,7 +208,15 @@ class Controller(object): name, resource, pluralized=self._collection) - return getattr(self._plugin, name)(*arg_list, **kwargs) + ret_value = getattr(self._plugin, name)(*arg_list, **kwargs) + # It is simply impossible to predict whether one of these + actions alters resource usage. For instance, a tenant port + is created when a router interface is added.
Therefore it is + # important to mark as dirty resources whose counters have + # been altered by this operation + resource_registry.set_resources_dirty(request.context) + return ret_value + return _handle_action else: raise AttributeError() @@ -280,6 +289,9 @@ class Controller(object): pagination_links = pagination_helper.get_links(obj_list) if pagination_links: collection[self._collection + "_links"] = pagination_links + # Synchronize usage trackers, if needed + resource_registry.resync_resource( + request.context, self._resource, request.context.tenant_id) return collection def _item(self, request, id, do_authz=False, field_list=None, @@ -383,8 +395,7 @@ class Controller(object): # We need a way for ensuring that if it has been created, # it is then deleted - @oslo_db_api.wrap_db_retry(max_retries=db_api.MAX_RETRIES, - retry_on_deadlock=True) + @db_api.retry_db_errors def create(self, request, body=None, **kwargs): """Creates a new instance of the requested entity.""" parent_id = kwargs.get(self._parent_id_name) @@ -414,11 +425,13 @@ class Controller(object): action, item[self._resource], pluralized=self._collection) + if 'tenant_id' not in item[self._resource]: + # no tenant_id - no quota check + continue try: tenant_id = item[self._resource]['tenant_id'] count = quota.QUOTAS.count(request.context, self._resource, - self._plugin, self._collection, - tenant_id) + self._plugin, tenant_id) if bulk: delta = deltas.get(tenant_id, 0) + 1 deltas[tenant_id] = delta @@ -434,6 +447,12 @@ class Controller(object): **kwargs) def notify(create_result): + # Ensure usage trackers for all resources affected by this API + # operation are marked as dirty + # TODO(salv-orlando): This operation will happen in a single + # transaction with reservation commit once that is implemented + resource_registry.set_resources_dirty(request.context) + notifier_method = self._resource + '.create.end' self._notifier.info(request.context, notifier_method, @@ -470,8 +489,7 @@ class 
Controller(object): return notify({self._resource: self._view(request.context, obj)}) - @oslo_db_api.wrap_db_retry(max_retries=db_api.MAX_RETRIES, - retry_on_deadlock=True) + @db_api.retry_db_errors def delete(self, request, id, **kwargs): """Deletes the specified entity.""" self._notifier.info(request.context, @@ -496,6 +514,9 @@ class Controller(object): obj_deleter = getattr(self._plugin, action) obj_deleter(request.context, id, **kwargs) + # A delete operation usually alters resource usage, so mark affected + # usage trackers as dirty + resource_registry.set_resources_dirty(request.context) notifier_method = self._resource + '.delete.end' self._notifier.info(request.context, notifier_method, @@ -506,8 +527,7 @@ class Controller(object): result, notifier_method) - @oslo_db_api.wrap_db_retry(max_retries=db_api.MAX_RETRIES, - retry_on_deadlock=True) + @db_api.retry_db_errors def update(self, request, id, body=None, **kwargs): """Updates the specified entity's attributes.""" parent_id = kwargs.get(self._parent_id_name) @@ -561,6 +581,12 @@ class Controller(object): if parent_id: kwargs[self._parent_id_name] = parent_id obj = obj_updater(request.context, id, **kwargs) + # Usually an update operation does not alter resource usage, but as + # there might be side effects it might be worth checking for changes + # in resource usage here as well (e.g: a tenant port is created when a + # router interface is added) + resource_registry.set_resources_dirty(request.context) + result = {self._resource: self._view(request.context, obj)} notifier_method = self._resource + '.update.end' self._notifier.info(request.context, notifier_method, result) @@ -571,8 +597,7 @@ class Controller(object): return result @staticmethod - def _populate_tenant_id(context, res_dict, is_create): - + def _populate_tenant_id(context, res_dict, attr_info, is_create): if (('tenant_id' in res_dict and res_dict['tenant_id'] != context.tenant_id and not context.is_admin)): @@ -583,9 +608,9 @@ class 
Controller(object): if is_create and 'tenant_id' not in res_dict: if context.tenant_id: res_dict['tenant_id'] = context.tenant_id - else: + elif 'tenant_id' in attr_info: msg = _("Running without keystone AuthN requires " - " that tenant_id is specified") + "that tenant_id is specified") raise webob.exc.HTTPBadRequest(msg) @staticmethod @@ -627,7 +652,7 @@ class Controller(object): msg = _("Unable to find '%s' in request body") % resource raise webob.exc.HTTPBadRequest(msg) - Controller._populate_tenant_id(context, res_dict, is_create) + Controller._populate_tenant_id(context, res_dict, attr_info, is_create) Controller._verify_attributes(res_dict, attr_info) if is_create: # POST diff --git a/neutron/api/v2/resource.py b/neutron/api/v2/resource.py index 2fd66d1a4e9..ac999465024 100644 --- a/neutron/api/v2/resource.py +++ b/neutron/api/v2/resource.py @@ -102,7 +102,11 @@ def Resource(controller, faults=None, deserializers=None, serializers=None): raise mapped_exc(**kwargs) except webob.exc.HTTPException as e: type_, value, tb = sys.exc_info() - LOG.exception(_LE('%s failed'), action) + if hasattr(e, 'code') and 400 <= e.code < 500: + LOG.info(_LI('%(action)s failed (client error): %(exc)s'), + {'action': action, 'exc': e}) + else: + LOG.exception(_LE('%s failed'), action) translate(e, language) value.body = serializer.serialize( {'NeutronError': get_exception_data(e)}) diff --git a/neutron/api/v2/resource_helper.py b/neutron/api/v2/resource_helper.py index 05e403d030d..c506320c91d 100644 --- a/neutron/api/v2/resource_helper.py +++ b/neutron/api/v2/resource_helper.py @@ -20,7 +20,7 @@ from neutron.api import extensions from neutron.api.v2 import base from neutron import manager from neutron.plugins.common import constants -from neutron import quota +from neutron.quota import resource_registry LOG = logging.getLogger(__name__) @@ -80,7 +80,7 @@ def build_resource_info(plural_mappings, resource_map, which_service, if translate_name: collection_name = 
collection_name.replace('_', '-') if register_quota: - quota.QUOTAS.register_resource_by_name(resource_name) + resource_registry.register_resource_by_name(resource_name) member_actions = action_map.get(resource_name, {}) controller = base.create_resource( collection_name, resource_name, plugin, params, diff --git a/neutron/api/v2/router.py b/neutron/api/v2/router.py index c76f2d02ac5..bd59d854b0e 100644 --- a/neutron/api/v2/router.py +++ b/neutron/api/v2/router.py @@ -27,7 +27,7 @@ from neutron.api.v2 import attributes from neutron.api.v2 import base from neutron import manager from neutron import policy -from neutron import quota +from neutron.quota import resource_registry from neutron import wsgi @@ -106,7 +106,7 @@ class APIRouter(wsgi.Router): _map_resource(RESOURCES[resource], resource, attributes.RESOURCE_ATTRIBUTE_MAP.get( RESOURCES[resource], dict())) - quota.QUOTAS.register_resource_by_name(resource) + resource_registry.register_resource_by_name(resource) for resource in SUB_RESOURCES: _map_resource(SUB_RESOURCES[resource]['collection_name'], resource, diff --git a/neutron/cmd/sanity/checks.py b/neutron/cmd/sanity/checks.py index 22570857e64..5d90ad9c306 100644 --- a/neutron/cmd/sanity/checks.py +++ b/neutron/cmd/sanity/checks.py @@ -14,16 +14,24 @@ # under the License. 
import re +import shutil +import tempfile import netaddr +from oslo_config import cfg from oslo_log import log as logging from oslo_utils import uuidutils import six from neutron.agent.common import ovs_lib +from neutron.agent.l3 import ha_router +from neutron.agent.l3 import namespaces +from neutron.agent.linux import external_process from neutron.agent.linux import ip_lib from neutron.agent.linux import ip_link_support +from neutron.agent.linux import keepalived from neutron.agent.linux import utils as agent_utils +from neutron.common import constants as n_consts from neutron.common import utils from neutron.i18n import _LE from neutron.plugins.common import constants as const @@ -166,6 +174,124 @@ def dnsmasq_version_supported(): return True +class KeepalivedIPv6Test(object): + def __init__(self, ha_port, gw_port, gw_vip, default_gw): + self.ha_port = ha_port + self.gw_port = gw_port + self.gw_vip = gw_vip + self.default_gw = default_gw + self.manager = None + self.config = None + self.config_path = None + self.nsname = "keepalivedtest-" + uuidutils.generate_uuid() + self.pm = external_process.ProcessMonitor(cfg.CONF, 'router') + self.orig_interval = cfg.CONF.AGENT.check_child_processes_interval + + def configure(self): + config = keepalived.KeepalivedConf() + instance1 = keepalived.KeepalivedInstance('MASTER', self.ha_port, 1, + ['169.254.192.0/18'], + advert_int=5) + instance1.track_interfaces.append(self.ha_port) + + # Configure keepalived with an IPv6 address (gw_vip) on gw_port. + vip_addr1 = keepalived.KeepalivedVipAddress(self.gw_vip, self.gw_port) + instance1.vips.append(vip_addr1) + + # Configure keepalived with an IPv6 default route on gw_port. 
+ gateway_route = keepalived.KeepalivedVirtualRoute(n_consts.IPv6_ANY, + self.default_gw, + self.gw_port) + instance1.virtual_routes.gateway_routes = [gateway_route] + config.add_instance(instance1) + self.config = config + + def start_keepalived_process(self): + # Disable process monitoring for Keepalived process. + cfg.CONF.set_override('check_child_processes_interval', 0, 'AGENT') + + # Create a temp directory to store keepalived configuration. + self.config_path = tempfile.mkdtemp() + + # Instantiate keepalived manager with the IPv6 configuration. + self.manager = keepalived.KeepalivedManager('router1', self.config, + namespace=self.nsname, process_monitor=self.pm, + conf_path=self.config_path) + self.manager.spawn() + + def verify_ipv6_address_assignment(self, gw_dev): + process = self.manager.get_process() + agent_utils.wait_until_true(lambda: process.active) + + def _gw_vip_assigned(): + iface_ip = gw_dev.addr.list(ip_version=6, scope='global') + if iface_ip: + return self.gw_vip == iface_ip[0]['cidr'] + + agent_utils.wait_until_true(_gw_vip_assigned) + + def __enter__(self): + ip_lib.IPWrapper().netns.add(self.nsname) + return self + + def __exit__(self, exc_type, exc_value, exc_tb): + self.pm.stop() + if self.manager: + self.manager.disable() + if self.config_path: + shutil.rmtree(self.config_path, ignore_errors=True) + ip_lib.IPWrapper().netns.delete(self.nsname) + cfg.CONF.set_override('check_child_processes_interval', + self.orig_interval, 'AGENT') + + +def keepalived_ipv6_supported(): + """Check if keepalived supports IPv6 functionality. + + Validation is done as follows. + 1. Create a namespace. + 2. Create OVS bridge with two ports (ha_port and gw_port) + 3. Move the ovs ports to the namespace. + 4. Spawn keepalived process inside the namespace with IPv6 configuration. + 5. Verify if IPv6 address is assigned to gw_port. + 6. Verify if IPv6 default route is configured by keepalived. 
+ """ + + random_str = utils.get_random_string(6) + br_name = "ka-test-" + random_str + ha_port = ha_router.HA_DEV_PREFIX + random_str + gw_port = namespaces.INTERNAL_DEV_PREFIX + random_str + gw_vip = 'fdf8:f53b:82e4::10/64' + expected_default_gw = 'fe80:f816::1' + + with ovs_lib.OVSBridge(br_name) as br: + with KeepalivedIPv6Test(ha_port, gw_port, gw_vip, + expected_default_gw) as ka: + br.add_port(ha_port, ('type', 'internal')) + br.add_port(gw_port, ('type', 'internal')) + + ha_dev = ip_lib.IPDevice(ha_port) + gw_dev = ip_lib.IPDevice(gw_port) + + ha_dev.link.set_netns(ka.nsname) + gw_dev.link.set_netns(ka.nsname) + + ha_dev.link.set_up() + gw_dev.link.set_up() + + ka.configure() + + ka.start_keepalived_process() + + ka.verify_ipv6_address_assignment(gw_dev) + + default_gw = gw_dev.route.get_gateway(ip_version=6) + if default_gw: + default_gw = default_gw['gateway'] + + return expected_default_gw == default_gw + + def ovsdb_native_supported(): # Running the test should ensure we are configured for OVSDB native try: diff --git a/neutron/cmd/sanity_check.py b/neutron/cmd/sanity_check.py index 9d5bae36df4..90895e2340f 100644 --- a/neutron/cmd/sanity_check.py +++ b/neutron/cmd/sanity_check.py @@ -21,6 +21,7 @@ from oslo_log import log as logging from neutron.agent import dhcp_agent from neutron.cmd.sanity import checks from neutron.common import config +from neutron.db import l3_hamode_db from neutron.i18n import _LE, _LW @@ -35,6 +36,7 @@ cfg.CONF.import_group('ml2', 'neutron.plugins.ml2.config') cfg.CONF.import_group('ml2_sriov', 'neutron.plugins.ml2.drivers.mech_sriov.mech_driver') dhcp_agent.register_options() +cfg.CONF.register_opts(l3_hamode_db.L3_HA_OPTS) class BoolOptCallback(cfg.BoolOpt): @@ -105,6 +107,15 @@ def check_dnsmasq_version(): return result +def check_keepalived_ipv6_support(): + result = checks.keepalived_ipv6_supported() + if not result: + LOG.error(_LE('The installed version of keepalived does not support ' + 'IPv6. 
Please update to at least version 1.2.10 for ' + 'IPv6 support.')) + return result + + def check_nova_notify(): result = checks.nova_notify_supported() if not result: @@ -181,6 +192,8 @@ OPTS = [ help=_('Check ovsdb native interface support')), BoolOptCallback('ebtables_installed', check_ebtables, help=_('Check ebtables installation')), + BoolOptCallback('keepalived_ipv6_support', check_keepalived_ipv6_support, + help=_('Check keepalived IPv6 support')), ] @@ -214,6 +227,8 @@ def enable_tests_from_config(): cfg.CONF.set_override('dnsmasq_version', True) if cfg.CONF.OVS.ovsdb_interface == 'native': cfg.CONF.set_override('ovsdb_native', True) + if cfg.CONF.l3_ha: + cfg.CONF.set_override('keepalived_ipv6_support', True) def all_tests_passed(): diff --git a/neutron/common/constants.py b/neutron/common/constants.py index fc9c4b24633..fec9713ce39 100644 --- a/neutron/common/constants.py +++ b/neutron/common/constants.py @@ -45,6 +45,9 @@ DEVICE_OWNER_LOADBALANCERV2 = "neutron:LOADBALANCERV2" # DEVICE_OWNER_ROUTER_HA_INTF is a special case and so is not included. 
ROUTER_INTERFACE_OWNERS = (DEVICE_OWNER_ROUTER_INTF, DEVICE_OWNER_DVR_INTERFACE) +ROUTER_INTERFACE_OWNERS_SNAT = (DEVICE_OWNER_ROUTER_INTF, + DEVICE_OWNER_DVR_INTERFACE, + DEVICE_OWNER_ROUTER_SNAT) L3_AGENT_MODE_DVR = 'dvr' L3_AGENT_MODE_DVR_SNAT = 'dvr_snat' L3_AGENT_MODE_LEGACY = 'legacy' @@ -178,3 +181,5 @@ RPC_NAMESPACE_STATE = None # Default network MTU value when not configured DEFAULT_NETWORK_MTU = 0 + +ROUTER_MARK_MASK = "0xffff" diff --git a/neutron/common/exceptions.py b/neutron/common/exceptions.py index c6ec6ccca54..373c69979b6 100644 --- a/neutron/common/exceptions.py +++ b/neutron/common/exceptions.py @@ -69,6 +69,10 @@ class ServiceUnavailable(NeutronException): message = _("The service is unavailable") +class NotSupported(NeutronException): + message = _('Not supported: %(msg)s') + + class AdminRequired(NotAuthorized): message = _("User does not have admin privileges: %(reason)s") diff --git a/neutron/common/log.py b/neutron/common/log.py index 7cee18e9cdb..496d09e46b0 100644 --- a/neutron/common/log.py +++ b/neutron/common/log.py @@ -13,26 +13,11 @@ # under the License. 
"""Log helper functions.""" -import functools -from oslo_log import log as logging +from oslo_log import helpers from oslo_log import versionutils -@versionutils.deprecated(as_of=versionutils.deprecated.LIBERTY, - in_favor_of='oslo_log.helpers.log_method_call') -def log(method): - """Decorator helping to log method calls.""" - LOG = logging.getLogger(method.__module__) - - @functools.wraps(method) - def wrapper(*args, **kwargs): - instance = args[0] - data = {"class_name": "%s.%s" % (instance.__class__.__module__, - instance.__class__.__name__), - "method_name": method.__name__, - "args": args[1:], "kwargs": kwargs} - LOG.debug('%(class_name)s method %(method_name)s' - ' called with arguments %(args)s %(kwargs)s', data) - return method(*args, **kwargs) - return wrapper +log = versionutils.deprecated( + as_of=versionutils.deprecated.LIBERTY, + in_favor_of='oslo_log.helpers.log_method_call')(helpers.log_method_call) diff --git a/neutron/common/utils.py b/neutron/common/utils.py index bd2dccdb0d2..c3e56d75c15 100644 --- a/neutron/common/utils.py +++ b/neutron/common/utils.py @@ -19,6 +19,7 @@ """Utilities and helper functions.""" import datetime +import errno import functools import hashlib import logging as std_logging @@ -172,6 +173,16 @@ def find_config_file(options, config_file): return cfg_file +def ensure_dir(dir_path): + """Ensure a directory with 755 permissions mode.""" + try: + os.makedirs(dir_path, 0o755) + except OSError as e: + # If the directory already existed, don't raise the error. + if e.errno != errno.EEXIST: + raise + + def _subprocess_setup(): # Python installs a SIGPIPE handler by default. This is usually not what # non-Python subprocesses expect. 
diff --git a/neutron/context.py b/neutron/context.py index 1e3b5e8223c..5f3d26e58fe 100644 --- a/neutron/context.py +++ b/neutron/context.py @@ -39,7 +39,8 @@ class ContextBase(oslo_context.RequestContext): @removals.removed_kwarg('read_deleted') def __init__(self, user_id, tenant_id, is_admin=None, roles=None, timestamp=None, request_id=None, tenant_name=None, - user_name=None, overwrite=True, auth_token=None, **kwargs): + user_name=None, overwrite=True, auth_token=None, + is_advsvc=None, **kwargs): """Object initialization. :param overwrite: Set to False to ensure that the greenthread local @@ -60,7 +61,9 @@ class ContextBase(oslo_context.RequestContext): timestamp = datetime.datetime.utcnow() self.timestamp = timestamp self.roles = roles or [] - self.is_advsvc = self.is_admin or policy.check_is_advsvc(self) + self.is_advsvc = is_advsvc + if self.is_advsvc is None: + self.is_advsvc = self.is_admin or policy.check_is_advsvc(self) if self.is_admin is None: self.is_admin = policy.check_is_admin(self) diff --git a/neutron/db/api.py b/neutron/db/api.py index 0b68bd3310a..dec09bd3572 100644 --- a/neutron/db/api.py +++ b/neutron/db/api.py @@ -17,6 +17,7 @@ import contextlib import six from oslo_config import cfg +from oslo_db import api as oslo_db_api from oslo_db import exception as os_db_exception from oslo_db.sqlalchemy import session from sqlalchemy import exc @@ -26,6 +27,8 @@ from sqlalchemy import orm _FACADE = None MAX_RETRIES = 10 +retry_db_errors = oslo_db_api.wrap_db_retry(max_retries=MAX_RETRIES, + retry_on_deadlock=True) def _create_facade_lazily(): diff --git a/neutron/db/common_db_mixin.py b/neutron/db/common_db_mixin.py index 27b75be7f3b..3b31c61df1a 100644 --- a/neutron/db/common_db_mixin.py +++ b/neutron/db/common_db_mixin.py @@ -16,6 +16,8 @@ import weakref import six +from sqlalchemy import and_ +from sqlalchemy import or_ from sqlalchemy import sql from neutron.common import exceptions as n_exc @@ -98,7 +100,15 @@ class CommonDbMixin(object): # 
define basic filter condition for model query query_filter = None if self.model_query_scope(context, model): - if hasattr(model, 'shared'): + if hasattr(model, 'rbac_entries'): + rbac_model, join_params = self._get_rbac_query_params(model) + query = query.outerjoin(*join_params) + query_filter = ( + (model.tenant_id == context.tenant_id) | + ((rbac_model.action == 'access_as_shared') & + ((rbac_model.target_tenant == context.tenant_id) | + (rbac_model.target_tenant == '*')))) + elif hasattr(model, 'shared'): query_filter = ((model.tenant_id == context.tenant_id) | (model.shared == sql.true())) else: @@ -145,15 +155,47 @@ class CommonDbMixin(object): query = self._model_query(context, model) return query.filter(model.id == id).one() - def _apply_filters_to_query(self, query, model, filters): + @staticmethod + def _get_rbac_query_params(model): + """Return the class and join params for the rbac relationship.""" + try: + cls = model.rbac_entries.property.mapper.class_ + return (cls, (cls, )) + except AttributeError: + # an association proxy is being used (e.g. subnets + # depends on network's rbac entries) + rbac_model = (model.rbac_entries.target_class. + rbac_entries.property.mapper.class_) + return (rbac_model, model.rbac_entries.attr) + + def _apply_filters_to_query(self, query, model, filters, context=None): if filters: for key, value in six.iteritems(filters): column = getattr(model, key, None) - if column: + # NOTE(kevinbenton): if column is a hybrid property that + # references another expression, attempting to convert to + # a boolean will fail so we must compare to None. 
+ # See "An Important Expression Language Gotcha" in: + # docs.sqlalchemy.org/en/rel_0_9/changelog/migration_06.html + if column is not None: if not value: query = query.filter(sql.false()) return query query = query.filter(column.in_(value)) + elif key == 'shared' and hasattr(model, 'rbac_entries'): + # translate a filter on shared into a query against the + # object's rbac entries + rbac, join_params = self._get_rbac_query_params(model) + query = query.outerjoin(*join_params, aliased=True) + matches = [rbac.target_tenant == '*'] + if context: + matches.append(rbac.target_tenant == context.tenant_id) + is_shared = and_( + ~rbac.object_id.is_(None), + rbac.action == 'access_as_shared', + or_(*matches) + ) + query = query.filter(is_shared if value[0] else ~is_shared) for _nam, hooks in six.iteritems(self._model_query_hooks.get(model, {})): result_filter = hooks.get('result_filters', None) @@ -181,7 +223,8 @@ class CommonDbMixin(object): sorts=None, limit=None, marker_obj=None, page_reverse=False): collection = self._model_query(context, model) - collection = self._apply_filters_to_query(collection, model, filters) + collection = self._apply_filters_to_query(collection, model, filters, + context) if limit and page_reverse and sorts: sorts = [(s[0], not s[1]) for s in sorts] collection = sqlalchemyutils.paginate_query(collection, model, limit, diff --git a/neutron/db/db_base_plugin_common.py b/neutron/db/db_base_plugin_common.py index 9cf1ba6bb1d..c68b25fbbff 100644 --- a/neutron/db/db_base_plugin_common.py +++ b/neutron/db/db_base_plugin_common.py @@ -13,6 +13,8 @@ # License for the specific language governing permissions and limitations # under the License. 
+import functools + from oslo_config import cfg from oslo_log import log as logging from sqlalchemy.orm import exc @@ -72,7 +74,7 @@ class DbBasePluginCommon(common_db_mixin.CommonDbMixin): ) context.session.add(allocated) - def _make_subnet_dict(self, subnet, fields=None): + def _make_subnet_dict(self, subnet, fields=None, context=None): res = {'id': subnet['id'], 'name': subnet['name'], 'tenant_id': subnet['tenant_id'], @@ -92,8 +94,10 @@ class DbBasePluginCommon(common_db_mixin.CommonDbMixin): 'host_routes': [{'destination': route['destination'], 'nexthop': route['nexthop']} for route in subnet['routes']], - 'shared': subnet['shared'] } + # The shared attribute for a subnet is the same as its parent network + res['shared'] = self._make_network_dict(subnet.networks, + context=context)['shared'] # Call auxiliary extend functions, if any self._apply_dict_extend_functions(attributes.SUBNETS, res, subnet) return self._fields(res, fields) @@ -168,7 +172,8 @@ class DbBasePluginCommon(common_db_mixin.CommonDbMixin): def _get_dns_by_subnet(self, context, subnet_id): dns_qry = context.session.query(models_v2.DNSNameServer) - return dns_qry.filter_by(subnet_id=subnet_id).all() + return dns_qry.filter_by(subnet_id=subnet_id).order_by( + models_v2.DNSNameServer.order).all() def _get_route_by_subnet(self, context, subnet_id): route_qry = context.session.query(models_v2.SubnetRoute) @@ -196,8 +201,10 @@ class DbBasePluginCommon(common_db_mixin.CommonDbMixin): sorts=None, limit=None, marker=None, page_reverse=False): marker_obj = self._get_marker_obj(context, 'subnet', limit, marker) + make_subnet_dict = functools.partial(self._make_subnet_dict, + context=context) return self._get_collection(context, models_v2.Subnet, - self._make_subnet_dict, + make_subnet_dict, filters=filters, fields=fields, sorts=sorts, limit=limit, @@ -205,16 +212,25 @@ class DbBasePluginCommon(common_db_mixin.CommonDbMixin): page_reverse=page_reverse) def _make_network_dict(self, network, fields=None, - 
process_extensions=True): + process_extensions=True, context=None): res = {'id': network['id'], 'name': network['name'], 'tenant_id': network['tenant_id'], 'admin_state_up': network['admin_state_up'], 'mtu': network.get('mtu', constants.DEFAULT_NETWORK_MTU), 'status': network['status'], - 'shared': network['shared'], 'subnets': [subnet['id'] for subnet in network['subnets']]} + # The shared attribute for a network now reflects if the network + # is shared to the calling tenant via an RBAC entry. + shared = False + matches = ('*',) + ((context.tenant_id,) if context else ()) + for entry in network.rbac_entries: + if (entry.action == 'access_as_shared' and + entry.target_tenant in matches): + shared = True + break + res['shared'] = shared # TODO(pritesh): Move vlan_transparent to the extension module. # vlan_transparent here is only added if the vlantransparent # extension is enabled. @@ -227,8 +243,7 @@ class DbBasePluginCommon(common_db_mixin.CommonDbMixin): attributes.NETWORKS, res, network) return self._fields(res, fields) - def _make_subnet_args(self, shared, detail, - subnet, subnetpool_id): + def _make_subnet_args(self, detail, subnet, subnetpool_id): gateway_ip = str(detail.gateway_ip) if detail.gateway_ip else None args = {'tenant_id': detail.tenant_id, 'id': detail.subnet_id, @@ -238,8 +253,7 @@ class DbBasePluginCommon(common_db_mixin.CommonDbMixin): 'cidr': str(detail.subnet_cidr), 'subnetpool_id': subnetpool_id, 'enable_dhcp': subnet['enable_dhcp'], - 'gateway_ip': gateway_ip, - 'shared': shared} + 'gateway_ip': gateway_ip} if subnet['ip_version'] == 6 and subnet['enable_dhcp']: if attributes.is_attr_set(subnet['ipv6_ra_mode']): args['ipv6_ra_mode'] = subnet['ipv6_ra_mode'] diff --git a/neutron/db/db_base_plugin_v2.py b/neutron/db/db_base_plugin_v2.py index d2b5f89972f..b0d23d26199 100644 --- a/neutron/db/db_base_plugin_v2.py +++ b/neutron/db/db_base_plugin_v2.py @@ -13,6 +13,8 @@ # License for the specific language governing permissions and limitations 
# under the License. +import functools + import netaddr from oslo_config import cfg from oslo_db import exception as db_exc @@ -34,7 +36,9 @@ from neutron import context as ctx from neutron.db import api as db_api from neutron.db import db_base_plugin_common from neutron.db import ipam_non_pluggable_backend +from neutron.db import ipam_pluggable_backend from neutron.db import models_v2 +from neutron.db import rbac_db_models as rbac_db from neutron.db import sqlalchemyutils from neutron.extensions import l3 from neutron.i18n import _LE, _LI @@ -98,7 +102,10 @@ class NeutronDbPluginV2(db_base_plugin_common.DbBasePluginCommon, self.nova_notifier.record_port_status_changed) def set_ipam_backend(self): - self.ipam = ipam_non_pluggable_backend.IpamNonPluggableBackend() + if cfg.CONF.ipam_driver: + self.ipam = ipam_pluggable_backend.IpamPluggableBackend() + else: + self.ipam = ipam_non_pluggable_backend.IpamNonPluggableBackend() def _validate_host_route(self, route, ip_version): try: @@ -235,7 +242,6 @@ class NeutronDbPluginV2(db_base_plugin_common.DbBasePluginCommon, 'name': n['name'], 'admin_state_up': n['admin_state_up'], 'mtu': n.get('mtu', constants.DEFAULT_NETWORK_MTU), - 'shared': n['shared'], 'status': n.get('status', constants.NET_STATUS_ACTIVE)} # TODO(pritesh): Move vlan_transparent to the extension module. 
# vlan_transparent here is only added if the vlantransparent @@ -244,8 +250,14 @@ class NeutronDbPluginV2(db_base_plugin_common.DbBasePluginCommon, attributes.ATTR_NOT_SPECIFIED): args['vlan_transparent'] = n['vlan_transparent'] network = models_v2.Network(**args) + if n['shared']: + entry = rbac_db.NetworkRBAC( + network=network, action='access_as_shared', + target_tenant='*', tenant_id=network['tenant_id']) + context.session.add(entry) context.session.add(network) - return self._make_network_dict(network, process_extensions=False) + return self._make_network_dict(network, process_extensions=False, + context=context) def update_network(self, context, id, network): n = network['network'] @@ -253,13 +265,25 @@ class NeutronDbPluginV2(db_base_plugin_common.DbBasePluginCommon, network = self._get_network(context, id) # validate 'shared' parameter if 'shared' in n: + entry = None + for item in network.rbac_entries: + if (item.action == 'access_as_shared' and + item.target_tenant == '*'): + entry = item + break + setattr(network, 'shared', True if entry else False) self._validate_shared_update(context, id, network, n) + update_shared = n.pop('shared') + if update_shared and not entry: + entry = rbac_db.NetworkRBAC( + network=network, action='access_as_shared', + target_tenant='*', tenant_id=network['tenant_id']) + context.session.add(entry) + elif not update_shared and entry: + context.session.delete(entry) + context.session.expire(network, ['rbac_entries']) network.update(n) - # also update shared in all the subnets for this network - subnets = self._get_subnets_by_network(context, id) - for subnet in subnets: - subnet['shared'] = network['shared'] - return self._make_network_dict(network) + return self._make_network_dict(network, context=context) def delete_network(self, context, id): with context.session.begin(subtransactions=True): @@ -285,14 +309,16 @@ class NeutronDbPluginV2(db_base_plugin_common.DbBasePluginCommon, def get_network(self, context, id, fields=None): 
network = self._get_network(context, id) - return self._make_network_dict(network, fields) + return self._make_network_dict(network, fields, context=context) def get_networks(self, context, filters=None, fields=None, sorts=None, limit=None, marker=None, page_reverse=False): marker_obj = self._get_marker_obj(context, 'network', limit, marker) + make_network_dict = functools.partial(self._make_network_dict, + context=context) return self._get_collection(context, models_v2.Network, - self._make_network_dict, + make_network_dict, filters=filters, fields=fields, sorts=sorts, limit=limit, @@ -448,10 +474,10 @@ class NeutronDbPluginV2(db_base_plugin_common.DbBasePluginCommon, with context.session.begin(subtransactions=True): network = self._get_network(context, s["network_id"]) - subnet = self.ipam.allocate_subnet(context, - network, - s, - subnetpool_id) + subnet, ipam_subnet = self.ipam.allocate_subnet(context, + network, + s, + subnetpool_id) if hasattr(network, 'external') and network.external: self._update_router_gw_ports(context, network, @@ -459,8 +485,9 @@ class NeutronDbPluginV2(db_base_plugin_common.DbBasePluginCommon, # If this subnet supports auto-addressing, then update any # internal ports on the network with addresses for this subnet. 
if ipv6_utils.is_auto_address_subnet(subnet): - self.ipam.add_auto_addrs_on_network_ports(context, subnet) - return self._make_subnet_dict(subnet) + self.ipam.add_auto_addrs_on_network_ports(context, subnet, + ipam_subnet) + return self._make_subnet_dict(subnet, context=context) def _get_subnetpool_id(self, subnet): """Returns the subnetpool id for this request @@ -539,22 +566,25 @@ class NeutronDbPluginV2(db_base_plugin_common.DbBasePluginCommon, s['ip_version'] = db_subnet.ip_version s['cidr'] = db_subnet.cidr s['id'] = db_subnet.id + s['tenant_id'] = db_subnet.tenant_id self._validate_subnet(context, s, cur_subnet=db_subnet) + db_pools = [netaddr.IPRange(p['first_ip'], p['last_ip']) + for p in db_subnet.allocation_pools] + + range_pools = None + if s.get('allocation_pools') is not None: + # Convert allocation pools to IPRange to simplify future checks + range_pools = self.ipam.pools_to_ip_range(s['allocation_pools']) + s['allocation_pools'] = range_pools if s.get('gateway_ip') is not None: - if s.get('allocation_pools') is not None: - allocation_pools = [{'start': p['start'], 'end': p['end']} - for p in s['allocation_pools']] - else: - allocation_pools = [{'start': p['first_ip'], - 'end': p['last_ip']} - for p in db_subnet.allocation_pools] - self.ipam.validate_gw_out_of_pools(s["gateway_ip"], - allocation_pools) + pools = range_pools if range_pools is not None else db_pools + self.ipam.validate_gw_out_of_pools(s["gateway_ip"], pools) with context.session.begin(subtransactions=True): - subnet, changes = self.ipam.update_db_subnet(context, id, s) - result = self._make_subnet_dict(subnet) + subnet, changes = self.ipam.update_db_subnet(context, id, s, + db_pools) + result = self._make_subnet_dict(subnet, context=context) # Keep up with fields that changed result.update(changes) return result @@ -612,7 +642,8 @@ class NeutronDbPluginV2(db_base_plugin_common.DbBasePluginCommon, in_(AUTO_DELETE_PORT_OWNERS))) network_ports = qry_network_ports.all() if network_ports: - 
map(context.session.delete, network_ports) + for port in network_ports: + context.session.delete(port) # Check if there are more IP allocations, unless # is_auto_address_subnet is True. In that case the check is # unnecessary. This additional check not only would be wasteful @@ -631,10 +662,13 @@ class NeutronDbPluginV2(db_base_plugin_common.DbBasePluginCommon, raise n_exc.SubnetInUse(subnet_id=id) context.session.delete(subnet) + # Delete related ipam subnet manually, + # since there is no FK relationship + self.ipam.delete_subnet(context, id) def get_subnet(self, context, id, fields=None): subnet = self._get_subnet(context, id) - return self._make_subnet_dict(subnet, fields) + return self._make_subnet_dict(subnet, fields, context=context) def get_subnets(self, context, filters=None, fields=None, sorts=None, limit=None, marker=None, @@ -914,7 +948,7 @@ class NeutronDbPluginV2(db_base_plugin_common.DbBasePluginCommon, if subnet_ids: query = query.filter(IPAllocation.subnet_id.in_(subnet_ids)) - query = self._apply_filters_to_query(query, Port, filters) + query = self._apply_filters_to_query(query, Port, filters, context) if limit and page_reverse and sorts: sorts = [(s[0], not s[1]) for s in sorts] query = sqlalchemyutils.paginate_query(query, Port, limit, diff --git a/neutron/db/dvr_mac_db.py b/neutron/db/dvr_mac_db.py index c0f0d656aa7..0502aa0029f 100644 --- a/neutron/db/dvr_mac_db.py +++ b/neutron/db/dvr_mac_db.py @@ -35,8 +35,15 @@ LOG = logging.getLogger(__name__) dvr_mac_address_opts = [ cfg.StrOpt('dvr_base_mac', default="fa:16:3f:00:00:00", - help=_('The base mac address used for unique ' - 'DVR instances by Neutron')), + help=_("The base mac address used for unique " + "DVR instances by Neutron. The first 3 octets will " + "remain unchanged. If the 4th octet is not 00, it will " + "also be used. The others will be randomly generated. 
" "The 'dvr_base_mac' *must* be different from " "'base_mac' to avoid mixing them up with MACs " "allocated for tenant ports. A 4 octet example would be " "dvr_base_mac = fa:16:3f:4f:00:00. The default is 3 " "octets")), ] cfg.CONF.register_opts(dvr_mac_address_opts) diff --git a/neutron/db/flavors_db.py b/neutron/db/flavors_db.py new file mode 100644 index 00000000000..75f5241be9b --- /dev/null +++ b/neutron/db/flavors_db.py @@ -0,0 +1,356 @@ +# All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License.
+ +from oslo_log import log as logging +from oslo_serialization import jsonutils +from oslo_utils import importutils +from oslo_utils import uuidutils +import sqlalchemy as sa +from sqlalchemy import orm +from sqlalchemy.orm import exc as sa_exc + +from neutron.common import exceptions as qexception +from neutron.db import common_db_mixin +from neutron.db import model_base +from neutron.db import models_v2 +from neutron.plugins.common import constants + + +LOG = logging.getLogger(__name__) + + +# Flavor Exceptions +class FlavorNotFound(qexception.NotFound): + message = _("Flavor %(flavor_id)s could not be found") + + +class FlavorInUse(qexception.InUse): + message = _("Flavor %(flavor_id)s is used by some service instance") + + +class ServiceProfileNotFound(qexception.NotFound): + message = _("Service Profile %(sp_id)s could not be found") + + +class ServiceProfileInUse(qexception.InUse): + message = _("Service Profile %(sp_id)s is used by some service instance") + + +class FlavorServiceProfileBindingExists(qexception.Conflict): + message = _("Service Profile %(sp_id)s is already associated " + "with flavor %(fl_id)s") + + +class FlavorServiceProfileBindingNotFound(qexception.NotFound): + message = _("Service Profile %(sp_id)s is not associated " + "with flavor %(fl_id)s") + + +class DummyCorePlugin(object): + pass + + +class DummyServicePlugin(object): + + def driver_loaded(self, driver, service_profile): + pass + + def get_plugin_type(self): + return constants.DUMMY + + def get_plugin_description(self): + return "Dummy service plugin, aware of flavors" + + +class DummyServiceDriver(object): + + @staticmethod + def get_service_type(): + return constants.DUMMY + + def __init__(self, plugin): + pass + + +class Flavor(model_base.BASEV2, models_v2.HasId): + name = sa.Column(sa.String(255)) + description = sa.Column(sa.String(1024)) + enabled = sa.Column(sa.Boolean, nullable=False, default=True, + server_default=sa.sql.true()) + # Make it True for multi-type flavors + 
service_type = sa.Column(sa.String(36), nullable=True) + service_profiles = orm.relationship("FlavorServiceProfileBinding", + cascade="all, delete-orphan") + + +class ServiceProfile(model_base.BASEV2, models_v2.HasId): + description = sa.Column(sa.String(1024)) + driver = sa.Column(sa.String(1024), nullable=False) + enabled = sa.Column(sa.Boolean, nullable=False, default=True, + server_default=sa.sql.true()) + metainfo = sa.Column(sa.String(4096)) + flavors = orm.relationship("FlavorServiceProfileBinding") + + +class FlavorServiceProfileBinding(model_base.BASEV2): + flavor_id = sa.Column(sa.String(36), + sa.ForeignKey("flavors.id", + ondelete="CASCADE"), + nullable=False, primary_key=True) + flavor = orm.relationship(Flavor) + service_profile_id = sa.Column(sa.String(36), + sa.ForeignKey("serviceprofiles.id", + ondelete="CASCADE"), + nullable=False, primary_key=True) + service_profile = orm.relationship(ServiceProfile) + + +class FlavorManager(common_db_mixin.CommonDbMixin): + """Class to support flavors and service profiles.""" + + supported_extension_aliases = ["flavors"] + + def __init__(self, manager=None): + # manager = None is UT usage where FlavorManager is loaded as + # a core plugin + self.manager = manager + + def get_plugin_name(self): + return constants.FLAVORS + + def get_plugin_type(self): + return constants.FLAVORS + + def get_plugin_description(self): + return "Neutron Flavors and Service Profiles manager plugin" + + def _get_flavor(self, context, flavor_id): + try: + return self._get_by_id(context, Flavor, flavor_id) + except sa_exc.NoResultFound: + raise FlavorNotFound(flavor_id=flavor_id) + + def _get_service_profile(self, context, sp_id): + try: + return self._get_by_id(context, ServiceProfile, sp_id) + except sa_exc.NoResultFound: + raise ServiceProfileNotFound(sp_id=sp_id) + + def _make_flavor_dict(self, flavor_db, fields=None): + res = {'id': flavor_db['id'], + 'name': flavor_db['name'], + 'description': flavor_db['description'], + 'enabled': 
flavor_db['enabled'], + 'service_profiles': []} + if flavor_db.service_profiles: + res['service_profiles'] = [sp['service_profile_id'] + for sp in flavor_db.service_profiles] + return self._fields(res, fields) + + def _make_service_profile_dict(self, sp_db, fields=None): + res = {'id': sp_db['id'], + 'description': sp_db['description'], + 'driver': sp_db['driver'], + 'enabled': sp_db['enabled'], + 'metainfo': sp_db['metainfo']} + if sp_db.flavors: + res['flavors'] = [fl['flavor_id'] + for fl in sp_db.flavors] + return self._fields(res, fields) + + def _ensure_flavor_not_in_use(self, context, flavor_id): + """Checks that flavor is not associated with service instance.""" + # Future TODO(enikanorov): check that there is no binding to + # instances. Shall address in future upon getting the right + # flavor supported driver + pass + + def _ensure_service_profile_not_in_use(self, context, sp_id): + # Future TODO(enikanorov): check that there is no binding to instances + # and no binding to flavors. Shall be addressed in future + fl = (context.session.query(FlavorServiceProfileBinding). 
+ filter_by(service_profile_id=sp_id).first()) + if fl: + raise ServiceProfileInUse(sp_id=sp_id) + + def create_flavor(self, context, flavor): + fl = flavor['flavor'] + with context.session.begin(subtransactions=True): + fl_db = Flavor(id=uuidutils.generate_uuid(), + name=fl['name'], + description=fl['description'], + enabled=fl['enabled']) + context.session.add(fl_db) + return self._make_flavor_dict(fl_db) + + def update_flavor(self, context, flavor_id, flavor): + fl = flavor['flavor'] + with context.session.begin(subtransactions=True): + self._ensure_flavor_not_in_use(context, flavor_id) + fl_db = self._get_flavor(context, flavor_id) + fl_db.update(fl) + + return self._make_flavor_dict(fl_db) + + def get_flavor(self, context, flavor_id, fields=None): + fl = self._get_flavor(context, flavor_id) + return self._make_flavor_dict(fl, fields) + + def delete_flavor(self, context, flavor_id): + with context.session.begin(subtransactions=True): + self._ensure_flavor_not_in_use(context, flavor_id) + fl_db = self._get_flavor(context, flavor_id) + context.session.delete(fl_db) + + def get_flavors(self, context, filters=None, fields=None, + sorts=None, limit=None, marker=None, page_reverse=False): + return self._get_collection(context, Flavor, self._make_flavor_dict, + filters=filters, fields=fields, + sorts=sorts, limit=limit, + marker_obj=marker, + page_reverse=page_reverse) + + def create_flavor_service_profile(self, context, + service_profile, flavor_id): + sp = service_profile['service_profile'] + with context.session.begin(subtransactions=True): + bind_qry = context.session.query(FlavorServiceProfileBinding) + binding = bind_qry.filter_by(service_profile_id=sp['id'], + flavor_id=flavor_id).first() + if binding: + raise FlavorServiceProfileBindingExists( + sp_id=sp['id'], fl_id=flavor_id) + binding = FlavorServiceProfileBinding( + service_profile_id=sp['id'], + flavor_id=flavor_id) + context.session.add(binding) + fl_db = self._get_flavor(context, flavor_id) + sps = 
[x['service_profile_id'] for x in fl_db.service_profiles] + return sps + + def delete_flavor_service_profile(self, context, + service_profile_id, flavor_id): + with context.session.begin(subtransactions=True): + binding = (context.session.query(FlavorServiceProfileBinding). + filter_by(service_profile_id=service_profile_id, + flavor_id=flavor_id).first()) + if not binding: + raise FlavorServiceProfileBindingNotFound( + sp_id=service_profile_id, fl_id=flavor_id) + context.session.delete(binding) + + def get_flavor_service_profile(self, context, + service_profile_id, flavor_id, fields=None): + with context.session.begin(subtransactions=True): + binding = (context.session.query(FlavorServiceProfileBinding). + filter_by(service_profile_id=service_profile_id, + flavor_id=flavor_id).first()) + if not binding: + raise FlavorServiceProfileBindingNotFound( + sp_id=service_profile_id, fl_id=flavor_id) + res = {'service_profile_id': service_profile_id, + 'flavor_id': flavor_id} + return self._fields(res, fields) + + def _load_dummy_driver(self, driver): + # The 'driver' argument is ignored; the dummy driver is always used + return DummyServiceDriver + + def _load_driver(self, profile): + driver_klass = importutils.import_class(profile.driver) + return driver_klass + + def create_service_profile(self, context, service_profile): + sp = service_profile['service_profile'] + with context.session.begin(subtransactions=True): + driver_klass = self._load_dummy_driver(sp['driver']) + # 'get_service_type' must be a static method so it can't be changed + svc_type = DummyServiceDriver.get_service_type() + + sp_db = ServiceProfile(id=uuidutils.generate_uuid(), + description=sp['description'], + driver=svc_type, + enabled=sp['enabled'], + metainfo=jsonutils.dumps(sp['metainfo'])) + context.session.add(sp_db) + try: + # driver_klass = self._load_dummy_driver(sp_db) + # Future TODO(madhu_ak): commented for now to load dummy driver + # until there is a flavor-supported driver + # plugin =
self.manager.get_service_plugins()[svc_type] + # plugin.driver_loaded(driver_klass(plugin), sp_db) + # svc_type = DummyServiceDriver.get_service_type() + # plugin = self.manager.get_service_plugins()[svc_type] + # plugin = FlavorManager(manager.NeutronManager().get_instance()) + # plugin = DummyServicePlugin.get_plugin_type(svc_type) + plugin = DummyServicePlugin() + plugin.driver_loaded(driver_klass(svc_type), sp_db) + except Exception: + # Future TODO(enikanorov): raise proper exception + self.delete_service_profile(context, sp_db['id']) + raise + return self._make_service_profile_dict(sp_db) + + def unit_create_service_profile(self, context, service_profile): + # Note: Triggered by unit tests pointing to dummy driver + sp = service_profile['service_profile'] + with context.session.begin(subtransactions=True): + sp_db = ServiceProfile(id=uuidutils.generate_uuid(), + description=sp['description'], + driver=sp['driver'], + enabled=sp['enabled'], + metainfo=sp['metainfo']) + context.session.add(sp_db) + try: + driver_klass = self._load_driver(sp_db) + # require get_service_type be a static method + svc_type = driver_klass.get_service_type() + plugin = self.manager.get_service_plugins()[svc_type] + plugin.driver_loaded(driver_klass(plugin), sp_db) + except Exception: + # Future TODO(enikanorov): raise proper exception + self.delete_service_profile(context, sp_db['id']) + raise + return self._make_service_profile_dict(sp_db) + + def update_service_profile(self, context, + service_profile_id, service_profile): + sp = service_profile['service_profile'] + with context.session.begin(subtransactions=True): + self._ensure_service_profile_not_in_use(context, + service_profile_id) + sp_db = self._get_service_profile(context, service_profile_id) + sp_db.update(sp) + return self._make_service_profile_dict(sp_db) + + def get_service_profile(self, context, sp_id, fields=None): + sp_db = self._get_service_profile(context, sp_id) + return self._make_service_profile_dict(sp_db, 
fields) + + def delete_service_profile(self, context, sp_id): + with context.session.begin(subtransactions=True): + self._ensure_service_profile_not_in_use(context, sp_id) + sp_db = self._get_service_profile(context, sp_id) + context.session.delete(sp_db) + + def get_service_profiles(self, context, filters=None, fields=None, + sorts=None, limit=None, marker=None, + page_reverse=False): + return self._get_collection(context, ServiceProfile, + self._make_service_profile_dict, + filters=filters, fields=fields, + sorts=sorts, limit=limit, + marker_obj=marker, + page_reverse=page_reverse) diff --git a/neutron/db/ipam_backend_mixin.py b/neutron/db/ipam_backend_mixin.py index d4de20937af..e52650de3d8 100644 --- a/neutron/db/ipam_backend_mixin.py +++ b/neutron/db/ipam_backend_mixin.py @@ -52,6 +52,24 @@ class IpamBackendMixin(db_base_plugin_common.DbBasePluginCommon): return str(netaddr.IPNetwork(cidr_net).network + 1) return subnet.get('gateway_ip') + @staticmethod + def pools_to_ip_range(ip_pools): + ip_range_pools = [] + for ip_pool in ip_pools: + try: + ip_range_pools.append(netaddr.IPRange(ip_pool['start'], + ip_pool['end'])) + except netaddr.AddrFormatError: + LOG.info(_LI("Found invalid IP address in pool: " + "%(start)s - %(end)s:"), + {'start': ip_pool['start'], + 'end': ip_pool['end']}) + raise n_exc.InvalidAllocationPool(pool=ip_pool) + return ip_range_pools + + def delete_subnet(self, context, subnet_id): + pass + def validate_pools_with_subnetpool(self, subnet): """Verifies that allocation pools are set correctly @@ -120,42 +138,43 @@ class IpamBackendMixin(db_base_plugin_common.DbBasePluginCommon): def _update_subnet_dns_nameservers(self, context, id, s): old_dns_list = self._get_dns_by_subnet(context, id) - new_dns_addr_set = set(s["dns_nameservers"]) - old_dns_addr_set = set([dns['address'] - for dns in old_dns_list]) + new_dns_addr_list = s["dns_nameservers"] - new_dns = list(new_dns_addr_set) - for dns_addr in old_dns_addr_set - new_dns_addr_set: - for 
dns in old_dns_list: - if dns['address'] == dns_addr: - context.session.delete(dns) - for dns_addr in new_dns_addr_set - old_dns_addr_set: + # NOTE(changzhi): delete all existing DNS nameservers from the db + # when updating a subnet's DNS nameservers, then store the new + # nameservers one by one, preserving their order. + for dns in old_dns_list: + context.session.delete(dns) + + for order, server in enumerate(new_dns_addr_list): dns = models_v2.DNSNameServer( - address=dns_addr, + address=server, + order=order, subnet_id=id) context.session.add(dns) del s["dns_nameservers"] - return new_dns + return new_dns_addr_list def _update_subnet_allocation_pools(self, context, subnet_id, s): context.session.query(models_v2.IPAllocationPool).filter_by( subnet_id=subnet_id).delete() - new_pools = [models_v2.IPAllocationPool(first_ip=p['start'], last_ip=p['end'], + pools = ((netaddr.IPAddress(p.first, p.version).format(), + netaddr.IPAddress(p.last, p.version).format()) + for p in s['allocation_pools']) + new_pools = [models_v2.IPAllocationPool(first_ip=p[0], + last_ip=p[1], subnet_id=subnet_id) - for p in s['allocation_pools']] context.session.add_all(new_pools) # Call static method with self to redefine in child # (non-pluggable backend) self._rebuild_availability_ranges(context, [s]) - # Gather new pools for result: - result_pools = [{'start': pool['start'], - 'end': pool['end']} - for pool in s['allocation_pools']] + # Gather new pools for result + result_pools = [{'start': p[0], 'end': p[1]} for p in pools] del s['allocation_pools'] return result_pools - def update_db_subnet(self, context, subnet_id, s): + def update_db_subnet(self, context, subnet_id, s, oldpools): changes = {} if "dns_nameservers" in s: changes['dns_nameservers'] = ( @@ -239,38 +258,23 @@ class IpamBackendMixin(db_base_plugin_common.DbBasePluginCommon): LOG.debug("Performing IP validity checks on allocation pools") ip_sets = [] for ip_pool in ip_pools: - try: - start_ip = netaddr.IPAddress(ip_pool['start']) - end_ip
= netaddr.IPAddress(ip_pool['end']) - except netaddr.AddrFormatError: - LOG.info(_LI("Found invalid IP address in pool: " - "%(start)s - %(end)s:"), - {'start': ip_pool['start'], - 'end': ip_pool['end']}) - raise n_exc.InvalidAllocationPool(pool=ip_pool) + start_ip = netaddr.IPAddress(ip_pool.first, ip_pool.version) + end_ip = netaddr.IPAddress(ip_pool.last, ip_pool.version) if (start_ip.version != subnet.version or end_ip.version != subnet.version): LOG.info(_LI("Specified IP addresses do not match " "the subnet IP version")) raise n_exc.InvalidAllocationPool(pool=ip_pool) - if end_ip < start_ip: - LOG.info(_LI("Start IP (%(start)s) is greater than end IP " - "(%(end)s)"), - {'start': ip_pool['start'], 'end': ip_pool['end']}) - raise n_exc.InvalidAllocationPool(pool=ip_pool) if start_ip < subnet_first_ip or end_ip > subnet_last_ip: LOG.info(_LI("Found pool larger than subnet " "CIDR:%(start)s - %(end)s"), - {'start': ip_pool['start'], - 'end': ip_pool['end']}) + {'start': start_ip, 'end': end_ip}) raise n_exc.OutOfBoundsAllocationPool( pool=ip_pool, subnet_cidr=subnet_cidr) # Valid allocation pool # Create an IPSet for it for easily verifying overlaps - ip_sets.append(netaddr.IPSet(netaddr.IPRange( - ip_pool['start'], - ip_pool['end']).cidrs())) + ip_sets.append(netaddr.IPSet(ip_pool.cidrs())) LOG.debug("Checking for overlaps among allocation pools " "and gateway ip") @@ -291,22 +295,54 @@ class IpamBackendMixin(db_base_plugin_common.DbBasePluginCommon): pool_2=r_range, subnet_cidr=subnet_cidr) + def _validate_max_ips_per_port(self, fixed_ip_list): + if len(fixed_ip_list) > cfg.CONF.max_fixed_ips_per_port: + msg = _('Exceeded maximum amount of fixed ips per port') + raise n_exc.InvalidInput(error_message=msg) + + def _get_subnet_for_fixed_ip(self, context, fixed, network_id): + if 'subnet_id' in fixed: + subnet = self._get_subnet(context, fixed['subnet_id']) + if subnet['network_id'] != network_id: + msg = (_("Failed to create port on network %(network_id)s" + ", 
because fixed_ips included invalid subnet " + "%(subnet_id)s") % + {'network_id': network_id, + 'subnet_id': fixed['subnet_id']}) + raise n_exc.InvalidInput(error_message=msg) + # Ensure that the IP is valid on the subnet + if ('ip_address' in fixed and + not ipam_utils.check_subnet_ip(subnet['cidr'], + fixed['ip_address'])): + raise n_exc.InvalidIpForSubnet(ip_address=fixed['ip_address']) + return subnet + + if 'ip_address' not in fixed: + msg = _('IP allocation requires subnet_id or ip_address') + raise n_exc.InvalidInput(error_message=msg) + + filter = {'network_id': [network_id]} + subnets = self._get_subnets(context, filters=filter) + + for subnet in subnets: + if ipam_utils.check_subnet_ip(subnet['cidr'], + fixed['ip_address']): + return subnet + raise n_exc.InvalidIpForNetwork(ip_address=fixed['ip_address']) + def _prepare_allocation_pools(self, allocation_pools, cidr, gateway_ip): """Returns allocation pools represented as list of IPRanges""" if not attributes.is_attr_set(allocation_pools): return ipam_utils.generate_pools(cidr, gateway_ip) - self._validate_allocation_pools(allocation_pools, cidr) + ip_range_pools = self.pools_to_ip_range(allocation_pools) + self._validate_allocation_pools(ip_range_pools, cidr) if gateway_ip: - self.validate_gw_out_of_pools(gateway_ip, allocation_pools) - return [netaddr.IPRange(p['start'], p['end']) - for p in allocation_pools] + self.validate_gw_out_of_pools(gateway_ip, ip_range_pools) + return ip_range_pools def validate_gw_out_of_pools(self, gateway_ip, pools): - for allocation_pool in pools: - pool_range = netaddr.IPRange( - allocation_pool['start'], - allocation_pool['end']) + for pool_range in pools: if netaddr.IPAddress(gateway_ip) in pool_range: raise n_exc.GatewayConflictWithAllocationPools( pool=pool_range, @@ -373,7 +409,7 @@ class IpamBackendMixin(db_base_plugin_common.DbBasePluginCommon): enable_eagerloads(False).filter_by(id=port_id)) if not context.is_admin: query = 
query.filter_by(tenant_id=context.tenant_id) - query.delete() + context.session.delete(query.first()) def _save_subnet(self, context, network, @@ -388,11 +424,15 @@ class IpamBackendMixin(db_base_plugin_common.DbBasePluginCommon): subnet = models_v2.Subnet(**subnet_args) context.session.add(subnet) + # NOTE(changzhi): store DNS nameservers in the DB one by one, + # preserving their order, when creating a subnet with DNS nameservers if attributes.is_attr_set(dns_nameservers): - for addr in dns_nameservers: - ns = models_v2.DNSNameServer(address=addr, - subnet_id=subnet.id) - context.session.add(ns) + for order, server in enumerate(dns_nameservers): + dns = models_v2.DNSNameServer( + address=server, + order=order, + subnet_id=subnet.id) + context.session.add(dns) if attributes.is_attr_set(host_routes): for rt in host_routes: diff --git a/neutron/db/ipam_non_pluggable_backend.py b/neutron/db/ipam_non_pluggable_backend.py index c515848b5e1..5f5daa7ac7d 100644 --- a/neutron/db/ipam_non_pluggable_backend.py +++ b/neutron/db/ipam_non_pluggable_backend.py @@ -14,7 +14,6 @@ # under the License.
import netaddr -from oslo_config import cfg from oslo_db import exception as db_exc from oslo_log import log as logging from sqlalchemy import and_ @@ -29,7 +28,6 @@ from neutron.db import ipam_backend_mixin from neutron.db import models_v2 from neutron.ipam import requests as ipam_req from neutron.ipam import subnet_alloc -from neutron.ipam import utils as ipam_utils LOG = logging.getLogger(__name__) @@ -242,49 +240,17 @@ class IpamNonPluggableBackend(ipam_backend_mixin.IpamBackendMixin): """ fixed_ip_set = [] for fixed in fixed_ips: - found = False - if 'subnet_id' not in fixed: - if 'ip_address' not in fixed: - msg = _('IP allocation requires subnet_id or ip_address') - raise n_exc.InvalidInput(error_message=msg) - - filter = {'network_id': [network_id]} - subnets = self._get_subnets(context, filters=filter) - for subnet in subnets: - if ipam_utils.check_subnet_ip(subnet['cidr'], - fixed['ip_address']): - found = True - subnet_id = subnet['id'] - break - if not found: - raise n_exc.InvalidIpForNetwork( - ip_address=fixed['ip_address']) - else: - subnet = self._get_subnet(context, fixed['subnet_id']) - if subnet['network_id'] != network_id: - msg = (_("Failed to create port on network %(network_id)s" - ", because fixed_ips included invalid subnet " - "%(subnet_id)s") % - {'network_id': network_id, - 'subnet_id': fixed['subnet_id']}) - raise n_exc.InvalidInput(error_message=msg) - subnet_id = subnet['id'] + subnet = self._get_subnet_for_fixed_ip(context, fixed, network_id) is_auto_addr_subnet = ipv6_utils.is_auto_address_subnet(subnet) if 'ip_address' in fixed: # Ensure that the IP's are unique if not IpamNonPluggableBackend._check_unique_ip( context, network_id, - subnet_id, fixed['ip_address']): + subnet['id'], fixed['ip_address']): raise n_exc.IpAddressInUse(net_id=network_id, ip_address=fixed['ip_address']) - # Ensure that the IP is valid on the subnet - if (not found and - not ipam_utils.check_subnet_ip(subnet['cidr'], - fixed['ip_address'])): - raise 
n_exc.InvalidIpForSubnet( - ip_address=fixed['ip_address']) if (is_auto_addr_subnet and device_owner not in constants.ROUTER_INTERFACE_OWNERS): @@ -292,23 +258,20 @@ class IpamNonPluggableBackend(ipam_backend_mixin.IpamBackendMixin): "assigned to a port on subnet %(id)s since the " "subnet is configured for automatic addresses") % {'address': fixed['ip_address'], - 'id': subnet_id}) + 'id': subnet['id']}) raise n_exc.InvalidInput(error_message=msg) - fixed_ip_set.append({'subnet_id': subnet_id, + fixed_ip_set.append({'subnet_id': subnet['id'], 'ip_address': fixed['ip_address']}) else: # A scan for auto-address subnets on the network is done # separately so that all such subnets (not just those # listed explicitly here by subnet ID) are associated # with the port. - if (device_owner in constants.ROUTER_INTERFACE_OWNERS or - device_owner == constants.DEVICE_OWNER_ROUTER_SNAT or + if (device_owner in constants.ROUTER_INTERFACE_OWNERS_SNAT or not is_auto_addr_subnet): - fixed_ip_set.append({'subnet_id': subnet_id}) + fixed_ip_set.append({'subnet_id': subnet['id']}) - if len(fixed_ip_set) > cfg.CONF.max_fixed_ips_per_port: - msg = _('Exceeded maximim amount of fixed ips per port') - raise n_exc.InvalidInput(error_message=msg) + self._validate_max_ips_per_port(fixed_ip_set) return fixed_ip_set def _allocate_fixed_ips(self, context, fixed_ips, mac_address): @@ -382,8 +345,7 @@ class IpamNonPluggableBackend(ipam_backend_mixin.IpamBackendMixin): net_id_filter = {'network_id': [p['network_id']]} subnets = self._get_subnets(context, filters=net_id_filter) is_router_port = ( - p['device_owner'] in constants.ROUTER_INTERFACE_OWNERS or - p['device_owner'] == constants.DEVICE_OWNER_ROUTER_SNAT) + p['device_owner'] in constants.ROUTER_INTERFACE_OWNERS_SNAT) fixed_configured = p['fixed_ips'] is not attributes.ATTR_NOT_SPECIFIED if fixed_configured: @@ -431,17 +393,16 @@ class IpamNonPluggableBackend(ipam_backend_mixin.IpamBackendMixin): return ips - def 
add_auto_addrs_on_network_ports(self, context, subnet): + def add_auto_addrs_on_network_ports(self, context, subnet, ipam_subnet): """For an auto-address subnet, add addrs for ports on the net.""" with context.session.begin(subtransactions=True): network_id = subnet['network_id'] port_qry = context.session.query(models_v2.Port) - for port in port_qry.filter( + ports = port_qry.filter( and_(models_v2.Port.network_id == network_id, - models_v2.Port.device_owner != - constants.DEVICE_OWNER_ROUTER_SNAT, ~models_v2.Port.device_owner.in_( - constants.ROUTER_INTERFACE_OWNERS))): + constants.ROUTER_INTERFACE_OWNERS_SNAT))) + for port in ports: ip_address = self._calculate_ipv6_eui64_addr( context, subnet, port['mac_address']) allocated = models_v2.IPAllocation(network_id=network_id, @@ -499,11 +460,12 @@ class IpamNonPluggableBackend(ipam_backend_mixin.IpamBackendMixin): subnet = self._save_subnet(context, network, self._make_subnet_args( - network.shared, subnet_request, subnet, subnetpool_id), subnet['dns_nameservers'], subnet['host_routes'], subnet_request) - return subnet + # ipam_subnet is not expected to be allocated for non pluggable ipam, + # so just return None for it (second element in returned tuple) + return subnet, None diff --git a/neutron/db/ipam_pluggable_backend.py b/neutron/db/ipam_pluggable_backend.py new file mode 100644 index 00000000000..dd35ac0b271 --- /dev/null +++ b/neutron/db/ipam_pluggable_backend.py @@ -0,0 +1,451 @@ +# Copyright (c) 2015 Infoblox Inc. +# All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the +# License for the specific language governing permissions and limitations +# under the License. + +import netaddr +from oslo_db import exception as db_exc +from oslo_log import log as logging +from oslo_utils import excutils +from sqlalchemy import and_ + +from neutron.api.v2 import attributes +from neutron.common import constants +from neutron.common import exceptions as n_exc +from neutron.common import ipv6_utils +from neutron.db import ipam_backend_mixin +from neutron.db import models_v2 +from neutron.i18n import _LE +from neutron.ipam import driver +from neutron.ipam import exceptions as ipam_exc +from neutron.ipam import requests as ipam_req + + +LOG = logging.getLogger(__name__) + + +class IpamPluggableBackend(ipam_backend_mixin.IpamBackendMixin): + + def _get_failed_ips(self, all_ips, success_ips): + # Use a list, not a generator: a generator would be exhausted + # after the first 'not in' membership test below. + ips_list = [ip_dict['ip_address'] for ip_dict in success_ips] + return (ip_dict['ip_address'] for ip_dict in all_ips + if ip_dict['ip_address'] not in ips_list) + + def _ipam_deallocate_ips(self, context, ipam_driver, port, ips, + revert_on_fail=True): + """Deallocate set of ips over IPAM. 
+ + If any single ip deallocation fails, tries to allocate deallocated + ip addresses with fixed ip request + """ + deallocated = [] + + try: + for ip in ips: + try: + ipam_subnet = ipam_driver.get_subnet(ip['subnet_id']) + ipam_subnet.deallocate(ip['ip_address']) + deallocated.append(ip) + except n_exc.SubnetNotFound: + LOG.debug("Subnet was not found on ip deallocation: %s", + ip) + except Exception: + with excutils.save_and_reraise_exception(): + LOG.debug("An exception occurred during IP deallocation.") + if revert_on_fail and deallocated: + LOG.debug("Reverting deallocation") + self._ipam_allocate_ips(context, ipam_driver, port, + deallocated, revert_on_fail=False) + elif not revert_on_fail and ips: + addresses = ', '.join(self._get_failed_ips(ips, + deallocated)) + LOG.error(_LE("IP deallocation failed on " + "external system for %s"), addresses) + return deallocated + + def _ipam_try_allocate_ip(self, context, ipam_driver, port, ip_dict): + factory = ipam_driver.get_address_request_factory() + ip_request = factory.get_request(context, port, ip_dict) + ipam_subnet = ipam_driver.get_subnet(ip_dict['subnet_id']) + return ipam_subnet.allocate(ip_request) + + def _ipam_allocate_single_ip(self, context, ipam_driver, port, subnets): + """Allocates single ip from set of subnets + + Raises n_exc.IpAddressGenerationFailure if allocation failed for + all subnets. + """ + for subnet in subnets: + try: + return [self._ipam_try_allocate_ip(context, ipam_driver, + port, subnet), + subnet] + except ipam_exc.IpAddressGenerationFailure: + continue + raise n_exc.IpAddressGenerationFailure( + net_id=port['network_id']) + + def _ipam_allocate_ips(self, context, ipam_driver, port, ips, + revert_on_fail=True): + """Allocate set of ips over IPAM. + + If any single ip allocation fails, tries to deallocate all + allocated ip addresses. 
+ """ + allocated = [] + + # we need to start with entries that asked for a specific IP in case + # those IPs happen to be next in the line for allocation for ones that + # didn't ask for a specific IP + ips.sort(key=lambda x: 'ip_address' not in x) + try: + for ip in ips: + # By default IP info is dict, used to allocate single ip + # from single subnet. + # IP info can be list, used to allocate single ip from + # multiple subnets (i.e. first successful ip allocation + # is returned) + ip_list = [ip] if isinstance(ip, dict) else ip + ip_address, ip_subnet = self._ipam_allocate_single_ip( + context, ipam_driver, port, ip_list) + allocated.append({'ip_address': ip_address, + 'subnet_id': ip_subnet['subnet_id']}) + except Exception: + with excutils.save_and_reraise_exception(): + LOG.debug("An exception occurred during IP allocation.") + + if revert_on_fail and allocated: + LOG.debug("Reverting allocation") + self._ipam_deallocate_ips(context, ipam_driver, port, + allocated, revert_on_fail=False) + elif not revert_on_fail and ips: + addresses = ', '.join(self._get_failed_ips(ips, + allocated)) + LOG.error(_LE("IP allocation failed on " + "external system for %s"), addresses) + + return allocated + + def _ipam_update_allocation_pools(self, context, ipam_driver, subnet): + self._validate_allocation_pools(subnet['allocation_pools'], + subnet['cidr']) + + factory = ipam_driver.get_subnet_request_factory() + subnet_request = factory.get_request(context, subnet, None) + + ipam_driver.update_subnet(subnet_request) + + def delete_subnet(self, context, subnet_id): + ipam_driver = driver.Pool.get_instance(None, context) + ipam_driver.remove_subnet(subnet_id) + + def allocate_ips_for_port_and_store(self, context, port, port_id): + network_id = port['port']['network_id'] + ips = [] + try: + ips = self._allocate_ips_for_port(context, port) + for ip in ips: + ip_address = ip['ip_address'] + subnet_id = ip['subnet_id'] + IpamPluggableBackend._store_ip_allocation( + context, 
ip_address, network_id, + subnet_id, port_id) + except Exception: + with excutils.save_and_reraise_exception(): + if ips: + LOG.debug("An exception occurred during port creation. " + "Reverting IP allocation") + ipam_driver = driver.Pool.get_instance(None, context) + self._ipam_deallocate_ips(context, ipam_driver, + port['port'], ips, + revert_on_fail=False) + + def _allocate_ips_for_port(self, context, port): + """Allocate IP addresses for the port. IPAM version. + + If port['fixed_ips'] is set to 'ATTR_NOT_SPECIFIED', allocate IP + addresses for the port. If port['fixed_ips'] contains an IP address or + a subnet_id then allocate an IP address accordingly. + """ + p = port['port'] + ips = [] + v6_stateless = [] + net_id_filter = {'network_id': [p['network_id']]} + subnets = self._get_subnets(context, filters=net_id_filter) + is_router_port = ( + p['device_owner'] in constants.ROUTER_INTERFACE_OWNERS_SNAT) + + fixed_configured = p['fixed_ips'] is not attributes.ATTR_NOT_SPECIFIED + if fixed_configured: + ips = self._test_fixed_ips_for_port(context, + p["network_id"], + p['fixed_ips'], + p['device_owner']) + # For ports that are not router ports, implicitly include all + # auto-address subnets for address association. + if not is_router_port: + v6_stateless += [subnet for subnet in subnets + if ipv6_utils.is_auto_address_subnet(subnet)] + else: + # Split into v4, v6 stateless and v6 stateful subnets + v4 = [] + v6_stateful = [] + for subnet in subnets: + if subnet['ip_version'] == 4: + v4.append(subnet) + else: + if ipv6_utils.is_auto_address_subnet(subnet): + if not is_router_port: + v6_stateless.append(subnet) + else: + v6_stateful.append(subnet) + + version_subnets = [v4, v6_stateful] + for subnets in version_subnets: + if subnets: + ips.append([{'subnet_id': s['id']} + for s in subnets]) + + for subnet in v6_stateless: + # IP addresses for IPv6 SLAAC and DHCPv6-stateless subnets + # are implicitly included. 
+ ips.append({'subnet_id': subnet['id'], + 'subnet_cidr': subnet['cidr'], + 'eui64_address': True, + 'mac': p['mac_address']}) + ipam_driver = driver.Pool.get_instance(None, context) + return self._ipam_allocate_ips(context, ipam_driver, p, ips) + + def _test_fixed_ips_for_port(self, context, network_id, fixed_ips, + device_owner): + """Test fixed IPs for port. + + Check that configured subnets are valid prior to allocating any + IPs. Include the subnet_id in the result if only an IP address is + configured. + + :raises: InvalidInput, IpAddressInUse, InvalidIpForNetwork, + InvalidIpForSubnet + """ + fixed_ip_list = [] + for fixed in fixed_ips: + subnet = self._get_subnet_for_fixed_ip(context, fixed, network_id) + + is_auto_addr_subnet = ipv6_utils.is_auto_address_subnet(subnet) + if 'ip_address' in fixed: + if (is_auto_addr_subnet and device_owner not in + constants.ROUTER_INTERFACE_OWNERS): + msg = (_("IPv6 address %(address)s can not be directly " + "assigned to a port on subnet %(id)s since the " + "subnet is configured for automatic addresses") % + {'address': fixed['ip_address'], + 'id': subnet['id']}) + raise n_exc.InvalidInput(error_message=msg) + fixed_ip_list.append({'subnet_id': subnet['id'], + 'ip_address': fixed['ip_address']}) + else: + # A scan for auto-address subnets on the network is done + # separately so that all such subnets (not just those + # listed explicitly here by subnet ID) are associated + # with the port. + if (device_owner in constants.ROUTER_INTERFACE_OWNERS_SNAT or + not is_auto_addr_subnet): + fixed_ip_list.append({'subnet_id': subnet['id']}) + + self._validate_max_ips_per_port(fixed_ip_list) + return fixed_ip_list + + def _update_ips_for_port(self, context, port, + original_ips, new_ips, mac): + """Add or remove IPs from the port. 
IPAM version""" + added = [] + removed = [] + changes = self._get_changed_ips_for_port( + context, original_ips, new_ips, port['device_owner']) + # Check if the IP's to add are OK + to_add = self._test_fixed_ips_for_port( + context, port['network_id'], changes.add, + port['device_owner']) + + ipam_driver = driver.Pool.get_instance(None, context) + if changes.remove: + removed = self._ipam_deallocate_ips(context, ipam_driver, port, + changes.remove) + if to_add: + added = self._ipam_allocate_ips(context, ipam_driver, + changes, to_add) + return self.Changes(add=added, + original=changes.original, + remove=removed) + + def save_allocation_pools(self, context, subnet, allocation_pools): + for pool in allocation_pools: + first_ip = str(netaddr.IPAddress(pool.first, pool.version)) + last_ip = str(netaddr.IPAddress(pool.last, pool.version)) + ip_pool = models_v2.IPAllocationPool(subnet=subnet, + first_ip=first_ip, + last_ip=last_ip) + context.session.add(ip_pool) + + def update_port_with_ips(self, context, db_port, new_port, new_mac): + changes = self.Changes(add=[], original=[], remove=[]) + + if 'fixed_ips' in new_port: + original = self._make_port_dict(db_port, + process_extensions=False) + changes = self._update_ips_for_port(context, + db_port, + original["fixed_ips"], + new_port['fixed_ips'], + new_mac) + try: + # Check if the IPs need to be updated + network_id = db_port['network_id'] + for ip in changes.add: + self._store_ip_allocation( + context, ip['ip_address'], network_id, + ip['subnet_id'], db_port.id) + for ip in changes.remove: + self._delete_ip_allocation(context, network_id, + ip['subnet_id'], ip['ip_address']) + self._update_db_port(context, db_port, new_port, network_id, + new_mac) + except Exception: + with excutils.save_and_reraise_exception(): + if 'fixed_ips' in new_port: + LOG.debug("An exception occurred during port update.") + ipam_driver = driver.Pool.get_instance(None, context) + if changes.add: + LOG.debug("Reverting IP allocation.") + 
self._ipam_deallocate_ips(context, ipam_driver, + db_port, changes.add, + revert_on_fail=False) + if changes.remove: + LOG.debug("Reverting IP deallocation.") + self._ipam_allocate_ips(context, ipam_driver, + db_port, changes.remove, + revert_on_fail=False) + return changes + + def delete_port(self, context, id): + # Get fixed_ips list before port deletion + port = self._get_port(context, id) + ipam_driver = driver.Pool.get_instance(None, context) + + super(IpamPluggableBackend, self).delete_port(context, id) + # Deallocating ips via IPAM after port is deleted locally. + # So no need to do rollback actions on remote server + # in case of fail to delete port locally + self._ipam_deallocate_ips(context, ipam_driver, port, + port['fixed_ips']) + + def update_db_subnet(self, context, id, s, old_pools): + ipam_driver = driver.Pool.get_instance(None, context) + if "allocation_pools" in s: + self._ipam_update_allocation_pools(context, ipam_driver, s) + + try: + subnet, changes = super(IpamPluggableBackend, + self).update_db_subnet(context, id, + s, old_pools) + except Exception: + with excutils.save_and_reraise_exception(): + if "allocation_pools" in s and old_pools: + LOG.error( + _LE("An exception occurred during subnet update." 
+ " Reverting allocation pool changes")) + s['allocation_pools'] = old_pools + self._ipam_update_allocation_pools(context, ipam_driver, s) + return subnet, changes + + def add_auto_addrs_on_network_ports(self, context, subnet, ipam_subnet): + """For an auto-address subnet, add addrs for ports on the net.""" + with context.session.begin(subtransactions=True): + network_id = subnet['network_id'] + port_qry = context.session.query(models_v2.Port) + ports = port_qry.filter( + and_(models_v2.Port.network_id == network_id, + ~models_v2.Port.device_owner.in_( + constants.ROUTER_INTERFACE_OWNERS_SNAT))) + for port in ports: + ip_request = ipam_req.AutomaticAddressRequest( + prefix=subnet['cidr'], + mac=port['mac_address']) + ip_address = ipam_subnet.allocate(ip_request) + allocated = models_v2.IPAllocation(network_id=network_id, + port_id=port['id'], + ip_address=ip_address, + subnet_id=subnet['id']) + try: + # Do the insertion of each IP allocation entry within + # the context of a nested transaction, so that the entry + # is rolled back independently of other entries whenever + # the corresponding port has been deleted. + with context.session.begin_nested(): + context.session.add(allocated) + except db_exc.DBReferenceError: + LOG.debug("Port %s was deleted while updating it with an " + "IPv6 auto-address. 
Ignoring.", port['id']) + LOG.debug("Reverting IP allocation for %s", ip_address) + # Do not fail if reverting allocation was unsuccessful + try: + ipam_subnet.deallocate(ip_address) + except Exception: + LOG.debug("Reverting IP allocation failed for %s", + ip_address) + + def allocate_subnet(self, context, network, subnet, subnetpool_id): + subnetpool = None + + if subnetpool_id: + subnetpool = self._get_subnetpool(context, subnetpool_id) + self._validate_ip_version_with_subnetpool(subnet, subnetpool) + + # gateway_ip and allocation pools should be validated or generated + # only for specific request + if subnet['cidr'] is not attributes.ATTR_NOT_SPECIFIED: + subnet['gateway_ip'] = self._gateway_ip_str(subnet, + subnet['cidr']) + subnet['allocation_pools'] = self._prepare_allocation_pools( + subnet['allocation_pools'], + subnet['cidr'], + subnet['gateway_ip']) + + ipam_driver = driver.Pool.get_instance(subnetpool, context) + subnet_factory = ipam_driver.get_subnet_request_factory() + subnet_request = subnet_factory.get_request(context, subnet, + subnetpool) + ipam_subnet = ipam_driver.allocate_subnet(subnet_request) + # get updated details with actually allocated subnet + subnet_request = ipam_subnet.get_details() + + try: + subnet = self._save_subnet(context, + network, + self._make_subnet_args( + subnet_request, + subnet, + subnetpool_id), + subnet['dns_nameservers'], + subnet['host_routes'], + subnet_request) + except Exception: + # Note(pbondar): Third-party ipam servers can't rely + # on transaction rollback, so explicit rollback call needed. + # IPAM part rolled back in exception handling + # and subnet part is rolled back by transaction rollback. + with excutils.save_and_reraise_exception(): + LOG.debug("An exception occurred during subnet creation." 
+ " Reverting subnet allocation.") + self.delete_subnet(context, subnet_request.subnet_id) + return subnet, ipam_subnet diff --git a/neutron/db/l3_agentschedulers_db.py b/neutron/db/l3_agentschedulers_db.py index 83d404d5253..9c6413054fc 100644 --- a/neutron/db/l3_agentschedulers_db.py +++ b/neutron/db/l3_agentschedulers_db.py @@ -35,6 +35,7 @@ from neutron.db import model_base from neutron.extensions import l3agentscheduler from neutron.i18n import _LE, _LI, _LW from neutron import manager +from neutron.plugins.common import constants as service_constants LOG = logging.getLogger(__name__) @@ -182,7 +183,9 @@ class L3AgentSchedulerDbMixin(l3agentscheduler.L3AgentSchedulerPluginBase, return False if router.get('distributed'): return False - # non-dvr case: centralized router is already bound to some agent + if router.get('ha'): + return True + # legacy router case: router is already bound to some agent raise l3agentscheduler.RouterHostedByL3Agent( router_id=router_id, agent_id=bindings[0].l3_agent_id) @@ -193,7 +196,15 @@ class L3AgentSchedulerDbMixin(l3agentscheduler.L3AgentSchedulerPluginBase, agent_id = agent['id'] if self.router_scheduler: try: - self.router_scheduler.bind_router(context, router_id, agent) + if router.get('ha'): + plugin = manager.NeutronManager.get_service_plugins().get( + service_constants.L3_ROUTER_NAT) + self.router_scheduler.create_ha_port_and_bind( + plugin, context, router['id'], + router['tenant_id'], agent) + else: + self.router_scheduler.bind_router( + context, router_id, agent) except db_exc.DBError: raise l3agentscheduler.RouterSchedulingFailed( router_id=router_id, agent_id=agent_id) @@ -223,6 +234,13 @@ class L3AgentSchedulerDbMixin(l3agentscheduler.L3AgentSchedulerPluginBase, """ agent = self._get_agent(context, agent_id) self._unbind_router(context, router_id, agent_id) + + router = self.get_router(context, router_id) + if router.get('ha'): + plugin = manager.NeutronManager.get_service_plugins().get( + 
service_constants.L3_ROUTER_NAT) + plugin.delete_ha_interfaces_on_host(context, router_id, agent.host) + l3_notifier = self.agent_notifiers.get(constants.AGENT_TYPE_L3) if l3_notifier: l3_notifier.router_removed_from_agent( diff --git a/neutron/db/l3_db.py b/neutron/db/l3_db.py index 5ac658c64ac..c09c5273d33 100644 --- a/neutron/db/l3_db.py +++ b/neutron/db/l3_db.py @@ -808,6 +808,10 @@ class L3_NAT_dbonly_mixin(l3.RouterPluginBase): external_network_id=external_network_id, port_id=internal_port['id']) + def _port_ipv4_fixed_ips(self, port): + return [ip for ip in port['fixed_ips'] + if netaddr.IPAddress(ip['ip_address']).version == 4] + def _internal_fip_assoc_data(self, context, fip): """Retrieve internal port data for floating IP. @@ -833,6 +837,18 @@ class L3_NAT_dbonly_mixin(l3.RouterPluginBase): internal_subnet_id = None if fip.get('fixed_ip_address'): internal_ip_address = fip['fixed_ip_address'] + if netaddr.IPAddress(internal_ip_address).version != 4: + if 'id' in fip: + data = {'floatingip_id': fip['id'], + 'internal_ip': internal_ip_address} + msg = (_('Floating IP %(floatingip_id)s is associated ' + 'with non-IPv4 address %(internal_ip)s and ' + 'therefore cannot be bound.') % data) + else: + msg = (_('Cannot create floating IP and bind it to %s, ' + 'since that is not an IPv4 address.') % + internal_ip_address) + raise n_exc.BadRequest(resource='floatingip', msg=msg) for ip in internal_port['fixed_ips']: if ip['ip_address'] == internal_ip_address: internal_subnet_id = ip['subnet_id'] @@ -842,18 +858,18 @@ class L3_NAT_dbonly_mixin(l3.RouterPluginBase): 'address': internal_ip_address}) raise n_exc.BadRequest(resource='floatingip', msg=msg) else: - ips = [ip['ip_address'] for ip in internal_port['fixed_ips']] - if not ips: + ipv4_fixed_ips = self._port_ipv4_fixed_ips(internal_port) + if not ipv4_fixed_ips: msg = (_('Cannot add floating IP to port %s that has ' - 'no fixed IP addresses') % 
internal_port['id']) raise n_exc.BadRequest(resource='floatingip', msg=msg) - if len(ips) > 1: - msg = (_('Port %s has multiple fixed IPs. Must provide' - ' a specific IP when assigning a floating IP') % - internal_port['id']) + if len(ipv4_fixed_ips) > 1: + msg = (_('Port %s has multiple fixed IPv4 addresses. Must ' + 'provide a specific IPv4 address when assigning a ' + 'floating IP') % internal_port['id']) raise n_exc.BadRequest(resource='floatingip', msg=msg) - internal_ip_address = internal_port['fixed_ips'][0]['ip_address'] - internal_subnet_id = internal_port['fixed_ips'][0]['subnet_id'] + internal_ip_address = ipv4_fixed_ips[0]['ip_address'] + internal_subnet_id = ipv4_fixed_ips[0]['subnet_id'] return internal_port, internal_subnet_id, internal_ip_address def get_assoc_data(self, context, fip, floating_network_id): @@ -909,6 +925,10 @@ class L3_NAT_dbonly_mixin(l3.RouterPluginBase): 'router_id': router_id, 'last_known_router_id': previous_router_id}) + def _is_ipv4_network(self, context, net_id): + net = self._core_plugin._get_network(context, net_id) + return any(s.ip_version == 4 for s in net.subnets) + def create_floatingip(self, context, floatingip, initial_status=l3_constants.FLOATINGIP_STATUS_ACTIVE): fip = floatingip['floatingip'] @@ -920,6 +940,10 @@ class L3_NAT_dbonly_mixin(l3.RouterPluginBase): msg = _("Network %s is not a valid external network") % f_net_id raise n_exc.BadRequest(resource='floatingip', msg=msg) + if not self._is_ipv4_network(context, f_net_id): + msg = _("Network %s does not contain any IPv4 subnet") % f_net_id + raise n_exc.BadRequest(resource='floatingip', msg=msg) + with context.session.begin(subtransactions=True): # This external port is never exposed to the tenant. 
# it is used purely for internal system and admin use when @@ -942,11 +966,12 @@ class L3_NAT_dbonly_mixin(l3.RouterPluginBase): external_port = self._core_plugin.create_port(context.elevated(), {'port': port}) - # Ensure IP addresses are allocated on external port - if not external_port['fixed_ips']: + # Ensure IPv4 addresses are allocated on external port + external_ipv4_ips = self._port_ipv4_fixed_ips(external_port) + if not external_ipv4_ips: raise n_exc.ExternalIpAddressExhausted(net_id=f_net_id) - floating_fixed_ip = external_port['fixed_ips'][0] + floating_fixed_ip = external_ipv4_ips[0] floating_ip_address = floating_fixed_ip['ip_address'] floatingip_db = FloatingIP( id=fip_id, @@ -1241,7 +1266,7 @@ class L3_NAT_dbonly_mixin(l3.RouterPluginBase): routers_dict = dict((router['id'], router) for router in routers) self._process_floating_ips(context, routers_dict, floating_ips) self._process_interfaces(routers_dict, interfaces) - return routers_dict.values() + return list(routers_dict.values()) class L3RpcNotifierMixin(object): diff --git a/neutron/db/l3_dvr_db.py b/neutron/db/l3_dvr_db.py index 95c82f1a8ab..47fbbc63f2e 100644 --- a/neutron/db/l3_dvr_db.py +++ b/neutron/db/l3_dvr_db.py @@ -87,7 +87,8 @@ class L3_NAT_with_dvr_db_mixin(l3_db.L3_NAT_db_mixin, router_res.get('distributed') is False): LOG.info(_LI("Centralizing distributed router %s " "is not supported"), router_db['id']) - raise NotImplementedError() + raise n_exc.NotSupported(msg=_("Migration from distributed router " + "to centralized")) elif (not router_db.extra_attributes.distributed and router_res.get('distributed')): # Notify advanced services of the imminent state transition @@ -311,6 +312,13 @@ class L3_NAT_with_dvr_db_mixin(l3_db.L3_NAT_db_mixin, context, router_interface_info, 'add') return router_interface_info + def _port_has_ipv6_address(self, port): + """Overridden to return False if DVR SNAT port.""" + if port['device_owner'] == DEVICE_OWNER_DVR_SNAT: + return False + return 
super(L3_NAT_with_dvr_db_mixin, + self)._port_has_ipv6_address(port) + def remove_router_interface(self, context, router_id, interface_info): remove_by_port, remove_by_subnet = ( self._validate_interface_info(interface_info, for_removal=True) @@ -528,7 +536,7 @@ class L3_NAT_with_dvr_db_mixin(l3_db.L3_NAT_db_mixin, filters=device_filter) for p in ports: if self._get_vm_port_hostid(context, p['id'], p) == host_id: - self._core_plugin._delete_port(context, p['id']) + self._core_plugin.ipam.delete_port(context, p['id']) return def create_fip_agent_gw_port_if_not_exists( diff --git a/neutron/db/l3_hamode_db.py b/neutron/db/l3_hamode_db.py index 96647d8b1c9..29350ce7785 100644 --- a/neutron/db/l3_hamode_db.py +++ b/neutron/db/l3_hamode_db.py @@ -336,6 +336,15 @@ class L3_HA_NAT_db_mixin(l3_dvr_db.L3_NAT_with_dvr_db_mixin): self._core_plugin.delete_port(admin_ctx, port['id'], l3_port_check=False) + def delete_ha_interfaces_on_host(self, context, router_id, host): + admin_ctx = context.elevated() + port_ids = (binding.port_id for binding + in self.get_ha_router_port_bindings(admin_ctx, + [router_id], host)) + for port_id in port_ids: + self._core_plugin.delete_port(admin_ctx, port_id, + l3_port_check=False) + def _notify_ha_interfaces_updated(self, context, router_id): self.l3_rpc_notifier.routers_updated( context, [router_id], shuffle_agents=True) @@ -461,7 +470,7 @@ class L3_HA_NAT_db_mixin(l3_dvr_db.L3_NAT_with_dvr_db_mixin): if interface: self._populate_subnets_for_ports(context, [interface]) - return routers_dict.values() + return list(routers_dict.values()) def get_ha_sync_data_for_host(self, context, host=None, router_ids=None, active=None): diff --git a/neutron/db/metering/metering_db.py b/neutron/db/metering/metering_db.py index 2e7f9c372fe..227b9ad23e2 100644 --- a/neutron/db/metering/metering_db.py +++ b/neutron/db/metering/metering_db.py @@ -234,7 +234,7 @@ class MeteringDbMixin(metering.MeteringPluginBase, routers_dict[router['id']] = router_dict - return 
routers_dict.values() + return list(routers_dict.values()) def get_sync_data_for_rule(self, context, rule): label = context.session.query(MeteringLabel).get( @@ -253,7 +253,7 @@ class MeteringDbMixin(metering.MeteringPluginBase, router_dict[constants.METERING_LABEL_KEY].append(data) routers_dict[router['id']] = router_dict - return routers_dict.values() + return list(routers_dict.values()) def get_sync_data_metering(self, context, label_id=None, router_ids=None): labels = context.session.query(MeteringLabel) diff --git a/neutron/db/migration/alembic_migrations/external.py b/neutron/db/migration/alembic_migrations/external.py index 412992db5cc..46d24a45861 100644 --- a/neutron/db/migration/alembic_migrations/external.py +++ b/neutron/db/migration/alembic_migrations/external.py @@ -24,4 +24,19 @@ LBAAS_TABLES = ['vips', 'sessionpersistences', 'pools', 'healthmonitors', FWAAS_TABLES = ['firewall_rules', 'firewalls', 'firewall_policies'] -TABLES = (FWAAS_TABLES + LBAAS_TABLES + VPNAAS_TABLES) +DRIVER_TABLES = [ + # Models moved to openstack/networking-cisco + 'cisco_ml2_apic_contracts', + 'cisco_ml2_apic_names', + 'cisco_ml2_apic_host_links', + 'cisco_ml2_n1kv_policy_profiles', + 'cisco_ml2_n1kv_network_profiles', + 'cisco_ml2_n1kv_port_bindings', + 'cisco_ml2_n1kv_network_bindings', + 'cisco_ml2_n1kv_vxlan_allocations', + 'cisco_ml2_n1kv_vlan_allocations', + 'cisco_ml2_n1kv_profile_bindings', + # Add your tables with moved models here^. Please end with a comma. +] + +TABLES = (FWAAS_TABLES + LBAAS_TABLES + VPNAAS_TABLES + DRIVER_TABLES) diff --git a/neutron/db/migration/alembic_migrations/script.py.mako b/neutron/db/migration/alembic_migrations/script.py.mako index f35c9b7c87b..121181a32cd 100644 --- a/neutron/db/migration/alembic_migrations/script.py.mako +++ b/neutron/db/migration/alembic_migrations/script.py.mako @@ -24,6 +24,9 @@ Create Date: ${create_date} # revision identifiers, used by Alembic. 
revision = ${repr(up_revision)} down_revision = ${repr(down_revision)} +% if branch_labels: +branch_labels = ${repr(branch_labels)} +% endif from alembic import op import sqlalchemy as sa diff --git a/neutron/db/migration/alembic_migrations/versions/HEAD b/neutron/db/migration/alembic_migrations/versions/HEAD deleted file mode 100644 index 5d2bcdc22c2..00000000000 --- a/neutron/db/migration/alembic_migrations/versions/HEAD +++ /dev/null @@ -1 +0,0 @@ -52c5312f6baf diff --git a/neutron/db/migration/alembic_migrations/versions/HEADS b/neutron/db/migration/alembic_migrations/versions/HEADS new file mode 100644 index 00000000000..9fef8352067 --- /dev/null +++ b/neutron/db/migration/alembic_migrations/versions/HEADS @@ -0,0 +1,3 @@ +1c844d1677f7 +45f955889773 +kilo diff --git a/neutron/plugins/metaplugin/meta_neutron_plugin.py b/neutron/db/migration/alembic_migrations/versions/liberty/contract/2a16083502f3_metaplugin_removal.py similarity index 61% rename from neutron/plugins/metaplugin/meta_neutron_plugin.py rename to neutron/db/migration/alembic_migrations/versions/liberty/contract/2a16083502f3_metaplugin_removal.py index 9ded159d45f..802ad7a8cab 100644 --- a/neutron/plugins/metaplugin/meta_neutron_plugin.py +++ b/neutron/db/migration/alembic_migrations/versions/liberty/contract/2a16083502f3_metaplugin_removal.py @@ -1,5 +1,4 @@ -# Copyright 2012, Nachi Ueno, NTT MCL, Inc. -# All Rights Reserved. +# Copyright 2015 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain @@ -12,8 +11,23 @@ # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
+# -from metaplugin.plugin import meta_neutron_plugin +"""Metaplugin removal + +Revision ID: 2a16083502f3 +Revises: 5498d17be016 +Create Date: 2015-06-16 09:11:10.488566 + +""" + +# revision identifiers, used by Alembic. +revision = '2a16083502f3' +down_revision = '5498d17be016' + +from alembic import op -MetaPluginV2 = meta_neutron_plugin.MetaPluginV2 +def upgrade(): + op.drop_table('networkflavors') + op.drop_table('routerflavors') diff --git a/neutron/plugins/ml2/drivers/cisco/n1kv/mech_cisco_n1kv.py b/neutron/db/migration/alembic_migrations/versions/liberty/contract/30018084ec99_initial.py similarity index 67% rename from neutron/plugins/ml2/drivers/cisco/n1kv/mech_cisco_n1kv.py rename to neutron/db/migration/alembic_migrations/versions/liberty/contract/30018084ec99_initial.py index 04e83a78a51..bd1ddccf930 100644 --- a/neutron/plugins/ml2/drivers/cisco/n1kv/mech_cisco_n1kv.py +++ b/neutron/db/migration/alembic_migrations/versions/liberty/contract/30018084ec99_initial.py @@ -1,6 +1,3 @@ -# Copyright 2015 Cisco Systems, Inc. -# All rights reserved. -# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at @@ -12,13 +9,22 @@ # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. +# + +"""Initial no-op Liberty contract rule. + +Revision ID: 30018084ec99 +Revises: None +Create Date: 2015-06-22 00:00:00.000000 -""" -ML2 Mechanism Driver for Cisco Nexus1000V distributed virtual switches. """ -from networking_cisco.plugins.ml2.drivers.cisco.n1kv import mech_cisco_n1kv +# revision identifiers, used by Alembic. 
+revision = '30018084ec99' +down_revision = None +depends_on = ('kilo',) +branch_labels = ('liberty_contract',) -class N1KVMechanismDriver(mech_cisco_n1kv.N1KVMechanismDriver): +def upgrade(): pass diff --git a/neutron/db/migration/alembic_migrations/versions/liberty/contract/4ffceebfada_rbac_network.py b/neutron/db/migration/alembic_migrations/versions/liberty/contract/4ffceebfada_rbac_network.py new file mode 100644 index 00000000000..76926fa6a5d --- /dev/null +++ b/neutron/db/migration/alembic_migrations/versions/liberty/contract/4ffceebfada_rbac_network.py @@ -0,0 +1,69 @@ +# Copyright 2015 OpenStack Foundation +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. +# + +"""network_rbac + +Revision ID: 4ffceebfada +Revises: 30018084ec99 +Create Date: 2015-06-14 13:12:04.012457 + +""" + +# revision identifiers, used by Alembic. +revision = '4ffceebfada' +down_revision = '30018084ec99' +depends_on = ('8675309a5c4f',) + +from alembic import op +from oslo_utils import uuidutils +import sqlalchemy as sa + + +# A simple model of the networks table with only the fields needed for +# the migration. 
+network = sa.Table('networks', sa.MetaData(), + sa.Column('id', sa.String(length=36), nullable=False), + sa.Column('tenant_id', sa.String(length=255)), + sa.Column('shared', sa.Boolean(), nullable=False)) + +networkrbacs = sa.Table( + 'networkrbacs', sa.MetaData(), + sa.Column('id', sa.String(length=36), nullable=False), + sa.Column('object_id', sa.String(length=36), nullable=False), + sa.Column('tenant_id', sa.String(length=255), nullable=True, + index=True), + sa.Column('target_tenant', sa.String(length=255), nullable=False), + sa.Column('action', sa.String(length=255), nullable=False)) + + +def upgrade(): + op.bulk_insert(networkrbacs, get_values()) + op.drop_column('networks', 'shared') + # The shared column on subnets was just an internal representation of the + # shared status of the network it was related to. This is now handled by + # other logic, so we just drop it. + op.drop_column('subnets', 'shared') + + +def get_values(): + session = sa.orm.Session(bind=op.get_bind()) + values = [] + for row in session.query(network).filter(network.c.shared).all(): + values.append({'id': uuidutils.generate_uuid(), 'object_id': row[0], + 'tenant_id': row[1], 'target_tenant': '*', + 'action': 'access_as_shared'}) + # This commit appears to be necessary to allow further operations. + session.commit() + return values diff --git a/neutron/db/migration/alembic_migrations/versions/liberty/contract/5498d17be016_drop_legacy_ovs_and_lb.py b/neutron/db/migration/alembic_migrations/versions/liberty/contract/5498d17be016_drop_legacy_ovs_and_lb.py new file mode 100644 index 00000000000..55ad8d15b5e --- /dev/null +++ b/neutron/db/migration/alembic_migrations/versions/liberty/contract/5498d17be016_drop_legacy_ovs_and_lb.py @@ -0,0 +1,37 @@ +# Copyright 2015 OpenStack Foundation +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License.
You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. +# + +"""Drop legacy OVS and LB plugin tables + +Revision ID: 5498d17be016 +Revises: 4ffceebfada +Create Date: 2015-06-25 14:08:30.984419 + +""" + +# revision identifiers, used by Alembic. +revision = '5498d17be016' +down_revision = '4ffceebfada' + +from alembic import op + + +def upgrade(): + op.drop_table('ovs_network_bindings') + op.drop_table('ovs_vlan_allocations') + op.drop_table('network_bindings') + op.drop_table('ovs_tunnel_allocations') + op.drop_table('network_states') + op.drop_table('ovs_tunnel_endpoints') diff --git a/neutron/db/migration/alembic_migrations/versions/liberty/expand/1c844d1677f7_dns_nameservers_order.py b/neutron/db/migration/alembic_migrations/versions/liberty/expand/1c844d1677f7_dns_nameservers_order.py new file mode 100644 index 00000000000..baeafa3f3d7 --- /dev/null +++ b/neutron/db/migration/alembic_migrations/versions/liberty/expand/1c844d1677f7_dns_nameservers_order.py @@ -0,0 +1,35 @@ +# Copyright 2015 OpenStack Foundation +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. 
+# + +"""add order to dnsnameservers + +Revision ID: 1c844d1677f7 +Revises: 2a16083502f3 +Create Date: 2015-07-21 22:59:03.383850 + +""" + +# revision identifiers, used by Alembic. +revision = '1c844d1677f7' +down_revision = '2a16083502f3' + +from alembic import op +import sqlalchemy as sa + + +def upgrade(): + op.add_column('dnsnameservers', + sa.Column('order', sa.Integer(), + server_default='0', nullable=False)) diff --git a/neutron/db/migration/alembic_migrations/versions/liberty/expand/31337ec0ffee_flavors.py b/neutron/db/migration/alembic_migrations/versions/liberty/expand/31337ec0ffee_flavors.py new file mode 100644 index 00000000000..4ac5ac8063a --- /dev/null +++ b/neutron/db/migration/alembic_migrations/versions/liberty/expand/31337ec0ffee_flavors.py @@ -0,0 +1,62 @@ +# Copyright 2014-2015 OpenStack Foundation +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. +# + +"""Flavor framework + +Revision ID: 313373c0ffee +Revises: 52c5312f6baf +Create Date: 2014-07-17 03:00:00.00 +""" + +# revision identifiers, used by Alembic.
+revision = '313373c0ffee' +down_revision = '52c5312f6baf' + +from alembic import op +import sqlalchemy as sa + + +def upgrade(): + op.create_table( + 'flavors', + sa.Column('id', sa.String(36)), + sa.Column('name', sa.String(255)), + sa.Column('description', sa.String(1024)), + sa.Column('enabled', sa.Boolean, nullable=False, + server_default=sa.sql.true()), + sa.Column('service_type', sa.String(36), nullable=True), + sa.PrimaryKeyConstraint('id') + ) + + op.create_table( + 'serviceprofiles', + sa.Column('id', sa.String(36)), + sa.Column('description', sa.String(1024)), + sa.Column('driver', sa.String(1024), nullable=False), + sa.Column('enabled', sa.Boolean, nullable=False, + server_default=sa.sql.true()), + sa.Column('metainfo', sa.String(4096)), + sa.PrimaryKeyConstraint('id') + ) + + op.create_table( + 'flavorserviceprofilebindings', + sa.Column('service_profile_id', sa.String(36), nullable=False), + sa.Column('flavor_id', sa.String(36), nullable=False), + sa.ForeignKeyConstraint(['service_profile_id'], + ['serviceprofiles.id']), + sa.ForeignKeyConstraint(['flavor_id'], ['flavors.id']), + sa.PrimaryKeyConstraint('service_profile_id', 'flavor_id') + ) diff --git a/neutron/db/migration/alembic_migrations/versions/354db87e3225_nsxv_vdr_metadata.py b/neutron/db/migration/alembic_migrations/versions/liberty/expand/354db87e3225_nsxv_vdr_metadata.py similarity index 94% rename from neutron/db/migration/alembic_migrations/versions/354db87e3225_nsxv_vdr_metadata.py rename to neutron/db/migration/alembic_migrations/versions/liberty/expand/354db87e3225_nsxv_vdr_metadata.py index fb864470669..df82f17c936 100644 --- a/neutron/db/migration/alembic_migrations/versions/354db87e3225_nsxv_vdr_metadata.py +++ b/neutron/db/migration/alembic_migrations/versions/liberty/expand/354db87e3225_nsxv_vdr_metadata.py @@ -23,7 +23,10 @@ Create Date: 2015-04-19 14:59:15.102609 # revision identifiers, used by Alembic. 
revision = '354db87e3225' -down_revision = 'kilo' +down_revision = None +branch_labels = ('liberty_expand',) +depends_on = ('kilo',) + from alembic import op import sqlalchemy as sa diff --git a/neutron/db/migration/alembic_migrations/versions/liberty/expand/45f955889773_quota_usage.py b/neutron/db/migration/alembic_migrations/versions/liberty/expand/45f955889773_quota_usage.py new file mode 100644 index 00000000000..e10edc94db6 --- /dev/null +++ b/neutron/db/migration/alembic_migrations/versions/liberty/expand/45f955889773_quota_usage.py @@ -0,0 +1,45 @@ +# Copyright 2015 OpenStack Foundation +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. +# + +"""quota_usage + +Revision ID: 45f955889773 +Revises: 8675309a5c4f +Create Date: 2015-04-17 08:09:37.611546 + +""" + +# revision identifiers, used by Alembic. 
+revision = '45f955889773' +down_revision = '8675309a5c4f' + +from alembic import op +import sqlalchemy as sa +from sqlalchemy import sql + + +def upgrade(): + op.create_table( + 'quotausages', + sa.Column('tenant_id', sa.String(length=255), + nullable=False, primary_key=True, index=True), + sa.Column('resource', sa.String(length=255), + nullable=False, primary_key=True, index=True), + sa.Column('dirty', sa.Boolean(), nullable=False, + server_default=sql.false()), + sa.Column('in_use', sa.Integer(), nullable=False, + server_default='0'), + sa.Column('reserved', sa.Integer(), nullable=False, + server_default='0')) diff --git a/neutron/db/migration/alembic_migrations/versions/52c5312f6baf_address_scopes.py b/neutron/db/migration/alembic_migrations/versions/liberty/expand/52c5312f6baf_address_scopes.py similarity index 100% rename from neutron/db/migration/alembic_migrations/versions/52c5312f6baf_address_scopes.py rename to neutron/db/migration/alembic_migrations/versions/liberty/expand/52c5312f6baf_address_scopes.py diff --git a/neutron/db/migration/alembic_migrations/versions/599c6a226151_neutrodb_ipam.py b/neutron/db/migration/alembic_migrations/versions/liberty/expand/599c6a226151_neutrodb_ipam.py similarity index 100% rename from neutron/db/migration/alembic_migrations/versions/599c6a226151_neutrodb_ipam.py rename to neutron/db/migration/alembic_migrations/versions/liberty/expand/599c6a226151_neutrodb_ipam.py diff --git a/neutron/db/migration/alembic_migrations/versions/liberty/expand/8675309a5c4f_rbac_network.py b/neutron/db/migration/alembic_migrations/versions/liberty/expand/8675309a5c4f_rbac_network.py new file mode 100644 index 00000000000..b2c7156e702 --- /dev/null +++ b/neutron/db/migration/alembic_migrations/versions/liberty/expand/8675309a5c4f_rbac_network.py @@ -0,0 +1,47 @@ +# Copyright 2015 OpenStack Foundation +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. 
You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. +# + +"""network_rbac + +Revision ID: 8675309a5c4f +Revises: 313373c0ffee +Create Date: 2015-06-14 13:12:04.012457 + +""" + +# revision identifiers, used by Alembic. +revision = '8675309a5c4f' +down_revision = '313373c0ffee' + +from alembic import op +import sqlalchemy as sa + + +def upgrade(): + op.create_table( + 'networkrbacs', + sa.Column('id', sa.String(length=36), nullable=False), + sa.Column('object_id', sa.String(length=36), nullable=False), + sa.Column('tenant_id', sa.String(length=255), nullable=True, + index=True), + sa.Column('target_tenant', sa.String(length=255), nullable=False), + sa.Column('action', sa.String(length=255), nullable=False), + sa.ForeignKeyConstraint(['object_id'], + ['networks.id'], + ondelete='CASCADE'), + sa.PrimaryKeyConstraint('id'), + sa.UniqueConstraint( + 'action', 'object_id', 'target_tenant', + name='uniq_networkrbacs0tenant_target0object_id0action')) diff --git a/neutron/db/migration/cli.py b/neutron/db/migration/cli.py index c829c9d3d62..0881c72112b 100644 --- a/neutron/db/migration/cli.py +++ b/neutron/db/migration/cli.py @@ -24,11 +24,18 @@ from oslo_config import cfg from oslo_utils import importutils from neutron.common import repos +from neutron.common import utils + +# TODO(ihrachyshka): maintain separate HEAD files per branch HEAD_FILENAME = 'HEAD' +HEADS_FILENAME = 'HEADS' +CURRENT_RELEASE = "liberty" +MIGRATION_BRANCHES = ('expand', 'contract') + mods = repos.NeutronModules() -VALID_SERVICES = map(mods.alembic_name, mods.installed_list()) +VALID_SERVICES = list(map(mods.alembic_name, mods.installed_list())) 
_core_opts = [ @@ -41,7 +48,10 @@ _core_opts = [ cfg.StrOpt('service', choices=VALID_SERVICES, help=_("The advanced service to execute the command against. " - "Can be one of '%s'.") % "', '".join(VALID_SERVICES)) + "Can be one of '%s'.") % "', '".join(VALID_SERVICES)), + cfg.BoolOpt('split_branches', + default=False, + help=_("Enforce the split-branches file structure.")) ] _quota_opts = [ @@ -76,7 +86,7 @@ def do_alembic_command(config, cmd, *args, **kwargs): def do_check_migration(config, cmd): do_alembic_command(config, 'branches') - validate_head_file(config) + validate_heads_file(config) def add_alembic_subparser(sub, cmd): @@ -101,6 +111,10 @@ def do_upgrade(config, cmd): raise SystemExit(_('Negative delta (downgrade) not supported')) revision = '%s+%d' % (revision, delta) + # Keep the branchless 'head' revision request backward compatible by + # applying all heads in all available branches. + if revision == 'head': + revision = 'heads' if not CONF.command.sql: run_sanity_checks(config, revision) do_alembic_command(config, cmd, revision, sql=CONF.command.sql) @@ -116,35 +130,83 @@ def do_stamp(config, cmd): sql=CONF.command.sql) +def _get_branch_label(branch): + '''Get the branch label corresponding to the current release cycle.''' + return '%s_%s' % (CURRENT_RELEASE, branch) + + +def _get_branch_head(branch): + '''Get the latest @head specification for a branch.''' + return '%s@head' % _get_branch_label(branch) + + def do_revision(config, cmd): - do_alembic_command(config, cmd, - message=CONF.command.message, - autogenerate=CONF.command.autogenerate, - sql=CONF.command.sql) - update_head_file(config) + '''Generate new revision files, one per branch.''' + addn_kwargs = { + 'message': CONF.command.message, + 'autogenerate': CONF.command.autogenerate, + 'sql': CONF.command.sql, + } + if _use_separate_migration_branches(CONF): + for branch in MIGRATION_BRANCHES: + version_path = _get_version_branch_path(CONF, branch) + addn_kwargs['version_path'] = version_path -def
validate_head_file(config): - script = alembic_script.ScriptDirectory.from_config(config) - if len(script.get_heads()) > 1: - alembic_util.err(_('Timeline branches unable to generate timeline')) + if not os.path.exists(version_path): + # Bootstrap initial directory structure + utils.ensure_dir(version_path) + # Each new release stream of migrations is detached from + # previous migration chains + addn_kwargs['head'] = 'base' + # Mark the very first revision in the new branch with its label + addn_kwargs['branch_label'] = _get_branch_label(branch) + # TODO(ihrachyshka): ideally, we would also add depends_on here + # to refer to the head of the previous release stream. But + # alembic API does not support it yet. + else: + addn_kwargs['head'] = _get_branch_head(branch) - head_path = os.path.join(script.versions, HEAD_FILENAME) - if (os.path.isfile(head_path) and - open(head_path).read().strip() == script.get_current_head()): - return + do_alembic_command(config, cmd, **addn_kwargs) else: - alembic_util.err(_('HEAD file does not match migration timeline head')) + do_alembic_command(config, cmd, **addn_kwargs) + update_heads_file(config) -def update_head_file(config): +def _get_sorted_heads(script): + '''Get the list of heads for all branches, sorted.''' + heads = script.get_heads() + # +1 stands for the core 'kilo' branch, the one that didn't have branches + if len(heads) > len(MIGRATION_BRANCHES) + 1: + alembic_util.err(_('No new branches are allowed except: %s') % + ' '.join(MIGRATION_BRANCHES)) + return sorted(heads) + + +def validate_heads_file(config): + '''Check that HEADS file contains the latest heads for each branch.''' script = alembic_script.ScriptDirectory.from_config(config) - if len(script.get_heads()) > 1: - alembic_util.err(_('Timeline branches unable to generate timeline')) + expected_heads = _get_sorted_heads(script) + heads_path = _get_active_head_file_path(CONF) + try: + with open(heads_path) as file_: + observed_heads = file_.read().split() + if 
observed_heads == expected_heads: + return + except IOError: + pass + alembic_util.err( + _('HEADS file does not match migration timeline heads, expected: %s') + % ', '.join(expected_heads)) - head_path = os.path.join(script.versions, HEAD_FILENAME) - with open(head_path, 'w+') as f: - f.write(script.get_current_head()) + +def update_heads_file(config): + '''Update HEADS file with the latest branch heads.''' + script = alembic_script.ScriptDirectory.from_config(config) + heads = _get_sorted_heads(script) + heads_path = _get_active_head_file_path(CONF) + with open(heads_path, 'w+') as f: + f.write('\n'.join(heads)) def add_command_parsers(subparsers): @@ -191,6 +253,72 @@ command_opt = cfg.SubCommandOpt('command', CONF.register_cli_opt(command_opt) +def _get_neutron_service_base(neutron_config): + '''Return the base Python namespace name for a service.''' + if neutron_config.service: + validate_service_installed(neutron_config.service) + return "neutron_%s" % neutron_config.service + return "neutron" + + +def _get_root_versions_dir(neutron_config): + '''Return the root directory that contains all migration rules.''' + service_base = _get_neutron_service_base(neutron_config) + root_module = importutils.import_module(service_base) + return os.path.join( + os.path.dirname(root_module.__file__), + 'db/migration/alembic_migrations/versions') + + +def _get_head_file_path(neutron_config): + '''Return the path of the file that contains the single head.''' + return os.path.join( + _get_root_versions_dir(neutron_config), + HEAD_FILENAME) + + +def _get_heads_file_path(neutron_config): + '''Return the path of the file that contains the latest heads, sorted.''' + return os.path.join( + _get_root_versions_dir(neutron_config), + HEADS_FILENAME) + + +def _get_active_head_file_path(neutron_config): + '''Return the path of the file with the latest head(s), depending on + whether multiple branches are used.
+ ''' + if _use_separate_migration_branches(neutron_config): + return _get_heads_file_path(neutron_config) + return _get_head_file_path(neutron_config) + + +def _get_version_branch_path(neutron_config, branch=None): + version_path = _get_root_versions_dir(neutron_config) + if branch: + return os.path.join(version_path, CURRENT_RELEASE, branch) + return version_path + + +def _use_separate_migration_branches(neutron_config): + '''Detect whether split migration branches should be used.''' + return (neutron_config.split_branches or + # Use HEADS file to indicate the new, split migration world + os.path.exists(_get_heads_file_path(neutron_config))) + + +def _set_version_locations(config): + '''Make alembic see all revisions in all migration branches.''' + version_paths = [] + + version_paths.append(_get_version_branch_path(CONF)) + if _use_separate_migration_branches(CONF): + for branch in MIGRATION_BRANCHES: + version_paths.append(_get_version_branch_path(CONF, branch)) + + config.set_main_option('version_locations', ' '.join(version_paths)) + + def validate_service_installed(service): if not importutils.try_import('neutron_%s' % service): alembic_util.err(_('Package neutron-%s not installed') % service) @@ -198,18 +326,14 @@ def validate_service_installed(service): def get_script_location(neutron_config): location = '%s.db.migration:alembic_migrations' - if neutron_config.service: - validate_service_installed(neutron_config.service) - base = "neutron_%s" % neutron_config.service - else: - base = "neutron" - return location % base + return location % _get_neutron_service_base(neutron_config) def get_alembic_config(): config = alembic_config.Config(os.path.join(os.path.dirname(__file__), 'alembic.ini')) config.set_main_option('script_location', get_script_location(CONF)) + _set_version_locations(config) return config @@ -217,7 +341,11 @@ def run_sanity_checks(config, revision): script_dir = alembic_script.ScriptDirectory.from_config(config) def check_sanity(rev, 
context): - for script in script_dir.iterate_revisions(revision, rev): + # TODO(ihrachyshka): here we use internal API for alembic; we may need + # alembic to expose implicit_base= argument into public + # iterate_revisions() call + for script in script_dir.revision_map.iterate_revisions( + revision, rev, implicit_base=True): if hasattr(script.module, 'check_sanity'): script.module.check_sanity(context.connection) return [] diff --git a/neutron/db/migration/migrate_to_ml2.py b/neutron/db/migration/migrate_to_ml2.py deleted file mode 100755 index bc78a09eeb4..00000000000 --- a/neutron/db/migration/migrate_to_ml2.py +++ /dev/null @@ -1,515 +0,0 @@ -# Copyright (c) 2014 Red Hat, Inc. -# All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. You may obtain -# a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT -# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the -# License for the specific language governing permissions and limitations -# under the License. - -""" -This script will migrate the database of an openvswitch, linuxbridge or -Hyper-V plugin so that it can be used with the ml2 plugin. - -Known Limitations: - - - THIS SCRIPT IS DESTRUCTIVE! Make sure to backup your - Neutron database before running this script, in case anything goes - wrong. - - - It will be necessary to upgrade the database to the target release - via neutron-db-manage before attempting to migrate to ml2. - Initially, only the icehouse release is supported. - - - This script does not automate configuration migration. 
- -Example usage: - - python -m neutron.db.migration.migrate_to_ml2 openvswitch \ - mysql+pymysql://login:pass@127.0.0.1/neutron - -Note that migration of tunneling state will only be attempted if the ---tunnel-type parameter is provided. - -To manually test migration from ovs to ml2 with devstack: - - - stack with Q_PLUGIN=openvswitch - - boot an instance and validate connectivity - - stop the neutron service and all agents - - run the neutron-migrate-to-ml2 script - - update /etc/neutron/neutron.conf as follows: - - core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin - - - Create /etc/neutron/plugins/ml2/ml2_conf.ini and ensure that: - - ml2.mechanism_drivers includes 'openvswitch' - - ovs.local_ip is set correctly - - database.connection is set correctly - - Start the neutron service with the ml2 config file created in - the previous step in place of the openvswitch config file - - Start all the agents - - verify that the booted instance still has connectivity - - boot a second instance and validate connectivity -""" - -import argparse - -from oslo_db.sqlalchemy import session -from oslo_utils import uuidutils -import sqlalchemy as sa - -from neutron.extensions import portbindings -from neutron.plugins.common import constants as p_const - - -# Migration targets -LINUXBRIDGE = 'linuxbridge' -OPENVSWITCH = 'openvswitch' -HYPERV = 'hyperv' - -# Releases -ICEHOUSE = 'icehouse' -JUNO = 'juno' - - -SUPPORTED_SCHEMA_VERSIONS = [ICEHOUSE, JUNO] - - -def check_db_schema_version(engine, metadata): - """Check that current version of the db schema is supported.""" - version_table = sa.Table( - 'alembic_version', metadata, autoload=True, autoload_with=engine) - versions = [v[0] for v in engine.execute(version_table.select())] - if not versions: - raise ValueError(_("Missing version in alembic_versions table")) - elif len(versions) > 1: - raise ValueError(_("Multiple versions in alembic_versions table: %s") - % versions) - current_version = versions[0] - if current_version not 
in SUPPORTED_SCHEMA_VERSIONS: - raise SystemError(_("Unsupported database schema %(current)s. " - "Please migrate your database to one of following " - "versions: %(supported)s") - % {'current': current_version, - 'supported': ', '.join(SUPPORTED_SCHEMA_VERSIONS)} - ) - - -# Duplicated from -# neutron.plugins.ml2.drivers.linuxbridge.agent.common.constants to -# avoid having any dependency on the linuxbridge plugin being -# installed. -def interpret_vlan_id(vlan_id): - """Return (network_type, segmentation_id) tuple for encoded vlan_id.""" - FLAT_VLAN_ID = -1 - LOCAL_VLAN_ID = -2 - if vlan_id == LOCAL_VLAN_ID: - return (p_const.TYPE_LOCAL, None) - elif vlan_id == FLAT_VLAN_ID: - return (p_const.TYPE_FLAT, None) - else: - return (p_const.TYPE_VLAN, vlan_id) - - -class BaseMigrateToMl2(object): - - def __init__(self, vif_type, driver_type, segment_table_name, - vlan_allocation_table_name, old_tables): - self.vif_type = vif_type - self.driver_type = driver_type - self.segment_table_name = segment_table_name - self.vlan_allocation_table_name = vlan_allocation_table_name - self.old_tables = old_tables - - def __call__(self, connection_url, save_tables=False, tunnel_type=None, - vxlan_udp_port=None): - engine = session.create_engine(connection_url) - metadata = sa.MetaData() - check_db_schema_version(engine, metadata) - - if hasattr(self, 'define_ml2_tables'): - self.define_ml2_tables(metadata) - - # Autoload the ports table to ensure that foreign keys to it and - # the network table can be created for the new tables. 
- sa.Table('ports', metadata, autoload=True, autoload_with=engine) - metadata.create_all(engine) - - self.migrate_network_segments(engine, metadata) - if tunnel_type: - self.migrate_tunnels(engine, tunnel_type, vxlan_udp_port) - self.migrate_vlan_allocations(engine) - self.migrate_port_bindings(engine, metadata) - - if hasattr(self, 'drop_old_tables'): - self.drop_old_tables(engine, save_tables) - - def migrate_segment_dict(self, binding): - binding['id'] = uuidutils.generate_uuid() - - def migrate_network_segments(self, engine, metadata): - # Migrating network segments requires loading the data to python - # so that a uuid can be generated for each segment. - source_table = sa.Table(self.segment_table_name, metadata, - autoload=True, autoload_with=engine) - source_segments = engine.execute(source_table.select()) - ml2_segments = [dict(x) for x in source_segments] - for segment in ml2_segments: - self.migrate_segment_dict(segment) - if ml2_segments: - ml2_network_segments = metadata.tables['ml2_network_segments'] - engine.execute(ml2_network_segments.insert(), ml2_segments) - - def migrate_tunnels(self, engine, tunnel_type, vxlan_udp_port=None): - """Override this method to perform plugin-specific tunnel migration.""" - pass - - def migrate_vlan_allocations(self, engine): - engine.execute((""" - INSERT INTO ml2_vlan_allocations - SELECT physical_network, vlan_id, allocated - FROM %(source_table)s - WHERE allocated = TRUE - """) % {'source_table': self.vlan_allocation_table_name}) - - def get_port_segment_map(self, engine): - """Retrieve a mapping of port id to segment id. - - The monolithic plugins only support a single segment per - network, so the segment id can be uniquely identified by - the network associated with a given port. 
- - """ - port_segments = engine.execute(""" - SELECT ports_network.port_id, ml2_network_segments.id AS segment_id - FROM ml2_network_segments, ( - SELECT portbindingports.port_id, ports.network_id - FROM portbindingports, ports - WHERE portbindingports.port_id = ports.id - ) AS ports_network - WHERE ml2_network_segments.network_id = ports_network.network_id - """) - return dict(x for x in port_segments) - - def migrate_port_bindings(self, engine, metadata): - port_segment_map = self.get_port_segment_map(engine) - - port_binding_ports = sa.Table('portbindingports', metadata, - autoload=True, autoload_with=engine) - source_bindings = engine.execute(port_binding_ports.select()) - ml2_bindings = [dict(x) for x in source_bindings] - for binding in ml2_bindings: - binding['vif_type'] = self.vif_type - binding['driver'] = self.driver_type - segment = port_segment_map.get(binding['port_id']) - if segment: - binding['segment'] = segment - if ml2_bindings: - ml2_port_bindings = metadata.tables['ml2_port_bindings'] - engine.execute(ml2_port_bindings.insert(), ml2_bindings) - - -class BaseMigrateToMl2_IcehouseMixin(object): - """A mixin to ensure ml2 database schema state for Icehouse. - - This classes the missing tables for Icehouse schema revisions. In Juno, - the schema state has been healed, so we do not need to run these. 
- """ - def drop_old_tables(self, engine, save_tables=False): - if save_tables: - return - old_tables = self.old_tables + [self.vlan_allocation_table_name, - self.segment_table_name] - for table_name in old_tables: - engine.execute('DROP TABLE %s' % table_name) - - def define_ml2_tables(self, metadata): - - sa.Table( - 'arista_provisioned_nets', metadata, - sa.Column('tenant_id', sa.String(length=255), nullable=True), - sa.Column('id', sa.String(length=36), nullable=False), - sa.Column('network_id', sa.String(length=36), nullable=True), - sa.Column('segmentation_id', sa.Integer(), - autoincrement=False, nullable=True), - sa.PrimaryKeyConstraint('id'), - ) - - sa.Table( - 'arista_provisioned_vms', metadata, - sa.Column('tenant_id', sa.String(length=255), nullable=True), - sa.Column('id', sa.String(length=36), nullable=False), - sa.Column('vm_id', sa.String(length=255), nullable=True), - sa.Column('host_id', sa.String(length=255), nullable=True), - sa.Column('port_id', sa.String(length=36), nullable=True), - sa.Column('network_id', sa.String(length=36), nullable=True), - sa.PrimaryKeyConstraint('id'), - ) - - sa.Table( - 'arista_provisioned_tenants', metadata, - sa.Column('tenant_id', sa.String(length=255), nullable=True), - sa.Column('id', sa.String(length=36), nullable=False), - sa.PrimaryKeyConstraint('id'), - ) - - sa.Table( - 'cisco_ml2_nexusport_bindings', metadata, - sa.Column('binding_id', sa.Integer(), nullable=False), - sa.Column('port_id', sa.String(length=255), nullable=True), - sa.Column('vlan_id', sa.Integer(), autoincrement=False, - nullable=False), - sa.Column('switch_ip', sa.String(length=255), nullable=True), - sa.Column('instance_id', sa.String(length=255), nullable=True), - sa.PrimaryKeyConstraint('binding_id'), - ) - - sa.Table( - 'cisco_ml2_credentials', metadata, - sa.Column('credential_id', sa.String(length=255), nullable=True), - sa.Column('tenant_id', sa.String(length=255), nullable=False), - sa.Column('credential_name', 
sa.String(length=255), - nullable=False), - sa.Column('user_name', sa.String(length=255), nullable=True), - sa.Column('password', sa.String(length=255), nullable=True), - sa.PrimaryKeyConstraint('tenant_id', 'credential_name'), - ) - - sa.Table( - 'ml2_flat_allocations', metadata, - sa.Column('physical_network', sa.String(length=64), - nullable=False), - sa.PrimaryKeyConstraint('physical_network'), - ) - - sa.Table( - 'ml2_gre_allocations', metadata, - sa.Column('gre_id', sa.Integer, nullable=False, - autoincrement=False), - sa.Column('allocated', sa.Boolean, nullable=False), - sa.PrimaryKeyConstraint('gre_id'), - ) - - sa.Table( - 'ml2_gre_endpoints', metadata, - sa.Column('ip_address', sa.String(length=64)), - sa.PrimaryKeyConstraint('ip_address'), - ) - - sa.Table( - 'ml2_network_segments', metadata, - sa.Column('id', sa.String(length=36), nullable=False), - sa.Column('network_id', sa.String(length=36), nullable=False), - sa.Column('network_type', sa.String(length=32), nullable=False), - sa.Column('physical_network', sa.String(length=64), nullable=True), - sa.Column('segmentation_id', sa.Integer(), nullable=True), - sa.ForeignKeyConstraint(['network_id'], ['networks.id'], - ondelete='CASCADE'), - sa.PrimaryKeyConstraint('id'), - ) - - sa.Table( - 'ml2_port_bindings', metadata, - sa.Column('port_id', sa.String(length=36), nullable=False), - sa.Column('host', sa.String(length=255), nullable=False), - sa.Column('vif_type', sa.String(length=64), nullable=False), - sa.Column('driver', sa.String(length=64), nullable=True), - sa.Column('segment', sa.String(length=36), nullable=True), - sa.Column('vnic_type', sa.String(length=64), nullable=False, - server_default='normal'), - sa.Column('vif_details', sa.String(4095), nullable=False, - server_default=''), - sa.Column('profile', sa.String(4095), nullable=False, - server_default=''), - sa.ForeignKeyConstraint(['port_id'], ['ports.id'], - ondelete='CASCADE'), - sa.ForeignKeyConstraint(['segment'], 
['ml2_network_segments.id'], - ondelete='SET NULL'), - sa.PrimaryKeyConstraint('port_id'), - ) - - sa.Table( - 'ml2_vlan_allocations', metadata, - sa.Column('physical_network', sa.String(length=64), - nullable=False), - sa.Column('vlan_id', sa.Integer(), autoincrement=False, - nullable=False), - sa.Column('allocated', sa.Boolean(), autoincrement=False, - nullable=False), - sa.PrimaryKeyConstraint('physical_network', 'vlan_id'), - ) - - sa.Table( - 'ml2_vxlan_allocations', metadata, - sa.Column('vxlan_vni', sa.Integer, nullable=False, - autoincrement=False), - sa.Column('allocated', sa.Boolean, nullable=False), - sa.PrimaryKeyConstraint('vxlan_vni'), - ) - - sa.Table( - 'ml2_vxlan_endpoints', metadata, - sa.Column('ip_address', sa.String(length=64)), - sa.Column('udp_port', sa.Integer(), nullable=False, - autoincrement=False), - sa.PrimaryKeyConstraint('ip_address', 'udp_port'), - ) - - -class MigrateLinuxBridgeToMl2_Juno(BaseMigrateToMl2): - - def __init__(self): - super(MigrateLinuxBridgeToMl2_Juno, self).__init__( - vif_type=portbindings.VIF_TYPE_BRIDGE, - driver_type=LINUXBRIDGE, - segment_table_name='network_bindings', - vlan_allocation_table_name='network_states', - old_tables=['portbindingports']) - - def migrate_segment_dict(self, binding): - super(MigrateLinuxBridgeToMl2_Juno, self).migrate_segment_dict( - binding) - vlan_id = binding.pop('vlan_id') - network_type, segmentation_id = interpret_vlan_id(vlan_id) - binding['network_type'] = network_type - binding['segmentation_id'] = segmentation_id - - -class MigrateHyperVPluginToMl2_Juno(BaseMigrateToMl2): - - def __init__(self): - super(MigrateHyperVPluginToMl2_Juno, self).__init__( - vif_type=portbindings.VIF_TYPE_HYPERV, - driver_type=HYPERV, - segment_table_name='hyperv_network_bindings', - vlan_allocation_table_name='hyperv_vlan_allocations', - old_tables=['portbindingports']) - - def migrate_segment_dict(self, binding): - super(MigrateHyperVPluginToMl2_Juno, self).migrate_segment_dict( - binding) - # 
the 'hyperv_network_bindings' table has the column - # 'segmentation_id' instead of 'vlan_id'. - vlan_id = binding.pop('segmentation_id') - network_type, segmentation_id = interpret_vlan_id(vlan_id) - binding['network_type'] = network_type - binding['segmentation_id'] = segmentation_id - - -class MigrateOpenvswitchToMl2_Juno(BaseMigrateToMl2): - - def __init__(self): - super(MigrateOpenvswitchToMl2_Juno, self).__init__( - vif_type=portbindings.VIF_TYPE_OVS, - driver_type=OPENVSWITCH, - segment_table_name='ovs_network_bindings', - vlan_allocation_table_name='ovs_vlan_allocations', - old_tables=[ - 'ovs_tunnel_allocations', - 'ovs_tunnel_endpoints', - 'portbindingports', - ]) - - def migrate_tunnels(self, engine, tunnel_type, vxlan_udp_port=None): - if tunnel_type == p_const.TYPE_GRE: - engine.execute(""" - INSERT INTO ml2_gre_allocations - SELECT tunnel_id as gre_id, allocated - FROM ovs_tunnel_allocations - WHERE allocated = TRUE - """) - engine.execute(""" - INSERT INTO ml2_gre_endpoints - SELECT ip_address - FROM ovs_tunnel_endpoints - """) - elif tunnel_type == p_const.TYPE_VXLAN: - if not vxlan_udp_port: - vxlan_udp_port = p_const.VXLAN_UDP_PORT - engine.execute(""" - INSERT INTO ml2_vxlan_allocations - SELECT tunnel_id as vxlan_vni, allocated - FROM ovs_tunnel_allocations - WHERE allocated = TRUE - """) - engine.execute(sa.text(""" - INSERT INTO ml2_vxlan_endpoints - SELECT ip_address, :udp_port as udp_port - FROM ovs_tunnel_endpoints - """), udp_port=vxlan_udp_port) - else: - raise ValueError(_('Unknown tunnel type: %s') % tunnel_type) - - -class MigrateLinuxBridgeToMl2_Icehouse(MigrateLinuxBridgeToMl2_Juno, - BaseMigrateToMl2_IcehouseMixin): - pass - - -class MigrateOpenvswitchToMl2_Icehouse(MigrateOpenvswitchToMl2_Juno, - BaseMigrateToMl2_IcehouseMixin): - pass - - -class MigrateHyperVPluginToMl2_Icehouse(MigrateHyperVPluginToMl2_Juno, - BaseMigrateToMl2_IcehouseMixin): - pass - -migrate_map = { - ICEHOUSE: { - OPENVSWITCH: MigrateOpenvswitchToMl2_Icehouse, 
-        LINUXBRIDGE: MigrateLinuxBridgeToMl2_Icehouse,
-        HYPERV: MigrateHyperVPluginToMl2_Icehouse,
-    },
-    JUNO: {
-        OPENVSWITCH: MigrateOpenvswitchToMl2_Juno,
-        LINUXBRIDGE: MigrateLinuxBridgeToMl2_Juno,
-        HYPERV: MigrateHyperVPluginToMl2_Juno,
-    },
-}
-
-
-def main():
-    parser = argparse.ArgumentParser()
-    parser.add_argument('plugin', choices=[OPENVSWITCH, LINUXBRIDGE, HYPERV],
-                        help=_('The plugin type whose database will be '
-                               'migrated'))
-    parser.add_argument('connection',
-                        help=_('The connection url for the target db'))
-    parser.add_argument('--tunnel-type', choices=[p_const.TYPE_GRE,
-                                                  p_const.TYPE_VXLAN],
-                        help=_('The %s tunnel type to migrate from') %
-                        OPENVSWITCH)
-    parser.add_argument('--vxlan-udp-port', default=None, type=int,
-                        help=_('The UDP port to use for VXLAN tunnels.'))
-    parser.add_argument('--release', default=JUNO, choices=[ICEHOUSE, JUNO])
-    parser.add_argument('--save-tables', default=False, action='store_true',
-                        help=_("Retain the old plugin's tables"))
-    #TODO(marun) Provide a verbose option
-    args = parser.parse_args()
-
-    if args.plugin in [LINUXBRIDGE, HYPERV] and (args.tunnel_type or
-                                                 args.vxlan_udp_port):
-        msg = _('Tunnel args (tunnel-type and vxlan-udp-port) are not valid '
-                'for the %s plugin')
-        parser.error(msg % args.plugin)
-
-    try:
-        migrate_func = migrate_map[args.release][args.plugin]()
-    except KeyError:
-        msg = _('Support for migrating %(plugin)s for release '
                '%(release)s is not yet implemented')
-        parser.error(msg % {'plugin': args.plugin, 'release': args.release})
-    else:
-        migrate_func(args.connection, args.save_tables, args.tunnel_type,
-                     args.vxlan_udp_port)
-
-
-if __name__ == '__main__':
-    main()
diff --git a/neutron/db/migration/models/head.py b/neutron/db/migration/models/head.py
index 7119b4d5b2e..6044cd4ab5c 100644
--- a/neutron/db/migration/models/head.py
+++ b/neutron/db/migration/models/head.py
@@ -21,6 +21,7 @@ Based on this comparison database can be healed with healing migration.
""" +from neutron.db import address_scope_db # noqa from neutron.db import agents_db # noqa from neutron.db import agentschedulers_db # noqa from neutron.db import allowedaddresspairs_db # noqa @@ -28,6 +29,7 @@ from neutron.db import dvr_mac_db # noqa from neutron.db import external_net_db # noqa from neutron.db import extradhcpopt_db # noqa from neutron.db import extraroute_db # noqa +from neutron.db import flavors_db # noqa from neutron.db import l3_agentschedulers_db # noqa from neutron.db import l3_attrs_db # noqa from neutron.db import l3_db # noqa @@ -39,7 +41,8 @@ from neutron.db import model_base from neutron.db import models_v2 # noqa from neutron.db import portbindings_db # noqa from neutron.db import portsecurity_db # noqa -from neutron.db import quota_db # noqa +from neutron.db.quota import models # noqa +from neutron.db import rbac_db_models # noqa from neutron.db import securitygroups_db # noqa from neutron.db import servicetype_db # noqa from neutron.ipam.drivers.neutrondb_ipam import db_models # noqa @@ -49,18 +52,12 @@ from neutron.plugins.brocade.db import models as brocade_models # noqa from neutron.plugins.cisco.db.l3 import l3_models # noqa from neutron.plugins.cisco.db import n1kv_models_v2 # noqa from neutron.plugins.cisco.db import network_models_v2 # noqa -from neutron.plugins.metaplugin import meta_models_v2 # noqa from neutron.plugins.ml2.drivers.arista import db # noqa from neutron.plugins.ml2.drivers.brocade.db import ( # noqa models as ml2_brocade_models) -from neutron.plugins.ml2.drivers.cisco.apic import apic_model # noqa -from neutron.plugins.ml2.drivers.cisco.n1kv import n1kv_models # noqa from neutron.plugins.ml2.drivers.cisco.nexus import ( # noqa nexus_models_v2 as ml2_nexus_models_v2) from neutron.plugins.ml2.drivers.cisco.ucsm import ucsm_model # noqa -from neutron.plugins.ml2.drivers.linuxbridge.agent import ( # noqa - l2network_models_v2) -from neutron.plugins.ml2.drivers.openvswitch.agent import ovs_models_v2 # noqa from 
neutron.plugins.ml2.drivers import type_flat # noqa from neutron.plugins.ml2.drivers import type_gre # noqa from neutron.plugins.ml2.drivers import type_vlan # noqa diff --git a/neutron/db/models_v2.py b/neutron/db/models_v2.py index 606207a7de9..6e6d270efb5 100644 --- a/neutron/db/models_v2.py +++ b/neutron/db/models_v2.py @@ -15,6 +15,7 @@ from oslo_utils import uuidutils import sqlalchemy as sa +from sqlalchemy.ext.associationproxy import association_proxy from sqlalchemy import orm from neutron.api.v2 import attributes as attr @@ -132,7 +133,8 @@ class Port(model_base.BASEV2, HasId, HasTenant): name = sa.Column(sa.String(attr.NAME_MAX_LEN)) network_id = sa.Column(sa.String(36), sa.ForeignKey("networks.id"), nullable=False) - fixed_ips = orm.relationship(IPAllocation, backref='port', lazy='joined') + fixed_ips = orm.relationship(IPAllocation, backref='port', lazy='joined', + passive_deletes='all') mac_address = sa.Column(sa.String(32), nullable=False) admin_state_up = sa.Column(sa.Boolean(), nullable=False) status = sa.Column(sa.String(16), nullable=False) @@ -177,6 +179,7 @@ class DNSNameServer(model_base.BASEV2): sa.ForeignKey('subnets.id', ondelete="CASCADE"), primary_key=True) + order = sa.Column(sa.Integer, nullable=False, server_default='0') class Subnet(model_base.BASEV2, HasId, HasTenant): @@ -200,12 +203,12 @@ class Subnet(model_base.BASEV2, HasId, HasTenant): dns_nameservers = orm.relationship(DNSNameServer, backref='subnet', cascade='all, delete, delete-orphan', + order_by=DNSNameServer.order, lazy='joined') routes = orm.relationship(SubnetRoute, backref='subnet', cascade='all, delete, delete-orphan', lazy='joined') - shared = sa.Column(sa.Boolean) ipv6_ra_mode = sa.Column(sa.Enum(constants.IPV6_SLAAC, constants.DHCPV6_STATEFUL, constants.DHCPV6_STATELESS, @@ -214,6 +217,7 @@ class Subnet(model_base.BASEV2, HasId, HasTenant): constants.DHCPV6_STATEFUL, constants.DHCPV6_STATELESS, name='ipv6_address_modes'), nullable=True) + rbac_entries = 
association_proxy('networks', 'rbac_entries') class SubnetPoolPrefix(model_base.BASEV2): @@ -251,10 +255,13 @@ class Network(model_base.BASEV2, HasId, HasTenant): name = sa.Column(sa.String(attr.NAME_MAX_LEN)) ports = orm.relationship(Port, backref='networks') - subnets = orm.relationship(Subnet, backref='networks', - lazy="joined") + subnets = orm.relationship( + Subnet, backref=orm.backref('networks', lazy='joined'), + lazy="joined") status = sa.Column(sa.String(16)) admin_state_up = sa.Column(sa.Boolean) - shared = sa.Column(sa.Boolean) mtu = sa.Column(sa.Integer, nullable=True) vlan_transparent = sa.Column(sa.Boolean, nullable=True) + rbac_entries = orm.relationship("NetworkRBAC", backref='network', + lazy='joined', + cascade='all, delete, delete-orphan') diff --git a/neutron/plugins/metaplugin/__init__.py b/neutron/db/quota/__init__.py similarity index 100% rename from neutron/plugins/metaplugin/__init__.py rename to neutron/db/quota/__init__.py diff --git a/neutron/db/quota/api.py b/neutron/db/quota/api.py new file mode 100644 index 00000000000..40a0a597d38 --- /dev/null +++ b/neutron/db/quota/api.py @@ -0,0 +1,159 @@ +# Copyright (c) 2015 OpenStack Foundation. All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. 
+
+import collections
+
+from neutron.db import common_db_mixin as common_db_api
+from neutron.db.quota import models as quota_models
+
+
+class QuotaUsageInfo(collections.namedtuple(
+    'QuotaUsageInfo', ['resource', 'tenant_id', 'used', 'reserved', 'dirty'])):
+
+    @property
+    def total(self):
+        """Total resource usage (reserved and used)."""
+        return self.reserved + self.used
+
+
+def get_quota_usage_by_resource_and_tenant(context, resource, tenant_id,
+                                           lock_for_update=False):
+    """Return usage info for a given resource and tenant.
+
+    :param context: Request context
+    :param resource: Name of the resource
+    :param tenant_id: Tenant identifier
+    :param lock_for_update: if True sets a write-intent lock on the query
+    :returns: a QuotaUsageInfo instance
+    """
+
+    query = common_db_api.model_query(context, quota_models.QuotaUsage)
+    query = query.filter_by(resource=resource, tenant_id=tenant_id)
+
+    if lock_for_update:
+        query = query.with_lockmode('update')
+
+    result = query.first()
+    if not result:
+        return
+    return QuotaUsageInfo(result.resource,
+                          result.tenant_id,
+                          result.in_use,
+                          result.reserved,
+                          result.dirty)
+
+
+def get_quota_usage_by_resource(context, resource):
+    query = common_db_api.model_query(context, quota_models.QuotaUsage)
+    query = query.filter_by(resource=resource)
+    return [QuotaUsageInfo(item.resource,
+                           item.tenant_id,
+                           item.in_use,
+                           item.reserved,
+                           item.dirty) for item in query]
+
+
+def get_quota_usage_by_tenant_id(context, tenant_id):
+    query = common_db_api.model_query(context, quota_models.QuotaUsage)
+    query = query.filter_by(tenant_id=tenant_id)
+    return [QuotaUsageInfo(item.resource,
+                           item.tenant_id,
+                           item.in_use,
+                           item.reserved,
+                           item.dirty) for item in query]
+
+
+def set_quota_usage(context, resource, tenant_id,
+                    in_use=None, reserved=None, delta=False):
+    """Set resource quota usage.
+
+    :param context: instance of neutron context with db session
+    :param resource: name of the resource for which usage is being set
+    :param tenant_id: identifier of the tenant for which quota usage is
+        being set
+    :param in_use: integer specifying the new quantity of used resources,
+        or a delta to apply to current used resource
+    :param reserved: integer specifying the new quantity of reserved resources,
+        or a delta to apply to current reserved resources
+    :param delta: Specifies whether in_use or reserved are absolute numbers
+        or deltas (defaults to False)
+    """
+    query = common_db_api.model_query(context, quota_models.QuotaUsage)
+    query = query.filter_by(resource=resource).filter_by(tenant_id=tenant_id)
+    usage_data = query.first()
+    with context.session.begin(subtransactions=True):
+        if not usage_data:
+            # Must create entry
+            usage_data = quota_models.QuotaUsage(
+                resource=resource,
+                tenant_id=tenant_id)
+            context.session.add(usage_data)
+        # Perform explicit comparison with None as 0 is a valid value
+        if in_use is not None:
+            if delta:
+                in_use = usage_data.in_use + in_use
+            usage_data.in_use = in_use
+        if reserved is not None:
+            if delta:
+                reserved = usage_data.reserved + reserved
+            usage_data.reserved = reserved
+        # After an explicit update the dirty bit should always be reset
+        usage_data.dirty = False
+    return QuotaUsageInfo(usage_data.resource,
+                          usage_data.tenant_id,
+                          usage_data.in_use,
+                          usage_data.reserved,
+                          usage_data.dirty)
+
+
+def set_quota_usage_dirty(context, resource, tenant_id, dirty=True):
+    """Set quota usage dirty bit for a given resource and tenant.
+
+    :param resource: a resource for which quota usage is tracked
+    :param tenant_id: tenant identifier
+    :param dirty: the desired value for the dirty bit (defaults to True)
+    :returns: 1 if the quota usage data were updated, 0 otherwise.
+ """ + query = common_db_api.model_query(context, quota_models.QuotaUsage) + query = query.filter_by(resource=resource).filter_by(tenant_id=tenant_id) + return query.update({'dirty': dirty}) + + +def set_resources_quota_usage_dirty(context, resources, tenant_id, dirty=True): + """Set quota usage dirty bit for a given tenant and multiple resources. + + :param resources: list of resource for which the dirty bit is going + to be set + :param tenant_id: tenant identifier + :param dirty: the desired value for the dirty bit (defaults to True) + :returns: the number of records for which the bit was actually set. + """ + query = common_db_api.model_query(context, quota_models.QuotaUsage) + query = query.filter_by(tenant_id=tenant_id) + if resources: + query = query.filter(quota_models.QuotaUsage.resource.in_(resources)) + # synchronize_session=False needed because of the IN condition + return query.update({'dirty': dirty}, synchronize_session=False) + + +def set_all_quota_usage_dirty(context, resource, dirty=True): + """Set the dirty bit on quota usage for all tenants. + + :param resource: the resource for which the dirty bit should be set + :returns: the number of tenants for which the dirty bit was + actually updated + """ + query = common_db_api.model_query(context, quota_models.QuotaUsage) + query = query.filter_by(resource=resource) + return query.update({'dirty': dirty}) diff --git a/neutron/db/quota/driver.py b/neutron/db/quota/driver.py new file mode 100644 index 00000000000..cf6031ae2d8 --- /dev/null +++ b/neutron/db/quota/driver.py @@ -0,0 +1,151 @@ +# Copyright 2011 OpenStack Foundation. +# All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. 
You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. + +from neutron.common import exceptions +from neutron.db.quota import models as quota_models + + +class DbQuotaDriver(object): + """Driver to perform necessary checks to enforce quotas and obtain quota + information. + + The default driver utilizes the local database. + """ + + @staticmethod + def get_tenant_quotas(context, resources, tenant_id): + """Given a list of resources, retrieve the quotas for the given + tenant. + + :param context: The request context, for access checks. + :param resources: A dictionary of the registered resource keys. + :param tenant_id: The ID of the tenant to return quotas for. + :return dict: from resource name to dict of name and limit + """ + + # init with defaults + tenant_quota = dict((key, resource.default) + for key, resource in resources.items()) + + # update with tenant specific limits + q_qry = context.session.query(quota_models.Quota).filter_by( + tenant_id=tenant_id) + tenant_quota.update((q['resource'], q['limit']) for q in q_qry) + + return tenant_quota + + @staticmethod + def delete_tenant_quota(context, tenant_id): + """Delete the quota entries for a given tenant_id. + + Atfer deletion, this tenant will use default quota values in conf. + """ + with context.session.begin(): + tenant_quotas = context.session.query(quota_models.Quota) + tenant_quotas = tenant_quotas.filter_by(tenant_id=tenant_id) + tenant_quotas.delete() + + @staticmethod + def get_all_quotas(context, resources): + """Given a list of resources, retrieve the quotas for the all tenants. + + :param context: The request context, for access checks. 
+        :param resources: A dictionary of the registered resource keys.
+        :return quotas: list of dict of tenant_id:, resourcekey1:
+        resourcekey2: ...
+        """
+        tenant_default = dict((key, resource.default)
+                              for key, resource in resources.items())
+
+        all_tenant_quotas = {}
+
+        for quota in context.session.query(quota_models.Quota):
+            tenant_id = quota['tenant_id']
+
+            # avoid setdefault() because only want to copy when actually req'd
+            tenant_quota = all_tenant_quotas.get(tenant_id)
+            if tenant_quota is None:
+                tenant_quota = tenant_default.copy()
+                tenant_quota['tenant_id'] = tenant_id
+                all_tenant_quotas[tenant_id] = tenant_quota
+
+            tenant_quota[quota['resource']] = quota['limit']
+
+        return list(all_tenant_quotas.values())
+
+    @staticmethod
+    def update_quota_limit(context, tenant_id, resource, limit):
+        with context.session.begin():
+            tenant_quota = context.session.query(quota_models.Quota).filter_by(
+                tenant_id=tenant_id, resource=resource).first()
+
+            if tenant_quota:
+                tenant_quota.update({'limit': limit})
+            else:
+                tenant_quota = quota_models.Quota(tenant_id=tenant_id,
+                                                  resource=resource,
+                                                  limit=limit)
+                context.session.add(tenant_quota)
+
+    def _get_quotas(self, context, tenant_id, resources):
+        """Retrieves the quotas for specific resources.
+
+        A helper method which retrieves the quotas for the specific
+        resources identified by keys, and which apply to the current
+        context.
+
+        :param context: The request context, for access checks.
+        :param tenant_id: the tenant_id to check quota.
+        :param resources: A dictionary of the registered resources.
+        """
+        # Grab and return the quotas (without usages)
+        quotas = DbQuotaDriver.get_tenant_quotas(
+            context, resources, tenant_id)
+
+        return dict((k, v) for k, v in quotas.items())
+
+    def limit_check(self, context, tenant_id, resources, values):
+        """Check simple quota limits.
+
+        For limits--those quotas for which there is no usage
+        synchronization function--this method checks that a set of
+        proposed values are permitted by the limit restriction.
+
+        If any of the proposed values is over the defined quota, an
+        OverQuota exception will be raised with the sorted list of the
+        resources which are too high. Otherwise, the method returns
+        nothing.
+
+        :param context: The request context, for access checks.
+        :param tenant_id: The tenant_id to check the quota.
+        :param resources: A dictionary of the registered resources.
+        :param values: A dictionary of the values to check against the
+            quota.
+        """
+
+        # Ensure no value is less than zero
+        unders = [key for key, val in values.items() if val < 0]
+        if unders:
+            raise exceptions.InvalidQuotaValue(unders=sorted(unders))
+
+        # Get the applicable quotas
+        quotas = self._get_quotas(context, tenant_id, resources)
+
+        # Check the quotas and construct a list of the resources that
+        # would be put over limit by the desired values
+        overs = [key for key, val in values.items()
+                 if quotas[key] >= 0 and quotas[key] < val]
+        if overs:
+            raise exceptions.OverQuota(overs=sorted(overs))
diff --git a/neutron/db/quota/models.py b/neutron/db/quota/models.py
new file mode 100644
index 00000000000..b0abd0d9f54
--- /dev/null
+++ b/neutron/db/quota/models.py
@@ -0,0 +1,44 @@
+# Copyright (c) 2015 OpenStack Foundation. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+import sqlalchemy as sa
+from sqlalchemy import sql
+
+from neutron.db import model_base
+from neutron.db import models_v2
+
+
+class Quota(model_base.BASEV2, models_v2.HasId, models_v2.HasTenant):
+    """Represent a single quota override for a tenant.
+
+    If there is no row for a given tenant id and resource, then the
+    default for the deployment is used.
+    """
+    resource = sa.Column(sa.String(255))
+    limit = sa.Column(sa.Integer)
+
+
+class QuotaUsage(model_base.BASEV2):
+    """Represents the current usage for a given resource."""
+
+    resource = sa.Column(sa.String(255), nullable=False,
+                         primary_key=True, index=True)
+    tenant_id = sa.Column(sa.String(255), nullable=False,
+                          primary_key=True, index=True)
+    dirty = sa.Column(sa.Boolean, nullable=False, server_default=sql.false())
+
+    in_use = sa.Column(sa.Integer, nullable=False,
+                       server_default="0")
+    reserved = sa.Column(sa.Integer, nullable=False,
+                         server_default="0")
diff --git a/neutron/db/quota_db.py b/neutron/db/quota_db.py
index ad7196675f3..1ce75aeef78 100644
--- a/neutron/db/quota_db.py
+++ b/neutron/db/quota_db.py
@@ -13,151 +13,10 @@
 # License for the specific language governing permissions and limitations
 # under the License.
-import sqlalchemy as sa
+import sys
-from neutron.common import exceptions
-from neutron.db import model_base
-from neutron.db import models_v2
+from neutron.db.quota import driver # noqa
-
-class Quota(model_base.BASEV2, models_v2.HasId, models_v2.HasTenant):
-    """Represent a single quota override for a tenant.
-
-    If there is no row for a given tenant id and resource, then the
-    default for the deployment is used.
-    """
-    resource = sa.Column(sa.String(255))
-    limit = sa.Column(sa.Integer)
-
-
-class DbQuotaDriver(object):
-    """Driver to perform necessary checks to enforce quotas and obtain quota
-    information.
-
-    The default driver utilizes the local database.
- """ - - @staticmethod - def get_tenant_quotas(context, resources, tenant_id): - """Given a list of resources, retrieve the quotas for the given - tenant. - - :param context: The request context, for access checks. - :param resources: A dictionary of the registered resource keys. - :param tenant_id: The ID of the tenant to return quotas for. - :return dict: from resource name to dict of name and limit - """ - - # init with defaults - tenant_quota = dict((key, resource.default) - for key, resource in resources.items()) - - # update with tenant specific limits - q_qry = context.session.query(Quota).filter_by(tenant_id=tenant_id) - tenant_quota.update((q['resource'], q['limit']) for q in q_qry) - - return tenant_quota - - @staticmethod - def delete_tenant_quota(context, tenant_id): - """Delete the quota entries for a given tenant_id. - - Atfer deletion, this tenant will use default quota values in conf. - """ - with context.session.begin(): - tenant_quotas = context.session.query(Quota) - tenant_quotas = tenant_quotas.filter_by(tenant_id=tenant_id) - tenant_quotas.delete() - - @staticmethod - def get_all_quotas(context, resources): - """Given a list of resources, retrieve the quotas for the all tenants. - - :param context: The request context, for access checks. - :param resources: A dictionary of the registered resource keys. - :return quotas: list of dict of tenant_id:, resourcekey1: - resourcekey2: ... 
- """ - tenant_default = dict((key, resource.default) - for key, resource in resources.items()) - - all_tenant_quotas = {} - - for quota in context.session.query(Quota): - tenant_id = quota['tenant_id'] - - # avoid setdefault() because only want to copy when actually req'd - tenant_quota = all_tenant_quotas.get(tenant_id) - if tenant_quota is None: - tenant_quota = tenant_default.copy() - tenant_quota['tenant_id'] = tenant_id - all_tenant_quotas[tenant_id] = tenant_quota - - tenant_quota[quota['resource']] = quota['limit'] - - return all_tenant_quotas.values() - - @staticmethod - def update_quota_limit(context, tenant_id, resource, limit): - with context.session.begin(): - tenant_quota = context.session.query(Quota).filter_by( - tenant_id=tenant_id, resource=resource).first() - - if tenant_quota: - tenant_quota.update({'limit': limit}) - else: - tenant_quota = Quota(tenant_id=tenant_id, - resource=resource, - limit=limit) - context.session.add(tenant_quota) - - def _get_quotas(self, context, tenant_id, resources): - """Retrieves the quotas for specific resources. - - A helper method which retrieves the quotas for the specific - resources identified by keys, and which apply to the current - context. - - :param context: The request context, for access checks. - :param tenant_id: the tenant_id to check quota. - :param resources: A dictionary of the registered resources. - """ - # Grab and return the quotas (without usages) - quotas = DbQuotaDriver.get_tenant_quotas( - context, resources, tenant_id) - - return dict((k, v) for k, v in quotas.items()) - - def limit_check(self, context, tenant_id, resources, values): - """Check simple quota limits. - - For limits--those quotas for which there is no usage - synchronization function--this method checks that a set of - proposed values are permitted by the limit restriction. 
-
-        If any of the proposed values is over the defined quota, an
-        OverQuota exception will be raised with the sorted list of the
-        resources which are too high. Otherwise, the method returns
-        nothing.
-
-        :param context: The request context, for access checks.
-        :param tenant_id: The tenant_id to check the quota.
-        :param resources: A dictionary of the registered resources.
-        :param values: A dictionary of the values to check against the
-            quota.
-        """
-
-        # Ensure no value is less than zero
-        unders = [key for key, val in values.items() if val < 0]
-        if unders:
-            raise exceptions.InvalidQuotaValue(unders=sorted(unders))
-
-        # Get the applicable quotas
-        quotas = self._get_quotas(context, tenant_id, resources)
-
-        # Check the quotas and construct a list of the resources that
-        # would be put over limit by the desired values
-        overs = [key for key, val in values.items()
-                 if quotas[key] >= 0 and quotas[key] < val]
-        if overs:
-            raise exceptions.OverQuota(overs=sorted(overs))
+# This module has been preserved for backward compatibility, and will be
+# deprecated in the future
+sys.modules[__name__] = sys.modules['neutron.db.quota.driver']
diff --git a/neutron/db/rbac_db_models.py b/neutron/db/rbac_db_models.py
new file mode 100644
index 00000000000..9e0aa44866e
--- /dev/null
+++ b/neutron/db/rbac_db_models.py
@@ -0,0 +1,85 @@
+# Copyright (c) 2015 Mirantis, Inc.
+# All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+ +import abc + +import sqlalchemy as sa +from sqlalchemy.orm import validates + +from neutron.common import exceptions as n_exc +from neutron.db import model_base +from neutron.db import models_v2 + + +class InvalidActionForType(n_exc.InvalidInput): + message = _("Invalid action '%(action)s' for object type " + "'%(object_type)s'. Valid actions: %(valid_actions)s") + + +class RBACColumns(models_v2.HasId, models_v2.HasTenant): + """Mixin that object-specific RBAC tables should inherit. + + All RBAC tables should inherit directly from this one because + the RBAC code uses the __subclasses__() method to discover the + RBAC types. + """ + + # the target_tenant is the subject that the policy will affect. this may + # also be a wildcard '*' to indicate all tenants or it may be a role if + # neutron gets better integration with keystone + target_tenant = sa.Column(sa.String(255), nullable=False) + + action = sa.Column(sa.String(255), nullable=False) + + @abc.abstractproperty + def object_type(self): + # this determines the name that users will use in the API + # to reference the type. 
sub-classes should set their own + pass + + __table_args__ = ( + sa.UniqueConstraint('target_tenant', 'object_id', 'action'), + model_base.BASEV2.__table_args__ + ) + + @validates('action') + def _validate_action(self, key, action): + if action not in self.get_valid_actions(): + raise InvalidActionForType( + action=action, object_type=self.object_type, + valid_actions=self.get_valid_actions()) + return action + + @abc.abstractmethod + def get_valid_actions(self): + # object table needs to override this to return an iterable + # with the valid actions for rbac entries + pass + + +def get_type_model_map(): + return {table.object_type: table for table in RBACColumns.__subclasses__()} + + +class NetworkRBAC(RBACColumns, model_base.BASEV2): + """RBAC table for networks.""" + + object_id = sa.Column(sa.String(36), + sa.ForeignKey('networks.id', ondelete="CASCADE"), + nullable=False) + object_type = 'network' + + def get_valid_actions(self): + return ('access_as_shared',) diff --git a/neutron/db/securitygroups_rpc_base.py b/neutron/db/securitygroups_rpc_base.py index 63212fad92c..7ae461d74c7 100644 --- a/neutron/db/securitygroups_rpc_base.py +++ b/neutron/db/securitygroups_rpc_base.py @@ -194,7 +194,7 @@ class SecurityGroupServerRpcMixin(sg_db.SecurityGroupDbMixin): for key in ('protocol', 'port_range_min', 'port_range_max', 'remote_ip_prefix', 'remote_group_id'): - if rule_in_db.get(key): + if rule_in_db.get(key) is not None: if key == 'remote_ip_prefix': direction_ip_prefix = DIRECTION_IP_PREFIX[direction] rule_dict[direction_ip_prefix] = rule_in_db[key] @@ -440,7 +440,7 @@ class SecurityGroupServerRpcMixin(sg_db.SecurityGroupDbMixin): } for key in ('protocol', 'port_range_min', 'port_range_max', 'remote_ip_prefix', 'remote_group_id'): - if rule_in_db.get(key): + if rule_in_db.get(key) is not None: if key == 'remote_ip_prefix': direction_ip_prefix = DIRECTION_IP_PREFIX[direction] rule_dict[direction_ip_prefix] = rule_in_db[key] diff --git 
a/neutron/extensions/address_scope.py b/neutron/extensions/address_scope.py index 63829920bf3..e63ac7ff90e 100644 --- a/neutron/extensions/address_scope.py +++ b/neutron/extensions/address_scope.py @@ -106,14 +106,17 @@ class Address_scope(extensions.ExtensionDescriptor): return [ex] def get_extended_resources(self, version): - return {} + if version == "2.0": + return RESOURCE_ATTRIBUTE_MAP + else: + return {} @six.add_metaclass(abc.ABCMeta) class AddressScopePluginBase(object): @abc.abstractmethod - def create_address_scope(self, context, adress_scope): + def create_address_scope(self, context, address_scope): pass @abc.abstractmethod diff --git a/neutron/extensions/flavor.py b/neutron/extensions/flavor.py deleted file mode 100644 index 9cafb13ef0a..00000000000 --- a/neutron/extensions/flavor.py +++ /dev/null @@ -1,64 +0,0 @@ -# Copyright 2012 Nachi Ueno, NTT MCL, Inc. -# All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. You may obtain -# a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT -# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the -# License for the specific language governing permissions and limitations -# under the License. 
- -from oslo_log import log as logging - -from neutron.api import extensions -from neutron.api.v2 import attributes - - -LOG = logging.getLogger(__name__) - -FLAVOR_NETWORK = 'flavor:network' -FLAVOR_ROUTER = 'flavor:router' - -FLAVOR_ATTRIBUTE = { - 'networks': { - FLAVOR_NETWORK: {'allow_post': True, - 'allow_put': False, - 'is_visible': True, - 'default': attributes.ATTR_NOT_SPECIFIED} - }, - 'routers': { - FLAVOR_ROUTER: {'allow_post': True, - 'allow_put': False, - 'is_visible': True, - 'default': attributes.ATTR_NOT_SPECIFIED} - } -} - - -class Flavor(extensions.ExtensionDescriptor): - @classmethod - def get_name(cls): - return "Flavor support for network and router" - - @classmethod - def get_alias(cls): - return "flavor" - - @classmethod - def get_description(cls): - return "Flavor" - - @classmethod - def get_updated(cls): - return "2012-07-20T10:00:00-00:00" - - def get_extended_resources(self, version): - if version == "2.0": - return FLAVOR_ATTRIBUTE - else: - return {} diff --git a/neutron/extensions/flavors.py b/neutron/extensions/flavors.py new file mode 100644 index 00000000000..8de5fd08fe1 --- /dev/null +++ b/neutron/extensions/flavors.py @@ -0,0 +1,152 @@ +# All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. 
+ +from neutron.api import extensions +from neutron.api.v2 import attributes as attr +from neutron.api.v2 import base +from neutron.api.v2 import resource_helper +from neutron import manager +from neutron.plugins.common import constants + + +FLAVORS = 'flavors' +SERVICE_PROFILES = 'service_profiles' +FLAVORS_PREFIX = "" + +RESOURCE_ATTRIBUTE_MAP = { + FLAVORS: { + 'id': {'allow_post': False, 'allow_put': False, + 'validate': {'type:uuid': None}, + 'is_visible': True, + 'primary_key': True}, + 'name': {'allow_post': True, 'allow_put': True, + 'validate': {'type:string': None}, + 'is_visible': True, 'default': ''}, + 'description': {'allow_post': True, 'allow_put': True, + 'validate': {'type:string': None}, + 'is_visible': True, 'default': ''}, + 'service_type': {'allow_post': True, 'allow_put': False, + 'validate': {'type:string': None}, + 'is_visible': True}, + 'tenant_id': {'allow_post': True, 'allow_put': False, + 'required_by_policy': True, + 'validate': {'type:string': attr.TENANT_ID_MAX_LEN}, + 'is_visible': True}, + 'service_profiles': {'allow_post': True, 'allow_put': True, + 'validate': {'type:uuid_list': None}, + 'is_visible': True, 'default': []}, + 'enabled': {'allow_post': True, 'allow_put': True, + 'validate': {'type:boolean': None}, + 'default': True, + 'is_visible': True}, + }, + SERVICE_PROFILES: { + 'id': {'allow_post': False, 'allow_put': False, + 'is_visible': True, + 'primary_key': True}, + 'description': {'allow_post': True, 'allow_put': True, + 'validate': {'type:string': None}, + 'is_visible': True}, + # service_profile belong to one service type for now + #'service_types': {'allow_post': False, 'allow_put': False, + # 'is_visible': True}, + 'driver': {'allow_post': True, 'allow_put': False, + 'validate': {'type:string': None}, + 'is_visible': True, + 'default': attr.ATTR_NOT_SPECIFIED}, + 'metainfo': {'allow_post': True, 'allow_put': True, + 'is_visible': True}, + 'tenant_id': {'allow_post': True, 'allow_put': False, + 'required_by_policy': 
True, + 'validate': {'type:string': attr.TENANT_ID_MAX_LEN}, + 'is_visible': True}, + 'enabled': {'allow_post': True, 'allow_put': True, + 'validate': {'type:boolean': None}, + 'is_visible': True, 'default': True}, + }, +} + + +SUB_RESOURCE_ATTRIBUTE_MAP = { + 'service_profiles': { + 'parent': {'collection_name': 'flavors', + 'member_name': 'flavor'}, + 'parameters': {'id': {'allow_post': True, 'allow_put': False, + 'validate': {'type:uuid': None}, + 'is_visible': True}} + } +} + + +class Flavors(extensions.ExtensionDescriptor): + + @classmethod + def get_name(cls): + return "Neutron Service Flavors" + + @classmethod + def get_alias(cls): + return "flavors" + + @classmethod + def get_description(cls): + return "Service specification for advanced services" + + @classmethod + def get_updated(cls): + return "2014-07-06T10:00:00-00:00" + + @classmethod + def get_resources(cls): + """Returns Ext Resources.""" + plural_mappings = resource_helper.build_plural_mappings( + {}, RESOURCE_ATTRIBUTE_MAP) + attr.PLURALS.update(plural_mappings) + resources = resource_helper.build_resource_info( + plural_mappings, + RESOURCE_ATTRIBUTE_MAP, + constants.FLAVORS) + plugin = manager.NeutronManager.get_service_plugins()[ + constants.FLAVORS] + for collection_name in SUB_RESOURCE_ATTRIBUTE_MAP: + # Special handling needed for sub-resources with 'y' ending + # (e.g. 
proxies -> proxy) + resource_name = collection_name[:-1] + parent = SUB_RESOURCE_ATTRIBUTE_MAP[collection_name].get('parent') + params = SUB_RESOURCE_ATTRIBUTE_MAP[collection_name].get( + 'parameters') + + controller = base.create_resource(collection_name, resource_name, + plugin, params, + allow_bulk=True, + parent=parent) + + resource = extensions.ResourceExtension( + collection_name, + controller, parent, + path_prefix=FLAVORS_PREFIX, + attr_map=params) + resources.append(resource) + + return resources + + def update_attributes_map(self, attributes): + super(Flavors, self).update_attributes_map( + attributes, extension_attrs_map=RESOURCE_ATTRIBUTE_MAP) + + def get_extended_resources(self, version): + if version == "2.0": + return RESOURCE_ATTRIBUTE_MAP + else: + return {} diff --git a/neutron/extensions/portbindings.py b/neutron/extensions/portbindings.py index 3c50a4f2f8b..9c59b16c155 100644 --- a/neutron/extensions/portbindings.py +++ b/neutron/extensions/portbindings.py @@ -47,11 +47,29 @@ CAP_PORT_FILTER = 'port_filter' OVS_HYBRID_PLUG = 'ovs_hybrid_plug' VIF_DETAILS_VLAN = 'vlan' +# The keys below are used in the VIF_DETAILS attribute to convey +# information related to the configuration of the vhost-user VIF driver. + +# - vhost_user_mode: String value used to declare the mode of a +# vhost-user socket +VHOST_USER_MODE = 'vhostuser_mode' +# - server: socket created by hypervisor +VHOST_USER_MODE_SERVER = 'server' +# - client: socket created by vswitch +VHOST_USER_MODE_CLIENT = 'client' +# - vhostuser_socket String value used to declare the vhostuser socket name +VHOST_USER_SOCKET = 'vhostuser_socket' +# - vhost_user_ovs_plug: Boolean used to inform Nova that the ovs plug +# method should be used when binding the +# vhost-user vif. 
+VHOST_USER_OVS_PLUG = 'vhostuser_ovs_plug' + VIF_TYPE_UNBOUND = 'unbound' VIF_TYPE_BINDING_FAILED = 'binding_failed' VIF_TYPE_DISTRIBUTED = 'distributed' VIF_TYPE_IOVISOR = 'iovisor' VIF_TYPE_OVS = 'ovs' +VIF_TYPE_VHOST_USER = 'vhostuser' VIF_TYPE_IVS = 'ivs' VIF_TYPE_DVS = 'dvs' VIF_TYPE_BRIDGE = 'bridge' diff --git a/neutron/extensions/quotasv2.py b/neutron/extensions/quotasv2.py index cfe94c05229..f9a3ae9915f 100644 --- a/neutron/extensions/quotasv2.py +++ b/neutron/extensions/quotasv2.py @@ -25,13 +25,14 @@ from neutron.common import constants as const from neutron.common import exceptions as n_exc from neutron import manager from neutron import quota +from neutron.quota import resource_registry from neutron import wsgi RESOURCE_NAME = 'quota' RESOURCE_COLLECTION = RESOURCE_NAME + "s" QUOTAS = quota.QUOTAS -DB_QUOTA_DRIVER = 'neutron.db.quota_db.DbQuotaDriver' +DB_QUOTA_DRIVER = 'neutron.db.quota.driver.DbQuotaDriver' EXTENDED_ATTRIBUTES_2_0 = { RESOURCE_COLLECTION: {} } @@ -48,7 +49,7 @@ class QuotaSetsController(wsgi.Controller): self._update_extended_attributes = True def _update_attributes(self): - for quota_resource in QUOTAS.resources.keys(): + for quota_resource in resource_registry.get_all_resources().keys(): attr_dict = EXTENDED_ATTRIBUTES_2_0[RESOURCE_COLLECTION] attr_dict[quota_resource] = { 'allow_post': False, @@ -60,7 +61,9 @@ class QuotaSetsController(wsgi.Controller): def _get_quotas(self, request, tenant_id): return self._driver.get_tenant_quotas( - request.context, QUOTAS.resources, tenant_id) + request.context, + resource_registry.get_all_resources(), + tenant_id) def create(self, request, body=None): msg = _('POST requests are not supported on this resource.') @@ -70,7 +73,8 @@ class QuotaSetsController(wsgi.Controller): context = request.context self._check_admin(context) return {self._resource_name + "s": - self._driver.get_all_quotas(context, QUOTAS.resources)} + self._driver.get_all_quotas( + context, 
resource_registry.get_all_resources())} def tenant(self, request): """Retrieve the tenant info in context.""" diff --git a/neutron/extensions/securitygroup.py b/neutron/extensions/securitygroup.py index 8e863e831d6..f199f12025a 100644 --- a/neutron/extensions/securitygroup.py +++ b/neutron/extensions/securitygroup.py @@ -26,7 +26,7 @@ from neutron.api.v2 import base from neutron.common import constants as const from neutron.common import exceptions as nexception from neutron import manager -from neutron import quota +from neutron.quota import resource_registry # Security group Exceptions @@ -305,7 +305,7 @@ class Securitygroup(extensions.ExtensionDescriptor): for resource_name in ['security_group', 'security_group_rule']: collection_name = resource_name.replace('_', '-') + "s" params = RESOURCE_ATTRIBUTE_MAP.get(resource_name + "s", dict()) - quota.QUOTAS.register_resource_by_name(resource_name) + resource_registry.register_resource_by_name(resource_name) controller = base.create_resource(collection_name, resource_name, plugin, params, allow_bulk=True, diff --git a/neutron/ipam/driver.py b/neutron/ipam/driver.py index cd44fd09d66..3460517f6cd 100644 --- a/neutron/ipam/driver.py +++ b/neutron/ipam/driver.py @@ -148,14 +148,3 @@ class Subnet(object): :returns: An instance of SpecificSubnetRequest with the subnet detail. """ - - @abc.abstractmethod - def associate_neutron_subnet(self, subnet_id): - """Associate the IPAM subnet with a neutron subnet. - - This operation should be performed to attach a neutron subnet to the - current subnet instance. In some cases IPAM subnets may be created - independently of neutron subnets and associated at a later stage. - - :param subnet_id: neutron subnet identifier. 
- """ diff --git a/neutron/ipam/drivers/neutrondb_ipam/db_api.py b/neutron/ipam/drivers/neutrondb_ipam/db_api.py index 188d55990b0..223fb1c3484 100644 --- a/neutron/ipam/drivers/neutrondb_ipam/db_api.py +++ b/neutron/ipam/drivers/neutrondb_ipam/db_api.py @@ -54,11 +54,18 @@ class IpamSubnetManager(object): session.add(ipam_subnet) return self._ipam_subnet_id - def associate_neutron_id(self, session, neutron_subnet_id): - session.query(db_models.IpamSubnet).filter_by( - id=self._ipam_subnet_id).update( - {'neutron_subnet_id': neutron_subnet_id}) - self._neutron_subnet_id = neutron_subnet_id + @classmethod + def delete(cls, session, neutron_subnet_id): + """Delete IPAM subnet. + + The IPAM subnet no longer has a foreign key to the neutron subnet, + so the delete must be performed manually + + :param session: database session + :param neutron_subnet_id: neutron subnet id associated with ipam subnet + """ + return session.query(db_models.IpamSubnet).filter_by( + neutron_subnet_id=neutron_subnet_id).delete() def create_pool(self, session, pool_start, pool_end): """Create an allocation pool and availability ranges for the subnet. 
diff --git a/neutron/ipam/drivers/neutrondb_ipam/driver.py b/neutron/ipam/drivers/neutrondb_ipam/driver.py index 28a3eb91d38..1ddab84340f 100644 --- a/neutron/ipam/drivers/neutrondb_ipam/driver.py +++ b/neutron/ipam/drivers/neutrondb_ipam/driver.py @@ -56,7 +56,7 @@ class NeutronDbSubnet(ipam_base.Subnet): ipam_subnet_id = uuidutils.generate_uuid() subnet_manager = ipam_db_api.IpamSubnetManager( ipam_subnet_id, - None) + subnet_request.subnet_id) # Create subnet resource session = ctx.session subnet_manager.create(session) @@ -76,8 +76,7 @@ class NeutronDbSubnet(ipam_base.Subnet): allocation_pools=pools, gateway_ip=subnet_request.gateway_ip, tenant_id=subnet_request.tenant_id, - subnet_id=subnet_request.subnet_id, - subnet_id_not_set=True) + subnet_id=subnet_request.subnet_id) @classmethod def load(cls, neutron_subnet_id, ctx): @@ -88,7 +87,7 @@ class NeutronDbSubnet(ipam_base.Subnet): ipam_subnet = ipam_db_api.IpamSubnetManager.load_by_neutron_subnet_id( ctx.session, neutron_subnet_id) if not ipam_subnet: - LOG.error(_LE("Unable to retrieve IPAM subnet as the referenced " + LOG.error(_LE("IPAM subnet referenced to " "Neutron subnet %s does not exist"), neutron_subnet_id) raise n_exc.SubnetNotFound(subnet_id=neutron_subnet_id) @@ -113,7 +112,7 @@ class NeutronDbSubnet(ipam_base.Subnet): def __init__(self, internal_id, ctx, cidr=None, allocation_pools=None, gateway_ip=None, tenant_id=None, - subnet_id=None, subnet_id_not_set=False): + subnet_id=None): # NOTE: In theory it could have been possible to grant the IPAM # driver direct access to the database. 
While this is possible, # it would have led to duplicate code and/or non-trivial @@ -124,7 +123,7 @@ class NeutronDbSubnet(ipam_base.Subnet): self._pools = allocation_pools self._gateway_ip = gateway_ip self._tenant_id = tenant_id - self._subnet_id = None if subnet_id_not_set else subnet_id + self._subnet_id = subnet_id self.subnet_manager = ipam_db_api.IpamSubnetManager(internal_id, self._subnet_id) self._context = ctx @@ -363,17 +362,6 @@ class NeutronDbSubnet(ipam_base.Subnet): self._tenant_id, self.subnet_manager.neutron_id, self._cidr, self._gateway_ip, self._pools) - def associate_neutron_subnet(self, subnet_id): - """Set neutron identifier for this subnet""" - session = self._context.session - if self._subnet_id: - raise - # IPAMSubnet does not have foreign key to Subnet, - # so need verify subnet existence. - NeutronDbSubnet._fetch_subnet(self._context, subnet_id) - self.subnet_manager.associate_neutron_id(session, subnet_id) - self._subnet_id = subnet_id - class NeutronDbPool(subnet_alloc.SubnetAllocator): """Subnet pools backed by Neutron Database. @@ -429,10 +417,16 @@ class NeutronDbPool(subnet_alloc.SubnetAllocator): subnet.update_allocation_pools(subnet_request.allocation_pools) return subnet - def remove_subnet(self, subnet): + def remove_subnet(self, subnet_id): """Remove data structures for a given subnet. - All the IPAM-related data are cleared when a subnet is deleted thanks - to cascaded foreign key relationships. 
+ IPAM-related data has no foreign key relationships to the neutron subnet, + so the ipam subnet must be removed manually """ - pass + count = ipam_db_api.IpamSubnetManager.delete(self._context.session, + subnet_id) + if count < 1: + LOG.error(_LE("IPAM subnet referenced to " + "Neutron subnet %s does not exist"), + subnet_id) + raise n_exc.SubnetNotFound(subnet_id=subnet_id) diff --git a/neutron/ipam/requests.py b/neutron/ipam/requests.py index 7d45e235776..76a6860f1f4 100644 --- a/neutron/ipam/requests.py +++ b/neutron/ipam/requests.py @@ -255,11 +255,22 @@ class AddressRequestFactory(object): """ @classmethod - def get_request(cls, context, port, ip): - if not ip: - return AnyAddressRequest() + def get_request(cls, context, port, ip_dict): + """ + :param context: context (not used here, but can be used in sub-classes) + :param port: port dict (not used here, but can be used in sub-classes) + :param ip_dict: dict that can contain 'ip_address', 'mac' and + 'subnet_cidr' keys. The request to generate is selected depending on + which of these keys are set. 
+ :return: returns prepared AddressRequest (specific or any) + """ + if ip_dict.get('ip_address'): + return SpecificAddressRequest(ip_dict['ip_address']) + elif ip_dict.get('eui64_address'): + return AutomaticAddressRequest(prefix=ip_dict['subnet_cidr'], + mac=ip_dict['mac']) else: - return SpecificAddressRequest(ip) + return AnyAddressRequest() class SubnetRequestFactory(object): diff --git a/neutron/ipam/subnet_alloc.py b/neutron/ipam/subnet_alloc.py index cd17d338be9..1bc213ec4ba 100644 --- a/neutron/ipam/subnet_alloc.py +++ b/neutron/ipam/subnet_alloc.py @@ -193,9 +193,6 @@ class IpamSubnet(driver.Subnet): def get_details(self): return self._req - def associate_neutron_subnet(self, subnet_id): - pass - class SubnetPoolReader(object): '''Class to assist with reading a subnetpool, loading defaults, and diff --git a/neutron/locale/de/LC_MESSAGES/neutron-log-info.po b/neutron/locale/de/LC_MESSAGES/neutron-log-info.po index 8461ba9abba..40d427eaf67 100644 --- a/neutron/locale/de/LC_MESSAGES/neutron-log-info.po +++ b/neutron/locale/de/LC_MESSAGES/neutron-log-info.po @@ -8,10 +8,11 @@ msgid "" msgstr "" "Project-Id-Version: Neutron\n" "Report-Msgid-Bugs-To: EMAIL@ADDRESS\n" -"POT-Creation-Date: 2015-07-11 06:09+0000\n" -"PO-Revision-Date: 2015-07-08 20:45+0000\n" +"POT-Creation-Date: 2015-07-27 06:07+0000\n" +"PO-Revision-Date: 2015-07-25 03:05+0000\n" "Last-Translator: openstackjenkins \n" -"Language-Team: German (http://www.transifex.com/p/neutron/language/de/)\n" +"Language-Team: German (http://www.transifex.com/projects/p/neutron/language/" +"de/)\n" "Language: de\n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=UTF-8\n" @@ -199,11 +200,6 @@ msgid "Specified IP addresses do not match the subnet IP version" msgstr "" "Angegebene IP-Adressen stimmen nicht mit der Teilnetz-IP-Version überein" -#, python-format -msgid "Start IP (%(start)s) is greater than end IP (%(end)s)" -msgstr "" -"Anfangs-IP-Adresse (%(start)s) ist größer als Ende-IP-Adresse (%(end)s)" 
- msgid "Synchronizing state" msgstr "Synchronisation von Status" diff --git a/neutron/locale/es/LC_MESSAGES/neutron-log-info.po b/neutron/locale/es/LC_MESSAGES/neutron-log-info.po index 8a25eedf79e..b3611b4340a 100644 --- a/neutron/locale/es/LC_MESSAGES/neutron-log-info.po +++ b/neutron/locale/es/LC_MESSAGES/neutron-log-info.po @@ -3,14 +3,16 @@ # This file is distributed under the same license as the neutron project. # # Translators: +# Pablo Sanchez , 2015 msgid "" msgstr "" "Project-Id-Version: Neutron\n" "Report-Msgid-Bugs-To: EMAIL@ADDRESS\n" -"POT-Creation-Date: 2015-07-11 06:09+0000\n" -"PO-Revision-Date: 2015-07-08 20:45+0000\n" +"POT-Creation-Date: 2015-07-27 06:07+0000\n" +"PO-Revision-Date: 2015-07-25 03:05+0000\n" "Last-Translator: openstackjenkins \n" -"Language-Team: Spanish (http://www.transifex.com/p/neutron/language/es/)\n" +"Language-Team: Spanish (http://www.transifex.com/projects/p/neutron/language/" +"es/)\n" "Language: es\n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=UTF-8\n" @@ -71,6 +73,10 @@ msgstr "Se ha intentado eliminar el filtro de puerto que no está filtrado %r" msgid "Attempted to update port filter which is not filtered %s" msgstr "Se ha intentado actualizar el filtro de puerto que no está filtrado %s" +#, python-format +msgid "Cleaning bridge: %s" +msgstr "LImpiando puente: %s" + #, python-format msgid "Config paste file: %s" msgstr "Archivo de configuración de pegar: %s" @@ -78,6 +84,14 @@ msgstr "Archivo de configuración de pegar: %s" msgid "DHCP agent started" msgstr "Se ha iniciado al agente DHCP" +#, python-format +msgid "Deleting port: %s" +msgstr "Destruyendo puerto: %s" + +#, python-format +msgid "Destroying IPset: %s" +msgstr "Destruyendo IPset: %s" + #, python-format msgid "Device %s already exists" msgstr "El dispositivo %s ya existe" @@ -153,6 +167,9 @@ msgstr "Rangos de VLAN de red: %s" msgid "No %s Plugin loaded" msgstr "No se ha cargado ningún plug-in de %s" +msgid "No ports here to refresh firewall" 
+msgstr "No hay puertos aqui para actualizar firewall" + msgid "OVS cleanup completed successfully" msgstr "La limpieza de OVS se ha completado satisfactoriamente" @@ -186,6 +203,13 @@ msgstr "Renovar reglas de cortafuegos" msgid "Remove device filter for %r" msgstr "Eliminar filtro de dispositivo para %r" +#, python-format +msgid "" +"Router %s is not managed by this agent. It was possibly deleted concurrently." +msgstr "" +"Router %s no es controlado por este agente.Fue posiblemente borrado " +"concurrentemente" + #, python-format msgid "Security group member updated %r" msgstr "Se ha actualizado el miembro de grupo de seguridad %r" @@ -202,14 +226,12 @@ msgid "Specified IP addresses do not match the subnet IP version" msgstr "" "Las direcciones IP especificadas no coinciden con la versión de IP de subred " -#, python-format -msgid "Start IP (%(start)s) is greater than end IP (%(end)s)" -msgstr "" -"La IP de inicio (%(start)s) es mayor que la IP de finalización (%(end)s)" - msgid "Synchronizing state" msgstr "Sincronizando estado" +msgid "Synchronizing state complete" +msgstr "Sincronizando estado completado" + #, python-format msgid "" "Validation for CIDR: %(new_cidr)s failed - overlaps with subnet " diff --git a/neutron/locale/fr/LC_MESSAGES/neutron-log-info.po b/neutron/locale/fr/LC_MESSAGES/neutron-log-info.po index b84c632832b..ac27a19aa81 100644 --- a/neutron/locale/fr/LC_MESSAGES/neutron-log-info.po +++ b/neutron/locale/fr/LC_MESSAGES/neutron-log-info.po @@ -9,10 +9,11 @@ msgid "" msgstr "" "Project-Id-Version: Neutron\n" "Report-Msgid-Bugs-To: EMAIL@ADDRESS\n" -"POT-Creation-Date: 2015-07-11 06:09+0000\n" -"PO-Revision-Date: 2015-07-08 20:45+0000\n" +"POT-Creation-Date: 2015-07-27 06:07+0000\n" +"PO-Revision-Date: 2015-07-25 03:05+0000\n" "Last-Translator: openstackjenkins \n" -"Language-Team: French (http://www.transifex.com/p/neutron/language/fr/)\n" +"Language-Team: French (http://www.transifex.com/projects/p/neutron/language/" +"fr/)\n" "Language: 
fr\n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=UTF-8\n" @@ -44,12 +45,6 @@ msgstr "%(url)s a retourné une erreur : %(exception)s." msgid "%(url)s returned with HTTP %(status)d" msgstr "%(url)s retourné avec HTTP %(status)d" -msgid "APIC service agent started" -msgstr "service de l'agent APIC démarré" - -msgid "APIC service agent starting ..." -msgstr "Démarrage du service de l'agent APIC" - #, python-format msgid "Adding %s to list of bridges." msgstr "Ajout %s à la liste de ponts." @@ -331,12 +326,6 @@ msgid "Specified IP addresses do not match the subnet IP version" msgstr "" "Les adresses IP spécifiées ne correspondent à la version IP du sous-réseau" -#, python-format -msgid "Start IP (%(start)s) is greater than end IP (%(end)s)" -msgstr "" -"L'adresse IP de début (%(start)s) est supérieure à l'adresse IP de fin " -"(%(end)s)." - #, python-format msgid "Subnet %s was deleted concurrently" msgstr "Le sous-réseau %s a été effacé en même temps" diff --git a/neutron/locale/it/LC_MESSAGES/neutron-log-info.po b/neutron/locale/it/LC_MESSAGES/neutron-log-info.po index 951f981f86a..524d8a09b6f 100644 --- a/neutron/locale/it/LC_MESSAGES/neutron-log-info.po +++ b/neutron/locale/it/LC_MESSAGES/neutron-log-info.po @@ -8,10 +8,11 @@ msgid "" msgstr "" "Project-Id-Version: Neutron\n" "Report-Msgid-Bugs-To: EMAIL@ADDRESS\n" -"POT-Creation-Date: 2015-07-11 06:09+0000\n" -"PO-Revision-Date: 2015-07-08 20:45+0000\n" +"POT-Creation-Date: 2015-07-27 06:07+0000\n" +"PO-Revision-Date: 2015-07-25 03:05+0000\n" "Last-Translator: openstackjenkins \n" -"Language-Team: Italian (http://www.transifex.com/p/neutron/language/it/)\n" +"Language-Team: Italian (http://www.transifex.com/projects/p/neutron/language/" +"it/)\n" "Language: it\n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=UTF-8\n" @@ -196,10 +197,6 @@ msgstr "" "Gli indirizzi IP specificati non corrispondono alla versione IP della " "sottorete" -#, python-format -msgid "Start IP (%(start)s) is greater 
than end IP (%(end)s)" -msgstr "L'IP iniziale (%(start)s) è superiore all'IP finale (%(end)s)" - msgid "Synchronizing state" msgstr "Stato sincronizzazione" diff --git a/neutron/locale/ja/LC_MESSAGES/neutron-log-info.po b/neutron/locale/ja/LC_MESSAGES/neutron-log-info.po index 2dc3903baab..5754ade983e 100644 --- a/neutron/locale/ja/LC_MESSAGES/neutron-log-info.po +++ b/neutron/locale/ja/LC_MESSAGES/neutron-log-info.po @@ -8,10 +8,11 @@ msgid "" msgstr "" "Project-Id-Version: Neutron\n" "Report-Msgid-Bugs-To: EMAIL@ADDRESS\n" -"POT-Creation-Date: 2015-07-11 06:09+0000\n" -"PO-Revision-Date: 2015-07-08 20:45+0000\n" +"POT-Creation-Date: 2015-07-27 06:07+0000\n" +"PO-Revision-Date: 2015-07-25 03:05+0000\n" "Last-Translator: openstackjenkins \n" -"Language-Team: Japanese (http://www.transifex.com/p/neutron/language/ja/)\n" +"Language-Team: Japanese (http://www.transifex.com/projects/p/neutron/" +"language/ja/)\n" "Language: ja\n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=UTF-8\n" @@ -202,10 +203,6 @@ msgstr "ポート %s には IP が構成されていないため、このポー msgid "Specified IP addresses do not match the subnet IP version" msgstr "指定された IP アドレスが、サブネット IP バージョンと一致しません" -#, python-format -msgid "Start IP (%(start)s) is greater than end IP (%(end)s)" -msgstr "開始 IP (%(start)s) が終了 IP (%(end)s) より大きくなっています" - msgid "Synchronizing state" msgstr "状態の同期中" diff --git a/neutron/locale/ko_KR/LC_MESSAGES/neutron-log-info.po b/neutron/locale/ko_KR/LC_MESSAGES/neutron-log-info.po index 6dce4636df3..d1036121056 100644 --- a/neutron/locale/ko_KR/LC_MESSAGES/neutron-log-info.po +++ b/neutron/locale/ko_KR/LC_MESSAGES/neutron-log-info.po @@ -7,11 +7,11 @@ msgid "" msgstr "" "Project-Id-Version: Neutron\n" "Report-Msgid-Bugs-To: EMAIL@ADDRESS\n" -"POT-Creation-Date: 2015-07-11 06:09+0000\n" -"PO-Revision-Date: 2015-07-08 20:45+0000\n" +"POT-Creation-Date: 2015-07-27 06:07+0000\n" +"PO-Revision-Date: 2015-07-25 03:05+0000\n" "Last-Translator: openstackjenkins \n" -"Language-Team: Korean (Korea) 
(http://www.transifex.com/p/neutron/language/" -"ko_KR/)\n" +"Language-Team: Korean (Korea) (http://www.transifex.com/projects/p/neutron/" +"language/ko_KR/)\n" "Language: ko_KR\n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=UTF-8\n" @@ -220,10 +220,6 @@ msgstr "구성된 IP가 없어서 포트 %s을(를) 건너뜀" msgid "Specified IP addresses do not match the subnet IP version" msgstr "지정된 IP 주소가 서브넷 IP 버전과 일치하지 않음" -#, python-format -msgid "Start IP (%(start)s) is greater than end IP (%(end)s)" -msgstr "시작 IP(%(start)s)가 끝 IP(%(end)s)보다 큼" - msgid "Synchronizing state" msgstr "상태 동기화 중" diff --git a/neutron/locale/neutron-log-error.pot b/neutron/locale/neutron-log-error.pot index cb372da55d7..41d620810d5 100644 --- a/neutron/locale/neutron-log-error.pot +++ b/neutron/locale/neutron-log-error.pot @@ -6,9 +6,9 @@ #, fuzzy msgid "" msgstr "" -"Project-Id-Version: neutron 7.0.0.0b2.dev192\n" +"Project-Id-Version: neutron 7.0.0.0b2.dev396\n" "Report-Msgid-Bugs-To: EMAIL@ADDRESS\n" -"POT-Creation-Date: 2015-07-11 06:09+0000\n" +"POT-Creation-Date: 2015-07-27 06:07+0000\n" "PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n" "Last-Translator: FULL NAME \n" "Language-Team: LANGUAGE \n" @@ -17,16 +17,16 @@ msgstr "" "Content-Transfer-Encoding: 8bit\n" "Generated-By: Babel 1.3\n" -#: neutron/manager.py:135 +#: neutron/manager.py:136 msgid "Error, plugin is not set" msgstr "" -#: neutron/manager.py:146 +#: neutron/manager.py:147 #, python-format msgid "Error loading plugin by name, %s" msgstr "" -#: neutron/manager.py:147 +#: neutron/manager.py:148 #, python-format msgid "Error loading plugin by class, %s" msgstr "" @@ -36,47 +36,47 @@ msgstr "" msgid "Policy check error while calling %s!" msgstr "" -#: neutron/service.py:108 neutron/service.py:170 +#: neutron/service.py:105 neutron/service.py:167 msgid "Unrecoverable error: please check log for details." 
msgstr "" -#: neutron/service.py:148 +#: neutron/service.py:145 #, python-format msgid "'rpc_workers = %d' ignored because start_rpc_listeners is not implemented." msgstr "" -#: neutron/service.py:184 +#: neutron/service.py:181 msgid "No known API applications configured." msgstr "" -#: neutron/service.py:291 +#: neutron/service.py:286 msgid "Exception occurs when timer stops" msgstr "" -#: neutron/service.py:300 +#: neutron/service.py:295 msgid "Exception occurs when waiting for timer" msgstr "" -#: neutron/wsgi.py:159 +#: neutron/wsgi.py:160 #, python-format msgid "Unable to listen on %(host)s:%(port)s" msgstr "" -#: neutron/wsgi.py:800 +#: neutron/wsgi.py:803 #, python-format msgid "InvalidContentType: %s" msgstr "" -#: neutron/wsgi.py:804 +#: neutron/wsgi.py:807 #, python-format msgid "MalformedRequestBody: %s" msgstr "" -#: neutron/wsgi.py:813 +#: neutron/wsgi.py:816 msgid "Internal error" msgstr "" -#: neutron/agent/common/ovs_lib.py:225 neutron/agent/common/ovs_lib.py:320 +#: neutron/agent/common/ovs_lib.py:225 neutron/agent/common/ovs_lib.py:325 #, python-format msgid "Unable to execute %(cmd)s. Exception: %(exception)s" msgstr "" @@ -86,113 +86,113 @@ msgstr "" msgid "Timed out retrieving ofport on port %(pname)s. Exception: %(exception)s" msgstr "" -#: neutron/agent/common/ovs_lib.py:566 +#: neutron/agent/common/ovs_lib.py:575 #, python-format msgid "OVS flows could not be applied on bridge %s" msgstr "" -#: neutron/agent/dhcp/agent.py:137 +#: neutron/agent/common/utils.py:38 neutron/agent/l3/agent.py:228 +msgid "An interface driver must be specified" +msgstr "" + +#: neutron/agent/common/utils.py:43 +#, python-format +msgid "Error importing interface driver '%(driver)s': %(inner)s" +msgstr "" + +#: neutron/agent/dhcp/agent.py:136 #, python-format msgid "Unable to %(action)s dhcp for %(net_id)s." 
msgstr "" -#: neutron/agent/dhcp/agent.py:164 +#: neutron/agent/dhcp/agent.py:163 #, python-format msgid "Unable to sync network state on deleted network %s" msgstr "" -#: neutron/agent/dhcp/agent.py:177 +#: neutron/agent/dhcp/agent.py:176 msgid "Unable to sync network state." msgstr "" -#: neutron/agent/dhcp/agent.py:208 +#: neutron/agent/dhcp/agent.py:207 #, python-format msgid "Network %s info call failed." msgstr "" -#: neutron/agent/dhcp/agent.py:577 neutron/agent/l3/agent.py:640 +#: neutron/agent/dhcp/agent.py:576 neutron/agent/l3/agent.py:632 #: neutron/agent/metadata/agent.py:315 #: neutron/plugins/hyperv/agent/l2_agent.py:94 #: neutron/plugins/ibm/agent/sdnve_neutron_agent.py:109 -#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:814 +#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:807 #: neutron/plugins/ml2/drivers/mech_sriov/agent/sriov_nic_agent.py:130 -#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:311 +#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:314 #: neutron/services/metering/agents/metering_agent.py:283 msgid "Failed reporting state!" msgstr "" -#: neutron/agent/l3/agent.py:174 neutron/tests/unit/agent/l3/test_agent.py:1865 -#, python-format -msgid "Error importing interface driver '%s'" -msgstr "" - -#: neutron/agent/l3/agent.py:236 neutron/agent/linux/dhcp.py:923 -msgid "An interface driver must be specified" -msgstr "" - -#: neutron/agent/l3/agent.py:241 +#: neutron/agent/l3/agent.py:233 msgid "Router id is required if not using namespaces." msgstr "" -#: neutron/agent/l3/agent.py:248 +#: neutron/agent/l3/agent.py:240 #, python-format msgid "%s used in config as ipv6_gateway is not a valid IPv6 link-local address." 
msgstr "" -#: neutron/agent/l3/agent.py:333 +#: neutron/agent/l3/agent.py:325 #, python-format msgid "Error while deleting router %s" msgstr "" -#: neutron/agent/l3/agent.py:403 +#: neutron/agent/l3/agent.py:395 #, python-format msgid "The external network bridge '%s' does not exist" msgstr "" -#: neutron/agent/l3/agent.py:458 +#: neutron/agent/l3/agent.py:450 #, python-format msgid "Failed to fetch router information for '%s'" msgstr "" -#: neutron/agent/l3/agent.py:487 +#: neutron/agent/l3/agent.py:479 #, python-format msgid "Removing incompatible router '%s'" msgstr "" -#: neutron/agent/l3/agent.py:491 +#: neutron/agent/l3/agent.py:483 #, python-format msgid "Failed to process compatible router '%s'" msgstr "" -#: neutron/agent/l3/agent.py:543 +#: neutron/agent/l3/agent.py:535 msgid "Failed synchronizing routers due to RPC error" msgstr "" -#: neutron/agent/l3/dvr_local_router.py:188 +#: neutron/agent/l3/dvr_local_router.py:181 msgid "DVR: Failed updating arp entry" msgstr "" -#: neutron/agent/l3/dvr_local_router.py:213 -msgid "DVR: no map match_port found!" -msgstr "" - -#: neutron/agent/l3/dvr_local_router.py:281 +#: neutron/agent/l3/dvr_local_router.py:263 msgid "DVR: error adding redirection logic" msgstr "" -#: neutron/agent/l3/dvr_local_router.py:283 +#: neutron/agent/l3/dvr_local_router.py:265 msgid "DVR: removed snat failed" msgstr "" -#: neutron/agent/l3/dvr_local_router.py:435 +#: neutron/agent/l3/dvr_local_router.py:386 #, python-format msgid "No FloatingIP agent gateway port returned from server for 'network-id': %s" msgstr "" -#: neutron/agent/l3/dvr_local_router.py:440 +#: neutron/agent/l3/dvr_local_router.py:391 msgid "Missing subnet/agent_gateway_port" msgstr "" +#: neutron/agent/l3/dvr_router_base.py:42 +msgid "DVR: no map match_port found!" 
+msgstr "" + #: neutron/agent/l3/ha_router.py:74 #, python-format msgid "Error while writing HA state for %s" @@ -208,11 +208,11 @@ msgstr "" msgid "Failed to process or handle event for line %s" msgstr "" -#: neutron/agent/l3/namespace_manager.py:114 +#: neutron/agent/l3/namespace_manager.py:121 msgid "RuntimeError in obtaining namespace list for namespace cleanup." msgstr "" -#: neutron/agent/l3/namespace_manager.py:138 +#: neutron/agent/l3/namespace_manager.py:142 #, python-format msgid "Failed to destroy stale namespace %s" msgstr "" @@ -227,55 +227,50 @@ msgstr "" msgid "An error occurred while killing [%s]." msgstr "" -#: neutron/agent/linux/async_process.py:198 +#: neutron/agent/linux/async_process.py:201 #, python-format msgid "An error occurred while communicating with async process [%s]." msgstr "" -#: neutron/agent/linux/daemon.py:117 +#: neutron/agent/linux/daemon.py:127 #, python-format msgid "Error while handling pidfile: %s" msgstr "" -#: neutron/agent/linux/daemon.py:178 +#: neutron/agent/linux/daemon.py:190 msgid "Fork failed" msgstr "" -#: neutron/agent/linux/daemon.py:221 +#: neutron/agent/linux/daemon.py:243 #, python-format msgid "Pidfile %s already exist. Daemon already running?" msgstr "" -#: neutron/agent/linux/dhcp.py:929 -#, python-format -msgid "Error importing interface driver '%(driver)s': %(inner)s" -msgstr "" - -#: neutron/agent/linux/external_process.py:224 +#: neutron/agent/linux/external_process.py:225 #, python-format msgid "" "%(service)s for %(resource_type)s with uuid %(uuid)s not found. 
The " "process should not have died" msgstr "" -#: neutron/agent/linux/external_process.py:244 +#: neutron/agent/linux/external_process.py:245 #, python-format msgid "respawning %(service)s for uuid %(uuid)s" msgstr "" -#: neutron/agent/linux/external_process.py:250 +#: neutron/agent/linux/external_process.py:251 msgid "Exiting agent as programmed in check_child_processes_actions" msgstr "" -#: neutron/agent/linux/external_process.py:261 +#: neutron/agent/linux/external_process.py:262 #, python-format msgid "" "Exiting agent because of a malfunction with the %(service)s process " "identified by uuid %(uuid)s" msgstr "" -#: neutron/agent/linux/interface.py:290 neutron/agent/linux/interface.py:327 -#: neutron/agent/linux/interface.py:385 neutron/agent/linux/interface.py:421 +#: neutron/agent/linux/interface.py:265 neutron/agent/linux/interface.py:302 +#: neutron/agent/linux/interface.py:360 neutron/agent/linux/interface.py:396 #, python-format msgid "Failed unplugging interface '%s'" msgstr "" @@ -303,7 +298,7 @@ msgstr "" msgid "Exceeded %s second limit waiting for address to leave the tentative state." msgstr "" -#: neutron/agent/linux/ip_lib.py:799 +#: neutron/agent/linux/ip_lib.py:819 #, python-format msgid "Failed sending gratuitous ARP to %(addr)s on %(iface)s in namespace %(ns)s" msgstr "" @@ -341,7 +336,7 @@ msgstr "" msgid "Interface monitor is not active" msgstr "" -#: neutron/agent/linux/utils.py:225 +#: neutron/agent/linux/utils.py:220 #, python-format msgid "Unable to convert value in %s" msgstr "" @@ -385,31 +380,31 @@ msgstr "" msgid "Port %(port)s does not exist on %(bridge)s!" msgstr "" -#: neutron/agent/ovsdb/native/commands.py:386 +#: neutron/agent/ovsdb/native/commands.py:401 #, python-format msgid "" -"Row removed from DB during listing. Request info: Table=%(table)s. " +"Row doesn't exist in the DB. Request info: Table=%(table)s. " "Columns=%(columns)s. Records=%(records)s." 
msgstr "" -#: neutron/api/extensions.py:460 +#: neutron/api/extensions.py:457 #, python-format msgid "Error fetching extended attributes for extension '%s'" msgstr "" -#: neutron/api/extensions.py:469 +#: neutron/api/extensions.py:466 #, python-format msgid "" "It was impossible to process the following extensions: %s because of " "missing requirements." msgstr "" -#: neutron/api/extensions.py:485 +#: neutron/api/extensions.py:482 #, python-format msgid "Exception loading extension: %s" msgstr "" -#: neutron/api/extensions.py:505 +#: neutron/api/extensions.py:502 #, python-format msgid "Extension path '%s' doesn't exist!" msgstr "" @@ -439,8 +434,8 @@ msgstr "" msgid "Unable to undo add for %(resource)s %(id)s" msgstr "" -#: neutron/api/v2/resource.py:97 neutron/api/v2/resource.py:105 -#: neutron/api/v2/resource.py:125 +#: neutron/api/v2/resource.py:97 neutron/api/v2/resource.py:109 +#: neutron/api/v2/resource.py:129 #, python-format msgid "%s failed" msgstr "" @@ -465,80 +460,86 @@ msgstr "" msgid "Error unable to destroy namespace: %s" msgstr "" -#: neutron/cmd/sanity_check.py:51 +#: neutron/cmd/sanity_check.py:53 msgid "" "Check for Open vSwitch VXLAN support failed. Please ensure that the " "version of openvswitch being used has VXLAN support." msgstr "" -#: neutron/cmd/sanity_check.py:60 +#: neutron/cmd/sanity_check.py:62 msgid "" "Check for iproute2 VXLAN support failed. Please ensure that the iproute2 " "has VXLAN support." msgstr "" -#: neutron/cmd/sanity_check.py:68 +#: neutron/cmd/sanity_check.py:70 msgid "" "Check for Open vSwitch patch port support failed. Please ensure that the " "version of openvswitch being used has patch port support or disable " "features requiring patch ports (gre/vxlan, etc.)." msgstr "" -#: neutron/cmd/sanity_check.py:85 +#: neutron/cmd/sanity_check.py:87 msgid "" "The user that is executing neutron does not have permissions to read the " "namespaces. Enable the use_helper_for_ns_read configuration option." 
msgstr "" -#: neutron/cmd/sanity_check.py:102 +#: neutron/cmd/sanity_check.py:104 #, python-format msgid "" "The installed version of dnsmasq is too old. Please update to at least " "version %s." msgstr "" -#: neutron/cmd/sanity_check.py:111 +#: neutron/cmd/sanity_check.py:113 +msgid "" +"The installed version of keepalived does not support IPv6. Please update " +"to at least version 1.2.10 for IPv6 support." +msgstr "" + +#: neutron/cmd/sanity_check.py:122 msgid "" "Nova notifications are enabled, but novaclient is not installed. Either " "disable nova notifications or install python-novaclient." msgstr "" -#: neutron/cmd/sanity_check.py:120 +#: neutron/cmd/sanity_check.py:131 msgid "" "Check for Open vSwitch ARP responder support failed. Please ensure that " "the version of openvswitch being used has ARP flows support." msgstr "" -#: neutron/cmd/sanity_check.py:129 +#: neutron/cmd/sanity_check.py:140 msgid "" "Check for Open vSwitch support of ARP header matching failed. ARP " "spoofing suppression will not work. A newer version of OVS is required." msgstr "" -#: neutron/cmd/sanity_check.py:138 +#: neutron/cmd/sanity_check.py:149 msgid "" "Check for VF management support failed. Please ensure that the version of" " ip link being used has VF support." msgstr "" -#: neutron/cmd/sanity_check.py:148 +#: neutron/cmd/sanity_check.py:159 msgid "Check for native OVSDB support failed." msgstr "" -#: neutron/cmd/sanity_check.py:155 +#: neutron/cmd/sanity_check.py:166 msgid "Cannot run ebtables. Please ensure that it is installed." 
msgstr "" -#: neutron/cmd/sanity/checks.py:90 +#: neutron/cmd/sanity/checks.py:98 #, python-format msgid "Unexpected exception while checking supported feature via command: %s" msgstr "" -#: neutron/cmd/sanity/checks.py:130 +#: neutron/cmd/sanity/checks.py:138 msgid "Unexpected exception while checking supported ip link command" msgstr "" -#: neutron/cmd/sanity/checks.py:176 +#: neutron/cmd/sanity/checks.py:302 #, python-format msgid "" "Failed to import required modules. Ensure that the python-openvswitch " @@ -570,36 +571,56 @@ msgstr "" msgid "Exception encountered during network rescheduling" msgstr "" -#: neutron/db/db_base_plugin_v2.py:217 neutron/plugins/ml2/plugin.py:569 +#: neutron/db/db_base_plugin_v2.py:224 neutron/plugins/ml2/plugin.py:562 #, python-format msgid "An exception occurred while creating the %(resource)s:%(item)s" msgstr "" -#: neutron/db/db_base_plugin_v2.py:801 +#: neutron/db/db_base_plugin_v2.py:835 #, python-format msgid "Unable to generate mac address after %s attempts" msgstr "" -#: neutron/db/dvr_mac_db.py:98 +#: neutron/db/dvr_mac_db.py:105 #, python-format msgid "MAC generation error after %s attempts" msgstr "" -#: neutron/db/dvr_mac_db.py:170 +#: neutron/db/dvr_mac_db.py:177 #, python-format msgid "Could not retrieve gateway port for subnet %s" msgstr "" -#: neutron/db/l3_agentschedulers_db.py:118 +#: neutron/db/ipam_pluggable_backend.py:72 +#, python-format +msgid "IP deallocation failed on external system for %s" +msgstr "" + +#: neutron/db/ipam_pluggable_backend.py:134 +#, python-format +msgid "IP allocation failed on external system for %s" +msgstr "" + +#: neutron/db/ipam_pluggable_backend.py:365 +msgid "" +"An exception occurred during subnet update.Reverting allocation pool " +"changes" +msgstr "" + +#: neutron/db/l3_agentschedulers_db.py:119 #, python-format msgid "Failed to reschedule router %s" msgstr "" -#: neutron/db/l3_agentschedulers_db.py:123 +#: neutron/db/l3_agentschedulers_db.py:124 msgid "Exception encountered 
during router rescheduling." msgstr "" -#: neutron/db/l3_db.py:542 +#: neutron/db/l3_db.py:517 +msgid "Router port must have at least one fixed IP" +msgstr "" + +#: neutron/db/l3_db.py:546 msgid "Cannot have multiple IPv4 subnets on router port" msgstr "" @@ -613,11 +634,10 @@ msgstr "" msgid "No plugin for L3 routing registered to handle router scheduling" msgstr "" -#: neutron/ipam/drivers/neutrondb_ipam/driver.py:91 +#: neutron/ipam/drivers/neutrondb_ipam/driver.py:90 +#: neutron/ipam/drivers/neutrondb_ipam/driver.py:429 #, python-format -msgid "" -"Unable to retrieve IPAM subnet as the referenced Neutron subnet %s does " -"not exist" +msgid "IPAM subnet referenced to Neutron subnet %s does not exist" msgstr "" #: neutron/notifiers/nova.py:248 @@ -630,10 +650,10 @@ msgstr "" msgid "Error response returned from nova: %s" msgstr "" -#: neutron/plugins/brocade/NeutronPlugin.py:296 -#: neutron/plugins/brocade/NeutronPlugin.py:340 -#: neutron/plugins/brocade/NeutronPlugin.py:393 -#: neutron/plugins/brocade/NeutronPlugin.py:423 +#: neutron/plugins/brocade/NeutronPlugin.py:295 +#: neutron/plugins/brocade/NeutronPlugin.py:339 +#: neutron/plugins/brocade/NeutronPlugin.py:392 +#: neutron/plugins/brocade/NeutronPlugin.py:422 msgid "Brocade NOS driver error" msgstr "" @@ -721,13 +741,13 @@ msgid "" msgstr "" #: neutron/plugins/ibm/agent/sdnve_neutron_agent.py:256 -#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1701 +#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1714 #, python-format msgid "%s Agent terminated!" 
msgstr "" #: neutron/plugins/ml2/db.py:242 neutron/plugins/ml2/db.py:326 -#: neutron/plugins/ml2/plugin.py:1344 +#: neutron/plugins/ml2/plugin.py:1361 #, python-format msgid "Multiple ports have port_id starting with %s" msgstr "" @@ -786,102 +806,107 @@ msgstr "" msgid "Extension driver '%(name)s' failed in %(method)s" msgstr "" -#: neutron/plugins/ml2/plugin.py:287 +#: neutron/plugins/ml2/plugin.py:286 #, python-format msgid "Failed to commit binding results for %(port)s after %(max)s tries" msgstr "" -#: neutron/plugins/ml2/plugin.py:449 +#: neutron/plugins/ml2/plugin.py:442 #, python-format msgid "Serialized vif_details DB value '%(value)s' for port %(port)s is invalid" msgstr "" -#: neutron/plugins/ml2/plugin.py:460 +#: neutron/plugins/ml2/plugin.py:453 #, python-format msgid "Serialized profile DB value '%(value)s' for port %(port)s is invalid" msgstr "" -#: neutron/plugins/ml2/plugin.py:546 +#: neutron/plugins/ml2/plugin.py:539 #, python-format msgid "Could not find %s to delete." msgstr "" -#: neutron/plugins/ml2/plugin.py:549 +#: neutron/plugins/ml2/plugin.py:542 #, python-format msgid "Could not delete %(res)s %(id)s." msgstr "" -#: neutron/plugins/ml2/plugin.py:582 +#: neutron/plugins/ml2/plugin.py:575 #, python-format msgid "" "mechanism_manager.create_%(res)s_postcommit failed for %(res)s: " "'%(failed_id)s'. 
Deleting %(res)ss %(resource_ids)s" msgstr "" -#: neutron/plugins/ml2/plugin.py:628 +#: neutron/plugins/ml2/plugin.py:621 #, python-format msgid "mechanism_manager.create_network_postcommit failed, deleting network '%s'" msgstr "" -#: neutron/plugins/ml2/plugin.py:698 +#: neutron/plugins/ml2/plugin.py:691 #, python-format msgid "Exception auto-deleting port %s" msgstr "" -#: neutron/plugins/ml2/plugin.py:711 +#: neutron/plugins/ml2/plugin.py:704 #, python-format msgid "Exception auto-deleting subnet %s" msgstr "" -#: neutron/plugins/ml2/plugin.py:793 +#: neutron/plugins/ml2/plugin.py:785 msgid "mechanism_manager.delete_network_postcommit failed" msgstr "" -#: neutron/plugins/ml2/plugin.py:814 +#: neutron/plugins/ml2/plugin.py:806 #, python-format msgid "mechanism_manager.create_subnet_postcommit failed, deleting subnet '%s'" msgstr "" -#: neutron/plugins/ml2/plugin.py:937 +#: neutron/plugins/ml2/plugin.py:925 #, python-format msgid "Exception deleting fixed_ip from port %s" msgstr "" -#: neutron/plugins/ml2/plugin.py:946 +#: neutron/plugins/ml2/plugin.py:934 msgid "mechanism_manager.delete_subnet_postcommit failed" msgstr "" -#: neutron/plugins/ml2/plugin.py:1011 +#: neutron/plugins/ml2/plugin.py:999 #, python-format msgid "mechanism_manager.create_port_postcommit failed, deleting port '%s'" msgstr "" -#: neutron/plugins/ml2/plugin.py:1023 +#: neutron/plugins/ml2/plugin.py:1011 #, python-format msgid "_bind_port_if_needed failed, deleting port '%s'" msgstr "" -#: neutron/plugins/ml2/plugin.py:1054 +#: neutron/plugins/ml2/plugin.py:1042 #, python-format msgid "_bind_port_if_needed failed. 
Deleting all ports from create bulk '%s'" msgstr "" -#: neutron/plugins/ml2/plugin.py:1201 +#: neutron/plugins/ml2/plugin.py:1176 +#, python-format +msgid "mechanism_manager.update_port_postcommit failed for port %s" +msgstr "" + +#: neutron/plugins/ml2/plugin.py:1223 #, python-format msgid "No Host supplied to bind DVR Port %s" msgstr "" -#: neutron/plugins/ml2/plugin.py:1325 +#: neutron/plugins/ml2/plugin.py:1342 #, python-format msgid "mechanism_manager.delete_port_postcommit failed for port %s" msgstr "" -#: neutron/plugins/ml2/plugin.py:1357 +#: neutron/plugins/ml2/plugin.py:1374 #, python-format msgid "Binding info for DVR port %s not found" msgstr "" -#: neutron/plugins/ml2/drivers/type_gre.py:82 +#: neutron/plugins/ml2/drivers/type_gre.py:79 msgid "Failed to parse tunnel_id_ranges. Service terminated!" msgstr "" @@ -889,32 +914,10 @@ msgstr "" msgid "Failed to parse network_vlan_ranges. Service terminated!" msgstr "" -#: neutron/plugins/ml2/drivers/type_vxlan.py:85 +#: neutron/plugins/ml2/drivers/type_vxlan.py:83 msgid "Failed to parse vni_ranges. Service terminated!" 
msgstr "" -#: neutron/plugins/ml2/drivers/cisco/apic/apic_topology.py:110 -msgid "APIC service agent: failed in reporting state" -msgstr "" - -#: neutron/plugins/ml2/drivers/cisco/apic/apic_topology.py:195 -#, python-format -msgid "No such interface (ignored): %s" -msgstr "" - -#: neutron/plugins/ml2/drivers/cisco/apic/apic_topology.py:245 -msgid "APIC service agent: exception in LLDP parsing" -msgstr "" - -#: neutron/plugins/ml2/drivers/cisco/apic/apic_topology.py:300 -#, python-format -msgid "APIC service agent: can not get MACaddr for %s" -msgstr "" - -#: neutron/plugins/ml2/drivers/cisco/apic/apic_topology.py:316 -msgid "APIC host agent: failed in reporting state" -msgstr "" - #: neutron/plugins/ml2/drivers/cisco/n1kv/n1kv_ext_driver.py:76 #: neutron/plugins/ml2/drivers/cisco/n1kv/n1kv_ext_driver.py:83 #, python-format @@ -928,51 +931,51 @@ msgid "" "%(network)s" msgstr "" -#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:185 +#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:186 #, python-format msgid "Failed creating vxlan interface for %(segmentation_id)s" msgstr "" -#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:340 +#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:336 #, python-format msgid "Unable to add %(interface)s to %(bridge_name)s! 
Exception: %(e)s" msgstr "" -#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:353 +#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:349 #, python-format msgid "Unable to add vxlan interface for network %s" msgstr "" -#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:360 +#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:356 #, python-format msgid "No mapping for physical network %s" msgstr "" -#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:369 +#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:365 #, python-format msgid "Unknown network_type %(network_type)s for network %(network_id)s." msgstr "" -#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:462 +#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:456 #, python-format msgid "Cannot delete bridge %s, does not exist" msgstr "" -#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:541 +#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:534 msgid "No valid Segmentation ID to perform UCAST test." msgstr "" -#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:824 +#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:817 msgid "Unable to obtain MAC address for unique ID. Agent terminated!" msgstr "" -#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:1029 +#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:1022 #: neutron/plugins/ml2/drivers/mech_sriov/agent/sriov_nic_agent.py:271 #, python-format msgid "Error in agent loop. 
Devices info: %s" msgstr "" -#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:1057 +#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:1050 #: neutron/plugins/ml2/drivers/mlnx/agent/eswitch_neutron_agent.py:40 #, python-format msgid "Parsing physical_interface_mappings failed: %s. Agent terminated!" @@ -1035,106 +1038,106 @@ msgid "" "a different subnet %(orig_subnet)s" msgstr "" -#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:410 +#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:413 msgid "No tunnel_type specified, cannot create tunnels" msgstr "" -#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:413 -#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:436 +#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:416 +#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:439 #, python-format msgid "tunnel_type %s not supported by agent" msgstr "" -#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:429 +#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:432 msgid "No tunnel_ip specified, cannot delete tunnels" msgstr "" -#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:433 +#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:436 msgid "No tunnel_type specified, cannot delete tunnels" msgstr "" -#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:579 +#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:582 #, python-format msgid "No local VLAN available for net-id=%s" msgstr "" -#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:610 +#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:613 #, python-format msgid "" "Cannot provision %(network_type)s network for net-id=%(net_uuid)s - " "tunneling disabled" msgstr "" -#: 
neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:618 +#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:621 #, python-format msgid "" "Cannot provision flat network for net-id=%(net_uuid)s - no bridge for " "physical_network %(physical_network)s" msgstr "" -#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:628 +#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:631 #, python-format msgid "" "Cannot provision VLAN network for net-id=%(net_uuid)s - no bridge for " "physical_network %(physical_network)s" msgstr "" -#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:637 +#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:640 #, python-format msgid "" "Cannot provision unknown network type %(network_type)s for net-" "id=%(net_uuid)s" msgstr "" -#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:697 +#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:700 #, python-format msgid "" "Cannot reclaim unknown network type %(network_type)s for net-" "id=%(net_uuid)s" msgstr "" -#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:904 +#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:907 msgid "" "Failed to create OVS patch port. Cannot have tunneling enabled on this " "agent, since this version of OVS does not support tunnels or patch ports." " Agent terminated!" msgstr "" -#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:963 +#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:966 #, python-format msgid "" "Bridge %(bridge)s for physical network %(physical_network)s does not " "exist. Agent terminated!" 
msgstr "" -#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1152 +#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1155 #, python-format msgid "Failed to set-up %(type)s tunnel port to %(ip)s" msgstr "" -#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1344 +#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1347 #, python-format msgid "" "process_network_ports - iteration:%d - failure while retrieving port " "details from server" msgstr "" -#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1380 +#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1383 #, python-format msgid "" "process_ancillary_network_ports - iteration:%d - failure while retrieving" " port details from server" msgstr "" -#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1522 +#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1533 msgid "Error while synchronizing tunnels" msgstr "" -#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1598 +#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1600 msgid "Error while processing VIF ports" msgstr "" -#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1695 +#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1708 msgid "Agent failed to create agent config map" msgstr "" diff --git a/neutron/locale/neutron-log-info.pot b/neutron/locale/neutron-log-info.pot index 570bbd301a1..2549aefd3bf 100644 --- a/neutron/locale/neutron-log-info.pot +++ b/neutron/locale/neutron-log-info.pot @@ -6,9 +6,9 @@ #, fuzzy msgid "" msgstr "" -"Project-Id-Version: neutron 7.0.0.0b2.dev192\n" +"Project-Id-Version: neutron 7.0.0.0b2.dev396\n" "Report-Msgid-Bugs-To: EMAIL@ADDRESS\n" -"POT-Creation-Date: 2015-07-11 06:09+0000\n" +"POT-Creation-Date: 2015-07-27 06:07+0000\n" "PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n" "Last-Translator: FULL NAME \n" 
"Language-Team: LANGUAGE \n" @@ -17,17 +17,17 @@ msgstr "" "Content-Transfer-Encoding: 8bit\n" "Generated-By: Babel 1.3\n" -#: neutron/manager.py:117 +#: neutron/manager.py:118 #, python-format msgid "Loading core plugin: %s" msgstr "" -#: neutron/manager.py:165 +#: neutron/manager.py:166 #, python-format msgid "Service %s is supported by the core plugin" msgstr "" -#: neutron/manager.py:183 +#: neutron/manager.py:189 #, python-format msgid "Loading Plugin: %s" msgstr "" @@ -43,27 +43,27 @@ msgstr "" msgid "Loaded quota_driver: %s." msgstr "" -#: neutron/service.py:191 +#: neutron/service.py:186 #, python-format msgid "Neutron service started, listening on %(host)s:%(port)s" msgstr "" -#: neutron/wsgi.py:793 +#: neutron/wsgi.py:796 #, python-format msgid "%(method)s %(url)s" msgstr "" -#: neutron/wsgi.py:810 +#: neutron/wsgi.py:813 #, python-format msgid "HTTP exception thrown: %s" msgstr "" -#: neutron/wsgi.py:826 +#: neutron/wsgi.py:829 #, python-format msgid "%(url)s returned with HTTP %(status)d" msgstr "" -#: neutron/wsgi.py:829 +#: neutron/wsgi.py:832 #, python-format msgid "%(url)s returned a fault: %(exception)s" msgstr "" @@ -115,68 +115,68 @@ msgstr "" msgid "No ports here to refresh firewall" msgstr "" -#: neutron/agent/common/ovs_lib.py:423 neutron/agent/common/ovs_lib.py:456 +#: neutron/agent/common/ovs_lib.py:432 neutron/agent/common/ovs_lib.py:465 #, python-format msgid "Port %(port_id)s not present in bridge %(br_name)s" msgstr "" -#: neutron/agent/dhcp/agent.py:96 neutron/agent/dhcp/agent.py:589 +#: neutron/agent/dhcp/agent.py:95 neutron/agent/dhcp/agent.py:588 msgid "DHCP agent started" msgstr "" -#: neutron/agent/dhcp/agent.py:152 +#: neutron/agent/dhcp/agent.py:151 msgid "Synchronizing state" msgstr "" -#: neutron/agent/dhcp/agent.py:173 +#: neutron/agent/dhcp/agent.py:172 msgid "Synchronizing state complete" msgstr "" -#: neutron/agent/dhcp/agent.py:586 neutron/agent/l3/agent.py:654 +#: neutron/agent/dhcp/agent.py:585 
neutron/agent/l3/agent.py:646 #: neutron/services/metering/agents/metering_agent.py:286 #, python-format msgid "agent_updated by server side %s!" msgstr "" -#: neutron/agent/l3/agent.py:575 neutron/agent/l3/agent.py:644 +#: neutron/agent/l3/agent.py:567 neutron/agent/l3/agent.py:636 msgid "L3 agent started" msgstr "" -#: neutron/agent/l3/ha.py:113 +#: neutron/agent/l3/ha.py:114 #, python-format msgid "Router %(router_id)s transitioned to %(state)s" msgstr "" -#: neutron/agent/l3/ha.py:120 +#: neutron/agent/l3/ha.py:121 #, python-format msgid "" "Router %s is not managed by this agent. It was possibly deleted " "concurrently." msgstr "" -#: neutron/agent/linux/daemon.py:104 +#: neutron/agent/linux/daemon.py:114 #, python-format msgid "Process runs with uid/gid: %(uid)s/%(gid)s" msgstr "" -#: neutron/agent/linux/dhcp.py:793 +#: neutron/agent/linux/dhcp.py:802 #, python-format msgid "" "Cannot apply dhcp option %(opt)s because it's ip_version %(version)d is " "not in port's address IP versions" msgstr "" -#: neutron/agent/linux/interface.py:192 +#: neutron/agent/linux/interface.py:167 #, python-format msgid "Device %s already exists" msgstr "" -#: neutron/agent/linux/iptables_firewall.py:142 +#: neutron/agent/linux/iptables_firewall.py:140 #, python-format msgid "Attempted to update port filter which is not filtered %s" msgstr "" -#: neutron/agent/linux/iptables_firewall.py:153 +#: neutron/agent/linux/iptables_firewall.py:151 #, python-format msgid "Attempted to remove port filter which is not filtered %r" msgstr "" @@ -185,16 +185,16 @@ msgstr "" msgid "Initializing extension manager." 
msgstr "" -#: neutron/api/extensions.py:539 +#: neutron/api/extensions.py:536 #, python-format msgid "Loaded extension: %s" msgstr "" -#: neutron/api/v2/base.py:96 +#: neutron/api/v2/base.py:95 msgid "Allow sorting is enabled because native pagination requires native sorting" msgstr "" -#: neutron/api/v2/resource.py:94 +#: neutron/api/v2/resource.py:94 neutron/api/v2/resource.py:106 #, python-format msgid "%(action)s failed (client error): %(exc)s" msgstr "" @@ -234,9 +234,9 @@ msgstr "" #: neutron/cmd/eventlet/plugins/hyperv_neutron_agent.py:43 #: neutron/plugins/ibm/agent/sdnve_neutron_agent.py:262 -#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:1067 +#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:1060 #: neutron/plugins/ml2/drivers/mech_sriov/agent/sriov_nic_agent.py:346 -#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1607 +#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1611 msgid "Agent initialized successfully, now running... 
" msgstr "" @@ -288,51 +288,46 @@ msgstr "" msgid "Adding network %(net)s to agent %(agent)s on host %(host)s" msgstr "" -#: neutron/db/db_base_plugin_v2.py:625 neutron/plugins/ml2/plugin.py:894 +#: neutron/db/db_base_plugin_v2.py:656 neutron/plugins/ml2/plugin.py:882 #, python-format msgid "" "Found port (%(port_id)s, %(ip)s) having IP allocation on subnet " "%(subnet)s, cannot delete" msgstr "" -#: neutron/db/ipam_backend_mixin.py:208 +#: neutron/db/ipam_backend_mixin.py:63 +#, python-format +msgid "Found invalid IP address in pool: %(start)s - %(end)s:" +msgstr "" + +#: neutron/db/ipam_backend_mixin.py:227 #, python-format msgid "" "Validation for CIDR: %(new_cidr)s failed - overlaps with subnet " "%(subnet_id)s (CIDR: %(cidr)s)" msgstr "" -#: neutron/db/ipam_backend_mixin.py:246 -#, python-format -msgid "Found invalid IP address in pool: %(start)s - %(end)s:" -msgstr "" - -#: neutron/db/ipam_backend_mixin.py:253 +#: neutron/db/ipam_backend_mixin.py:265 msgid "Specified IP addresses do not match the subnet IP version" msgstr "" -#: neutron/db/ipam_backend_mixin.py:257 -#, python-format -msgid "Start IP (%(start)s) is greater than end IP (%(end)s)" -msgstr "" - -#: neutron/db/ipam_backend_mixin.py:262 +#: neutron/db/ipam_backend_mixin.py:269 #, python-format msgid "Found pool larger than subnet CIDR:%(start)s - %(end)s" msgstr "" -#: neutron/db/ipam_backend_mixin.py:286 +#: neutron/db/ipam_backend_mixin.py:290 #, python-format msgid "Found overlapping ranges: %(l_range)s and %(r_range)s" msgstr "" -#: neutron/db/l3_agentschedulers_db.py:79 +#: neutron/db/l3_agentschedulers_db.py:80 msgid "" "Skipping period L3 agent status check because automatic router " "rescheduling is disabled." 
msgstr "" -#: neutron/db/l3_db.py:1161 +#: neutron/db/l3_db.py:1190 #, python-format msgid "Skipping port %s as no IP is configure on it" msgstr "" @@ -342,12 +337,12 @@ msgstr "" msgid "Centralizing distributed router %s is not supported" msgstr "" -#: neutron/db/l3_dvr_db.py:550 +#: neutron/db/l3_dvr_db.py:558 #, python-format msgid "Agent Gateway port does not exist, so create one: %s" msgstr "" -#: neutron/db/l3_dvr_db.py:633 +#: neutron/db/l3_dvr_db.py:641 #, python-format msgid "SNAT interface port list does not exist, so create one: %s" msgstr "" @@ -384,7 +379,7 @@ msgstr "" msgid "Nova event response: %s" msgstr "" -#: neutron/plugins/brocade/NeutronPlugin.py:306 +#: neutron/plugins/brocade/NeutronPlugin.py:305 #, python-format msgid "Allocated vlan (%d) from the pool" msgstr "" @@ -573,41 +568,26 @@ msgstr "" msgid "Got %(alias)s extension from driver '%(drv)s'" msgstr "" -#: neutron/plugins/ml2/managers.py:806 -#, python-format -msgid "Extended network dict for driver '%(drv)s'" -msgstr "" - -#: neutron/plugins/ml2/managers.py:813 -#, python-format -msgid "Extended subnet dict for driver '%(drv)s'" -msgstr "" - -#: neutron/plugins/ml2/managers.py:820 -#, python-format -msgid "Extended port dict for driver '%(drv)s'" -msgstr "" - -#: neutron/plugins/ml2/plugin.py:142 +#: neutron/plugins/ml2/plugin.py:141 msgid "Modular L2 Plugin initialization complete" msgstr "" -#: neutron/plugins/ml2/plugin.py:293 +#: neutron/plugins/ml2/plugin.py:292 #, python-format msgid "Attempt %(count)s to bind port %(port)s" msgstr "" -#: neutron/plugins/ml2/plugin.py:695 +#: neutron/plugins/ml2/plugin.py:688 #, python-format msgid "Port %s was deleted concurrently" msgstr "" -#: neutron/plugins/ml2/plugin.py:707 +#: neutron/plugins/ml2/plugin.py:700 #, python-format msgid "Subnet %s was deleted concurrently" msgstr "" -#: neutron/plugins/ml2/plugin.py:1370 +#: neutron/plugins/ml2/plugin.py:1387 #, python-format msgid "" "Binding info for port %s was not found, it might have 
been deleted " @@ -631,7 +611,7 @@ msgstr "" msgid "ML2 LocalTypeDriver initialization complete" msgstr "" -#: neutron/plugins/ml2/drivers/type_tunnel.py:113 +#: neutron/plugins/ml2/drivers/type_tunnel.py:123 #, python-format msgid "%(type)s ID ranges: %(range)s" msgstr "" @@ -675,30 +655,12 @@ msgstr "" msgid "VM %s is not updated as it is not found in Arista DB" msgstr "" -#: neutron/plugins/ml2/drivers/cisco/apic/apic_topology.py:78 -msgid "APIC service agent starting ..." -msgstr "" - -#: neutron/plugins/ml2/drivers/cisco/apic/apic_topology.py:95 -msgid "APIC service agent started" -msgstr "" - -#: neutron/plugins/ml2/drivers/cisco/apic/apic_topology.py:179 -#, python-format -msgid "APIC host agent: agent starting on %s" -msgstr "" - -#: neutron/plugins/ml2/drivers/cisco/apic/apic_topology.py:199 -#, python-format -msgid "APIC host agent: started on %s" -msgstr "" - #: neutron/plugins/ml2/drivers/freescale/mechanism_fslsdn.py:40 msgid "Initializing CRD client... " msgstr "" #: neutron/plugins/ml2/drivers/linuxbridge/agent/arp_protect.py:32 -#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:781 +#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:784 #, python-format msgid "" "Skipping ARP spoofing rules for port '%s' because it has port security " @@ -710,54 +672,54 @@ msgstr "" msgid "Clearing orphaned ARP spoofing entries for devices %s" msgstr "" -#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:798 +#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:791 msgid "Stopping linuxbridge agent." 
msgstr "" -#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:828 +#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:821 #: neutron/plugins/ml2/drivers/mech_sriov/agent/sriov_nic_agent.py:100 #: neutron/plugins/oneconvergence/agent/nvsd_neutron_agent.py:89 #, python-format msgid "RPC agent_id: %s" msgstr "" -#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:895 +#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:888 #: neutron/plugins/ml2/drivers/mech_sriov/agent/sriov_nic_agent.py:210 -#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1223 +#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1226 #, python-format msgid "Port %(device)s updated. Details: %(details)s" msgstr "" -#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:933 +#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:926 #, python-format msgid "Device %s not defined on plugin" msgstr "" -#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:940 -#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1270 -#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1287 +#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:933 +#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1273 +#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1290 #, python-format msgid "Attachment %s removed" msgstr "" -#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:952 +#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:945 #: neutron/plugins/ml2/drivers/mech_sriov/agent/sriov_nic_agent.py:236 -#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1299 +#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1302 #, python-format 
msgid "Port %s updated." msgstr "" -#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:1010 +#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:1003 msgid "LinuxBridge Agent RPC Daemon Started!" msgstr "" -#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:1020 +#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:1013 #: neutron/plugins/ml2/drivers/mech_sriov/agent/sriov_nic_agent.py:252 -#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1490 +#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1500 msgid "Agent out of sync with plugin!" msgstr "" -#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:1060 +#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:1053 #: neutron/plugins/ml2/drivers/mlnx/agent/eswitch_neutron_agent.py:43 #, python-format msgid "Interface mappings: %s" @@ -801,62 +763,62 @@ msgstr "" msgid "L2 Agent operating in DVR Mode with MAC %s" msgstr "" -#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:588 +#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:591 #, python-format msgid "Assigning %(vlan_id)s as local vlan for net-id=%(net_uuid)s" msgstr "" -#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:652 +#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:655 #, python-format msgid "Reclaiming vlan = %(vlan_id)s from net-id = %(net_uuid)s" msgstr "" -#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:774 +#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:777 #, python-format msgid "Configuration for device %s completed." 
msgstr "" -#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:813 +#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:816 #, python-format msgid "port_unbound(): net_uuid %s not in local_vlan_map" msgstr "" -#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:879 +#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:882 #, python-format msgid "Adding %s to list of bridges." msgstr "" -#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:957 +#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:960 #, python-format msgid "Mapping physical network %(physical_network)s to bridge %(bridge)s" msgstr "" -#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1113 +#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1116 #, python-format msgid "Port '%(port_name)s' has lost its vlan tag '%(vlan_tag)d'!" msgstr "" -#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1217 +#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1220 #, python-format msgid "" "Port %s was not found on the integration bridge and will therefore not be" " processed" msgstr "" -#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1258 +#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1261 #, python-format msgid "Ancillary Port %s added" msgstr "" -#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1518 +#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1529 msgid "Agent tunnel out of sync with plugin!" msgstr "" -#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1617 +#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1630 msgid "Agent caught SIGTERM, quitting daemon loop." 
msgstr "" -#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1623 +#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1634 msgid "Agent caught SIGHUP, resetting." msgstr "" diff --git a/neutron/locale/neutron-log-warning.pot b/neutron/locale/neutron-log-warning.pot index c4fb722550c..34e1e7bf4e2 100644 --- a/neutron/locale/neutron-log-warning.pot +++ b/neutron/locale/neutron-log-warning.pot @@ -6,9 +6,9 @@ #, fuzzy msgid "" msgstr "" -"Project-Id-Version: neutron 7.0.0.0b2.dev192\n" +"Project-Id-Version: neutron 7.0.0.0b2.dev396\n" "Report-Msgid-Bugs-To: EMAIL@ADDRESS\n" -"POT-Creation-Date: 2015-07-11 06:09+0000\n" +"POT-Creation-Date: 2015-07-27 06:07+0000\n" "PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n" "Last-Translator: FULL NAME \n" "Language-Team: LANGUAGE \n" @@ -59,22 +59,22 @@ msgid "" "falling back to old security_group_rules_for_devices which scales worse." msgstr "" -#: neutron/agent/common/ovs_lib.py:373 +#: neutron/agent/common/ovs_lib.py:382 #, python-format msgid "Found not yet ready openvswitch port: %s" msgstr "" -#: neutron/agent/common/ovs_lib.py:376 +#: neutron/agent/common/ovs_lib.py:385 #, python-format msgid "Found failed openvswitch port: %s" msgstr "" -#: neutron/agent/common/ovs_lib.py:438 +#: neutron/agent/common/ovs_lib.py:447 #, python-format msgid "ofport: %(ofport)s for VIF: %(vif)s is not a positive integer" msgstr "" -#: neutron/agent/dhcp/agent.py:120 +#: neutron/agent/dhcp/agent.py:119 #, python-format msgid "" "Unable to %(action)s dhcp for %(net_id)s: there is a conflict with its " @@ -82,26 +82,26 @@ msgid "" "exist." msgstr "" -#: neutron/agent/dhcp/agent.py:135 neutron/agent/dhcp/agent.py:204 +#: neutron/agent/dhcp/agent.py:134 neutron/agent/dhcp/agent.py:203 #, python-format msgid "Network %s has been deleted." 
msgstr "" -#: neutron/agent/dhcp/agent.py:221 +#: neutron/agent/dhcp/agent.py:220 #, python-format msgid "" "Network %s may have been deleted and its resources may have already been " "disposed." msgstr "" -#: neutron/agent/dhcp/agent.py:370 +#: neutron/agent/dhcp/agent.py:369 #, python-format msgid "" "%(port_num)d router ports found on the metadata access network. Only the " "port %(port_id)s, for router %(router_id)s will be considered" msgstr "" -#: neutron/agent/dhcp/agent.py:571 neutron/agent/l3/agent.py:635 +#: neutron/agent/dhcp/agent.py:570 neutron/agent/l3/agent.py:627 #: neutron/agent/metadata/agent.py:310 #: neutron/services/metering/agents/metering_agent.py:278 msgid "" @@ -109,7 +109,7 @@ msgid "" " will be disabled." msgstr "" -#: neutron/agent/l3/agent.py:194 +#: neutron/agent/l3/agent.py:186 #, python-format msgid "" "l3-agent cannot check service plugins enabled at the neutron server when " @@ -118,19 +118,19 @@ msgid "" "warning. Detail message: %s" msgstr "" -#: neutron/agent/l3/agent.py:206 +#: neutron/agent/l3/agent.py:198 #, python-format msgid "" "l3-agent cannot check service plugins enabled on the neutron server. " "Retrying. Detail message: %s" msgstr "" -#: neutron/agent/l3/agent.py:341 +#: neutron/agent/l3/agent.py:333 #, python-format msgid "Info for router %s was not found. 
Performing router cleanup" msgstr "" -#: neutron/agent/l3/router_info.py:208 +#: neutron/agent/l3/router_info.py:191 #, python-format msgid "Unable to configure IP address for floating IP: %s" msgstr "" @@ -180,22 +180,22 @@ msgid "" "greater to 0" msgstr "" -#: neutron/api/extensions.py:521 +#: neutron/api/extensions.py:518 #, python-format msgid "Did not find expected name \"%(ext_name)s\" in %(file)s" msgstr "" -#: neutron/api/extensions.py:529 +#: neutron/api/extensions.py:526 #, python-format msgid "Extension file %(f)s wasn't loaded due to %(exception)s" msgstr "" -#: neutron/api/extensions.py:570 +#: neutron/api/extensions.py:567 #, python-format msgid "Extension %s not supported by any of loaded plugins" msgstr "" -#: neutron/api/extensions.py:582 +#: neutron/api/extensions.py:579 #, python-format msgid "Loaded plugins do not implement extension %s interface" msgstr "" @@ -238,7 +238,7 @@ msgid "" " end of the init process." msgstr "" -#: neutron/cmd/sanity_check.py:78 +#: neutron/cmd/sanity_check.py:80 msgid "" "The user that is executing neutron can read the namespaces without using " "the root_helper. Disable the use_helper_for_ns_read option to avoid a " @@ -274,7 +274,7 @@ msgid "" "not report to the server in the last %(dead_time)s seconds." 
msgstr "" -#: neutron/db/l3_agentschedulers_db.py:105 +#: neutron/db/l3_agentschedulers_db.py:106 #, python-format msgid "" "Rescheduling router %(router)s from agent %(agent)s because the agent did" @@ -349,18 +349,18 @@ msgstr "" msgid "Could not expand segment %s" msgstr "" -#: neutron/plugins/ml2/plugin.py:530 +#: neutron/plugins/ml2/plugin.py:523 #, python-format msgid "" "In _notify_port_updated(), no bound segment for port %(port_id)s on " "network %(network_id)s" msgstr "" -#: neutron/plugins/ml2/plugin.py:781 +#: neutron/plugins/ml2/plugin.py:773 msgid "A concurrent port creation has occurred" msgstr "" -#: neutron/plugins/ml2/plugin.py:1435 +#: neutron/plugins/ml2/plugin.py:1446 #, python-format msgid "Port %s not found during update" msgstr "" @@ -388,16 +388,12 @@ msgstr "" msgid "No flat network found on physical network %s" msgstr "" -#: neutron/plugins/ml2/drivers/type_gre.py:102 -msgid "Gre allocations were already created." -msgstr "" - -#: neutron/plugins/ml2/drivers/type_tunnel.py:179 +#: neutron/plugins/ml2/drivers/type_tunnel.py:225 #, python-format msgid "%(type)s tunnel %(id)s not found" msgstr "" -#: neutron/plugins/ml2/drivers/type_tunnel.py:236 +#: neutron/plugins/ml2/drivers/type_tunnel.py:282 #, python-format msgid "Endpoint with ip %s already exists" msgstr "" @@ -407,26 +403,6 @@ msgstr "" msgid "No vlan_id %(vlan_id)s found on physical network %(physical_network)s" msgstr "" -#: neutron/plugins/ml2/drivers/cisco/apic/apic_sync.py:67 -#, python-format -msgid "Create network postcommit failed for network %s" -msgstr "" - -#: neutron/plugins/ml2/drivers/cisco/apic/apic_sync.py:77 -#, python-format -msgid "Create subnet postcommit failed for subnet %s" -msgstr "" - -#: neutron/plugins/ml2/drivers/cisco/apic/apic_sync.py:91 -#, python-format -msgid "Create port postcommit failed for port %s" -msgstr "" - -#: neutron/plugins/ml2/drivers/cisco/apic/apic_sync.py:110 -#, python-format -msgid "Add interface postcommit failed for port %s" 
-msgstr "" - #: neutron/plugins/ml2/drivers/cisco/ucsm/mech_cisco_ucsm.py:78 msgid "update_port_precommit: vlan_id is None." msgstr "" @@ -453,36 +429,36 @@ msgstr "" msgid "Port %(port)s updated by agent %(agent)s isn't bound to any segment" msgstr "" -#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:90 +#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:91 msgid "VXLAN is enabled, a valid local_ip must be provided" msgstr "" -#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:104 +#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:105 msgid "Invalid Network ID, will lead to incorrect bridge name" msgstr "" -#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:111 +#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:112 msgid "Invalid VLAN ID, will lead to incorrect subinterface name" msgstr "" -#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:118 +#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:119 msgid "Invalid Interface ID, will lead to incorrect tap device name" msgstr "" -#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:127 +#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:128 #, python-format msgid "Invalid Segmentation ID: %s, will lead to incorrect vxlan device name" msgstr "" -#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:527 -#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:563 +#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:520 +#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:556 #, python-format msgid "" "Option \"%(option)s\" must be supported by command \"%(command)s\" to " "enable %(mode)s mode" msgstr "" -#: 
neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:557 +#: neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py:550 msgid "" "VXLAN muticast group must be provided in vxlan_group option to enable " "VXLAN MCAST mode" @@ -524,38 +500,38 @@ msgid "" "message: %s" msgstr "" -#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:531 +#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:534 #, python-format msgid "Action %s not supported" msgstr "" -#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:935 +#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:938 #, python-format msgid "" "Creating an interface named %(name)s exceeds the %(limit)d character " "limitation. It was shortened to %(new_name)s to fit." msgstr "" -#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1130 +#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1133 #, python-format msgid "VIF port: %s has no ofport configured, and might not be able to transmit" msgstr "" -#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1241 +#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1244 #, python-format msgid "Device %s not defined on plugin" msgstr "" -#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1401 +#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1404 #, python-format msgid "Invalid remote IP: %s" msgstr "" -#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1444 +#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1447 msgid "OVS is restarted. OVSNeutronAgent will reset bridges and recover ports." msgstr "" -#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1447 +#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1450 msgid "" "OVS is dead. 
OVSNeutronAgent will keep running and checking OVS status " "periodically." diff --git a/neutron/locale/neutron.pot b/neutron/locale/neutron.pot index 9112871d6d3..15fce8abdbc 100644 --- a/neutron/locale/neutron.pot +++ b/neutron/locale/neutron.pot @@ -6,9 +6,9 @@ #, fuzzy msgid "" msgstr "" -"Project-Id-Version: neutron 7.0.0.0b2.dev192\n" +"Project-Id-Version: neutron 7.0.0.0b2.dev396\n" "Report-Msgid-Bugs-To: EMAIL@ADDRESS\n" -"POT-Creation-Date: 2015-07-11 06:09+0000\n" +"POT-Creation-Date: 2015-07-27 06:07+0000\n" "PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n" "Last-Translator: FULL NAME \n" "Language-Team: LANGUAGE \n" @@ -17,20 +17,20 @@ msgstr "" "Content-Transfer-Encoding: 8bit\n" "Generated-By: Babel 1.3\n" -#: neutron/manager.py:76 +#: neutron/manager.py:77 #, python-format msgid "dhcp_agents_per_network must be >= 1. '%s' is invalid." msgstr "" -#: neutron/manager.py:88 +#: neutron/manager.py:89 msgid "Neutron core_plugin not configured!" msgstr "" -#: neutron/manager.py:136 neutron/manager.py:148 +#: neutron/manager.py:137 neutron/manager.py:149 msgid "Plugin not found." msgstr "" -#: neutron/manager.py:191 +#: neutron/manager.py:197 #, python-format msgid "Multiple plugins for service %s were configured" msgstr "" @@ -86,62 +86,62 @@ msgstr "" msgid "Access to this resource was denied." msgstr "" -#: neutron/service.py:42 +#: neutron/service.py:41 msgid "Seconds between running periodic tasks" msgstr "" -#: neutron/service.py:44 +#: neutron/service.py:43 msgid "" "Number of separate API worker processes for service. If not specified, " "the default is equal to the number of CPUs available for best " "performance." msgstr "" -#: neutron/service.py:49 +#: neutron/service.py:48 msgid "Number of RPC worker processes for service" msgstr "" -#: neutron/service.py:52 +#: neutron/service.py:51 msgid "" "Range of seconds to randomly delay when starting the periodic task " "scheduler to reduce stampeding. 
(Disable by setting to 0)" msgstr "" -#: neutron/wsgi.py:51 +#: neutron/wsgi.py:52 msgid "Number of backlog requests to configure the socket with" msgstr "" -#: neutron/wsgi.py:55 +#: neutron/wsgi.py:56 msgid "" "Sets the value of TCP_KEEPIDLE in seconds for each server socket. Not " "supported on OS X." msgstr "" -#: neutron/wsgi.py:59 +#: neutron/wsgi.py:60 msgid "Number of seconds to keep retrying to listen" msgstr "" -#: neutron/wsgi.py:62 +#: neutron/wsgi.py:63 msgid "Max header line to accommodate large tokens" msgstr "" -#: neutron/wsgi.py:65 +#: neutron/wsgi.py:66 msgid "Enable SSL on the API server" msgstr "" -#: neutron/wsgi.py:67 +#: neutron/wsgi.py:68 msgid "CA certificate file to use to verify connecting clients" msgstr "" -#: neutron/wsgi.py:70 +#: neutron/wsgi.py:71 msgid "Certificate file to use when starting the server securely" msgstr "" -#: neutron/wsgi.py:73 +#: neutron/wsgi.py:74 msgid "Private key file to use when starting the server securely" msgstr "" -#: neutron/wsgi.py:77 +#: neutron/wsgi.py:78 msgid "" "Determines if connections are allowed to be held open by clients after a " "request is fulfilled. A value of False will ensure that the socket " @@ -149,62 +149,62 @@ msgid "" " client." msgstr "" -#: neutron/wsgi.py:83 +#: neutron/wsgi.py:84 msgid "" "Timeout for client connections socket operations. If an incoming " "connection is idle for this number of seconds it will be closed. A value " "of '0' means wait forever." 
msgstr "" -#: neutron/wsgi.py:176 +#: neutron/wsgi.py:177 #, python-format msgid "Could not bind to %(host)s:%(port)s after trying for %(time)d seconds" msgstr "" -#: neutron/wsgi.py:196 +#: neutron/wsgi.py:197 #, python-format msgid "Unable to find ssl_cert_file : %s" msgstr "" -#: neutron/wsgi.py:202 +#: neutron/wsgi.py:203 #, python-format msgid "Unable to find ssl_key_file : %s" msgstr "" -#: neutron/wsgi.py:207 +#: neutron/wsgi.py:208 #, python-format msgid "Unable to find ssl_ca_file : %s" msgstr "" -#: neutron/wsgi.py:496 +#: neutron/wsgi.py:499 msgid "Cannot understand JSON" msgstr "" -#: neutron/wsgi.py:662 +#: neutron/wsgi.py:665 msgid "You must implement __call__" msgstr "" -#: neutron/wsgi.py:750 neutron/api/v2/base.py:198 neutron/api/v2/base.py:346 -#: neutron/api/v2/base.py:494 neutron/api/v2/base.py:556 +#: neutron/wsgi.py:753 neutron/api/v2/base.py:198 neutron/api/v2/base.py:346 +#: neutron/api/v2/base.py:495 neutron/api/v2/base.py:556 #: neutron/extensions/l3agentscheduler.py:51 #: neutron/extensions/l3agentscheduler.py:94 msgid "The resource could not be found." msgstr "" -#: neutron/wsgi.py:799 +#: neutron/wsgi.py:802 msgid "Unsupported Content-Type" msgstr "" -#: neutron/wsgi.py:803 +#: neutron/wsgi.py:806 msgid "Malformed request body" msgstr "" -#: neutron/wsgi.py:940 +#: neutron/wsgi.py:943 #, python-format msgid "The requested content type %s is invalid." 
msgstr "" -#: neutron/wsgi.py:993 +#: neutron/wsgi.py:996 msgid "Could not deserialize data" msgstr "" @@ -277,20 +277,20 @@ msgstr "" msgid "Timeout in seconds for ovs-vsctl commands" msgstr "" -#: neutron/agent/common/ovs_lib.py:474 +#: neutron/agent/common/ovs_lib.py:483 #, python-format msgid "Unable to determine mac address for %s" msgstr "" -#: neutron/agent/common/ovs_lib.py:582 +#: neutron/agent/common/ovs_lib.py:591 msgid "Cannot match priority on flow deletion or modification" msgstr "" -#: neutron/agent/common/ovs_lib.py:587 +#: neutron/agent/common/ovs_lib.py:596 msgid "Must specify one or more actions on flow addition or modification" msgstr "" -#: neutron/agent/dhcp/agent.py:584 +#: neutron/agent/dhcp/agent.py:583 #, python-format msgid "Agent updated: %(payload)s" msgstr "" @@ -347,7 +347,7 @@ msgstr "" msgid "Use broadcast in DHCP replies" msgstr "" -#: neutron/agent/l3/agent.py:280 +#: neutron/agent/l3/agent.py:272 msgid "" "The 'gateway_external_network_id' option must be configured for this " "agent as Neutron has more than one external network." @@ -419,26 +419,31 @@ msgid "" msgstr "" #: neutron/agent/l3/config.py:86 -msgid "Iptables mangle mark used to mark metadata valid requests" +msgid "" +"Iptables mangle mark used to mark metadata valid requests. This mark will" +" be masked with 0xffff so that only the lower 16 bits will be used." msgstr "" -#: neutron/agent/l3/config.py:90 -msgid "Iptables mangle mark used to mark ingress from external network" +#: neutron/agent/l3/config.py:91 +msgid "" +"Iptables mangle mark used to mark ingress from external network. This " +"mark will be masked with 0xffff so that only the lower 16 bits will be " +"used." 
msgstr "" -#: neutron/agent/l3/ha.py:35 +#: neutron/agent/l3/ha.py:36 msgid "Location to store keepalived/conntrackd config files" msgstr "" -#: neutron/agent/l3/ha.py:40 +#: neutron/agent/l3/ha.py:41 msgid "VRRP authentication type" msgstr "" -#: neutron/agent/l3/ha.py:42 +#: neutron/agent/l3/ha.py:43 msgid "VRRP authentication password" msgstr "" -#: neutron/agent/l3/ha.py:46 +#: neutron/agent/l3/ha.py:47 msgid "The advertisement interval in seconds" msgstr "" @@ -501,25 +506,25 @@ msgstr "" msgid "Process is not running." msgstr "" -#: neutron/agent/linux/daemon.py:44 +#: neutron/agent/linux/daemon.py:54 #, python-format msgid "Failed to set uid %s" msgstr "" -#: neutron/agent/linux/daemon.py:58 +#: neutron/agent/linux/daemon.py:68 #, python-format msgid "Failed to set gid %s" msgstr "" -#: neutron/agent/linux/daemon.py:88 +#: neutron/agent/linux/daemon.py:98 msgid "Root permissions are required to drop privileges." msgstr "" -#: neutron/agent/linux/daemon.py:96 +#: neutron/agent/linux/daemon.py:106 msgid "Failed to remove supplemental groups" msgstr "" -#: neutron/agent/linux/daemon.py:125 +#: neutron/agent/linux/daemon.py:135 msgid "Unable to unlock pid file" msgstr "" @@ -548,63 +553,22 @@ msgstr "" msgid "Unknown chain: %r" msgstr "" -#: neutron/agent/linux/external_process.py:37 +#: neutron/agent/linux/external_process.py:38 msgid "Location to store child pid files" msgstr "" -#: neutron/agent/linux/interface.py:38 +#: neutron/agent/linux/interface.py:36 msgid "Name of Open vSwitch bridge to use" msgstr "" -#: neutron/agent/linux/interface.py:41 +#: neutron/agent/linux/interface.py:39 msgid "Uses veth for an interface or not" msgstr "" -#: neutron/agent/linux/interface.py:43 +#: neutron/agent/linux/interface.py:41 msgid "MTU setting for device." msgstr "" -#: neutron/agent/linux/interface.py:45 -msgid "" -"Mapping between flavor and LinuxInterfaceDriver. 
It is specific to " -"MetaInterfaceDriver used with admin_user, admin_password, " -"admin_tenant_name, admin_url, auth_strategy, auth_region and " -"endpoint_type." -msgstr "" - -#: neutron/agent/linux/interface.py:51 -msgid "Admin username" -msgstr "" - -#: neutron/agent/linux/interface.py:53 neutron/agent/metadata/config.py:56 -#: neutron/plugins/metaplugin/common/config.py:65 -msgid "Admin password" -msgstr "" - -#: neutron/agent/linux/interface.py:56 neutron/agent/metadata/config.py:59 -#: neutron/plugins/metaplugin/common/config.py:68 -msgid "Admin tenant name" -msgstr "" - -#: neutron/agent/linux/interface.py:58 neutron/agent/metadata/config.py:61 -#: neutron/plugins/metaplugin/common/config.py:70 -msgid "Authentication URL" -msgstr "" - -#: neutron/agent/linux/interface.py:60 neutron/agent/metadata/config.py:63 -#: neutron/common/config.py:50 neutron/plugins/metaplugin/common/config.py:72 -msgid "The type of authentication to use" -msgstr "" - -#: neutron/agent/linux/interface.py:62 neutron/agent/metadata/config.py:65 -#: neutron/plugins/metaplugin/common/config.py:74 -msgid "Authentication region" -msgstr "" - -#: neutron/agent/linux/interface.py:65 neutron/agent/metadata/config.py:75 -msgid "Network service endpoint type to pull from the keystone catalog" -msgstr "" - #: neutron/agent/linux/ip_lib.py:34 msgid "Force ip_lib calls to use the root helper" msgstr "" @@ -624,19 +588,19 @@ msgstr "" msgid "ip link capability %(capability)s is not supported" msgstr "" -#: neutron/agent/linux/keepalived.py:52 +#: neutron/agent/linux/keepalived.py:54 #, python-format msgid "" "Network of size %(size)s, from IP range %(parent_range)s excluding IP " "ranges %(excluded_ranges)s was not found." 
msgstr "" -#: neutron/agent/linux/keepalived.py:61 +#: neutron/agent/linux/keepalived.py:63 #, python-format msgid "Invalid instance state: %(state)s, valid states are: %(valid_states)s" msgstr "" -#: neutron/agent/linux/keepalived.py:71 +#: neutron/agent/linux/keepalived.py:73 #, python-format msgid "" "Invalid authentication type: %(auth_type)s, valid types are: " @@ -706,10 +670,29 @@ msgid "" msgstr "" #: neutron/agent/metadata/config.py:54 -#: neutron/plugins/metaplugin/common/config.py:63 msgid "Admin user" msgstr "" +#: neutron/agent/metadata/config.py:56 +msgid "Admin password" +msgstr "" + +#: neutron/agent/metadata/config.py:59 +msgid "Admin tenant name" +msgstr "" + +#: neutron/agent/metadata/config.py:61 +msgid "Authentication URL" +msgstr "" + +#: neutron/agent/metadata/config.py:63 neutron/common/config.py:50 +msgid "The type of authentication to use" +msgstr "" + +#: neutron/agent/metadata/config.py:65 +msgid "Authentication region" +msgstr "" + #: neutron/agent/metadata/config.py:68 msgid "Turn off verification of the certificate for ssl" msgstr "" @@ -718,6 +701,10 @@ msgstr "" msgid "Certificate Authority public key (CA cert) file for ssl" msgstr "" +#: neutron/agent/metadata/config.py:75 +msgid "Network service endpoint type to pull from the keystone catalog" +msgstr "" + #: neutron/agent/metadata/config.py:78 msgid "IP address used by Nova metadata server." 
msgstr "" @@ -748,7 +735,7 @@ msgstr "" #: neutron/agent/metadata/config.py:112 msgid "" -"Metadata Proxy UNIX domain socket mode, 3 values allowed: 'deduce': " +"Metadata Proxy UNIX domain socket mode, 4 values allowed: 'deduce': " "deduce mode from metadata_proxy_user/group values, 'user': set metadata " "proxy socket mode to 0o644, to use when metadata_proxy_user is agent " "effective user or root, 'group': set metadata proxy socket mode to 0o664," @@ -860,7 +847,7 @@ msgid "" " and '%(desc)s'" msgstr "" -#: neutron/api/api_common.py:318 neutron/api/v2/base.py:627 +#: neutron/api/api_common.py:318 neutron/api/v2/base.py:626 #, python-format msgid "Unable to find '%s' in request body" msgstr "" @@ -1066,7 +1053,7 @@ msgstr "" msgid "'%s' is not of the form =[value]" msgstr "" -#: neutron/api/v2/base.py:93 +#: neutron/api/v2/base.py:92 msgid "Native pagination depend on native sorting" msgstr "" @@ -1075,64 +1062,64 @@ msgstr "" msgid "Invalid format: %s" msgstr "" -#: neutron/api/v2/base.py:579 +#: neutron/api/v2/base.py:578 msgid "" "Specifying 'tenant_id' other than authenticated tenant in request " "requires admin privileges" msgstr "" -#: neutron/api/v2/base.py:587 -msgid "Running without keystone AuthN requires that tenant_id is specified" +#: neutron/api/v2/base.py:586 +msgid "Running without keystone AuthN requires that tenant_id is specified" msgstr "" -#: neutron/api/v2/base.py:605 +#: neutron/api/v2/base.py:604 msgid "Resource body required" msgstr "" -#: neutron/api/v2/base.py:611 +#: neutron/api/v2/base.py:610 msgid "Bulk operation not supported" msgstr "" -#: neutron/api/v2/base.py:614 +#: neutron/api/v2/base.py:613 msgid "Resources required" msgstr "" -#: neutron/api/v2/base.py:624 +#: neutron/api/v2/base.py:623 msgid "Body contains invalid data" msgstr "" -#: neutron/api/v2/base.py:638 +#: neutron/api/v2/base.py:637 #, python-format msgid "Failed to parse request. 
Required attribute '%s' not specified" msgstr "" -#: neutron/api/v2/base.py:645 +#: neutron/api/v2/base.py:644 #, python-format msgid "Attribute '%s' not allowed in POST" msgstr "" -#: neutron/api/v2/base.py:650 +#: neutron/api/v2/base.py:649 #, python-format msgid "Cannot update read-only attribute %s" msgstr "" -#: neutron/api/v2/base.py:668 +#: neutron/api/v2/base.py:667 #, python-format msgid "Invalid input for %(attr)s. Reason: %(reason)s." msgstr "" -#: neutron/api/v2/base.py:677 neutron/extensions/allowedaddresspairs.py:76 +#: neutron/api/v2/base.py:676 neutron/extensions/allowedaddresspairs.py:76 #: neutron/extensions/multiprovidernet.py:45 #, python-format msgid "Unrecognized attribute(s) '%s'" msgstr "" -#: neutron/api/v2/base.py:696 +#: neutron/api/v2/base.py:695 #, python-format msgid "Tenant %(tenant_id)s not allowed to create %(resource)s on this network" msgstr "" -#: neutron/api/v2/resource.py:127 +#: neutron/api/v2/resource.py:131 #: neutron/tests/unit/api/v2/test_resource.py:248 msgid "Request Failed: internal server error while processing your request." msgstr "" @@ -1164,50 +1151,54 @@ msgid "" "ports created by Neutron on integration and external network bridges." 
msgstr "" -#: neutron/cmd/sanity_check.py:163 +#: neutron/cmd/sanity_check.py:174 msgid "Check for OVS vxlan support" msgstr "" -#: neutron/cmd/sanity_check.py:165 +#: neutron/cmd/sanity_check.py:176 msgid "Check for iproute2 vxlan support" msgstr "" -#: neutron/cmd/sanity_check.py:167 +#: neutron/cmd/sanity_check.py:178 msgid "Check for patch port support" msgstr "" -#: neutron/cmd/sanity_check.py:169 +#: neutron/cmd/sanity_check.py:180 msgid "Check for nova notification support" msgstr "" -#: neutron/cmd/sanity_check.py:171 +#: neutron/cmd/sanity_check.py:182 msgid "Check for ARP responder support" msgstr "" -#: neutron/cmd/sanity_check.py:173 +#: neutron/cmd/sanity_check.py:184 msgid "Check for ARP header match support" msgstr "" -#: neutron/cmd/sanity_check.py:175 +#: neutron/cmd/sanity_check.py:186 msgid "Check for VF management support" msgstr "" -#: neutron/cmd/sanity_check.py:177 +#: neutron/cmd/sanity_check.py:188 msgid "Check netns permission settings" msgstr "" -#: neutron/cmd/sanity_check.py:179 +#: neutron/cmd/sanity_check.py:190 msgid "Check minimal dnsmasq version" msgstr "" -#: neutron/cmd/sanity_check.py:181 +#: neutron/cmd/sanity_check.py:192 msgid "Check ovsdb native interface support" msgstr "" -#: neutron/cmd/sanity_check.py:183 +#: neutron/cmd/sanity_check.py:194 msgid "Check ebtables installation" msgstr "" +#: neutron/cmd/sanity_check.py:196 +msgid "Check keepalived IPv6 support" +msgstr "" + #: neutron/common/config.py:42 msgid "The host IP to bind to" msgstr "" @@ -1228,7 +1219,7 @@ msgstr "" msgid "The core plugin Neutron will use" msgstr "" -#: neutron/common/config.py:54 neutron/db/migration/cli.py:40 +#: neutron/common/config.py:54 neutron/db/migration/cli.py:46 msgid "The service plugins Neutron will use" msgstr "" @@ -1416,423 +1407,428 @@ msgstr "" #: neutron/common/exceptions.py:73 #, python-format -msgid "User does not have admin privileges: %(reason)s" +msgid "Not supported: %(msg)s" msgstr "" #: neutron/common/exceptions.py:77 
#, python-format -msgid "Network %(net_id)s could not be found" +msgid "User does not have admin privileges: %(reason)s" msgstr "" #: neutron/common/exceptions.py:81 #, python-format -msgid "Subnet %(subnet_id)s could not be found" +msgid "Network %(net_id)s could not be found" msgstr "" #: neutron/common/exceptions.py:85 #, python-format -msgid "Subnet pool %(subnetpool_id)s could not be found" +msgid "Subnet %(subnet_id)s could not be found" msgstr "" #: neutron/common/exceptions.py:89 #, python-format -msgid "Port %(port_id)s could not be found" +msgid "Subnet pool %(subnetpool_id)s could not be found" msgstr "" #: neutron/common/exceptions.py:93 #, python-format +msgid "Port %(port_id)s could not be found" +msgstr "" + +#: neutron/common/exceptions.py:97 +#, python-format msgid "Port %(port_id)s could not be found on network %(net_id)s" msgstr "" -#: neutron/common/exceptions.py:98 -msgid "Policy configuration policy.json could not be found" -msgstr "" - #: neutron/common/exceptions.py:102 -#, python-format -msgid "Failed to init policy %(policy)s because %(reason)s" +msgid "Policy configuration policy.json could not be found" msgstr "" #: neutron/common/exceptions.py:106 #, python-format -msgid "Failed to check policy %(policy)s because %(reason)s" +msgid "Failed to init policy %(policy)s because %(reason)s" msgstr "" #: neutron/common/exceptions.py:110 #, python-format -msgid "Unsupported port state: %(port_state)s" +msgid "Failed to check policy %(policy)s because %(reason)s" msgstr "" #: neutron/common/exceptions.py:114 -msgid "The resource is inuse" +#, python-format +msgid "Unsupported port state: %(port_state)s" msgstr "" #: neutron/common/exceptions.py:118 +msgid "The resource is inuse" +msgstr "" + +#: neutron/common/exceptions.py:122 #, python-format msgid "" "Unable to complete operation on network %(net_id)s. There are one or more" " ports still in use on the network." 
msgstr "" -#: neutron/common/exceptions.py:123 +#: neutron/common/exceptions.py:127 #, python-format msgid "Unable to complete operation on subnet %(subnet_id)s. %(reason)s" msgstr "" -#: neutron/common/exceptions.py:128 +#: neutron/common/exceptions.py:132 msgid "One or more ports have an IP allocation from this subnet." msgstr "" -#: neutron/common/exceptions.py:134 +#: neutron/common/exceptions.py:138 #, python-format msgid "" "Unable to complete operation on port %(port_id)s for network %(net_id)s. " "Port already has an attached device %(device_id)s." msgstr "" -#: neutron/common/exceptions.py:140 +#: neutron/common/exceptions.py:144 #, python-format msgid "Port %(port_id)s cannot be deleted directly via the port API: %(reason)s" msgstr "" -#: neutron/common/exceptions.py:145 +#: neutron/common/exceptions.py:149 #, python-format msgid "" "Unable to complete operation on port %(port_id)s, port is already bound, " "port type: %(vif_type)s, old_mac %(old_mac)s, new_mac %(new_mac)s" msgstr "" -#: neutron/common/exceptions.py:151 +#: neutron/common/exceptions.py:155 #, python-format msgid "" "Unable to complete operation for network %(net_id)s. The mac address " "%(mac)s is in use." msgstr "" -#: neutron/common/exceptions.py:157 +#: neutron/common/exceptions.py:161 #, python-format msgid "" "Unable to complete operation for %(subnet_id)s. The number of host routes" " exceeds the limit %(quota)s." msgstr "" -#: neutron/common/exceptions.py:163 +#: neutron/common/exceptions.py:167 #, python-format msgid "" "Unable to complete operation for %(subnet_id)s. The number of DNS " "nameservers exceeds the limit %(quota)s." msgstr "" -#: neutron/common/exceptions.py:168 +#: neutron/common/exceptions.py:172 #, python-format msgid "" "IP address %(ip_address)s is not a valid IP for any of the subnets on the" " specified network." 
msgstr "" -#: neutron/common/exceptions.py:173 +#: neutron/common/exceptions.py:177 #, python-format msgid "IP address %(ip_address)s is not a valid IP for the specified subnet." msgstr "" -#: neutron/common/exceptions.py:178 +#: neutron/common/exceptions.py:182 #, python-format msgid "" "Unable to complete operation for network %(net_id)s. The IP address " "%(ip_address)s is in use." msgstr "" -#: neutron/common/exceptions.py:183 +#: neutron/common/exceptions.py:187 #, python-format msgid "" "Unable to create the network. The VLAN %(vlan_id)s on physical network " "%(physical_network)s is in use." msgstr "" -#: neutron/common/exceptions.py:189 +#: neutron/common/exceptions.py:193 #, python-format msgid "" "Unable to create the flat network. Physical network %(physical_network)s " "is in use." msgstr "" -#: neutron/common/exceptions.py:194 +#: neutron/common/exceptions.py:198 #, python-format msgid "Unable to create the network. The tunnel ID %(tunnel_id)s is in use." msgstr "" -#: neutron/common/exceptions.py:199 +#: neutron/common/exceptions.py:203 msgid "Tenant network creation is not enabled." msgstr "" -#: neutron/common/exceptions.py:207 +#: neutron/common/exceptions.py:211 msgid "" "Unable to create the network. No tenant network is available for " "allocation." msgstr "" -#: neutron/common/exceptions.py:212 +#: neutron/common/exceptions.py:216 msgid "" "Unable to create the network. No available network found in maximum " "allowed attempts." msgstr "" -#: neutron/common/exceptions.py:217 +#: neutron/common/exceptions.py:221 #, python-format msgid "" "Subnet on port %(port_id)s does not match the requested subnet " "%(subnet_id)s" msgstr "" -#: neutron/common/exceptions.py:222 +#: neutron/common/exceptions.py:226 #, python-format msgid "Malformed request body: %(reason)s" msgstr "" -#: neutron/common/exceptions.py:232 +#: neutron/common/exceptions.py:236 #, python-format msgid "Invalid input for operation: %(error_message)s." 
msgstr "" -#: neutron/common/exceptions.py:236 +#: neutron/common/exceptions.py:240 #, python-format msgid "The allocation pool %(pool)s is not valid." msgstr "" -#: neutron/common/exceptions.py:240 +#: neutron/common/exceptions.py:244 #, python-format msgid "" "Operation %(op)s is not supported for device_owner %(device_owner)s on " "port %(port_id)s." msgstr "" -#: neutron/common/exceptions.py:245 +#: neutron/common/exceptions.py:249 #, python-format msgid "" "Found overlapping allocation pools: %(pool_1)s %(pool_2)s for subnet " "%(subnet_cidr)s." msgstr "" -#: neutron/common/exceptions.py:250 +#: neutron/common/exceptions.py:254 #, python-format msgid "The allocation pool %(pool)s spans beyond the subnet cidr %(subnet_cidr)s." msgstr "" -#: neutron/common/exceptions.py:255 +#: neutron/common/exceptions.py:259 #, python-format msgid "Unable to generate unique mac on network %(net_id)s." msgstr "" -#: neutron/common/exceptions.py:259 +#: neutron/common/exceptions.py:263 #, python-format msgid "No more IP addresses available on network %(net_id)s." msgstr "" -#: neutron/common/exceptions.py:263 +#: neutron/common/exceptions.py:267 #, python-format msgid "Bridge %(bridge)s does not exist." msgstr "" -#: neutron/common/exceptions.py:267 +#: neutron/common/exceptions.py:271 #, python-format msgid "Creation failed. %(dev_name)s already exists." msgstr "" -#: neutron/common/exceptions.py:271 +#: neutron/common/exceptions.py:275 #, python-format msgid "Unknown quota resources %(unknown)s." 
msgstr "" -#: neutron/common/exceptions.py:275 +#: neutron/common/exceptions.py:279 #, python-format msgid "Quota exceeded for resources: %(overs)s" msgstr "" -#: neutron/common/exceptions.py:279 +#: neutron/common/exceptions.py:283 msgid "Tenant-id was missing from Quota request" msgstr "" -#: neutron/common/exceptions.py:283 +#: neutron/common/exceptions.py:287 #, python-format msgid "" "Change would make usage less than 0 for the following resources: " "%(unders)s" msgstr "" -#: neutron/common/exceptions.py:288 +#: neutron/common/exceptions.py:292 #, python-format msgid "" "Unable to reconfigure sharing settings for network %(network)s. Multiple " "tenants are using it" msgstr "" -#: neutron/common/exceptions.py:293 +#: neutron/common/exceptions.py:297 #, python-format msgid "Invalid extension environment: %(reason)s" msgstr "" -#: neutron/common/exceptions.py:297 +#: neutron/common/exceptions.py:301 #, python-format msgid "Extensions not found: %(extensions)s" msgstr "" -#: neutron/common/exceptions.py:301 +#: neutron/common/exceptions.py:305 #, python-format msgid "Invalid content type %(content_type)s" msgstr "" -#: neutron/common/exceptions.py:305 +#: neutron/common/exceptions.py:309 #, python-format msgid "Unable to find any IP address on external network %(net_id)s." msgstr "" -#: neutron/common/exceptions.py:310 +#: neutron/common/exceptions.py:314 msgid "More than one external network exists" msgstr "" -#: neutron/common/exceptions.py:314 +#: neutron/common/exceptions.py:318 #, python-format msgid "An invalid value was provided for %(opt_name)s: %(opt_value)s" msgstr "" -#: neutron/common/exceptions.py:319 +#: neutron/common/exceptions.py:323 #, python-format msgid "Gateway ip %(ip_address)s conflicts with allocation pool %(pool)s" msgstr "" -#: neutron/common/exceptions.py:324 +#: neutron/common/exceptions.py:328 #, python-format msgid "" "Current gateway ip %(ip_address)s already in use by port %(port_id)s. " "Unable to update." 
msgstr "" -#: neutron/common/exceptions.py:329 +#: neutron/common/exceptions.py:333 #, python-format msgid "Invalid network VLAN range: '%(vlan_range)s' - '%(error)s'" msgstr "" -#: neutron/common/exceptions.py:339 +#: neutron/common/exceptions.py:343 msgid "Empty physical network name." msgstr "" -#: neutron/common/exceptions.py:343 +#: neutron/common/exceptions.py:347 #, python-format msgid "Invalid network Tunnel range: '%(tunnel_range)s' - %(error)s" msgstr "" -#: neutron/common/exceptions.py:354 +#: neutron/common/exceptions.py:358 #, python-format msgid "Invalid network VXLAN port range: '%(vxlan_range)s'" msgstr "" -#: neutron/common/exceptions.py:358 +#: neutron/common/exceptions.py:362 msgid "VXLAN Network unsupported." msgstr "" -#: neutron/common/exceptions.py:362 +#: neutron/common/exceptions.py:366 #, python-format msgid "Found duplicate extension: %(alias)s" msgstr "" -#: neutron/common/exceptions.py:366 +#: neutron/common/exceptions.py:370 #, python-format msgid "" "The following device_id %(device_id)s is not owned by your tenant or " "matches another tenants router." msgstr "" -#: neutron/common/exceptions.py:371 +#: neutron/common/exceptions.py:375 #, python-format msgid "Invalid CIDR %(input)s given as IP prefix" msgstr "" -#: neutron/common/exceptions.py:375 +#: neutron/common/exceptions.py:379 #, python-format msgid "Router '%(router_id)s' is not compatible with this agent" msgstr "" -#: neutron/common/exceptions.py:379 +#: neutron/common/exceptions.py:383 #, python-format msgid "Router '%(router_id)s' cannot be both DVR and HA" msgstr "" -#: neutron/common/exceptions.py:400 +#: neutron/common/exceptions.py:404 msgid "network_id and router_id are None. One must be provided." 
msgstr "" -#: neutron/common/exceptions.py:404 +#: neutron/common/exceptions.py:408 msgid "Aborting periodic_sync_routers_task due to an error" msgstr "" -#: neutron/common/exceptions.py:416 +#: neutron/common/exceptions.py:420 #, python-format msgid "%(driver)s: Internal driver error." msgstr "" -#: neutron/common/exceptions.py:420 +#: neutron/common/exceptions.py:424 msgid "Unspecified minimum subnet pool prefix" msgstr "" -#: neutron/common/exceptions.py:424 +#: neutron/common/exceptions.py:428 msgid "Empty subnet pool prefix list" msgstr "" -#: neutron/common/exceptions.py:428 +#: neutron/common/exceptions.py:432 msgid "Cannot mix IPv4 and IPv6 prefixes in a subnet pool" msgstr "" -#: neutron/common/exceptions.py:432 +#: neutron/common/exceptions.py:436 #, python-format msgid "Prefix '%(prefix)s' not supported in IPv%(version)s pool" msgstr "" -#: neutron/common/exceptions.py:436 +#: neutron/common/exceptions.py:440 #, python-format msgid "" "Illegal prefix bounds: %(prefix_type)s=%(prefixlen)s, " "%(base_prefix_type)s=%(base_prefixlen)s" msgstr "" -#: neutron/common/exceptions.py:441 +#: neutron/common/exceptions.py:445 #, python-format msgid "Illegal update to prefixes: %(msg)s" msgstr "" -#: neutron/common/exceptions.py:445 +#: neutron/common/exceptions.py:449 #, python-format msgid "Failed to allocate subnet: %(reason)s" msgstr "" -#: neutron/common/exceptions.py:449 +#: neutron/common/exceptions.py:453 #, python-format msgid "" "Unable to allocate subnet with prefix length %(prefixlen)s, minimum " "allowed prefix is %(min_prefixlen)s" msgstr "" -#: neutron/common/exceptions.py:454 +#: neutron/common/exceptions.py:458 #, python-format msgid "" "Unable to allocate subnet with prefix length %(prefixlen)s, maximum " "allowed prefix is %(max_prefixlen)s" msgstr "" -#: neutron/common/exceptions.py:459 +#: neutron/common/exceptions.py:463 #, python-format msgid "Unable to delete subnet pool: %(reason)s" msgstr "" -#: neutron/common/exceptions.py:463 +#: 
neutron/common/exceptions.py:467 msgid "Per-tenant subnet pool prefix quota exceeded" msgstr "" -#: neutron/common/exceptions.py:467 +#: neutron/common/exceptions.py:471 #, python-format msgid "Device '%(device_name)s' does not exist" msgstr "" -#: neutron/common/exceptions.py:471 +#: neutron/common/exceptions.py:475 msgid "" "Subnets hosted on the same network must be allocated from the same subnet" " pool" @@ -1854,34 +1850,34 @@ msgstr "" msgid "Bad prefix type for generate IPv6 address by EUI-64: %s" msgstr "" -#: neutron/common/utils.py:203 +#: neutron/common/utils.py:214 #: neutron/plugins/ml2/drivers/mech_sriov/agent/common/config.py:36 #, python-format msgid "Invalid mapping: '%s'" msgstr "" -#: neutron/common/utils.py:206 +#: neutron/common/utils.py:217 #: neutron/plugins/ml2/drivers/mech_sriov/agent/common/config.py:39 #, python-format msgid "Missing key in mapping: '%s'" msgstr "" -#: neutron/common/utils.py:209 +#: neutron/common/utils.py:220 #, python-format msgid "Missing value in mapping: '%s'" msgstr "" -#: neutron/common/utils.py:211 +#: neutron/common/utils.py:222 #, python-format msgid "Key %(key)s in mapping: '%(mapping)s' not unique" msgstr "" -#: neutron/common/utils.py:214 +#: neutron/common/utils.py:225 #, python-format msgid "Value %(value)s in mapping: '%(mapping)s' not unique" msgstr "" -#: neutron/common/utils.py:408 +#: neutron/common/utils.py:419 msgid "Illegal IP version number" msgstr "" @@ -1939,23 +1935,23 @@ msgid "" "such agents is available if this option is True." 
msgstr "" -#: neutron/db/common_db_mixin.py:138 +#: neutron/db/common_db_mixin.py:148 msgid "Cannot create resource for another tenant" msgstr "" -#: neutron/db/db_base_plugin_v2.py:108 neutron/db/db_base_plugin_v2.py:112 +#: neutron/db/db_base_plugin_v2.py:115 neutron/db/db_base_plugin_v2.py:119 #, python-format msgid "Invalid route: %s" msgstr "" -#: neutron/db/db_base_plugin_v2.py:164 +#: neutron/db/db_base_plugin_v2.py:171 #, python-format msgid "" "Invalid CIDR %s for IPv6 address mode. OpenStack uses the EUI-64 address " "format, which requires the prefix to be /64." msgstr "" -#: neutron/db/db_base_plugin_v2.py:172 +#: neutron/db/db_base_plugin_v2.py:179 #, python-format msgid "" "ipv6_ra_mode set to '%(ra_mode)s' with ipv6_address_mode set to " @@ -1963,73 +1959,79 @@ msgid "" "the same value" msgstr "" -#: neutron/db/db_base_plugin_v2.py:180 +#: neutron/db/db_base_plugin_v2.py:187 msgid "" "ipv6_ra_mode or ipv6_address_mode cannot be set when enable_dhcp is set " "to False." msgstr "" -#: neutron/db/db_base_plugin_v2.py:186 +#: neutron/db/db_base_plugin_v2.py:193 msgid "Cannot disable enable_dhcp with ipv6 attributes set" msgstr "" -#: neutron/db/db_base_plugin_v2.py:316 +#: neutron/db/db_base_plugin_v2.py:342 #, python-format msgid "%(name)s '%(addr)s' does not match the ip_version '%(ip_version)s'" msgstr "" -#: neutron/db/db_base_plugin_v2.py:343 +#: neutron/db/db_base_plugin_v2.py:369 msgid "Subnet has a prefix length that is incompatible with DHCP service enabled." 
msgstr "" -#: neutron/db/db_base_plugin_v2.py:364 +#: neutron/db/db_base_plugin_v2.py:390 msgid "Gateway is not valid on subnet" msgstr "" -#: neutron/db/db_base_plugin_v2.py:384 neutron/db/db_base_plugin_v2.py:398 +#: neutron/db/db_base_plugin_v2.py:410 neutron/db/db_base_plugin_v2.py:424 #: neutron/plugins/opencontrail/contrail_plugin.py:313 msgid "new subnet" msgstr "" -#: neutron/db/db_base_plugin_v2.py:391 +#: neutron/db/db_base_plugin_v2.py:417 #, python-format msgid "Error parsing dns address %s" msgstr "" -#: neutron/db/db_base_plugin_v2.py:407 +#: neutron/db/db_base_plugin_v2.py:433 msgid "ipv6_ra_mode is not valid when ip_version is 4" msgstr "" -#: neutron/db/db_base_plugin_v2.py:411 +#: neutron/db/db_base_plugin_v2.py:437 msgid "ipv6_address_mode is not valid when ip_version is 4" msgstr "" -#: neutron/db/db_base_plugin_v2.py:490 +#: neutron/db/db_base_plugin_v2.py:517 msgid "ip_version must be specified in the absence of cidr and subnetpool_id" msgstr "" -#: neutron/db/db_base_plugin_v2.py:507 +#: neutron/db/db_base_plugin_v2.py:534 msgid "cidr and prefixlen must not be supplied together" msgstr "" -#: neutron/db/db_base_plugin_v2.py:521 +#: neutron/db/db_base_plugin_v2.py:548 msgid "A cidr must be specified in the absence of a subnet pool" msgstr "" -#: neutron/db/db_base_plugin_v2.py:697 +#: neutron/db/db_base_plugin_v2.py:731 msgid "Existing prefixes must be a subset of the new prefixes" msgstr "" -#: neutron/db/db_base_plugin_v2.py:764 +#: neutron/db/db_base_plugin_v2.py:798 msgid "Subnet pool has existing allocations" msgstr "" -#: neutron/db/db_base_plugin_v2.py:771 +#: neutron/db/db_base_plugin_v2.py:805 msgid "mac address update" msgstr "" #: neutron/db/dvr_mac_db.py:38 -msgid "The base mac address used for unique DVR instances by Neutron" +msgid "" +"The base mac address used for unique DVR instances by Neutron. The first " +"3 octets will remain unchanged. If the 4th octet is not 00, it will also " +"be used. 
The others will be randomly generated. The 'dvr_base_mac' *must*" +" be different from 'base_mac' to avoid mixing them up with MAC's " +"allocated for tenant ports. A 4 octet example would be dvr_base_mac = " +"fa:16:3f:4f:00:00. The default is 3 octet" msgstr "" #: neutron/db/extraroute_db.py:36 @@ -2044,62 +2046,93 @@ msgstr "" msgid "the nexthop is used by router" msgstr "" -#: neutron/db/ipam_backend_mixin.py:63 +#: neutron/db/flavors_db.py:35 +#, python-format +msgid "Flavor %(flavor_id)s could not be found" +msgstr "" + +#: neutron/db/flavors_db.py:39 +#, python-format +msgid "Flavor %(flavor_id)s is used by some service instance" +msgstr "" + +#: neutron/db/flavors_db.py:43 +#, python-format +msgid "Service Profile %(sp_id)s could not be found" +msgstr "" + +#: neutron/db/flavors_db.py:47 +#, python-format +msgid "Service Profile %(sp_id)s is used by some service instance" +msgstr "" + +#: neutron/db/flavors_db.py:51 +#, python-format +msgid "Service Profile %(sp_id)s is already associated with flavor %(fl_id)s" +msgstr "" + +#: neutron/db/flavors_db.py:56 +#, python-format +msgid "Service Profile %(sp_id)s is not associated with flavor %(fl_id)s" +msgstr "" + +#: neutron/db/ipam_backend_mixin.py:81 msgid "allocation_pools allowed only for specific subnet requests." 
msgstr "" -#: neutron/db/ipam_backend_mixin.py:74 +#: neutron/db/ipam_backend_mixin.py:92 #, python-format msgid "Cannot allocate IPv%(req_ver)s subnet from IPv%(pool_ver)s subnet pool" msgstr "" -#: neutron/db/ipam_backend_mixin.py:193 +#: neutron/db/ipam_backend_mixin.py:212 msgid "0 is not allowed as CIDR prefix length" msgstr "" -#: neutron/db/ipam_backend_mixin.py:203 +#: neutron/db/ipam_backend_mixin.py:222 #, python-format msgid "" "Requested subnet with cidr: %(cidr)s for network: %(network_id)s overlaps" " with another subnet" msgstr "" -#: neutron/db/ipam_backend_mixin.py:329 -msgid "Exceeded maximum amount of fixed ips per port" +#: neutron/db/ipam_backend_mixin.py:300 +#: neutron/plugins/opencontrail/contrail_plugin.py:390 +msgid "Exceeded maximim amount of fixed ips per port" msgstr "" -#: neutron/db/ipam_non_pluggable_backend.py:248 -msgid "IP allocation requires subnet_id or ip_address" -msgstr "" - -#: neutron/db/ipam_non_pluggable_backend.py:265 +#: neutron/db/ipam_backend_mixin.py:307 #, python-format msgid "" "Failed to create port on network %(network_id)s, because fixed_ips " "included invalid subnet %(subnet_id)s" msgstr "" -#: neutron/db/ipam_non_pluggable_backend.py:291 +#: neutron/db/ipam_backend_mixin.py:321 +msgid "IP allocation requires subnet_id or ip_address" +msgstr "" + +#: neutron/db/ipam_backend_mixin.py:365 +msgid "Exceeded maximum amount of fixed ips per port" +msgstr "" + +#: neutron/db/ipam_non_pluggable_backend.py:257 +#: neutron/db/ipam_pluggable_backend.py:248 #, python-format msgid "" "IPv6 address %(address)s can not be directly assigned to a port on subnet" " %(id)s since the subnet is configured for automatic addresses" msgstr "" -#: neutron/db/ipam_non_pluggable_backend.py:310 -#: neutron/plugins/opencontrail/contrail_plugin.py:390 -msgid "Exceeded maximim amount of fixed ips per port" -msgstr "" - -#: neutron/db/l3_agentschedulers_db.py:45 +#: neutron/db/l3_agentschedulers_db.py:46 msgid "Driver to use for scheduling 
router to a default L3 agent" msgstr "" -#: neutron/db/l3_agentschedulers_db.py:48 +#: neutron/db/l3_agentschedulers_db.py:49 msgid "Allow auto scheduling of routers to L3 agent." msgstr "" -#: neutron/db/l3_agentschedulers_db.py:50 +#: neutron/db/l3_agentschedulers_db.py:51 msgid "" "Automatically reschedule routers from offline L3 agents to online L3 " "agents." @@ -2140,7 +2173,7 @@ msgstr "" msgid "Cannot specify both subnet-id and port-id" msgstr "" -#: neutron/db/l3_db.py:521 +#: neutron/db/l3_db.py:525 #, python-format msgid "" "Cannot have multiple router ports with the same network id if both " @@ -2148,63 +2181,82 @@ msgid "" "id %(nid)s" msgstr "" -#: neutron/db/l3_db.py:563 +#: neutron/db/l3_db.py:567 msgid "Subnet for router interface must have a gateway IP" msgstr "" -#: neutron/db/l3_db.py:567 +#: neutron/db/l3_db.py:571 #, python-format msgid "" "IPv6 subnet %s configured to receive RAs from an external router cannot " "be added to Neutron Router." msgstr "" -#: neutron/db/l3_db.py:779 +#: neutron/db/l3_db.py:783 #, python-format msgid "Cannot add floating IP to port on subnet %s which has no gateway_ip" msgstr "" -#: neutron/db/l3_db.py:820 +#: neutron/db/l3_db.py:828 #, python-format msgid "" "Port %(port_id)s is associated with a different tenant than Floating IP " "%(floatingip_id)s and therefore cannot be bound." msgstr "" -#: neutron/db/l3_db.py:824 +#: neutron/db/l3_db.py:832 #, python-format msgid "" "Cannot create floating IP and bind it to Port %s, since that port is " "owned by a different tenant." msgstr "" -#: neutron/db/l3_db.py:836 +#: neutron/db/l3_db.py:844 +#, python-format +msgid "" +"Floating IP %(floatingip_id)s is associated with non-IPv4 address " +"%(internal_ip)s and therefore cannot be bound." +msgstr "" + +#: neutron/db/l3_db.py:848 +#, python-format +msgid "" +"Cannot create floating IP and bind it to %s, since that is not an IPv4 " +"address."
+msgstr ""
+
+#: neutron/db/l3_db.py:856
 #, python-format
 msgid "Port %(id)s does not have fixed ip %(address)s"
 msgstr ""
 
-#: neutron/db/l3_db.py:843
+#: neutron/db/l3_db.py:863
 #, python-format
-msgid "Cannot add floating IP to port %s that has no fixed IP addresses"
+msgid "Cannot add floating IP to port %s that has no fixed IPv4 addresses"
 msgstr ""
 
-#: neutron/db/l3_db.py:847
+#: neutron/db/l3_db.py:867
 #, python-format
 msgid ""
-"Port %s has multiple fixed IPs. Must provide a specific IP when "
-"assigning a floating IP"
+"Port %s has multiple fixed IPv4 addresses. Must provide a specific IPv4 "
+"address when assigning a floating IP"
 msgstr ""
 
-#: neutron/db/l3_db.py:876
+#: neutron/db/l3_db.py:896
 msgid "fixed_ip_address cannot be specified without a port_id"
 msgstr ""
 
-#: neutron/db/l3_db.py:916
+#: neutron/db/l3_db.py:940
 #, python-format
 msgid "Network %s is not a valid external network"
 msgstr ""
 
-#: neutron/db/l3_db.py:1060
+#: neutron/db/l3_db.py:944
+#, python-format
+msgid "Network %s does not contain any IPv4 subnet"
+msgstr ""
+
+#: neutron/db/l3_db.py:1089
 #, python-format
 msgid "has device owner %s"
 msgstr ""
@@ -2215,11 +2267,15 @@ msgid ""
 " Only admin can override."
 msgstr ""
 
-#: neutron/db/l3_dvr_db.py:566
+#: neutron/db/l3_dvr_db.py:90
+msgid "Migration from distributed router to centralized"
+msgstr ""
+
+#: neutron/db/l3_dvr_db.py:574
 msgid "Unable to create the Agent Gateway Port"
 msgstr ""
 
-#: neutron/db/l3_dvr_db.py:598
+#: neutron/db/l3_dvr_db.py:606
 msgid "Unable to create the SNAT Interface Port"
 msgstr ""
@@ -2245,6 +2301,13 @@ msgstr ""
 msgid "Subnet used for the l3 HA admin network."
 msgstr ""
 
+#: neutron/db/rbac_db_models.py:27
+#, python-format
+msgid ""
+"Invalid action '%(action)s' for object type '%(object_type)s'. Valid "
+"actions: %(valid_actions)s"
+msgstr ""
+
 #: neutron/db/securitygroups_db.py:271 neutron/db/securitygroups_db.py:612
 #, python-format
 msgid "cannot be deleted due to %s"
 msgstr ""
@@ -2276,121 +2339,66 @@ msgstr ""
 msgid "%s cannot be called while in offline mode"
 msgstr ""
 
-#: neutron/db/migration/cli.py:37
+#: neutron/db/migration/cli.py:43
 msgid "Neutron plugin provider module"
 msgstr ""
 
-#: neutron/db/migration/cli.py:43
+#: neutron/db/migration/cli.py:49
 #, python-format
 msgid "The advanced service to execute the command against. Can be one of '%s'."
 msgstr ""
 
-#: neutron/db/migration/cli.py:50
+#: neutron/db/migration/cli.py:56
 msgid "Neutron quota driver class"
 msgstr ""
 
-#: neutron/db/migration/cli.py:58
+#: neutron/db/migration/cli.py:64
 msgid "URL to database"
 msgstr ""
 
-#: neutron/db/migration/cli.py:61
+#: neutron/db/migration/cli.py:67
 msgid "Database engine"
 msgstr ""
 
-#: neutron/db/migration/cli.py:88
+#: neutron/db/migration/cli.py:94
 msgid "You must provide a revision or relative delta"
 msgstr ""
 
-#: neutron/db/migration/cli.py:92
+#: neutron/db/migration/cli.py:98
 msgid "Negative relative revision (downgrade) not supported"
 msgstr ""
 
-#: neutron/db/migration/cli.py:98
+#: neutron/db/migration/cli.py:104
 msgid "Use either --delta or relative revision, not both"
 msgstr ""
 
-#: neutron/db/migration/cli.py:101
+#: neutron/db/migration/cli.py:107
 msgid "Negative delta (downgrade) not supported"
 msgstr ""
 
-#: neutron/db/migration/cli.py:110
+#: neutron/db/migration/cli.py:120
 msgid "Downgrade no longer supported"
 msgstr ""
 
-#: neutron/db/migration/cli.py:130 neutron/db/migration/cli.py:143
-msgid "Timeline branches unable to generate timeline"
+#: neutron/db/migration/cli.py:159
+#, python-format
+msgid "No new branches are allowed except: %s"
 msgstr ""
 
-#: neutron/db/migration/cli.py:137
-msgid "HEAD file does not match migration timeline head"
+#: neutron/db/migration/cli.py:177
+#, python-format
+msgid "HEADS file does not match migration timeline heads, expected: %s"
 msgstr ""
 
-#: neutron/db/migration/cli.py:188
+#: neutron/db/migration/cli.py:228
 msgid "Available commands"
 msgstr ""
 
-#: neutron/db/migration/cli.py:196
+#: neutron/db/migration/cli.py:301
 #, python-format
 msgid "Package neutron-%s not installed"
 msgstr ""
 
-#: neutron/db/migration/migrate_to_ml2.py:90
-msgid "Missing version in alembic_versions table"
-msgstr ""
-
-#: neutron/db/migration/migrate_to_ml2.py:92
-#, python-format
-msgid "Multiple versions in alembic_versions table: %s"
-msgstr ""
-
-#: neutron/db/migration/migrate_to_ml2.py:96
-#, python-format
-msgid ""
-"Unsupported database schema %(current)s. Please migrate your database to "
-"one of following versions: %(supported)s"
-msgstr ""
-
-#: neutron/db/migration/migrate_to_ml2.py:447
-#, python-format
-msgid "Unknown tunnel type: %s"
-msgstr ""
-
-#: neutron/db/migration/migrate_to_ml2.py:481
-msgid "The plugin type whose database will be migrated"
-msgstr ""
-
-#: neutron/db/migration/migrate_to_ml2.py:484
-msgid "The connection url for the target db"
-msgstr ""
-
-#: neutron/db/migration/migrate_to_ml2.py:487
-#, python-format
-msgid "The %s tunnel type to migrate from"
-msgstr ""
-
-#: neutron/db/migration/migrate_to_ml2.py:490
-#: neutron/plugins/ml2/drivers/openvswitch/agent/common/config.py:68
-msgid "The UDP port to use for VXLAN tunnels."
-msgstr ""
-
-#: neutron/db/migration/migrate_to_ml2.py:493
-msgid "Retain the old plugin's tables"
-msgstr ""
-
-#: neutron/db/migration/migrate_to_ml2.py:499
-#, python-format
-msgid ""
-"Tunnel args (tunnel-type and vxlan-udp-port) are not valid for the %s "
-"plugin"
-msgstr ""
-
-#: neutron/db/migration/migrate_to_ml2.py:506
-#, python-format
-msgid ""
-"Support for migrating %(plugin)s for release %(release)s is not yet "
-"implemented"
-msgstr ""
-
 #: neutron/db/migration/alembic_migrations/versions/14be42f3d0a5_default_sec_group_table.py:45
 #, python-format
 msgid ""
@@ -2980,36 +2988,36 @@ msgstr ""
 msgid "Unsupported request type"
 msgstr ""
 
-#: neutron/plugins/brocade/NeutronPlugin.py:62
+#: neutron/plugins/brocade/NeutronPlugin.py:61
 #: neutron/plugins/ml2/drivers/brocade/mechanism_brocade.py:22
 #: neutron/services/l3_router/brocade/l3_router_plugin.py:23
 msgid "The address of the host to SSH to"
 msgstr ""
 
-#: neutron/plugins/brocade/NeutronPlugin.py:64
+#: neutron/plugins/brocade/NeutronPlugin.py:63
 #: neutron/plugins/ml2/drivers/brocade/mechanism_brocade.py:24
 #: neutron/services/l3_router/brocade/l3_router_plugin.py:25
 msgid "The SSH username to use"
 msgstr ""
 
-#: neutron/plugins/brocade/NeutronPlugin.py:66
+#: neutron/plugins/brocade/NeutronPlugin.py:65
 #: neutron/plugins/ml2/drivers/brocade/mechanism_brocade.py:26
 #: neutron/services/l3_router/brocade/l3_router_plugin.py:27
 msgid "The SSH password to use"
 msgstr ""
 
-#: neutron/plugins/brocade/NeutronPlugin.py:68
+#: neutron/plugins/brocade/NeutronPlugin.py:67
 msgid "Currently unused"
 msgstr ""
 
-#: neutron/plugins/brocade/NeutronPlugin.py:72
+#: neutron/plugins/brocade/NeutronPlugin.py:71
 msgid "The network interface to use when creating a port"
 msgstr ""
 
-#: neutron/plugins/brocade/NeutronPlugin.py:300
-#: neutron/plugins/brocade/NeutronPlugin.py:341
-#: neutron/plugins/brocade/NeutronPlugin.py:394
-#: neutron/plugins/brocade/NeutronPlugin.py:425
+#: neutron/plugins/brocade/NeutronPlugin.py:299
+#: neutron/plugins/brocade/NeutronPlugin.py:340
+#: neutron/plugins/brocade/NeutronPlugin.py:393
+#: neutron/plugins/brocade/NeutronPlugin.py:424
 msgid "Brocade plugin raised exception, check logs"
 msgstr ""
@@ -3900,46 +3908,6 @@ msgstr ""
 msgid "The input does not contain nececessary info: %(msg)s"
 msgstr ""
 
-#: neutron/plugins/metaplugin/common/config.py:23
-msgid ""
-"Comma separated list of flavor:neutron_plugin for plugins to load. "
-"Extension method is searched in the list order and the first one is used."
-msgstr ""
-
-#: neutron/plugins/metaplugin/common/config.py:29
-msgid ""
-"Comma separated list of flavor:neutron_plugin for L3 service plugins to "
-"load. This is intended for specifying L2 plugins which support L3 "
-"functions. If you use a router service plugin, set this blank."
-msgstr ""
-
-#: neutron/plugins/metaplugin/common/config.py:36
-msgid ""
-"Default flavor to use, when flavor:network is not specified at network "
-"creation."
-msgstr ""
-
-#: neutron/plugins/metaplugin/common/config.py:41
-msgid ""
-"Default L3 flavor to use, when flavor:router is not specified at router "
-"creation. Ignored if 'l3_plugin_list' is blank."
-msgstr ""
-
-#: neutron/plugins/metaplugin/common/config.py:47
-msgid "Comma separated list of supported extension aliases."
-msgstr ""
-
-#: neutron/plugins/metaplugin/common/config.py:51
-msgid ""
-"Comma separated list of method:flavor to select specific plugin for a "
-"method. This has priority over method search order based on "
-"'plugin_list'."
-msgstr ""
-
-#: neutron/plugins/metaplugin/common/config.py:57
-msgid "Specifies flavor for plugin to handle 'q-plugin' RPC requests."
-msgstr ""
-
 #: neutron/plugins/midonet/plugin.py:23
 msgid "MidoNet API server URI."
 msgstr ""
@@ -4017,7 +3985,7 @@ msgstr ""
 msgid "network_type value '%s' not supported"
 msgstr ""
 
-#: neutron/plugins/ml2/plugin.py:231
+#: neutron/plugins/ml2/plugin.py:230
 msgid "binding:profile value too large"
 msgstr ""
@@ -4050,7 +4018,7 @@ msgstr ""
 msgid "%s prohibited for flat provider network"
 msgstr ""
 
-#: neutron/plugins/ml2/drivers/type_gre.py:35
+#: neutron/plugins/ml2/drivers/type_gre.py:32
 msgid ""
 "Comma-separated list of <tun_min>:<tun_max> tuples enumerating ranges of "
 "GRE tunnel IDs that are available for tenant network allocation"
@@ -4061,30 +4029,30 @@ msgstr ""
 msgid "%s prohibited for local provider network"
 msgstr ""
 
-#: neutron/plugins/ml2/drivers/type_tunnel.py:122
+#: neutron/plugins/ml2/drivers/type_tunnel.py:168
 #, python-format
 msgid "provider:physical_network specified for %s network"
 msgstr ""
 
-#: neutron/plugins/ml2/drivers/type_tunnel.py:129
+#: neutron/plugins/ml2/drivers/type_tunnel.py:175
 #, python-format
 msgid "%(key)s prohibited for %(tunnel)s provider network"
 msgstr ""
 
-#: neutron/plugins/ml2/drivers/type_tunnel.py:254
+#: neutron/plugins/ml2/drivers/type_tunnel.py:300
 msgid "Tunnel IP value needed by the ML2 plugin"
 msgstr ""
 
-#: neutron/plugins/ml2/drivers/type_tunnel.py:259
+#: neutron/plugins/ml2/drivers/type_tunnel.py:305
 msgid "Network type value needed by the ML2 plugin"
 msgstr ""
 
-#: neutron/plugins/ml2/drivers/type_tunnel.py:286
+#: neutron/plugins/ml2/drivers/type_tunnel.py:332
 #, python-format
 msgid "Tunnel IP %(ip)s in use with host %(host)s"
 msgstr ""
 
-#: neutron/plugins/ml2/drivers/type_tunnel.py:305
+#: neutron/plugins/ml2/drivers/type_tunnel.py:351
 #, python-format
 msgid "Network type value '%s' not supported"
 msgstr ""
@@ -4116,13 +4084,13 @@ msgstr ""
 msgid "%s prohibited for VLAN provider network"
 msgstr ""
 
-#: neutron/plugins/ml2/drivers/type_vxlan.py:34
+#: neutron/plugins/ml2/drivers/type_vxlan.py:32
 msgid ""
 "Comma-separated list of <vni_min>:<vni_max> tuples enumerating ranges of "
 "VXLAN VNI IDs that are available for tenant network allocation"
 msgstr ""
 
-#: neutron/plugins/ml2/drivers/type_vxlan.py:38
+#: neutron/plugins/ml2/drivers/type_vxlan.py:36
 msgid "Multicast group for VXLAN. If unset, disables VXLAN multicast mode."
 msgstr ""
@@ -4239,82 +4207,6 @@ msgstr ""
 msgid "OS Version number"
 msgstr ""
 
-#: neutron/plugins/ml2/drivers/cisco/apic/config.py:24
-msgid "Prefix for APIC domain/names/profiles created"
-msgstr ""
-
-#: neutron/plugins/ml2/drivers/cisco/apic/config.py:34
-msgid "An ordered list of host names or IP addresses of the APIC controller(s)."
-msgstr ""
-
-#: neutron/plugins/ml2/drivers/cisco/apic/config.py:37
-msgid "Username for the APIC controller"
-msgstr ""
-
-#: neutron/plugins/ml2/drivers/cisco/apic/config.py:39
-msgid "Password for the APIC controller"
-msgstr ""
-
-#: neutron/plugins/ml2/drivers/cisco/apic/config.py:42
-msgid "Name mapping strategy to use: use_uuid | use_name"
-msgstr ""
-
-#: neutron/plugins/ml2/drivers/cisco/apic/config.py:44
-msgid "Use SSL to connect to the APIC controller"
-msgstr ""
-
-#: neutron/plugins/ml2/drivers/cisco/apic/config.py:47
-msgid "Name for the domain created on APIC"
-msgstr ""
-
-#: neutron/plugins/ml2/drivers/cisco/apic/config.py:50
-msgid "Name for the app profile used for Openstack"
-msgstr ""
-
-#: neutron/plugins/ml2/drivers/cisco/apic/config.py:53
-msgid "Name for the vlan namespace to be used for Openstack"
-msgstr ""
-
-#: neutron/plugins/ml2/drivers/cisco/apic/config.py:56
-msgid "Name of the node profile to be created"
-msgstr ""
-
-#: neutron/plugins/ml2/drivers/cisco/apic/config.py:59
-msgid "Name of the entity profile to be created"
-msgstr ""
-
-#: neutron/plugins/ml2/drivers/cisco/apic/config.py:62
-msgid "Name of the function profile to be created"
-msgstr ""
-
-#: neutron/plugins/ml2/drivers/cisco/apic/config.py:65
-msgid "Name of the LACP profile to be created"
-msgstr ""
-
-#: neutron/plugins/ml2/drivers/cisco/apic/config.py:68
-msgid "The uplink ports to check for ACI connectivity"
-msgstr ""
-
-#: neutron/plugins/ml2/drivers/cisco/apic/config.py:71
-msgid "The switch pairs for VPC connectivity"
-msgstr ""
-
-#: neutron/plugins/ml2/drivers/cisco/apic/config.py:74
-msgid "Range of VLAN's to be used for Openstack"
-msgstr ""
-
-#: neutron/plugins/ml2/drivers/cisco/apic/config.py:77
-msgid "Synchronization interval in seconds"
-msgstr ""
-
-#: neutron/plugins/ml2/drivers/cisco/apic/config.py:80
-msgid "Interval between agent status updates (in sec)"
-msgstr ""
-
-#: neutron/plugins/ml2/drivers/cisco/apic/config.py:83
-msgid "Interval between agent poll for topology (in sec)"
-msgstr ""
-
 #: neutron/plugins/ml2/drivers/cisco/n1kv/extensions/n1kv.py:43
 msgid "Add new policy profile attribute to port resource."
 msgstr ""
@@ -4544,23 +4436,23 @@ msgid ""
 "error: %(error)s"
 msgstr ""
 
-#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1637
+#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1650
 msgid ""
 "DVR deployments for VXLAN/GRE underlays require L2-pop to be enabled, in "
 "both the Agent and Server side."
 msgstr ""
 
-#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1651
+#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1664
 #, python-format
 msgid "Parsing bridge_mappings failed: %s."
 msgstr ""
 
-#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1673
+#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1686
 #, python-format
 msgid "Invalid tunnel type specified: %s"
 msgstr ""
 
-#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1676
+#: neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1689
 msgid "Tunneling cannot be enabled without a valid local_ip."
 msgstr ""
@@ -4608,6 +4500,10 @@ msgstr ""
 msgid "Network types supported by the agent (gre and/or vxlan)."
 msgstr ""
 
+#: neutron/plugins/ml2/drivers/openvswitch/agent/common/config.py:68
+msgid "The UDP port to use for VXLAN tunnels."
+msgstr ""
+
 #: neutron/plugins/ml2/drivers/openvswitch/agent/common/config.py:70
 msgid "MTU size of veth interfaces"
 msgstr ""
@@ -4986,10 +4882,6 @@ msgstr ""
 msgid "Error importing FWaaS device driver: %s"
 msgstr ""
 
-#: neutron/services/l3_router/l3_apic.py:57
-msgid "L3 Router Service Plugin for basic L3 using the APIC"
-msgstr ""
-
 #: neutron/services/l3_router/brocade/l3_router_plugin.py:29
 msgid "Rbridge id of provider edge router(s)"
 msgstr ""
@@ -5070,7 +4962,7 @@ msgstr ""
 msgid "An interface driver must be specified"
 msgstr ""
 
-#: neutron/tests/base.py:110
+#: neutron/tests/base.py:109
 #, python-format
 msgid "Unknown attribute '%s'."
 msgstr ""
@@ -5089,8 +4981,8 @@ msgstr ""
 msgid "Keepalived didn't respawn"
 msgstr ""
 
-#: neutron/tests/unit/agent/linux/test_iptables_manager.py:845
-#: neutron/tests/unit/agent/linux/test_iptables_manager.py:879
+#: neutron/tests/unit/agent/linux/test_iptables_manager.py:846
+#: neutron/tests/unit/agent/linux/test_iptables_manager.py:880
 #, python-format
 msgid ""
 "IPTablesManager.apply failed to apply the following set of iptables "
diff --git a/neutron/locale/pt_BR/LC_MESSAGES/neutron-log-info.po b/neutron/locale/pt_BR/LC_MESSAGES/neutron-log-info.po
index 90630080916..f6a1104562f 100644
--- a/neutron/locale/pt_BR/LC_MESSAGES/neutron-log-info.po
+++ b/neutron/locale/pt_BR/LC_MESSAGES/neutron-log-info.po
@@ -8,11 +8,11 @@ msgid ""
 msgstr ""
 "Project-Id-Version: Neutron\n"
 "Report-Msgid-Bugs-To: EMAIL@ADDRESS\n"
-"POT-Creation-Date: 2015-07-11 06:09+0000\n"
-"PO-Revision-Date: 2015-07-08 20:45+0000\n"
+"POT-Creation-Date: 2015-07-27 06:07+0000\n"
+"PO-Revision-Date: 2015-07-25 03:05+0000\n"
 "Last-Translator: openstackjenkins \n"
-"Language-Team: Portuguese (Brazil) (http://www.transifex.com/p/neutron/"
-"language/pt_BR/)\n"
+"Language-Team: Portuguese (Brazil) (http://www.transifex.com/projects/p/"
+"neutron/language/pt_BR/)\n"
 "Language: pt_BR\n"
 "MIME-Version: 1.0\n"
 "Content-Type: text/plain; charset=UTF-8\n"
@@ -293,10 +293,6 @@ msgstr "Ignorando a porta %s porque nenhum IP está configurado nela"
 msgid "Specified IP addresses do not match the subnet IP version"
 msgstr "Endereços IP especificado não correspondem à versão do IP da sub-rede"
 
-#, python-format
-msgid "Start IP (%(start)s) is greater than end IP (%(end)s)"
-msgstr "IP inicial (%(start)s) é maior que IP final (%(end)s)"
-
 msgid "Synchronizing state"
 msgstr "Sincronizando estado"
diff --git a/neutron/locale/zh_CN/LC_MESSAGES/neutron-log-info.po b/neutron/locale/zh_CN/LC_MESSAGES/neutron-log-info.po
index f84f2f01193..8c5ddc5511b 100644
--- a/neutron/locale/zh_CN/LC_MESSAGES/neutron-log-info.po
+++ b/neutron/locale/zh_CN/LC_MESSAGES/neutron-log-info.po
@@ -8,11 +8,11 @@ msgid ""
 msgstr ""
 "Project-Id-Version: Neutron\n"
 "Report-Msgid-Bugs-To: EMAIL@ADDRESS\n"
-"POT-Creation-Date: 2015-07-11 06:09+0000\n"
-"PO-Revision-Date: 2015-07-08 20:45+0000\n"
+"POT-Creation-Date: 2015-07-27 06:07+0000\n"
+"PO-Revision-Date: 2015-07-25 03:05+0000\n"
 "Last-Translator: openstackjenkins \n"
-"Language-Team: Chinese (China) (http://www.transifex.com/p/neutron/language/"
-"zh_CN/)\n"
+"Language-Team: Chinese (China) (http://www.transifex.com/projects/p/neutron/"
+"language/zh_CN/)\n"
 "Language: zh_CN\n"
 "MIME-Version: 1.0\n"
 "Content-Type: text/plain; charset=UTF-8\n"
@@ -48,20 +48,6 @@ msgstr "%(url)s 返回了故障:%(exception)s"
 msgid "%(url)s returned with HTTP %(status)d"
 msgstr "%(url)s 随HTTP %(status)d返回"
 
-#, python-format
-msgid "APIC host agent: agent starting on %s"
-msgstr "APIC 主机代理: 代理正启动在 %s"
-
-#, python-format
-msgid "APIC host agent: started on %s"
-msgstr "APIC 主机代理: 已启动在 %s"
-
-msgid "APIC service agent started"
-msgstr "APIC 服务代理已启动"
-
-msgid "APIC service agent starting ..."
-msgstr "APIC 服务代理启动中 ..."
-
 #, python-format
 msgid ""
 "Added segment %(id)s of type %(network_type)s for network %(network_id)s"
@@ -345,10 +331,6 @@ msgstr "正在跳过端口 %s,因为没有在该端口上配置任何 IP"
 msgid "Specified IP addresses do not match the subnet IP version"
 msgstr "指定的 IP 地址与子网 IP 版本不匹配"
 
-#, python-format
-msgid "Start IP (%(start)s) is greater than end IP (%(end)s)"
-msgstr "起始 IP (%(start)s) 大于结束 IP (%(end)s)"
-
 #, python-format
 msgid "Subnet %s was deleted concurrently"
 msgstr "子网 %s 同时被删除 "
diff --git a/neutron/locale/zh_TW/LC_MESSAGES/neutron-log-info.po b/neutron/locale/zh_TW/LC_MESSAGES/neutron-log-info.po
index d5fe7830960..09cba56e8ba 100644
--- a/neutron/locale/zh_TW/LC_MESSAGES/neutron-log-info.po
+++ b/neutron/locale/zh_TW/LC_MESSAGES/neutron-log-info.po
@@ -7,11 +7,11 @@ msgid ""
 msgstr ""
 "Project-Id-Version: Neutron\n"
 "Report-Msgid-Bugs-To: EMAIL@ADDRESS\n"
-"POT-Creation-Date: 2015-07-11 06:09+0000\n"
-"PO-Revision-Date: 2015-07-08 20:45+0000\n"
+"POT-Creation-Date: 2015-07-27 06:07+0000\n"
+"PO-Revision-Date: 2015-07-25 03:05+0000\n"
 "Last-Translator: openstackjenkins \n"
-"Language-Team: Chinese (Taiwan) (http://www.transifex.com/p/neutron/language/"
-"zh_TW/)\n"
+"Language-Team: Chinese (Taiwan) (http://www.transifex.com/projects/p/neutron/"
+"language/zh_TW/)\n"
 "Language: zh_TW\n"
 "MIME-Version: 1.0\n"
 "Content-Type: text/plain; charset=UTF-8\n"
@@ -191,10 +191,6 @@ msgstr "正在跳過埠 %s,因為其上沒有配置 IP"
 msgid "Specified IP addresses do not match the subnet IP version"
 msgstr "指定的 IP 位址與子網路 IP 版本不符"
 
-#, python-format
-msgid "Start IP (%(start)s) is greater than end IP (%(end)s)"
-msgstr "起始 IP (%(start)s) 大於結尾 IP (%(end)s)"
-
 msgid "Synchronizing state"
 msgstr "正在同步化狀態"
diff --git a/neutron/manager.py b/neutron/manager.py
index 50beae09868..0e3a16cb2ed 100644
--- a/neutron/manager.py
+++ b/neutron/manager.py
@@ -23,6 +23,7 @@ from oslo_utils import importutils
 import six
 
 from neutron.common import utils
+from neutron.db import flavors_db
 from neutron.i18n import _LE, _LI
 from neutron.plugins.common import constants
@@ -165,6 +166,11 @@ class NeutronManager(object):
             LOG.info(_LI("Service %s is supported by the core plugin"),
                      service_type)
 
+    def _load_flavors_manager(self):
+        # pass manager instance to resolve cyclical import dependency
+        self.service_plugins[constants.FLAVORS] = (
+            flavors_db.FlavorManager(self))
+
     def _load_service_plugins(self):
         """Loads service plugins.
 
@@ -204,6 +210,9 @@ class NeutronManager(object):
                      "Description: %(desc)s",
                      {"type": plugin_inst.get_plugin_type(),
                       "desc": plugin_inst.get_plugin_description()})
+        # do it after the loading from conf to avoid conflict with
+        # configuration provided by unit tests.
+        self._load_flavors_manager()
 
     @classmethod
     @utils.synchronized("manager")
diff --git a/neutron/openstack/common/fileutils.py b/neutron/openstack/common/fileutils.py
deleted file mode 100644
index 1191ce8f461..00000000000
--- a/neutron/openstack/common/fileutils.py
+++ /dev/null
@@ -1,149 +0,0 @@
-# Copyright 2011 OpenStack Foundation.
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import contextlib
-import errno
-import logging
-import os
-import stat
-import tempfile
-
-from oslo_utils import excutils
-
-LOG = logging.getLogger(__name__)
-
-_FILE_CACHE = {}
-DEFAULT_MODE = stat.S_IRWXU | stat.S_IRWXG | stat.S_IRWXO
-
-
-def ensure_tree(path, mode=DEFAULT_MODE):
-    """Create a directory (and any ancestor directories required)
-
-    :param path: Directory to create
-    :param mode: Directory creation permissions
-    """
-    try:
-        os.makedirs(path, mode)
-    except OSError as exc:
-        if exc.errno == errno.EEXIST:
-            if not os.path.isdir(path):
-                raise
-        else:
-            raise
-
-
-def read_cached_file(filename, force_reload=False):
-    """Read from a file if it has been modified.
-
-    :param force_reload: Whether to reload the file.
-    :returns: A tuple with a boolean specifying if the data is fresh
-              or not.
-    """
-    global _FILE_CACHE
-
-    if force_reload:
-        delete_cached_file(filename)
-
-    reloaded = False
-    mtime = os.path.getmtime(filename)
-    cache_info = _FILE_CACHE.setdefault(filename, {})
-
-    if not cache_info or mtime > cache_info.get('mtime', 0):
-        LOG.debug("Reloading cached file %s", filename)
-        with open(filename) as fap:
-            cache_info['data'] = fap.read()
-        cache_info['mtime'] = mtime
-        reloaded = True
-    return (reloaded, cache_info['data'])
-
-
-def delete_cached_file(filename):
-    """Delete cached file if present.
-
-    :param filename: filename to delete
-    """
-    global _FILE_CACHE
-
-    if filename in _FILE_CACHE:
-        del _FILE_CACHE[filename]
-
-
-def delete_if_exists(path, remove=os.unlink):
-    """Delete a file, but ignore file not found error.
-
-    :param path: File to delete
-    :param remove: Optional function to remove passed path
-    """
-
-    try:
-        remove(path)
-    except OSError as e:
-        if e.errno != errno.ENOENT:
-            raise
-
-
-@contextlib.contextmanager
-def remove_path_on_error(path, remove=delete_if_exists):
-    """Protect code that wants to operate on PATH atomically.
-    Any exception will cause PATH to be removed.
-
-    :param path: File to work with
-    :param remove: Optional function to remove passed path
-    """
-
-    try:
-        yield
-    except Exception:
-        with excutils.save_and_reraise_exception():
-            remove(path)
-
-
-def file_open(*args, **kwargs):
-    """Open file
-
-    see built-in open() documentation for more details
-
-    Note: The reason this is kept in a separate module is to easily
-    be able to provide a stub module that doesn't alter system
-    state at all (for unit tests)
-    """
-    return open(*args, **kwargs)
-
-
-def write_to_tempfile(content, path=None, suffix='', prefix='tmp'):
-    """Create temporary file or use existing file.
-
-    This util is needed for creating temporary file with
-    specified content, suffix and prefix. If path is not None,
-    it will be used for writing content. If the path doesn't
-    exist it'll be created.
-
-    :param content: content for temporary file.
-    :param path: same as parameter 'dir' for mkstemp
-    :param suffix: same as parameter 'suffix' for mkstemp
-    :param prefix: same as parameter 'prefix' for mkstemp
-
-    For example: it can be used in database tests for creating
-    configuration files.
-    """
-    if path:
-        ensure_tree(path)
-
-    (fd, path) = tempfile.mkstemp(suffix=suffix, dir=path, prefix=prefix)
-    try:
-        os.write(fd, content)
-    finally:
-        os.close(fd)
-    return path
diff --git a/neutron/plugins/brocade/NeutronPlugin.py b/neutron/plugins/brocade/NeutronPlugin.py
index ea4a18ff388..306f84a28ae 100644
--- a/neutron/plugins/brocade/NeutronPlugin.py
+++ b/neutron/plugins/brocade/NeutronPlugin.py
@@ -54,7 +54,6 @@ from neutron.plugins.common import constants as svc_constants
 
 LOG = logging.getLogger(__name__)
 
-PLUGIN_VERSION = 0.88
 AGENT_OWNER_PREFIX = "network:"
 NOS_DRIVER = 'neutron.plugins.brocade.nos.nosdriver.NOSdriver'
@@ -481,10 +480,6 @@ class BrocadePluginV2(db_base_plugin_v2.NeutronDbPluginV2,
             'security-group' in self.supported_extension_aliases}}
         return binding
 
-    def get_plugin_version(self):
-        """Get version number of the plugin."""
-        return PLUGIN_VERSION
-
     @staticmethod
     def mac_reformat_62to34(interface_mac):
         """Transform MAC address format.
diff --git a/neutron/plugins/cisco/n1kv/n1kv_neutron_plugin.py b/neutron/plugins/cisco/n1kv/n1kv_neutron_plugin.py
index 63d37b79c85..31953df53db 100644
--- a/neutron/plugins/cisco/n1kv/n1kv_neutron_plugin.py
+++ b/neutron/plugins/cisco/n1kv/n1kv_neutron_plugin.py
@@ -32,7 +32,7 @@ from neutron.db import agentschedulers_db
 from neutron.db import db_base_plugin_v2
 from neutron.db import external_net_db
 from neutron.db import portbindings_db
-from neutron.db import quota_db
+from neutron.db.quota import driver
 from neutron.extensions import portbindings
 from neutron.extensions import providernet
 from neutron.i18n import _LW
@@ -59,7 +59,7 @@ class N1kvNeutronPluginV2(db_base_plugin_v2.NeutronDbPluginV2,
                           n1kv_db_v2.PolicyProfile_db_mixin,
                           network_db_v2.Credential_db_mixin,
                           agentschedulers_db.DhcpAgentSchedulerDbMixin,
-                          quota_db.DbQuotaDriver):
+                          driver.DbQuotaDriver):
 
     """
     Implement the Neutron abstractions using Cisco Nexus1000V.
diff --git a/neutron/plugins/common/constants.py b/neutron/plugins/common/constants.py
index 63947ae6fd1..edf52f5932b 100644
--- a/neutron/plugins/common/constants.py
+++ b/neutron/plugins/common/constants.py
@@ -22,6 +22,7 @@ FIREWALL = "FIREWALL"
 VPN = "VPN"
 METERING = "METERING"
 L3_ROUTER_NAT = "L3_ROUTER_NAT"
+FLAVORS = "FLAVORS"
 
 # Maps extension alias to service type
 EXT_TO_SERVICE_MAPPING = {
@@ -31,7 +32,8 @@ EXT_TO_SERVICE_MAPPING = {
     'fwaas': FIREWALL,
     'vpnaas': VPN,
     'metering': METERING,
-    'router': L3_ROUTER_NAT
+    'router': L3_ROUTER_NAT,
+    'flavors': FLAVORS
 }
 
 # Service operation status constants
diff --git a/neutron/plugins/ibm/sdnve_neutron_plugin.py b/neutron/plugins/ibm/sdnve_neutron_plugin.py
index 2c272250e91..ac4ae1a3bc6 100644
--- a/neutron/plugins/ibm/sdnve_neutron_plugin.py
+++ b/neutron/plugins/ibm/sdnve_neutron_plugin.py
@@ -31,7 +31,6 @@ from neutron.db import db_base_plugin_v2
 from neutron.db import external_net_db
 from neutron.db import l3_gwmode_db
 from neutron.db import portbindings_db
-from neutron.db import quota_db  # noqa
 from neutron.extensions import portbindings
 from neutron.i18n import _LE, _LI, _LW
 from neutron.plugins.ibm.common import config  # noqa
diff --git a/neutron/plugins/metaplugin/README b/neutron/plugins/metaplugin/README
deleted file mode 100644
index e2140b1e1c0..00000000000
--- a/neutron/plugins/metaplugin/README
+++ /dev/null
@@ -1,6 +0,0 @@
-# NOTE
-
-The main source codes of Metaplugin is now in https://github.com/ntt-sic/networking-metaplugin.
-They were moved from Neutron tree to there according to core-vendor-decomposition.
-Defining config and DB are still here according to the decomposition policy.
-Codes of 'flavor' extension and interface driver used by *-agent remain in Neutron tree too.
diff --git a/neutron/plugins/metaplugin/common/config.py b/neutron/plugins/metaplugin/common/config.py
deleted file mode 100644
index dfcbfe12080..00000000000
--- a/neutron/plugins/metaplugin/common/config.py
+++ /dev/null
@@ -1,78 +0,0 @@
-# Copyright 2012, Nachi Ueno, NTT MCL, Inc.
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-from oslo_config import cfg
-
-
-meta_plugin_opts = [
-    cfg.StrOpt(
-        'plugin_list',
-        default='',
-        help=_("Comma separated list of flavor:neutron_plugin for "
-               "plugins to load. Extension method is searched in the "
-               "list order and the first one is used.")),
-    cfg.StrOpt(
-        'l3_plugin_list',
-        default='',
-        help=_("Comma separated list of flavor:neutron_plugin for L3 "
-               "service plugins to load. This is intended for specifying "
-               "L2 plugins which support L3 functions. If you use a router "
-               "service plugin, set this blank.")),
-    cfg.StrOpt(
-        'default_flavor',
-        default='',
-        help=_("Default flavor to use, when flavor:network is not "
-               "specified at network creation.")),
-    cfg.StrOpt(
-        'default_l3_flavor',
-        default='',
-        help=_("Default L3 flavor to use, when flavor:router is not "
-               "specified at router creation. Ignored if 'l3_plugin_list' "
-               "is blank.")),
-    cfg.StrOpt(
-        'supported_extension_aliases',
-        default='',
-        help=_("Comma separated list of supported extension aliases.")),
-    cfg.StrOpt(
-        'extension_map',
-        default='',
-        help=_("Comma separated list of method:flavor to select specific "
-               "plugin for a method. This has priority over method search "
-               "order based on 'plugin_list'.")),
-    cfg.StrOpt(
-        'rpc_flavor',
-        default='',
-        help=_("Specifies flavor for plugin to handle 'q-plugin' RPC "
-               "requests.")),
-]
-
-proxy_plugin_opts = [
-    cfg.StrOpt('admin_user',
-               help=_("Admin user")),
-    cfg.StrOpt('admin_password',
-               help=_("Admin password"),
-               secret=True),
-    cfg.StrOpt('admin_tenant_name',
-               help=_("Admin tenant name")),
-    cfg.StrOpt('auth_url',
-               help=_("Authentication URL")),
-    cfg.StrOpt('auth_strategy', default='keystone',
-               help=_("The type of authentication to use")),
-    cfg.StrOpt('auth_region',
-               help=_("Authentication region")),
-]
-
-cfg.CONF.register_opts(meta_plugin_opts, "META")
-cfg.CONF.register_opts(proxy_plugin_opts, "PROXY")
diff --git a/neutron/plugins/metaplugin/meta_models_v2.py b/neutron/plugins/metaplugin/meta_models_v2.py
deleted file mode 100644
index 70d546edc44..00000000000
--- a/neutron/plugins/metaplugin/meta_models_v2.py
+++ /dev/null
@@ -1,41 +0,0 @@
-# Copyright 2012, Nachi Ueno, NTT MCL, Inc.
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import sqlalchemy as sa
-from sqlalchemy import Column, String
-
-from neutron.db import models_v2
-
-
-class NetworkFlavor(models_v2.model_base.BASEV2):
-    """Represents a binding of network_id to flavor."""
-    flavor = Column(String(255))
-    network_id = sa.Column(sa.String(36), sa.ForeignKey('networks.id',
-                                                        ondelete="CASCADE"),
-                           primary_key=True)
-
-    def __repr__(self):
-        return "<NetworkFlavor(%s,%s)>" % (self.flavor, self.network_id)
-
-
-class RouterFlavor(models_v2.model_base.BASEV2):
-    """Represents a binding of router_id to flavor."""
-    flavor = Column(String(255))
-    router_id = sa.Column(sa.String(36), sa.ForeignKey('routers.id',
-                                                       ondelete="CASCADE"),
-                          primary_key=True)
-
-    def __repr__(self):
-        return "<RouterFlavor(%s,%s)>" % (self.flavor, self.router_id)
diff --git a/neutron/plugins/ml2/README b/neutron/plugins/ml2/README
index 74f96f6f1f4..0c1fe4597c4 100644
--- a/neutron/plugins/ml2/README
+++ b/neutron/plugins/ml2/README
@@ -18,10 +18,10 @@ alternative L3 solutions. Additional service plugins can also be used
 with the ML2 core plugin.
 
 Drivers within ML2 implement separately extensible sets of network
-types and of mechanisms for accessing networks of those types. Unlike
-with the metaplugin, multiple mechanisms can be used simultaneously to
-access different ports of the same virtual network. Mechanisms can
-utilize L2 agents via RPC and/or interact with external devices or
+types and of mechanisms for accessing networks of those
+types. Multiple mechanisms can be used simultaneously to access
+different ports of the same virtual network. Mechanisms can utilize L2
+agents via RPC and/or interact with external devices or
 controllers. By utilizing the multiprovidernet extension, virtual
 networks can be composed of multiple segments of the same or different
 types.
 Type and mechanism drivers are loaded as python entrypoints
diff --git a/neutron/plugins/ml2/drivers/cisco/apic/apic_model.py b/neutron/plugins/ml2/drivers/cisco/apic/apic_model.py
deleted file mode 100644
index 44bd2e22cd8..00000000000
--- a/neutron/plugins/ml2/drivers/cisco/apic/apic_model.py
+++ /dev/null
@@ -1,193 +0,0 @@
-# Copyright (c) 2014 Cisco Systems Inc.
-# All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import sqlalchemy as sa
-from sqlalchemy import orm
-
-from neutron.db import api as db_api
-from neutron.db import model_base
-
-from neutron.db import models_v2
-from neutron.plugins.ml2 import models as models_ml2
-
-
-class RouterContract(model_base.BASEV2, models_v2.HasTenant):
-
-    """Contracts created on the APIC.
-
-    tenant_id represents the owner (APIC side) of the contract.
-    router_id is the UUID of the router (Neutron side) this contract is
-    referring to.
-    """
-
-    __tablename__ = 'cisco_ml2_apic_contracts'
-
-    router_id = sa.Column(sa.String(64), sa.ForeignKey('routers.id',
-                                                       ondelete='CASCADE'),
-                          primary_key=True)
-
-
-class HostLink(model_base.BASEV2):
-
-    """Connectivity of host links."""
-
-    __tablename__ = 'cisco_ml2_apic_host_links'
-
-    host = sa.Column(sa.String(255), nullable=False, primary_key=True)
-    ifname = sa.Column(sa.String(64), nullable=False, primary_key=True)
-    ifmac = sa.Column(sa.String(32), nullable=True)
-    swid = sa.Column(sa.String(32), nullable=False)
-    module = sa.Column(sa.String(32), nullable=False)
-    port = sa.Column(sa.String(32), nullable=False)
-
-
-class ApicName(model_base.BASEV2):
-    """Mapping of names created on the APIC."""
-
-    __tablename__ = 'cisco_ml2_apic_names'
-
-    neutron_id = sa.Column(sa.String(36), nullable=False, primary_key=True)
-    neutron_type = sa.Column(sa.String(32), nullable=False, primary_key=True)
-    apic_name = sa.Column(sa.String(255), nullable=False)
-
-
-class ApicDbModel(object):
-
-    """DB Model to manage all APIC DB interactions."""
-
-    def __init__(self):
-        self.session = db_api.get_session()
-
-    def get_contract_for_router(self, router_id):
-        """Returns the specified router's contract."""
-        return self.session.query(RouterContract).filter_by(
-            router_id=router_id).first()
-
-    def write_contract_for_router(self, tenant_id, router_id):
-        """Stores a new contract for the given tenant."""
-        contract = RouterContract(tenant_id=tenant_id,
-                                  router_id=router_id)
-        with self.session.begin(subtransactions=True):
-            self.session.add(contract)
-        return contract
-
-    def update_contract_for_router(self, tenant_id, router_id):
-        with self.session.begin(subtransactions=True):
-            contract = self.session.query(RouterContract).filter_by(
-                router_id=router_id).with_lockmode('update').first()
-            if contract:
-                contract.tenant_id = tenant_id
-                self.session.merge(contract)
-            else:
-                self.write_contract_for_router(tenant_id, router_id)
-
-    def delete_contract_for_router(self, router_id):
-        with self.session.begin(subtransactions=True):
-            try:
-                self.session.query(RouterContract).filter_by(
-                    router_id=router_id).delete()
-            except orm.exc.NoResultFound:
-                return
-
-    def add_hostlink(self, host, ifname, ifmac, swid, module, port):
-        link = HostLink(host=host, ifname=ifname, ifmac=ifmac,
-                        swid=swid, module=module, port=port)
-        with self.session.begin(subtransactions=True):
-            self.session.merge(link)
-
-    def get_hostlinks(self):
-        return self.session.query(HostLink).all()
-
-    def get_hostlink(self, host, ifname):
-        return self.session.query(HostLink).filter_by(
-            host=host, ifname=ifname).first()
-
-    def get_hostlinks_for_host(self, host):
-        return self.session.query(HostLink).filter_by(
-            host=host).all()
-
-    def get_hostlinks_for_host_switchport(self, host, swid, module, port):
-        return self.session.query(HostLink).filter_by(
-            host=host, swid=swid, module=module, port=port).all()
-
-    def get_hostlinks_for_switchport(self, swid, module, port):
-        return self.session.query(HostLink).filter_by(
-            swid=swid, module=module, port=port).all()
-
-    def delete_hostlink(self, host, ifname):
-        with self.session.begin(subtransactions=True):
-            try:
-                self.session.query(HostLink).filter_by(host=host,
-                                                       ifname=ifname).delete()
-            except orm.exc.NoResultFound:
-                return
-
-    def get_switches(self):
-        return self.session.query(HostLink.swid).distinct()
-
-    def get_modules_for_switch(self, swid):
-        return self.session.query(
-            HostLink.module).filter_by(swid=swid).distinct()
-
-    def get_ports_for_switch_module(self, swid, module):
-        return self.session.query(
-            HostLink.port).filter_by(swid=swid, module=module).distinct()
-
-    def get_switch_and_port_for_host(self, host):
-        return self.session.query(
-            HostLink.swid, HostLink.module, HostLink.port).filter_by(
-                host=host).distinct()
-
-    def get_tenant_network_vlan_for_host(self, host):
-        pb = models_ml2.PortBinding
-        po = models_v2.Port
-        ns = models_ml2.NetworkSegment
-        return self.session.query(
po.tenant_id, ns.network_id, ns.segmentation_id).filter( - po.id == pb.port_id).filter(pb.host == host).filter( - po.network_id == ns.network_id).distinct() - - def add_apic_name(self, neutron_id, neutron_type, apic_name): - name = ApicName(neutron_id=neutron_id, - neutron_type=neutron_type, - apic_name=apic_name) - with self.session.begin(subtransactions=True): - self.session.add(name) - - def update_apic_name(self, neutron_id, neutron_type, apic_name): - with self.session.begin(subtransactions=True): - name = self.session.query(ApicName).filter_by( - neutron_id=neutron_id, - neutron_type=neutron_type).with_lockmode('update').first() - if name: - name.apic_name = apic_name - self.session.merge(name) - else: - self.add_apic_name(neutron_id, neutron_type, apic_name) - - def get_apic_names(self): - return self.session.query(ApicName).all() - - def get_apic_name(self, neutron_id, neutron_type): - return self.session.query(ApicName.apic_name).filter_by( - neutron_id=neutron_id, neutron_type=neutron_type).first() - - def delete_apic_name(self, neutron_id): - with self.session.begin(subtransactions=True): - try: - self.session.query(ApicName).filter_by( - neutron_id=neutron_id).delete() - except orm.exc.NoResultFound: - return diff --git a/neutron/plugins/ml2/drivers/cisco/apic/apic_sync.py b/neutron/plugins/ml2/drivers/cisco/apic/apic_sync.py deleted file mode 100644 index fca4e2c1188..00000000000 --- a/neutron/plugins/ml2/drivers/cisco/apic/apic_sync.py +++ /dev/null @@ -1,111 +0,0 @@ -# Copyright (c) 2014 Cisco Systems Inc. -# All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. 
You may obtain -# a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT -# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the -# License for the specific language governing permissions and limitations -# under the License. - -from oslo_log import log -from oslo_service import loopingcall - -from neutron.common import constants as n_constants -from neutron import context -from neutron.i18n import _LW -from neutron import manager -from neutron.plugins.ml2 import db as l2_db -from neutron.plugins.ml2 import driver_context - -LOG = log.getLogger(__name__) - - -class SynchronizerBase(object): - - def __init__(self, driver, interval=None): - self.core_plugin = manager.NeutronManager.get_plugin() - self.driver = driver - self.interval = interval - - def sync(self, f, *args, **kwargs): - """Fire synchronization based on interval. 
- - Interval can be 0 for 'sync once' >0 for 'sync periodically' and - <0 for 'no sync' - """ - if self.interval: - if self.interval > 0: - loop_call = loopingcall.FixedIntervalLoopingCall(f, *args, - **kwargs) - loop_call.start(interval=self.interval) - return loop_call - else: - # Fire once - f(*args, **kwargs) - - -class ApicBaseSynchronizer(SynchronizerBase): - - def sync_base(self): - self.sync(self._sync_base) - - def _sync_base(self): - ctx = context.get_admin_context() - # Sync Networks - for network in self.core_plugin.get_networks(ctx): - mech_context = driver_context.NetworkContext(self.core_plugin, ctx, - network) - try: - self.driver.create_network_postcommit(mech_context) - except Exception: - LOG.warn(_LW("Create network postcommit failed for " - "network %s"), network['id']) - - # Sync Subnets - for subnet in self.core_plugin.get_subnets(ctx): - mech_context = driver_context.SubnetContext(self.core_plugin, ctx, - subnet) - try: - self.driver.create_subnet_postcommit(mech_context) - except Exception: - LOG.warn(_LW("Create subnet postcommit failed for" - " subnet %s"), subnet['id']) - - # Sync Ports (compute/gateway/dhcp) - for port in self.core_plugin.get_ports(ctx): - _, binding = l2_db.get_locked_port_and_binding(ctx.session, - port['id']) - network = self.core_plugin.get_network(ctx, port['network_id']) - mech_context = driver_context.PortContext(self.core_plugin, ctx, - port, network, binding, - []) - try: - self.driver.create_port_postcommit(mech_context) - except Exception: - LOG.warn(_LW("Create port postcommit failed for" - " port %s"), port['id']) - - -class ApicRouterSynchronizer(SynchronizerBase): - - def sync_router(self): - self.sync(self._sync_router) - - def _sync_router(self): - ctx = context.get_admin_context() - # Sync Router Interfaces - filters = {'device_owner': [n_constants.DEVICE_OWNER_ROUTER_INTF]} - for interface in self.core_plugin.get_ports(ctx, filters=filters): - try: - self.driver.add_router_interface_postcommit( - ctx, 
interface['device_id'], - {'port_id': interface['id']}) - except Exception: - LOG.warn(_LW("Add interface postcommit failed for " - "port %s"), interface['id']) diff --git a/neutron/plugins/ml2/drivers/cisco/apic/apic_topology.py b/neutron/plugins/ml2/drivers/cisco/apic/apic_topology.py deleted file mode 100644 index 8a1be65a1b0..00000000000 --- a/neutron/plugins/ml2/drivers/cisco/apic/apic_topology.py +++ /dev/null @@ -1,343 +0,0 @@ -# Copyright (c) 2014 Cisco Systems Inc. -# All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. You may obtain -# a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT -# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the -# License for the specific language governing permissions and limitations -# under the License. 
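The `SynchronizerBase.sync()` method in the deleted `apic_sync.py` above dispatches on the configured interval. Note that the code's behavior does not match its docstring: as written, a negative interval fires once and a zero/None interval does nothing, the reverse of what the docstring claims. A minimal standalone restatement of the code's actual control flow (names hypothetical, with a stand-in for `loopingcall.FixedIntervalLoopingCall`):

```python
def dispatch_sync(f, interval, start_periodic):
    """Mirror the deleted SynchronizerBase.sync() control flow.

    interval > 0       -> start a periodic loop and return it
    interval < 0       -> fire f() once, immediately
    interval 0 or None -> do nothing (falsy check short-circuits)
    """
    if interval:
        if interval > 0:
            return start_periodic(f, interval)
        f()


calls = []
dispatch_sync(lambda: calls.append('once'), -1, None)   # fires once
dispatch_sync(lambda: calls.append('never'), 0, None)   # no-op
# Stand-in for FixedIntervalLoopingCall: just report what would start.
loop = dispatch_sync(lambda: None, 5, lambda f, i: ('loop', i))
```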
- -import re -import sys - -import eventlet -eventlet.monkey_patch() - -from oslo_concurrency import lockutils -from oslo_config import cfg -from oslo_log import log as logging -import oslo_messaging -from oslo_service import periodic_task -from oslo_service import service as svc - -from neutron.agent.common import config -from neutron.agent.linux import ip_lib -from neutron.agent.linux import utils -from neutron.common import config as common_cfg -from neutron.common import rpc -from neutron.common import utils as neutron_utils -from neutron.db import agents_db -from neutron.i18n import _LE, _LI -from neutron import manager -from neutron.plugins.ml2.drivers.cisco.apic import mechanism_apic as ma -from neutron.plugins.ml2.drivers import type_vlan # noqa - -from neutron import service - -ACI_PORT_DESCR_FORMATS = [ - r'topology/pod-1/node-(\d+)/sys/conng/path-\[eth(\d+)/(\d+)\]', - r'topology/pod-1/paths-(\d+)/pathep-\[eth(\d+)/(\d+)\]', -] -AGENT_FORCE_UPDATE_COUNT = 100 -BINARY_APIC_SERVICE_AGENT = 'neutron-cisco-apic-service-agent' -BINARY_APIC_HOST_AGENT = 'neutron-cisco-apic-host-agent' -TOPIC_APIC_SERVICE = 'apic-service' -TYPE_APIC_SERVICE_AGENT = 'cisco-apic-service-agent' -TYPE_APIC_HOST_AGENT = 'cisco-apic-host-agent' - - -LOG = logging.getLogger(__name__) - - -class ApicTopologyService(manager.Manager): - - target = oslo_messaging.Target(version='1.1') - - def __init__(self, host=None): - if host is None: - host = neutron_utils.get_hostname() - super(ApicTopologyService, self).__init__(host=host) - - self.conf = cfg.CONF.ml2_cisco_apic - self.conn = None - self.peers = {} - self.invalid_peers = [] - self.dispatcher = None - self.state = None - self.state_agent = None - self.topic = TOPIC_APIC_SERVICE - self.apic_manager = ma.APICMechanismDriver.get_apic_manager(False) - - def init_host(self): - LOG.info(_LI("APIC service agent starting ...")) - self.state = { - 'binary': BINARY_APIC_SERVICE_AGENT, - 'host': self.host, - 'topic': self.topic, - 
'configurations': {}, - 'start_flag': True, - 'agent_type': TYPE_APIC_SERVICE_AGENT, - } - - self.conn = rpc.create_connection(new=True) - self.dispatcher = [self, agents_db.AgentExtRpcCallback()] - self.conn.create_consumer( - self.topic, self.dispatcher, fanout=True) - self.conn.consume_in_threads() - - def after_start(self): - LOG.info(_LI("APIC service agent started")) - - def report_send(self, context): - if not self.state_agent: - return - LOG.debug("APIC service agent: sending report state") - - try: - self.state_agent.report_state(context, self.state) - self.state.pop('start_flag', None) - except AttributeError: - # This means the server does not support report_state - # ignore it - return - except Exception: - LOG.exception(_LE("APIC service agent: failed in reporting state")) - - @lockutils.synchronized('apic_service') - def update_link(self, context, - host, interface, mac, - switch, module, port): - LOG.debug("APIC service agent: received update_link: %s", - ", ".join(map(str, - [host, interface, mac, switch, module, port]))) - - nlink = (host, interface, mac, switch, module, port) - clink = self.peers.get((host, interface), None) - - if switch == 0: - # this is a link delete, remove it - if clink is not None: - self.apic_manager.remove_hostlink(*clink) - self.peers.pop((host, interface)) - else: - if clink is None: - # add new link to database - self.apic_manager.add_hostlink(*nlink) - self.peers[(host, interface)] = nlink - elif clink != nlink: - # delete old link and add new one (don't update in place) - self.apic_manager.remove_hostlink(*clink) - self.peers.pop((host, interface)) - self.apic_manager.add_hostlink(*nlink) - self.peers[(host, interface)] = nlink - - -class ApicTopologyServiceNotifierApi(object): - - def __init__(self): - target = oslo_messaging.Target(topic=TOPIC_APIC_SERVICE, version='1.0') - self.client = rpc.get_client(target) - - def update_link(self, context, host, interface, mac, switch, module, port): - cctxt = 
self.client.prepare(version='1.1', fanout=True) - cctxt.cast(context, 'update_link', host=host, interface=interface, - mac=mac, switch=switch, module=module, port=port) - - def delete_link(self, context, host, interface): - cctxt = self.client.prepare(version='1.1', fanout=True) - cctxt.cast(context, 'delete_link', host=host, interface=interface, - mac=None, switch=0, module=0, port=0) - - -class ApicTopologyAgent(manager.Manager): - def __init__(self, host=None): - if host is None: - host = neutron_utils.get_hostname() - super(ApicTopologyAgent, self).__init__(host=host) - - self.conf = cfg.CONF.ml2_cisco_apic - self.count_current = 0 - self.count_force_send = AGENT_FORCE_UPDATE_COUNT - self.interfaces = {} - self.lldpcmd = None - self.peers = {} - self.port_desc_re = map(re.compile, ACI_PORT_DESCR_FORMATS) - self.service_agent = ApicTopologyServiceNotifierApi() - self.state = None - self.state_agent = None - self.topic = TOPIC_APIC_SERVICE - self.uplink_ports = [] - self.invalid_peers = [] - - def init_host(self): - LOG.info(_LI("APIC host agent: agent starting on %s"), self.host) - self.state = { - 'binary': BINARY_APIC_HOST_AGENT, - 'host': self.host, - 'topic': self.topic, - 'configurations': {}, - 'start_flag': True, - 'agent_type': TYPE_APIC_HOST_AGENT, - } - - self.uplink_ports = [] - for inf in self.conf.apic_host_uplink_ports: - if ip_lib.device_exists(inf): - self.uplink_ports.append(inf) - else: - # ignore unknown interfaces - LOG.error(_LE("No such interface (ignored): %s"), inf) - self.lldpcmd = ['lldpctl', '-f', 'keyvalue'] + self.uplink_ports - - def after_start(self): - LOG.info(_LI("APIC host agent: started on %s"), self.host) - - @periodic_task.periodic_task - def _check_for_new_peers(self, context): - LOG.debug("APIC host agent: _check_for_new_peers") - - if not self.lldpcmd: - return - try: - # Check if we must send update even if there is no change - force_send = False - self.count_current += 1 - if self.count_current >= self.count_force_send: 
- force_send = True - self.count_current = 0 - - # Check for new peers - new_peers = self._get_peers() - new_peers = self._valid_peers(new_peers) - - # Make a copy of current interfaces - curr_peers = {} - for interface in self.peers: - curr_peers[interface] = self.peers[interface] - # Based curr -> new updates, add the new interfaces - self.peers = {} - for interface in new_peers: - peer = new_peers[interface] - self.peers[interface] = peer - if (interface in curr_peers and - curr_peers[interface] != peer): - self.service_agent.update_link( - context, peer[0], peer[1], None, 0, 0, 0) - if (interface not in curr_peers or - curr_peers[interface] != peer or - force_send): - self.service_agent.update_link(context, *peer) - if interface in curr_peers: - curr_peers.pop(interface) - - # Any interface still in curr_peers need to be deleted - for peer in curr_peers.values(): - self.service_agent.update_link( - context, peer[0], peer[1], None, 0, 0, 0) - - except Exception: - LOG.exception(_LE("APIC service agent: exception in LLDP parsing")) - - def _get_peers(self): - peers = {} - lldpkeys = utils.execute(self.lldpcmd, run_as_root=True) - for line in lldpkeys.splitlines(): - if '=' not in line: - continue - fqkey, value = line.split('=', 1) - lldp, interface, key = fqkey.split('.', 2) - if key == 'port.descr': - for regexp in self.port_desc_re: - match = regexp.match(value) - if match: - mac = self._get_mac(interface) - switch, module, port = match.group(1, 2, 3) - peer = (self.host, interface, mac, - switch, module, port) - if interface not in peers: - peers[interface] = [] - peers[interface].append(peer) - return peers - - def _valid_peers(self, peers): - # Reduce the peers array to one valid peer per interface - # NOTE: - # There is a bug in lldpd daemon that it keeps reporting - # old peers even after their updates have stopped - # we keep track of that report remove them from peers - - valid_peers = {} - invalid_peers = [] - for interface in peers: - curr_peer = None 
- for peer in peers[interface]: - if peer in self.invalid_peers or curr_peer: - invalid_peers.append(peer) - else: - curr_peer = peer - if curr_peer is not None: - valid_peers[interface] = curr_peer - - self.invalid_peers = invalid_peers - return valid_peers - - def _get_mac(self, interface): - if interface in self.interfaces: - return self.interfaces[interface] - try: - mac = ip_lib.IPDevice(interface).link.address - self.interfaces[interface] = mac - return mac - except Exception: - # we can safely ignore it, it is only needed for debugging - LOG.exception( - _LE("APIC service agent: can not get MACaddr for %s"), - interface) - - def report_send(self, context): - if not self.state_agent: - return - LOG.debug("APIC host agent: sending report state") - - try: - self.state_agent.report_state(context, self.state) - self.state.pop('start_flag', None) - except AttributeError: - # This means the server does not support report_state - # ignore it - return - except Exception: - LOG.exception(_LE("APIC host agent: failed in reporting state")) - - -def launch(binary, manager, topic=None): - cfg.CONF(project='neutron') - common_cfg.init(sys.argv[1:]) - config.setup_logging() - report_period = cfg.CONF.ml2_cisco_apic.apic_agent_report_interval - poll_period = cfg.CONF.ml2_cisco_apic.apic_agent_poll_interval - server = service.Service.create( - binary=binary, manager=manager, topic=topic, - report_interval=report_period, periodic_interval=poll_period) - svc.launch(cfg.CONF, server).wait() - - -def service_main(): - launch( - BINARY_APIC_SERVICE_AGENT, - 'neutron.plugins.ml2.drivers.' + - 'cisco.apic.apic_topology.ApicTopologyService', - TOPIC_APIC_SERVICE) - - -def agent_main(): - launch( - BINARY_APIC_HOST_AGENT, - 'neutron.plugins.ml2.drivers.' 
+ - 'cisco.apic.apic_topology.ApicTopologyAgent') diff --git a/neutron/plugins/ml2/drivers/cisco/apic/config.py b/neutron/plugins/ml2/drivers/cisco/apic/config.py deleted file mode 100644 index c5edc0b8309..00000000000 --- a/neutron/plugins/ml2/drivers/cisco/apic/config.py +++ /dev/null @@ -1,138 +0,0 @@ -# Copyright (c) 2014 OpenStack Foundation -# All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. You may obtain -# a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT -# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the -# License for the specific language governing permissions and limitations -# under the License. - -from oslo_config import cfg - - -# oslo_config limits ${var} expansion to global variables -# That is why apic_system_id as a global variable -global_opts = [ - cfg.StrOpt('apic_system_id', - default='openstack', - help=_("Prefix for APIC domain/names/profiles created")), -] - - -cfg.CONF.register_opts(global_opts) - - -apic_opts = [ - cfg.ListOpt('apic_hosts', - default=[], - help=_("An ordered list of host names or IP addresses of " - "the APIC controller(s).")), - cfg.StrOpt('apic_username', - help=_("Username for the APIC controller")), - cfg.StrOpt('apic_password', - help=_("Password for the APIC controller"), secret=True), - cfg.StrOpt('apic_name_mapping', - default='use_name', - help=_("Name mapping strategy to use: use_uuid | use_name")), - cfg.BoolOpt('apic_use_ssl', default=True, - help=_("Use SSL to connect to the APIC controller")), - cfg.StrOpt('apic_domain_name', - default='${apic_system_id}', - help=_("Name for the domain created on APIC")), - cfg.StrOpt('apic_app_profile_name', - default='${apic_system_id}_app', - help=_("Name for the 
app profile used for Openstack")), - cfg.StrOpt('apic_vlan_ns_name', - default='${apic_system_id}_vlan_ns', - help=_("Name for the vlan namespace to be used for Openstack")), - cfg.StrOpt('apic_node_profile', - default='${apic_system_id}_node_profile', - help=_("Name of the node profile to be created")), - cfg.StrOpt('apic_entity_profile', - default='${apic_system_id}_entity_profile', - help=_("Name of the entity profile to be created")), - cfg.StrOpt('apic_function_profile', - default='${apic_system_id}_function_profile', - help=_("Name of the function profile to be created")), - cfg.StrOpt('apic_lacp_profile', - default='${apic_system_id}_lacp_profile', - help=_("Name of the LACP profile to be created")), - cfg.ListOpt('apic_host_uplink_ports', - default=[], - help=_('The uplink ports to check for ACI connectivity')), - cfg.ListOpt('apic_vpc_pairs', - default=[], - help=_('The switch pairs for VPC connectivity')), - cfg.StrOpt('apic_vlan_range', - default='2:4093', - help=_("Range of VLAN's to be used for Openstack")), - cfg.IntOpt('apic_sync_interval', - default=0, - help=_("Synchronization interval in seconds")), - cfg.FloatOpt('apic_agent_report_interval', - default=30, - help=_('Interval between agent status updates (in sec)')), - cfg.FloatOpt('apic_agent_poll_interval', - default=2, - help=_('Interval between agent poll for topology (in sec)')), -] - - -cfg.CONF.register_opts(apic_opts, "ml2_cisco_apic") - - -def _get_specific_config(prefix): - """retrieve config in the format [:].""" - conf_dict = {} - multi_parser = cfg.MultiConfigParser() - multi_parser.read(cfg.CONF.config_file) - for parsed_file in multi_parser.parsed: - for parsed_item in parsed_file.keys(): - if parsed_item.startswith(prefix): - switch, switch_id = parsed_item.split(':') - if switch.lower() == prefix: - conf_dict[switch_id] = parsed_file[parsed_item].items() - return conf_dict - - -def create_switch_dictionary(): - switch_dict = {} - conf = _get_specific_config('apic_switch') - for 
switch_id in conf: - switch_dict[switch_id] = switch_dict.get(switch_id, {}) - for host_list, port in conf[switch_id]: - hosts = host_list.split(',') - port = port[0] - switch_dict[switch_id][port] = ( - switch_dict[switch_id].get(port, []) + hosts) - return switch_dict - - -def create_vpc_dictionary(): - vpc_dict = {} - for pair in cfg.CONF.ml2_cisco_apic.apic_vpc_pairs: - pair_tuple = pair.split(':') - if (len(pair_tuple) != 2 or - any(map(lambda x: not x.isdigit(), pair_tuple))): - # Validation error, ignore this item - continue - vpc_dict[pair_tuple[0]] = pair_tuple[1] - vpc_dict[pair_tuple[1]] = pair_tuple[0] - return vpc_dict - - -def create_external_network_dictionary(): - router_dict = {} - conf = _get_specific_config('apic_external_network') - for net_id in conf: - router_dict[net_id] = router_dict.get(net_id, {}) - for key, value in conf[net_id]: - router_dict[net_id][key] = value[0] if value else None - - return router_dict diff --git a/neutron/plugins/ml2/drivers/cisco/apic/mechanism_apic.py b/neutron/plugins/ml2/drivers/cisco/apic/mechanism_apic.py deleted file mode 100644 index bb40f26bb27..00000000000 --- a/neutron/plugins/ml2/drivers/cisco/apic/mechanism_apic.py +++ /dev/null @@ -1,287 +0,0 @@ -# Copyright (c) 2014 Cisco Systems Inc. -# All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. You may obtain -# a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT -# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the -# License for the specific language governing permissions and limitations -# under the License. 
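`create_vpc_dictionary()` in the deleted `config.py` above turns `switch:switch` pair strings into a symmetric peer-lookup table, silently skipping malformed entries. The same parsing can be restated self-contained, taking the pair list as an argument instead of reading `cfg.CONF.ml2_cisco_apic.apic_vpc_pairs`:

```python
def create_vpc_dictionary(pairs):
    """Map each switch id in an 'x:y' pair to its VPC peer."""
    vpc_dict = {}
    for pair in pairs:
        parts = pair.split(':')
        if len(parts) != 2 or not all(p.isdigit() for p in parts):
            continue  # validation error, ignore this item
        vpc_dict[parts[0]] = parts[1]
        vpc_dict[parts[1]] = parts[0]
    return vpc_dict
```

For example, `create_vpc_dictionary(['201:202', 'bogus'])` yields `{'201': '202', '202': '201'}`: the malformed entry is dropped and each valid pair is indexed from both sides.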
- -from apicapi import apic_manager -from keystoneclient.v2_0 import client as keyclient -import netaddr -from oslo_concurrency import lockutils -from oslo_config import cfg -from oslo_log import log - -from neutron.common import constants as n_constants -from neutron.plugins.common import constants -from neutron.plugins.ml2 import driver_api as api -from neutron.plugins.ml2.drivers.cisco.apic import apic_model -from neutron.plugins.ml2.drivers.cisco.apic import apic_sync -from neutron.plugins.ml2.drivers.cisco.apic import config -from neutron.plugins.ml2 import models - - -LOG = log.getLogger(__name__) - - -class APICMechanismDriver(api.MechanismDriver): - - @staticmethod - def get_apic_manager(client=True): - apic_config = cfg.CONF.ml2_cisco_apic - network_config = { - 'vlan_ranges': cfg.CONF.ml2_type_vlan.network_vlan_ranges, - 'switch_dict': config.create_switch_dictionary(), - 'vpc_dict': config.create_vpc_dictionary(), - 'external_network_dict': - config.create_external_network_dictionary(), - } - apic_system_id = cfg.CONF.apic_system_id - keyclient_param = keyclient if client else None - keystone_authtoken = cfg.CONF.keystone_authtoken if client else None - return apic_manager.APICManager(apic_model.ApicDbModel(), log, - network_config, apic_config, - keyclient_param, keystone_authtoken, - apic_system_id) - - @staticmethod - def get_base_synchronizer(inst): - apic_config = cfg.CONF.ml2_cisco_apic - return apic_sync.ApicBaseSynchronizer(inst, - apic_config.apic_sync_interval) - - @staticmethod - def get_router_synchronizer(inst): - apic_config = cfg.CONF.ml2_cisco_apic - return apic_sync.ApicRouterSynchronizer(inst, - apic_config.apic_sync_interval) - - def initialize(self): - # initialize apic - self.apic_manager = APICMechanismDriver.get_apic_manager() - self.name_mapper = self.apic_manager.apic_mapper - self.synchronizer = None - self.apic_manager.ensure_infra_created_on_apic() - self.apic_manager.ensure_bgp_pod_policy_created_on_apic() - - def 
sync_init(f): - def inner(inst, *args, **kwargs): - if not inst.synchronizer: - inst.synchronizer = ( - APICMechanismDriver.get_base_synchronizer(inst)) - inst.synchronizer.sync_base() - # pylint: disable=not-callable - return f(inst, *args, **kwargs) - return inner - - @lockutils.synchronized('apic-portlock') - def _perform_path_port_operations(self, context, port): - # Get network - network_id = context.network.current['id'] - anetwork_id = self.name_mapper.network(context, network_id) - # Get tenant details from port context - tenant_id = context.current['tenant_id'] - tenant_id = self.name_mapper.tenant(context, tenant_id) - - # Get segmentation id - segment = context.top_bound_segment - if not segment: - LOG.debug("Port %s is not bound to a segment", port) - return - seg = None - if (segment.get(api.NETWORK_TYPE) in [constants.TYPE_VLAN]): - seg = segment.get(api.SEGMENTATION_ID) - # hosts on which this vlan is provisioned - host = context.host - # Create a static path attachment for the host/epg/switchport combo - with self.apic_manager.apic.transaction() as trs: - self.apic_manager.ensure_path_created_for_port( - tenant_id, anetwork_id, host, seg, transaction=trs) - - def _perform_gw_port_operations(self, context, port): - router_id = port.get('device_id') - network = context.network.current - anetwork_id = self.name_mapper.network(context, network['id']) - router_info = self.apic_manager.ext_net_dict.get(network['name']) - - if router_id and router_info: - address = router_info['cidr_exposed'] - next_hop = router_info['gateway_ip'] - encap = router_info.get('encap') # No encap if None - switch = router_info['switch'] - module, sport = router_info['port'].split('/') - with self.apic_manager.apic.transaction() as trs: - # Get/Create contract - arouter_id = self.name_mapper.router(context, router_id) - cid = self.apic_manager.get_router_contract(arouter_id) - # Ensure that the external ctx exists - self.apic_manager.ensure_context_enforced() - # Create 
External Routed Network and configure it - self.apic_manager.ensure_external_routed_network_created( - anetwork_id, transaction=trs) - self.apic_manager.ensure_logical_node_profile_created( - anetwork_id, switch, module, sport, encap, - address, transaction=trs) - self.apic_manager.ensure_static_route_created( - anetwork_id, switch, next_hop, transaction=trs) - self.apic_manager.ensure_external_epg_created( - anetwork_id, transaction=trs) - self.apic_manager.ensure_external_epg_consumed_contract( - anetwork_id, cid, transaction=trs) - self.apic_manager.ensure_external_epg_provided_contract( - anetwork_id, cid, transaction=trs) - - def _perform_port_operations(self, context): - # Get port - port = context.current - # Check if a compute port - if context.host: - self._perform_path_port_operations(context, port) - if port.get('device_owner') == n_constants.DEVICE_OWNER_ROUTER_GW: - self._perform_gw_port_operations(context, port) - - def _delete_contract(self, context): - port = context.current - network_id = self.name_mapper.network( - context, context.network.current['id']) - arouter_id = self.name_mapper.router(context, - port.get('device_id')) - self.apic_manager.delete_external_epg_contract(arouter_id, - network_id) - - def _get_active_path_count(self, context): - return context._plugin_context.session.query( - models.PortBinding).filter_by( - host=context.host, segment=context._binding.segment).count() - - @lockutils.synchronized('apic-portlock') - def _delete_port_path(self, context, atenant_id, anetwork_id): - if not self._get_active_path_count(context): - self.apic_manager.ensure_path_deleted_for_port( - atenant_id, anetwork_id, - context.host) - - def _delete_path_if_last(self, context): - if not self._get_active_path_count(context): - tenant_id = context.current['tenant_id'] - atenant_id = self.name_mapper.tenant(context, tenant_id) - network_id = context.network.current['id'] - anetwork_id = self.name_mapper.network(context, network_id) - 
self._delete_port_path(context, atenant_id, anetwork_id) - - def _get_subnet_info(self, context, subnet): - if subnet['gateway_ip']: - tenant_id = subnet['tenant_id'] - network_id = subnet['network_id'] - network = context._plugin.get_network(context._plugin_context, - network_id) - if not network.get('router:external'): - cidr = netaddr.IPNetwork(subnet['cidr']) - gateway_ip = '%s/%s' % (subnet['gateway_ip'], - str(cidr.prefixlen)) - - # Convert to APIC IDs - tenant_id = self.name_mapper.tenant(context, tenant_id) - network_id = self.name_mapper.network(context, network_id) - return tenant_id, network_id, gateway_ip - - @sync_init - def create_port_postcommit(self, context): - self._perform_port_operations(context) - - @sync_init - def update_port_postcommit(self, context): - self._perform_port_operations(context) - - def delete_port_postcommit(self, context): - port = context.current - # Check if a compute port - if context.host: - self._delete_path_if_last(context) - if port.get('device_owner') == n_constants.DEVICE_OWNER_ROUTER_GW: - self._delete_contract(context) - - @sync_init - def create_network_postcommit(self, context): - if not context.current.get('router:external'): - tenant_id = context.current['tenant_id'] - network_id = context.current['id'] - - # Convert to APIC IDs - tenant_id = self.name_mapper.tenant(context, tenant_id) - network_id = self.name_mapper.network(context, network_id) - - # Create BD and EPG for this network - with self.apic_manager.apic.transaction() as trs: - self.apic_manager.ensure_bd_created_on_apic(tenant_id, - network_id, - transaction=trs) - self.apic_manager.ensure_epg_created( - tenant_id, network_id, transaction=trs) - - @sync_init - def update_network_postcommit(self, context): - super(APICMechanismDriver, self).update_network_postcommit(context) - - def delete_network_postcommit(self, context): - if not context.current.get('router:external'): - tenant_id = context.current['tenant_id'] - network_id = context.current['id'] 
- - # Convert to APIC IDs - tenant_id = self.name_mapper.tenant(context, tenant_id) - network_id = self.name_mapper.network(context, network_id) - - # Delete BD and EPG for this network - with self.apic_manager.apic.transaction() as trs: - self.apic_manager.delete_epg_for_network(tenant_id, network_id, - transaction=trs) - self.apic_manager.delete_bd_on_apic(tenant_id, network_id, - transaction=trs) - else: - network_name = context.current['name'] - if self.apic_manager.ext_net_dict.get(network_name): - network_id = self.name_mapper.network(context, - context.current['id']) - self.apic_manager.delete_external_routed_network(network_id) - - @sync_init - def create_subnet_postcommit(self, context): - info = self._get_subnet_info(context, context.current) - if info: - tenant_id, network_id, gateway_ip = info - # Create subnet on BD - self.apic_manager.ensure_subnet_created_on_apic( - tenant_id, network_id, gateway_ip) - - @sync_init - def update_subnet_postcommit(self, context): - if context.current['gateway_ip'] != context.original['gateway_ip']: - with self.apic_manager.apic.transaction() as trs: - info = self._get_subnet_info(context, context.original) - if info: - tenant_id, network_id, gateway_ip = info - # Delete subnet - self.apic_manager.ensure_subnet_deleted_on_apic( - tenant_id, network_id, gateway_ip, transaction=trs) - info = self._get_subnet_info(context, context.current) - if info: - tenant_id, network_id, gateway_ip = info - # Create subnet - self.apic_manager.ensure_subnet_created_on_apic( - tenant_id, network_id, gateway_ip, transaction=trs) - - def delete_subnet_postcommit(self, context): - info = self._get_subnet_info(context, context.current) - if info: - tenant_id, network_id, gateway_ip = info - self.apic_manager.ensure_subnet_deleted_on_apic( - tenant_id, network_id, gateway_ip) diff --git a/neutron/plugins/ml2/drivers/cisco/n1kv/extensions/n1kv.py b/neutron/plugins/ml2/drivers/cisco/n1kv/extensions/n1kv.py deleted file mode 100644 index 
726779c9df2..00000000000 --- a/neutron/plugins/ml2/drivers/cisco/n1kv/extensions/n1kv.py +++ /dev/null @@ -1,53 +0,0 @@ -# Copyright 2014 Cisco Systems, Inc. -# All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. You may obtain -# a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT -# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the -# License for the specific language governing permissions and limitations -# under the License. - -from networking_cisco.plugins.ml2.drivers.cisco.n1kv import constants - -from neutron.api import extensions -from neutron.api.v2 import attributes - - -PROFILE = constants.N1KV_PROFILE -EXTENDED_ATTRIBUTES_2_0 = { - 'ports': {PROFILE: { - 'allow_post': True, - 'allow_put': False, - 'default': attributes.ATTR_NOT_SPECIFIED, - 'is_visible': True}}} - - -class N1kv(extensions.ExtensionDescriptor): - - @classmethod - def get_name(cls): - return "Cisco Nexus1000V Profile Extension" - - @classmethod - def get_alias(cls): - return "n1kv" - - @classmethod - def get_description(cls): - return _("Add new policy profile attribute to port resource.") - - @classmethod - def get_updated(cls): - return "2014-11-23T13:33:25-00:00" - - def get_extended_resources(self, version): - if version == "2.0": - return EXTENDED_ATTRIBUTES_2_0 - else: - return {} diff --git a/neutron/plugins/ml2/drivers/cisco/n1kv/n1kv_ext_driver.py b/neutron/plugins/ml2/drivers/cisco/n1kv/n1kv_ext_driver.py deleted file mode 100644 index 9fc6ec9fdfa..00000000000 --- a/neutron/plugins/ml2/drivers/cisco/n1kv/n1kv_ext_driver.py +++ /dev/null @@ -1,101 +0,0 @@ -# Copyright 2015 Cisco Systems, Inc. -# All rights reserved. 
-# -# Licensed under the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. You may obtain -# a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT -# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the -# License for the specific language governing permissions and limitations -# under the License. - -"""Extensions Driver for Cisco Nexus1000V.""" - -from oslo_config import cfg -from oslo_log import log -from oslo_utils import uuidutils - -from networking_cisco.plugins.ml2.drivers.cisco.n1kv import ( - constants) -from networking_cisco.plugins.ml2.drivers.cisco.n1kv import ( - exceptions as n1kv_exc) -from networking_cisco.plugins.ml2.drivers.cisco.n1kv import ( - n1kv_db) - -from neutron.api import extensions as api_extensions -from neutron.api.v2 import attributes -from neutron.i18n import _LE -from neutron.plugins.ml2.common import exceptions as ml2_exc -from neutron.plugins.ml2 import driver_api as api -from neutron.plugins.ml2.drivers.cisco.n1kv import extensions - -LOG = log.getLogger(__name__) - - -class CiscoN1kvExtensionDriver(api.ExtensionDriver): - """Cisco N1KV ML2 Extension Driver.""" - - # List of supported extensions for cisco Nexus1000V. - _supported_extension_alias = "n1kv" - - def initialize(self): - api_extensions.append_api_extensions_path(extensions.__path__) - - @property - def extension_alias(self): - """ - Supported extension alias. 
- - :returns: alias identifying the core API extension supported - by this driver - """ - return self._supported_extension_alias - - def process_create_port(self, context, data, result): - """Implementation of abstract method from ExtensionDriver class.""" - port_id = result.get('id') - policy_profile_attr = data.get(constants.N1KV_PROFILE) - if not attributes.is_attr_set(policy_profile_attr): - policy_profile_attr = (cfg.CONF.ml2_cisco_n1kv. - default_policy_profile) - with context.session.begin(subtransactions=True): - try: - n1kv_db.get_policy_binding(port_id, context.session) - except n1kv_exc.PortBindingNotFound: - if not uuidutils.is_uuid_like(policy_profile_attr): - policy_profile = n1kv_db.get_policy_profile_by_name( - policy_profile_attr, - context.session) - if policy_profile: - policy_profile_attr = policy_profile.id - else: - LOG.error(_LE("Policy Profile %(profile)s does " - "not exist."), - {"profile": policy_profile_attr}) - raise ml2_exc.MechanismDriverError() - elif not (n1kv_db.get_policy_profile_by_uuid( - context.session, - policy_profile_attr)): - LOG.error(_LE("Policy Profile %(profile)s does not " - "exist."), - {"profile": policy_profile_attr}) - raise ml2_exc.MechanismDriverError() - n1kv_db.add_policy_binding(port_id, - policy_profile_attr, - context.session) - result[constants.N1KV_PROFILE] = policy_profile_attr - - def extend_port_dict(self, session, model, result): - """Implementation of abstract method from ExtensionDriver class.""" - port_id = result.get('id') - with session.begin(subtransactions=True): - try: - res = n1kv_db.get_policy_binding(port_id, session) - result[constants.N1KV_PROFILE] = res.profile_id - except n1kv_exc.PortBindingNotFound: - # Do nothing if the port binding is not found. 
- pass diff --git a/neutron/plugins/ml2/drivers/cisco/n1kv/n1kv_models.py b/neutron/plugins/ml2/drivers/cisco/n1kv/n1kv_models.py deleted file mode 100644 index bfbbb51f41b..00000000000 --- a/neutron/plugins/ml2/drivers/cisco/n1kv/n1kv_models.py +++ /dev/null @@ -1,138 +0,0 @@ -# Copyright 2015 Cisco Systems, Inc. -# All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. You may obtain -# a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT -# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the -# License for the specific language governing permissions and limitations -# under the License. - -import sqlalchemy as sa -from sqlalchemy import orm - -from neutron.db import model_base -from neutron.db import models_v2 -from neutron.plugins.common import constants - - -class PolicyProfile(model_base.BASEV2): - - """ - Nexus1000V Policy Profiles - - Both 'profile_id' and 'name' are populated from Nexus1000V switch. 
- """ - __tablename__ = 'cisco_ml2_n1kv_policy_profiles' - - id = sa.Column(sa.String(36), nullable=False, primary_key=True) - name = sa.Column(sa.String(255), nullable=False) - vsm_ip = sa.Column(sa.String(16), nullable=False, primary_key=True) - - -class NetworkProfile(model_base.BASEV2, models_v2.HasId): - - """Nexus1000V Network Profiles created on the VSM.""" - __tablename__ = 'cisco_ml2_n1kv_network_profiles' - - name = sa.Column(sa.String(255), nullable=False) - segment_type = sa.Column(sa.Enum(constants.TYPE_VLAN, - constants.TYPE_VXLAN, - name='segment_type'), - nullable=False) - sub_type = sa.Column(sa.String(255)) - segment_range = sa.Column(sa.String(255)) - multicast_ip_index = sa.Column(sa.Integer, default=0) - multicast_ip_range = sa.Column(sa.String(255)) - physical_network = sa.Column(sa.String(255)) - - -class N1kvPortBinding(model_base.BASEV2): - - """Represents binding of ports to policy profile.""" - __tablename__ = 'cisco_ml2_n1kv_port_bindings' - - port_id = sa.Column(sa.String(36), - sa.ForeignKey('ports.id', ondelete="CASCADE"), - primary_key=True) - profile_id = sa.Column(sa.String(36), - nullable=False) - # Add a relationship to the Port model in order to instruct SQLAlchemy to - # eagerly load port bindings - port = orm.relationship( - models_v2.Port, - backref=orm.backref("n1kv_port_binding", - lazy='joined', uselist=False, - cascade='delete')) - - -class N1kvNetworkBinding(model_base.BASEV2): - - """Represents binding of virtual network to network profiles.""" - __tablename__ = 'cisco_ml2_n1kv_network_bindings' - - network_id = sa.Column(sa.String(36), - sa.ForeignKey('networks.id', ondelete="CASCADE"), - primary_key=True) - network_type = sa.Column(sa.String(32), nullable=False) - segmentation_id = sa.Column(sa.Integer) - profile_id = sa.Column(sa.String(36), - sa.ForeignKey('cisco_ml2_n1kv_network_profiles.id'), - nullable=False) - - -class N1kvVlanAllocation(model_base.BASEV2): - - """Represents allocation state of vlan_id on 
physical network.""" - __tablename__ = 'cisco_ml2_n1kv_vlan_allocations' - - physical_network = sa.Column(sa.String(64), - nullable=False, - primary_key=True) - vlan_id = sa.Column(sa.Integer, nullable=False, primary_key=True, - autoincrement=False) - allocated = sa.Column(sa.Boolean, nullable=False, default=False) - network_profile_id = sa.Column(sa.String(36), - sa.ForeignKey( - 'cisco_ml2_n1kv_network_profiles.id', - ondelete="CASCADE"), - nullable=False) - - -class N1kvVxlanAllocation(model_base.BASEV2): - - """Represents allocation state of vxlan_id.""" - __tablename__ = 'cisco_ml2_n1kv_vxlan_allocations' - - vxlan_id = sa.Column(sa.Integer, nullable=False, primary_key=True, - autoincrement=False) - allocated = sa.Column(sa.Boolean, nullable=False, default=False) - network_profile_id = sa.Column(sa.String(36), - sa.ForeignKey( - 'cisco_ml2_n1kv_network_profiles.id', - ondelete="CASCADE"), - nullable=False) - - -class ProfileBinding(model_base.BASEV2): - - """ - Represents a binding of Network Profile - or Policy Profile to tenant_id - """ - __tablename__ = 'cisco_ml2_n1kv_profile_bindings' - - profile_type = sa.Column(sa.Enum('network', 'policy', - name='profile_type'), - nullable=True) - tenant_id = sa.Column(sa.String(36), - primary_key=True, - nullable=False, - default='tenant_id_not_set', - server_default='tenant_id_not_set') - profile_id = sa.Column(sa.String(36), primary_key=True, nullable=False) diff --git a/neutron/plugins/ml2/drivers/linuxbridge/agent/README b/neutron/plugins/ml2/drivers/linuxbridge/agent/README deleted file mode 100644 index 008ba1ab783..00000000000 --- a/neutron/plugins/ml2/drivers/linuxbridge/agent/README +++ /dev/null @@ -1,4 +0,0 @@ -# -- Background - -The Neutron Linux Bridge plugin has removed from the tree in Juno. 
You must -migrate to ML2 using the script in: neutron/db/migration/migrate_to_ml2.py diff --git a/neutron/plugins/ml2/drivers/linuxbridge/agent/l2network_models_v2.py b/neutron/plugins/ml2/drivers/linuxbridge/agent/l2network_models_v2.py deleted file mode 100644 index 0c08e29c50b..00000000000 --- a/neutron/plugins/ml2/drivers/linuxbridge/agent/l2network_models_v2.py +++ /dev/null @@ -1,59 +0,0 @@ -# Copyright (c) 2012 OpenStack Foundation. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or -# implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import sqlalchemy as sa - -from neutron.db import model_base - - -class NetworkState(model_base.BASEV2): - """Represents state of vlan_id on physical network.""" - __tablename__ = 'network_states' - - physical_network = sa.Column(sa.String(64), nullable=False, - primary_key=True) - vlan_id = sa.Column(sa.Integer, nullable=False, primary_key=True, - autoincrement=False) - allocated = sa.Column(sa.Boolean, nullable=False) - - def __init__(self, physical_network, vlan_id): - self.physical_network = physical_network - self.vlan_id = vlan_id - self.allocated = False - - def __repr__(self): - return "" % (self.physical_network, - self.vlan_id, self.allocated) - - -class NetworkBinding(model_base.BASEV2): - """Represents binding of virtual network to physical network and vlan.""" - __tablename__ = 'network_bindings' - - network_id = sa.Column(sa.String(36), - sa.ForeignKey('networks.id', ondelete="CASCADE"), - primary_key=True) - physical_network = sa.Column(sa.String(64)) - vlan_id = 
sa.Column(sa.Integer, nullable=False) - - def __init__(self, network_id, physical_network, vlan_id): - self.network_id = network_id - self.physical_network = physical_network - self.vlan_id = vlan_id - - def __repr__(self): - return "" % (self.network_id, - self.physical_network, - self.vlan_id) diff --git a/neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py b/neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py index 61627ebe357..b9747a4ec04 100644 --- a/neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py +++ b/neutron/plugins/ml2/drivers/linuxbridge/agent/linuxbridge_neutron_agent.py @@ -33,6 +33,7 @@ from oslo_service import loopingcall from oslo_service import service from six import moves +from neutron.agent.linux import bridge_lib from neutron.agent.linux import ip_lib from neutron.agent.linux import utils from neutron.agent import rpc as agent_rpc @@ -303,21 +304,18 @@ class LinuxBridgeManager(object): LOG.debug("Starting bridge %(bridge_name)s for subinterface " "%(interface)s", {'bridge_name': bridge_name, 'interface': interface}) - if utils.execute(['brctl', 'addbr', bridge_name], - run_as_root=True): + bridge_device = bridge_lib.BridgeDevice.addbr(bridge_name) + if bridge_device.setfd(0): return - if utils.execute(['brctl', 'setfd', bridge_name, - str(0)], run_as_root=True): + if bridge_device.disable_stp(): return - if utils.execute(['brctl', 'stp', bridge_name, - 'off'], run_as_root=True): - return - if utils.execute(['ip', 'link', 'set', bridge_name, - 'up'], run_as_root=True): + if bridge_device.link.set_up(): return LOG.debug("Done starting bridge %(bridge_name)s for " "subinterface %(interface)s", {'bridge_name': bridge_name, 'interface': interface}) + else: + bridge_device = bridge_lib.BridgeDevice(bridge_name) if not interface: return bridge_name @@ -331,11 +329,9 @@ class LinuxBridgeManager(object): # Check if the interface is not enslaved in another bridge if 
self.is_device_on_bridge(interface): bridge = self.get_bridge_for_tap_device(interface) - utils.execute(['brctl', 'delif', bridge, interface], - run_as_root=True) + bridge_lib.BridgeDevice(bridge).delif(interface) - utils.execute(['brctl', 'addif', bridge_name, interface], - run_as_root=True) + bridge_device.addif(interface) except Exception as e: LOG.error(_LE("Unable to add %(interface)s to %(bridge_name)s" "! Exception: %(e)s"), @@ -401,8 +397,7 @@ class LinuxBridgeManager(object): 'bridge_name': bridge_name} LOG.debug("Adding device %(tap_device_name)s to bridge " "%(bridge_name)s", data) - if utils.execute(['brctl', 'addif', bridge_name, tap_device_name], - run_as_root=True): + if bridge_lib.BridgeDevice(bridge_name).addif(tap_device_name): return False else: data = {'tap_device_name': tap_device_name, @@ -450,11 +445,10 @@ class LinuxBridgeManager(object): self.delete_vlan(interface) LOG.debug("Deleting bridge %s", bridge_name) - if utils.execute(['ip', 'link', 'set', bridge_name, 'down'], - run_as_root=True): + bridge_device = bridge_lib.BridgeDevice(bridge_name) + if bridge_device.link.set_down(): return - if utils.execute(['brctl', 'delbr', bridge_name], - run_as_root=True): + if bridge_device.delbr(): return LOG.debug("Done deleting bridge %s", bridge_name) @@ -477,8 +471,7 @@ class LinuxBridgeManager(object): "%(bridge_name)s", {'interface_name': interface_name, 'bridge_name': bridge_name}) - if utils.execute(['brctl', 'delif', bridge_name, interface_name], - run_as_root=True): + if bridge_lib.BridgeDevice(bridge_name).delif(interface_name): return False LOG.debug("Done removing device %(interface_name)s from bridge " "%(bridge_name)s", diff --git a/neutron/plugins/ml2/drivers/mech_sriov/agent/pci_lib.py b/neutron/plugins/ml2/drivers/mech_sriov/agent/pci_lib.py index 05fc0d2f859..3fb04dcd3e8 100644 --- a/neutron/plugins/ml2/drivers/mech_sriov/agent/pci_lib.py +++ b/neutron/plugins/ml2/drivers/mech_sriov/agent/pci_lib.py @@ -53,7 +53,7 @@ class 
PciDeviceIPWrapper(ip_lib.IPWrapper): @return: list of assigned mac addresses """ try: - out = self._execute('', "link", ("show", self.dev_name)) + out = self._as_root([], "link", ("show", self.dev_name)) except Exception as e: LOG.exception(_LE("Failed executing ip command")) raise exc.IpCommandError(dev_name=self.dev_name, @@ -74,7 +74,7 @@ class PciDeviceIPWrapper(ip_lib.IPWrapper): @todo: Handle "auto" state """ try: - out = self._execute('', "link", ("show", self.dev_name)) + out = self._as_root([], "link", ("show", self.dev_name)) except Exception as e: LOG.exception(_LE("Failed executing ip command")) raise exc.IpCommandError(dev_name=self.dev_name, @@ -99,7 +99,7 @@ class PciDeviceIPWrapper(ip_lib.IPWrapper): self.LinkState.DISABLE try: - self._execute('', "link", ("set", self.dev_name, "vf", + self._as_root([], "link", ("set", self.dev_name, "vf", str(vf_index), "state", status_str)) except Exception as e: LOG.exception(_LE("Failed executing ip command")) diff --git a/neutron/plugins/ml2/drivers/openvswitch/agent/README b/neutron/plugins/ml2/drivers/openvswitch/agent/README deleted file mode 100644 index 005aca36fdd..00000000000 --- a/neutron/plugins/ml2/drivers/openvswitch/agent/README +++ /dev/null @@ -1,4 +0,0 @@ -The Open vSwitch (OVS) Neutron plugin has been removed and replaced by ML2. You -must run the migration manually to upgrade to Juno. - -See neutron/db/migration/migrate_to_ml2.py diff --git a/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/ovs_ofctl/br_dvr_process.py b/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/ovs_ofctl/br_dvr_process.py index 46db4ec697b..6fdb06440e0 100644 --- a/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/ovs_ofctl/br_dvr_process.py +++ b/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/ovs_ofctl/br_dvr_process.py @@ -29,6 +29,8 @@ # License for the specific language governing permissions and limitations # under the License. 
+from neutron.common import constants + class OVSDVRProcessMixin(object): """Common logic for br-tun and br-phys' DVR_PROCESS tables. @@ -58,6 +60,7 @@ class OVSDVRProcessMixin(object): priority=3, dl_vlan=vlan_tag, proto='icmp6', + icmp_type=constants.ICMPV6_TYPE_RA, dl_src=gateway_mac, actions='drop') @@ -65,6 +68,7 @@ class OVSDVRProcessMixin(object): self.delete_flows(table=self.dvr_process_table_id, dl_vlan=vlan_tag, proto='icmp6', + icmp_type=constants.ICMPV6_TYPE_RA, dl_src=gateway_mac) def install_dvr_process(self, vlan_tag, vif_mac, dvr_mac_address): diff --git a/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_models_v2.py b/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_models_v2.py deleted file mode 100644 index 59b2c14a940..00000000000 --- a/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_models_v2.py +++ /dev/null @@ -1,107 +0,0 @@ -# Copyright 2011 VMware, Inc. -# All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. You may obtain -# a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT -# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the -# License for the specific language governing permissions and limitations -# under the License. 
- - -from sqlalchemy import Boolean, Column, ForeignKey, Integer, String -from sqlalchemy.schema import UniqueConstraint - -from neutron.db import model_base -from neutron.db import models_v2 -from sqlalchemy import orm - - -class VlanAllocation(model_base.BASEV2): - """Represents allocation state of vlan_id on physical network.""" - __tablename__ = 'ovs_vlan_allocations' - - physical_network = Column(String(64), nullable=False, primary_key=True) - vlan_id = Column(Integer, nullable=False, primary_key=True, - autoincrement=False) - allocated = Column(Boolean, nullable=False) - - def __init__(self, physical_network, vlan_id): - self.physical_network = physical_network - self.vlan_id = vlan_id - self.allocated = False - - def __repr__(self): - return "" % (self.physical_network, - self.vlan_id, self.allocated) - - -class TunnelAllocation(model_base.BASEV2): - """Represents allocation state of tunnel_id.""" - __tablename__ = 'ovs_tunnel_allocations' - - tunnel_id = Column(Integer, nullable=False, primary_key=True, - autoincrement=False) - allocated = Column(Boolean, nullable=False) - - def __init__(self, tunnel_id): - self.tunnel_id = tunnel_id - self.allocated = False - - def __repr__(self): - return "" % (self.tunnel_id, self.allocated) - - -class NetworkBinding(model_base.BASEV2): - """Represents binding of virtual network to physical realization.""" - __tablename__ = 'ovs_network_bindings' - - network_id = Column(String(36), - ForeignKey('networks.id', ondelete="CASCADE"), - primary_key=True) - # 'gre', 'vlan', 'flat', 'local' - network_type = Column(String(32), nullable=False) - physical_network = Column(String(64)) - segmentation_id = Column(Integer) # tunnel_id or vlan_id - - network = orm.relationship( - models_v2.Network, - backref=orm.backref("binding", lazy='joined', - uselist=False, cascade='delete')) - - def __init__(self, network_id, network_type, physical_network, - segmentation_id): - self.network_id = network_id - self.network_type = network_type - 
self.physical_network = physical_network - self.segmentation_id = segmentation_id - - def __repr__(self): - return "" % (self.network_id, - self.network_type, - self.physical_network, - self.segmentation_id) - - -class TunnelEndpoint(model_base.BASEV2): - """Represents tunnel endpoint in RPC mode.""" - __tablename__ = 'ovs_tunnel_endpoints' - __table_args__ = ( - UniqueConstraint('id', name='uniq_ovs_tunnel_endpoints0id'), - model_base.BASEV2.__table_args__, - ) - - ip_address = Column(String(64), primary_key=True) - id = Column(Integer, nullable=False) - - def __init__(self, ip_address, id): - self.ip_address = ip_address - self.id = id - - def __repr__(self): - return "" % (self.ip_address, self.id) diff --git a/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py b/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py index 4ca3423605e..fe9f8a15b41 100644 --- a/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py +++ b/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py @@ -64,8 +64,7 @@ class _mac_mydialect(netaddr.mac_unix): class DeviceListRetrievalError(exceptions.NeutronException): - message = _("Unable to retrieve port details for devices: %(devices)s " - "because of error: %(error)s") + message = _("Unable to retrieve port details for devices: %(devices)s ") # A class to represent a VIF (i.e., a port that has 'iface-id' and 'vif-mac' @@ -289,6 +288,9 @@ class OVSNeutronAgent(sg_rpc.SecurityGroupAgentRpcCallbackMixin, self.iter_num = 0 self.run_daemon_loop = True + self.catch_sigterm = False + self.catch_sighup = False + # The initialization is complete; we can start receiving messages self.connection.consume_in_threads() @@ -708,7 +710,7 @@ class OVSNeutronAgent(sg_rpc.SecurityGroupAgentRpcCallbackMixin, '''Bind port to net_uuid/lsw_id and install flow for inbound traffic to vm. - :param port: a ovslib.VifPort object. + :param port: a ovs_lib.VifPort object. 
:param net_uuid: the net_uuid this port is to be associated with. :param network_type: the network type ('gre', 'vlan', 'flat', 'local') :param physical_network: the physical network for 'vlan' or 'flat' @@ -737,6 +739,8 @@ class OVSNeutronAgent(sg_rpc.SecurityGroupAgentRpcCallbackMixin, port_other_config) def _bind_devices(self, need_binding_ports): + devices_up = [] + devices_down = [] port_info = self.int_br.db_list( "Port", columns=["name", "tag"]) tags_by_name = {x['name']: x['tag'] for x in port_info} @@ -765,13 +769,26 @@ class OVSNeutronAgent(sg_rpc.SecurityGroupAgentRpcCallbackMixin, # API server, thus possibly preventing instance spawn. if port_detail.get('admin_state_up'): LOG.debug("Setting status for %s to UP", device) - self.plugin_rpc.update_device_up( - self.context, device, self.agent_id, self.conf.host) + devices_up.append(device) else: LOG.debug("Setting status for %s to DOWN", device) - self.plugin_rpc.update_device_down( - self.context, device, self.agent_id, self.conf.host) - LOG.info(_LI("Configuration for device %s completed."), device) + devices_down.append(device) + failed_devices = [] + if devices_up or devices_down: + devices_set = self.plugin_rpc.update_device_list( + self.context, devices_up, devices_down, self.agent_id, + self.conf.host) + failed_devices = (devices_set.get('failed_devices_up') + + devices_set.get('failed_devices_down')) + if failed_devices: + LOG.error(_LE("Configuration for devices %s failed!"), + failed_devices) + #TODO(rossella_s) handle better the resync in next patches, + # this is just to preserve the current behavior + raise DeviceListRetrievalError(devices=failed_devices) + LOG.info(_LI("Configuration for devices up %(up)s and devices " + "down %(down)s completed."), + {'up': devices_up, 'down': devices_down}) @staticmethod def setup_arp_spoofing_protection(bridge, vif, port_details): @@ -1198,17 +1215,21 @@ class OVSNeutronAgent(sg_rpc.SecurityGroupAgentRpcCallbackMixin, def 
treat_devices_added_or_updated(self, devices, ovs_restarted): skipped_devices = [] need_binding_devices = [] - try: - devices_details_list = self.plugin_rpc.get_devices_details_list( + devices_details_list = ( + self.plugin_rpc.get_devices_details_list_and_failed_devices( self.context, devices, self.agent_id, - self.conf.host) - except Exception as e: - raise DeviceListRetrievalError(devices=devices, error=e) + self.conf.host)) + if devices_details_list.get('failed_devices'): + #TODO(rossella_s) handle better the resync in next patches, + # this is just to preserve the current behavior + raise DeviceListRetrievalError(devices=devices) + + devices = devices_details_list.get('devices') vif_by_id = self.int_br.get_vifs_by_ids( - [vif['device'] for vif in devices_details_list]) - for details in devices_details_list: + [vif['device'] for vif in devices]) + for details in devices: device = details['device'] LOG.debug("Processing port: %s", device) port = vif_by_id.get(device) @@ -1244,62 +1265,67 @@ class OVSNeutronAgent(sg_rpc.SecurityGroupAgentRpcCallbackMixin, return skipped_devices, need_binding_devices def treat_ancillary_devices_added(self, devices): - try: - devices_details_list = self.plugin_rpc.get_devices_details_list( + devices_details_list = ( + self.plugin_rpc.get_devices_details_list_and_failed_devices( self.context, devices, self.agent_id, - self.conf.host) - except Exception as e: - raise DeviceListRetrievalError(devices=devices, error=e) + self.conf.host)) + if devices_details_list.get('failed_devices'): + #TODO(rossella_s) handle better the resync in next patches, + # this is just to preserve the current behavior + raise DeviceListRetrievalError(devices=devices) + devices_added = [ + d['device'] for d in devices_details_list.get('devices')] + LOG.info(_LI("Ancillary Ports %s added"), devices_added) - for details in devices_details_list: - device = details['device'] - LOG.info(_LI("Ancillary Port %s added"), device) - - # update plugin about port status 
- self.plugin_rpc.update_device_up(self.context, - device, - self.agent_id, - self.conf.host) + # update plugin about port status + devices_set_up = ( + self.plugin_rpc.update_device_list(self.context, + devices_added, + [], + self.agent_id, + self.conf.host)) + if devices_set_up.get('failed_devices_up'): + #TODO(rossella_s) handle better the resync in next patches, + # this is just to preserve the current behavior + raise DeviceListRetrievalError() def treat_devices_removed(self, devices): resync = False self.sg_agent.remove_devices_filter(devices) + LOG.info(_LI("Ports %s removed"), devices) + devices_down = self.plugin_rpc.update_device_list(self.context, + [], + devices, + self.agent_id, + self.conf.host) + failed_devices = devices_down.get('failed_devices_down') + if failed_devices: + LOG.debug("Port removal failed for %(devices)s ", failed_devices) + resync = True for device in devices: - LOG.info(_LI("Attachment %s removed"), device) - try: - self.plugin_rpc.update_device_down(self.context, - device, - self.agent_id, - self.conf.host) - except Exception as e: - LOG.debug("port_removed failed for %(device)s: %(e)s", - {'device': device, 'e': e}) - resync = True - continue self.port_unbound(device) return resync def treat_ancillary_devices_removed(self, devices): resync = False - for device in devices: - LOG.info(_LI("Attachment %s removed"), device) - try: - details = self.plugin_rpc.update_device_down(self.context, - device, - self.agent_id, - self.conf.host) - except Exception as e: - LOG.debug("port_removed failed for %(device)s: %(e)s", - {'device': device, 'e': e}) - resync = True - continue - if details['exists']: - LOG.info(_LI("Port %s updated."), device) + LOG.info(_LI("Ancillary ports %s removed"), devices) + devices_down = self.plugin_rpc.update_device_list(self.context, + [], + devices, + self.agent_id, + self.conf.host) + failed_devices = devices_down.get('failed_devices_down') + if failed_devices: + LOG.debug("Port removal failed for %(devices)s 
", failed_devices) + resync = True + for detail in devices_down.get('devices_down'): + if detail['exists']: + LOG.info(_LI("Port %s updated."), detail['device']) # Nothing to do regarding local networking else: - LOG.debug("Device %s not defined on plugin", device) + LOG.debug("Device %s not defined on plugin", detail['device']) return resync def process_network_ports(self, port_info, ovs_restarted): @@ -1352,7 +1378,7 @@ class OVSNeutronAgent(sg_rpc.SecurityGroupAgentRpcCallbackMixin, port_info.get('updated', set())) self._bind_devices(need_binding_devices) - if 'removed' in port_info: + if 'removed' in port_info and port_info['removed']: start = time.time() resync_b = self.treat_devices_removed(port_info['removed']) LOG.debug("process_network_ports - iteration:%(iter_num)d - " @@ -1365,15 +1391,15 @@ class OVSNeutronAgent(sg_rpc.SecurityGroupAgentRpcCallbackMixin, def process_ancillary_network_ports(self, port_info): resync_a = False resync_b = False - if 'added' in port_info: + if 'added' in port_info and port_info['added']: start = time.time() try: self.treat_ancillary_devices_added(port_info['added']) LOG.debug("process_ancillary_network_ports - iteration: " "%(iter_num)d - treat_ancillary_devices_added " "completed in %(elapsed).3f", - {'iter_num': self.iter_num, - 'elapsed': time.time() - start}) + {'iter_num': self.iter_num, + 'elapsed': time.time() - start}) except DeviceListRetrievalError: # Need to resync as there was an error with server # communication. 
@@ -1381,7 +1407,7 @@ class OVSNeutronAgent(sg_rpc.SecurityGroupAgentRpcCallbackMixin, "iteration:%d - failure while retrieving " "port details from server"), self.iter_num) resync_a = True - if 'removed' in port_info: + if 'removed' in port_info and port_info['removed']: start = time.time() resync_b = self.treat_ancillary_devices_removed( port_info['removed']) @@ -1466,6 +1492,18 @@ class OVSNeutronAgent(sg_rpc.SecurityGroupAgentRpcCallbackMixin, 'elapsed': elapsed}) self.iter_num = self.iter_num + 1 + def get_port_stats(self, port_info, ancillary_port_info): + port_stats = { + 'regular': { + 'added': len(port_info.get('added', [])), + 'updated': len(port_info.get('updated', [])), + 'removed': len(port_info.get('removed', []))}} + if self.ancillary_brs: + port_stats['ancillary'] = { + 'added': len(ancillary_port_info.get('added', [])), + 'removed': len(ancillary_port_info.get('removed', []))} + return port_stats + def rpc_loop(self, polling_manager=None): if not polling_manager: polling_manager = polling.get_polling_manager( @@ -1477,13 +1515,8 @@ class OVSNeutronAgent(sg_rpc.SecurityGroupAgentRpcCallbackMixin, ancillary_ports = set() tunnel_sync = True ovs_restarted = False - while self.run_daemon_loop: + while self._check_and_handle_signal(): start = time.time() - port_stats = {'regular': {'added': 0, - 'updated': 0, - 'removed': 0}, - 'ancillary': {'added': 0, - 'removed': 0}} LOG.debug("Agent rpc_loop - iteration:%d started", self.iter_num) if sync: @@ -1511,6 +1544,7 @@ class OVSNeutronAgent(sg_rpc.SecurityGroupAgentRpcCallbackMixin, # Agent doesn't apply any operations when ovs is dead, to # prevent unexpected failure or crash. Sleep and continue # loop in which ovs status will be checked periodically. + port_stats = self.get_port_stats({}, {}) self.loop_count_and_wait(start, port_stats) continue # Notify the plugin of tunnel IP @@ -1567,12 +1601,7 @@ class OVSNeutronAgent(sg_rpc.SecurityGroupAgentRpcCallbackMixin, "ports processed. 
Elapsed:%(elapsed).3f", {'iter_num': self.iter_num, 'elapsed': time.time() - start}) - port_stats['regular']['added'] = ( - len(port_info.get('added', []))) - port_stats['regular']['updated'] = ( - len(port_info.get('updated', []))) - port_stats['regular']['removed'] = ( - len(port_info.get('removed', []))) + ports = port_info['current'] if self.ancillary_brs: @@ -1584,10 +1613,6 @@ class OVSNeutronAgent(sg_rpc.SecurityGroupAgentRpcCallbackMixin, {'iter_num': self.iter_num, 'elapsed': time.time() - start}) ancillary_ports = ancillary_port_info['current'] - port_stats['ancillary']['added'] = ( - len(ancillary_port_info.get('added', []))) - port_stats['ancillary']['removed'] = ( - len(ancillary_port_info.get('removed', []))) polling_manager.polling_completed() # Keep this flag in the last line of "try" block, @@ -1599,7 +1624,9 @@ class OVSNeutronAgent(sg_rpc.SecurityGroupAgentRpcCallbackMixin, # Put the ports back in self.updated_port self.updated_ports |= updated_ports_copy sync = True - + ancillary_port_info = (ancillary_port_info if self.ancillary_brs + else {}) + port_stats = self.get_port_stats(port_info, ancillary_port_info) self.loop_count_and_wait(start, port_stats) def daemon_loop(self): @@ -1614,17 +1641,26 @@ class OVSNeutronAgent(sg_rpc.SecurityGroupAgentRpcCallbackMixin, self.rpc_loop(polling_manager=pm) def _handle_sigterm(self, signum, frame): - LOG.info(_LI("Agent caught SIGTERM, quitting daemon loop.")) - self.run_daemon_loop = False + self.catch_sigterm = True if self.quitting_rpc_timeout: self.set_rpc_timeout(self.quitting_rpc_timeout) def _handle_sighup(self, signum, frame): - LOG.info(_LI("Agent caught SIGHUP, resetting.")) - self.conf.reload_config_files() - config.setup_logging() - LOG.debug('Full set of CONF:') - self.conf.log_opt_values(LOG, std_logging.DEBUG) + self.catch_sighup = True + + def _check_and_handle_signal(self): + if self.catch_sigterm: + LOG.info(_LI("Agent caught SIGTERM, quitting daemon loop.")) + self.run_daemon_loop = 
False + self.catch_sigterm = False + if self.catch_sighup: + LOG.info(_LI("Agent caught SIGHUP, resetting.")) + self.conf.reload_config_files() + config.setup_logging() + LOG.debug('Full set of CONF:') + self.conf.log_opt_values(LOG, std_logging.DEBUG) + self.catch_sighup = False + return self.run_daemon_loop def set_rpc_timeout(self, timeout): for rpc_api in (self.plugin_rpc, self.sg_plugin_rpc, diff --git a/neutron/plugins/ml2/drivers/type_gre.py b/neutron/plugins/ml2/drivers/type_gre.py index 5db7074c73c..53b907c884c 100644 --- a/neutron/plugins/ml2/drivers/type_gre.py +++ b/neutron/plugins/ml2/drivers/type_gre.py @@ -14,16 +14,13 @@ # under the License. from oslo_config import cfg -from oslo_db import exception as db_exc from oslo_log import log -from six import moves import sqlalchemy as sa from sqlalchemy import sql from neutron.common import exceptions as n_exc -from neutron.db import api as db_api from neutron.db import model_base -from neutron.i18n import _LE, _LW +from neutron.i18n import _LE from neutron.plugins.common import constants as p_const from neutron.plugins.ml2.drivers import type_tunnel @@ -83,44 +80,6 @@ class GreTypeDriver(type_tunnel.EndpointTunnelTypeDriver): "Service terminated!")) raise SystemExit() - def sync_allocations(self): - - # determine current configured allocatable gres - gre_ids = set() - for gre_id_range in self.tunnel_ranges: - tun_min, tun_max = gre_id_range - gre_ids |= set(moves.range(tun_min, tun_max + 1)) - - session = db_api.get_session() - try: - self._add_allocation(session, gre_ids) - except db_exc.DBDuplicateEntry: - # in case multiple neutron-servers start allocations could be - # already added by different neutron-server. because this function - # is called only when initializing this type driver, it's safe to - # assume allocations were added. 
- LOG.warning(_LW("Gre allocations were already created.")) - - def _add_allocation(self, session, gre_ids): - with session.begin(subtransactions=True): - # remove from table unallocated tunnels not currently allocatable - allocs = (session.query(GreAllocation).all()) - for alloc in allocs: - try: - # see if tunnel is allocatable - gre_ids.remove(alloc.gre_id) - except KeyError: - # it's not allocatable, so check if its allocated - if not alloc.allocated: - # it's not, so remove it from table - LOG.debug("Removing tunnel %s from pool", alloc.gre_id) - session.delete(alloc) - - # add missing allocatable tunnels to table - for gre_id in sorted(gre_ids): - alloc = GreAllocation(gre_id=gre_id) - session.add(alloc) - def get_endpoints(self): """Get every gre endpoints from database.""" gre_endpoints = self._get_endpoints() diff --git a/neutron/plugins/ml2/drivers/type_tunnel.py b/neutron/plugins/ml2/drivers/type_tunnel.py index 258e78c2644..fec72c84ea9 100644 --- a/neutron/plugins/ml2/drivers/type_tunnel.py +++ b/neutron/plugins/ml2/drivers/type_tunnel.py @@ -13,10 +13,14 @@ # License for the specific language governing permissions and limitations # under the License. import abc +import itertools +import operator from oslo_config import cfg +from oslo_db import api as oslo_db_api from oslo_db import exception as db_exc from oslo_log import log +from six import moves from neutron.common import exceptions as exc from neutron.common import topics @@ -31,21 +35,27 @@ LOG = log.getLogger(__name__) TUNNEL = 'tunnel' +def chunks(iterable, chunk_size): + """Chunk data into lists with size <= chunk_size.""" + iterator = iter(iterable) + chunk = list(itertools.islice(iterator, 0, chunk_size)) + while chunk: + yield chunk + chunk = list(itertools.islice(iterator, 0, chunk_size)) + + class TunnelTypeDriver(helpers.SegmentTypeDriver): """Define stable abstract interface for ML2 type drivers. tunnel type networks rely on tunnel endpoints.
This class defines abstract methods to manage these endpoints. """ + BULK_SIZE = 100 def __init__(self, model): super(TunnelTypeDriver, self).__init__(model) self.segmentation_key = next(iter(self.primary_keys)) - @abc.abstractmethod - def sync_allocations(self): - """Synchronize type_driver allocation table with configured ranges.""" - @abc.abstractmethod def add_endpoint(self, ip, host): """Register the endpoint in the type_driver database. @@ -113,6 +123,42 @@ class TunnelTypeDriver(helpers.SegmentTypeDriver): LOG.info(_LI("%(type)s ID ranges: %(range)s"), {'type': self.get_type(), 'range': current_range}) + @oslo_db_api.wrap_db_retry( + max_retries=db_api.MAX_RETRIES, retry_on_deadlock=True) + def sync_allocations(self): + # determine current configured allocatable tunnel ids + tunnel_ids = set() + for tun_min, tun_max in self.tunnel_ranges: + tunnel_ids |= set(moves.range(tun_min, tun_max + 1)) + + tunnel_id_getter = operator.attrgetter(self.segmentation_key) + tunnel_col = getattr(self.model, self.segmentation_key) + session = db_api.get_session() + with session.begin(subtransactions=True): + # remove from table unallocated tunnels not currently allocatable + # fetch results as list via all() because we'll be iterating + # through them twice + allocs = (session.query(self.model). + with_lockmode("update").all()) + + # collect those vnis that need to be deleted from db + unallocateds = ( + tunnel_id_getter(a) for a in allocs if not a.allocated) + to_remove = (x for x in unallocateds if x not in tunnel_ids) + # Immediately delete tunnels in chunks.
This leaves no work for + # flush at the end of transaction + for chunk in chunks(to_remove, self.BULK_SIZE): + session.query(self.model).filter( + tunnel_col.in_(chunk)).delete(synchronize_session=False) + + # collect vnis that need to be added + existings = {tunnel_id_getter(a) for a in allocs} + missings = list(tunnel_ids - existings) + for chunk in chunks(missings, self.BULK_SIZE): + bulk = [{self.segmentation_key: x, 'allocated': False} + for x in chunk] + session.execute(self.model.__table__.insert(), bulk) + def is_partial_segment(self, segment): return segment.get(api.SEGMENTATION_ID) is None diff --git a/neutron/plugins/ml2/drivers/type_vxlan.py b/neutron/plugins/ml2/drivers/type_vxlan.py index 52e5f7eaee7..c6f9dbf1073 100644 --- a/neutron/plugins/ml2/drivers/type_vxlan.py +++ b/neutron/plugins/ml2/drivers/type_vxlan.py @@ -15,12 +15,10 @@ from oslo_config import cfg from oslo_log import log -from six import moves import sqlalchemy as sa from sqlalchemy import sql from neutron.common import exceptions as n_exc -from neutron.db import api as db_api from neutron.db import model_base from neutron.i18n import _LE from neutron.plugins.common import constants as p_const @@ -86,45 +84,6 @@ class VxlanTypeDriver(type_tunnel.EndpointTunnelTypeDriver): "Service terminated!")) raise SystemExit() - def sync_allocations(self): - - # determine current configured allocatable vnis - vxlan_vnis = set() - for tun_min, tun_max in self.tunnel_ranges: - vxlan_vnis |= set(moves.range(tun_min, tun_max + 1)) - - session = db_api.get_session() - with session.begin(subtransactions=True): - # remove from table unallocated tunnels not currently allocatable - # fetch results as list via all() because we'll be iterating - # through them twice - allocs = (session.query(VxlanAllocation). 
- with_lockmode("update").all()) - # collect all vnis present in db - existing_vnis = set(alloc.vxlan_vni for alloc in allocs) - # collect those vnis that needs to be deleted from db - vnis_to_remove = [alloc.vxlan_vni for alloc in allocs - if (alloc.vxlan_vni not in vxlan_vnis and - not alloc.allocated)] - # Immediately delete vnis in chunks. This leaves no work for - # flush at the end of transaction - bulk_size = 100 - chunked_vnis = (vnis_to_remove[i:i + bulk_size] for i in - range(0, len(vnis_to_remove), bulk_size)) - for vni_list in chunked_vnis: - if vni_list: - session.query(VxlanAllocation).filter( - VxlanAllocation.vxlan_vni.in_(vni_list)).delete( - synchronize_session=False) - # collect vnis that need to be added - vnis = list(vxlan_vnis - existing_vnis) - chunked_vnis = (vnis[i:i + bulk_size] for i in - range(0, len(vnis), bulk_size)) - for vni_list in chunked_vnis: - bulk = [{'vxlan_vni': vni, 'allocated': False} - for vni in vni_list] - session.execute(VxlanAllocation.__table__.insert(), bulk) - def get_endpoints(self): """Get every vxlan endpoints from database.""" vxlan_endpoints = self._get_endpoints() diff --git a/neutron/plugins/ml2/extensions/port_security.py b/neutron/plugins/ml2/extensions/port_security.py index aceec24a235..cb582f3b28f 100644 --- a/neutron/plugins/ml2/extensions/port_security.py +++ b/neutron/plugins/ml2/extensions/port_security.py @@ -38,8 +38,10 @@ class PortSecurityExtensionDriver(api.ExtensionDriver, def process_create_network(self, context, data, result): # Create the network extension attributes. - if psec.PORTSECURITY in data: - self._process_network_port_security_create(context, data, result) + if psec.PORTSECURITY not in data: + data[psec.PORTSECURITY] = (psec.EXTENDED_ATTRIBUTES_2_0['networks'] + [psec.PORTSECURITY]['default']) + self._process_network_port_security_create(context, data, result) def process_update_network(self, context, data, result): # Update the network extension attributes. 
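The `chunks()` generator introduced in type_tunnel.py above drives the bulk delete and bulk insert in the generalized `sync_allocations`. Its behavior can be sketched standalone (this reproduces the helper only to illustrate it; it is not part of the patch):

```python
import itertools


def chunks(iterable, chunk_size):
    """Yield lists of at most chunk_size items from iterable.

    Mirrors the helper added to type_tunnel.py above; it accepts any
    iterable, including lazy generators.
    """
    iterator = iter(iterable)
    chunk = list(itertools.islice(iterator, 0, chunk_size))
    while chunk:
        yield chunk
        chunk = list(itertools.islice(iterator, 0, chunk_size))


# A generator input works too, which is why sync_allocations can feed it
# the lazily computed stream of tunnel ids to remove.
batches = list(chunks(range(7), 3))
# batches == [[0, 1, 2], [3, 4, 5], [6]]
```

Because each batch is deleted or inserted inside the open transaction with `synchronize_session=False` / a bulk `insert()`, no per-row work is left for the flush at commit time.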
@@ -63,7 +65,12 @@ class PortSecurityExtensionDriver(api.ExtensionDriver, self._extend_port_security_dict(result, db_data) def _extend_port_security_dict(self, response_data, db_data): - response_data[psec.PORTSECURITY] = ( + if db_data.get('port_security') is None: + response_data[psec.PORTSECURITY] = ( + psec.EXTENDED_ATTRIBUTES_2_0['networks'] + [psec.PORTSECURITY]['default']) + else: + response_data[psec.PORTSECURITY] = ( db_data['port_security'][psec.PORTSECURITY]) def _determine_port_security(self, context, port): diff --git a/neutron/plugins/ml2/managers.py b/neutron/plugins/ml2/managers.py index 1d1d204a0c5..ef5b868cfc6 100644 --- a/neutron/plugins/ml2/managers.py +++ b/neutron/plugins/ml2/managers.py @@ -172,7 +172,7 @@ class TypeManager(stevedore.named.NamedExtensionManager): def _add_network_segment(self, session, network_id, segment, mtu, segment_index=0): db.add_network_segment(session, network_id, segment, segment_index) - if segment.get(api.MTU) > 0: + if segment.get(api.MTU, 0) > 0: mtu.append(segment[api.MTU]) def create_network_segments(self, context, network, tenant_id): @@ -803,19 +803,19 @@ class ExtensionManager(stevedore.named.NamedExtensionManager): """Notify all extension drivers to extend network dictionary.""" for driver in self.ordered_ext_drivers: driver.obj.extend_network_dict(session, base_model, result) - LOG.info(_LI("Extended network dict for driver '%(drv)s'"), - {'drv': driver.name}) + LOG.debug("Extended network dict for driver '%(drv)s'", + {'drv': driver.name}) def extend_subnet_dict(self, session, base_model, result): """Notify all extension drivers to extend subnet dictionary.""" for driver in self.ordered_ext_drivers: driver.obj.extend_subnet_dict(session, base_model, result) - LOG.info(_LI("Extended subnet dict for driver '%(drv)s'"), - {'drv': driver.name}) + LOG.debug("Extended subnet dict for driver '%(drv)s'", + {'drv': driver.name}) def extend_port_dict(self, session, base_model, result): """Notify all extension 
drivers to extend port dictionary.""" for driver in self.ordered_ext_drivers: driver.obj.extend_port_dict(session, base_model, result) - LOG.info(_LI("Extended port dict for driver '%(drv)s'"), - {'drv': driver.name}) + LOG.debug("Extended port dict for driver '%(drv)s'", + {'drv': driver.name}) diff --git a/neutron/plugins/ml2/plugin.py b/neutron/plugins/ml2/plugin.py index cd64d7210fb..9a1d5a84eac 100644 --- a/neutron/plugins/ml2/plugin.py +++ b/neutron/plugins/ml2/plugin.py @@ -14,10 +14,10 @@ # under the License. from eventlet import greenthread -from oslo_concurrency import lockutils from oslo_config import cfg from oslo_db import api as oslo_db_api from oslo_db import exception as os_db_exception +from oslo_log import helpers as log_helpers from oslo_log import log from oslo_serialization import jsonutils from oslo_utils import excutils @@ -40,7 +40,6 @@ from neutron.callbacks import resources from neutron.common import constants as const from neutron.common import exceptions as exc from neutron.common import ipv6_utils -from neutron.common import log as neutron_log from neutron.common import rpc as n_rpc from neutron.common import topics from neutron.common import utils @@ -55,7 +54,7 @@ from neutron.db import external_net_db from neutron.db import extradhcpopt_db from neutron.db import models_v2 from neutron.db import netmtu_db -from neutron.db import quota_db # noqa +from neutron.db.quota import driver # noqa from neutron.db import securitygroups_rpc_base as sg_db_rpc from neutron.db import vlantransparent_db from neutron.extensions import allowedaddresspairs as addr_pair @@ -163,7 +162,7 @@ class Ml2Plugin(db_base_plugin_v2.NeutronDbPluginV2, ) self.start_periodic_dhcp_agent_status_check() - @neutron_log.log + @log_helpers.log_method_call def start_rpc_listeners(self): """Start the RPC loop to let the plugin communicate with agents.""" self.topic = topics.PLUGIN @@ -344,13 +343,7 @@ class Ml2Plugin(db_base_plugin_v2.NeutronDbPluginV2, # After we've 
attempted to bind the port, we begin a # transaction, get the current port state, and decide whether # to commit the binding results. - # - # REVISIT: Serialize this operation with a semaphore to - # prevent deadlock waiting to acquire a DB lock held by - # another thread in the same process, leading to 'lock wait - # timeout' errors. - with lockutils.lock('db-access'),\ - session.begin(subtransactions=True): + with session.begin(subtransactions=True): # Get the current port state and build a new PortContext # reflecting this state as original state for subsequent # mechanism driver update_port_*commit() calls. @@ -729,15 +722,14 @@ class Ml2Plugin(db_base_plugin_v2.NeutronDbPluginV2, # to 'lock wait timeout' errors. # # Process L3 first, since, depending on the L3 plugin, it may - # involve locking the db-access semaphore, sending RPC - # notifications, and/or calling delete_port on this plugin. + # involve sending RPC notifications, and/or calling delete_port + # on this plugin. # Additionally, a rollback may not be enough to undo the # deletion of a floating IP with certain L3 backends. self._process_l3_delete(context, id) # Using query().with_lockmode isn't necessary. Foreign-key # constraints prevent deletion if concurrent creation happens. - with lockutils.lock('db-access'),\ - session.begin(subtransactions=True): + with session.begin(subtransactions=True): # Get ports to auto-delete. ports = (session.query(models_v2.Port). enable_eagerloads(False). @@ -852,14 +844,9 @@ class Ml2Plugin(db_base_plugin_v2.NeutronDbPluginV2, LOG.debug("Deleting subnet %s", id) session = context.session while True: - # REVISIT: Serialize this operation with a semaphore to - # prevent deadlock waiting to acquire a DB lock held by - # another thread in the same process, leading to 'lock - # wait timeout' errors. 
- with lockutils.lock('db-access'),\ - session.begin(subtransactions=True): + with session.begin(subtransactions=True): record = self._get_subnet(context, id) - subnet = self._make_subnet_dict(record, None) + subnet = self._make_subnet_dict(record, None, context=context) qry_allocated = (session.query(models_v2.IPAllocation). filter_by(subnet_id=id). join(models_v2.Port)) @@ -877,7 +864,8 @@ class Ml2Plugin(db_base_plugin_v2.NeutronDbPluginV2, allocated = qry_allocated.all() # Delete all the IPAllocation that can be auto-deleted if allocated: - map(session.delete, allocated) + for x in allocated: + session.delete(x) LOG.debug("Ports to auto-deallocate: %s", allocated) # Check if there are more IP allocations, unless # is_auto_address_subnet is True. In that case the check is @@ -1101,13 +1089,9 @@ class Ml2Plugin(db_base_plugin_v2.NeutronDbPluginV2, attrs = port[attributes.PORT] need_port_update_notify = False session = context.session + bound_mech_contexts = [] - # REVISIT: Serialize this operation with a semaphore to - # prevent deadlock waiting to acquire a DB lock held by - # another thread in the same process, leading to 'lock wait - # timeout' errors. - with lockutils.lock('db-access'),\ - session.begin(subtransactions=True): + with session.begin(subtransactions=True): port_db, binding = db.get_locked_port_and_binding(session, id) if not port_db: raise exc.PortNotFound(port_id=id) @@ -1141,11 +1125,36 @@ class Ml2Plugin(db_base_plugin_v2.NeutronDbPluginV2, mech_context = driver_context.PortContext( self, context, updated_port, network, binding, levels, original_port=original_port) - new_host_port = self._get_host_port_if_changed(mech_context, attrs) + # For DVR router interface ports we need to retrieve the + # DVRPortbinding context instead of the normal port context. + # The normal Portbinding context does not have the status + # of the ports that are required by the l2pop to process the + # postcommit events. 
+ + # NOTE: Sometimes during the update_port call, the DVR router + # interface port may not have the port binding, so we cannot + # create a generic bindinglist that will address both the + # DVR and non-DVR cases here. + # TODO(Swami): This code needs to be revisited. + if port_db['device_owner'] == const.DEVICE_OWNER_DVR_INTERFACE: + dvr_binding_list = db.get_dvr_port_bindings(session, id) + for dvr_binding in dvr_binding_list: + levels = db.get_binding_levels(session, id, + dvr_binding.host) + dvr_mech_context = driver_context.PortContext( + self, context, updated_port, network, + dvr_binding, levels, original_port=original_port) + self.mechanism_manager.update_port_precommit( + dvr_mech_context) + bound_mech_contexts.append(dvr_mech_context) + else: + self.mechanism_manager.update_port_precommit(mech_context) + bound_mech_contexts.append(mech_context) + + new_host_port = self._get_host_port_if_changed( + mech_context, attrs) need_port_update_notify |= self._process_port_binding( mech_context, attrs) - self.mechanism_manager.update_port_precommit(mech_context) - # Notifications must be sent after the above transaction is complete kwargs = { 'context': context, @@ -1154,11 +1163,18 @@ class Ml2Plugin(db_base_plugin_v2.NeutronDbPluginV2, } registry.notify(resources.PORT, events.AFTER_UPDATE, self, **kwargs) - # TODO(apech) - handle errors raised by update_port, potentially - # by re-calling update_port with the previous attributes. For - # now the error is propogated to the caller, which is expected to - # either undo/retry the operation or delete the resource. - self.mechanism_manager.update_port_postcommit(mech_context) + # Note that DVR Interface ports will have bindings on + # multiple hosts, and so will have multiple mech_contexts, + # while other ports typically have just one. + # Since bound_mech_contexts has both the DVR and non-DVR + # contexts we can manage just with a single for loop.
+ try: + for mech_context in bound_mech_contexts: + self.mechanism_manager.update_port_postcommit( + mech_context) + except ml2_exc.MechanismDriverError: + LOG.error(_LE("mechanism_manager.update_port_postcommit " + "failed for port %s"), id) self.check_and_notify_security_group_member_changed( context, original_port, updated_port) @@ -1167,7 +1183,13 @@ class Ml2Plugin(db_base_plugin_v2.NeutronDbPluginV2, if original_port['admin_state_up'] != updated_port['admin_state_up']: need_port_update_notify = True - + # NOTE: In the case of DVR ports, the port-binding is done after + # router scheduling when sync_routers is called, and so this call + # below may not be required for DVR routed interfaces. But still, + # since we don't have the mech_context for the DVR router interfaces + # at certain times, we just pass the port-context and return it, so + # that we don't disturb other methods that are expecting a return + # value. bound_context = self._bind_port_if_needed( mech_context, allow_notify=True, @@ -1258,12 +1280,7 @@ class Ml2Plugin(db_base_plugin_v2.NeutronDbPluginV2, l3plugin, const.L3_DISTRIBUTED_EXT_ALIAS) session = context.session - # REVISIT: Serialize this operation with a semaphore to - # prevent deadlock waiting to acquire a DB lock held by - # another thread in the same process, leading to 'lock wait - # timeout' errors.
- with lockutils.lock('db-access'),\ - session.begin(subtransactions=True): + with session.begin(subtransactions=True): port = db.get_port(session, port_id) if not port: LOG.debug("Port %(port)s update to %(val)s by agent not found", @@ -1428,8 +1440,7 @@ class Ml2Plugin(db_base_plugin_v2.NeutronDbPluginV2, if (updated and port['device_owner'] == const.DEVICE_OWNER_DVR_INTERFACE): - with lockutils.lock('db-access'),\ - session.begin(subtransactions=True): + with session.begin(subtransactions=True): port = db.get_port(session, port_id) if not port: LOG.warning(_LW("Port %s not found during update"), diff --git a/neutron/plugins/ml2/rpc.py b/neutron/plugins/ml2/rpc.py index 4187da6864e..4f5c10848c8 100644 --- a/neutron/plugins/ml2/rpc.py +++ b/neutron/plugins/ml2/rpc.py @@ -28,7 +28,7 @@ from neutron.common import rpc as n_rpc from neutron.common import topics from neutron.extensions import portbindings from neutron.extensions import portsecurity as psec -from neutron.i18n import _LW +from neutron.i18n import _LE, _LW from neutron import manager from neutron.plugins.ml2 import driver_api as api from neutron.plugins.ml2.drivers import type_tunnel @@ -48,7 +48,9 @@ class RpcCallbacks(type_tunnel.TunnelRpcCallbackMixin): # return value to include fixed_ips and device_owner for # the device port # 1.4 tunnel_sync rpc signature upgrade to obtain 'host' - target = oslo_messaging.Target(version='1.4') + # 1.5 Support update_device_list and + # get_devices_details_list_and_failed_devices + target = oslo_messaging.Target(version='1.5') def __init__(self, notifier, type_manager): self.setup_tunnel_callback_mixin(notifier, type_manager) @@ -135,6 +137,27 @@ class RpcCallbacks(type_tunnel.TunnelRpcCallbackMixin): for device in kwargs.pop('devices', []) ] + def get_devices_details_list_and_failed_devices(self, + rpc_context, + **kwargs): + devices = [] + failed_devices = [] + cached_networks = {} + for device in kwargs.pop('devices', []): + try: + 
devices.append(self.get_device_details( + rpc_context, + device=device, + cached_networks=cached_networks, + **kwargs)) + except Exception: + LOG.error(_LE("Failed to get details for device %s"), + device) + failed_devices.append(device) + + return {'devices': devices, + 'failed_devices': failed_devices} + def update_device_down(self, rpc_context, **kwargs): """Device no longer exists on agent.""" # TODO(garyk) - live migration and port status @@ -201,6 +224,44 @@ class RpcCallbacks(type_tunnel.TunnelRpcCallbackMixin): registry.notify( resources.PORT, events.AFTER_UPDATE, plugin, **kwargs) + def update_device_list(self, rpc_context, **kwargs): + devices_up = [] + failed_devices_up = [] + devices_down = [] + failed_devices_down = [] + devices = kwargs.get('devices_up') + if devices: + for device in devices: + try: + self.update_device_up( + rpc_context, + device=device, + **kwargs) + except Exception: + failed_devices_up.append(device) + LOG.error(_LE("Failed to update device %s up"), device) + else: + devices_up.append(device) + + devices = kwargs.get('devices_down') + if devices: + for device in devices: + try: + dev = self.update_device_down( + rpc_context, + device=device, + **kwargs) + except Exception: + failed_devices_down.append(device) + LOG.error(_LE("Failed to update device %s down"), device) + else: + devices_down.append(dev) + + return {'devices_up': devices_up, + 'failed_devices_up': failed_devices_up, + 'devices_down': devices_down, + 'failed_devices_down': failed_devices_down} + class AgentNotifierApi(dvr_rpc.DVRAgentRpcApiMixin, sg_rpc.SecurityGroupAgentRpcApiMixin, diff --git a/neutron/plugins/nec/extensions/packetfilter.py b/neutron/plugins/nec/extensions/packetfilter.py index 7c9971f8a96..3d89cf4e25a 100644 --- a/neutron/plugins/nec/extensions/packetfilter.py +++ b/neutron/plugins/nec/extensions/packetfilter.py @@ -21,7 +21,8 @@ from neutron.api.v2 import base from neutron.common import constants from neutron.common import exceptions from neutron 
import manager -from neutron import quota +from neutron.quota import resource as quota_resource +from neutron.quota import resource_registry quota_packet_filter_opts = [ @@ -180,10 +181,10 @@ class Packetfilter(extensions.ExtensionDescriptor): @classmethod def get_resources(cls): - qresource = quota.CountableResource(RESOURCE, - quota._count_resource, - 'quota_%s' % RESOURCE) - quota.QUOTAS.register_resource(qresource) + qresource = quota_resource.CountableResource( + RESOURCE, quota_resource._count_resource, 'quota_%s' % RESOURCE) + + resource_registry.register_resource(qresource) resource = base.create_resource(COLLECTION, RESOURCE, manager.NeutronManager.get_plugin(), diff --git a/neutron/plugins/oneconvergence/plugin.py b/neutron/plugins/oneconvergence/plugin.py index f0295cb7701..d3150f7ea70 100644 --- a/neutron/plugins/oneconvergence/plugin.py +++ b/neutron/plugins/oneconvergence/plugin.py @@ -39,7 +39,6 @@ from neutron.db import extraroute_db from neutron.db import l3_agentschedulers_db from neutron.db import l3_gwmode_db from neutron.db import portbindings_base -from neutron.db import quota_db # noqa from neutron.db import securitygroups_rpc_base as sg_db_rpc from neutron.extensions import portbindings from neutron.i18n import _LE diff --git a/neutron/plugins/plumgrid/plumgrid_plugin/plumgrid_plugin.py b/neutron/plugins/plumgrid/plumgrid_plugin/plumgrid_plugin.py index ee95f2ae381..e69d6d65348 100644 --- a/neutron/plugins/plumgrid/plumgrid_plugin/plumgrid_plugin.py +++ b/neutron/plugins/plumgrid/plumgrid_plugin/plumgrid_plugin.py @@ -17,8 +17,9 @@ from networking_plumgrid.neutron.plugins import plugin class NeutronPluginPLUMgridV2(plugin.NeutronPluginPLUMgridV2): - supported_extension_aliases = ["binding", "external-net", "provider", - "quotas", "router", "security-group"] + supported_extension_aliases = ["binding", "external-net", "extraroute", + "provider", "quotas", "router", + "security-group"] def __init__(self): super(NeutronPluginPLUMgridV2, 
self).__init__() diff --git a/neutron/quota.py b/neutron/quota/__init__.py similarity index 67% rename from neutron/quota.py rename to neutron/quota/__init__.py index e99a01ecdde..97b466e872a 100644 --- a/neutron/quota.py +++ b/neutron/quota/__init__.py @@ -1,4 +1,4 @@ -# Copyright 2011 OpenStack Foundation +# Copyright (c) 2015 OpenStack Foundation. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain @@ -25,14 +25,16 @@ import webob from neutron.common import exceptions from neutron.i18n import _LI, _LW +from neutron.quota import resource_registry LOG = logging.getLogger(__name__) -QUOTA_DB_MODULE = 'neutron.db.quota_db' -QUOTA_DB_DRIVER = 'neutron.db.quota_db.DbQuotaDriver' +QUOTA_DB_MODULE = 'neutron.db.quota.driver' +QUOTA_DB_DRIVER = '%s.DbQuotaDriver' % QUOTA_DB_MODULE QUOTA_CONF_DRIVER = 'neutron.quota.ConfDriver' default_quota_items = ['network', 'subnet', 'port'] + quota_opts = [ cfg.ListOpt('quota_items', default=default_quota_items, @@ -59,6 +61,11 @@ quota_opts = [ cfg.StrOpt('quota_driver', default=QUOTA_DB_DRIVER, help=_('Default driver to use for quota checks')), + cfg.BoolOpt('track_quota_usage', + default=True, + help=_('Keep track in the database of current resource ' + 'quota usage. Plugins which do not leverage the ' + 'neutron database should set this flag to False')), ] # Register the configuration options cfg.CONF.register_opts(quota_opts, 'QUOTAS') @@ -146,67 +153,19 @@ class ConfDriver(object): raise webob.exc.HTTPForbidden(msg) -class BaseResource(object): - """Describe a single resource for quota checking.""" - - def __init__(self, name, flag): - """Initializes a resource. - - :param name: The name of the resource, i.e., "instances".
- :param flag: The name of the flag or configuration option - """ - - self.name = name - self.flag = flag - - @property - def default(self): - """Return the default value of the quota.""" - # Any negative value will be interpreted as an infinite quota, - # and stored as -1 for compatibility with current behaviour - value = getattr(cfg.CONF.QUOTAS, - self.flag, - cfg.CONF.QUOTAS.default_quota) - return max(value, -1) - - -class CountableResource(BaseResource): - """Describe a resource where the counts are determined by a function.""" - - def __init__(self, name, count, flag=None): - """Initializes a CountableResource. - - Countable resources are those resources which directly - correspond to objects in the database, i.e., netowk, subnet, - etc.,. A CountableResource must be constructed with a counting - function, which will be called to determine the current counts - of the resource. - - The counting function will be passed the context, along with - the extra positional and keyword arguments that are passed to - Quota.count(). It should return an integer specifying the - count. - - :param name: The name of the resource, i.e., "instances". - :param count: A callable which returns the count of the - resource. The arguments passed are as described - above. - :param flag: The name of the flag or configuration option - which specifies the default value of the quota - for this resource. 
- """ - - super(CountableResource, self).__init__(name, flag=flag) - self.count = count - - class QuotaEngine(object): """Represent the set of recognized quotas.""" + _instance = None + + @classmethod + def get_instance(cls): + if not cls._instance: + cls._instance = cls() + return cls._instance + def __init__(self, quota_driver_class=None): """Initialize a Quota object.""" - - self._resources = {} self._driver = None self._driver_class = quota_driver_class @@ -226,35 +185,13 @@ class QuotaEngine(object): versionutils.report_deprecated_feature( LOG, _LW("The quota driver neutron.quota.ConfDriver is " "deprecated as of Liberty. " - "neutron.db.quota_db.DbQuotaDriver should be " - "used in its place")) + "neutron.db.quota.driver.DbQuotaDriver should " + "be used in its place")) self._driver = _driver_class LOG.info(_LI('Loaded quota_driver: %s.'), _driver_class) return self._driver - def __contains__(self, resource): - return resource in self._resources - - def register_resource(self, resource): - """Register a resource.""" - if resource.name in self._resources: - LOG.warn(_LW('%s is already registered.'), resource.name) - return - self._resources[resource.name] = resource - - def register_resource_by_name(self, resourcename): - """Register a resource by name.""" - resource = CountableResource(resourcename, _count_resource, - 'quota_' + resourcename) - self.register_resource(resource) - - def register_resources(self, resources): - """Register a list of resources.""" - - for resource in resources: - self.register_resource(resource) - - def count(self, context, resource, *args, **kwargs): + def count(self, context, resource_name, *args, **kwargs): """Count a resource. For countable resources, invokes the count() function and @@ -263,13 +200,13 @@ class QuotaEngine(object): the resource. :param context: The request context, for access checks. - :param resource: The name of the resource, as a string. + :param resource_name: The name of the resource, as a string. 
""" # Get the resource - res = self._resources.get(resource) + res = resource_registry.get_resource(resource_name) if not res or not hasattr(res, 'count'): - raise exceptions.QuotaResourceUnknown(unknown=[resource]) + raise exceptions.QuotaResourceUnknown(unknown=[resource_name]) return res.count(context, *args, **kwargs) @@ -297,7 +234,8 @@ class QuotaEngine(object): """ # Verify that resources are managed by the quota engine requested_resources = set(values.keys()) - managed_resources = set([res for res in self._resources.keys() + managed_resources = set([res for res in + resource_registry.get_all_resources() if res in requested_resources]) # Make sure we accounted for all of them... @@ -306,31 +244,11 @@ class QuotaEngine(object): raise exceptions.QuotaResourceUnknown( unknown=sorted(unknown_resources)) - return self.get_driver().limit_check(context, tenant_id, - self._resources, values) - - @property - def resources(self): - return self._resources + return self.get_driver().limit_check( + context, tenant_id, resource_registry.get_all_resources(), values) -QUOTAS = QuotaEngine() - - -def _count_resource(context, plugin, resources, tenant_id): - count_getter_name = "get_%s_count" % resources - - # Some plugins support a count method for particular resources, - # using a DB's optimized counting features. We try to use that one - # if present. 
Otherwise just use regular getter to retrieve all objects - # and count in python, allowing older plugins to still be supported - try: - obj_count_getter = getattr(plugin, count_getter_name) - return obj_count_getter(context, filters={'tenant_id': [tenant_id]}) - except (NotImplementedError, AttributeError): - obj_getter = getattr(plugin, "get_%s" % resources) - obj_list = obj_getter(context, filters={'tenant_id': [tenant_id]}) - return len(obj_list) if obj_list else 0 +QUOTAS = QuotaEngine.get_instance() def register_resources_from_config(): @@ -342,12 +260,9 @@ def register_resources_from_config(): "quota_items option is deprecated as of Liberty." "Resource REST controllers should take care of registering " "resources with the quota engine.")) - resources = [] for resource_item in (set(cfg.CONF.QUOTAS.quota_items) - set(default_quota_items)): - resources.append(CountableResource(resource_item, _count_resource, - 'quota_' + resource_item)) - QUOTAS.register_resources(resources) + resource_registry.register_resource_by_name(resource_item) register_resources_from_config() diff --git a/neutron/quota/resource.py b/neutron/quota/resource.py new file mode 100644 index 00000000000..d9a716a5eb8 --- /dev/null +++ b/neutron/quota/resource.py @@ -0,0 +1,277 @@ +# Copyright (c) 2015 OpenStack Foundation. All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. 
+ +from oslo_concurrency import lockutils +from oslo_config import cfg +from oslo_db import api as oslo_db_api +from oslo_db import exception as oslo_db_exception +from oslo_log import log +from sqlalchemy import event + +from neutron.db import api as db_api +from neutron.db.quota import api as quota_api +from neutron.i18n import _LE + +LOG = log.getLogger(__name__) + + +def _count_resource(context, plugin, collection_name, tenant_id): + count_getter_name = "get_%s_count" % collection_name + + # Some plugins support a count method for particular resources, + # using a DB's optimized counting features. We try to use that one + # if present. Otherwise just use regular getter to retrieve all objects + # and count in python, allowing older plugins to still be supported + try: + obj_count_getter = getattr(plugin, count_getter_name) + count = obj_count_getter(context, filters={'tenant_id': [tenant_id]}) + return count + except (NotImplementedError, AttributeError): + obj_getter = getattr(plugin, "get_%s" % collection_name) + obj_list = obj_getter(context, filters={'tenant_id': [tenant_id]}) + return len(obj_list) if obj_list else 0 + + +class BaseResource(object): + """Describe a single resource for quota checking.""" + + def __init__(self, name, flag, plural_name=None): + """Initializes a resource. + + :param name: The name of the resource, i.e., "instances". + :param flag: The name of the flag or configuration option + :param plural_name: Plural form of the resource name. If not + specified, it is generated automatically by + appending an 's' to the resource name, unless + it ends with a 'y'. In that case the last + letter is removed, and 'ies' is appended. + Dashes are always converted to underscores. + """ + + self.name = name + # If a plural name is not supplied, default to adding an 's' to + # the resource name, unless the resource name ends in 'y', in which + # case remove the 'y' and add 'ies'.
Even if the code should not fiddle + # too much with English grammar, this is a rather common and easy to + # implement rule. + if plural_name: + self.plural_name = plural_name + elif self.name[-1] == 'y': + self.plural_name = "%sies" % self.name[:-1] + else: + self.plural_name = "%ss" % self.name + # always convert dashes to underscores + self.plural_name = self.plural_name.replace('-', '_') + self.flag = flag + + @property + def default(self): + """Return the default value of the quota.""" + # Any negative value will be interpreted as an infinite quota, + # and stored as -1 for compatibility with current behaviour + value = getattr(cfg.CONF.QUOTAS, + self.flag, + cfg.CONF.QUOTAS.default_quota) + return max(value, -1) + + @property + def dirty(self): + """Return the current state of the Resource instance. + + :returns: True if the resource count is out of sync with actual data, + False if it is in sync, and None if the resource instance + does not track usage. + """ + + +class CountableResource(BaseResource): + """Describe a resource where the counts are determined by a function.""" + + def __init__(self, name, count, flag=None, plural_name=None): + """Initializes a CountableResource. + + Countable resources are those resources which directly + correspond to objects in the database, i.e., network, subnet, + etc. A CountableResource must be constructed with a counting + function, which will be called to determine the current counts + of the resource. + + The counting function will be passed the context, along with + the extra positional and keyword arguments that are passed to + Quota.count(). It should return an integer specifying the + count. + + :param name: The name of the resource, i.e., "instances". + :param count: A callable which returns the count of the + resource. The arguments passed are as described + above. + :param flag: The name of the flag or configuration option + which specifies the default value of the quota + for this resource.
+ :param plural_name: Plural form of the resource name. If not + specified, it is generated automatically by + appending an 's' to the resource name, unless + it ends with a 'y'. In that case the last + letter is removed, and 'ies' is appended. + Dashes are always converted to underscores. + """ + + super(CountableResource, self).__init__( + name, flag=flag, plural_name=plural_name) + self._count_func = count + + def count(self, context, plugin, tenant_id): + return self._count_func(context, plugin, self.plural_name, tenant_id) + + +class TrackedResource(BaseResource): + """Resource which keeps track of its usage data.""" + + def __init__(self, name, model_class, flag, plural_name=None): + """Initializes an instance for a given resource. + + TrackedResource objects are directly mapped to data model classes. + Resource usage is tracked in the database, and the model class to + which this resource refers is monitored to ensure always "fresh" + usage data are employed when performing quota checks. + + This class operates under the assumption that the model class + describing the resource has a tenant identifier attribute. + + :param name: The name of the resource, i.e., "networks". + :param model_class: The sqlalchemy model class of the resource for + which this instance is being created + :param flag: The name of the flag or configuration option + which specifies the default value of the quota + for this resource. + :param plural_name: Plural form of the resource name. If not + specified, it is generated automatically by + appending an 's' to the resource name, unless + it ends with a 'y'. In that case the last + letter is removed, and 'ies' is appended. + Dashes are always converted to underscores.
+ + """ + super(TrackedResource, self).__init__( + name, flag=flag, plural_name=plural_name) + # Register events for addition/removal of records in the model class + # As tenant_id is immutable for all Neutron objects there is no need + # to register a listener for update events + self._model_class = model_class + self._dirty_tenants = set() + self._out_of_sync_tenants = set() + + @property + def dirty(self): + return self._dirty_tenants + + @lockutils.synchronized('dirty_tenants') + def mark_dirty(self, context, nested=False): + if not self._dirty_tenants: + return + with context.session.begin(nested=nested, subtransactions=True): + for tenant_id in self._dirty_tenants: + quota_api.set_quota_usage_dirty(context, self.name, tenant_id) + LOG.debug(("Persisted dirty status for tenant:%(tenant_id)s " + "on resource:%(resource)s"), + {'tenant_id': tenant_id, 'resource': self.name}) + self._out_of_sync_tenants |= self._dirty_tenants + self._dirty_tenants.clear() + + @lockutils.synchronized('dirty_tenants') + def _db_event_handler(self, mapper, _conn, target): + tenant_id = target.get('tenant_id') + if not tenant_id: + # NOTE: This is an unexpected error condition. Log anomaly but do + # not raise as this might have unexpected effects on other + # operations + LOG.error(_LE("Model class %s does not have tenant_id attribute"), + target) + return + self._dirty_tenants.add(tenant_id) + + # Retry the operation if a duplicate entry exception is raised. This + # can happen is two or more workers are trying to create a resource of a + # give kind for the same tenant concurrently. 
Retrying the operation will + # ensure that an UPDATE statement is emitted rather than an INSERT one + @oslo_db_api.wrap_db_retry( + max_retries=db_api.MAX_RETRIES, + exception_checker=lambda exc: + isinstance(exc, oslo_db_exception.DBDuplicateEntry)) + def _set_quota_usage(self, context, tenant_id, in_use): + return quota_api.set_quota_usage(context, self.name, + tenant_id, in_use=in_use) + + def _resync(self, context, tenant_id, in_use): + # Update quota usage + usage_info = self._set_quota_usage( + context, tenant_id, in_use=in_use) + self._dirty_tenants.discard(tenant_id) + self._out_of_sync_tenants.discard(tenant_id) + LOG.debug(("Unset dirty status for tenant:%(tenant_id)s on " + "resource:%(resource)s"), + {'tenant_id': tenant_id, 'resource': self.name}) + return usage_info + + def resync(self, context, tenant_id): + if tenant_id not in self._out_of_sync_tenants: + return + LOG.debug(("Synchronizing usage tracker for tenant:%(tenant_id)s on " + "resource:%(resource)s"), + {'tenant_id': tenant_id, 'resource': self.name}) + in_use = context.session.query(self._model_class).filter_by( + tenant_id=tenant_id).count() + # Update quota usage + return self._resync(context, tenant_id, in_use) + + def count(self, context, _plugin, tenant_id, resync_usage=False): + """Return the current usage count for the resource.""" + # Load current usage data + usage_info = quota_api.get_quota_usage_by_resource_and_tenant( + context, self.name, tenant_id) + # If dirty or missing, calculate actual resource usage querying + # the database and set/create usage info data + # NOTE: this routine "trusts" usage counters at service startup. 
This + # assumption is generally valid, but if the database is tampered with, + # or if data migrations do not take care of usage counters, the + # assumption will not hold anymore + if (tenant_id in self._dirty_tenants or not usage_info + or usage_info.dirty): + LOG.debug(("Usage tracker for resource:%(resource)s and tenant:" + "%(tenant_id)s is out of sync, need to count used " + "quota"), {'resource': self.name, + 'tenant_id': tenant_id}) + in_use = context.session.query(self._model_class).filter_by( + tenant_id=tenant_id).count() + # Update quota usage, if requested (by default do not do that, as + # typically one counts before adding a record, and that would mark + # the usage counter as dirty again) + if resync_usage or not usage_info: + usage_info = self._resync(context, tenant_id, in_use) + else: + usage_info = quota_api.QuotaUsageInfo(usage_info.resource, + usage_info.tenant_id, + in_use, + usage_info.reserved, + usage_info.dirty) + + return usage_info.total + + def register_events(self): + event.listen(self._model_class, 'after_insert', self._db_event_handler) + event.listen(self._model_class, 'after_delete', self._db_event_handler) + + def unregister_events(self): + event.remove(self._model_class, 'after_insert', self._db_event_handler) + event.remove(self._model_class, 'after_delete', self._db_event_handler) diff --git a/neutron/quota/resource_registry.py b/neutron/quota/resource_registry.py new file mode 100644 index 00000000000..d0263e87614 --- /dev/null +++ b/neutron/quota/resource_registry.py @@ -0,0 +1,248 @@ +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the +# License for the specific language governing permissions and limitations +# under the License. + +from oslo_config import cfg +from oslo_log import log +import six + +from neutron.i18n import _LI, _LW +from neutron.quota import resource + +LOG = log.getLogger(__name__) + + +# Wrappers for easing access to the ResourceRegistry singleton + + +def register_resource(resource): + ResourceRegistry.get_instance().register_resource(resource) + + +def register_resource_by_name(resource_name, plural_name=None): + ResourceRegistry.get_instance().register_resource_by_name( + resource_name, plural_name) + + +def get_all_resources(): + return ResourceRegistry.get_instance().resources + + +def get_resource(resource_name): + return ResourceRegistry.get_instance().get_resource(resource_name) + + +def is_tracked(resource_name): + return ResourceRegistry.get_instance().is_tracked(resource_name) + + +# auxiliary functions and decorators + + +def set_resources_dirty(context): + """Sets the dirty bit for resources with usage changes. + + This routine scans all registered resources, and, for those whose + dirty status is True, sets the dirty bit to True in the database + for the appropriate tenants. + + Please note that this routine begins a nested transaction, and it + is not recommended that this transaction begins within another + transaction. For this reason the function will raise a SqlAlchemy + exception if such an attempt is made. 
+ + :param context: a Neutron request context with a DB session + """ + if not cfg.CONF.QUOTAS.track_quota_usage: + return + + for res in get_all_resources().values(): + with context.session.begin(): + if is_tracked(res.name) and res.dirty: + res.mark_dirty(context, nested=True) + + +def resync_resource(context, resource_name, tenant_id): + if not cfg.CONF.QUOTAS.track_quota_usage: + return + + if is_tracked(resource_name): + res = get_resource(resource_name) + # If the resource is tracked, count supports the resync_usage parameter + res.resync(context, tenant_id) + + +def mark_resources_dirty(f): + """Decorator for functions which alter resource usage. + + This decorator ensures set_resources_dirty is invoked after completion + of the decorated function. + """ + + @six.wraps(f) + def wrapper(_self, context, *args, **kwargs): + ret_val = f(_self, context, *args, **kwargs) + set_resources_dirty(context) + return ret_val + + return wrapper + + +class tracked_resources(object): + """Decorator for specifying resources for which usage should be tracked. + + A plugin class can use this decorator to specify for which resources + usage info should be tracked into an appropriate table rather than being + explicitly counted. + """ + + def __init__(self, override=False, **kwargs): + self._tracked_resources = kwargs + self._override = override + + def __call__(self, f): + + @six.wraps(f) + def wrapper(*args, **kwargs): + registry = ResourceRegistry.get_instance() + for resource_name in self._tracked_resources: + registry.set_tracked_resource( + resource_name, + self._tracked_resources[resource_name], + self._override) + return f(*args, **kwargs) + + return wrapper + + +class ResourceRegistry(object): + """Registry for resources subject to quota limits. + + This class keeps track of Neutron resources for which quota limits are + enforced, regardless of whether their usage is being tracked or counted.
+ + For tracked-usage resources, that is to say those resources for which + there are usage counters which are kept in sync with the actual number + of rows in the database, this class allows the plugin to register their + names either explicitly or through the @tracked_resources decorator, + which should preferably be applied to the __init__ method of the class. + """ + + _instance = None + + @classmethod + def get_instance(cls): + if cls._instance is None: + cls._instance = cls() + return cls._instance + + def __init__(self): + self._resources = {} + # Map usage tracked resources to the corresponding db model class + self._tracked_resource_mappings = {} + + def __contains__(self, resource): + return resource in self._resources + + def _create_resource_instance(self, resource_name, plural_name): + """Factory function for quota Resource. + + This routine returns a resource instance of the appropriate type + according to system configuration. + + If QUOTAS.track_quota_usage is True, and there is a model mapping for + the current resource, this function will return an instance of + TrackedResource; otherwise an instance of CountableResource.
+ """ + + if (not cfg.CONF.QUOTAS.track_quota_usage or + resource_name not in self._tracked_resource_mappings): + LOG.info(_LI("Creating instance of CountableResource for " + "resource:%s"), resource_name) + return resource.CountableResource( + resource_name, resource._count_resource, + 'quota_%s' % resource_name) + else: + LOG.info(_LI("Creating instance of TrackedResource for " + "resource:%s"), resource_name) + return resource.TrackedResource( + resource_name, + self._tracked_resource_mappings[resource_name], + 'quota_%s' % resource_name) + + def set_tracked_resource(self, resource_name, model_class, override=False): + # Do not do anything if tracking is disabled by config + if not cfg.CONF.QUOTAS.track_quota_usage: + return + + current_model_class = self._tracked_resource_mappings.setdefault( + resource_name, model_class) + + # Check whether setdefault also set the entry in the dict + if current_model_class != model_class: + LOG.debug("A model class is already defined for %(resource)s: " + "%(current_model_class)s. Override:%(override)s", + {'resource': resource_name, + 'current_model_class': current_model_class, + 'override': override}) + if override: + self._tracked_resource_mappings[resource_name] = model_class + LOG.debug("Tracking information for resource: %s configured", + resource_name) + + def is_tracked(self, resource_name): + """Find out if a resource if tracked or not. + + :param resource_name: name of the resource. + :returns True if resource_name is registered and tracked, otherwise + False. Please note that here when False it returned it + simply means that resource_name is not a TrackedResource + instance, it does not necessarily mean that the resource + is not registered. 
+ """ + return resource_name in self._tracked_resource_mappings + + def register_resource(self, resource): + if resource.name in self._resources: + LOG.warn(_LW('%s is already registered'), resource.name) + if resource.name in self._tracked_resource_mappings: + resource.register_events() + self._resources[resource.name] = resource + + def register_resources(self, resources): + for res in resources: + self.register_resource(res) + + def register_resource_by_name(self, resource_name, + plural_name=None): + """Register a resource by name.""" + resource = self._create_resource_instance( + resource_name, plural_name) + self.register_resource(resource) + + def unregister_resources(self): + """Unregister all resources.""" + for (res_name, res) in self._resources.items(): + if res_name in self._tracked_resource_mappings: + res.unregister_events() + self._resources.clear() + self._tracked_resource_mappings.clear() + + def get_resource(self, resource_name): + """Return a resource given its name. + + :returns: The resource instance or None if the resource is not found + """ + return self._resources.get(resource_name) + + @property + def resources(self): + return self._resources diff --git a/neutron/scheduler/l3_agent_scheduler.py b/neutron/scheduler/l3_agent_scheduler.py index 4d20ba7b888..223a5487442 100644 --- a/neutron/scheduler/l3_agent_scheduler.py +++ b/neutron/scheduler/l3_agent_scheduler.py @@ -200,7 +200,7 @@ class L3Scheduler(object): if router.get('ha'): if not self._router_has_binding(context, router['id'], l3_agent.id): - self._create_ha_router_binding( + self.create_ha_port_and_bind( plugin, context, router['id'], router['tenant_id'], l3_agent) else: @@ -289,8 +289,8 @@ class L3Scheduler(object): return False return True - def _create_ha_router_binding(self, plugin, context, router_id, tenant_id, - agent): + def create_ha_port_and_bind(self, plugin, context, router_id, + tenant_id, agent): """Creates and binds a new HA port for this agent.""" ha_network = 
plugin.get_ha_network(context, tenant_id) port_binding = plugin.add_ha_port(context.elevated(), router_id, @@ -316,9 +316,9 @@ class L3Scheduler(object): if max_agents_not_reached: if not self._router_has_binding(admin_ctx, router_id, agent.id): - self._create_ha_router_binding(plugin, admin_ctx, - router_id, tenant_id, - agent) + self.create_ha_port_and_bind(plugin, admin_ctx, + router_id, tenant_id, + agent) scheduled = True return scheduled diff --git a/neutron/service.py b/neutron/service.py index ee8432dea50..4cec3357078 100644 --- a/neutron/service.py +++ b/neutron/service.py @@ -14,7 +14,6 @@ # under the License. import inspect -import logging as std_logging import os import random @@ -92,8 +91,6 @@ class NeutronApiService(WsgiService): # Log the options used when starting if we're in debug mode... config.setup_logging() - # Dump the initial option values - cfg.CONF.log_opt_values(LOG, std_logging.DEBUG) service = cls(app_name) return service @@ -186,8 +183,6 @@ def _run_wsgi(app_name): server = wsgi.Server("Neutron") server.start(app, cfg.CONF.bind_port, cfg.CONF.bind_host, workers=_get_api_workers()) - # Dump all option values here after all options are parsed - cfg.CONF.log_opt_values(LOG, std_logging.DEBUG) LOG.info(_LI("Neutron service started, listening on %(host)s:%(port)s"), {'host': cfg.CONF.bind_host, 'port': cfg.CONF.bind_port}) return server diff --git a/neutron/services/l3_router/l3_apic.py b/neutron/services/l3_router/l3_apic.py deleted file mode 100644 index 651fac23986..00000000000 --- a/neutron/services/l3_router/l3_apic.py +++ /dev/null @@ -1,184 +0,0 @@ -# Copyright (c) 2014 Cisco Systems Inc. -# All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. 
You may obtain -# a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT -# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the -# License for the specific language governing permissions and limitations -# under the License. - -from apicapi import apic_mapper -from oslo_utils import excutils - -from neutron.db import db_base_plugin_v2 -from neutron.db import extraroute_db -from neutron.db import l3_dvr_db -from neutron.plugins.common import constants - -from neutron.plugins.ml2.drivers.cisco.apic import mechanism_apic - - -class ApicL3ServicePlugin(db_base_plugin_v2.NeutronDbPluginV2, - l3_dvr_db.L3_NAT_with_dvr_db_mixin, - extraroute_db.ExtraRoute_db_mixin): - supported_extension_aliases = ["router", "ext-gw-mode", "extraroute"] - - def __init__(self): - super(ApicL3ServicePlugin, self).__init__() - self.manager = mechanism_apic.APICMechanismDriver.get_apic_manager() - self.name_mapper = self.manager.apic_mapper - self.synchronizer = None - self.manager.ensure_infra_created_on_apic() - self.manager.ensure_bgp_pod_policy_created_on_apic() - - def _map_names(self, context, - tenant_id, router_id, net_id, subnet_id): - context._plugin = self - with apic_mapper.mapper_context(context) as ctx: - atenant_id = tenant_id and self.name_mapper.tenant(ctx, tenant_id) - arouter_id = router_id and self.name_mapper.router(ctx, router_id) - anet_id = net_id and self.name_mapper.network(ctx, net_id) - asubnet_id = subnet_id and self.name_mapper.subnet(ctx, subnet_id) - return atenant_id, arouter_id, anet_id, asubnet_id - - @staticmethod - def get_plugin_type(): - return constants.L3_ROUTER_NAT - - @staticmethod - def get_plugin_description(): - """Returns string description of the plugin.""" - return _("L3 Router Service Plugin for basic L3 using the APIC") - - def sync_init(f): - def 
inner(inst, *args, **kwargs): - if not inst.synchronizer: - inst.synchronizer = ( - mechanism_apic.APICMechanismDriver. - get_router_synchronizer(inst)) - inst.synchronizer.sync_router() - # pylint: disable=not-callable - return f(inst, *args, **kwargs) - return inner - - def add_router_interface_postcommit(self, context, router_id, - interface_info): - # Update router's state first - router = self.get_router(context, router_id) - self.update_router_postcommit(context, router) - - # Add router interface - if 'subnet_id' in interface_info: - subnet = self.get_subnet(context, interface_info['subnet_id']) - network_id = subnet['network_id'] - tenant_id = subnet['tenant_id'] - else: - port = self.get_port(context, interface_info['port_id']) - network_id = port['network_id'] - tenant_id = port['tenant_id'] - - # Map openstack IDs to APIC IDs - atenant_id, arouter_id, anetwork_id, _ = self._map_names( - context, tenant_id, router_id, network_id, None) - - # Program APIC - self.manager.add_router_interface(atenant_id, arouter_id, - anetwork_id) - - def remove_router_interface_precommit(self, context, router_id, - interface_info): - if 'subnet_id' in interface_info: - subnet = self.get_subnet(context, interface_info['subnet_id']) - network_id = subnet['network_id'] - tenant_id = subnet['tenant_id'] - else: - port = self.get_port(context, interface_info['port_id']) - network_id = port['network_id'] - tenant_id = port['tenant_id'] - - # Map openstack IDs to APIC IDs - atenant_id, arouter_id, anetwork_id, _ = self._map_names( - context, tenant_id, router_id, network_id, None) - - # Program APIC - self.manager.remove_router_interface(atenant_id, arouter_id, - anetwork_id) - - def delete_router_precommit(self, context, router_id): - context._plugin = self - with apic_mapper.mapper_context(context) as ctx: - arouter_id = router_id and self.name_mapper.router(ctx, router_id) - self.manager.delete_router(arouter_id) - - def update_router_postcommit(self, context, router): - 
context._plugin = self - with apic_mapper.mapper_context(context) as ctx: - arouter_id = router['id'] and self.name_mapper.router(ctx, - router['id']) - with self.manager.apic.transaction() as trs: - self.manager.create_router(arouter_id, transaction=trs) - if router['admin_state_up']: - self.manager.enable_router(arouter_id, transaction=trs) - else: - self.manager.disable_router(arouter_id, transaction=trs) - - # Router API - - @sync_init - def create_router(self, *args, **kwargs): - return super(ApicL3ServicePlugin, self).create_router(*args, **kwargs) - - @sync_init - def update_router(self, context, id, router): - result = super(ApicL3ServicePlugin, self).update_router(context, - id, router) - self.update_router_postcommit(context, result) - return result - - @sync_init - def get_router(self, *args, **kwargs): - return super(ApicL3ServicePlugin, self).get_router(*args, **kwargs) - - @sync_init - def get_routers(self, *args, **kwargs): - return super(ApicL3ServicePlugin, self).get_routers(*args, **kwargs) - - @sync_init - def get_routers_count(self, *args, **kwargs): - return super(ApicL3ServicePlugin, self).get_routers_count(*args, - **kwargs) - - def delete_router(self, context, router_id): - self.delete_router_precommit(context, router_id) - result = super(ApicL3ServicePlugin, self).delete_router(context, - router_id) - return result - - # Router Interface API - - @sync_init - def add_router_interface(self, context, router_id, interface_info): - # Create interface in parent - result = super(ApicL3ServicePlugin, self).add_router_interface( - context, router_id, interface_info) - try: - self.add_router_interface_postcommit(context, router_id, - interface_info) - except Exception: - with excutils.save_and_reraise_exception(): - # Rollback db operation - super(ApicL3ServicePlugin, self).remove_router_interface( - context, router_id, interface_info) - return result - - def remove_router_interface(self, context, router_id, interface_info): - 
self.remove_router_interface_precommit(context, router_id, - interface_info) - return super(ApicL3ServicePlugin, self).remove_router_interface( - context, router_id, interface_info) diff --git a/neutron/services/l3_router/l3_router_plugin.py b/neutron/services/l3_router/l3_router_plugin.py index 91f8ad9d03c..289f81d1b3d 100644 --- a/neutron/services/l3_router/l3_router_plugin.py +++ b/neutron/services/l3_router/l3_router_plugin.py @@ -14,12 +14,12 @@ # under the License. from oslo_config import cfg +from oslo_log import helpers as log_helpers from oslo_utils import importutils from neutron.api.rpc.agentnotifiers import l3_rpc_agent_api from neutron.api.rpc.handlers import l3_rpc from neutron.common import constants as n_const -from neutron.common import log as neutron_log from neutron.common import rpc as n_rpc from neutron.common import topics from neutron.db import common_db_mixin @@ -62,7 +62,7 @@ class L3RouterPlugin(common_db_mixin.CommonDbMixin, l3_dvrscheduler_db.subscribe() l3_db.subscribe() - @neutron_log.log + @log_helpers.log_method_call def setup_rpc(self): # RPC support self.topic = topics.L3PLUGIN diff --git a/neutron/tests/api/admin/test_quotas.py b/neutron/tests/api/admin/test_quotas.py index 0dfe7987584..f29d438c787 100644 --- a/neutron/tests/api/admin/test_quotas.py +++ b/neutron/tests/api/admin/test_quotas.py @@ -35,7 +35,7 @@ class QuotasTest(base.BaseAdminNetworkTest): It is also assumed that the per-tenant quota extension API is configured in /etc/neutron/neutron.conf as follows: - quota_driver = neutron.db.quota_db.DbQuotaDriver + quota_driver = neutron.db.driver.DbQuotaDriver """ @classmethod diff --git a/neutron/tests/api/admin/test_shared_network_extension.py b/neutron/tests/api/admin/test_shared_network_extension.py index 64fb33e7429..569e07f1a72 100644 --- a/neutron/tests/api/admin/test_shared_network_extension.py +++ b/neutron/tests/api/admin/test_shared_network_extension.py @@ -32,6 +32,49 @@ class 
SharedNetworksTest(base.BaseAdminNetworkTest): super(SharedNetworksTest, cls).resource_setup() cls.shared_network = cls.create_shared_network() + @test.idempotent_id('6661d219-b96d-4597-ad10-55766123421a') + def test_filtering_shared_networks(self): + # this test is necessary because the 'shared' column does not actually + # exist on networks so the filter function has to translate it into + # queries against the RBAC table + self.create_network() + self._check_shared_correct( + self.client.list_networks(shared=True)['networks'], True) + self._check_shared_correct( + self.admin_client.list_networks(shared=True)['networks'], True) + self._check_shared_correct( + self.client.list_networks(shared=False)['networks'], False) + self._check_shared_correct( + self.admin_client.list_networks(shared=False)['networks'], False) + + def _check_shared_correct(self, items, shared): + self.assertNotEmpty(items) + self.assertTrue(all(n['shared'] == shared for n in items)) + + @test.idempotent_id('6661d219-b96d-4597-ad10-51672353421a') + def test_filtering_shared_subnets(self): + # shared subnets need to be tested because their shared status isn't + # visible as a regular API attribute and it's solely dependent on the + # parent network + reg = self.create_network() + priv = self.create_subnet(reg, client=self.client) + shared = self.create_subnet(self.shared_network, + client=self.admin_client) + self.assertIn(shared, self.client.list_subnets(shared=True)['subnets']) + self.assertIn(shared, + self.admin_client.list_subnets(shared=True)['subnets']) + self.assertNotIn(priv, + self.client.list_subnets(shared=True)['subnets']) + self.assertNotIn(priv, + self.admin_client.list_subnets(shared=True)['subnets']) + self.assertIn(priv, self.client.list_subnets(shared=False)['subnets']) + self.assertIn(priv, + self.admin_client.list_subnets(shared=False)['subnets']) + self.assertNotIn(shared, + self.client.list_subnets(shared=False)['subnets']) + self.assertNotIn(shared, + 
self.admin_client.list_subnets(shared=False)['subnets']) + @test.idempotent_id('6661d219-b96d-4597-ad10-55766ce4abf7') def test_create_update_shared_network(self): shared_network = self.create_shared_network() diff --git a/neutron/tests/api/base.py b/neutron/tests/api/base.py index 25ae565e580..2790240eb5f 100644 --- a/neutron/tests/api/base.py +++ b/neutron/tests/api/base.py @@ -82,11 +82,15 @@ class BaseNetworkTest(neutron.tests.tempest.test.BaseTestCase): cls.ikepolicies = [] cls.floating_ips = [] cls.metering_labels = [] + cls.service_profiles = [] + cls.flavors = [] cls.metering_label_rules = [] cls.fw_rules = [] cls.fw_policies = [] cls.ipsecpolicies = [] cls.ethertype = "IPv" + str(cls._ip_version) + cls.address_scopes = [] + cls.admin_address_scopes = [] @classmethod def resource_cleanup(cls): @@ -146,6 +150,16 @@ class BaseNetworkTest(neutron.tests.tempest.test.BaseTestCase): cls._try_delete_resource( cls.admin_client.delete_metering_label, metering_label['id']) + # Clean up flavors + for flavor in cls.flavors: + cls._try_delete_resource( + cls.admin_client.delete_flavor, + flavor['id']) + # Clean up service profiles + for service_profile in cls.service_profiles: + cls._try_delete_resource( + cls.admin_client.delete_service_profile, + service_profile['id']) # Clean up ports for port in cls.ports: cls._try_delete_resource(cls.client.delete_port, @@ -164,6 +178,15 @@ class BaseNetworkTest(neutron.tests.tempest.test.BaseTestCase): cls._try_delete_resource(cls.admin_client.delete_network, network['id']) + for address_scope in cls.address_scopes: + cls._try_delete_resource(cls.client.delete_address_scope, + address_scope['id']) + + for address_scope in cls.admin_address_scopes: + cls._try_delete_resource( + cls.admin_client.delete_address_scope, + address_scope['id']) + cls.clear_isolated_creds() super(BaseNetworkTest, cls).resource_cleanup() @@ -428,6 +451,16 @@ class BaseNetworkTest(neutron.tests.tempest.test.BaseTestCase): 
cls.ipsecpolicies.append(ipsecpolicy) return ipsecpolicy + @classmethod + def create_address_scope(cls, name, is_admin=False, **kwargs): + if is_admin: + body = cls.admin_client.create_address_scope(name=name, **kwargs) + cls.admin_address_scopes.append(body['address_scope']) + else: + body = cls.client.create_address_scope(name=name, **kwargs) + cls.address_scopes.append(body['address_scope']) + return body['address_scope'] + class BaseAdminNetworkTest(BaseNetworkTest): @@ -464,3 +497,22 @@ class BaseAdminNetworkTest(BaseNetworkTest): metering_label_rule = body['metering_label_rule'] cls.metering_label_rules.append(metering_label_rule) return metering_label_rule + + @classmethod + def create_flavor(cls, name, description, service_type): + """Wrapper utility that returns a test flavor.""" + body = cls.admin_client.create_flavor( + description=description, service_type=service_type, + name=name) + flavor = body['flavor'] + cls.flavors.append(flavor) + return flavor + + @classmethod + def create_service_profile(cls, description, metainfo, driver): + """Wrapper utility that returns a test service profile.""" + body = cls.admin_client.create_service_profile( + driver=driver, metainfo=metainfo, description=description) + service_profile = body['service_profile'] + cls.service_profiles.append(service_profile) + return service_profile diff --git a/neutron/tests/api/test_address_scopes.py b/neutron/tests/api/test_address_scopes.py new file mode 100644 index 00000000000..a80319b39a3 --- /dev/null +++ b/neutron/tests/api/test_address_scopes.py @@ -0,0 +1,121 @@ +# Copyright (c) 2015 Red Hat, Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. 
You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. + +from tempest_lib.common.utils import data_utils +from tempest_lib import exceptions as lib_exc + +from neutron.tests.api import base +from neutron.tests.api import clients +from neutron.tests.tempest import config +from neutron.tests.tempest import test + +CONF = config.CONF +ADDRESS_SCOPE_NAME = 'smoke-address-scope' + + +class AddressScopeTestBase(base.BaseNetworkTest): + + @classmethod + def resource_setup(cls): + super(AddressScopeTestBase, cls).resource_setup() + try: + creds = cls.isolated_creds.get_admin_creds() + cls.os_adm = clients.Manager(credentials=creds) + except NotImplementedError: + msg = ("Missing Administrative Network API credentials " + "in configuration.") + raise cls.skipException(msg) + cls.admin_client = cls.os_adm.network_client + + def _create_address_scope(self, is_admin=False, **kwargs): + name = data_utils.rand_name(ADDRESS_SCOPE_NAME) + return self.create_address_scope(name=name, is_admin=is_admin, + **kwargs) + + def _test_update_address_scope_helper(self, is_admin=False, shared=None): + address_scope = self._create_address_scope(is_admin=is_admin) + + if is_admin: + client = self.admin_client + else: + client = self.client + + kwargs = {'name': 'new_name'} + if shared is not None: + kwargs['shared'] = shared + + client.update_address_scope(address_scope['id'], **kwargs) + body = client.show_address_scope(address_scope['id']) + address_scope = body['address_scope'] + self.assertEqual('new_name', address_scope['name']) + return address_scope + + +class AddressScopeTest(AddressScopeTestBase): + + @test.attr(type='smoke') + 
@test.idempotent_id('045f9294-8b1a-4848-b6a8-edf1b41e9d06') + def test_tenant_create_list_address_scope(self): + address_scope = self._create_address_scope() + body = self.client.list_address_scopes() + returned_address_scopes = body['address_scopes'] + self.assertIn(address_scope['id'], + [a_s['id'] for a_s in returned_address_scopes], + "Created address scope id should be in the list") + self.assertIn(address_scope['name'], + [a_s['name'] for a_s in returned_address_scopes], + "Created address scope name should be in the list") + + @test.attr(type='smoke') + @test.idempotent_id('85e0326b-4c75-4b92-bd6e-7c7de6aaf05c') + def test_show_address_scope(self): + address_scope = self._create_address_scope() + body = self.client.show_address_scope( + address_scope['id']) + returned_address_scope = body['address_scope'] + self.assertEqual(address_scope['id'], returned_address_scope['id']) + self.assertEqual(address_scope['name'], + returned_address_scope['name']) + self.assertFalse(returned_address_scope['shared']) + + @test.attr(type='smoke') + @test.idempotent_id('85a259b2-ace6-4e32-9657-a9a392b452aa') + def test_tenant_update_address_scope(self): + self._test_update_address_scope_helper() + + @test.attr(type='smoke') + @test.idempotent_id('22b3b600-72a8-4b60-bc94-0f29dd6271df') + def test_delete_address_scope(self): + address_scope = self._create_address_scope() + self.client.delete_address_scope(address_scope['id']) + self.assertRaises(lib_exc.NotFound, self.client.show_address_scope, + address_scope['id']) + + @test.attr(type='smoke') + @test.idempotent_id('5a06c287-8036-4d04-9d78-def8e06d43df') + def test_admin_create_shared_address_scope(self): + address_scope = self._create_address_scope(is_admin=True, shared=True) + body = self.admin_client.show_address_scope( + address_scope['id']) + returned_address_scope = body['address_scope'] + self.assertEqual(address_scope['name'], + returned_address_scope['name']) + self.assertTrue(returned_address_scope['shared']) + + 
@test.attr(type='smoke') + @test.idempotent_id('e9e1ccdd-9ccd-4076-9503-71820529508b') + def test_admin_update_shared_address_scope(self): + address_scope = self._test_update_address_scope_helper(is_admin=True, + shared=True) + self.assertTrue(address_scope['shared']) diff --git a/neutron/tests/api/test_address_scopes_negative.py b/neutron/tests/api/test_address_scopes_negative.py new file mode 100644 index 00000000000..872650b4a86 --- /dev/null +++ b/neutron/tests/api/test_address_scopes_negative.py @@ -0,0 +1,77 @@ +# Copyright (c) 2015 Red Hat, Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. 
+ +from tempest_lib.common.utils import data_utils +from tempest_lib import exceptions as lib_exc + +from neutron.tests.api import test_address_scopes +from neutron.tests.tempest import test + + +class AddressScopeTestNegative(test_address_scopes.AddressScopeTestBase): + + @test.attr(type=['negative', 'smoke']) + @test.idempotent_id('9c92ec34-0c50-4104-aa47-9ce98d5088df') + def test_tenant_create_shared_address_scope(self): + self.assertRaises(lib_exc.Forbidden, self._create_address_scope, + shared=True) + + @test.attr(type=['negative', 'smoke']) + @test.idempotent_id('a857b61e-bf53-4fab-b21a-b0daaf81b5bd') + def test_tenant_update_address_scope_shared_true(self): + self.assertRaises(lib_exc.Forbidden, + self._test_update_address_scope_helper, shared=True) + + @test.attr(type=['negative', 'smoke']) + @test.idempotent_id('a859ef2f-9c76-4e2e-ba0f-e0339a489e8c') + def test_tenant_update_address_scope_shared_false(self): + self.assertRaises(lib_exc.Forbidden, + self._test_update_address_scope_helper, shared=False) + + @test.attr(type=['negative', 'smoke']) + @test.idempotent_id('9b6dd7ad-cabb-4f55-bd5e-e61176ef41f6') + def test_get_non_existent_address_scope(self): + non_exist_id = data_utils.rand_name('address_scope') + self.assertRaises(lib_exc.NotFound, self.client.show_address_scope, + non_exist_id) + + @test.attr(type=['negative', 'smoke']) + @test.idempotent_id('ef213552-f2da-487d-bf4a-e1705d115ff1') + def test_tenant_get_not_shared_admin_address_scope(self): + address_scope = self._create_address_scope(is_admin=True) + # Non-shared admin address scope cannot be retrieved by tenant user.
+ self.assertRaises(lib_exc.NotFound, self.client.show_address_scope, + address_scope['id']) + + @test.attr(type=['negative', 'smoke']) + @test.idempotent_id('5c25dc6a-1e92-467a-9cc7-cda74b6003db') + def test_delete_non_existent_address_scope(self): + non_exist_id = data_utils.rand_name('address_scope') + self.assertRaises(lib_exc.NotFound, self.client.delete_address_scope, + non_exist_id) + + @test.attr(type=['negative', 'smoke']) + @test.idempotent_id('47c25dc5-e886-4a84-88c3-ac5031969661') + def test_update_non_existent_address_scope(self): + non_exist_id = data_utils.rand_name('address_scope') + self.assertRaises(lib_exc.NotFound, self.client.update_address_scope, + non_exist_id, name='foo-name') + + @test.attr(type=['negative', 'smoke']) + @test.idempotent_id('702d0515-82cb-4207-b0d9-703336e54665') + def test_update_shared_address_scope_to_unshare(self): + address_scope = self._create_address_scope(is_admin=True, shared=True) + self.assertRaises(lib_exc.BadRequest, + self.admin_client.update_address_scope, + address_scope['id'], name='new-name', shared=False) diff --git a/neutron/tests/api/test_dhcp_ipv6.py b/neutron/tests/api/test_dhcp_ipv6.py index 3e181e2d0fa..2bd9379cb0d 100644 --- a/neutron/tests/api/test_dhcp_ipv6.py +++ b/neutron/tests/api/test_dhcp_ipv6.py @@ -227,7 +227,7 @@ class NetworksTestDHCPv6(base.BaseNetworkTest): """When a Network contains two subnets, one being an IPv6 subnet configured with ipv6_ra_mode either as slaac or dhcpv6-stateless, and the other subnet being an IPv4 subnet, a port attached to the - network shall recieve IP addresses from the subnets as follows: An + network shall receive IP addresses from the subnets as follows: An IPv6 address calculated using EUI-64 from the first subnet, and an IPv4 address from the second subnet. The ordering of the subnets that the port is associated with should not affect this behavior. 
diff --git a/neutron/tests/api/test_flavors_extensions.py b/neutron/tests/api/test_flavors_extensions.py new file mode 100644 index 00000000000..8575c6f31d8 --- /dev/null +++ b/neutron/tests/api/test_flavors_extensions.py @@ -0,0 +1,154 @@ +# Copyright 2015 Hewlett-Packard Development Company, L.P. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. + +from oslo_log import log as logging + +from neutron.tests.api import base +from neutron.tests.tempest import test + + +LOG = logging.getLogger(__name__) + + +class TestFlavorsJson(base.BaseAdminNetworkTest): + + """ + Tests the following operations in the Neutron API using the REST client for + Neutron: + + List, Show, Create, Update, Delete Flavors + List, Show, Create, Update, Delete service profiles + """ + + @classmethod + def resource_setup(cls): + super(TestFlavorsJson, cls).resource_setup() + if not test.is_extension_enabled('flavors', 'network'): + msg = "flavors extension not enabled." + raise cls.skipException(msg) + service_type = "LOADBALANCER" + description_flavor = "flavor is created by tempest" + name_flavor = "Best flavor created by tempest" + cls.flavor = cls.create_flavor(name_flavor, description_flavor, + service_type) + description_sp = "service profile created by tempest" + # Future TODO(madhu_ak): Right now the dummy driver is loaded. 
Will + # make changes as soon as I get to know the flavor supported drivers + driver = "" + metainfo = '{"data": "value"}' + cls.service_profile = cls.create_service_profile( + description=description_sp, metainfo=metainfo, driver=driver) + + def _delete_service_profile(self, service_profile_id): + # Deletes a service profile and verifies if it is deleted or not + self.admin_client.delete_service_profile(service_profile_id) + # Asserting that service profile is not found in list after deletion + labels = self.admin_client.list_service_profiles(id=service_profile_id) + self.assertEqual(len(labels['service_profiles']), 0) + + @test.attr(type='smoke') + @test.idempotent_id('ec8e15ff-95d0-433b-b8a6-b466bddb1e50') + def test_create_update_delete_service_profile(self): + # Creates a service profile + description = "service_profile created by tempest" + driver = "" + metainfo = '{"data": "value"}' + body = self.admin_client.create_service_profile( + description=description, driver=driver, metainfo=metainfo) + service_profile = body['service_profile'] + # Updates a service profile + self.admin_client.update_service_profile(service_profile['id'], + enabled=False) + self.assertTrue(service_profile['enabled']) + # Deletes a service profile + self.addCleanup(self._delete_service_profile, + service_profile['id']) + # Assert whether created service profiles are found in service profile + # lists or fail if created service profiles are not found in service + # profiles list + labels = (self.admin_client.list_service_profiles( + id=service_profile['id'])) + self.assertEqual(len(labels['service_profiles']), 1) + + @test.attr(type='smoke') + @test.idempotent_id('ec8e15ff-95d0-433b-b8a6-b466bddb1e50') + def test_create_update_delete_flavor(self): + # Creates a flavor + description = "flavor created by tempest" + service = "LOADBALANCERS" + name = "Best flavor created by tempest" + body = self.admin_client.create_flavor(name=name, service_type=service, + description=description) + flavor
= body['flavor'] + # Updates a flavor + self.admin_client.update_flavor(flavor['id'], enabled=False) + self.assertTrue(flavor['enabled']) + # Deletes a flavor + self.addCleanup(self._delete_flavor, flavor['id']) + # Assert whether created flavors are found in flavor lists or fail + # if created flavors are not found in flavors list + labels = (self.admin_client.list_flavors(id=flavor['id'])) + self.assertEqual(len(labels['flavors']), 1) + + @test.attr(type='smoke') + @test.idempotent_id('30abb445-0eea-472e-bd02-8649f54a5968') + def test_show_service_profile(self): + # Verifies the details of a service profile + body = self.admin_client.show_service_profile( + self.service_profile['id']) + service_profile = body['service_profile'] + self.assertEqual(self.service_profile['id'], service_profile['id']) + self.assertEqual(self.service_profile['description'], + service_profile['description']) + self.assertEqual(self.service_profile['metainfo'], + service_profile['metainfo']) + self.assertEqual(True, service_profile['enabled']) + + @test.attr(type='smoke') + @test.idempotent_id('30abb445-0eea-472e-bd02-8649f54a5968') + def test_show_flavor(self): + # Verifies the details of a flavor + body = self.admin_client.show_flavor(self.flavor['id']) + flavor = body['flavor'] + self.assertEqual(self.flavor['id'], flavor['id']) + self.assertEqual(self.flavor['description'], flavor['description']) + self.assertEqual(self.flavor['name'], flavor['name']) + self.assertEqual(True, flavor['enabled']) + + @test.attr(type='smoke') + @test.idempotent_id('e2fb2f8c-45bf-429a-9f17-171c70444612') + def test_list_flavors(self): + # Verify flavor lists + body = self.admin_client.list_flavors(id=33) + flavors = body['flavors'] + self.assertEqual(0, len(flavors)) + + @test.attr(type='smoke') + @test.idempotent_id('e2fb2f8c-45bf-429a-9f17-171c70444612') + def test_list_service_profiles(self): + # Verify service profiles lists + body = self.admin_client.list_service_profiles(id=33) + service_profiles = 
body['service_profiles'] + self.assertEqual(0, len(service_profiles)) + + def _delete_flavor(self, flavor_id): + # Deletes a flavor and verifies if it is deleted or not + self.admin_client.delete_flavor(flavor_id) + # Asserting that the flavor is not found in list after deletion + labels = self.admin_client.list_flavors(id=flavor_id) + self.assertEqual(len(labels['flavors']), 0) + + +class TestFlavorsIpV6TestJSON(TestFlavorsJson): + _ip_version = 6 diff --git a/neutron/tests/api/test_routers.py b/neutron/tests/api/test_routers.py index 6593f979962..4cde8fb82c1 100644 --- a/neutron/tests/api/test_routers.py +++ b/neutron/tests/api/test_routers.py @@ -189,7 +189,7 @@ class RoutersTest(base.BaseRouterTest): CONF.network.public_network_id) public_subnet_id = public_net_body['network']['subnets'][0] self.assertIn(public_subnet_id, - map(lambda x: x['subnet_id'], fixed_ips)) + [x['subnet_id'] for x in fixed_ips]) @test.attr(type='smoke') @test.idempotent_id('6cc285d8-46bf-4f36-9b1a-783e3008ba79') diff --git a/neutron/tests/base.py b/neutron/tests/base.py index 97e76eb1284..d9b6e0b6e13 100644 --- a/neutron/tests/base.py +++ b/neutron/tests/base.py @@ -22,7 +22,6 @@ import logging as std_logging import os import os.path import random -import traceback import weakref import eventlet.timeout @@ -171,8 +170,8 @@ class DietTestCase(testtools.TestCase): if os.getpid() != self.orig_pid: # Subprocess - let it just exit raise - self.fail("A SystemExit was raised during the test. %s" - % traceback.format_exception(*exc_info)) + # This makes sys.exit(0) still a failure + self.force_failure = True @contextlib.contextmanager def assert_max_execution_time(self, max_execution_time=5): diff --git a/neutron/tests/common/conn_testers.py b/neutron/tests/common/conn_testers.py new file mode 100644 index 00000000000..2de8f422dec --- /dev/null +++ b/neutron/tests/common/conn_testers.py @@ -0,0 +1,265 @@ +# All Rights Reserved. 
+# +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. +import functools + +import fixtures + +from neutron.agent import firewall +from neutron.tests.common import machine_fixtures +from neutron.tests.common import net_helpers + + +class ConnectionTesterException(Exception): + pass + + +def _validate_direction(f): + @functools.wraps(f) + def wrap(self, direction, *args, **kwargs): + if direction not in (firewall.INGRESS_DIRECTION, + firewall.EGRESS_DIRECTION): + raise ConnectionTesterException('Unknown direction %s' % direction) + return f(self, direction, *args, **kwargs) + return wrap + + +class ConnectionTester(fixtures.Fixture): + """Base class for testers + + This class implements API for various methods for testing connectivity. The + concrete implementation relies on how encapsulated resources are + configured. That means child classes should define resources by themselves + (e.g. endpoints connected through linux bridge or ovs bridge). 
+ + """ + + UDP = net_helpers.NetcatTester.UDP + TCP = net_helpers.NetcatTester.TCP + ICMP = 'icmp' + ARP = 'arp' + INGRESS = firewall.INGRESS_DIRECTION + EGRESS = firewall.EGRESS_DIRECTION + + def _setUp(self): + self._protocol_to_method = { + self.UDP: self._test_transport_connectivity, + self.TCP: self._test_transport_connectivity, + self.ICMP: self._test_icmp_connectivity, + self.ARP: self._test_arp_connectivity} + self._nc_testers = dict() + self.addCleanup(self.cleanup) + + def cleanup(self): + for nc in self._nc_testers.values(): + nc.stop_processes() + + @property + def vm_namespace(self): + return self._vm.namespace + + @property + def vm_ip_address(self): + return self._vm.ip + + @property + def vm_ip_cidr(self): + return self._vm.ip_cidr + + @vm_ip_cidr.setter + def vm_ip_cidr(self, ip_cidr): + self._vm.ip_cidr = ip_cidr + + @property + def vm_mac_address(self): + return self._vm.port.link.address + + @vm_mac_address.setter + def vm_mac_address(self, mac_address): + self._vm.mac_address = mac_address + + @property + def peer_namespace(self): + return self._peer.namespace + + @property + def peer_ip_address(self): + return self._peer.ip + + def flush_arp_tables(self): + """Flush arptables in all used namespaces""" + for machine in (self._peer, self._vm): + machine.port.neigh.flush(4, 'all') + + def _test_transport_connectivity(self, direction, protocol, src_port, + dst_port): + nc_tester = self._create_nc_tester(direction, protocol, src_port, + dst_port) + try: + nc_tester.test_connectivity() + except RuntimeError as exc: + raise ConnectionTesterException( + "%s connection over %s protocol with %s source port and " + "%s destination port can't be established: %s" % ( + direction, protocol, src_port, dst_port, exc)) + + @_validate_direction + def _get_namespace_and_address(self, direction): + if direction == self.INGRESS: + return self.peer_namespace, self.vm_ip_address + return self.vm_namespace, self.peer_ip_address + + def _test_icmp_connectivity(self, 
direction, protocol, src_port, dst_port): + src_namespace, ip_address = self._get_namespace_and_address(direction) + try: + net_helpers.assert_ping(src_namespace, ip_address) + except RuntimeError: + raise ConnectionTesterException( + "ICMP packets can't get from %s namespace to %s address" % ( + src_namespace, ip_address)) + + def _test_arp_connectivity(self, direction, protocol, src_port, dst_port): + src_namespace, ip_address = self._get_namespace_and_address(direction) + try: + net_helpers.assert_arping(src_namespace, ip_address) + except RuntimeError: + raise ConnectionTesterException( + "ARP queries to %s address have no response from %s namespace" + % (ip_address, src_namespace)) + + @_validate_direction + def assert_connection(self, direction, protocol, src_port=None, + dst_port=None): + testing_method = self._protocol_to_method[protocol] + testing_method(direction, protocol, src_port, dst_port) + + def assert_no_connection(self, direction, protocol, src_port=None, + dst_port=None): + try: + self.assert_connection(direction, protocol, src_port, dst_port) + except ConnectionTesterException: + pass + else: + dst_port_info = str() + src_port_info = str() + if dst_port is not None: + dst_port_info = " and destination port %d" % dst_port + if src_port is not None: + src_port_info = " and source port %d" % src_port + raise ConnectionTesterException("%s connection with %s protocol%s" + "%s was established but it " + "shouldn't be possible" % ( + direction, protocol, + src_port_info, dst_port_info)) + + @_validate_direction + def assert_established_connection(self, direction, protocol, src_port=None, + dst_port=None): + nc_params = (direction, protocol, src_port, dst_port) + nc_tester = self._nc_testers.get(nc_params) + if nc_tester: + if nc_tester.is_established: + nc_tester.test_connectivity() + else: + raise ConnectionTesterException( + '%s connection with protocol %s, source port %s and ' + 'destination port %s is not established' % nc_params) + else: + raise
ConnectionTesterException( + "Attempting to test established %s connection with protocol %s" + ", source port %s and destination port %s that hasn't been " + "established yet by calling establish_connection()" + % nc_params) + + def assert_no_established_connection(self, direction, protocol, + src_port=None, dst_port=None): + try: + self.assert_established_connection(direction, protocol, src_port, + dst_port) + except ConnectionTesterException: + pass + else: + raise ConnectionTesterException( + 'Established %s connection with protocol %s, source port %s, ' + 'destination port %s can still send packets through' % ( + direction, protocol, src_port, dst_port)) + + @_validate_direction + def establish_connection(self, direction, protocol, src_port=None, + dst_port=None): + nc_tester = self._create_nc_tester(direction, protocol, src_port, + dst_port) + nc_tester.establish_connection() + self.addCleanup(nc_tester.stop_processes) + + def _create_nc_tester(self, direction, protocol, src_port, dst_port): + """Create netcat tester + + If there already exists a netcat tester that has established + connection, exception is raised.
+ """ + nc_key = (direction, protocol, src_port, dst_port) + nc_tester = self._nc_testers.get(nc_key) + if nc_tester and nc_tester.is_established: + raise ConnectionTesterException( + '%s connection using %s protocol, source port %s and ' + 'destination port %s is already established' % ( + direction, protocol, src_port, dst_port)) + + if direction == self.INGRESS: + client_ns = self.peer_namespace + server_ns = self.vm_namespace + server_addr = self.vm_ip_address + else: + client_ns = self.vm_namespace + server_ns = self.peer_namespace + server_addr = self.peer_ip_address + + server_port = dst_port or net_helpers.get_free_namespace_port( + protocol, server_ns) + nc_tester = net_helpers.NetcatTester(client_namespace=client_ns, + server_namespace=server_ns, + address=server_addr, + protocol=protocol, + src_port=src_port, + dst_port=server_port) + self._nc_testers[nc_key] = nc_tester + return nc_tester + + +class LinuxBridgeConnectionTester(ConnectionTester): + """Tester with linux bridge in the middle + + Both endpoints are placed in their separated namespace connected to + bridge's namespace via veth pair. 
+ + """ + + def _setUp(self): + super(LinuxBridgeConnectionTester, self)._setUp() + self._bridge = self.useFixture(net_helpers.LinuxBridgeFixture()).bridge + self._peer, self._vm = self.useFixture( + machine_fixtures.PeerMachines(self._bridge)).machines + + @property + def bridge_namespace(self): + return self._bridge.namespace + + @property + def vm_port_id(self): + return net_helpers.VethFixture.get_peer_name(self._vm.port.name) + + def flush_arp_tables(self): + self._bridge.neigh.flush(4, 'all') + super(LinuxBridgeConnectionTester, self).flush_arp_tables() diff --git a/neutron/tests/common/l3_test_common.py b/neutron/tests/common/l3_test_common.py index 2cb66d4deea..6045f56bb44 100644 --- a/neutron/tests/common/l3_test_common.py +++ b/neutron/tests/common/l3_test_common.py @@ -56,18 +56,21 @@ def prepare_router_data(ip_version=4, enable_snat=None, num_internal_ports=1, fixed_ips = [] subnets = [] gateway_mac = kwargs.get('gateway_mac', 'ca:fe:de:ad:be:ee') + extra_subnets = [] for loop_version in (4, 6): if loop_version == 4 and (ip_version == 4 or dual_stack): ip_address = kwargs.get('ip_address', '19.4.4.4') prefixlen = 24 subnet_cidr = kwargs.get('subnet_cidr', '19.4.4.0/24') gateway_ip = kwargs.get('gateway_ip', '19.4.4.1') + _extra_subnet = {'cidr': '9.4.5.0/24'} elif (loop_version == 6 and (ip_version == 6 or dual_stack) and v6_ext_gw_with_sub): ip_address = kwargs.get('ip_address', 'fd00::4') prefixlen = 64 subnet_cidr = kwargs.get('subnet_cidr', 'fd00::/64') gateway_ip = kwargs.get('gateway_ip', 'fd00::1') + _extra_subnet = {'cidr': 'fd01::/64'} else: continue subnet_id = _uuid() @@ -77,6 +80,7 @@ def prepare_router_data(ip_version=4, enable_snat=None, num_internal_ports=1, subnets.append({'id': subnet_id, 'cidr': subnet_cidr, 'gateway_ip': gateway_ip}) + extra_subnets.append(_extra_subnet) if not fixed_ips and v6_ext_gw_with_sub: raise ValueError("Invalid ip_version: %s" % ip_version) @@ -85,7 +89,8 @@ def prepare_router_data(ip_version=4, 
enable_snat=None, num_internal_ports=1, 'mac_address': gateway_mac, 'network_id': _uuid(), 'fixed_ips': fixed_ips, - 'subnets': subnets} + 'subnets': subnets, + 'extra_subnets': extra_subnets} routes = [] if extra_routes: diff --git a/neutron/tests/common/machine_fixtures.py b/neutron/tests/common/machine_fixtures.py index c6ff0f78f8a..65a1a433cd1 100644 --- a/neutron/tests/common/machine_fixtures.py +++ b/neutron/tests/common/machine_fixtures.py @@ -39,8 +39,7 @@ class FakeMachine(fixtures.Fixture): def __init__(self, bridge, ip_cidr, gateway_ip=None): super(FakeMachine, self).__init__() self.bridge = bridge - self.ip_cidr = ip_cidr - self.ip = self.ip_cidr.partition('/')[0] + self._ip_cidr = ip_cidr self.gateway_ip = gateway_ip def _setUp(self): @@ -50,11 +49,35 @@ class FakeMachine(fixtures.Fixture): self.port = self.useFixture( net_helpers.PortFixture.get(self.bridge, self.namespace)).port - self.port.addr.add(self.ip_cidr) + self.port.addr.add(self._ip_cidr) if self.gateway_ip: net_helpers.set_namespace_gateway(self.port, self.gateway_ip) + @property + def ip(self): + return self._ip_cidr.partition('/')[0] + + @property + def ip_cidr(self): + return self._ip_cidr + + @ip_cidr.setter + def ip_cidr(self, ip_cidr): + self.port.addr.add(ip_cidr) + self.port.addr.delete(self._ip_cidr) + self._ip_cidr = ip_cidr + + @property + def mac_address(self): + return self.port.link.address + + @mac_address.setter + def mac_address(self, mac_address): + self.port.link.set_down() + self.port.link.set_address(mac_address) + self.port.link.set_up() + def execute(self, *args, **kwargs): ns_ip_wrapper = ip_lib.IPWrapper(self.namespace) return ns_ip_wrapper.netns.execute(*args, **kwargs) diff --git a/neutron/tests/common/net_helpers.py b/neutron/tests/common/net_helpers.py index 3fb50838dee..fe93a0a71be 100644 --- a/neutron/tests/common/net_helpers.py +++ b/neutron/tests/common/net_helpers.py @@ -38,7 +38,7 @@ from neutron.tests import base as tests_base from neutron.tests.common 
import base as common_base from neutron.tests import tools -NS_PREFIX = 'func-' +NS_PREFIX = 'test-' BR_PREFIX = 'test-br' PORT_PREFIX = 'test-port' VETH0_PREFIX = 'test-veth0' @@ -275,6 +275,10 @@ class NetcatTester(object): address=self.server_address, listen=True) + @property + def is_established(self): + return bool(self._client_process and not self._client_process.poll()) + def establish_connection(self): if self._client_process: raise RuntimeError('%(proto)s connection to %(ip_addr)s is already' diff --git a/neutron/tests/etc/policy.json b/neutron/tests/etc/policy.json index eaf6d685ffe..72756bdb630 100644 --- a/neutron/tests/etc/policy.json +++ b/neutron/tests/etc/policy.json @@ -163,5 +163,16 @@ "get_service_provider": "rule:regular_user", "get_lsn": "rule:admin_only", - "create_lsn": "rule:admin_only" + "create_lsn": "rule:admin_only", + + "create_flavor": "rule:admin_only", + "update_flavor": "rule:admin_only", + "delete_flavor": "rule:admin_only", + "get_flavors": "rule:regular_user", + "get_flavor": "rule:regular_user", + "create_service_profile": "rule:admin_only", + "update_service_profile": "rule:admin_only", + "delete_service_profile": "rule:admin_only", + "get_service_profiles": "rule:admin_only", + "get_service_profile": "rule:admin_only" } diff --git a/neutron/tests/fullstack/base.py b/neutron/tests/fullstack/base.py index 87eb1880224..579831524f0 100644 --- a/neutron/tests/fullstack/base.py +++ b/neutron/tests/fullstack/base.py @@ -18,6 +18,7 @@ from oslo_db.sqlalchemy import test_base from neutron.db.migration.models import head # noqa from neutron.db import model_base from neutron.tests.common import base +from neutron.tests.fullstack.resources import client as client_resource class BaseFullStackTestCase(base.MySQLTestCase): @@ -35,6 +36,8 @@ class BaseFullStackTestCase(base.MySQLTestCase): self.useFixture(self.environment) self.client = self.environment.neutron_server.client + self.safe_client = self.useFixture(
client_resource.ClientFixture(self.client)) def get_name(self): class_name, test_name = self.id().split(".")[-2:] diff --git a/neutron/tests/fullstack/fullstack_fixtures.py b/neutron/tests/fullstack/fullstack_fixtures.py index 690891cd550..7db1af123cd 100644 --- a/neutron/tests/fullstack/fullstack_fixtures.py +++ b/neutron/tests/fullstack/fullstack_fixtures.py @@ -12,6 +12,7 @@ # License for the specific language governing permissions and limitations # under the License. +from datetime import datetime from distutils import spawn import functools import os @@ -21,10 +22,10 @@ from neutronclient.common import exceptions as nc_exc from neutronclient.v2_0 import client from oslo_config import cfg from oslo_log import log as logging -from oslo_utils import timeutils from neutron.agent.linux import async_process from neutron.agent.linux import utils +from neutron.common import utils as common_utils from neutron.tests import base from neutron.tests.common import net_helpers from neutron.tests.fullstack import config_fixtures @@ -51,11 +52,11 @@ class ProcessFixture(fixtures.Fixture): def start(self): fmt = self.process_name + "--%Y-%m-%d--%H%M%S.log" log_dir = os.path.join(DEFAULT_LOG_DIR, self.test_name) - utils.ensure_dir(log_dir) + common_utils.ensure_dir(log_dir) cmd = [spawn.find_executable(self.exec_name), '--log-dir', log_dir, - '--log-file', timeutils.strtime(fmt=fmt)] + '--log-file', datetime.utcnow().strftime(fmt)] for filename in self.config_filenames: cmd += ['--config-file', filename] self.process = async_process.AsyncProcess(cmd) diff --git a/neutron/plugins/metaplugin/common/__init__.py b/neutron/tests/fullstack/resources/__init__.py similarity index 100% rename from neutron/plugins/metaplugin/common/__init__.py rename to neutron/tests/fullstack/resources/__init__.py diff --git a/neutron/tests/fullstack/resources/client.py b/neutron/tests/fullstack/resources/client.py new file mode 100644 index 00000000000..797f9b40d1c --- /dev/null +++ 
b/neutron/tests/fullstack/resources/client.py @@ -0,0 +1,72 @@ +# Copyright (c) 2015 Thales Services SAS +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. +# + +import fixtures + +from neutron.tests import base + + +class ClientFixture(fixtures.Fixture): + """Manage and cleanup neutron resources.""" + + def __init__(self, client): + super(ClientFixture, self).__init__() + self.client = client + + def _create_resource(self, resource_type, spec): + create = getattr(self.client, 'create_%s' % resource_type) + delete = getattr(self.client, 'delete_%s' % resource_type) + + body = {resource_type: spec} + resp = create(body=body) + data = resp[resource_type] + self.addCleanup(delete, data['id']) + return data + + def create_router(self, tenant_id, name=None): + resource_type = 'router' + + name = name or base.get_rand_name(prefix=resource_type) + spec = {'tenant_id': tenant_id, 'name': name} + + return self._create_resource(resource_type, spec) + + def create_network(self, tenant_id, name=None): + resource_type = 'network' + + name = name or base.get_rand_name(prefix=resource_type) + spec = {'tenant_id': tenant_id, 'name': name} + + return self._create_resource(resource_type, spec) + + def create_subnet(self, tenant_id, network_id, + cidr, gateway_ip=None, ip_version=4, + name=None, enable_dhcp=True): + resource_type = 'subnet' + + name = name or base.get_rand_name(prefix=resource_type) + spec = {'tenant_id': tenant_id, 'network_id': network_id, 'name': name, + 'cidr': cidr, 
'ip_version': ip_version, + 'enable_dhcp': enable_dhcp} + if gateway_ip: + spec['gateway_ip'] = gateway_ip + + return self._create_resource(resource_type, spec) + + def add_router_interface(self, router_id, subnet_id): + body = {'subnet_id': subnet_id} + self.client.add_interface_router(router=router_id, body=body) + self.addCleanup(self.client.remove_interface_router, + router=router_id, body=body) diff --git a/neutron/tests/fullstack/test_l3_agent.py b/neutron/tests/fullstack/test_l3_agent.py index e12e9410df7..72d6b68f9e6 100644 --- a/neutron/tests/fullstack/test_l3_agent.py +++ b/neutron/tests/fullstack/test_l3_agent.py @@ -56,39 +56,15 @@ class TestLegacyL3Agent(base.BaseFullStackTestCase): utils.wait_until_true(lambda: ip.netns.exists(ns_name)) def test_namespace_exists(self): - uuid = uuidutils.generate_uuid() + tenant_id = uuidutils.generate_uuid() - router = self.client.create_router( - body={'router': {'name': 'router-test', - 'tenant_id': uuid}}) + router = self.safe_client.create_router(tenant_id) + network = self.safe_client.create_network(tenant_id) + subnet = self.safe_client.create_subnet( + tenant_id, network['id'], '20.0.0.0/24', gateway_ip='20.0.0.1') + self.safe_client.add_router_interface(router['id'], subnet['id']) - network = self.client.create_network( - body={'network': {'name': 'network-test', - 'tenant_id': uuid}}) - - subnet = self.client.create_subnet( - body={'subnet': {'name': 'subnet-test', - 'tenant_id': uuid, - 'network_id': network['network']['id'], - 'cidr': '20.0.0.0/24', - 'gateway_ip': '20.0.0.1', - 'ip_version': 4, - 'enable_dhcp': True}}) - - self.client.add_interface_router( - router=router['router']['id'], - body={'subnet_id': subnet['subnet']['id']}) - - router_id = router['router']['id'] namespace = "%s@%s" % ( - self._get_namespace(router_id), + self._get_namespace(router['id']), self.environment.l3_agent.get_namespace_suffix(), ) self._assert_namespace_exists(namespace) - - self.client.remove_interface_router( - 
router=router['router']['id'], - body={'subnet_id': subnet['subnet']['id']}) - - self.client.delete_subnet(subnet['subnet']['id']) - self.client.delete_network(network['network']['id']) - self.client.delete_router(router['router']['id']) diff --git a/neutron/tests/functional/agent/linux/test_ebtables_driver.py b/neutron/tests/functional/agent/linux/test_ebtables_driver.py index 999cc6762dc..c7728d60299 100644 --- a/neutron/tests/functional/agent/linux/test_ebtables_driver.py +++ b/neutron/tests/functional/agent/linux/test_ebtables_driver.py @@ -13,8 +13,8 @@ # License for the specific language governing permissions and limitations # under the License. +from neutron.agent.linux import bridge_lib from neutron.agent.linux import ebtables_driver -from neutron.agent.linux import ip_lib from neutron.tests.common import machine_fixtures from neutron.tests.common import net_helpers from neutron.tests.functional import base @@ -85,14 +85,13 @@ class EbtablesLowLevelTestCase(base.BaseSudoTestCase): # Pick one of the namespaces and set up a bridge for the local ethernet # interface there, because ebtables only works on bridged interfaces. - self.source.execute(['brctl', 'addbr', 'mybridge']) - self.source.execute( - ['brctl', 'addif', 'mybridge', self.source.port.name]) + dev_mybridge = bridge_lib.BridgeDevice.addbr( + 'mybridge', self.source.namespace) + dev_mybridge.addif(self.source.port.name) # Take the IP address off one of the interfaces and apply it to the # bridge interface instead.
self.source.port.addr.delete(self.source.ip_cidr) - dev_mybridge = ip_lib.IPDevice("mybridge", self.source.namespace) dev_mybridge.link.set_up() dev_mybridge.addr.add(self.source.ip_cidr) diff --git a/neutron/tests/functional/agent/linux/test_ip_lib.py b/neutron/tests/functional/agent/linux/test_ip_lib.py index 68cecc19d8f..8804599ec5b 100644 --- a/neutron/tests/functional/agent/linux/test_ip_lib.py +++ b/neutron/tests/functional/agent/linux/test_ip_lib.py @@ -141,11 +141,13 @@ class IpLibTestCase(IpLibTestFramework): expected_routes = [{'nexthop': device_ip, 'device': attr.name, - 'destination': destination}, + 'destination': destination, + 'scope': None}, {'nexthop': None, 'device': attr.name, 'destination': str( - netaddr.IPNetwork(attr.ip_cidrs[0]).cidr)}] + netaddr.IPNetwork(attr.ip_cidrs[0]).cidr), + 'scope': 'link'}] - routes = ip_lib.get_routing_table(namespace=attr.namespace) + routes = ip_lib.get_routing_table(4, namespace=attr.namespace) self.assertEqual(expected_routes, routes) diff --git a/neutron/tests/functional/agent/test_l2_ovs_agent.py b/neutron/tests/functional/agent/test_l2_ovs_agent.py index 6deaab64e2e..db57e18b18c 100644 --- a/neutron/tests/functional/agent/test_l2_ovs_agent.py +++ b/neutron/tests/functional/agent/test_l2_ovs_agent.py @@ -202,11 +202,6 @@ class OVSAgentTestFramework(base.BaseOVSLinuxTestCase): for port in [self.patch_tun, self.patch_int]: self.assertTrue(self.ovs.port_exists(port)) - def assert_no_vlan_tags(self, ports, agent): - for port in ports: - res = agent.int_br.db_get_val('Port', port.get('vif_name'), 'tag') - self.assertEqual([], res) - def assert_vlan_tags(self, ports, agent): for port in ports: res = agent.int_br.db_get_val('Port', port.get('vif_name'), 'tag') @@ -215,30 +210,64 @@ class OVSAgentTestFramework(base.BaseOVSLinuxTestCase): class TestOVSAgent(OVSAgentTestFramework): - def _expected_plugin_rpc_call(self, call, expected_devices): + def _expected_plugin_rpc_call(self, call, expected_devices, is_up=True): 
"""Helper to check expected rpc call are received :param call: The call to check :param expected_devices The device for which call is expected + :param is_up True if expected_devices are devices that are set up, + False if expected_devices are devices that are set down """ - args = (args[0][1] for args in call.call_args_list) - return not (set(expected_devices) - set(args)) + if is_up: + rpc_devices = [ + dev for args in call.call_args_list for dev in args[0][1]] + else: + rpc_devices = [ + dev for args in call.call_args_list for dev in args[0][2]] + return not (set(expected_devices) - set(rpc_devices)) - def _create_ports(self, network, agent): + def _create_ports(self, network, agent, trigger_resync=False): ports = [] for x in range(3): ports.append(self._create_test_port_dict()) + def mock_device_raise_exception(context, devices_up, devices_down, + agent_id, host=None): + agent.plugin_rpc.update_device_list.side_effect = ( + mock_update_device) + raise Exception('Exception to trigger resync') + def mock_device_details(context, devices, agent_id, host=None): + details = [] for port in ports: if port['id'] in devices: dev = OVSAgentTestFramework._get_device_details( port, network) details.append(dev) - return details + return {'devices': details, 'failed_devices': []} - agent.plugin_rpc.get_devices_details_list.side_effect = ( - mock_device_details) + def mock_update_device(context, devices_up, devices_down, agent_id, + host=None): + dev_up = [] + dev_down = [] + for port in ports: + if devices_up and port['id'] in devices_up: + dev_up.append(port['id']) + if devices_down and port['id'] in devices_down: + dev_down.append({'device': port['id'], 'exists': True}) + return {'devices_up': dev_up, + 'failed_devices_up': [], + 'devices_down': dev_down, + 'failed_devices_down': []} + + (agent.plugin_rpc.get_devices_details_list_and_failed_devices. 
+ side_effect) = mock_device_details + if trigger_resync: + agent.plugin_rpc.update_device_list.side_effect = ( + mock_device_raise_exception) + else: + agent.plugin_rpc.update_device_list.side_effect = ( + mock_update_device) return ports def test_port_creation_and_deletion(self): @@ -250,39 +279,35 @@ class TestOVSAgent(OVSAgentTestFramework): up_ports_ids = [p['id'] for p in ports] agent_utils.wait_until_true( lambda: self._expected_plugin_rpc_call( - agent.plugin_rpc.update_device_up, up_ports_ids)) + agent.plugin_rpc.update_device_list, up_ports_ids)) down_ports_ids = [p['id'] for p in ports] for port in ports: agent.int_br.delete_port(port['vif_name']) agent_utils.wait_until_true( lambda: self._expected_plugin_rpc_call( - agent.plugin_rpc.update_device_down, down_ports_ids)) + agent.plugin_rpc.update_device_list, down_ports_ids, False)) def test_resync_devices_set_up_after_exception(self): agent = self.create_agent() self.start_agent(agent) network = self._create_test_network_dict() - ports = self._create_ports(network, agent) - agent.plugin_rpc.update_device_up.side_effect = [ - Exception('Exception to trigger resync'), - None, None, None] + ports = self._create_ports(network, agent, True) self._plug_ports(network, ports, agent) ports_ids = [p['id'] for p in ports] agent_utils.wait_until_true( lambda: self._expected_plugin_rpc_call( - agent.plugin_rpc.update_device_up, ports_ids)) + agent.plugin_rpc.update_device_list, ports_ids)) def test_port_vlan_tags(self): agent = self.create_agent() self.start_agent(agent) - ports = [] - for x in range(3): - ports.append(self._create_test_port_dict()) network = self._create_test_network_dict() + ports = self._create_ports(network, agent) + ports_ids = [p['id'] for p in ports] self._plug_ports(network, ports, agent) - agent.provision_local_vlan(network['id'], 'vlan', 'physnet', 1) - self.assert_no_vlan_tags(ports, agent) - self._bind_ports(ports, network, agent) + agent_utils.wait_until_true( + lambda: 
self._expected_plugin_rpc_call( + agent.plugin_rpc.update_device_list, ports_ids)) self.assert_vlan_tags(ports, agent) def test_assert_bridges_ports_vxlan(self): diff --git a/neutron/tests/functional/agent/test_l3_agent.py b/neutron/tests/functional/agent/test_l3_agent.py index c27a94f1a44..ef2bd498ed8 100644 --- a/neutron/tests/functional/agent/test_l3_agent.py +++ b/neutron/tests/functional/agent/test_l3_agent.py @@ -191,12 +191,14 @@ class L3AgentTestFramework(base.BaseSudoTestCase): floating_ip_cidr = common_utils.ip_to_cidr( router.get_floating_ips()[0]['floating_ip_address']) default_gateway_ip = external_port['subnets'][0].get('gateway_ip') - + extra_subnet_cidr = external_port['extra_subnets'][0].get('cidr') return """vrrp_instance VR_1 { state BACKUP interface %(ha_device_name)s virtual_router_id 1 priority 50 + garp_master_repeat 5 + garp_master_refresh 10 nopreempt advert_int 2 track_interface { @@ -215,6 +217,7 @@ class L3AgentTestFramework(base.BaseSudoTestCase): virtual_routes { 0.0.0.0/0 via %(default_gateway_ip)s dev %(external_device_name)s 8.8.8.0/24 via 19.4.4.4 + %(extra_subnet_cidr)s dev %(external_device_name)s scope link } }""" % { 'ha_device_name': ha_device_name, @@ -225,7 +228,8 @@ class L3AgentTestFramework(base.BaseSudoTestCase): 'floating_ip_cidr': floating_ip_cidr, 'default_gateway_ip': default_gateway_ip, 'int_port_ipv6': int_port_ipv6, - 'ex_port_ipv6': ex_port_ipv6 + 'ex_port_ipv6': ex_port_ipv6, + 'extra_subnet_cidr': extra_subnet_cidr, } def _get_rule(self, iptables_manager, table, chain, predicate): @@ -271,13 +275,24 @@ class L3AgentTestFramework(base.BaseSudoTestCase): device, router.get_internal_device_name, router.ns_name)) def _assert_extra_routes(self, router): - routes = ip_lib.get_routing_table(namespace=router.ns_name) + routes = ip_lib.get_routing_table(4, namespace=router.ns_name) routes = [{'nexthop': route['nexthop'], 'destination': route['destination']} for route in routes] for extra_route in 
router.router['routes']: self.assertIn(extra_route, routes) + def _assert_onlink_subnet_routes(self, router, ip_versions): + routes = [] + for ip_version in ip_versions: + _routes = ip_lib.get_routing_table(ip_version, + namespace=router.ns_name) + routes.extend(_routes) + routes = set(route['destination'] for route in routes) + extra_subnets = router.get_ex_gw_port()['extra_subnets'] + for extra_subnet in (route['cidr'] for route in extra_subnets): + self.assertIn(extra_subnet, routes) + def _assert_interfaces_deleted_from_ovs(self): def assert_ovs_bridge_empty(bridge_name): bridge = ovs_lib.OVSBridge(bridge_name) @@ -634,6 +649,8 @@ class L3AgentTestCase(L3AgentTestFramework): self._assert_snat_chains(router) self._assert_floating_ip_chains(router) self._assert_extra_routes(router) + ip_versions = [4, 6] if (ip_version == 6 or dual_stack) else [4] + self._assert_onlink_subnet_routes(router, ip_versions) self._assert_metadata_chains(router) # Verify router gateway interface is configured to receive Router Advts diff --git a/neutron/tests/functional/agent/test_ovs_lib.py b/neutron/tests/functional/agent/test_ovs_lib.py index 5c5409a605d..b81e6a52665 100644 --- a/neutron/tests/functional/agent/test_ovs_lib.py +++ b/neutron/tests/functional/agent/test_ovs_lib.py @@ -174,6 +174,10 @@ class OVSBridgeTestCase(OVSBridgeTestBase): ports = {self.create_ovs_port()[0] for i in range(5)} self.assertSetEqual(ports, set(self.br.get_port_name_list())) + def test_get_iface_name_list(self): + ifaces = {self.create_ovs_port()[0] for i in range(5)} + self.assertSetEqual(ifaces, set(self.br.get_iface_name_list())) + def test_get_port_stats(self): # Nothing seems to use this function? 
(port_name, ofport) = self.create_ovs_port() diff --git a/neutron/tests/functional/db/test_ipam.py b/neutron/tests/functional/db/test_ipam.py index a10e9e288a1..a1dd8468f05 100644 --- a/neutron/tests/functional/db/test_ipam.py +++ b/neutron/tests/functional/db/test_ipam.py @@ -24,6 +24,7 @@ from neutron import context from neutron.db import db_base_plugin_v2 as base_plugin from neutron.db import model_base from neutron.db import models_v2 +from neutron.ipam.drivers.neutrondb_ipam import db_models as ipam_models from neutron.tests import base from neutron.tests.common import base as common_base @@ -47,9 +48,13 @@ class IpamTestCase(object): Base class for tests that aim to test ip allocation. """ - def configure_test(self): + def configure_test(self, use_pluggable_ipam=False): model_base.BASEV2.metadata.create_all(self.engine) cfg.CONF.set_override('notify_nova_on_port_status_changes', False) + if use_pluggable_ipam: + self._turn_on_pluggable_ipam() + else: + self._turn_off_pluggable_ipam() self.plugin = base_plugin.NeutronDbPluginV2() self.cxt = get_admin_test_context(self.engine.url) self.addCleanup(self.cxt._session.close) @@ -60,6 +65,16 @@ class IpamTestCase(object): self._create_network() self._create_subnet() + def _turn_off_pluggable_ipam(self): + cfg.CONF.set_override('ipam_driver', None) + self.ip_availability_range = models_v2.IPAvailabilityRange + + def _turn_on_pluggable_ipam(self): + cfg.CONF.set_override('ipam_driver', 'internal') + DB_PLUGIN_KLASS = 'neutron.db.db_base_plugin_v2.NeutronDbPluginV2' + self.setup_coreplugin(DB_PLUGIN_KLASS) + self.ip_availability_range = ipam_models.IpamAvailabilityRange + def result_set_to_dicts(self, resultset, keys): dicts = [] for item in resultset: @@ -75,7 +90,7 @@ class IpamTestCase(object): def assert_ip_avail_range_matches(self, expected): result_set = self.cxt.session.query( - models_v2.IPAvailabilityRange).all() + self.ip_availability_range).all() keys = ['first_ip', 'last_ip'] actual = 
self.result_set_to_dicts(result_set, keys) self.assertEqual(expected, actual) @@ -218,3 +233,19 @@ class TestIpamPsql(common_base.PostgreSQLTestCase, def setUp(self): super(TestIpamPsql, self).setUp() self.configure_test() + + +class TestPluggableIpamMySql(common_base.MySQLTestCase, + base.BaseTestCase, IpamTestCase): + + def setUp(self): + super(TestPluggableIpamMySql, self).setUp() + self.configure_test(use_pluggable_ipam=True) + + +class TestPluggableIpamPsql(common_base.PostgreSQLTestCase, + base.BaseTestCase, IpamTestCase): + + def setUp(self): + super(TestPluggableIpamPsql, self).setUp() + self.configure_test(use_pluggable_ipam=True) diff --git a/neutron/tests/functional/db/test_migrations.py b/neutron/tests/functional/db/test_migrations.py index ad3fd859534..200b601ac49 100644 --- a/neutron/tests/functional/db/test_migrations.py +++ b/neutron/tests/functional/db/test_migrations.py @@ -121,7 +121,7 @@ class _TestModelsMigrations(test_migrations.ModelsMigrationsSync): def db_sync(self, engine): cfg.CONF.set_override('connection', engine.url, group='database') - migration.do_alembic_command(self.alembic_config, 'upgrade', 'head') + migration.do_alembic_command(self.alembic_config, 'upgrade', 'heads') cfg.CONF.clear_override('connection', group='database') def get_engine(self): diff --git a/neutron/tests/functional/sanity/test_sanity.py b/neutron/tests/functional/sanity/test_sanity.py index 55b0633f4fb..b65de687a5b 100644 --- a/neutron/tests/functional/sanity/test_sanity.py +++ b/neutron/tests/functional/sanity/test_sanity.py @@ -67,3 +67,6 @@ class SanityTestCaseRoot(functional_base.BaseSudoTestCase): def test_ovsdb_native_supported_runs(self): checks.ovsdb_native_supported() + + def test_keepalived_ipv6_support(self): + checks.keepalived_ipv6_supported() diff --git a/neutron/tests/functional/scheduler/test_l3_agent_scheduler.py b/neutron/tests/functional/scheduler/test_l3_agent_scheduler.py new file mode 100644 index 00000000000..ca89515ba55 --- /dev/null +++ 
b/neutron/tests/functional/scheduler/test_l3_agent_scheduler.py @@ -0,0 +1,274 @@ +# Copyright (c) 2015 Hewlett-Packard Development Company, L.P. +# All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. + +import random +import testscenarios + +from neutron import context +from neutron.scheduler import l3_agent_scheduler +from neutron.services.l3_router import l3_router_plugin +from neutron.tests.common import helpers +from neutron.tests.unit.db import test_db_base_plugin_v2 + +# Required to generate tests from scenarios. Not compatible with nose. +load_tests = testscenarios.load_tests_apply_scenarios + + +class L3SchedulerBaseTest(test_db_base_plugin_v2.NeutronDbPluginV2TestCase): + + """Base class for functional test of L3 schedulers. + Provides basic setup and utility functions. 
+ """ + + def setUp(self): + super(L3SchedulerBaseTest, self).setUp() + + self.l3_plugin = l3_router_plugin.L3RouterPlugin() + self.adminContext = context.get_admin_context() + self.adminContext.tenant_id = '_func_test_tenant_' + + def _create_l3_agent(self, host, context, agent_mode='legacy', plugin=None, + state=True): + agent = helpers.register_l3_agent(host, agent_mode) + helpers.set_agent_admin_state(agent.id, state) + return agent + + def _create_router(self, name): + router = {'name': name, 'admin_state_up': True} + return self.l3_plugin.create_router( + self.adminContext, {'router': router}) + + def _create_legacy_agents(self, agent_count, down_agent_count): + # Creates legacy l3 agents and sets admin state based on + # down agent count. + self.hosts = ['host-%s' % i for i in range(agent_count)] + self.l3_agents = [self._create_l3_agent(self.hosts[i], + self.adminContext, 'legacy', self.l3_plugin, + (i >= down_agent_count)) for i in range(agent_count)] + + def _create_routers(self, scheduled_router_count, + expected_scheduled_router_count): + routers = [] + if (scheduled_router_count + expected_scheduled_router_count): + for i in range(scheduled_router_count + + expected_scheduled_router_count): + router = self._create_router('schd_rtr' + str(i)) + routers.append(router) + else: + # create at least one router to test scheduling + routers.append(self._create_router('schd_rtr0')) + + return routers + + def _pre_scheduler_routers(self, scheduler, count): + hosting_agents = [] + # schedule routers before calling schedule: + for i in range(count): + router = self.routers[i] + agent = random.choice(self.l3_agents) + scheduler.bind_router(self.adminContext, router['id'], agent) + hosting_agents.append(agent) + return hosting_agents + + def _test_auto_schedule(self, expected_count): + router_ids = [rtr['id'] for rtr in self.routers] + + did_it_schedule = False + + # Try scheduling on each host + for host in self.hosts: + did_it_schedule = 
self.scheduler.auto_schedule_routers(
+                self.l3_plugin,
+                self.adminContext,
+                host,
+                router_ids)
+            if did_it_schedule:
+                break
+
+        if expected_count:
+            self.assertTrue(did_it_schedule, 'Failed to schedule agent')
+        else:
+            self.assertFalse(did_it_schedule, 'Agent scheduled, not expected')
+
+
+class L3ChanceSchedulerTestCase(L3SchedulerBaseTest):
+
+    """Test various scenarios for chance scheduler.
+
+    agent_count
+        Number of l3 agents (also number of hosts).
+
+    down_agent_count
+        Number of l3 agents which are down.
+
+    scheduled_router_count
+        Number of routers that have been previously scheduled.
+
+    expected_scheduled_router_count
+        Number of newly scheduled routers.
+    """
+
+    scenarios = [
+        ('No routers scheduled if no agents are present',
+         dict(agent_count=0,
+              down_agent_count=0,
+              scheduled_router_count=0,
+              expected_scheduled_router_count=0)),
+
+        ('No routers scheduled if it is already hosted',
+         dict(agent_count=1,
+              down_agent_count=0,
+              scheduled_router_count=1,
+              expected_scheduled_router_count=0)),
+
+        ('No routers scheduled if all agents are down',
+         dict(agent_count=2,
+              down_agent_count=2,
+              scheduled_router_count=0,
+              expected_scheduled_router_count=0)),
+
+        ('Router scheduled to the agent if router is not yet hosted',
+         dict(agent_count=1,
+              down_agent_count=0,
+              scheduled_router_count=0,
+              expected_scheduled_router_count=1)),
+
+        ('Router scheduled to the agent even if it already hosts a router',
+         dict(agent_count=1,
+              down_agent_count=0,
+              scheduled_router_count=1,
+              expected_scheduled_router_count=1)),
+    ]
+
+    def setUp(self):
+        super(L3ChanceSchedulerTestCase, self).setUp()
+        self._create_legacy_agents(self.agent_count, self.down_agent_count)
+        self.routers = self._create_routers(
+            self.scheduled_router_count,
+            self.expected_scheduled_router_count)
+        self.scheduler = l3_agent_scheduler.ChanceScheduler()
+
+    def test_chance_schedule_router(self):
+        # Pre schedule routers
+        self._pre_scheduler_routers(self.scheduler,
+                                    self.scheduled_router_count)
+        # schedule:
+        actual_scheduled_agent = self.scheduler.schedule(
+            self.l3_plugin, self.adminContext, self.routers[-1]['id'])
+
+        if self.expected_scheduled_router_count:
+            self.assertIsNotNone(actual_scheduled_agent,
+                                 message='Failed to schedule agent')
+        else:
+            self.assertIsNone(actual_scheduled_agent,
+                              message='Agent scheduled but not expected')
+
+    def test_auto_schedule_routers(self):
+        # Pre schedule routers
+        self._pre_scheduler_routers(self.scheduler,
+                                    self.scheduled_router_count)
+        # The test
+        self._test_auto_schedule(self.expected_scheduled_router_count)
+
+
+class L3LeastRoutersSchedulerTestCase(L3SchedulerBaseTest):
+
+    """Test various scenarios for least router scheduler.
+
+    agent_count
+        Number of l3 agents (also number of hosts).
+
+    down_agent_count
+        Number of l3 agents which are down.
+
+    scheduled_router_count
+        Number of routers that have been previously scheduled
+
+    expected_scheduled_router_count
+        Number of newly scheduled routers
+    """
+
+    scenarios = [
+        ('No routers scheduled if no agents are present',
+         dict(agent_count=0,
+              down_agent_count=0,
+              scheduled_router_count=0,
+              expected_scheduled_router_count=0)),
+
+        ('No routers scheduled if it is already hosted',
+         dict(agent_count=1,
+              down_agent_count=0,
+              scheduled_router_count=1,
+              expected_scheduled_router_count=1)),
+
+        ('No routers scheduled if all agents are down',
+         dict(agent_count=2,
+              down_agent_count=2,
+              scheduled_router_count=0,
+              expected_scheduled_router_count=0)),
+
+        ('Router scheduled to the agent if router is not yet hosted',
+         dict(agent_count=1,
+              down_agent_count=0,
+              scheduled_router_count=0,
+              expected_scheduled_router_count=1)),
+
+        ('Router scheduled to the agent even if it already hosts a router',
+         dict(agent_count=1,
+              down_agent_count=0,
+              scheduled_router_count=1,
+              expected_scheduled_router_count=1)),
+
+        ('Router is scheduled to agent hosting least routers',
+         dict(agent_count=2,
+              down_agent_count=0,
+              scheduled_router_count=1,
+              expected_scheduled_router_count=1)),
+    ]
+
+    def setUp(self):
+        super(L3LeastRoutersSchedulerTestCase, self).setUp()
+        self._create_legacy_agents(self.agent_count, self.down_agent_count)
+        self.routers = self._create_routers(
+            self.scheduled_router_count,
+            self.expected_scheduled_router_count)
+        self.scheduler = l3_agent_scheduler.LeastRoutersScheduler()
+
+    def test_least_routers_schedule(self):
+        # Pre schedule routers
+        hosting_agents = self._pre_scheduler_routers(
+            self.scheduler, self.scheduled_router_count)
+
+        actual_scheduled_agent = self.scheduler.schedule(
+            self.l3_plugin, self.adminContext, self.routers[-1]['id'])
+
+        if self.expected_scheduled_router_count:
+            # For case where there is just one agent:
+            if self.agent_count == 1:
+                self.assertEqual(actual_scheduled_agent.id,
+                                 self.l3_agents[0].id)
+            else:
+                self.assertNotIn(actual_scheduled_agent.id,
+                                 [x.id for x in hosting_agents],
+                                 message='The expected agent was not '
+                                         'scheduled')
+        else:
+            self.assertIsNone(actual_scheduled_agent,
+                              message='Expected no agent to be scheduled,'
+                                      ' but it got scheduled')
+
+    def test_auto_schedule_routers(self):
+        # Pre schedule routers
+        self._pre_scheduler_routers(self.scheduler,
+                                    self.scheduled_router_count)
+        # The test
+        self._test_auto_schedule(self.expected_scheduled_router_count)
diff --git a/neutron/tests/tempest/services/network/json/network_client.py b/neutron/tests/tempest/services/network/json/network_client.py
index 54f264c82f1..4958bc51c03 100644
--- a/neutron/tests/tempest/services/network/json/network_client.py
+++ b/neutron/tests/tempest/services/network/json/network_client.py
@@ -45,7 +45,7 @@ class NetworkClientJSON(service_client.ServiceClient):
         # The following list represents resource names that do not require
         # changing underscore to a hyphen
         hyphen_exceptions = ["health_monitors", "firewall_rules",
-                             "firewall_policies"]
+                             "firewall_policies", "service_profiles"]
         # the following map is used to construct proper URI
         # for the given neutron resource
         service_resource_prefix_map = {
diff --git a/neutron/tests/unit/agent/common/test_utils.py b/neutron/tests/unit/agent/common/test_utils.py
new file mode 100644
index 00000000000..7c89b1e2b5e
--- /dev/null
+++ b/neutron/tests/unit/agent/common/test_utils.py
@@ -0,0 +1,53 @@
+# Copyright 2015 Red Hat, Inc.
+# All Rights Reserved.
+#
+#    Licensed under the Apache License, Version 2.0 (the "License"); you may
+#    not use this file except in compliance with the License. You may obtain
+#    a copy of the License at
+#
+#         http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+#    License for the specific language governing permissions and limitations
+#    under the License.
+
+import mock
+
+from neutron.agent.common import config
+from neutron.agent.common import utils
+from neutron.agent.linux import interface
+from neutron.tests import base
+from neutron.tests.unit import testlib_api
+
+
+class TestLoadInterfaceDriver(base.BaseTestCase):
+
+    def setUp(self):
+        super(TestLoadInterfaceDriver, self).setUp()
+        self.conf = config.setup_conf()
+        config.register_interface_driver_opts_helper(self.conf)
+
+    def test_load_interface_driver_not_set(self):
+        with testlib_api.ExpectedException(SystemExit):
+            utils.load_interface_driver(self.conf)
+
+    def test_load_interface_driver_wrong_driver(self):
+        self.conf.set_override('interface_driver', 'neutron.NonExistentDriver')
+        with testlib_api.ExpectedException(SystemExit):
+            utils.load_interface_driver(self.conf)
+
+    def test_load_interface_driver_does_not_consume_irrelevant_errors(self):
+        self.conf.set_override('interface_driver',
+                               'neutron.agent.linux.interface.NullDriver')
+        with mock.patch('oslo_utils.importutils.import_object',
+                        side_effect=RuntimeError()):
+            with testlib_api.ExpectedException(RuntimeError):
+                utils.load_interface_driver(self.conf)
+
+    def test_load_interface_driver_success(self):
+        self.conf.set_override('interface_driver',
+                               'neutron.agent.linux.interface.NullDriver')
+        self.assertIsInstance(utils.load_interface_driver(self.conf),
+                              interface.NullDriver)
diff --git a/neutron/tests/unit/agent/dhcp/test_agent.py b/neutron/tests/unit/agent/dhcp/test_agent.py
index 876bf8db424..d1c7226d840 100644
--- a/neutron/tests/unit/agent/dhcp/test_agent.py
+++ b/neutron/tests/unit/agent/dhcp/test_agent.py
@@ -102,6 +102,14 @@ fake_port1 = dhcp.DictModel(dict(id='12345678-1234-aaaa-1234567890ab',
                                  network_id='12345678-1234-5678-1234567890ab',
                                  fixed_ips=[fake_fixed_ip1]))

+fake_dhcp_port = dhcp.DictModel(dict(id='12345678-1234-aaaa-123456789022',
+                                     device_id='dhcp-12345678-1234-aaaa-123456789022',
+                                     device_owner='network:dhcp',
+                                     allocation_pools=fake_subnet1_allocation_pools,
+                                     mac_address='aa:bb:cc:dd:ee:22',
+                                     network_id='12345678-1234-5678-1234567890ab',
+                                     fixed_ips=[fake_fixed_ip2]))
+
 fake_port2 = dhcp.DictModel(dict(id='12345678-1234-aaaa-123456789000',
                                  device_id='dhcp-12345678-1234-aaaa-123456789000',
                                  device_owner='',
@@ -400,13 +408,14 @@ class TestDhcpAgent(base.BaseTestCase):
     def test_periodic_resync_helper(self):
         with mock.patch.object(dhcp_agent.eventlet, 'sleep') as sleep:
             dhcp = dhcp_agent.DhcpAgent(HOSTNAME)
-            dhcp.needs_resync_reasons = collections.OrderedDict(
+            resync_reasons = collections.OrderedDict(
                 (('a', 'reason1'), ('b', 'reason2')))
+            dhcp.needs_resync_reasons = resync_reasons
             with mock.patch.object(dhcp, 'sync_state') as sync_state:
                 sync_state.side_effect = RuntimeError
                 with testtools.ExpectedException(RuntimeError):
                     dhcp._periodic_resync_helper()
-            sync_state.assert_called_once_with(['a', 'b'])
+            sync_state.assert_called_once_with(resync_reasons.keys())
             sleep.assert_called_once_with(dhcp.conf.resync_interval)
             self.assertEqual(len(dhcp.needs_resync_reasons), 0)

@@ -438,22 +447,17 @@ class TestDhcpAgent(base.BaseTestCase):

     def test_none_interface_driver(self):
         cfg.CONF.set_override('interface_driver', None)
-        with mock.patch.object(dhcp, 'LOG') as log:
-            self.assertRaises(SystemExit, dhcp.DeviceManager,
-                              cfg.CONF, None)
-            msg = 'An interface driver must be specified'
-            log.error.assert_called_once_with(msg)
+        self.assertRaises(SystemExit, dhcp.DeviceManager,
+                          cfg.CONF, None)

     def test_nonexistent_interface_driver(self):
         # Temporarily turn off mock, so could use the real import_class
         # to import interface_driver.
         self.driver_cls_p.stop()
         self.addCleanup(self.driver_cls_p.start)
-        cfg.CONF.set_override('interface_driver', 'foo')
-        with mock.patch.object(dhcp, 'LOG') as log:
-            self.assertRaises(SystemExit, dhcp.DeviceManager,
-                              cfg.CONF, None)
-            self.assertEqual(log.error.call_count, 1)
+        cfg.CONF.set_override('interface_driver', 'foo.bar')
+        self.assertRaises(SystemExit, dhcp.DeviceManager,
+                          cfg.CONF, None)


 class TestLogArgs(base.BaseTestCase):

@@ -1067,7 +1071,7 @@ class TestNetworkCache(base.BaseTestCase):
         nc = dhcp_agent.NetworkCache()

         nc.put(fake_network)

-        self.assertEqual(nc.get_network_ids(), [fake_network.id])
+        self.assertEqual(list(nc.get_network_ids()), [fake_network.id])

     def test_get_network_by_subnet_id(self):
         nc = dhcp_agent.NetworkCache()
@@ -1258,6 +1262,23 @@ class TestDeviceManager(base.BaseTestCase):
             expected = [mock.call.add_rule('POSTROUTING', rule)]
             self.mangle_inst.assert_has_calls(expected)

+    def test_setup_create_dhcp_port(self):
+        plugin = mock.Mock()
+        net = copy.deepcopy(fake_network)
+        plugin.create_dhcp_port.return_value = fake_dhcp_port
+        dh = dhcp.DeviceManager(cfg.CONF, plugin)
+        dh.setup(net)
+
+        plugin.assert_has_calls([
+            mock.call.create_dhcp_port(
+                {'port': {'name': '', 'admin_state_up': True,
+                          'network_id': net.id,
+                          'tenant_id': net.tenant_id,
+                          'fixed_ips': [{'subnet_id':
+                          fake_dhcp_port.fixed_ips[0].subnet_id}],
+                          'device_id': mock.ANY}})])
+        self.assertIn(fake_dhcp_port, net.ports)
+
     def test_setup_ipv6(self):
         self._test_setup_helper(True, net=fake_network_ipv6,
                                 port=fake_ipv6_port)
diff --git a/neutron/tests/unit/agent/l3/test_agent.py b/neutron/tests/unit/agent/l3/test_agent.py
index b683727fdb5..b59c9cc632d 100644
--- a/neutron/tests/unit/agent/l3/test_agent.py
+++ b/neutron/tests/unit/agent/l3/test_agent.py
@@ -29,7 +29,6 @@ from neutron.agent.common import config as agent_config
 from neutron.agent.l3 import agent as l3_agent
 from neutron.agent.l3 import config as l3_config
 from neutron.agent.l3 import dvr_edge_router as dvr_router
-from neutron.agent.l3 import dvr_local_router
 from neutron.agent.l3 import dvr_snat_ns
 from neutron.agent.l3 import ha
 from neutron.agent.l3 import legacy_router
@@ -44,7 +43,6 @@ from neutron.agent import rpc as agent_rpc
 from neutron.common import config as base_config
 from neutron.common import constants as l3_constants
 from neutron.common import exceptions as n_exc
-from neutron.i18n import _LE
 from neutron.plugins.common import constants as p_const
 from neutron.tests import base
 from neutron.tests.common import l3_test_common
@@ -81,8 +79,7 @@ class BasicRouterOperationsFramework(base.BaseTestCase):
             'neutron.agent.linux.ip_lib.device_exists')
         self.device_exists = self.device_exists_p.start()

-        self.ensure_dir = mock.patch('neutron.agent.linux.utils'
-                                     '.ensure_dir').start()
+        self.ensure_dir = mock.patch('neutron.common.utils.ensure_dir').start()

         mock.patch('neutron.agent.linux.keepalived.KeepalivedManager'
                    '.get_full_config_file_path').start()
@@ -339,7 +336,8 @@ class TestBasicRouterOperations(BasicRouterOperationsFramework):

         if action == 'add':
             self.device_exists.return_value = False
-            ri._map_internal_interfaces = mock.Mock(return_value=sn_port)
+            ri.get_snat_port_for_internal_port = mock.Mock(
+                return_value=sn_port)
             ri._snat_redirect_add = mock.Mock()
             ri._set_subnet_arp_info = mock.Mock()
             ri._internal_network_added = mock.Mock()
@@ -358,7 +356,8 @@ class TestBasicRouterOperations(BasicRouterOperationsFramework):
                 dvr_snat_ns.SNAT_INT_DEV_PREFIX)
         elif action == 'remove':
             self.device_exists.return_value = False
-            ri._map_internal_interfaces = mock.Mock(return_value=sn_port)
+            ri.get_snat_port_for_internal_port = mock.Mock(
+                return_value=sn_port)
             ri._snat_redirect_modify = mock.Mock()
             ri.internal_network_removed(port)
             ri._snat_redirect_modify.assert_called_with(
@@ -434,8 +433,7 @@ class TestBasicRouterOperations(BasicRouterOperationsFramework):
                 interface_name, ip_cidrs, **kwargs)
         else:
             ri._create_dvr_gateway.assert_called_once_with(
-                ex_gw_port, interface_name,
-                self.snat_ports)
+                ex_gw_port, interface_name)

     def _test_external_gateway_action(self, action, router, dual_stack=False):
         agent = l3_agent.L3NATAgent(HOSTNAME, self.conf)
@@ -520,7 +518,8 @@ class TestBasicRouterOperations(BasicRouterOperationsFramework):

         elif action == 'remove':
             self.device_exists.return_value = True
-            ri._map_internal_interfaces = mock.Mock(return_value=sn_port)
+            ri.get_snat_port_for_internal_port = mock.Mock(
+                return_value=sn_port)
             ri._snat_redirect_remove = mock.Mock()
             ri.external_gateway_removed(ex_gw_port, interface_name)
             if not router.get('distributed'):
@@ -687,22 +686,24 @@ class TestBasicRouterOperations(BasicRouterOperationsFramework):
             '! -i %s ! -o %s -m conntrack ! --ctstate DNAT -j ACCEPT' %
            (interface_name, interface_name),
             '-o %s -j SNAT --to-source %s' % (interface_name, source_nat_ip),
-            '-m mark ! --mark 0x2 -m conntrack --ctstate DNAT '
-            '-j SNAT --to-source %s' % source_nat_ip]
+            '-m mark ! --mark 0x2/%s -m conntrack --ctstate DNAT '
+            '-j SNAT --to-source %s' %
+            (l3_constants.ROUTER_MARK_MASK, source_nat_ip)]
         for r in nat_rules:
             if negate:
                 self.assertNotIn(r.rule, expected_rules)
             else:
                 self.assertIn(r.rule, expected_rules)
         expected_rules = [
-            '-i %s -j MARK --set-xmark 0x2/0xffffffff' % interface_name]
+            '-i %s -j MARK --set-xmark 0x2/%s' %
+            (interface_name, l3_constants.ROUTER_MARK_MASK)]
         for r in mangle_rules:
             if negate:
                 self.assertNotIn(r.rule, expected_rules)
             else:
                 self.assertIn(r.rule, expected_rules)

-    def test__map_internal_interfaces(self):
+    def test_get_snat_port_for_internal_port(self):
         router = l3_test_common.prepare_router_data(num_internal_ports=4)
         ri = dvr_router.DvrEdgeRouter(mock.sentinel.agent,
                                       HOSTNAME,
@@ -716,13 +717,15 @@ class TestBasicRouterOperations(BasicRouterOperationsFramework):
                           'ip_address': '101.12.13.14'}]}
         internal_ports = ri.router.get(l3_constants.INTERFACE_KEY, [])
         # test valid case
-        res_port = ri._map_internal_interfaces(internal_ports[0], [test_port])
-        self.assertEqual(test_port, res_port)
-        # test invalid case
-        test_port['fixed_ips'][0]['subnet_id'] = 1234
-        res_ip = ri._map_internal_interfaces(internal_ports[0], [test_port])
-        self.assertNotEqual(test_port, res_ip)
-        self.assertIsNone(res_ip)
+        with mock.patch.object(ri, 'get_snat_interfaces') as get_interfaces:
+            get_interfaces.return_value = [test_port]
+            res_port = ri.get_snat_port_for_internal_port(internal_ports[0])
+            self.assertEqual(test_port, res_port)
+            # test invalid case
+            test_port['fixed_ips'][0]['subnet_id'] = 1234
+            res_ip = ri.get_snat_port_for_internal_port(internal_ports[0])
+            self.assertNotEqual(test_port, res_ip)
+            self.assertIsNone(res_ip)

     def test_process_cent_router(self):
         router = l3_test_common.prepare_router_data()
@@ -1487,10 +1490,11 @@ class TestBasicRouterOperations(BasicRouterOperationsFramework):
                                       {},
                                       **self.ri_kwargs)
         ri.iptables_manager = mock.Mock()
+        ri._is_this_snat_host = mock.Mock(return_value=True)
+        ri.get_ex_gw_port = mock.Mock(return_value=mock.ANY)

-        with mock.patch.object(dvr_local_router.LOG,
-                               'debug') as log_debug:
-            ri._handle_router_snat_rules(mock.ANY, mock.ANY, mock.ANY)
+        with mock.patch.object(dvr_router.LOG, 'debug') as log_debug:
+            ri._handle_router_snat_rules(mock.ANY, mock.ANY)
         self.assertIsNone(ri.snat_iptables_manager)
         self.assertFalse(ri.iptables_manager.called)
         self.assertTrue(log_debug.called)
@@ -1500,7 +1504,7 @@ class TestBasicRouterOperations(BasicRouterOperationsFramework):
         ri.iptables_manager = mock.MagicMock()
         port = {'fixed_ips': [{'ip_address': '192.168.1.4'}]}

-        ri._handle_router_snat_rules(port, "iface", "add_rules")
+        ri._handle_router_snat_rules(port, "iface")

         nat = ri.iptables_manager.ipv4['nat']
         nat.empty_chain.assert_any_call('snat')
@@ -1516,19 +1520,20 @@ class TestBasicRouterOperations(BasicRouterOperationsFramework):
         ri = l3router.RouterInfo(_uuid(), {}, **self.ri_kwargs)
         ex_gw_port = {'fixed_ips': [{'ip_address': '192.168.1.4'}]}
         ri.router = {'distributed': False}
-        ri._handle_router_snat_rules(ex_gw_port, "iface", "add_rules")
+        ri._handle_router_snat_rules(ex_gw_port, "iface")

-        nat_rules = map(str, ri.iptables_manager.ipv4['nat'].rules)
+        nat_rules = list(map(str, ri.iptables_manager.ipv4['nat'].rules))
         wrap_name = ri.iptables_manager.wrap_name

         jump_float_rule = "-A %s-snat -j %s-float-snat" % (wrap_name,
                                                            wrap_name)
         snat_rule1 = ("-A %s-snat -o iface -j SNAT --to-source %s") % (
             wrap_name, ex_gw_port['fixed_ips'][0]['ip_address'])
-        snat_rule2 = ("-A %s-snat -m mark ! --mark 0x2 "
+        snat_rule2 = ("-A %s-snat -m mark ! --mark 0x2/%s "
                       "-m conntrack --ctstate DNAT "
                       "-j SNAT --to-source %s") % (
-            wrap_name, ex_gw_port['fixed_ips'][0]['ip_address'])
+            wrap_name, l3_constants.ROUTER_MARK_MASK,
+            ex_gw_port['fixed_ips'][0]['ip_address'])

         self.assertIn(jump_float_rule, nat_rules)
@@ -1537,9 +1542,10 @@ class TestBasicRouterOperations(BasicRouterOperationsFramework):
         self.assertThat(nat_rules.index(jump_float_rule),
                         matchers.LessThan(nat_rules.index(snat_rule1)))

-        mangle_rules = map(str, ri.iptables_manager.ipv4['mangle'].rules)
+        mangle_rules = list(map(str, ri.iptables_manager.ipv4['mangle'].rules))
         mangle_rule = ("-A %s-mark -i iface "
-                       "-j MARK --set-xmark 0x2/0xffffffff") % wrap_name
+                       "-j MARK --set-xmark 0x2/%s" %
+                       (wrap_name, l3_constants.ROUTER_MARK_MASK))
         self.assertIn(mangle_rule, mangle_rules)

     def test_process_router_delete_stale_internal_devices(self):
@@ -1852,18 +1858,12 @@ class TestBasicRouterOperations(BasicRouterOperationsFramework):

     def test_nonexistent_interface_driver(self):
         self.conf.set_override('interface_driver', None)
-        with mock.patch.object(l3_agent, 'LOG') as log:
-            self.assertRaises(SystemExit, l3_agent.L3NATAgent,
-                              HOSTNAME, self.conf)
-            msg = 'An interface driver must be specified'
-            log.error.assert_called_once_with(msg)
+        self.assertRaises(SystemExit, l3_agent.L3NATAgent,
+                          HOSTNAME, self.conf)

-        self.conf.set_override('interface_driver', 'wrong_driver')
-        with mock.patch.object(l3_agent, 'LOG') as log:
-            self.assertRaises(SystemExit, l3_agent.L3NATAgent,
-                              HOSTNAME, self.conf)
-            msg = _LE("Error importing interface driver '%s'")
-            log.error.assert_called_once_with(msg, 'wrong_driver')
+        self.conf.set_override('interface_driver', 'wrong.driver')
+        self.assertRaises(SystemExit, l3_agent.L3NATAgent,
+                          HOSTNAME, self.conf)

     @mock.patch.object(namespaces.RouterNamespace, 'delete')
     @mock.patch.object(dvr_snat_ns.SnatNamespace, 'delete')
@@ -1961,7 +1961,9 @@ class TestBasicRouterOperations(BasicRouterOperationsFramework):
         interface_name = ri.get_snat_int_device_name(port_id)
         self.device_exists.return_value = False

-        ri._create_dvr_gateway(dvr_gw_port, interface_name, self.snat_ports)
+        with mock.patch.object(ri, 'get_snat_interfaces') as get_interfaces:
+            get_interfaces.return_value = self.snat_ports
+            ri._create_dvr_gateway(dvr_gw_port, interface_name)

         # check 2 internal ports are plugged
         # check 1 ext-gw-port is plugged
diff --git a/neutron/tests/unit/agent/l3/test_dvr_local_router.py b/neutron/tests/unit/agent/l3/test_dvr_local_router.py
index bec9168afbe..052ac68bf2e 100644
--- a/neutron/tests/unit/agent/l3/test_dvr_local_router.py
+++ b/neutron/tests/unit/agent/l3/test_dvr_local_router.py
@@ -65,8 +65,7 @@ class TestDvrRouterOperations(base.BaseTestCase):
             'neutron.agent.linux.ip_lib.device_exists')
         self.device_exists = self.device_exists_p.start()

-        self.ensure_dir = mock.patch('neutron.agent.linux.utils'
-                                     '.ensure_dir').start()
+        self.ensure_dir = mock.patch('neutron.common.utils.ensure_dir').start()

         mock.patch('neutron.agent.linux.keepalived.KeepalivedManager'
                    '.get_full_config_file_path').start()
diff --git a/neutron/tests/unit/agent/l3/test_namespace_manager.py b/neutron/tests/unit/agent/l3/test_namespace_manager.py
index 706b8994551..956099136bd 100644
--- a/neutron/tests/unit/agent/l3/test_namespace_manager.py
+++ b/neutron/tests/unit/agent/l3/test_namespace_manager.py
@@ -16,6 +16,7 @@
 import mock
 from oslo_utils import uuidutils

+from neutron.agent.l3 import dvr_fip_ns
 from neutron.agent.l3 import dvr_snat_ns
 from neutron.agent.l3 import namespace_manager
 from neutron.agent.l3 import namespaces
@@ -63,11 +64,15 @@ class TestNamespaceManager(NamespaceManagerTestCaseFramework):
         self.assertTrue(self.ns_manager.is_managed(router_ns_name))
         router_ns_name = dvr_snat_ns.SNAT_NS_PREFIX + router_id
         self.assertTrue(self.ns_manager.is_managed(router_ns_name))
+        router_ns_name = dvr_fip_ns.FIP_NS_PREFIX + router_id
+        self.assertTrue(self.ns_manager.is_managed(router_ns_name))
+
         self.assertFalse(self.ns_manager.is_managed('dhcp-' + router_id))

     def test_list_all(self):
         ns_names = [namespaces.NS_PREFIX + _uuid(),
                     dvr_snat_ns.SNAT_NS_PREFIX + _uuid(),
+                    dvr_fip_ns.FIP_NS_PREFIX + _uuid(),
                     'dhcp-' + _uuid(), ]

         # Test the normal path
@@ -90,12 +95,14 @@ class TestNamespaceManager(NamespaceManagerTestCaseFramework):
         ns_names = [namespaces.NS_PREFIX + _uuid() for _ in range(5)]
         ns_names += [dvr_snat_ns.SNAT_NS_PREFIX + _uuid() for _ in range(5)]
         ns_names += [namespaces.NS_PREFIX + router_id,
-                     dvr_snat_ns.SNAT_NS_PREFIX + router_id]
+                     dvr_snat_ns.SNAT_NS_PREFIX + router_id,
+                     dvr_fip_ns.FIP_NS_PREFIX + router_id]
         with mock.patch.object(ip_lib.IPWrapper, 'get_namespaces',
                                return_value=ns_names), \
                 mock.patch.object(self.ns_manager, '_cleanup') as mock_cleanup:
             self.ns_manager.ensure_router_cleanup(router_id)
             expected = [mock.call(namespaces.NS_PREFIX, router_id),
-                        mock.call(dvr_snat_ns.SNAT_NS_PREFIX, router_id)]
+                        mock.call(dvr_snat_ns.SNAT_NS_PREFIX, router_id),
+                        mock.call(dvr_fip_ns.FIP_NS_PREFIX, router_id)]
             mock_cleanup.assert_has_calls(expected, any_order=True)
-            self.assertEqual(2, mock_cleanup.call_count)
+            self.assertEqual(3, mock_cleanup.call_count)
diff --git a/neutron/tests/unit/agent/linux/test_bridge_lib.py b/neutron/tests/unit/agent/linux/test_bridge_lib.py
index 768c276b298..3b9701d0805 100644
--- a/neutron/tests/unit/agent/linux/test_bridge_lib.py
+++ b/neutron/tests/unit/agent/linux/test_bridge_lib.py
@@ -41,6 +41,12 @@ class BridgeLibTest(base.BaseTestCase):
         self.assertEqual(namespace, br.namespace)
         self._verify_bridge_mock(['brctl', 'addbr', self._BR_NAME])

+        br.setfd(0)
+        self._verify_bridge_mock(['brctl', 'setfd', self._BR_NAME, '0'])
+
+        br.disable_stp()
+        self._verify_bridge_mock(['brctl', 'stp', self._BR_NAME, 'off'])
+
         br.addif(self._IF_NAME)
         self._verify_bridge_mock(
             ['brctl', 'addif', self._BR_NAME, self._IF_NAME])
diff --git a/neutron/tests/unit/agent/linux/test_daemon.py b/neutron/tests/unit/agent/linux/test_daemon.py
index e9348802317..c50b4c87624 100644
--- a/neutron/tests/unit/agent/linux/test_daemon.py
+++ b/neutron/tests/unit/agent/linux/test_daemon.py
@@ -223,6 +223,11 @@ class TestDaemon(base.BaseTestCase):
         d = daemon.Daemon('pidfile')
         self.assertEqual(d.procname, 'python')

+    def test_init_nopidfile(self):
+        d = daemon.Daemon(pidfile=None)
+        self.assertEqual(d.procname, 'python')
+        self.assertFalse(self.pidfile.called)
+
     def test_fork_parent(self):
         self.os.fork.return_value = 1
         d = daemon.Daemon('pidfile')
diff --git a/neutron/tests/unit/agent/linux/test_dhcp.py b/neutron/tests/unit/agent/linux/test_dhcp.py
index 41a3173d1f4..2a306f12e94 100644
--- a/neutron/tests/unit/agent/linux/test_dhcp.py
+++ b/neutron/tests/unit/agent/linux/test_dhcp.py
@@ -24,9 +24,9 @@ from neutron.agent.common import config
 from neutron.agent.dhcp import config as dhcp_config
 from neutron.agent.linux import dhcp
 from neutron.agent.linux import external_process
-from neutron.agent.linux import utils
 from neutron.common import config as base_config
 from neutron.common import constants
+from neutron.common import utils
 from neutron.extensions import extra_dhcp_opt as edo_ext
 from neutron.tests import base

@@ -48,6 +48,19 @@ class DhcpOpt(object):
         return str(self.__dict__)


+class FakeDhcpPort(object):
+    id = 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaa'
+    admin_state_up = True
+    device_owner = 'network:dhcp'
+    fixed_ips = [FakeIPAllocation('192.168.0.1',
+                                  'dddddddd-dddd-dddd-dddd-dddddddddddd')]
+    mac_address = '00:00:80:aa:bb:ee'
+    device_id = 'fake_dhcp_port'
+
+    def __init__(self):
+        self.extra_dhcp_opts = []
+
+
 class FakePort1(object):
     id = 'eeeeeeee-eeee-eeee-eeee-eeeeeeeeeeee'
     admin_state_up = True
@@ -55,6 +68,7 @@ class FakePort1(object):
     fixed_ips = [FakeIPAllocation('192.168.0.2',
                                   'dddddddd-dddd-dddd-dddd-dddddddddddd')]
     mac_address = '00:00:80:aa:bb:cc'
+    device_id = 'fake_port1'

     def __init__(self):
         self.extra_dhcp_opts = []
@@ -67,6 +81,7 @@ class FakePort2(object):
     fixed_ips = [FakeIPAllocation('192.168.0.3',
                                   'dddddddd-dddd-dddd-dddd-dddddddddddd')]
     mac_address = '00:00:f3:aa:bb:cc'
+    device_id = 'fake_port2'

     def __init__(self):
         self.extra_dhcp_opts = []
@@ -81,6 +96,7 @@ class FakePort3(object):
         FakeIPAllocation('192.168.1.2',
                          'eeeeeeee-eeee-eeee-eeee-eeeeeeeeeeee')]
     mac_address = '00:00:0f:aa:bb:cc'
+    device_id = 'fake_port3'

     def __init__(self):
         self.extra_dhcp_opts = []
@@ -96,6 +112,7 @@ class FakePort4(object):
         FakeIPAllocation('ffda:3ba5:a17a:4ba3:0216:3eff:fec2:771d',
                          'eeeeeeee-eeee-eeee-eeee-eeeeeeeeeeee')]
     mac_address = '00:16:3E:C2:77:1D'
+    device_id = 'fake_port4'

     def __init__(self):
         self.extra_dhcp_opts = []
@@ -108,6 +125,7 @@ class FakePort5(object):
     fixed_ips = [FakeIPAllocation('192.168.0.5',
                                   'dddddddd-dddd-dddd-dddd-dddddddddddd')]
     mac_address = '00:00:0f:aa:bb:55'
+    device_id = 'fake_port5'

     def __init__(self):
         self.extra_dhcp_opts = [
@@ -122,6 +140,7 @@ class FakePort6(object):
     fixed_ips = [FakeIPAllocation('192.168.0.6',
                                   'dddddddd-dddd-dddd-dddd-dddddddddddd')]
     mac_address = '00:00:0f:aa:bb:66'
+    device_id = 'fake_port6'

     def __init__(self):
         self.extra_dhcp_opts = [
@@ -140,6 +159,7 @@ class FakeV6Port(object):
     fixed_ips = [FakeIPAllocation('fdca:3ba5:a17a:4ba3::2',
                                   'ffffffff-ffff-ffff-ffff-ffffffffffff')]
     mac_address = '00:00:f3:aa:bb:cc'
+    device_id = 'fake_port6'

     def __init__(self):
         self.extra_dhcp_opts = []
@@ -152,6 +172,7 @@ class FakeV6PortExtraOpt(object):
     fixed_ips = [FakeIPAllocation('ffea:3ba5:a17a:4ba3:0216:3eff:fec2:771d',
                                   'eeeeeeee-eeee-eeee-eeee-eeeeeeeeeeee')]
     mac_address = '00:16:3e:c2:77:1d'
+    device_id = 'fake_port6'

     def __init__(self):
         self.extra_dhcp_opts = [
@@ -169,6 +190,7 @@ class FakeDualPortWithV6ExtraOpt(object):
         FakeIPAllocation('ffea:3ba5:a17a:4ba3:0216:3eff:fec2:771d',
                          'eeeeeeee-eeee-eeee-eeee-eeeeeeeeeeee')]
     mac_address = '00:16:3e:c2:77:1d'
+    device_id = 'fake_port6'

     def __init__(self):
         self.extra_dhcp_opts = [
@@ -186,6 +208,7 @@ class FakeDualPort(object):
         FakeIPAllocation('fdca:3ba5:a17a:4ba3::3',
                          'ffffffff-ffff-ffff-ffff-ffffffffffff')]
     mac_address = '00:00:0f:aa:bb:cc'
+    device_id = 'fake_dual_port'

     def __init__(self):
         self.extra_dhcp_opts = []
@@ -196,6 +219,7 @@ class FakeRouterPort(object):
     admin_state_up = True
     device_owner = constants.DEVICE_OWNER_ROUTER_INTF
     mac_address = '00:00:0f:rr:rr:rr'
+    device_id = 'fake_router_port'

     def __init__(self, dev_owner=constants.DEVICE_OWNER_ROUTER_INTF,
                  ip_address='192.168.0.1'):
@@ -212,6 +236,7 @@ class FakeRouterPort2(object):
     fixed_ips = [FakeIPAllocation('192.168.1.1',
                                   'dddddddd-dddd-dddd-dddd-dddddddddddd')]
     mac_address = '00:00:0f:rr:rr:r2'
+    device_id = 'fake_router_port2'

     def __init__(self):
         self.extra_dhcp_opts = []
@@ -224,6 +249,7 @@ class FakePortMultipleAgents1(object):
     fixed_ips = [FakeIPAllocation('192.168.0.5',
                                   'dddddddd-dddd-dddd-dddd-dddddddddddd')]
     mac_address = '00:00:0f:dd:dd:dd'
+    device_id = 'fake_multiple_agents_port'

     def __init__(self):
         self.extra_dhcp_opts = []
@@ -236,6 +262,7 @@ class FakePortMultipleAgents2(object):
     fixed_ips = [FakeIPAllocation('192.168.0.6',
                                   'dddddddd-dddd-dddd-dddd-dddddddddddd')]
     mac_address = '00:00:0f:ee:ee:ee'
+    device_id = 'fake_multiple_agents_port2'

     def __init__(self):
         self.extra_dhcp_opts = []
@@ -306,6 +333,17 @@ class FakeV4SubnetMultipleAgentsWithoutDnsProvided(object):
     host_routes = []


+class FakeV4SubnetAgentWithManyDnsProvided(object):
+    id = 'dddddddd-dddd-dddd-dddd-dddddddddddd'
+    ip_version = 4
+    cidr = '192.168.0.0/24'
+    gateway_ip = '192.168.0.1'
+    enable_dhcp = True
+    dns_nameservers = ['2.2.2.2', '9.9.9.9', '1.1.1.1',
+                       '3.3.3.3']
+    host_routes = []
+
+
 class FakeV4MultipleAgentsWithoutDnsProvided(object):
     id = 'ffffffff-ffff-ffff-ffff-ffffffffffff'
     subnets = [FakeV4SubnetMultipleAgentsWithoutDnsProvided()]
@@ -314,6 +352,14 @@ class FakeV4MultipleAgentsWithoutDnsProvided(object):
     namespace = 'qdhcp-ns'


+class FakeV4AgentWithManyDnsProvided(object):
+    id = 'ffffffff-ffff-ffff-ffff-ffffffffffff'
+    subnets = [FakeV4SubnetAgentWithManyDnsProvided()]
+    ports = [FakePort1(), FakePort2(), FakePort3(), FakeRouterPort(),
+             FakePortMultipleAgents1()]
+    namespace = 'qdhcp-ns'
+
+
 class FakeV4SubnetMultipleAgentsWithDnsProvided(object):
     id = 'dddddddd-dddd-dddd-dddd-dddddddddddd'
     ip_version = 4
@@ -437,6 +483,13 @@ class FakeDualNetwork(object):
     namespace = 'qdhcp-ns'


+class FakeNetworkDhcpPort(object):
+    id = 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa'
+    subnets = [FakeV4Subnet()]
+    ports = [FakePort1(), FakeDhcpPort()]
+    namespace = 'qdhcp-ns'
+
+
 class FakeDualNetworkGatewayRoute(object):
     id = 'cccccccc-cccc-cccc-cccc-cccccccccccc'
     subnets = [FakeV4SubnetGatewayRoute(), FakeV6SubnetDHCPStateful()]
@@ -1101,6 +1154,18 @@ class TestDnsmasq(TestBase):
         self._test_output_opts_file(expected,
                                     FakeV4MultipleAgentsWithoutDnsProvided())

+    def test_output_opts_file_agent_with_many_dns_provided(self):
+        expected = ('tag:tag0,'
+                    'option:dns-server,2.2.2.2,9.9.9.9,1.1.1.1,3.3.3.3\n'
+                    'tag:tag0,option:classless-static-route,'
+                    '169.254.169.254/32,192.168.0.1,0.0.0.0/0,192.168.0.1\n'
+                    'tag:tag0,249,169.254.169.254/32,192.168.0.1,0.0.0.0/0,'
+                    '192.168.0.1\n'
+                    'tag:tag0,option:router,192.168.0.1').lstrip()
+
+        self._test_output_opts_file(expected,
+                                    FakeV4AgentWithManyDnsProvided())
+
     def test_output_opts_file_multiple_agents_with_dns_provided(self):
         expected = ('tag:tag0,option:dns-server,8.8.8.8\n'
                     'tag:tag0,option:classless-static-route,'
@@ -1425,12 +1490,33 @@ class TestDnsmasq(TestBase):
         dnsmasq._output_hosts_file = mock.Mock()
         dnsmasq._release_lease = mock.Mock()
         dnsmasq.network.ports = []
+        dnsmasq.device_manager.driver.unplug = mock.Mock()
         dnsmasq._release_unused_leases()

         dnsmasq._release_lease.assert_has_calls([mock.call(mac1, ip1, None),
                                                  mock.call(mac2, ip2, None)],
                                                 any_order=True)
+        dnsmasq.device_manager.driver.unplug.assert_has_calls(
+            [mock.call(dnsmasq.interface_name,
+                       namespace=dnsmasq.network.namespace)])
+
+    def test_release_unused_leases_with_dhcp_port(self):
+        dnsmasq = self._get_dnsmasq(FakeNetworkDhcpPort())
+        ip1 = '192.168.1.2'
+        mac1 = '00:00:80:aa:bb:cc'
+        ip2 = '192.168.1.3'
+        mac2 = '00:00:80:cc:bb:aa'
+
+        old_leases = set([(ip1, mac1, None), (ip2, mac2, None)])
+        dnsmasq._read_hosts_file_leases = mock.Mock(return_value=old_leases)
+        dnsmasq._output_hosts_file = mock.Mock()
+        dnsmasq._release_lease = mock.Mock()
+        dnsmasq.device_manager.get_device_id = mock.Mock(
+            return_value='fake_dhcp_port')
+        dnsmasq._release_unused_leases()
+        self.assertFalse(
+            dnsmasq.device_manager.driver.unplug.called)

     def test_release_unused_leases_with_client_id(self):
         dnsmasq = self._get_dnsmasq(FakeDualNetwork())
@@ -1597,13 +1683,13 @@ class TestDnsmasq(TestBase):

     def test__output_hosts_file_log_only_twice(self):
         dm = self._get_dnsmasq(FakeDualStackNetworkSingleDHCP())
-        with mock.patch.object(dhcp.LOG, 'process') as process:
-            process.return_value = ('fake_message', {})
+        with mock.patch.object(dhcp, 'LOG') as logger:
+            logger.process.return_value = ('fake_message', {})
             dm._output_hosts_file()
         # The method logs twice, at the start of and the end. There should be
         # no other logs, no matter how many hosts there are to dump in the
         # file.
-        self.assertEqual(2, process.call_count)
+        self.assertEqual(2, len(logger.method_calls))

     def test_only_populates_dhcp_enabled_subnets(self):
         exp_host_name = '/dhcp/eeeeeeee-eeee-eeee-eeee-eeeeeeeeeeee/host'
diff --git a/neutron/tests/unit/agent/linux/test_external_process.py b/neutron/tests/unit/agent/linux/test_external_process.py
index db84de21e4b..4cc3836306d 100644
--- a/neutron/tests/unit/agent/linux/test_external_process.py
+++ b/neutron/tests/unit/agent/linux/test_external_process.py
@@ -16,7 +16,7 @@ import mock
 import os.path

 from neutron.agent.linux import external_process as ep
-from neutron.agent.linux import utils
+from neutron.common import utils as common_utils
 from neutron.tests import base


@@ -103,9 +103,9 @@ class TestProcessManager(base.BaseTestCase):
         self.execute_p = mock.patch('neutron.agent.common.utils.execute')
         self.execute = self.execute_p.start()
         self.delete_if_exists = mock.patch(
-            'neutron.openstack.common.fileutils.delete_if_exists').start()
+            'oslo_utils.fileutils.delete_if_exists').start()
         self.ensure_dir = mock.patch.object(
-            utils, 'ensure_dir').start()
+            common_utils, 'ensure_dir').start()

         self.conf = mock.Mock()
         self.conf.external_pids = '/var/path'
diff --git a/neutron/tests/unit/agent/linux/test_interface.py b/neutron/tests/unit/agent/linux/test_interface.py
index 7feb9981b7b..a46354a1a5c 100644
--- a/neutron/tests/unit/agent/linux/test_interface.py
+++ b/neutron/tests/unit/agent/linux/test_interface.py
@@ -22,7 +22,6 @@ from neutron.agent.linux import interface
 from neutron.agent.linux import ip_lib
 from neutron.agent.linux import utils
 from neutron.common import constants
-from neutron.extensions import flavor
 from neutron.tests import base


@@ -504,73 +503,6 @@ class TestBridgeInterfaceDriver(TestBase):
                        mock.call().link.delete()])


-class TestMetaInterfaceDriver(TestBase):
-    def setUp(self):
-        super(TestMetaInterfaceDriver, self).setUp()
-        config.register_interface_driver_opts_helper(self.conf)
-        self.client_cls_p = mock.patch('neutronclient.v2_0.client.Client')
-        client_cls = self.client_cls_p.start()
-        self.client_inst = mock.Mock()
-        client_cls.return_value = self.client_inst
-
-        fake_network = {'network': {flavor.FLAVOR_NETWORK: 'fake1'}}
-        fake_port = {'ports':
-                     [{'mac_address':
-                       'aa:bb:cc:dd:ee:ffa', 'network_id': 'test'}]}
-
-        self.client_inst.list_ports.return_value = fake_port
-        self.client_inst.show_network.return_value = fake_network
-
-        self.conf.set_override('auth_url', 'http://localhost:35357/v2.0')
-        self.conf.set_override('auth_region', 'RegionOne')
-        self.conf.set_override('admin_user', 'neutron')
-        self.conf.set_override('admin_password', 'password')
-        self.conf.set_override('admin_tenant_name', 'service')
-        self.conf.set_override(
-            'meta_flavor_driver_mappings',
-            'fake1:neutron.agent.linux.interface.OVSInterfaceDriver,'
-            'fake2:neutron.agent.linux.interface.BridgeInterfaceDriver')
-        self.conf.set_override('endpoint_type', 'internalURL')
-
-    def test_get_driver_by_network_id(self):
-        meta_interface = interface.MetaInterfaceDriver(self.conf)
-        driver = meta_interface._get_driver_by_network_id('test')
-        self.assertIsInstance(driver, interface.OVSInterfaceDriver)
-
-    def test_set_device_plugin_tag(self):
-        meta_interface = interface.MetaInterfaceDriver(self.conf)
-        driver = meta_interface._get_driver_by_network_id('test')
-        meta_interface._set_device_plugin_tag(driver,
-                                              'tap0',
-                                              namespace=None)
-        expected = [mock.call('tap0', namespace=None),
-                    mock.call().link.set_alias('fake1')]
-        self.ip_dev.assert_has_calls(expected)
-        namespace = '01234567-1234-1234-99'
-        meta_interface._set_device_plugin_tag(driver,
-                                              'tap1',
-                                              namespace=namespace)
-        expected = [mock.call('tap1', namespace='01234567-1234-1234-99'),
-                    mock.call().link.set_alias('fake1')]
-        self.ip_dev.assert_has_calls(expected)
-
-    def test_get_device_plugin_tag(self):
-        meta_interface = interface.MetaInterfaceDriver(self.conf)
-        self.ip_dev().link.alias = 'fake1'
-        plugin_tag0 = meta_interface._get_device_plugin_tag('tap0',
-                                                            namespace=None)
-        expected = [mock.call('tap0', namespace=None)]
-        self.ip_dev.assert_has_calls(expected)
-        self.assertEqual('fake1', plugin_tag0)
-        namespace = '01234567-1234-1234-99'
-        expected = [mock.call('tap1', namespace='01234567-1234-1234-99')]
-        plugin_tag1 = meta_interface._get_device_plugin_tag(
-            'tap1',
-            namespace=namespace)
-        self.ip_dev.assert_has_calls(expected)
-        self.assertEqual('fake1', plugin_tag1)
-
-
 class TestIVSInterfaceDriver(TestBase):

     def setUp(self):
diff --git a/neutron/tests/unit/agent/linux/test_ip_lib.py b/neutron/tests/unit/agent/linux/test_ip_lib.py
index f28232cdf4c..87a2a82274c 100644
--- a/neutron/tests/unit/agent/linux/test_ip_lib.py
+++ b/neutron/tests/unit/agent/linux/test_ip_lib.py
@@ -619,11 +619,13 @@ class TestIpLinkCommand(TestIPCmdBase):
         self._assert_sudo([], ('set', 'eth0', 'mtu', 1500))

     def test_set_up(self):
-        self.link_cmd.set_up()
+        observed = self.link_cmd.set_up()
+        self.assertEqual(self.parent._as_root.return_value, observed)
         self._assert_sudo([], ('set', 'eth0', 'up'))

     def test_set_down(self):
-        self.link_cmd.set_down()
+        observed = self.link_cmd.set_down()
+        self.assertEqual(self.parent._as_root.return_value, observed)
         self._assert_sudo([], ('set', 'eth0', 'down'))

     def test_set_netns(self):
@@ -814,6 +816,15 @@ class TestIpRouteCommand(TestIPCmdBase):
                             'dev', self.parent.name,
                             'table', self.table))

+    def test_add_gateway_subtable(self):
+        self.route_cmd.table(self.table).add_gateway(self.gateway, self.metric)
+        self._assert_sudo([self.ip_version],
+                          ('replace', 'default',
+                           'via', self.gateway,
+                           'metric', self.metric,
+                           'dev', self.parent.name,
+                           'table', self.table))
+
     def test_del_gateway_success(self):
         self.route_cmd.delete_gateway(self.gateway, table=self.table)
         self._assert_sudo([self.ip_version],
@@ -822,6 +833,14 @@
                             'dev', self.parent.name,
                             'table', self.table))

+    def test_del_gateway_success_subtable(self):
+
self.route_cmd.table(table=self.table).delete_gateway(self.gateway) + self._assert_sudo([self.ip_version], + ('del', 'default', + 'via', self.gateway, + 'dev', self.parent.name, + 'table', self.table)) + def test_del_gateway_cannot_find_device(self): self.parent._as_root.side_effect = RuntimeError("Cannot find device") @@ -894,6 +913,33 @@ class TestIpRouteCommand(TestIPCmdBase): 'dev', self.parent.name, 'table', self.table)) + def test_list_onlink_routes_subtable(self): + self.parent._run.return_value = ( + "10.0.0.0/22\n" + "172.24.4.0/24 proto kernel src 172.24.4.2\n") + routes = self.route_cmd.table(self.table).list_onlink_routes( + self.ip_version) + self.assertEqual(['10.0.0.0/22'], routes) + self._assert_call([self.ip_version], + ('list', 'dev', self.parent.name, 'scope', 'link', + 'table', self.table)) + + def test_add_onlink_route_subtable(self): + self.route_cmd.table(self.table).add_onlink_route(self.cidr) + self._assert_sudo([self.ip_version], + ('replace', self.cidr, + 'dev', self.parent.name, + 'scope', 'link', + 'table', self.table)) + + def test_delete_onlink_route_subtable(self): + self.route_cmd.table(self.table).delete_onlink_route(self.cidr) + self._assert_sudo([self.ip_version], + ('del', self.cidr, + 'dev', self.parent.name, + 'scope', 'link', + 'table', self.table)) + class TestIPv6IpRouteCommand(TestIpRouteCommand): def setUp(self): diff --git a/neutron/tests/unit/agent/linux/test_iptables_firewall.py b/neutron/tests/unit/agent/linux/test_iptables_firewall.py index d43532df010..7b05a3fe649 100644 --- a/neutron/tests/unit/agent/linux/test_iptables_firewall.py +++ b/neutron/tests/unit/agent/linux/test_iptables_firewall.py @@ -604,6 +604,25 @@ class IptablesFirewallTestCase(BaseIptablesFirewallTestCase): egress = None self._test_prepare_port_filter(rule, ingress, egress) + def _test_filter_ingress_tcp_min_port_0(self, ethertype): + rule = {'ethertype': ethertype, + 'direction': 'ingress', + 'protocol': 'tcp', + 'port_range_min': 0, + 
'port_range_max': 100} + ingress = mock.call.add_rule( + 'ifake_dev', + '-p tcp -m tcp -m multiport --dports 0:100 -j RETURN', + comment=None) + egress = None + self._test_prepare_port_filter(rule, ingress, egress) + + def test_filter_ingress_tcp_min_port_0_for_ipv4(self): + self._test_filter_ingress_tcp_min_port_0('IPv4') + + def test_filter_ingress_tcp_min_port_0_for_ipv6(self): + self._test_filter_ingress_tcp_min_port_0('IPv6') + def test_filter_ipv6_ingress_tcp_mport_prefix(self): prefix = FAKE_PREFIX['IPv6'] rule = {'ethertype': 'IPv6', diff --git a/neutron/tests/unit/agent/linux/test_iptables_manager.py b/neutron/tests/unit/agent/linux/test_iptables_manager.py index 674b1a872f7..d6a1f9116f7 100644 --- a/neutron/tests/unit/agent/linux/test_iptables_manager.py +++ b/neutron/tests/unit/agent/linux/test_iptables_manager.py @@ -22,6 +22,7 @@ import testtools from neutron.agent.linux import iptables_comments as ic from neutron.agent.linux import iptables_manager +from neutron.common import constants from neutron.common import exceptions as n_exc from neutron.tests import base from neutron.tests import tools @@ -29,7 +30,8 @@ from neutron.tests import tools IPTABLES_ARG = {'bn': iptables_manager.binary_name, 'snat_out_comment': ic.SNAT_OUT, - 'filter_rules': ''} + 'filter_rules': '', + 'mark': constants.ROUTER_MARK_MASK} NAT_TEMPLATE = ('# Generated by iptables_manager\n' '*nat\n' @@ -603,10 +605,9 @@ class IptablesManagerStateFulTestCase(base.BaseTestCase): '[0:0] -A OUTPUT -j %(bn)s-OUTPUT\n' '[0:0] -A POSTROUTING -j %(bn)s-POSTROUTING\n' '[0:0] -A %(bn)s-PREROUTING -j %(bn)s-mark\n' - '[0:0] -A %(bn)s-PREROUTING -j MARK --set-xmark 0x1/0xffffffff\n' + '[0:0] -A %(bn)s-PREROUTING -j MARK --set-xmark 0x1/%(mark)s\n' 'COMMIT\n' - '# Completed by iptables_manager\n' - % IPTABLES_ARG) + '# Completed by iptables_manager\n' % IPTABLES_ARG) expected_calls_and_values = [ (mock.call(['iptables-save', '-c'], @@ -635,13 +636,13 @@ class 
IptablesManagerStateFulTestCase(base.BaseTestCase): self.iptables.ipv4['mangle'].add_chain('mangle') self.iptables.ipv4['mangle'].add_rule( 'PREROUTING', - '-j MARK --set-xmark 0x1/0xffffffff') + '-j MARK --set-xmark 0x1/%s' % constants.ROUTER_MARK_MASK) self.iptables.apply() self.iptables.ipv4['mangle'].remove_rule( 'PREROUTING', - '-j MARK --set-xmark 0x1/0xffffffff') + '-j MARK --set-xmark 0x1/%s' % constants.ROUTER_MARK_MASK) self.iptables.ipv4['mangle'].remove_chain('mangle') self.iptables.apply() diff --git a/neutron/tests/unit/agent/linux/test_keepalived.py b/neutron/tests/unit/agent/linux/test_keepalived.py index 7d6e9806f9e..1533edc6757 100644 --- a/neutron/tests/unit/agent/linux/test_keepalived.py +++ b/neutron/tests/unit/agent/linux/test_keepalived.py @@ -115,6 +115,8 @@ class KeepalivedConfTestCase(base.BaseTestCase, interface eth0 virtual_router_id 1 priority 50 + garp_master_repeat 5 + garp_master_refresh 10 advert_int 5 authentication { auth_type AH @@ -141,6 +143,8 @@ vrrp_instance VR_2 { interface eth4 virtual_router_id 2 priority 50 + garp_master_repeat 5 + garp_master_refresh 10 mcast_src_ip 224.0.0.1 track_interface { eth4 @@ -202,11 +206,15 @@ class KeepalivedInstanceRoutesTestCase(base.BaseTestCase): keepalived.KeepalivedVirtualRoute('10.0.0.0/8', '1.0.0.1'), keepalived.KeepalivedVirtualRoute('20.0.0.0/8', '2.0.0.2')] routes.extra_routes = extra_routes + extra_subnets = [ + keepalived.KeepalivedVirtualRoute( + '30.0.0.0/8', None, 'eth0', scope='link')] + routes.extra_subnets = extra_subnets return routes def test_routes(self): routes = self._get_instance_routes() - self.assertEqual(len(routes.routes), 4) + self.assertEqual(len(routes.routes), 5) def test_remove_routes_on_interface(self): routes = self._get_instance_routes() @@ -221,6 +229,7 @@ class KeepalivedInstanceRoutesTestCase(base.BaseTestCase): ::/0 via fe80::3e97:eff:fe26:3bfa/64 dev eth1 10.0.0.0/8 via 1.0.0.1 20.0.0.0/8 via 2.0.0.2 + 30.0.0.0/8 dev eth0 scope link }""" routes = 
self._get_instance_routes() self.assertEqual(expected, '\n'.join(routes.build_config())) @@ -233,7 +242,7 @@ class KeepalivedInstanceTestCase(base.BaseTestCase, ['169.254.192.0/18']) self.assertEqual('169.254.0.42/24', instance.get_primary_vip()) - def test_remove_adresses_by_interface(self): + def test_remove_addresses_by_interface(self): config = self._get_config() instance = config.get_instance(1) instance.remove_vips_vroutes_by_interface('eth2') @@ -244,6 +253,8 @@ class KeepalivedInstanceTestCase(base.BaseTestCase, interface eth0 virtual_router_id 1 priority 50 + garp_master_repeat 5 + garp_master_refresh 10 advert_int 5 authentication { auth_type AH @@ -267,6 +278,8 @@ vrrp_instance VR_2 { interface eth4 virtual_router_id 2 priority 50 + garp_master_repeat 5 + garp_master_refresh 10 mcast_src_ip 224.0.0.1 track_interface { eth4 @@ -289,6 +302,8 @@ vrrp_instance VR_2 { interface eth0 virtual_router_id 1 priority 50 + garp_master_repeat 5 + garp_master_refresh 10 virtual_ipaddress { 169.254.0.1/24 dev eth0 } diff --git a/neutron/tests/unit/agent/linux/test_utils.py b/neutron/tests/unit/agent/linux/test_utils.py index 9958d0422f8..9a2e89ffa35 100644 --- a/neutron/tests/unit/agent/linux/test_utils.py +++ b/neutron/tests/unit/agent/linux/test_utils.py @@ -12,9 +12,9 @@ # License for the specific language governing permissions and limitations # under the License. 
-import errno -import mock import socket + +import mock import testtools from neutron.agent.linux import utils @@ -282,18 +282,6 @@ class TestBaseOSUtils(base.BaseTestCase): getegid.assert_called_once_with() getgrgid.assert_called_once_with(self.EGID) - @mock.patch('os.makedirs') - def test_ensure_dir_no_fail_if_exists(self, makedirs): - error = OSError() - error.errno = errno.EEXIST - makedirs.side_effect = error - utils.ensure_dir("/etc/create/concurrently") - - @mock.patch('os.makedirs') - def test_ensure_dir_calls_makedirs(self, makedirs): - utils.ensure_dir("/etc/create/directory") - makedirs.assert_called_once_with("/etc/create/directory", 0o755) - class TestUnixDomainHttpConnection(base.BaseTestCase): def test_connect(self): diff --git a/neutron/tests/unit/agent/metadata/test_agent.py b/neutron/tests/unit/agent/metadata/test_agent.py index 9bef96864c8..20aeba05085 100644 --- a/neutron/tests/unit/agent/metadata/test_agent.py +++ b/neutron/tests/unit/agent/metadata/test_agent.py @@ -23,6 +23,7 @@ from neutron.agent import metadata_agent from neutron.common import constants from neutron.common import utils from neutron.tests import base +from neutronclient.v2_0 import client class FakeConf(object): @@ -43,12 +44,55 @@ class FakeConf(object): nova_client_cert = 'nova_cert' nova_client_priv_key = 'nova_priv_key' cache_url = '' + endpoint_url = None class FakeConfCache(FakeConf): cache_url = 'memory://?default_ttl=5' +class FakeConfEndpoint(FakeConf): + endpoint_url = 'http://127.0.0.0:8776' + + +class TestNeutronClient(base.BaseTestCase): + fake_conf = FakeConf + expected_params = { + 'username': 'neutron', + 'region_name': 'region', + 'ca_cert': None, + 'tenant_name': 'tenant', + 'insecure': False, + 'token': None, + 'endpoint_type': 'adminURL', + 'auth_url': 'http://127.0.0.1', + 'password': 'password', + 'endpoint_url': None, + 'auth_strategy': 'keystone', + } + + def test_client_params(self): + handler = agent.MetadataProxyHandler(self.fake_conf) + + with 
mock.patch.object( + client.Client, "__init__", return_value=None) as mock_init: + handler._get_neutron_client() + mock_init.assert_called_once_with(**self.expected_params) + + def test_client_with_endpoint_url(self): + fake_conf = FakeConfEndpoint + handler = agent.MetadataProxyHandler(fake_conf) + + expected_params = self.expected_params.copy() + del expected_params['endpoint_type'] + expected_params['endpoint_url'] = 'http://127.0.0.0:8776' + + with mock.patch.object( + client.Client, "__init__", return_value=None) as mock_init: + handler._get_neutron_client() + mock_init.assert_called_once_with(**expected_params) + + class TestMetadataProxyHandlerBase(base.BaseTestCase): fake_conf = FakeConf @@ -524,7 +568,7 @@ class TestUnixDomainMetadataProxy(base.BaseTestCase): self.cfg.CONF.metadata_backlog = 128 self.cfg.CONF.metadata_proxy_socket_mode = config.USER_MODE - @mock.patch.object(agent_utils, 'ensure_dir') + @mock.patch.object(utils, 'ensure_dir') def test_init_doesnot_exists(self, ensure_dir): agent.UnixDomainMetadataProxy(mock.Mock()) ensure_dir.assert_called_once_with('/the') @@ -561,7 +605,7 @@ class TestUnixDomainMetadataProxy(base.BaseTestCase): @mock.patch.object(agent, 'MetadataProxyHandler') @mock.patch.object(agent_utils, 'UnixDomainWSGIServer') - @mock.patch.object(agent_utils, 'ensure_dir') + @mock.patch.object(utils, 'ensure_dir') def test_run(self, ensure_dir, server, handler): p = agent.UnixDomainMetadataProxy(self.cfg.CONF) p.run() diff --git a/neutron/tests/unit/agent/metadata/test_driver.py b/neutron/tests/unit/agent/metadata/test_driver.py index 5cbfd182f9b..d86c4fbce01 100644 --- a/neutron/tests/unit/agent/metadata/test_driver.py +++ b/neutron/tests/unit/agent/metadata/test_driver.py @@ -23,6 +23,7 @@ from neutron.agent.l3 import config as l3_config from neutron.agent.l3 import ha as l3_ha_agent from neutron.agent.metadata import config from neutron.agent.metadata import driver as metadata_driver +from neutron.common import constants from 
neutron.tests import base @@ -39,7 +40,8 @@ class TestMetadataDriverRules(base.BaseTestCase): metadata_driver.MetadataDriver.metadata_nat_rules(8775)) def test_metadata_filter_rules(self): - rules = [('INPUT', '-m mark --mark 0x1 -j ACCEPT'), + rules = [('INPUT', '-m mark --mark 0x1/%s -j ACCEPT' % + constants.ROUTER_MARK_MASK), ('INPUT', '-p tcp -m tcp --dport 8775 -j DROP')] self.assertEqual( rules, @@ -49,7 +51,7 @@ class TestMetadataDriverRules(base.BaseTestCase): rule = ('PREROUTING', '-d 169.254.169.254/32 ' '-p tcp -m tcp --dport 80 ' '-j MARK --set-xmark 0x1/%s' % - metadata_driver.METADATA_ACCESS_MARK_MASK) + constants.ROUTER_MARK_MASK) self.assertEqual( [rule], metadata_driver.MetadataDriver.metadata_mangle_rules('0x1')) diff --git a/neutron/tests/unit/agent/test_securitygroups_rpc.py b/neutron/tests/unit/agent/test_securitygroups_rpc.py index eb4f1ac839c..030899cf7a6 100644 --- a/neutron/tests/unit/agent/test_securitygroups_rpc.py +++ b/neutron/tests/unit/agent/test_securitygroups_rpc.py @@ -182,7 +182,8 @@ class SGServerRpcCallBackTestCase(test_sg.SecurityGroupDBTestCase): '192.168.1.3') self.assertFalse(self.notifier.security_groups_provider_updated.called) - def test_security_group_rules_for_devices_ipv4_ingress(self): + def _test_sg_rules_for_devices_ipv4_ingress_port_range( + self, min_port, max_port): fake_prefix = FAKE_PREFIX[const.IPv4] with self.network() as n,\ self.subnet(n),\ @@ -190,8 +191,8 @@ class SGServerRpcCallBackTestCase(test_sg.SecurityGroupDBTestCase): sg1_id = sg1['security_group']['id'] rule1 = self._build_security_group_rule( sg1_id, - 'ingress', const.PROTO_NAME_TCP, '22', - '22') + 'ingress', const.PROTO_NAME_TCP, str(min_port), + str(max_port)) rule2 = self._build_security_group_rule( sg1_id, 'ingress', const.PROTO_NAME_TCP, '23', @@ -221,9 +222,9 @@ class SGServerRpcCallBackTestCase(test_sg.SecurityGroupDBTestCase): {'direction': 'ingress', 'protocol': const.PROTO_NAME_TCP, 'ethertype': const.IPv4, - 'port_range_max': 22, + 
'port_range_max': max_port, + 'security_group_id': sg1_id, - 'port_range_min': 22}, + 'port_range_min': min_port}, {'direction': 'ingress', 'protocol': const.PROTO_NAME_TCP, 'ethertype': const.IPv4, @@ -235,6 +236,12 @@ class SGServerRpcCallBackTestCase(test_sg.SecurityGroupDBTestCase): expected) self._delete('ports', port_id1) + def test_sg_rules_for_devices_ipv4_ingress_port_range_min_port_0(self): + self._test_sg_rules_for_devices_ipv4_ingress_port_range(0, 10) + + def test_sg_rules_for_devices_ipv4_ingress_port_range_min_port_1(self): + self._test_sg_rules_for_devices_ipv4_ingress_port_range(1, 10) + @contextlib.contextmanager def _port_with_addr_pairs_and_security_group(self): plugin_obj = manager.NeutronManager.get_plugin() diff --git a/neutron/tests/unit/api/test_extensions.py b/neutron/tests/unit/api/test_extensions.py index 0730f3e321d..19b9858da5b 100644 --- a/neutron/tests/unit/api/test_extensions.py +++ b/neutron/tests/unit/api/test_extensions.py @@ -486,6 +486,55 @@ class ExtensionManagerTest(base.BaseTestCase): self.assertIn('valid_extension', ext_mgr.extensions) self.assertNotIn('invalid_extension', ext_mgr.extensions) + def test_assignment_of_attr_map(self): + """Unit test for bug 1443342 + + In this bug, an extension that extended multiple resources with the + same dict would cause future extensions to inadvertently modify the + attributes of all of the resources since they were referencing the same + dictionary. + """ + + class MultiResourceExtension(ext_stubs.StubExtension): + """Generated Extended Resources. + + This extension's extended resources will be assigned + to more than one resource.
+ """ + + def get_extended_resources(self, version): + EXTENDED_TIMESTAMP = { + 'created_at': {'allow_post': False, 'allow_put': False, + 'is_visible': True}} + EXTENDED_RESOURCES = ["ext1", "ext2"] + attrs = {} + for resources in EXTENDED_RESOURCES: + attrs[resources] = EXTENDED_TIMESTAMP + + return attrs + + class AttrExtension(ext_stubs.StubExtension): + def get_extended_resources(self, version): + attrs = { + self.alias: { + '%s-attr' % self.alias: {'allow_post': False, + 'allow_put': False, + 'is_visible': True}}} + return attrs + + ext_mgr = extensions.ExtensionManager('') + attr_map = {} + ext_mgr.add_extension(MultiResourceExtension('timestamp')) + ext_mgr.extend_resources("2.0", attr_map) + ext_mgr.add_extension(AttrExtension("ext1")) + ext_mgr.add_extension(AttrExtension("ext2")) + ext_mgr.extend_resources("2.0", attr_map) + self.assertIn('created_at', attr_map['ext2']) + self.assertIn('created_at', attr_map['ext1']) + # now we need to make sure the attrextensions didn't leak across + self.assertNotIn('ext1-attr', attr_map['ext2']) + self.assertNotIn('ext2-attr', attr_map['ext1']) + class PluginAwareExtensionManagerTest(base.BaseTestCase): diff --git a/neutron/tests/unit/api/v2/test_base.py b/neutron/tests/unit/api/v2/test_base.py index eae4dd21b53..0ee9c2ec313 100644 --- a/neutron/tests/unit/api/v2/test_base.py +++ b/neutron/tests/unit/api/v2/test_base.py @@ -37,6 +37,7 @@ from neutron import context from neutron import manager from neutron import policy from neutron import quota +from neutron.quota import resource_registry from neutron.tests import base from neutron.tests import fake_notifier from neutron.tests import tools @@ -1289,6 +1290,12 @@ class NotificationTest(APIv2TestBase): class DHCPNotificationTest(APIv2TestBase): + + def setUp(self): + # This test does not have database support so tracking cannot be used + cfg.CONF.set_override('track_quota_usage', False, group='QUOTAS') + super(DHCPNotificationTest, self).setUp() + def 
_test_dhcp_notifier(self, opname, resource, initial_input=None): instance = self.plugin.return_value instance.get_networks.return_value = initial_input @@ -1340,6 +1347,23 @@ class DHCPNotificationTest(APIv2TestBase): class QuotaTest(APIv2TestBase): + + def setUp(self): + # This test does not have database support so tracking cannot be used + cfg.CONF.set_override('track_quota_usage', False, group='QUOTAS') + super(QuotaTest, self).setUp() + # Use mock to let the API use a different QuotaEngine instance for + # unit tests in this class. This will ensure resources are registered + # again and instantiated with neutron.quota.resource.CountableResource + replacement_registry = resource_registry.ResourceRegistry() + registry_patcher = mock.patch('neutron.quota.resource_registry.' + 'ResourceRegistry.get_instance') + mock_registry = registry_patcher.start().return_value + mock_registry.get_resource = replacement_registry.get_resource + mock_registry.resources = replacement_registry.resources + # Register a resource + replacement_registry.register_resource_by_name('network') + def test_create_network_quota(self): cfg.CONF.set_override('quota_network', 1, group='QUOTAS') initial_input = {'network': {'name': 'net1', 'tenant_id': _uuid()}} @@ -1384,9 +1408,10 @@ class QuotaTest(APIv2TestBase): class ExtensionTestCase(base.BaseTestCase): def setUp(self): + # This test does not have database support so tracking cannot be used + cfg.CONF.set_override('track_quota_usage', False, group='QUOTAS') super(ExtensionTestCase, self).setUp() plugin = 'neutron.neutron_plugin_base_v2.NeutronPluginBaseV2' - # Ensure existing ExtensionManager is not used extensions.PluginAwareExtensionManager._instance = None diff --git a/neutron/tests/unit/api/v2/test_resource.py b/neutron/tests/unit/api/v2/test_resource.py index 96c7d2da29d..36afc95572e 100644 --- a/neutron/tests/unit/api/v2/test_resource.py +++ b/neutron/tests/unit/api/v2/test_resource.py @@ -289,19 +289,21 @@ class 
ResourceTestCase(base.BaseTestCase): res = resource.delete('', extra_environ=environ) self.assertEqual(res.status_int, 204) - def _test_error_log_level(self, map_webob_exc, expect_log_info=False, - use_fault_map=True): - class TestException(n_exc.NeutronException): - message = 'Test Exception' + def _test_error_log_level(self, expected_webob_exc, expect_log_info=False, + use_fault_map=True, exc_raised=None): + if not exc_raised: + class TestException(n_exc.NeutronException): + message = 'Test Exception' + exc_raised = TestException controller = mock.MagicMock() - controller.test.side_effect = TestException() - faults = {TestException: map_webob_exc} if use_fault_map else {} + controller.test.side_effect = exc_raised() + faults = {exc_raised: expected_webob_exc} if use_fault_map else {} resource = webtest.TestApp(wsgi_resource.Resource(controller, faults)) environ = {'wsgiorg.routing_args': (None, {'action': 'test'})} with mock.patch.object(wsgi_resource, 'LOG') as log: res = resource.get('', extra_environ=environ, expect_errors=True) - self.assertEqual(res.status_int, map_webob_exc.code) + self.assertEqual(res.status_int, expected_webob_exc.code) self.assertEqual(expect_log_info, log.info.called) self.assertNotEqual(expect_log_info, log.exception.called) @@ -316,6 +318,16 @@ class ResourceTestCase(base.BaseTestCase): self._test_error_log_level(exc.HTTPInternalServerError, expect_log_info=False, use_fault_map=False) + def test_webob_4xx_logged_info_level(self): + self._test_error_log_level(exc.HTTPNotFound, + use_fault_map=False, expect_log_info=True, + exc_raised=exc.HTTPNotFound) + + def test_webob_5xx_logged_info_level(self): + self._test_error_log_level(exc.HTTPServiceUnavailable, + use_fault_map=False, expect_log_info=False, + exc_raised=exc.HTTPServiceUnavailable) + def test_no_route_args(self): controller = mock.MagicMock() diff --git a/neutron/tests/unit/common/test_log.py b/neutron/tests/unit/common/test_log.py deleted file mode 100644 index 
b6ed65b43a3..00000000000 --- a/neutron/tests/unit/common/test_log.py +++ /dev/null @@ -1,57 +0,0 @@ -# Copyright (c) 2013 OpenStack Foundation. -# -# Licensed under the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. You may obtain -# a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT -# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the -# License for the specific language governing permissions and limitations -# under the License. - -import mock - -from neutron.common import log as call_log -from neutron.tests import base - - -class TargetKlass(object): - - @call_log.log - def test_method(self, arg1, arg2, *args, **kwargs): - pass - - -class TestCallLog(base.BaseTestCase): - def setUp(self): - super(TestCallLog, self).setUp() - self.klass = TargetKlass() - logger = self.klass.test_method.__func__.__closure__[0].cell_contents - self.log_debug = mock.patch.object(logger, 'debug').start() - - def _test_call_log(self, *args, **kwargs): - expected_format = ('%(class_name)s method %(method_name)s ' - 'called with arguments %(args)s %(kwargs)s') - expected_data = {'class_name': '%s.%s' % ( - __name__, - self.klass.__class__.__name__), - 'method_name': 'test_method', - 'args': args, - 'kwargs': kwargs} - self.klass.test_method(*args, **kwargs) - self.log_debug.assert_called_once_with(expected_format, expected_data) - - def test_call_log_all_args(self): - self._test_call_log(10, 20) - - def test_call_log_all_kwargs(self): - self._test_call_log(arg1=10, arg2=20) - - def test_call_log_known_args_unknown_args_kwargs(self): - self._test_call_log(10, 20, 30, arg4=40) - - def test_call_log_known_args_kwargs_unknown_kwargs(self): - self._test_call_log(10, arg2=20, arg3=30, arg4=40) diff --git 
a/neutron/tests/unit/common/test_utils.py b/neutron/tests/unit/common/test_utils.py index 82c84904c00..81634f979e3 100644 --- a/neutron/tests/unit/common/test_utils.py +++ b/neutron/tests/unit/common/test_utils.py @@ -12,6 +12,8 @@ # License for the specific language governing permissions and limitations # under the License. +import errno + import eventlet import mock import netaddr @@ -663,3 +665,17 @@ class TestDelayedStringRenderer(base.BaseTestCase): LOG.logger.setLevel(logging.logging.DEBUG) LOG.debug("Hello %s", delayed) self.assertTrue(my_func.called) + + +class TestEnsureDir(base.BaseTestCase): + @mock.patch('os.makedirs') + def test_ensure_dir_no_fail_if_exists(self, makedirs): + error = OSError() + error.errno = errno.EEXIST + makedirs.side_effect = error + utils.ensure_dir("/etc/create/concurrently") + + @mock.patch('os.makedirs') + def test_ensure_dir_calls_makedirs(self, makedirs): + utils.ensure_dir("/etc/create/directory") + makedirs.assert_called_once_with("/etc/create/directory", 0o755) diff --git a/neutron/plugins/ml2/drivers/cisco/n1kv/__init__.py b/neutron/tests/unit/db/quota/__init__.py similarity index 100% rename from neutron/plugins/ml2/drivers/cisco/n1kv/__init__.py rename to neutron/tests/unit/db/quota/__init__.py diff --git a/neutron/tests/unit/db/quota/test_api.py b/neutron/tests/unit/db/quota/test_api.py new file mode 100644 index 00000000000..a64e2b98b44 --- /dev/null +++ b/neutron/tests/unit/db/quota/test_api.py @@ -0,0 +1,229 @@ +# Copyright (c) 2015 OpenStack Foundation. All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the +# License for the specific language governing permissions and limitations +# under the License. + +from neutron import context +from neutron.db.quota import api as quota_api +from neutron.tests.unit import testlib_api + + +class TestQuotaDbApi(testlib_api.SqlTestCaseLight): + + def _set_context(self): + self.tenant_id = 'Higuain' + self.context = context.Context('Gonzalo', self.tenant_id, + is_admin=False, is_advsvc=False) + + def _create_quota_usage(self, resource, used, reserved, tenant_id=None): + tenant_id = tenant_id or self.tenant_id + return quota_api.set_quota_usage( + self.context, resource, tenant_id, + in_use=used, reserved=reserved) + + def _verify_quota_usage(self, usage_info, + expected_resource=None, + expected_used=None, + expected_reserved=None, + expected_dirty=None): + self.assertEqual(self.tenant_id, usage_info.tenant_id) + if expected_resource: + self.assertEqual(expected_resource, usage_info.resource) + if expected_dirty is not None: + self.assertEqual(expected_dirty, usage_info.dirty) + if expected_used is not None: + self.assertEqual(expected_used, usage_info.used) + if expected_reserved is not None: + self.assertEqual(expected_reserved, usage_info.reserved) + if expected_used is not None and expected_reserved is not None: + self.assertEqual(expected_used + expected_reserved, + usage_info.total) + + def setUp(self): + super(TestQuotaDbApi, self).setUp() + self._set_context() + + def test_create_quota_usage(self): + usage_info = self._create_quota_usage('goals', 26, 10) + self._verify_quota_usage(usage_info, + expected_resource='goals', + expected_used=26, + expected_reserved=10) + + def test_update_quota_usage(self): + self._create_quota_usage('goals', 26, 10) + # Higuain scores a double + usage_info_1 = quota_api.set_quota_usage( + self.context, 'goals', self.tenant_id, + in_use=28) + self._verify_quota_usage(usage_info_1, + expected_used=28, + expected_reserved=10) + usage_info_2 = quota_api.set_quota_usage( + self.context, 
'goals', self.tenant_id, + reserved=8) + self._verify_quota_usage(usage_info_2, + expected_used=28, + expected_reserved=8) + + def test_update_quota_usage_with_deltas(self): + self._create_quota_usage('goals', 26, 10) + # Higuain scores a double + usage_info_1 = quota_api.set_quota_usage( + self.context, 'goals', self.tenant_id, + in_use=2, delta=True) + self._verify_quota_usage(usage_info_1, + expected_used=28, + expected_reserved=10) + usage_info_2 = quota_api.set_quota_usage( + self.context, 'goals', self.tenant_id, + reserved=-2, delta=True) + self._verify_quota_usage(usage_info_2, + expected_used=28, + expected_reserved=8) + + def test_set_quota_usage_dirty(self): + self._create_quota_usage('goals', 26, 10) + # Higuain needs a shower after the match + self.assertEqual(1, quota_api.set_quota_usage_dirty( + self.context, 'goals', self.tenant_id)) + usage_info = quota_api.get_quota_usage_by_resource_and_tenant( + self.context, 'goals', self.tenant_id) + self._verify_quota_usage(usage_info, + expected_dirty=True) + # Higuain is clean now + self.assertEqual(1, quota_api.set_quota_usage_dirty( + self.context, 'goals', self.tenant_id, dirty=False)) + usage_info = quota_api.get_quota_usage_by_resource_and_tenant( + self.context, 'goals', self.tenant_id) + self._verify_quota_usage(usage_info, + expected_dirty=False) + + def test_set_dirty_non_existing_quota_usage(self): + self.assertEqual(0, quota_api.set_quota_usage_dirty( + self.context, 'meh', self.tenant_id)) + + def test_set_resources_quota_usage_dirty(self): + self._create_quota_usage('goals', 26, 10) + self._create_quota_usage('assists', 11, 5) + self._create_quota_usage('bookings', 3, 1) + self.assertEqual(2, quota_api.set_resources_quota_usage_dirty( + self.context, ['goals', 'bookings'], self.tenant_id)) + usage_info_goals = quota_api.get_quota_usage_by_resource_and_tenant( + self.context, 'goals', self.tenant_id) + usage_info_assists = quota_api.get_quota_usage_by_resource_and_tenant( + self.context, 
'assists', self.tenant_id) + usage_info_bookings = quota_api.get_quota_usage_by_resource_and_tenant( + self.context, 'bookings', self.tenant_id) + self._verify_quota_usage(usage_info_goals, expected_dirty=True) + self._verify_quota_usage(usage_info_assists, expected_dirty=False) + self._verify_quota_usage(usage_info_bookings, expected_dirty=True) + + def test_set_resources_quota_usage_dirty_with_empty_list(self): + self._create_quota_usage('goals', 26, 10) + self._create_quota_usage('assists', 11, 5) + self._create_quota_usage('bookings', 3, 1) + # Expect all the resources for the tenant to be set dirty + self.assertEqual(3, quota_api.set_resources_quota_usage_dirty( + self.context, [], self.tenant_id)) + usage_info_goals = quota_api.get_quota_usage_by_resource_and_tenant( + self.context, 'goals', self.tenant_id) + usage_info_assists = quota_api.get_quota_usage_by_resource_and_tenant( + self.context, 'assists', self.tenant_id) + usage_info_bookings = quota_api.get_quota_usage_by_resource_and_tenant( + self.context, 'bookings', self.tenant_id) + self._verify_quota_usage(usage_info_goals, expected_dirty=True) + self._verify_quota_usage(usage_info_assists, expected_dirty=True) + self._verify_quota_usage(usage_info_bookings, expected_dirty=True) + + # Higuain is clean now + self.assertEqual(1, quota_api.set_quota_usage_dirty( + self.context, 'goals', self.tenant_id, dirty=False)) + usage_info = quota_api.get_quota_usage_by_resource_and_tenant( + self.context, 'goals', self.tenant_id) + self._verify_quota_usage(usage_info, + expected_dirty=False) + + def _test_set_all_quota_usage_dirty(self, expected): + self._create_quota_usage('goals', 26, 10) + self._create_quota_usage('goals', 12, 6, tenant_id='Callejon') + self.assertEqual(expected, quota_api.set_all_quota_usage_dirty( + self.context, 'goals')) + + def test_set_all_quota_usage_dirty(self): + # All goal scorers need a shower after the match, but since this is not + # admin context we can clean only one + 
self._test_set_all_quota_usage_dirty(expected=1) + + def test_get_quota_usage_by_tenant(self): + self._create_quota_usage('goals', 26, 10) + self._create_quota_usage('assists', 11, 5) + # Create a resource for a different tenant + self._create_quota_usage('mehs', 99, 99, tenant_id='buffon') + usage_infos = quota_api.get_quota_usage_by_tenant_id( + self.context, self.tenant_id) + + self.assertEqual(2, len(usage_infos)) + resources = [info.resource for info in usage_infos] + self.assertIn('goals', resources) + self.assertIn('assists', resources) + + def test_get_quota_usage_by_resource(self): + self._create_quota_usage('goals', 26, 10) + self._create_quota_usage('assists', 11, 5) + self._create_quota_usage('goals', 12, 6, tenant_id='Callejon') + usage_infos = quota_api.get_quota_usage_by_resource( + self.context, 'goals') + # Only 1 result expected in tenant context + self.assertEqual(1, len(usage_infos)) + self._verify_quota_usage(usage_infos[0], + expected_resource='goals', + expected_used=26, + expected_reserved=10) + + def test_get_quota_usage_by_tenant_and_resource(self): + self._create_quota_usage('goals', 26, 10) + usage_info = quota_api.get_quota_usage_by_resource_and_tenant( + self.context, 'goals', self.tenant_id) + self._verify_quota_usage(usage_info, + expected_resource='goals', + expected_used=26, + expected_reserved=10) + + def test_get_non_existing_quota_usage_returns_none(self): + self.assertIsNone(quota_api.get_quota_usage_by_resource_and_tenant( + self.context, 'goals', self.tenant_id)) + + +class TestQuotaDbApiAdminContext(TestQuotaDbApi): + + def _set_context(self): + self.tenant_id = 'Higuain' + self.context = context.Context('Gonzalo', self.tenant_id, + is_admin=True, is_advsvc=True, + load_admin_roles=False) + + def test_get_quota_usage_by_resource(self): + self._create_quota_usage('goals', 26, 10) + self._create_quota_usage('assists', 11, 5) + self._create_quota_usage('goals', 12, 6, tenant_id='Callejon') + usage_infos = 
quota_api.get_quota_usage_by_resource( + self.context, 'goals') + # 2 results expected in admin context + self.assertEqual(2, len(usage_infos)) + for usage_info in usage_infos: + self.assertEqual('goals', usage_info.resource) + + def test_set_all_quota_usage_dirty(self): + # All goal scorers need a shower after the match, and with admin + # context we should be able to clean all of them + self._test_set_all_quota_usage_dirty(expected=2) diff --git a/neutron/tests/unit/db/test_quota_db.py b/neutron/tests/unit/db/quota/test_driver.py similarity index 97% rename from neutron/tests/unit/db/test_quota_db.py rename to neutron/tests/unit/db/quota/test_driver.py index b6ba3f52016..31a741721ce 100644 --- a/neutron/tests/unit/db/test_quota_db.py +++ b/neutron/tests/unit/db/quota/test_driver.py @@ -16,11 +16,11 @@ from neutron.common import exceptions from neutron import context from neutron.db import db_base_plugin_v2 as base_plugin -from neutron.db import quota_db +from neutron.db.quota import driver from neutron.tests.unit import testlib_api -class FakePlugin(base_plugin.NeutronDbPluginV2, quota_db.DbQuotaDriver): +class FakePlugin(base_plugin.NeutronDbPluginV2, driver.DbQuotaDriver): """A fake plugin class containing all DB methods.""" diff --git a/neutron/tests/unit/db/test_db_base_plugin_v2.py b/neutron/tests/unit/db/test_db_base_plugin_v2.py index faab63dcfc3..f91b0ad2372 100644 --- a/neutron/tests/unit/db/test_db_base_plugin_v2.py +++ b/neutron/tests/unit/db/test_db_base_plugin_v2.py @@ -20,7 +20,6 @@ import itertools import mock import netaddr from oslo_config import cfg -from oslo_db import exception as db_exc from oslo_utils import importutils import six from sqlalchemy import orm @@ -2293,7 +2292,7 @@ class TestNetworksV2(NeutronDbPluginV2TestCase): # must query db to see whether subnet's shared attribute # has been updated or not ctx = context.Context('', '', is_admin=True) - subnet_db = manager.NeutronManager.get_plugin()._get_subnet( + subnet_db = 
manager.NeutronManager.get_plugin().get_subnet( ctx, subnet['subnet']['id']) self.assertEqual(subnet_db['shared'], True) @@ -3278,6 +3277,9 @@ class TestSubnetsV2(NeutronDbPluginV2TestCase): [{'start': '10.0.0.2', 'end': '10.0.0.254'}, {'end': '10.0.0.254'}], None, + [{'start': '10.0.0.200', 'end': '10.0.3.20'}], + [{'start': '10.0.2.250', 'end': '10.0.3.5'}], + [{'start': '10.0.2.10', 'end': '10.0.2.5'}], [{'start': '10.0.0.2', 'end': '10.0.0.3'}, {'start': '10.0.0.2', 'end': '10.0.0.3'}]] tenant_id = network['network']['tenant_id'] @@ -3806,14 +3808,19 @@ class TestSubnetsV2(NeutronDbPluginV2TestCase): self.subnet(network=network) as v4_subnet,\ self.port(subnet=v4_subnet, device_owner=device_owner) as port: if insert_db_reference_error: - def db_ref_err_for_ipalloc(instance): + orig_fn = orm.Session.add + + def db_ref_err_for_ipalloc(s, instance): if instance.__class__.__name__ == 'IPAllocation': - raise db_exc.DBReferenceError( - 'dummy_table', 'dummy_constraint', - 'dummy_key', 'dummy_key_table') + # tweak port_id to cause a FK violation, + # thus DBReferenceError + instance.port_id = 'nonexistent' + return orig_fn(s, instance) + mock.patch.object(orm.Session, 'add', - side_effect=db_ref_err_for_ipalloc).start() - mock.patch.object(non_ipam.IpamNonPluggableBackend, + side_effect=db_ref_err_for_ipalloc, + autospec=True).start() + mock.patch.object(db_base_plugin_common.DbBasePluginCommon, '_get_subnet', return_value=mock.Mock()).start() # Add an IPv6 auto-address subnet to the network @@ -3915,8 +3922,8 @@ class TestSubnetsV2(NeutronDbPluginV2TestCase): res = self.deserialize(self.fmt, req.get_response(self.api)) self.assertEqual(sorted(res['subnet']['host_routes']), sorted(host_routes)) - self.assertEqual(sorted(res['subnet']['dns_nameservers']), - sorted(dns_nameservers)) + self.assertEqual(res['subnet']['dns_nameservers'], + dns_nameservers) def test_update_subnet_shared_returns_400(self): with self.network(shared=True) as network: @@ -4457,6 +4464,27 @@ 
class TestSubnetsV2(NeutronDbPluginV2TestCase): self.assertEqual(res['subnet']['dns_nameservers'], data['subnet']['dns_nameservers']) + def test_subnet_lifecycle_dns_retains_order(self): + cfg.CONF.set_override('max_dns_nameservers', 3) + with self.subnet(dns_nameservers=['1.1.1.1', '2.2.2.2', + '3.3.3.3']) as subnet: + subnets = self._show('subnets', subnet['subnet']['id'], + expected_code=webob.exc.HTTPOk.code) + self.assertEqual(['1.1.1.1', '2.2.2.2', '3.3.3.3'], + subnets['subnet']['dns_nameservers']) + data = {'subnet': {'dns_nameservers': ['2.2.2.2', '3.3.3.3', + '1.1.1.1']}} + req = self.new_update_request('subnets', + data, + subnet['subnet']['id']) + res = self.deserialize(self.fmt, req.get_response(self.api)) + self.assertEqual(data['subnet']['dns_nameservers'], + res['subnet']['dns_nameservers']) + subnets = self._show('subnets', subnet['subnet']['id'], + expected_code=webob.exc.HTTPOk.code) + self.assertEqual(data['subnet']['dns_nameservers'], + subnets['subnet']['dns_nameservers']) + def test_update_subnet_dns_to_None(self): with self.subnet(dns_nameservers=['11.0.0.1']) as subnet: data = {'subnet': {'dns_nameservers': None}} @@ -5323,8 +5351,8 @@ class DbModelTestCase(base.BaseTestCase): exp_middle = "[object at %x]" % id(network) exp_end_with = (" {tenant_id=None, id=None, " "name='net_net', status='OK', " - "admin_state_up=True, shared=None, " - "mtu=None, vlan_transparent=None}>") + "admin_state_up=True, mtu=None, " + "vlan_transparent=None}>") final_exp = exp_start_with + exp_middle + exp_end_with self.assertEqual(actual_repr_output, final_exp) diff --git a/neutron/tests/unit/db/test_ipam_pluggable_backend.py b/neutron/tests/unit/db/test_ipam_pluggable_backend.py new file mode 100644 index 00000000000..80d826c7977 --- /dev/null +++ b/neutron/tests/unit/db/test_ipam_pluggable_backend.py @@ -0,0 +1,493 @@ +# Copyright (c) 2015 Infoblox Inc. +# All Rights Reserved. 
+# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. + +import mock +import netaddr +import webob.exc + +from oslo_config import cfg +from oslo_utils import uuidutils + +from neutron.common import exceptions as n_exc +from neutron.common import ipv6_utils +from neutron.db import ipam_backend_mixin +from neutron.db import ipam_pluggable_backend +from neutron.ipam import requests as ipam_req +from neutron.tests.unit.db import test_db_base_plugin_v2 as test_db_base + + +class UseIpamMixin(object): + + def setUp(self): + cfg.CONF.set_override("ipam_driver", 'internal') + super(UseIpamMixin, self).setUp() + + +class TestIpamHTTPResponse(UseIpamMixin, test_db_base.TestV2HTTPResponse): + pass + + +class TestIpamPorts(UseIpamMixin, test_db_base.TestPortsV2): + pass + + +class TestIpamNetworks(UseIpamMixin, test_db_base.TestNetworksV2): + pass + + +class TestIpamSubnets(UseIpamMixin, test_db_base.TestSubnetsV2): + pass + + +class TestIpamSubnetPool(UseIpamMixin, test_db_base.TestSubnetPoolsV2): + pass + + +class TestDbBasePluginIpam(test_db_base.NeutronDbPluginV2TestCase): + def setUp(self): + cfg.CONF.set_override("ipam_driver", 'internal') + super(TestDbBasePluginIpam, self).setUp() + self.tenant_id = uuidutils.generate_uuid() + self.subnet_id = uuidutils.generate_uuid() + + def _prepare_mocks(self): + mocks = { + 'driver': mock.Mock(), + 'subnet': mock.Mock(), + 'subnet_request': ipam_req.SpecificSubnetRequest( + self.tenant_id, + self.subnet_id, + '10.0.0.0/24', + '10.0.0.1', + 
[netaddr.IPRange('10.0.0.2', '10.0.0.254')]), + } + mocks['driver'].get_subnet.return_value = mocks['subnet'] + mocks['driver'].allocate_subnet.return_value = mocks['subnet'] + mocks['driver'].get_subnet_request_factory = ( + ipam_req.SubnetRequestFactory) + mocks['driver'].get_address_request_factory = ( + ipam_req.AddressRequestFactory) + mocks['subnet'].get_details.return_value = mocks['subnet_request'] + return mocks + + def _prepare_ipam(self): + mocks = self._prepare_mocks() + mocks['ipam'] = ipam_pluggable_backend.IpamPluggableBackend() + return mocks + + def _prepare_mocks_with_pool_mock(self, pool_mock): + mocks = self._prepare_mocks() + pool_mock.get_instance.return_value = mocks['driver'] + return mocks + + def _get_allocate_mock(self, auto_ip='10.0.0.2', + fail_ip='127.0.0.1', + error_message='SomeError'): + def allocate_mock(request): + if type(request) == ipam_req.SpecificAddressRequest: + if request.address == netaddr.IPAddress(fail_ip): + raise n_exc.InvalidInput(error_message=error_message) + else: + return str(request.address) + else: + return auto_ip + + return allocate_mock + + def _validate_allocate_calls(self, expected_calls, mocks): + self.assertTrue(mocks['subnet'].allocate.called) + + actual_calls = mocks['subnet'].allocate.call_args_list + self.assertEqual(len(expected_calls), len(actual_calls)) + + i = 0 + for call in expected_calls: + if call['ip_address']: + self.assertIsInstance(actual_calls[i][0][0], + ipam_req.SpecificAddressRequest) + self.assertEqual(netaddr.IPAddress(call['ip_address']), + actual_calls[i][0][0].address) + else: + self.assertIsInstance(actual_calls[i][0][0], + ipam_req.AnyAddressRequest) + i += 1 + + def _convert_to_ips(self, data): + ips = [{'ip_address': ip, + 'subnet_id': data[ip][1], + 'subnet_cidr': data[ip][0]} for ip in data] + return sorted(ips, key=lambda t: t['subnet_cidr']) + + def _gen_subnet_id(self): + return uuidutils.generate_uuid() + + def test_deallocate_single_ip(self): + mocks = 
self._prepare_ipam() + ip = '192.168.12.45' + data = {ip: ['192.168.12.0/24', self._gen_subnet_id()]} + ips = self._convert_to_ips(data) + + mocks['ipam']._ipam_deallocate_ips(mock.ANY, mocks['driver'], + mock.ANY, ips) + + mocks['driver'].get_subnet.assert_called_once_with(data[ip][1]) + mocks['subnet'].deallocate.assert_called_once_with(ip) + + def test_deallocate_multiple_ips(self): + mocks = self._prepare_ipam() + data = {'192.168.43.15': ['192.168.43.0/24', self._gen_subnet_id()], + '172.23.158.84': ['172.23.128.0/17', self._gen_subnet_id()], + '8.8.8.8': ['8.0.0.0/8', self._gen_subnet_id()]} + ips = self._convert_to_ips(data) + + mocks['ipam']._ipam_deallocate_ips(mock.ANY, mocks['driver'], + mock.ANY, ips) + + get_calls = [mock.call(data[ip][1]) for ip in data] + mocks['driver'].get_subnet.assert_has_calls(get_calls, any_order=True) + + ip_calls = [mock.call(ip) for ip in data] + mocks['subnet'].deallocate.assert_has_calls(ip_calls, any_order=True) + + def _single_ip_allocate_helper(self, mocks, ip, network, subnet): + ips = [{'subnet_cidr': network, + 'subnet_id': subnet}] + if ip: + ips[0]['ip_address'] = ip + + allocated_ips = mocks['ipam']._ipam_allocate_ips( + mock.ANY, mocks['driver'], mock.ANY, ips) + + mocks['driver'].get_subnet.assert_called_once_with(subnet) + + self.assertTrue(mocks['subnet'].allocate.called) + request = mocks['subnet'].allocate.call_args[0][0] + + return {'ips': allocated_ips, + 'request': request} + + def test_allocate_single_fixed_ip(self): + mocks = self._prepare_ipam() + ip = '192.168.15.123' + mocks['subnet'].allocate.return_value = ip + + results = self._single_ip_allocate_helper(mocks, + ip, + '192.168.15.0/24', + self._gen_subnet_id()) + + self.assertIsInstance(results['request'], + ipam_req.SpecificAddressRequest) + self.assertEqual(netaddr.IPAddress(ip), results['request'].address) + + self.assertEqual(ip, results['ips'][0]['ip_address'], + 'Should allocate the same ip as passed') + + def 
test_allocate_single_any_ip(self): + mocks = self._prepare_ipam() + network = '192.168.15.0/24' + ip = '192.168.15.83' + mocks['subnet'].allocate.return_value = ip + + results = self._single_ip_allocate_helper(mocks, '', network, + self._gen_subnet_id()) + + self.assertIsInstance(results['request'], ipam_req.AnyAddressRequest) + self.assertEqual(ip, results['ips'][0]['ip_address']) + + def test_allocate_eui64_ip(self): + mocks = self._prepare_ipam() + ip = {'subnet_id': self._gen_subnet_id(), + 'subnet_cidr': '2001:470:abcd::/64', + 'mac': '6c:62:6d:de:cf:49', + 'eui64_address': True} + eui64_ip = ipv6_utils.get_ipv6_addr_by_EUI64(ip['subnet_cidr'], + ip['mac']) + mocks['ipam']._ipam_allocate_ips(mock.ANY, mocks['driver'], + mock.ANY, [ip]) + + request = mocks['subnet'].allocate.call_args[0][0] + self.assertIsInstance(request, ipam_req.AutomaticAddressRequest) + self.assertEqual(eui64_ip, request.address) + + def test_allocate_multiple_ips(self): + mocks = self._prepare_ipam() + data = {'': ['172.23.128.0/17', self._gen_subnet_id()], + '192.168.43.15': ['192.168.43.0/24', self._gen_subnet_id()], + '8.8.8.8': ['8.0.0.0/8', self._gen_subnet_id()]} + ips = self._convert_to_ips(data) + mocks['subnet'].allocate.side_effect = self._get_allocate_mock( + auto_ip='172.23.128.94') + + mocks['ipam']._ipam_allocate_ips( + mock.ANY, mocks['driver'], mock.ANY, ips) + get_calls = [mock.call(data[ip][1]) for ip in data] + mocks['driver'].get_subnet.assert_has_calls(get_calls, any_order=True) + + self._validate_allocate_calls(ips, mocks) + + def test_allocate_multiple_ips_with_exception(self): + mocks = self._prepare_ipam() + + auto_ip = '172.23.128.94' + fail_ip = '192.168.43.15' + data = {'': ['172.23.128.0/17', self._gen_subnet_id()], + fail_ip: ['192.168.43.0/24', self._gen_subnet_id()], + '8.8.8.8': ['8.0.0.0/8', self._gen_subnet_id()]} + ips = self._convert_to_ips(data) + mocks['subnet'].allocate.side_effect = self._get_allocate_mock( + auto_ip=auto_ip, fail_ip=fail_ip) + + # 
An exception should be raised on the attempt to allocate the second ip. + # A revert action should be performed for the ips already allocated; + # in this test case only one ip should be deallocated + # and the original error should be re-raised + self.assertRaises(n_exc.InvalidInput, + mocks['ipam']._ipam_allocate_ips, + mock.ANY, + mocks['driver'], + mock.ANY, + ips) + + # get_subnet should be called only for the first two networks + get_calls = [mock.call(data[ip][1]) for ip in ['', fail_ip]] + mocks['driver'].get_subnet.assert_has_calls(get_calls, any_order=True) + + # Allocate should be called for the first two ips only + self._validate_allocate_calls(ips[:-1], mocks) + # Deallocate should be called for the first ip only + mocks['subnet'].deallocate.assert_called_once_with(auto_ip) + + @mock.patch('neutron.ipam.driver.Pool') + def test_create_subnet_over_ipam(self, pool_mock): + mocks = self._prepare_mocks_with_pool_mock(pool_mock) + cidr = '192.168.0.0/24' + allocation_pools = [{'start': '192.168.0.2', 'end': '192.168.0.254'}] + with self.subnet(allocation_pools=allocation_pools, + cidr=cidr): + pool_mock.get_instance.assert_called_once_with(None, mock.ANY) + self.assertTrue(mocks['driver'].allocate_subnet.called) + request = mocks['driver'].allocate_subnet.call_args[0][0] + self.assertIsInstance(request, ipam_req.SpecificSubnetRequest) + self.assertEqual(netaddr.IPNetwork(cidr), request.subnet_cidr) + + @mock.patch('neutron.ipam.driver.Pool') + def test_create_subnet_over_ipam_with_rollback(self, pool_mock): + mocks = self._prepare_mocks_with_pool_mock(pool_mock) + mocks['driver'].allocate_subnet.side_effect = ValueError + cidr = '10.0.2.0/24' + with self.network() as network: + self._create_subnet(self.fmt, network['network']['id'], + cidr, expected_res_status=500) + + pool_mock.get_instance.assert_called_once_with(None, mock.ANY) + self.assertTrue(mocks['driver'].allocate_subnet.called) + request = mocks['driver'].allocate_subnet.call_args[0][0] + 
self.assertIsInstance(request, ipam_req.SpecificSubnetRequest) + self.assertEqual(netaddr.IPNetwork(cidr), request.subnet_cidr) + # Verify no subnet was created for network + req = self.new_show_request('networks', network['network']['id']) + res = req.get_response(self.api) + net = self.deserialize(self.fmt, res) + self.assertEqual(0, len(net['network']['subnets'])) + + @mock.patch('neutron.ipam.driver.Pool') + def test_ipam_subnet_deallocated_if_create_fails(self, pool_mock): + mocks = self._prepare_mocks_with_pool_mock(pool_mock) + cidr = '10.0.2.0/24' + with mock.patch.object( + ipam_backend_mixin.IpamBackendMixin, '_save_subnet', + side_effect=ValueError), self.network() as network: + self._create_subnet(self.fmt, network['network']['id'], + cidr, expected_res_status=500) + pool_mock.get_instance.assert_any_call(None, mock.ANY) + self.assertEqual(2, pool_mock.get_instance.call_count) + self.assertTrue(mocks['driver'].allocate_subnet.called) + request = mocks['driver'].allocate_subnet.call_args[0][0] + self.assertIsInstance(request, ipam_req.SpecificSubnetRequest) + self.assertEqual(netaddr.IPNetwork(cidr), request.subnet_cidr) + # Verify remove ipam subnet was called + mocks['driver'].remove_subnet.assert_called_once_with( + self.subnet_id) + + @mock.patch('neutron.ipam.driver.Pool') + def test_update_subnet_over_ipam(self, pool_mock): + mocks = self._prepare_mocks_with_pool_mock(pool_mock) + cidr = '10.0.0.0/24' + allocation_pools = [{'start': '10.0.0.2', 'end': '10.0.0.254'}] + with self.subnet(allocation_pools=allocation_pools, + cidr=cidr) as subnet: + data = {'subnet': {'allocation_pools': [ + {'start': '10.0.0.10', 'end': '10.0.0.20'}, + {'start': '10.0.0.30', 'end': '10.0.0.40'}]}} + req = self.new_update_request('subnets', data, + subnet['subnet']['id']) + res = req.get_response(self.api) + self.assertEqual(200, res.status_code) + + pool_mock.get_instance.assert_any_call(None, mock.ANY) + self.assertEqual(2, pool_mock.get_instance.call_count) + 
self.assertTrue(mocks['driver'].update_subnet.called) + request = mocks['driver'].update_subnet.call_args[0][0] + self.assertIsInstance(request, ipam_req.SpecificSubnetRequest) + self.assertEqual(netaddr.IPNetwork(cidr), request.subnet_cidr) + + ip_ranges = [netaddr.IPRange(p['start'], + p['end']) for p in data['subnet']['allocation_pools']] + self.assertEqual(ip_ranges, request.allocation_pools) + + @mock.patch('neutron.ipam.driver.Pool') + def test_delete_subnet_over_ipam(self, pool_mock): + mocks = self._prepare_mocks_with_pool_mock(pool_mock) + gateway_ip = '10.0.0.1' + cidr = '10.0.0.0/24' + res = self._create_network(fmt=self.fmt, name='net', + admin_state_up=True) + network = self.deserialize(self.fmt, res) + subnet = self._make_subnet(self.fmt, network, gateway_ip, + cidr, ip_version=4) + req = self.new_delete_request('subnets', subnet['subnet']['id']) + res = req.get_response(self.api) + self.assertEqual(webob.exc.HTTPNoContent.code, res.status_int) + + pool_mock.get_instance.assert_any_call(None, mock.ANY) + self.assertEqual(2, pool_mock.get_instance.call_count) + mocks['driver'].remove_subnet.assert_called_once_with( + subnet['subnet']['id']) + + @mock.patch('neutron.ipam.driver.Pool') + def test_delete_subnet_over_ipam_with_rollback(self, pool_mock): + mocks = self._prepare_mocks_with_pool_mock(pool_mock) + mocks['driver'].remove_subnet.side_effect = ValueError + gateway_ip = '10.0.0.1' + cidr = '10.0.0.0/24' + res = self._create_network(fmt=self.fmt, name='net', + admin_state_up=True) + network = self.deserialize(self.fmt, res) + subnet = self._make_subnet(self.fmt, network, gateway_ip, + cidr, ip_version=4) + req = self.new_delete_request('subnets', subnet['subnet']['id']) + res = req.get_response(self.api) + self.assertEqual(webob.exc.HTTPServerError.code, res.status_int) + + pool_mock.get_instance.assert_any_call(None, mock.ANY) + self.assertEqual(2, pool_mock.get_instance.call_count) + mocks['driver'].remove_subnet.assert_called_once_with( + 
subnet['subnet']['id']) + # Verify the subnet was recreated after the failed ipam call + subnet_req = self.new_show_request('subnets', + subnet['subnet']['id']) + raw_res = subnet_req.get_response(self.api) + sub_res = self.deserialize(self.fmt, raw_res) + self.assertEqual(cidr, sub_res['subnet']['cidr']) + self.assertEqual(gateway_ip, + sub_res['subnet']['gateway_ip']) + + @mock.patch('neutron.ipam.driver.Pool') + def test_create_port_ipam(self, pool_mock): + mocks = self._prepare_mocks_with_pool_mock(pool_mock) + auto_ip = '10.0.0.2' + expected_calls = [{'ip_address': ''}] + mocks['subnet'].allocate.side_effect = self._get_allocate_mock( + auto_ip=auto_ip) + with self.subnet() as subnet: + with self.port(subnet=subnet) as port: + ips = port['port']['fixed_ips'] + self.assertEqual(1, len(ips)) + self.assertEqual(ips[0]['ip_address'], auto_ip) + self.assertEqual(ips[0]['subnet_id'], subnet['subnet']['id']) + self._validate_allocate_calls(expected_calls, mocks) + + @mock.patch('neutron.ipam.driver.Pool') + def test_create_port_ipam_with_rollback(self, pool_mock): + mocks = self._prepare_mocks_with_pool_mock(pool_mock) + mocks['subnet'].allocate.side_effect = ValueError + with self.network() as network: + with self.subnet(network=network): + net_id = network['network']['id'] + data = { + 'port': {'network_id': net_id, + 'tenant_id': network['network']['tenant_id']}} + port_req = self.new_create_request('ports', data) + res = port_req.get_response(self.api) + self.assertEqual(webob.exc.HTTPServerError.code, + res.status_int) + + # Verify no port is left after the failure + req = self.new_list_request('ports', self.fmt, + "network_id=%s" % net_id) + res = self.deserialize(self.fmt, req.get_response(self.api)) + self.assertEqual(0, len(res['ports'])) + + @mock.patch('neutron.ipam.driver.Pool') + def test_update_port_ipam(self, pool_mock): + mocks = self._prepare_mocks_with_pool_mock(pool_mock) + auto_ip = '10.0.0.2' + new_ip = '10.0.0.15' + expected_calls = [{'ip_address': ip} for ip in 
['', new_ip]] + mocks['subnet'].allocate.side_effect = self._get_allocate_mock( + auto_ip=auto_ip) + with self.subnet() as subnet: + with self.port(subnet=subnet) as port: + ips = port['port']['fixed_ips'] + self.assertEqual(1, len(ips)) + self.assertEqual(ips[0]['ip_address'], auto_ip) + # Update the port with a new ip + data = {"port": {"fixed_ips": [{ + 'subnet_id': subnet['subnet']['id'], + 'ip_address': new_ip}]}} + req = self.new_update_request('ports', data, + port['port']['id']) + res = self.deserialize(self.fmt, req.get_response(self.api)) + ips = res['port']['fixed_ips'] + self.assertEqual(1, len(ips)) + self.assertEqual(new_ip, ips[0]['ip_address']) + + # Allocate should be called for both ips + self._validate_allocate_calls(expected_calls, mocks) + # Deallocate should be called for the original ip only + mocks['subnet'].deallocate.assert_called_once_with(auto_ip) + + @mock.patch('neutron.ipam.driver.Pool') + def test_delete_port_ipam(self, pool_mock): + mocks = self._prepare_mocks_with_pool_mock(pool_mock) + auto_ip = '10.0.0.2' + mocks['subnet'].allocate.side_effect = self._get_allocate_mock( + auto_ip=auto_ip) + with self.subnet() as subnet: + with self.port(subnet=subnet) as port: + ips = port['port']['fixed_ips'] + self.assertEqual(1, len(ips)) + self.assertEqual(ips[0]['ip_address'], auto_ip) + req = self.new_delete_request('ports', port['port']['id']) + res = req.get_response(self.api) + + self.assertEqual(webob.exc.HTTPNoContent.code, res.status_int) + mocks['subnet'].deallocate.assert_called_once_with(auto_ip) + + def test_recreate_port_ipam(self): + ip = '10.0.0.2' + with self.subnet() as subnet: + with self.port(subnet=subnet) as port: + ips = port['port']['fixed_ips'] + self.assertEqual(1, len(ips)) + self.assertEqual(ips[0]['ip_address'], ip) + req = self.new_delete_request('ports', port['port']['id']) + res = req.get_response(self.api) + self.assertEqual(webob.exc.HTTPNoContent.code, res.status_int) + with 
self.port(subnet=subnet, fixed_ips=ips) as port: + ips = port['port']['fixed_ips'] + self.assertEqual(1, len(ips)) + self.assertEqual(ips[0]['ip_address'], ip) diff --git a/neutron/tests/unit/db/test_l3_dvr_db.py b/neutron/tests/unit/db/test_l3_dvr_db.py index 3a3335784e1..419e168fb7a 100644 --- a/neutron/tests/unit/db/test_l3_dvr_db.py +++ b/neutron/tests/unit/db/test_l3_dvr_db.py @@ -85,7 +85,7 @@ class L3DvrTestCase(testlib_api.SqlTestCase): 'distributed': True } router_db = self._create_router(router) - self.assertRaises(NotImplementedError, + self.assertRaises(exceptions.NotSupported, self.mixin._validate_router_migration, self.ctx, router_db, {'distributed': False}) @@ -244,6 +244,31 @@ class L3DvrTestCase(testlib_api.SqlTestCase): self.assertTrue(cfips.called) self.assertTrue(gvm.called) + def setup_port_has_ipv6_address(self, port): + with mock.patch.object(l3_dvr_db.l3_db.L3_NAT_db_mixin, + '_port_has_ipv6_address') as pv6: + pv6.return_value = True + result = self.mixin._port_has_ipv6_address(port) + return result, pv6 + + def test__port_has_ipv6_address_for_dvr_snat_port(self): + port = { + 'id': 'my_port_id', + 'device_owner': l3_const.DEVICE_OWNER_ROUTER_SNAT, + } + result, pv6 = self.setup_port_has_ipv6_address(port) + self.assertFalse(result) + self.assertFalse(pv6.called) + + def test__port_has_ipv6_address_for_non_snat_ports(self): + port = { + 'id': 'my_port_id', + 'device_owner': l3_const.DEVICE_OWNER_DVR_INTERFACE, + } + result, pv6 = self.setup_port_has_ipv6_address(port) + self.assertTrue(result) + self.assertTrue(pv6.called) + def test__delete_floatingip_agent_gateway_port(self): port = { 'id': 'my_port_id', @@ -263,7 +288,7 @@ class L3DvrTestCase(testlib_api.SqlTestCase): plugin.get_ports.assert_called_with(self.ctx, filters={ 'network_id': ['network_id'], 'device_owner': [l3_const.DEVICE_OWNER_AGENT_GW]}) - plugin._delete_port.assert_called_with(self.ctx, 'my_port_id') + plugin.ipam.delete_port.assert_called_with(self.ctx, 'my_port_id') def 
_delete_floatingip_test_setup(self, floatingip): fip_id = floatingip['id'] diff --git a/neutron/tests/unit/db/test_migration.py b/neutron/tests/unit/db/test_migration.py index f795bafb080..955605aadca 100644 --- a/neutron/tests/unit/db/test_migration.py +++ b/neutron/tests/unit/db/test_migration.py @@ -22,6 +22,10 @@ from neutron.db.migration import cli from neutron.tests import base +class FakeConfig(object): + service = '' + + class TestDbMigration(base.BaseTestCase): def setUp(self): @@ -75,12 +79,13 @@ class TestCli(base.BaseTestCase): self.mock_alembic_err = mock.patch('alembic.util.err').start() self.mock_alembic_err.side_effect = SystemExit - def _main_test_helper(self, argv, func_name, exp_args=(), exp_kwargs={}): + def _main_test_helper(self, argv, func_name, exp_args=(), exp_kwargs=[{}]): with mock.patch.object(sys, 'argv', argv), mock.patch.object( cli, 'run_sanity_checks'): cli.main() self.do_alembic_cmd.assert_has_calls( - [mock.call(mock.ANY, func_name, *exp_args, **exp_kwargs)] + [mock.call(mock.ANY, func_name, *exp_args, **kwargs) + for kwargs in exp_kwargs] ) def test_stamp(self): @@ -88,14 +93,14 @@ class TestCli(base.BaseTestCase): ['prog', 'stamp', 'foo'], 'stamp', ('foo',), - {'sql': False} + [{'sql': False}] ) self._main_test_helper( ['prog', 'stamp', 'foo', '--sql'], 'stamp', ('foo',), - {'sql': True} + [{'sql': True}] ) def test_current(self): @@ -105,49 +110,72 @@ class TestCli(base.BaseTestCase): self._main_test_helper(['prog', 'history'], 'history') def test_check_migration(self): - with mock.patch.object(cli, 'validate_head_file') as validate: + with mock.patch.object(cli, 'validate_heads_file') as validate: self._main_test_helper(['prog', 'check_migration'], 'branches') validate.assert_called_once_with(mock.ANY) - def test_database_sync_revision(self): - with mock.patch.object(cli, 'update_head_file') as update: + def _test_database_sync_revision(self, separate_branches=True): + with mock.patch.object(cli, 'update_heads_file') as 
update: + fake_config = FakeConfig() + if separate_branches: + expected_kwargs = [ + {'message': 'message', 'sql': False, 'autogenerate': True, + 'version_path': + cli._get_version_branch_path(fake_config, branch), + 'head': cli._get_branch_head(branch)} + for branch in cli.MIGRATION_BRANCHES] + else: + expected_kwargs = [{ + 'message': 'message', 'sql': False, 'autogenerate': True, + }] self._main_test_helper( ['prog', 'revision', '--autogenerate', '-m', 'message'], 'revision', - (), - {'message': 'message', 'sql': False, 'autogenerate': True} + (), expected_kwargs ) update.assert_called_once_with(mock.ANY) - update.reset_mock() + + for kwarg in expected_kwargs: + kwarg['autogenerate'] = False + kwarg['sql'] = True + self._main_test_helper( ['prog', 'revision', '--sql', '-m', 'message'], 'revision', - (), - {'message': 'message', 'sql': True, 'autogenerate': False} + (), expected_kwargs ) update.assert_called_once_with(mock.ANY) + def test_database_sync_revision(self): + self._test_database_sync_revision() + + @mock.patch.object(cli, '_use_separate_migration_branches', + return_value=False) + def test_database_sync_revision_no_branches(self, *args): + # Test that old branchless approach is still supported + self._test_database_sync_revision(separate_branches=False) + def test_upgrade(self): self._main_test_helper( ['prog', 'upgrade', '--sql', 'head'], 'upgrade', - ('head',), - {'sql': True} + ('heads',), + [{'sql': True}] ) self._main_test_helper( ['prog', 'upgrade', '--delta', '3'], 'upgrade', ('+3',), - {'sql': False} + [{'sql': False}] ) self._main_test_helper( ['prog', 'upgrade', 'kilo', '--delta', '3'], 'upgrade', ('kilo+3',), - {'sql': False} + [{'sql': False}] ) def assert_command_fails(self, command): @@ -169,60 +197,92 @@ class TestCli(base.BaseTestCase): def test_upgrade_rejects_delta_with_relative_revision(self): self.assert_command_fails(['prog', 'upgrade', '+2', '--delta', '3']) - def _test_validate_head_file_helper(self, heads, file_content=None): + 
def _test_validate_heads_file_helper(self, heads, file_heads=None, + branchless=False): + if file_heads is None: + file_heads = [] + fake_config = FakeConfig() with mock.patch('alembic.script.ScriptDirectory.from_config') as fc: fc.return_value.get_heads.return_value = heads - fc.return_value.get_current_head.return_value = heads[0] with mock.patch('six.moves.builtins.open') as mock_open: mock_open.return_value.__enter__ = lambda s: s mock_open.return_value.__exit__ = mock.Mock() - mock_open.return_value.read.return_value = file_content + mock_open.return_value.read.return_value = ( + '\n'.join(file_heads)) with mock.patch('os.path.isfile') as is_file: - is_file.return_value = file_content is not None + is_file.return_value = bool(file_heads) - if file_content in heads: - cli.validate_head_file(mock.sentinel.config) + if all(head in file_heads for head in heads): + cli.validate_heads_file(fake_config) else: self.assertRaises( SystemExit, - cli.validate_head_file, - mock.sentinel.config + cli.validate_heads_file, + fake_config ) self.mock_alembic_err.assert_called_once_with(mock.ANY) - fc.assert_called_once_with(mock.sentinel.config) + if branchless: + mock_open.assert_called_with( + cli._get_head_file_path(fake_config)) + else: + mock_open.assert_called_with( + cli._get_heads_file_path(fake_config)) + fc.assert_called_once_with(fake_config) - def test_validate_head_file_multiple_heads(self): - self._test_validate_head_file_helper(['a', 'b']) + def test_validate_heads_file_multiple_heads(self): + self._test_validate_heads_file_helper(['a', 'b']) - def test_validate_head_file_missing_file(self): - self._test_validate_head_file_helper(['a']) + def test_validate_heads_file_missing_file(self): + self._test_validate_heads_file_helper(['a']) - def test_validate_head_file_wrong_contents(self): - self._test_validate_head_file_helper(['a'], 'b') + def test_validate_heads_file_wrong_contents(self): + self._test_validate_heads_file_helper(['a'], ['b']) - def 
test_validate_head_success(self): - self._test_validate_head_file_helper(['a'], 'a') + def test_validate_heads_success(self): + self._test_validate_heads_file_helper(['a'], ['a']) - def test_update_head_file_multiple_heads(self): + @mock.patch.object(cli, '_use_separate_migration_branches', + return_value=False) + def test_validate_heads_file_branchless_failure(self, *args): + self._test_validate_heads_file_helper(['a'], ['b'], branchless=True) + + @mock.patch.object(cli, '_use_separate_migration_branches', + return_value=False) + def test_validate_heads_file_branchless_success(self, *args): + self._test_validate_heads_file_helper(['a'], ['a'], branchless=True) + + def test_update_heads_file_two_heads(self): with mock.patch('alembic.script.ScriptDirectory.from_config') as fc: - fc.return_value.get_heads.return_value = ['a', 'b'] - self.assertRaises( - SystemExit, - cli.update_head_file, - mock.sentinel.config - ) - self.mock_alembic_err.assert_called_once_with(mock.ANY) - fc.assert_called_once_with(mock.sentinel.config) - - def test_update_head_file_success(self): - with mock.patch('alembic.script.ScriptDirectory.from_config') as fc: - fc.return_value.get_heads.return_value = ['a'] - fc.return_value.get_current_head.return_value = 'a' + heads = ('b', 'a') + fc.return_value.get_heads.return_value = heads with mock.patch('six.moves.builtins.open') as mock_open: mock_open.return_value.__enter__ = lambda s: s mock_open.return_value.__exit__ = mock.Mock() - cli.update_head_file(mock.sentinel.config) - mock_open.return_value.write.assert_called_once_with('a') - fc.assert_called_once_with(mock.sentinel.config) + cli.update_heads_file(mock.sentinel.config) + mock_open.return_value.write.assert_called_once_with( + '\n'.join(sorted(heads))) + + def test_update_heads_file_excessive_heads_negative(self): + with mock.patch('alembic.script.ScriptDirectory.from_config') as fc: + heads = ('b', 'a', 'c', 'kilo') + fc.return_value.get_heads.return_value = heads + self.assertRaises( 
+ SystemExit, + cli.update_heads_file, + mock.sentinel.config + ) + self.mock_alembic_err.assert_called_once_with(mock.ANY) + + def test_update_heads_file_success(self): + with mock.patch('alembic.script.ScriptDirectory.from_config') as fc: + heads = ('a', 'b') + fc.return_value.get_heads.return_value = heads + with mock.patch('six.moves.builtins.open') as mock_open: + mock_open.return_value.__enter__ = lambda s: s + mock_open.return_value.__exit__ = mock.Mock() + + cli.update_heads_file(mock.sentinel.config) + mock_open.return_value.write.assert_called_once_with( + '\n'.join(heads)) diff --git a/neutron/tests/unit/extensions/extensionattribute.py b/neutron/tests/unit/extensions/extensionattribute.py index f289c8b0625..dcf2c8c2385 100644 --- a/neutron/tests/unit/extensions/extensionattribute.py +++ b/neutron/tests/unit/extensions/extensionattribute.py @@ -18,7 +18,7 @@ import abc from neutron.api import extensions from neutron.api.v2 import base from neutron import manager -from neutron import quota +from neutron.quota import resource_registry # Attribute Map @@ -69,7 +69,7 @@ class Extensionattribute(extensions.ExtensionDescriptor): collection_name = resource_name + "s" params = RESOURCE_ATTRIBUTE_MAP.get(collection_name, dict()) - quota.QUOTAS.register_resource_by_name(resource_name) + resource_registry.register_resource_by_name(resource_name) controller = base.create_resource(collection_name, resource_name, diff --git a/neutron/tests/unit/extensions/test_agent.py b/neutron/tests/unit/extensions/test_agent.py index ff805b469e3..546b18467f0 100644 --- a/neutron/tests/unit/extensions/test_agent.py +++ b/neutron/tests/unit/extensions/test_agent.py @@ -14,11 +14,11 @@ # under the License. 
import copy +from datetime import datetime import time from oslo_config import cfg from oslo_log import log as logging -from oslo_utils import timeutils from oslo_utils import uuidutils from webob import exc @@ -108,10 +108,10 @@ class AgentDBTestMixIn(object): callback = agents_db.AgentExtRpcCallback() callback.report_state(self.adminContext, agent_state={'agent_state': lbaas_hosta}, - time=timeutils.strtime()) + time=datetime.utcnow().isoformat()) callback.report_state(self.adminContext, agent_state={'agent_state': lbaas_hostb}, - time=timeutils.strtime()) + time=datetime.utcnow().isoformat()) res += [lbaas_hosta, lbaas_hostb] return res diff --git a/neutron/tests/unit/extensions/test_flavors.py b/neutron/tests/unit/extensions/test_flavors.py new file mode 100644 index 00000000000..8de2cf5cacc --- /dev/null +++ b/neutron/tests/unit/extensions/test_flavors.py @@ -0,0 +1,459 @@ +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. 
+# + + +import copy +import fixtures +import mock + +from oslo_config import cfg +from oslo_utils import uuidutils + +from neutron import context +from neutron.db import api as dbapi +from neutron.db import flavors_db +from neutron.extensions import flavors +from neutron import manager +from neutron.plugins.common import constants +from neutron.tests import base +from neutron.tests.unit.api.v2 import test_base +from neutron.tests.unit.db import test_db_base_plugin_v2 +from neutron.tests.unit.extensions import base as extension + +_uuid = uuidutils.generate_uuid +_get_path = test_base._get_path + + +class FlavorExtensionTestCase(extension.ExtensionTestCase): + + def setUp(self): + super(FlavorExtensionTestCase, self).setUp() + self._setUpExtension( + 'neutron.db.flavors_db.FlavorManager', + constants.FLAVORS, flavors.RESOURCE_ATTRIBUTE_MAP, + flavors.Flavors, '', supported_extension_aliases='flavors') + + def test_create_flavor(self): + tenant_id = uuidutils.generate_uuid() + data = {'flavor': {'name': 'GOLD', + 'service_type': constants.LOADBALANCER, + 'description': 'the best flavor', + 'tenant_id': tenant_id, + 'enabled': True}} + + expected = copy.deepcopy(data) + expected['flavor']['service_profiles'] = [] + + instance = self.plugin.return_value + instance.create_flavor.return_value = expected['flavor'] + res = self.api.post(_get_path('flavors', fmt=self.fmt), + self.serialize(data), + content_type='application/%s' % self.fmt) + + instance.create_flavor.assert_called_with(mock.ANY, + flavor=expected) + res = self.deserialize(res) + self.assertIn('flavor', res) + self.assertEqual(expected, res) + + def test_update_flavor(self): + flavor_id = 'fake_id' + data = {'flavor': {'name': 'GOLD', + 'description': 'the best flavor', + 'enabled': True}} + expected = copy.copy(data) + expected['flavor']['service_profiles'] = [] + + instance = self.plugin.return_value + instance.update_flavor.return_value = expected['flavor'] + res = self.api.put(_get_path('flavors', 
id=flavor_id, fmt=self.fmt), + self.serialize(data), + content_type='application/%s' % self.fmt) + + instance.update_flavor.assert_called_with(mock.ANY, + flavor_id, + flavor=expected) + res = self.deserialize(res) + self.assertIn('flavor', res) + self.assertEqual(expected, res) + + def test_delete_flavor(self): + flavor_id = 'fake_id' + instance = self.plugin.return_value + self.api.delete(_get_path('flavors', id=flavor_id, fmt=self.fmt), + content_type='application/%s' % self.fmt) + + instance.delete_flavor.assert_called_with(mock.ANY, + flavor_id) + + def test_show_flavor(self): + flavor_id = 'fake_id' + expected = {'flavor': {'id': flavor_id, + 'name': 'GOLD', + 'description': 'the best flavor', + 'enabled': True, + 'service_profiles': ['profile-1']}} + instance = self.plugin.return_value + instance.get_flavor.return_value = expected['flavor'] + res = self.api.get(_get_path('flavors', id=flavor_id, fmt=self.fmt)) + instance.get_flavor.assert_called_with(mock.ANY, + flavor_id, + fields=mock.ANY) + res = self.deserialize(res) + self.assertEqual(expected, res) + + def test_get_flavors(self): + data = {'flavors': [{'id': 'id1', + 'name': 'GOLD', + 'description': 'the best flavor', + 'enabled': True, + 'service_profiles': ['profile-1']}, + {'id': 'id2', + 'name': 'GOLD', + 'description': 'the best flavor', + 'enabled': True, + 'service_profiles': ['profile-2', 'profile-1']}]} + instance = self.plugin.return_value + instance.get_flavors.return_value = data['flavors'] + res = self.api.get(_get_path('flavors', fmt=self.fmt)) + instance.get_flavors.assert_called_with(mock.ANY, + fields=mock.ANY, + filters=mock.ANY) + res = self.deserialize(res) + self.assertEqual(data, res) + + def test_create_service_profile(self): + tenant_id = uuidutils.generate_uuid() + expected = {'service_profile': {'description': 'the best sp', + 'driver': '', + 'tenant_id': tenant_id, + 'enabled': True, + 'metainfo': '{"data": "value"}'}} + + instance = self.plugin.return_value + 
instance.create_service_profile.return_value = ( + expected['service_profile']) + res = self.api.post(_get_path('service_profiles', fmt=self.fmt), + self.serialize(expected), + content_type='application/%s' % self.fmt) + instance.create_service_profile.assert_called_with( + mock.ANY, + service_profile=expected) + res = self.deserialize(res) + self.assertIn('service_profile', res) + self.assertEqual(expected, res) + + def test_update_service_profile(self): + sp_id = "fake_id" + expected = {'service_profile': {'description': 'the best sp', + 'enabled': False, + 'metainfo': '{"data1": "value3"}'}} + + instance = self.plugin.return_value + instance.update_service_profile.return_value = ( + expected['service_profile']) + res = self.api.put(_get_path('service_profiles', + id=sp_id, fmt=self.fmt), + self.serialize(expected), + content_type='application/%s' % self.fmt) + + instance.update_service_profile.assert_called_with( + mock.ANY, + sp_id, + service_profile=expected) + res = self.deserialize(res) + self.assertIn('service_profile', res) + self.assertEqual(expected, res) + + def test_delete_service_profile(self): + sp_id = 'fake_id' + instance = self.plugin.return_value + self.api.delete(_get_path('service_profiles', id=sp_id, fmt=self.fmt), + content_type='application/%s' % self.fmt) + instance.delete_service_profile.assert_called_with(mock.ANY, + sp_id) + + def test_show_service_profile(self): + sp_id = 'fake_id' + expected = {'service_profile': {'id': 'id1', + 'driver': 'entrypoint1', + 'description': 'desc', + 'metainfo': '{}', + 'enabled': True}} + instance = self.plugin.return_value + instance.get_service_profile.return_value = ( + expected['service_profile']) + res = self.api.get(_get_path('service_profiles', + id=sp_id, fmt=self.fmt)) + instance.get_service_profile.assert_called_with(mock.ANY, + sp_id, + fields=mock.ANY) + res = self.deserialize(res) + self.assertEqual(expected, res) + + def test_get_service_profiles(self): + expected = {'service_profiles': 
[{'id': 'id1', + 'driver': 'entrypoint1', + 'description': 'desc', + 'metainfo': '{}', + 'enabled': True}, + {'id': 'id2', + 'driver': 'entrypoint2', + 'description': 'desc', + 'metainfo': '{}', + 'enabled': True}]} + instance = self.plugin.return_value + instance.get_service_profiles.return_value = ( + expected['service_profiles']) + res = self.api.get(_get_path('service_profiles', fmt=self.fmt)) + instance.get_service_profiles.assert_called_with(mock.ANY, + fields=mock.ANY, + filters=mock.ANY) + res = self.deserialize(res) + self.assertEqual(expected, res) + + def test_associate_service_profile_with_flavor(self): + expected = {'service_profile': {'id': _uuid()}} + instance = self.plugin.return_value + instance.create_flavor_service_profile.return_value = ( + expected['service_profile']) + res = self.api.post('/flavors/fl_id/service_profiles', + self.serialize(expected), + content_type='application/%s' % self.fmt) + instance.create_flavor_service_profile.assert_called_with( + mock.ANY, service_profile=expected, flavor_id='fl_id') + res = self.deserialize(res) + self.assertEqual(expected, res) + + def test_disassociate_service_profile_with_flavor(self): + instance = self.plugin.return_value + instance.delete_flavor_service_profile.return_value = None + self.api.delete('/flavors/fl_id/service_profiles/%s' % 'fake_spid', + content_type='application/%s' % self.fmt) + instance.delete_flavor_service_profile.assert_called_with( + mock.ANY, + 'fake_spid', + flavor_id='fl_id') + + +class DummyCorePlugin(object): + pass + + +class DummyServicePlugin(object): + + def driver_loaded(self, driver, service_profile): + pass + + def get_plugin_type(self): + return constants.DUMMY + + def get_plugin_description(self): + return "Dummy service plugin, aware of flavors" + + +class DummyServiceDriver(object): + + @staticmethod + def get_service_type(): + return constants.DUMMY + + def __init__(self, plugin): + pass + + +class 
FlavorManagerTestCase(test_db_base_plugin_v2.NeutronDbPluginV2TestCase, + base.PluginFixture): + def setUp(self): + super(FlavorManagerTestCase, self).setUp() + + self.config_parse() + cfg.CONF.set_override( + 'core_plugin', + 'neutron.tests.unit.extensions.test_flavors.DummyCorePlugin') + cfg.CONF.set_override( + 'service_plugins', + ['neutron.tests.unit.extensions.test_flavors.DummyServicePlugin']) + + self.useFixture( + fixtures.MonkeyPatch('neutron.manager.NeutronManager._instance')) + + self.plugin = flavors_db.FlavorManager( + manager.NeutronManager().get_instance()) + self.ctx = context.get_admin_context() + dbapi.get_engine() + + def _create_flavor(self, description=None): + flavor = {'flavor': {'name': 'GOLD', + 'service_type': constants.LOADBALANCER, + 'description': description or 'the best flavor', + 'enabled': True}} + return self.plugin.create_flavor(self.ctx, flavor), flavor + + def test_create_flavor(self): + self._create_flavor() + res = self.ctx.session.query(flavors_db.Flavor).all() + self.assertEqual(1, len(res)) + self.assertEqual('GOLD', res[0]['name']) + + def test_update_flavor(self): + fl, flavor = self._create_flavor() + flavor = {'flavor': {'name': 'Silver', + 'enabled': False}} + self.plugin.update_flavor(self.ctx, fl['id'], flavor) + res = (self.ctx.session.query(flavors_db.Flavor). 
+               filter_by(id=fl['id']).one())
+        self.assertEqual('Silver', res['name'])
+        self.assertFalse(res['enabled'])
+
+    def test_delete_flavor(self):
+        fl, data = self._create_flavor()
+        self.plugin.delete_flavor(self.ctx, fl['id'])
+        res = (self.ctx.session.query(flavors_db.Flavor).all())
+        self.assertFalse(res)
+
+    def test_show_flavor(self):
+        fl, data = self._create_flavor()
+        show_fl = self.plugin.get_flavor(self.ctx, fl['id'])
+        self.assertEqual(fl, show_fl)
+
+    def test_get_flavors(self):
+        fl, flavor = self._create_flavor()
+        flavor['flavor']['name'] = 'SILVER'
+        self.plugin.create_flavor(self.ctx, flavor)
+        show_fl = self.plugin.get_flavors(self.ctx)
+        self.assertEqual(2, len(show_fl))
+
+    def _create_service_profile(self, description=None):
+        data = {'service_profile':
+                {'description': description or 'the best sp',
+                 'driver':
+                     ('neutron.tests.unit.extensions.test_flavors.'
+                      'DummyServiceDriver'),
+                 'enabled': True,
+                 'metainfo': '{"data": "value"}'}}
+        sp = self.plugin.create_service_profile(self.ctx,
+                                                data)
+        return sp, data
+
+    def test_create_service_profile(self):
+        sp, data = self._create_service_profile()
+        res = (self.ctx.session.query(flavors_db.ServiceProfile).
+               filter_by(id=sp['id']).one())
+        self.assertEqual(data['service_profile']['driver'], res['driver'])
+        self.assertEqual(data['service_profile']['metainfo'], res['metainfo'])
+
+    def test_update_service_profile(self):
+        sp, data = self._create_service_profile()
+        data['service_profile']['metainfo'] = '{"data": "value1"}'
+        sp = self.plugin.update_service_profile(self.ctx, sp['id'],
+                                                data)
+        res = (self.ctx.session.query(flavors_db.ServiceProfile).
+ filter_by(id=sp['id']).one()) + self.assertEqual(data['service_profile']['metainfo'], res['metainfo']) + + def test_delete_service_profile(self): + sp, data = self._create_service_profile() + self.plugin.delete_service_profile(self.ctx, sp['id']) + res = self.ctx.session.query(flavors_db.ServiceProfile).all() + self.assertFalse(res) + + def test_show_service_profile(self): + sp, data = self._create_service_profile() + sp_show = self.plugin.get_service_profile(self.ctx, sp['id']) + self.assertEqual(sp, sp_show) + + def test_get_service_profiles(self): + self._create_service_profile() + self._create_service_profile(description='another sp') + self.assertEqual(2, len(self.plugin.get_service_profiles(self.ctx))) + + def test_associate_service_profile_with_flavor(self): + sp, data = self._create_service_profile() + fl, data = self._create_flavor() + self.plugin.create_flavor_service_profile( + self.ctx, + {'service_profile': {'id': sp['id']}}, + fl['id']) + binding = ( + self.ctx.session.query(flavors_db.FlavorServiceProfileBinding). + first()) + self.assertEqual(fl['id'], binding['flavor_id']) + self.assertEqual(sp['id'], binding['service_profile_id']) + + res = self.plugin.get_flavor(self.ctx, fl['id']) + self.assertEqual(1, len(res['service_profiles'])) + self.assertEqual(sp['id'], res['service_profiles'][0]) + + res = self.plugin.get_service_profile(self.ctx, sp['id']) + self.assertEqual(1, len(res['flavors'])) + self.assertEqual(fl['id'], res['flavors'][0]) + + def test_autodelete_flavor_associations(self): + sp, data = self._create_service_profile() + fl, data = self._create_flavor() + self.plugin.create_flavor_service_profile( + self.ctx, + {'service_profile': {'id': sp['id']}}, + fl['id']) + self.plugin.delete_flavor(self.ctx, fl['id']) + binding = ( + self.ctx.session.query(flavors_db.FlavorServiceProfileBinding). 
+ first()) + self.assertIsNone(binding) + + def test_associate_service_profile_with_flavor_exists(self): + sp, data = self._create_service_profile() + fl, data = self._create_flavor() + self.plugin.create_flavor_service_profile( + self.ctx, + {'service_profile': {'id': sp['id']}}, + fl['id']) + self.assertRaises(flavors_db.FlavorServiceProfileBindingExists, + self.plugin.create_flavor_service_profile, + self.ctx, + {'service_profile': {'id': sp['id']}}, + fl['id']) + + def test_disassociate_service_profile_with_flavor(self): + sp, data = self._create_service_profile() + fl, data = self._create_flavor() + self.plugin.create_flavor_service_profile( + self.ctx, + {'service_profile': {'id': sp['id']}}, + fl['id']) + self.plugin.delete_flavor_service_profile( + self.ctx, sp['id'], fl['id']) + binding = ( + self.ctx.session.query(flavors_db.FlavorServiceProfileBinding). + first()) + self.assertIsNone(binding) + + self.assertRaises( + flavors_db.FlavorServiceProfileBindingNotFound, + self.plugin.delete_flavor_service_profile, + self.ctx, sp['id'], fl['id']) + + def test_delete_service_profile_in_use(self): + sp, data = self._create_service_profile() + fl, data = self._create_flavor() + self.plugin.create_flavor_service_profile( + self.ctx, + {'service_profile': {'id': sp['id']}}, + fl['id']) + self.assertRaises( + flavors_db.ServiceProfileInUse, + self.plugin.delete_service_profile, + self.ctx, + sp['id']) diff --git a/neutron/tests/unit/extensions/test_l3.py b/neutron/tests/unit/extensions/test_l3.py index 5eaa250884a..143c34869e7 100644 --- a/neutron/tests/unit/extensions/test_l3.py +++ b/neutron/tests/unit/extensions/test_l3.py @@ -2383,6 +2383,51 @@ class L3NatTestCaseBase(L3NatTestCaseMixin): result = plugin.create_router(context.Context('', 'foo'), router_req) self.assertEqual(result['id'], router_req['router']['id']) + def test_create_floatingip_ipv6_only_network_returns_400(self): + with self.subnet(cidr="2001:db8::/48", ip_version=6) as public_sub: + 
self._set_net_external(public_sub['subnet']['network_id']) + res = self._create_floatingip( + self.fmt, + public_sub['subnet']['network_id']) + self.assertEqual(res.status_int, exc.HTTPBadRequest.code) + + def test_create_floatingip_ipv6_and_ipv4_network_creates_ipv4(self): + with self.network() as n,\ + self.subnet(cidr="2001:db8::/48", ip_version=6, network=n),\ + self.subnet(cidr="192.168.1.0/24", ip_version=4, network=n): + self._set_net_external(n['network']['id']) + fip = self._make_floatingip(self.fmt, n['network']['id']) + self.assertEqual(fip['floatingip']['floating_ip_address'], + '192.168.1.2') + + def test_create_floatingip_with_assoc_to_ipv6_subnet(self): + with self.subnet() as public_sub: + self._set_net_external(public_sub['subnet']['network_id']) + with self.subnet(cidr="2001:db8::/48", + ip_version=6) as private_sub: + with self.port(subnet=private_sub) as private_port: + res = self._create_floatingip( + self.fmt, + public_sub['subnet']['network_id'], + port_id=private_port['port']['id']) + self.assertEqual(res.status_int, exc.HTTPBadRequest.code) + + def test_create_floatingip_with_assoc_to_ipv4_and_ipv6_port(self): + with self.network() as n,\ + self.subnet(cidr='10.0.0.0/24', network=n) as s4,\ + self.subnet(cidr='2001:db8::/64', ip_version=6, network=n),\ + self.port(subnet=s4) as p: + self.assertEqual(len(p['port']['fixed_ips']), 2) + ipv4_address = next(i['ip_address'] for i in + p['port']['fixed_ips'] if + netaddr.IPAddress(i['ip_address']).version == 4) + with self.floatingip_with_assoc(port_id=p['port']['id']) as fip: + self.assertEqual(fip['floatingip']['fixed_ip_address'], + ipv4_address) + floating_ip = netaddr.IPAddress( + fip['floatingip']['floating_ip_address']) + self.assertEqual(floating_ip.version, 4) + class L3AgentDbTestCaseBase(L3NatTestCaseMixin): diff --git a/neutron/tests/unit/extensions/test_portsecurity.py b/neutron/tests/unit/extensions/test_portsecurity.py index 42d0c340cca..76a269839ec 100644 --- 
a/neutron/tests/unit/extensions/test_portsecurity.py +++ b/neutron/tests/unit/extensions/test_portsecurity.py @@ -23,6 +23,7 @@ from neutron.db import securitygroups_db from neutron.extensions import portsecurity as psec from neutron.extensions import securitygroup as ext_sg from neutron import manager +from neutron.plugins.ml2.extensions import port_security from neutron.tests.unit.db import test_db_base_plugin_v2 from neutron.tests.unit.extensions import test_securitygroup @@ -399,3 +400,15 @@ class TestPortSecurity(PortSecurityDBTestCase): '', 'not_network_owner') res = req.get_response(self.api) self.assertEqual(res.status_int, exc.HTTPForbidden.code) + + def test_extend_port_dict_no_port_security(self): + """Test _extend_port_security_dict won't crash + if port_security item is None + """ + for db_data in ({'port_security': None, 'name': 'net1'}, {}): + response_data = {} + + driver = port_security.PortSecurityExtensionDriver() + driver._extend_port_security_dict(response_data, db_data) + + self.assertTrue(response_data[psec.PORTSECURITY]) diff --git a/neutron/tests/unit/extensions/test_quotasv2.py b/neutron/tests/unit/extensions/test_quotasv2.py index 6f8fd6b0a2a..e0780e1ee78 100644 --- a/neutron/tests/unit/extensions/test_quotasv2.py +++ b/neutron/tests/unit/extensions/test_quotasv2.py @@ -27,8 +27,9 @@ from neutron.common import config from neutron.common import constants from neutron.common import exceptions from neutron import context -from neutron.db import quota_db +from neutron.db.quota import driver from neutron import quota +from neutron.quota import resource_registry from neutron.tests import base from neutron.tests import tools from neutron.tests.unit.api.v2 import test_base @@ -64,7 +65,7 @@ class QuotaExtensionTestCase(testlib_api.WebTestCase): self.plugin.return_value.supported_extension_aliases = ['quotas'] # QUOTAS will register the items in conf when starting # extra1 here is added later, so have to do it manually - 
quota.QUOTAS.register_resource_by_name('extra1') + resource_registry.register_resource_by_name('extra1') ext_mgr = extensions.PluginAwareExtensionManager.get_instance() app = config.load_paste_app('extensions_test_app') ext_middleware = extensions.ExtensionMiddleware(app, ext_mgr=ext_mgr) @@ -95,7 +96,7 @@ class QuotaExtensionDbTestCase(QuotaExtensionTestCase): def setUp(self): cfg.CONF.set_override( 'quota_driver', - 'neutron.db.quota_db.DbQuotaDriver', + 'neutron.db.quota.driver.DbQuotaDriver', group='QUOTAS') super(QuotaExtensionDbTestCase, self).setUp() @@ -404,25 +405,25 @@ class QuotaExtensionCfgTestCase(QuotaExtensionTestCase): class TestDbQuotaDriver(base.BaseTestCase): - """Test for neutron.db.quota_db.DbQuotaDriver.""" + """Test for neutron.db.quota.driver.DbQuotaDriver.""" def test_get_tenant_quotas_arg(self): - """Call neutron.db.quota_db.DbQuotaDriver._get_quotas.""" + """Call neutron.db.quota.driver.DbQuotaDriver._get_quotas.""" - driver = quota_db.DbQuotaDriver() + quota_driver = driver.DbQuotaDriver() ctx = context.Context('', 'bar') foo_quotas = {'network': 5} default_quotas = {'network': 10} target_tenant = 'foo' - with mock.patch.object(quota_db.DbQuotaDriver, + with mock.patch.object(driver.DbQuotaDriver, 'get_tenant_quotas', return_value=foo_quotas) as get_tenant_quotas: - quotas = driver._get_quotas(ctx, - target_tenant, - default_quotas) + quotas = quota_driver._get_quotas(ctx, + target_tenant, + default_quotas) self.assertEqual(quotas, foo_quotas) get_tenant_quotas.assert_called_once_with(ctx, @@ -441,17 +442,17 @@ class TestQuotaDriverLoad(base.BaseTestCase): cfg.CONF.set_override('quota_driver', cfg_driver, group='QUOTAS') with mock.patch.dict(sys.modules, {}): if (not with_quota_db_module and - 'neutron.db.quota_db' in sys.modules): - del sys.modules['neutron.db.quota_db'] + 'neutron.db.quota.driver' in sys.modules): + del sys.modules['neutron.db.quota.driver'] driver = quota.QUOTAS.get_driver() self.assertEqual(loaded_driver, 
driver.__class__.__name__) def test_quota_db_driver_with_quotas_table(self): - self._test_quota_driver('neutron.db.quota_db.DbQuotaDriver', + self._test_quota_driver('neutron.db.quota.driver.DbQuotaDriver', 'DbQuotaDriver', True) def test_quota_db_driver_fallback_conf_driver(self): - self._test_quota_driver('neutron.db.quota_db.DbQuotaDriver', + self._test_quota_driver('neutron.db.quota.driver.DbQuotaDriver', 'ConfDriver', False) def test_quota_conf_driver(self): diff --git a/neutron/tests/unit/ipam/drivers/neutrondb_ipam/test_db_api.py b/neutron/tests/unit/ipam/drivers/neutrondb_ipam/test_db_api.py index 5680018159f..645a09564ad 100644 --- a/neutron/tests/unit/ipam/drivers/neutrondb_ipam/test_db_api.py +++ b/neutron/tests/unit/ipam/drivers/neutrondb_ipam/test_db_api.py @@ -43,12 +43,18 @@ class TestIpamSubnetManager(testlib_api.SqlTestCase): id=self.ipam_subnet_id).all() self.assertEqual(1, len(subnets)) - def test_associate_neutron_id(self): - self.subnet_manager.associate_neutron_id(self.ctx.session, - 'test-id') - subnet = self.ctx.session.query(db_models.IpamSubnet).filter_by( - id=self.ipam_subnet_id).first() - self.assertEqual('test-id', subnet['neutron_subnet_id']) + def test_remove(self): + count = db_api.IpamSubnetManager.delete(self.ctx.session, + self.neutron_subnet_id) + self.assertEqual(1, count) + subnets = self.ctx.session.query(db_models.IpamSubnet).filter_by( + id=self.ipam_subnet_id).all() + self.assertEqual(0, len(subnets)) + + def test_remove_non_existent_subnet(self): + count = db_api.IpamSubnetManager.delete(self.ctx.session, + 'non-existent') + self.assertEqual(0, count) def _create_pools(self, pools): db_pools = [] diff --git a/neutron/tests/unit/ipam/drivers/neutrondb_ipam/test_driver.py b/neutron/tests/unit/ipam/drivers/neutrondb_ipam/test_driver.py index 53c511e19d3..5a3f6d6e9cb 100644 --- a/neutron/tests/unit/ipam/drivers/neutrondb_ipam/test_driver.py +++ b/neutron/tests/unit/ipam/drivers/neutrondb_ipam/test_driver.py @@ -144,8 +144,7 
@@ class TestNeutronDbIpamPool(testlib_api.SqlTestCase, def test_update_subnet_pools(self): cidr = '10.0.0.0/24' subnet, subnet_req = self._prepare_specific_subnet_request(cidr) - ipam_subnet = self.ipam_pool.allocate_subnet(subnet_req) - ipam_subnet.associate_neutron_subnet(subnet['id']) + self.ipam_pool.allocate_subnet(subnet_req) allocation_pools = [netaddr.IPRange('10.0.0.100', '10.0.0.150'), netaddr.IPRange('10.0.0.200', '10.0.0.250')] update_subnet_req = ipam_req.SpecificSubnetRequest( @@ -162,8 +161,7 @@ class TestNeutronDbIpamPool(testlib_api.SqlTestCase, def test_get_subnet(self): cidr = '10.0.0.0/24' subnet, subnet_req = self._prepare_specific_subnet_request(cidr) - ipam_subnet = self.ipam_pool.allocate_subnet(subnet_req) - ipam_subnet.associate_neutron_subnet(subnet['id']) + self.ipam_pool.allocate_subnet(subnet_req) # Retrieve the subnet ipam_subnet = self.ipam_pool.get_subnet(subnet['id']) self._verify_ipam_subnet_details( @@ -176,6 +174,30 @@ class TestNeutronDbIpamPool(testlib_api.SqlTestCase, self.ipam_pool.get_subnet, 'boo') + def test_remove_ipam_subnet(self): + cidr = '10.0.0.0/24' + subnet, subnet_req = self._prepare_specific_subnet_request(cidr) + self.ipam_pool.allocate_subnet(subnet_req) + # Remove ipam subnet by neutron subnet id + self.ipam_pool.remove_subnet(subnet['id']) + + def test_remove_non_existent_subnet_fails(self): + self.assertRaises(n_exc.SubnetNotFound, + self.ipam_pool.remove_subnet, + 'non-existent-id') + + def test_get_details_for_invalid_subnet_id_fails(self): + cidr = '10.0.0.0/24' + subnet_req = ipam_req.SpecificSubnetRequest( + self._tenant_id, + 'non-existent-id', + cidr) + self.ipam_pool.allocate_subnet(subnet_req) + # Neutron subnet does not exist, so get_subnet should fail + self.assertRaises(n_exc.SubnetNotFound, + self.ipam_pool.get_subnet, + 'non-existent-id') + class TestNeutronDbIpamSubnet(testlib_api.SqlTestCase, TestNeutronDbIpamMixin): @@ -214,7 +236,6 @@ class TestNeutronDbIpamSubnet(testlib_api.SqlTestCase, 
             gateway_ip=subnet['gateway_ip'],
             allocation_pools=allocation_pool_ranges)
         ipam_subnet = self.ipam_pool.allocate_subnet(subnet_req)
-        ipam_subnet.associate_neutron_subnet(subnet['id'])
         return ipam_subnet, subnet
 
     def setUp(self):
@@ -314,7 +335,7 @@ class TestNeutronDbIpamSubnet(testlib_api.SqlTestCase,
         subnet = self._create_subnet(
             self.plugin, self.ctx, self.net_id, cidr)
         subnet_req = ipam_req.SpecificSubnetRequest(
-            'tenant_id', subnet, cidr, gateway_ip=subnet['gateway_ip'])
+            'tenant_id', subnet['id'], cidr, gateway_ip=subnet['gateway_ip'])
         ipam_subnet = self.ipam_pool.allocate_subnet(subnet_req)
         with self.ctx.session.begin():
             ranges = ipam_subnet._allocate_specific_ip(
@@ -416,28 +437,10 @@ class TestNeutronDbIpamSubnet(testlib_api.SqlTestCase,
         # This test instead might be made to pass, but for the wrong reasons!
         pass
 
-    def _test_allocate_subnet(self, subnet_id):
-        subnet_req = ipam_req.SpecificSubnetRequest(
-            'tenant_id', subnet_id, '192.168.0.0/24')
-        return self.ipam_pool.allocate_subnet(subnet_req)
-
     def test_allocate_subnet_for_non_existent_subnet_pass(self):
-        # This test should pass because neutron subnet is not checked
-        # until associate neutron subnet step
+        # This test should pass because the ipam subnet no longer has a
+        # foreign key relationship with the neutron subnet. Creating the
+        # ipam subnet before the neutron subnet is a valid case.
subnet_req = ipam_req.SpecificSubnetRequest( 'tenant_id', 'meh', '192.168.0.0/24') self.ipam_pool.allocate_subnet(subnet_req) - - def test_associate_neutron_subnet(self): - ipam_subnet, subnet = self._create_and_allocate_ipam_subnet( - '192.168.0.0/24', ip_version=4) - details = ipam_subnet.get_details() - self.assertEqual(subnet['id'], details.subnet_id) - - def test_associate_non_existing_neutron_subnet_fails(self): - subnet_req = ipam_req.SpecificSubnetRequest( - 'tenant_id', 'meh', '192.168.0.0/24') - ipam_subnet = self.ipam_pool.allocate_subnet(subnet_req) - self.assertRaises(n_exc.SubnetNotFound, - ipam_subnet.associate_neutron_subnet, - 'meh') diff --git a/neutron/tests/unit/ipam/test_requests.py b/neutron/tests/unit/ipam/test_requests.py index 243e8b70320..e15f3a7f4e1 100644 --- a/neutron/tests/unit/ipam/test_requests.py +++ b/neutron/tests/unit/ipam/test_requests.py @@ -10,8 +10,6 @@ # License for the specific language governing permissions and limitations # under the License. -import types - import mock import netaddr from oslo_config import cfg @@ -277,7 +275,7 @@ class TestIpamDriverLoader(base.BaseTestCase): def test_ipam_driver_is_loaded_from_ipam_driver_config_value(self): ipam_driver = self._load_ipam_driver('fake', None) self.assertIsInstance( - ipam_driver, (fake_driver.FakeDriver, types.ClassType), + ipam_driver, fake_driver.FakeDriver, "loaded ipam driver should be of type FakeDriver") @mock.patch(FAKE_IPAM_CLASS) @@ -291,20 +289,26 @@ class TestAddressRequestFactory(base.BaseTestCase): def test_specific_address_request_is_loaded(self): for address in ('10.12.0.15', 'fffe::1'): + ip = {'ip_address': address} self.assertIsInstance( - ipam_req.AddressRequestFactory.get_request(None, - None, - address), + ipam_req.AddressRequestFactory.get_request(None, None, ip), ipam_req.SpecificAddressRequest) def test_any_address_request_is_loaded(self): for addr in [None, '']: + ip = {'ip_address': addr} self.assertIsInstance( - 
ipam_req.AddressRequestFactory.get_request(None, - None, - addr), + ipam_req.AddressRequestFactory.get_request(None, None, ip), ipam_req.AnyAddressRequest) + def test_automatic_address_request_is_loaded(self): + ip = {'mac': '6c:62:6d:de:cf:49', + 'subnet_cidr': '2001:470:abcd::/64', + 'eui64_address': True} + self.assertIsInstance( + ipam_req.AddressRequestFactory.get_request(None, None, ip), + ipam_req.AutomaticAddressRequest) + class TestSubnetRequestFactory(IpamSubnetRequestTestCase): @@ -331,31 +335,31 @@ class TestSubnetRequestFactory(IpamSubnetRequestTestCase): subnet, subnetpool = self._build_subnet_dict(cidr=address) self.assertIsInstance( ipam_req.SubnetRequestFactory.get_request(None, - subnet, - subnetpool), + subnet, + subnetpool), ipam_req.SpecificSubnetRequest) def test_any_address_request_is_loaded_for_ipv4(self): subnet, subnetpool = self._build_subnet_dict(cidr=None, ip_version=4) self.assertIsInstance( ipam_req.SubnetRequestFactory.get_request(None, - subnet, - subnetpool), + subnet, + subnetpool), ipam_req.AnySubnetRequest) def test_any_address_request_is_loaded_for_ipv6(self): subnet, subnetpool = self._build_subnet_dict(cidr=None, ip_version=6) self.assertIsInstance( ipam_req.SubnetRequestFactory.get_request(None, - subnet, - subnetpool), + subnet, + subnetpool), ipam_req.AnySubnetRequest) def test_args_are_passed_to_specific_request(self): subnet, subnetpool = self._build_subnet_dict() request = ipam_req.SubnetRequestFactory.get_request(None, - subnet, - subnetpool) + subnet, + subnetpool) self.assertIsInstance(request, ipam_req.SpecificSubnetRequest) self.assertEqual(self.tenant_id, request.tenant_id) diff --git a/neutron/tests/unit/plugins/ml2/drivers/base_type_tunnel.py b/neutron/tests/unit/plugins/ml2/drivers/base_type_tunnel.py index 725fdaab18e..5bbb3ec38dc 100644 --- a/neutron/tests/unit/plugins/ml2/drivers/base_type_tunnel.py +++ b/neutron/tests/unit/plugins/ml2/drivers/base_type_tunnel.py @@ -93,6 +93,35 @@ class 
TunnelTypeTestMixin(object): self.assertIsNone( self.driver.get_allocation(self.session, (TUN_MAX + 5 + 1))) + def _test_sync_allocations_and_allocated(self, tunnel_id): + segment = {api.NETWORK_TYPE: self.TYPE, + api.PHYSICAL_NETWORK: None, + api.SEGMENTATION_ID: tunnel_id} + self.driver.reserve_provider_segment(self.session, segment) + + self.driver.tunnel_ranges = UPDATED_TUNNEL_RANGES + self.driver.sync_allocations() + + self.assertTrue( + self.driver.get_allocation(self.session, tunnel_id).allocated) + + def test_sync_allocations_and_allocated_in_initial_range(self): + self._test_sync_allocations_and_allocated(TUN_MIN + 2) + + def test_sync_allocations_and_allocated_in_final_range(self): + self._test_sync_allocations_and_allocated(TUN_MAX + 2) + + def test_sync_allocations_no_op(self): + + def verify_no_chunk(iterable, chunk_size): + # no segment removed/added + self.assertEqual(0, len(list(iterable))) + return [] + with mock.patch.object( + type_tunnel, 'chunks', side_effect=verify_no_chunk) as chunks: + self.driver.sync_allocations() + self.assertEqual(2, len(chunks.mock_calls)) + def test_partial_segment_is_partial_segment(self): segment = {api.NETWORK_TYPE: self.TYPE, api.PHYSICAL_NETWORK: None, diff --git a/neutron/tests/unit/plugins/ml2/drivers/cisco/__init__.py b/neutron/tests/unit/plugins/ml2/drivers/cisco/__init__.py deleted file mode 100644 index e69de29bb2d..00000000000 diff --git a/neutron/tests/unit/plugins/ml2/drivers/cisco/apic/__init__.py b/neutron/tests/unit/plugins/ml2/drivers/cisco/apic/__init__.py deleted file mode 100644 index e69de29bb2d..00000000000 diff --git a/neutron/tests/unit/plugins/ml2/drivers/cisco/apic/base.py b/neutron/tests/unit/plugins/ml2/drivers/cisco/apic/base.py deleted file mode 100644 index 889a32e4385..00000000000 --- a/neutron/tests/unit/plugins/ml2/drivers/cisco/apic/base.py +++ /dev/null @@ -1,234 +0,0 @@ -# Copyright (c) 2014 Cisco Systems -# All Rights Reserved. 
-# -# Licensed under the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. You may obtain -# a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT -# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the -# License for the specific language governing permissions and limitations -# under the License. - -import contextlib -import requests - -import mock -from oslo_config import cfg - -from neutron.tests import base - - -OK = requests.codes.ok - -APIC_HOSTS = ['fake.controller.local'] -APIC_PORT = 7580 -APIC_USR = 'notadmin' -APIC_PWD = 'topsecret' - -APIC_TENANT = 'citizen14' -APIC_NETWORK = 'network99' -APIC_NETNAME = 'net99name' -APIC_SUBNET = '10.3.2.1/24' -APIC_L3CTX = 'layer3context' -APIC_AP = 'appProfile001' -APIC_EPG = 'endPointGroup001' - -APIC_CONTRACT = 'signedContract' -APIC_SUBJECT = 'testSubject' -APIC_FILTER = 'carbonFilter' -APIC_ENTRY = 'forcedEntry' - -APIC_SYSTEM_ID = 'sysid' -APIC_DOMAIN = 'cumuloNimbus' - -APIC_NODE_PROF = 'red' -APIC_LEAF = 'green' -APIC_LEAF_TYPE = 'range' -APIC_NODE_BLK = 'blue' -APIC_PORT_PROF = 'yellow' -APIC_PORT_SEL = 'front' -APIC_PORT_TYPE = 'range' -APIC_PORT_BLK1 = 'block01' -APIC_PORT_BLK2 = 'block02' -APIC_ACC_PORT_GRP = 'alpha' -APIC_FUNC_PROF = 'beta' -APIC_ATT_ENT_PROF = 'delta' -APIC_VLAN_NAME = 'gamma' -APIC_VLAN_MODE = 'dynamic' -APIC_VLANID_FROM = 2900 -APIC_VLANID_TO = 2999 -APIC_VLAN_FROM = 'vlan-%d' % APIC_VLANID_FROM -APIC_VLAN_TO = 'vlan-%d' % APIC_VLANID_TO - -APIC_ROUTER = 'router_id' - -APIC_EXT_SWITCH = '203' -APIC_EXT_MODULE = '1' -APIC_EXT_PORT = '34' -APIC_EXT_ENCAP = 'vlan-100' -APIC_EXT_CIDR_EXPOSED = '10.10.40.2/16' -APIC_EXT_GATEWAY_IP = '10.10.40.1' - -APIC_KEY = 'key' - -KEYSTONE_TOKEN = '123Token123' - -APIC_UPLINK_PORTS = ['uplink_port'] - 
-SERVICE_HOST = 'host1' -SERVICE_HOST_IFACE = 'eth0' -SERVICE_HOST_MAC = 'aa:ee:ii:oo:uu:yy' - -SERVICE_PEER_CHASSIS_NAME = 'leaf4' -SERVICE_PEER_CHASSIS = 'topology/pod-1/node-' + APIC_EXT_SWITCH -SERVICE_PEER_PORT_LOCAL = 'Eth%s/%s' % (APIC_EXT_MODULE, APIC_EXT_PORT) -SERVICE_PEER_PORT_DESC = ('topology/pod-1/paths-%s/pathep-[%s]' % - (APIC_EXT_SWITCH, SERVICE_PEER_PORT_LOCAL.lower())) - - -cfg.CONF.import_group('ml2', 'neutron.plugins.ml2.config') - - -class ControllerMixin(object): - - """Mock the controller for APIC driver and service unit tests.""" - - def __init__(self): - self.response = None - - def set_up_mocks(self): - # The mocked responses from the server are lists used by - # mock.side_effect, which means each call to post or get will - # return the next item in the list. This allows the test cases - # to stage a sequence of responses to method(s) under test. - self.response = {'post': [], 'get': []} - self.reset_reponses() - - def reset_reponses(self, req=None): - # Clear all staged responses. - reqs = [req] if req else ['post', 'get'] # Both if none specified. 
- for req in reqs: - del self.response[req][:] - self.restart_responses(req) - - def restart_responses(self, req): - responses = mock.MagicMock(side_effect=self.response[req]) - if req == 'post': - requests.Session.post = responses - elif req == 'get': - requests.Session.get = responses - - def mock_response_for_post(self, mo, **attrs): - attrs['debug_mo'] = mo # useful for debugging - self._stage_mocked_response('post', OK, mo, **attrs) - - def _stage_mocked_response(self, req, mock_status, mo, **attrs): - response = mock.MagicMock() - response.status_code = mock_status - mo_attrs = [{mo: {'attributes': attrs}}] if attrs else [] - response.json.return_value = {'imdata': mo_attrs} - self.response[req].append(response) - - def mock_apic_manager_login_responses(self, timeout=300): - # APIC Manager tests are based on authenticated session - self.mock_response_for_post('aaaLogin', userName=APIC_USR, - token='ok', refreshTimeoutSeconds=timeout) - - @contextlib.contextmanager - def fake_transaction(self, *args, **kwargs): - yield 'transaction' - - -class ConfigMixin(object): - - """Mock the config for APIC driver and service unit tests.""" - - def __init__(self): - self.mocked_parser = None - - def set_up_mocks(self): - # Mock the configuration file - base.BaseTestCase.config_parse() - - # Configure global option apic_system_id - cfg.CONF.set_override('apic_system_id', APIC_SYSTEM_ID) - - # Configure option keystone_authtoken - cfg.CONF.keystone_authtoken = KEYSTONE_TOKEN - - # Configure the ML2 mechanism drivers and network types - ml2_opts = { - 'mechanism_drivers': ['apic'], - 'tenant_network_types': ['vlan'], - } - for opt, val in ml2_opts.items(): - cfg.CONF.set_override(opt, val, 'ml2') - - # Configure the ML2 type_vlan opts - ml2_type_vlan_opts = { - 'vlan_ranges': ['physnet1:100:199'], - } - cfg.CONF.set_override('network_vlan_ranges', - ml2_type_vlan_opts['vlan_ranges'], - 'ml2_type_vlan') - self.vlan_ranges = ml2_type_vlan_opts['vlan_ranges'] - - # Configure 
the Cisco APIC mechanism driver - apic_test_config = { - 'apic_hosts': APIC_HOSTS, - 'apic_username': APIC_USR, - 'apic_password': APIC_PWD, - 'apic_domain_name': APIC_SYSTEM_ID, - 'apic_vlan_ns_name': APIC_VLAN_NAME, - 'apic_vlan_range': '%d:%d' % (APIC_VLANID_FROM, APIC_VLANID_TO), - 'apic_node_profile': APIC_NODE_PROF, - 'apic_entity_profile': APIC_ATT_ENT_PROF, - 'apic_function_profile': APIC_FUNC_PROF, - 'apic_host_uplink_ports': APIC_UPLINK_PORTS - } - for opt, val in apic_test_config.items(): - cfg.CONF.set_override(opt, val, 'ml2_cisco_apic') - self.apic_config = cfg.CONF.ml2_cisco_apic - - # Configure switch topology - apic_switch_cfg = { - 'apic_switch:101': {'ubuntu1,ubuntu2': ['3/11']}, - 'apic_switch:102': {'rhel01,rhel02': ['4/21'], - 'rhel03': ['4/22']}, - } - self.switch_dict = { - '101': { - '3/11': ['ubuntu1', 'ubuntu2'], - }, - '102': { - '4/21': ['rhel01', 'rhel02'], - '4/22': ['rhel03'], - }, - } - self.vpc_dict = { - '201': '202', - '202': '201', - } - self.external_network_dict = { - APIC_NETWORK + '-name': { - 'switch': APIC_EXT_SWITCH, - 'port': APIC_EXT_MODULE + '/' + APIC_EXT_PORT, - 'encap': APIC_EXT_ENCAP, - 'cidr_exposed': APIC_EXT_CIDR_EXPOSED, - 'gateway_ip': APIC_EXT_GATEWAY_IP, - }, - } - self.mocked_parser = mock.patch.object( - cfg, 'MultiConfigParser').start() - self.mocked_parser.return_value.read.return_value = [apic_switch_cfg] - self.mocked_parser.return_value.parsed = [apic_switch_cfg] - - -class FakeDbContract(object): - - def __init__(self, contract_id): - self.contract_id = contract_id diff --git a/neutron/tests/unit/plugins/ml2/drivers/cisco/apic/test_apic_sync.py b/neutron/tests/unit/plugins/ml2/drivers/cisco/apic/test_apic_sync.py deleted file mode 100644 index 47584710105..00000000000 --- a/neutron/tests/unit/plugins/ml2/drivers/cisco/apic/test_apic_sync.py +++ /dev/null @@ -1,78 +0,0 @@ -# Copyright (c) 2014 Cisco Systems -# All Rights Reserved. 
-# -# Licensed under the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. You may obtain -# a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT -# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the -# License for the specific language governing permissions and limitations -# under the License. - -import sys - -import mock - -sys.modules["apicapi"] = mock.Mock() - -from neutron.plugins.ml2.drivers.cisco.apic import apic_sync -from neutron.tests import base - -LOOPING_CALL = 'oslo_service.loopingcall.FixedIntervalLoopingCall' -GET_PLUGIN = 'neutron.manager.NeutronManager.get_plugin' -GET_ADMIN_CONTEXT = 'neutron.context.get_admin_context' -L2_DB = 'neutron.plugins.ml2.db.get_locked_port_and_binding' -NETWORK_CONTEXT = 'neutron.plugins.ml2.driver_context.NetworkContext' -SUBNET_CONTEXT = 'neutron.plugins.ml2.driver_context.SubnetContext' -PORT_CONTEXT = 'neutron.plugins.ml2.driver_context.PortContext' - - -class TestCiscoApicSync(base.BaseTestCase): - - def setUp(self): - super(TestCiscoApicSync, self).setUp() - self.driver = mock.Mock() - # Patch looping call - loopingcall_c = mock.patch(LOOPING_CALL).start() - self.loopingcall = mock.Mock() - loopingcall_c.return_value = self.loopingcall - # Patch get plugin - self.get_plugin = mock.patch(GET_PLUGIN).start() - self.get_plugin.return_value = mock.Mock() - # Patch get admin context - self.get_admin_context = mock.patch(GET_ADMIN_CONTEXT).start() - self.get_admin_context.return_value = mock.Mock() - # Patch get locked port and binding - self.get_locked_port_and_binding = mock.patch(L2_DB).start() - self.get_locked_port_and_binding.return_value = [mock.Mock()] * 2 - # Patch driver context - mock.patch(NETWORK_CONTEXT).start() - mock.patch(SUBNET_CONTEXT).start() - 
mock.patch(PORT_CONTEXT).start() - - def test_sync_base(self): - sync = apic_sync.ApicBaseSynchronizer(self.driver) - sync.core_plugin = mock.Mock() - sync.core_plugin.get_networks.return_value = [{'id': 'net'}] - sync.core_plugin.get_subnets.return_value = [{'id': 'sub'}] - sync.core_plugin.get_ports.return_value = [{'id': 'port', - 'network_id': 'net'}] - sync.sync_base() - self.assertEqual(1, self.driver.create_network_postcommit.call_count) - self.assertEqual(1, self.driver.create_subnet_postcommit.call_count) - self.assertEqual(1, self.get_locked_port_and_binding.call_count) - self.assertEqual(1, self.driver.create_port_postcommit.call_count) - - def test_sync_router(self): - sync = apic_sync.ApicRouterSynchronizer(self.driver) - sync.core_plugin = mock.Mock() - sync.core_plugin.get_ports.return_value = [{'id': 'port', - 'network_id': 'net', - 'device_id': 'dev'}] - sync.sync_router() - self.assertEqual( - 1, self.driver.add_router_interface_postcommit.call_count) diff --git a/neutron/tests/unit/plugins/ml2/drivers/cisco/apic/test_apic_topology.py b/neutron/tests/unit/plugins/ml2/drivers/cisco/apic/test_apic_topology.py deleted file mode 100644 index 292cb54e0ff..00000000000 --- a/neutron/tests/unit/plugins/ml2/drivers/cisco/apic/test_apic_topology.py +++ /dev/null @@ -1,206 +0,0 @@ -# Copyright (c) 2014 Cisco Systems -# All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. You may obtain -# a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT -# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the -# License for the specific language governing permissions and limitations -# under the License. 
- -import sys - -import mock - -sys.modules["apicapi"] = mock.Mock() - -from neutron.plugins.ml2.drivers.cisco.apic import apic_topology -from neutron.tests import base -from neutron.tests.unit.plugins.ml2.drivers.cisco.apic import ( - base as mocked) - -NOTIFIER = ('neutron.plugins.ml2.drivers.cisco.apic.' - 'apic_topology.ApicTopologyServiceNotifierApi') -RPC_CONNECTION = 'neutron.common.rpc.Connection' -AGENTS_DB = 'neutron.db.agents_db' -PERIODIC_TASK = 'oslo_service.periodic_task' -DEV_EXISTS = 'neutron.agent.linux.ip_lib.device_exists' -IP_DEVICE = 'neutron.agent.linux.ip_lib.IPDevice' -EXECUTE = 'neutron.agent.linux.utils.execute' - -LLDP_CMD = ['lldpctl', '-f', 'keyvalue'] -ETH0 = mocked.SERVICE_HOST_IFACE - -LLDPCTL_RES = ( - 'lldp.' + ETH0 + '.via=LLDP\n' - 'lldp.' + ETH0 + '.rid=1\n' - 'lldp.' + ETH0 + '.age=0 day, 20:55:54\n' - 'lldp.' + ETH0 + '.chassis.mac=' + mocked.SERVICE_HOST_MAC + '\n' - 'lldp.' + ETH0 + '.chassis.name=' + mocked.SERVICE_PEER_CHASSIS_NAME + '\n' - 'lldp.' + ETH0 + '.chassis.descr=' + mocked.SERVICE_PEER_CHASSIS + '\n' - 'lldp.' + ETH0 + '.chassis.Bridge.enabled=on\n' - 'lldp.' + ETH0 + '.chassis.Router.enabled=on\n' - 'lldp.' + ETH0 + '.port.local=' + mocked.SERVICE_PEER_PORT_LOCAL + '\n' - 'lldp.' 
+ ETH0 + '.port.descr=' + mocked.SERVICE_PEER_PORT_DESC) - - -class TestCiscoApicTopologyService(base.BaseTestCase, - mocked.ControllerMixin, - mocked.ConfigMixin): - - def setUp(self): - super(TestCiscoApicTopologyService, self).setUp() - mocked.ControllerMixin.set_up_mocks(self) - mocked.ConfigMixin.set_up_mocks(self) - # Patch notifier - notifier_c = mock.patch(NOTIFIER).start() - self.notifier = mock.Mock() - notifier_c.return_value = self.notifier - # Patch Connection - connection_c = mock.patch(RPC_CONNECTION).start() - self.connection = mock.Mock() - connection_c.return_value = self.connection - # Patch agents db - self.agents_db = mock.patch(AGENTS_DB).start() - self.service = apic_topology.ApicTopologyService() - self.service.apic_manager = mock.Mock() - - def test_init_host(self): - self.service.init_host() - self.connection.create_consumer.ensure_called_once() - self.connection.consume_in_threads.ensure_called_once() - - def test_update_link_add_nopeers(self): - self.service.peers = {} - args = (mocked.SERVICE_HOST, mocked.SERVICE_HOST_IFACE, - mocked.SERVICE_HOST_MAC, mocked.APIC_EXT_SWITCH, - mocked.APIC_EXT_MODULE, mocked.APIC_EXT_PORT) - self.service.update_link(None, *args) - self.service.apic_manager.add_hostlink.assert_called_once_with(*args) - self.assertEqual(args, - self.service.peers[(mocked.SERVICE_HOST, - mocked.SERVICE_HOST_IFACE)]) - - def test_update_link_add_with_peers_diff(self): - args = (mocked.SERVICE_HOST, mocked.SERVICE_HOST_IFACE, - mocked.SERVICE_HOST_MAC, mocked.APIC_EXT_SWITCH, - mocked.APIC_EXT_MODULE, mocked.APIC_EXT_PORT) - args_prime = args[:2] + tuple(x + '1' for x in args[2:]) - self.service.peers = {args_prime[:2]: args_prime} - self.service.update_link(None, *args) - self.service.apic_manager.remove_hostlink.assert_called_once_with( - *args_prime) - self.service.apic_manager.add_hostlink.assert_called_once_with(*args) - self.assertEqual( - args, self.service.peers[ - (mocked.SERVICE_HOST, mocked.SERVICE_HOST_IFACE)]) - 
- def test_update_link_add_with_peers_eq(self): - args = (mocked.SERVICE_HOST, mocked.SERVICE_HOST_IFACE, - mocked.SERVICE_HOST_MAC, - mocked.APIC_EXT_SWITCH, - mocked.APIC_EXT_MODULE, mocked.APIC_EXT_PORT) - self.service.peers = {args[:2]: args} - self.service.update_link(None, *args) - - def test_update_link_rem_with_peers(self): - args = (mocked.SERVICE_HOST, mocked.SERVICE_HOST_IFACE, - mocked.SERVICE_HOST_MAC, 0, - mocked.APIC_EXT_MODULE, mocked.APIC_EXT_PORT) - self.service.peers = {args[:2]: args} - self.service.update_link(None, *args) - self.service.apic_manager.remove_hostlink.assert_called_once_with( - *args) - self.assertFalse(bool(self.service.peers)) - - def test_update_link_rem_no_peers(self): - args = (mocked.SERVICE_HOST, mocked.SERVICE_HOST_IFACE, - mocked.SERVICE_HOST_MAC, 0, - mocked.APIC_EXT_MODULE, mocked.APIC_EXT_PORT) - self.service.update_link(None, *args) - - -class TestCiscoApicTopologyAgent(base.BaseTestCase, - mocked.ControllerMixin, - mocked.ConfigMixin): - - def setUp(self): - super(TestCiscoApicTopologyAgent, self).setUp() - mocked.ControllerMixin.set_up_mocks(self) - mocked.ConfigMixin.set_up_mocks(self) - # Patch notifier - notifier_c = mock.patch(NOTIFIER).start() - self.notifier = mock.Mock() - notifier_c.return_value = self.notifier - # Patch device_exists - self.dev_exists = mock.patch(DEV_EXISTS).start() - # Patch IPDevice - ipdev_c = mock.patch(IP_DEVICE).start() - self.ipdev = mock.Mock() - ipdev_c.return_value = self.ipdev - self.ipdev.link.address = mocked.SERVICE_HOST_MAC - # Patch execute - self.execute = mock.patch(EXECUTE).start() - self.execute.return_value = LLDPCTL_RES - # Patch tasks - self.periodic_task = mock.patch(PERIODIC_TASK).start() - self.agent = apic_topology.ApicTopologyAgent() - self.agent.host = mocked.SERVICE_HOST - self.agent.service_agent = mock.Mock() - self.agent.lldpcmd = LLDP_CMD - - def test_init_host_device_exists(self): - self.agent.lldpcmd = None - self.dev_exists.return_value = True - 
self.agent.init_host() - self.assertEqual(LLDP_CMD + mocked.APIC_UPLINK_PORTS, - self.agent.lldpcmd) - - def test_init_host_device_not_exist(self): - self.agent.lldpcmd = None - self.dev_exists.return_value = False - self.agent.init_host() - self.assertEqual(LLDP_CMD, self.agent.lldpcmd) - - def test_get_peers(self): - self.agent.peers = {} - peers = self.agent._get_peers() - expected = [(mocked.SERVICE_HOST, mocked.SERVICE_HOST_IFACE, - mocked.SERVICE_HOST_MAC, mocked.APIC_EXT_SWITCH, - mocked.APIC_EXT_MODULE, mocked.APIC_EXT_PORT)] - self.assertEqual(expected, - peers[mocked.SERVICE_HOST_IFACE]) - - def test_check_for_new_peers_no_peers(self): - self.agent.peers = {} - expected = (mocked.SERVICE_HOST, mocked.SERVICE_HOST_IFACE, - mocked.SERVICE_HOST_MAC, mocked.APIC_EXT_SWITCH, - mocked.APIC_EXT_MODULE, mocked.APIC_EXT_PORT) - peers = {mocked.SERVICE_HOST_IFACE: [expected]} - context = mock.Mock() - with mock.patch.object(self.agent, '_get_peers', - return_value=peers): - self.agent._check_for_new_peers(context) - self.assertEqual(expected, - self.agent.peers[mocked.SERVICE_HOST_IFACE]) - self.agent.service_agent.update_link.assert_called_once_with( - context, *expected) - - def test_check_for_new_peers_with_peers(self): - expected = (mocked.SERVICE_HOST, mocked.SERVICE_HOST_IFACE, - mocked.SERVICE_HOST_MAC, mocked.APIC_EXT_SWITCH, - mocked.APIC_EXT_MODULE, mocked.APIC_EXT_PORT) - peers = {mocked.SERVICE_HOST_IFACE: [expected]} - self.agent.peers = {mocked.SERVICE_HOST_IFACE: - [tuple(x + '1' for x in expected)]} - context = mock.Mock() - with mock.patch.object(self.agent, '_get_peers', - return_value=peers): - self.agent._check_for_new_peers(context) - self.agent.service_agent.update_link.assert_called_with( - context, *expected) diff --git a/neutron/tests/unit/plugins/ml2/drivers/cisco/apic/test_mechanism_apic.py b/neutron/tests/unit/plugins/ml2/drivers/cisco/apic/test_mechanism_apic.py deleted file mode 100644 index f59fedcf20a..00000000000 --- 
a/neutron/tests/unit/plugins/ml2/drivers/cisco/apic/test_mechanism_apic.py +++ /dev/null @@ -1,336 +0,0 @@ -# Copyright (c) 2014 Cisco Systems -# All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. You may obtain -# a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT -# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the -# License for the specific language governing permissions and limitations -# under the License. - -import sys - -import mock - -sys.modules["apicapi"] = mock.Mock() - -from neutron.common import constants as n_constants -from neutron.extensions import portbindings -from neutron.plugins.ml2.drivers.cisco.apic import mechanism_apic as md -from neutron.plugins.ml2.drivers import type_vlan # noqa -from neutron.tests import base -from neutron.tests.unit.plugins.ml2.drivers.cisco.apic import ( - base as mocked) - - -HOST_ID1 = 'ubuntu' -HOST_ID2 = 'rhel' -ENCAP = '101' - -SUBNET_GATEWAY = '10.3.2.1' -SUBNET_CIDR = '10.3.1.0/24' -SUBNET_NETMASK = '24' - -TEST_SEGMENT1 = 'test-segment1' -TEST_SEGMENT2 = 'test-segment2' - - -class TestCiscoApicMechDriver(base.BaseTestCase, - mocked.ControllerMixin, - mocked.ConfigMixin): - - def setUp(self): - super(TestCiscoApicMechDriver, self).setUp() - mocked.ControllerMixin.set_up_mocks(self) - mocked.ConfigMixin.set_up_mocks(self) - - self.mock_apic_manager_login_responses() - self.driver = md.APICMechanismDriver() - self.driver.synchronizer = None - md.APICMechanismDriver.get_base_synchronizer = mock.Mock() - self.driver.vif_type = 'test-vif_type' - self.driver.cap_port_filter = 'test-cap_port_filter' - self.driver.name_mapper = mock.Mock() - self.driver.name_mapper.tenant.return_value = mocked.APIC_TENANT - 
self.driver.name_mapper.network.return_value = mocked.APIC_NETWORK - self.driver.name_mapper.subnet.return_value = mocked.APIC_SUBNET - self.driver.name_mapper.port.return_value = mocked.APIC_PORT - self.driver.name_mapper.router.return_value = mocked.APIC_ROUTER - self.driver.name_mapper.app_profile.return_value = mocked.APIC_AP - self.driver.apic_manager = mock.Mock( - name_mapper=mock.Mock(), ext_net_dict=self.external_network_dict) - - self.driver.apic_manager.apic.transaction = self.fake_transaction - - def test_initialize(self): - self.driver.initialize() - mgr = self.driver.apic_manager - self.assertEqual(1, mgr.ensure_infra_created_on_apic.call_count) - self.assertEqual(1, - mgr.ensure_bgp_pod_policy_created_on_apic.call_count) - - def test_update_port_postcommit(self): - net_ctx = self._get_network_context(mocked.APIC_TENANT, - mocked.APIC_NETWORK, - TEST_SEGMENT1) - port_ctx = self._get_port_context(mocked.APIC_TENANT, - mocked.APIC_NETWORK, - 'vm1', net_ctx, HOST_ID1, - device_owner='any') - mgr = self.driver.apic_manager - self.driver.update_port_postcommit(port_ctx) - mgr.ensure_path_created_for_port.assert_called_once_with( - mocked.APIC_TENANT, mocked.APIC_NETWORK, HOST_ID1, - ENCAP, transaction='transaction') - - def test_create_port_postcommit(self): - net_ctx = self._get_network_context(mocked.APIC_TENANT, - mocked.APIC_NETWORK, - TEST_SEGMENT1) - port_ctx = self._get_port_context(mocked.APIC_TENANT, - mocked.APIC_NETWORK, - 'vm1', net_ctx, HOST_ID1, - device_owner='any') - mgr = self.driver.apic_manager - self.driver.create_port_postcommit(port_ctx) - mgr.ensure_path_created_for_port.assert_called_once_with( - mocked.APIC_TENANT, mocked.APIC_NETWORK, HOST_ID1, - ENCAP, transaction='transaction') - - def test_update_port_nobound_postcommit(self): - net_ctx = self._get_network_context(mocked.APIC_TENANT, - mocked.APIC_NETWORK, - TEST_SEGMENT1) - port_ctx = self._get_port_context(mocked.APIC_TENANT, - mocked.APIC_NETWORK, - 'vm1', net_ctx, None, - 
device_owner='any') - self.driver.update_port_postcommit(port_ctx) - mgr = self.driver.apic_manager - self.assertFalse(mgr.ensure_path_created_for_port.called) - - def test_create_port_nobound_postcommit(self): - net_ctx = self._get_network_context(mocked.APIC_TENANT, - mocked.APIC_NETWORK, - TEST_SEGMENT1) - port_ctx = self._get_port_context(mocked.APIC_TENANT, - mocked.APIC_NETWORK, - 'vm1', net_ctx, None, - device_owner='any') - self.driver.create_port_postcommit(port_ctx) - mgr = self.driver.apic_manager - self.assertFalse(mgr.ensure_path_created_for_port.called) - - def test_update_gw_port_postcommit(self): - net_ctx = self._get_network_context(mocked.APIC_TENANT, - mocked.APIC_NETWORK, - TEST_SEGMENT1, external=True) - port_ctx = self._get_port_context(mocked.APIC_TENANT, - mocked.APIC_NETWORK, - 'vm1', net_ctx, HOST_ID1, gw=True) - mgr = self.driver.apic_manager - mgr.get_router_contract.return_value = mocked.FakeDbContract( - mocked.APIC_CONTRACT) - self.driver.update_port_postcommit(port_ctx) - mgr.get_router_contract.assert_called_once_with( - port_ctx.current['device_id']) - self.assertEqual(1, mgr.ensure_context_enforced.call_count) - mgr.ensure_external_routed_network_created.assert_called_once_with( - mocked.APIC_NETWORK, transaction='transaction') - mgr.ensure_logical_node_profile_created.assert_called_once_with( - mocked.APIC_NETWORK, mocked.APIC_EXT_SWITCH, - mocked.APIC_EXT_MODULE, mocked.APIC_EXT_PORT, - mocked.APIC_EXT_ENCAP, mocked.APIC_EXT_CIDR_EXPOSED, - transaction='transaction') - mgr.ensure_static_route_created.assert_called_once_with( - mocked.APIC_NETWORK, mocked.APIC_EXT_SWITCH, - mocked.APIC_EXT_GATEWAY_IP, transaction='transaction') - mgr.ensure_external_epg_created.assert_called_once_with( - mocked.APIC_NETWORK, transaction='transaction') - mgr.ensure_external_epg_consumed_contract.assert_called_once_with( - mocked.APIC_NETWORK, mgr.get_router_contract.return_value, - transaction='transaction') - 
mgr.ensure_external_epg_provided_contract.assert_called_once_with( - mocked.APIC_NETWORK, mgr.get_router_contract.return_value, - transaction='transaction') - - def test_create_network_postcommit(self): - ctx = self._get_network_context(mocked.APIC_TENANT, - mocked.APIC_NETWORK, - TEST_SEGMENT1) - mgr = self.driver.apic_manager - self.driver.create_network_postcommit(ctx) - mgr.ensure_bd_created_on_apic.assert_called_once_with( - mocked.APIC_TENANT, mocked.APIC_NETWORK, transaction='transaction') - mgr.ensure_epg_created.assert_called_once_with( - mocked.APIC_TENANT, mocked.APIC_NETWORK, transaction='transaction') - - def test_create_external_network_postcommit(self): - ctx = self._get_network_context(mocked.APIC_TENANT, - mocked.APIC_NETWORK, - TEST_SEGMENT1, external=True) - mgr = self.driver.apic_manager - self.driver.create_network_postcommit(ctx) - self.assertFalse(mgr.ensure_bd_created_on_apic.called) - self.assertFalse(mgr.ensure_epg_created.called) - - def test_delete_network_postcommit(self): - ctx = self._get_network_context(mocked.APIC_TENANT, - mocked.APIC_NETWORK, - TEST_SEGMENT1) - mgr = self.driver.apic_manager - self.driver.delete_network_postcommit(ctx) - mgr.delete_bd_on_apic.assert_called_once_with( - mocked.APIC_TENANT, mocked.APIC_NETWORK, transaction='transaction') - mgr.delete_epg_for_network.assert_called_once_with( - mocked.APIC_TENANT, mocked.APIC_NETWORK, transaction='transaction') - - def test_delete_external_network_postcommit(self): - ctx = self._get_network_context(mocked.APIC_TENANT, - mocked.APIC_NETWORK, - TEST_SEGMENT1, external=True) - mgr = self.driver.apic_manager - self.driver.delete_network_postcommit(ctx) - mgr.delete_external_routed_network.assert_called_once_with( - mocked.APIC_NETWORK) - - def test_create_subnet_postcommit(self): - net_ctx = self._get_network_context(mocked.APIC_TENANT, - mocked.APIC_NETWORK, - TEST_SEGMENT1) - subnet_ctx = self._get_subnet_context(SUBNET_GATEWAY, - SUBNET_CIDR, - net_ctx) - mgr = 
self.driver.apic_manager - self.driver.create_subnet_postcommit(subnet_ctx) - mgr.ensure_subnet_created_on_apic.assert_called_once_with( - mocked.APIC_TENANT, mocked.APIC_NETWORK, - '%s/%s' % (SUBNET_GATEWAY, SUBNET_NETMASK)) - - def test_create_subnet_nogw_postcommit(self): - net_ctx = self._get_network_context(mocked.APIC_TENANT, - mocked.APIC_NETWORK, - TEST_SEGMENT1) - subnet_ctx = self._get_subnet_context(None, - SUBNET_CIDR, - net_ctx) - mgr = self.driver.apic_manager - self.driver.create_subnet_postcommit(subnet_ctx) - self.assertFalse(mgr.ensure_subnet_created_on_apic.called) - - def _get_network_context(self, tenant_id, net_id, seg_id=None, - seg_type='vlan', external=False): - network = {'id': net_id, - 'name': net_id + '-name', - 'tenant_id': tenant_id, - 'provider:segmentation_id': seg_id} - if external: - network['router:external'] = True - if seg_id: - network_segments = [{'id': seg_id, - 'segmentation_id': ENCAP, - 'network_type': seg_type, - 'physical_network': 'physnet1'}] - else: - network_segments = [] - return FakeNetworkContext(network, network_segments) - - def _get_subnet_context(self, gateway_ip, cidr, network): - subnet = {'tenant_id': network.current['tenant_id'], - 'network_id': network.current['id'], - 'id': '[%s/%s]' % (gateway_ip, cidr), - 'gateway_ip': gateway_ip, - 'cidr': cidr} - return FakeSubnetContext(subnet, network) - - def _get_port_context(self, tenant_id, net_id, vm_id, network, host, - gw=False, device_owner='compute'): - port = {'device_id': vm_id, - 'device_owner': device_owner, - 'binding:host_id': host, - 'tenant_id': tenant_id, - 'id': mocked.APIC_PORT, - 'name': mocked.APIC_PORT, - 'network_id': net_id} - if gw: - port['device_owner'] = n_constants.DEVICE_OWNER_ROUTER_GW - port['device_id'] = mocked.APIC_ROUTER - return FakePortContext(port, network) - - -class FakeNetworkContext(object): - """To generate network context for testing purposes only.""" - - def __init__(self, network, segments): - self._network = network 
- self._segments = segments - - @property - def current(self): - return self._network - - @property - def network_segments(self): - return self._segments - - -class FakeSubnetContext(object): - """To generate subnet context for testing purposes only.""" - - def __init__(self, subnet, network): - self._subnet = subnet - self._network = network - self._plugin = mock.Mock() - self._plugin_context = mock.Mock() - self._plugin.get_network.return_value = {} - - @property - def current(self): - return self._subnet - - @property - def network(self): - return self._network - - -class FakePortContext(object): - """To generate port context for testing purposes only.""" - - def __init__(self, port, network): - self._port = port - self._network = network - self._plugin = mock.Mock() - self._plugin_context = mock.Mock() - self._plugin.get_ports.return_value = [] - if network.network_segments: - self._bound_segment = network.network_segments[0] - else: - self._bound_segment = None - - @property - def current(self): - return self._port - - @property - def network(self): - return self._network - - @property - def top_bound_segment(self): - return self._bound_segment - - def set_binding(self, segment_id, vif_type, cap_port_filter): - pass - - @property - def host(self): - return self._port.get(portbindings.HOST_ID) - - @property - def original_host(self): - return self._original_port.get(portbindings.HOST_ID) diff --git a/neutron/tests/unit/plugins/ml2/drivers/linuxbridge/agent/test_linuxbridge_neutron_agent.py b/neutron/tests/unit/plugins/ml2/drivers/linuxbridge/agent/test_linuxbridge_neutron_agent.py index 3d5fa6cf4c4..55b9bd821b4 100644 --- a/neutron/tests/unit/plugins/ml2/drivers/linuxbridge/agent/test_linuxbridge_neutron_agent.py +++ b/neutron/tests/unit/plugins/ml2/drivers/linuxbridge/agent/test_linuxbridge_neutron_agent.py @@ -17,6 +17,7 @@ import os import mock from oslo_config import cfg +from neutron.agent.linux import bridge_lib from neutron.agent.linux import ip_lib from 
neutron.agent.linux import utils from neutron.common import constants @@ -577,9 +578,12 @@ class TestLinuxBridgeManager(base.BaseTestCase): self.assertFalse(self.lbm._bridge_exists_and_ensure_up("br0")) def test_ensure_bridge(self): + bridge_device = mock.Mock() + bridge_device_old = mock.Mock() with mock.patch.object(self.lbm, '_bridge_exists_and_ensure_up') as de_fn,\ - mock.patch.object(utils, 'execute') as exec_fn,\ + mock.patch.object(bridge_lib, "BridgeDevice", + return_value=bridge_device_old) as br_fn, \ mock.patch.object(self.lbm, 'update_interface_ip_details') as upd_fn,\ mock.patch.object(self.lbm, @@ -588,7 +592,10 @@ class TestLinuxBridgeManager(base.BaseTestCase): mock.patch.object(self.lbm, 'get_bridge_for_tap_device') as get_if_br_fn: de_fn.return_value = False - exec_fn.return_value = False + br_fn.addbr.return_value = bridge_device + bridge_device.setfd.return_value = False + bridge_device.disable_stp.return_value = False + bridge_device.link.set_up.return_value = False self.assertEqual(self.lbm.ensure_bridge("br0", None), "br0") ie_fn.return_Value = False self.lbm.ensure_bridge("br0", "eth0") @@ -599,24 +606,17 @@ class TestLinuxBridgeManager(base.BaseTestCase): upd_fn.assert_called_with("br0", "eth0", "ips", "gateway") ie_fn.assert_called_with("br0", "eth0") - exec_fn.side_effect = Exception() de_fn.return_value = True + bridge_device.delif.side_effect = Exception() self.lbm.ensure_bridge("br0", "eth0") ie_fn.assert_called_with("br0", "eth0") - exec_fn.reset_mock() - exec_fn.side_effect = None de_fn.return_value = True ie_fn.return_value = False get_if_br_fn.return_value = "br1" self.lbm.ensure_bridge("br0", "eth0") - expected = [ - mock.call(['brctl', 'delif', 'br1', 'eth0'], - run_as_root=True), - mock.call(['brctl', 'addif', 'br0', 'eth0'], - run_as_root=True), - ] - exec_fn.assert_has_calls(expected) + bridge_device_old.delif.assert_called_once_with('eth0') + br_fn.return_value.addif.assert_called_once_with('eth0') def 
test_ensure_physical_in_bridge(self): self.assertFalse( @@ -653,11 +653,13 @@ class TestLinuxBridgeManager(base.BaseTestCase): ) de_fn.return_value = True + bridge_device = mock.Mock() with mock.patch.object(self.lbm, "ensure_local_bridge") as en_fn,\ - mock.patch.object(utils, "execute") as exec_fn,\ + mock.patch.object(bridge_lib, "BridgeDevice", + return_value=bridge_device), \ mock.patch.object(self.lbm, "get_bridge_for_tap_device") as get_br: - exec_fn.return_value = False + bridge_device.addif.retun_value = False get_br.return_value = True self.assertTrue(self.lbm.add_tap_interface("123", p_const.TYPE_LOCAL, @@ -666,7 +668,7 @@ class TestLinuxBridgeManager(base.BaseTestCase): en_fn.assert_called_with("123") get_br.return_value = False - exec_fn.return_value = True + bridge_device.addif.retun_value = True self.assertFalse(self.lbm.add_tap_interface("123", p_const.TYPE_LOCAL, "physnet1", None, @@ -698,6 +700,7 @@ class TestLinuxBridgeManager(base.BaseTestCase): "1", "tap234") def test_delete_vlan_bridge(self): + bridge_device = mock.Mock() with mock.patch.object(ip_lib, "device_exists") as de_fn,\ mock.patch.object(self.lbm, "get_interfaces_on_bridge") as getif_fn,\ @@ -707,7 +710,8 @@ class TestLinuxBridgeManager(base.BaseTestCase): mock.patch.object(self.lbm, "update_interface_ip_details") as updif_fn,\ mock.patch.object(self.lbm, "delete_vxlan") as del_vxlan,\ - mock.patch.object(utils, "execute") as exec_fn: + mock.patch.object(bridge_lib, "BridgeDevice", + return_value=bridge_device): de_fn.return_value = False self.lbm.delete_vlan_bridge("br0") self.assertFalse(getif_fn.called) @@ -715,12 +719,13 @@ class TestLinuxBridgeManager(base.BaseTestCase): de_fn.return_value = True getif_fn.return_value = ["eth0", "eth1", "vxlan-1002"] if_det_fn.return_value = ("ips", "gateway") - exec_fn.return_value = False + bridge_device.link.set_down.return_value = False self.lbm.delete_vlan_bridge("br0") updif_fn.assert_called_with("eth1", "br0", "ips", "gateway") 
del_vxlan.assert_called_with("vxlan-1002") def test_delete_vlan_bridge_with_ip(self): + bridge_device = mock.Mock() with mock.patch.object(ip_lib, "device_exists") as de_fn,\ mock.patch.object(self.lbm, "get_interfaces_on_bridge") as getif_fn,\ @@ -730,16 +735,18 @@ class TestLinuxBridgeManager(base.BaseTestCase): mock.patch.object(self.lbm, "update_interface_ip_details") as updif_fn,\ mock.patch.object(self.lbm, "delete_vlan") as del_vlan,\ - mock.patch.object(utils, "execute") as exec_fn: + mock.patch.object(bridge_lib, "BridgeDevice", + return_value=bridge_device): de_fn.return_value = True getif_fn.return_value = ["eth0", "eth1.1"] if_det_fn.return_value = ("ips", "gateway") - exec_fn.return_value = False + bridge_device.link.set_down.return_value = False self.lbm.delete_vlan_bridge("br0") updif_fn.assert_called_with("eth1.1", "br0", "ips", "gateway") self.assertFalse(del_vlan.called) def test_delete_vlan_bridge_no_ip(self): + bridge_device = mock.Mock() with mock.patch.object(ip_lib, "device_exists") as de_fn,\ mock.patch.object(self.lbm, "get_interfaces_on_bridge") as getif_fn,\ @@ -749,10 +756,11 @@ class TestLinuxBridgeManager(base.BaseTestCase): mock.patch.object(self.lbm, "update_interface_ip_details") as updif_fn,\ mock.patch.object(self.lbm, "delete_vlan") as del_vlan,\ - mock.patch.object(utils, "execute") as exec_fn: + mock.patch.object(bridge_lib, "BridgeDevice", + return_value=bridge_device): de_fn.return_value = True getif_fn.return_value = ["eth0", "eth1.1"] - exec_fn.return_value = False + bridge_device.link.set_down.return_value = False if_det_fn.return_value = ([], None) self.lbm.delete_vlan_bridge("br0") del_vlan.assert_called_with("eth1.1") @@ -765,19 +773,21 @@ class TestLinuxBridgeManager(base.BaseTestCase): lbm = linuxbridge_neutron_agent.LinuxBridgeManager( interface_mappings) + bridge_device = mock.Mock() with mock.patch.object(ip_lib, "device_exists") as de_fn,\ mock.patch.object(lbm, "get_interfaces_on_bridge") as getif_fn,\ 
mock.patch.object(lbm, "remove_interface"),\ mock.patch.object(lbm, "delete_vxlan") as del_vxlan,\ - mock.patch.object(utils, "execute") as exec_fn: + mock.patch.object(bridge_lib, "BridgeDevice", + return_value=bridge_device): de_fn.return_value = False lbm.delete_vlan_bridge("br0") self.assertFalse(getif_fn.called) de_fn.return_value = True getif_fn.return_value = ["vxlan-1002"] - exec_fn.return_value = False + bridge_device.link.set_down.return_value = False lbm.delete_vlan_bridge("br0") del_vxlan.assert_called_with("vxlan-1002") @@ -795,10 +805,12 @@ class TestLinuxBridgeManager(base.BaseTestCase): del_br_fn.assert_called_once_with('brqnet1') def test_remove_interface(self): + bridge_device = mock.Mock() with mock.patch.object(ip_lib, "device_exists") as de_fn,\ mock.patch.object(self.lbm, "is_device_on_bridge") as isdev_fn,\ - mock.patch.object(utils, "execute") as exec_fn: + mock.patch.object(bridge_lib, "BridgeDevice", + return_value=bridge_device): de_fn.return_value = False self.assertFalse(self.lbm.remove_interface("br0", "eth0")) self.assertFalse(isdev_fn.called) @@ -808,10 +820,10 @@ class TestLinuxBridgeManager(base.BaseTestCase): self.assertTrue(self.lbm.remove_interface("br0", "eth0")) isdev_fn.return_value = True - exec_fn.return_value = True + bridge_device.delif.return_value = True self.assertFalse(self.lbm.remove_interface("br0", "eth0")) - exec_fn.return_value = False + bridge_device.delif.return_value = False self.assertTrue(self.lbm.remove_interface("br0", "eth0")) def test_delete_vlan(self): @@ -822,7 +834,7 @@ class TestLinuxBridgeManager(base.BaseTestCase): self.assertFalse(exec_fn.called) de_fn.return_value = True - exec_fn.return_value = False + exec_fn.return_value = True self.lbm.delete_vlan("eth1.1") self.assertTrue(exec_fn.called) diff --git a/neutron/plugins/ml2/drivers/cisco/n1kv/extensions/__init__.py b/neutron/tests/unit/plugins/ml2/drivers/mech_sriov/__init__.py similarity index 100% rename from 
neutron/plugins/ml2/drivers/cisco/n1kv/extensions/__init__.py rename to neutron/tests/unit/plugins/ml2/drivers/mech_sriov/__init__.py diff --git a/neutron/tests/unit/plugins/ml2/drivers/mech_sriov/agent/test_pci_lib.py b/neutron/tests/unit/plugins/ml2/drivers/mech_sriov/agent/test_pci_lib.py index 62a10f0fba0..98c02548438 100644 --- a/neutron/tests/unit/plugins/ml2/drivers/mech_sriov/agent/test_pci_lib.py +++ b/neutron/tests/unit/plugins/ml2/drivers/mech_sriov/agent/test_pci_lib.py @@ -50,51 +50,51 @@ class TestPciLib(base.BaseTestCase): def test_get_assigned_macs(self): with mock.patch.object(self.pci_wrapper, - "_execute") as mock_exec: - mock_exec.return_value = self.VF_LINK_SHOW + "_as_root") as mock_as_root: + mock_as_root.return_value = self.VF_LINK_SHOW result = self.pci_wrapper.get_assigned_macs([self.VF_INDEX]) self.assertEqual([self.MAC_MAPPING[self.VF_INDEX]], result) def test_get_assigned_macs_fail(self): with mock.patch.object(self.pci_wrapper, - "_execute") as mock_exec: - mock_exec.side_effect = Exception() + "_as_root") as mock_as_root: + mock_as_root.side_effect = Exception() self.assertRaises(exc.IpCommandError, self.pci_wrapper.get_assigned_macs, [self.VF_INDEX]) def test_get_vf_state_enable(self): with mock.patch.object(self.pci_wrapper, - "_execute") as mock_exec: - mock_exec.return_value = self.VF_LINK_SHOW + "_as_root") as mock_as_root: + mock_as_root.return_value = self.VF_LINK_SHOW result = self.pci_wrapper.get_vf_state(self.VF_INDEX) self.assertTrue(result) def test_get_vf_state_disable(self): with mock.patch.object(self.pci_wrapper, - "_execute") as mock_exec: - mock_exec.return_value = self.VF_LINK_SHOW + "_as_root") as mock_as_root: + mock_as_root.return_value = self.VF_LINK_SHOW result = self.pci_wrapper.get_vf_state(self.VF_INDEX_DISABLE) self.assertFalse(result) def test_get_vf_state_fail(self): with mock.patch.object(self.pci_wrapper, - "_execute") as mock_exec: - mock_exec.side_effect = Exception() + "_as_root") as mock_as_root: + 
mock_as_root.side_effect = Exception() self.assertRaises(exc.IpCommandError, self.pci_wrapper.get_vf_state, self.VF_INDEX) def test_set_vf_state(self): - with mock.patch.object(self.pci_wrapper, "_execute"): + with mock.patch.object(self.pci_wrapper, "_as_root"): result = self.pci_wrapper.set_vf_state(self.VF_INDEX, True) self.assertIsNone(result) def test_set_vf_state_fail(self): with mock.patch.object(self.pci_wrapper, - "_execute") as mock_exec: - mock_exec.side_effect = Exception() + "_as_root") as mock_as_root: + mock_as_root.side_effect = Exception() self.assertRaises(exc.IpCommandError, self.pci_wrapper.set_vf_state, self.VF_INDEX, diff --git a/neutron/tests/unit/plugins/ml2/drivers/openvswitch/agent/openflow/ovs_ofctl/ovs_bridge_test_base.py b/neutron/tests/unit/plugins/ml2/drivers/openvswitch/agent/openflow/ovs_ofctl/ovs_bridge_test_base.py index fabf698a818..ad9de289fc3 100644 --- a/neutron/tests/unit/plugins/ml2/drivers/openvswitch/agent/openflow/ovs_ofctl/ovs_bridge_test_base.py +++ b/neutron/tests/unit/plugins/ml2/drivers/openvswitch/agent/openflow/ovs_ofctl/ovs_bridge_test_base.py @@ -16,6 +16,8 @@ import mock +from neutron.common import constants + from neutron.tests.unit.plugins.ml2.drivers.openvswitch.agent \ import ovs_test_base @@ -112,7 +114,8 @@ class OVSDVRProcessTestMixin(object): expected = [ call.add_flow(table=self.dvr_process_table_id, proto='icmp6', dl_src=gateway_mac, actions='drop', - priority=3, dl_vlan=vlan_tag), + priority=3, dl_vlan=vlan_tag, + icmp_type=constants.ICMPV6_TYPE_RA), ] self.assertEqual(expected, self.mock.mock_calls) @@ -124,7 +127,8 @@ class OVSDVRProcessTestMixin(object): expected = [ call.delete_flows(table=self.dvr_process_table_id, dl_vlan=vlan_tag, dl_src=gateway_mac, - proto='icmp6'), + proto='icmp6', + icmp_type=constants.ICMPV6_TYPE_RA), ] self.assertEqual(expected, self.mock.mock_calls) diff --git a/neutron/tests/unit/plugins/ml2/drivers/openvswitch/agent/test_ovs_neutron_agent.py 
b/neutron/tests/unit/plugins/ml2/drivers/openvswitch/agent/test_ovs_neutron_agent.py index 19bcd520d99..9aaa3132f19 100644 --- a/neutron/tests/unit/plugins/ml2/drivers/openvswitch/agent/test_ovs_neutron_agent.py +++ b/neutron/tests/unit/plugins/ml2/drivers/openvswitch/agent/test_ovs_neutron_agent.py @@ -322,16 +322,30 @@ class TestOvsNeutronAgent(object): vif_port_set, registered_ports, port_tags_dict=port_tags_dict) self.assertEqual(expected, actual) - def test_treat_devices_added_returns_raises_for_missing_device(self): - with mock.patch.object(self.agent.plugin_rpc, - 'get_devices_details_list', - side_effect=Exception()),\ - mock.patch.object(self.agent.int_br, - 'get_vif_port_by_id', - return_value=mock.Mock()): - self.assertRaises( - self.mod_agent.DeviceListRetrievalError, - self.agent.treat_devices_added_or_updated, [{}], False) + def test_bind_devices(self): + devices_up = ['tap1'] + devices_down = ['tap2'] + self.agent.local_vlan_map["net1"] = mock.Mock() + port_details = [ + {'network_id': 'net1', 'vif_port': mock.Mock(), + 'device': devices_up[0], + 'admin_state_up': True}, + {'network_id': 'net1', 'vif_port': mock.Mock(), + 'device': devices_down[0], + 'admin_state_up': False}] + with mock.patch.object( + self.agent.plugin_rpc, 'update_device_list', + return_value={'devices_up': devices_up, + 'devices_down': devices_down, + 'failed_devices_up': [], + 'failed_devices_down': []}) as update_devices, \ + mock.patch.object(self.agent, + 'int_br') as int_br: + int_br.db_list.return_value = [] + self.agent._bind_devices(port_details) + update_devices.assert_called_once_with(mock.ANY, devices_up, + devices_down, + mock.ANY, mock.ANY) def _mock_treat_devices_added_updated(self, details, port, func_name): """Mock treat devices added or updated. 
@@ -342,11 +356,17 @@ class TestOvsNeutronAgent(object): :returns: whether the named function was called """ with mock.patch.object(self.agent.plugin_rpc, - 'get_devices_details_list', - return_value=[details]),\ + 'get_devices_details_list_and_failed_devices', + return_value={'devices': [details], + 'failed_devices': None}),\ mock.patch.object(self.agent.int_br, 'get_vifs_by_ids', return_value={details['device']: port}),\ + mock.patch.object(self.agent.plugin_rpc, 'update_device_list', + return_value={'devices_up': [], + 'devices_down': details, + 'failed_devices_up': [], + 'failed_devices_down': []}),\ mock.patch.object(self.agent, func_name) as func: skip_devs, need_bound_devices = ( self.agent.treat_devices_added_or_updated([{}], False)) @@ -367,8 +387,9 @@ class TestOvsNeutronAgent(object): mock.MagicMock(), port, 'port_dead')) def test_treat_devices_added_does_not_process_missing_port(self): - with mock.patch.object(self.agent.plugin_rpc, - 'get_device_details') as get_dev_fn,\ + with mock.patch.object( + self.agent.plugin_rpc, + 'get_devices_details_list_and_failed_devices') as get_dev_fn,\ mock.patch.object(self.agent.int_br, 'get_vif_port_by_id', return_value=None): @@ -384,8 +405,9 @@ class TestOvsNeutronAgent(object): dev_mock = mock.MagicMock() dev_mock.__getitem__.return_value = 'the_skipped_one' with mock.patch.object(self.agent.plugin_rpc, - 'get_devices_details_list', - return_value=[dev_mock]),\ + 'get_devices_details_list_and_failed_devices', + return_value={'devices': [dev_mock], + 'failed_devices': None}),\ mock.patch.object(self.agent.int_br, 'get_vifs_by_ids', return_value={}),\ @@ -411,8 +433,9 @@ class TestOvsNeutronAgent(object): } with mock.patch.object(self.agent.plugin_rpc, - 'get_devices_details_list', - return_value=[fake_details_dict]),\ + 'get_devices_details_list_and_failed_devices', + return_value={'devices': [fake_details_dict], + 'failed_devices': None}),\ mock.patch.object(self.agent.int_br, 'get_vifs_by_ids', 
return_value={'xxx': mock.MagicMock()}),\ @@ -424,15 +447,14 @@ class TestOvsNeutronAgent(object): self.assertFalse(skip_devs) self.assertTrue(treat_vif_port.called) - def test_treat_devices_removed_returns_true_for_missing_device(self): - with mock.patch.object(self.agent.plugin_rpc, 'update_device_down', - side_effect=Exception()): - self.assertTrue(self.agent.treat_devices_removed([{}])) - def _mock_treat_devices_removed(self, port_exists): details = dict(exists=port_exists) - with mock.patch.object(self.agent.plugin_rpc, 'update_device_down', - return_value=details): + with mock.patch.object(self.agent.plugin_rpc, + 'update_device_list', + return_value={'devices_up': [], + 'devices_down': details, + 'failed_devices_up': [], + 'failed_devices_down': []}): with mock.patch.object(self.agent, 'port_unbound') as port_unbound: self.assertFalse(self.agent.treat_devices_removed([{}])) self.assertTrue(port_unbound.called) @@ -1046,7 +1068,11 @@ class TestOvsNeutronAgent(object): 'physical_network', 'segmentation_id', 'admin_state_up', 'fixed_ips', 'device', 'device_owner')}] - self.agent.plugin_rpc.get_devices_details_list.return_value = plist + (self.agent.plugin_rpc.get_devices_details_list_and_failed_devices. 
+ return_value) = {'devices': plist, 'failed_devices': []} + self.agent.plugin_rpc.update_device_list.return_value = { + 'devices_up': plist, 'devices_down': [], 'failed_devices_up': [], + 'failed_devices_down': []} self.agent.setup_arp_spoofing_protection = mock.Mock() self.agent.treat_devices_added_or_updated([], False) self.assertFalse(self.agent.setup_arp_spoofing_protection.called) @@ -1664,9 +1690,12 @@ class TestOvsDvrNeutronAgent(object): int_br.reset_mock() tun_br.reset_mock() with mock.patch.object(self.agent, 'reclaim_local_vlan'),\ - mock.patch.object(self.agent.plugin_rpc, - 'update_device_down', - return_value=None),\ + mock.patch.object(self.agent.plugin_rpc, 'update_device_list', + return_value={ + 'devices_up': [], + 'devices_down': [self._port.vif_id], + 'failed_devices_up': [], + 'failed_devices_down': []}),\ mock.patch.object(self.agent, 'int_br', new=int_br),\ mock.patch.object(self.agent, 'tun_br', new=tun_br),\ mock.patch.object(self.agent.dvr_agent, 'int_br', new=int_br),\ @@ -1766,9 +1795,13 @@ class TestOvsDvrNeutronAgent(object): int_br.reset_mock() tun_br.reset_mock() with mock.patch.object(self.agent, 'reclaim_local_vlan'),\ - mock.patch.object(self.agent.plugin_rpc, - 'update_device_down', - return_value=None),\ + mock.patch.object(self.agent.plugin_rpc, 'update_device_list', + return_value={ + 'devices_up': [], + 'devices_down': [ + self._compute_port.vif_id], + 'failed_devices_up': [], + 'failed_devices_down': []}),\ mock.patch.object(self.agent, 'int_br', new=int_br),\ mock.patch.object(self.agent, 'tun_br', new=tun_br),\ mock.patch.object(self.agent.dvr_agent, 'int_br', new=int_br),\ @@ -1853,9 +1886,12 @@ class TestOvsDvrNeutronAgent(object): int_br.reset_mock() tun_br.reset_mock() with mock.patch.object(self.agent, 'reclaim_local_vlan'),\ - mock.patch.object(self.agent.plugin_rpc, - 'update_device_down', - return_value=None),\ + mock.patch.object(self.agent.plugin_rpc, 'update_device_list', + return_value={ + 'devices_up': [], + 
'devices_down': [self._port.vif_id], + 'failed_devices_up': [], + 'failed_devices_down': []}),\ mock.patch.object(self.agent, 'int_br', new=int_br),\ mock.patch.object(self.agent, 'tun_br', new=tun_br),\ mock.patch.object(self.agent.dvr_agent, 'int_br', new=int_br),\ diff --git a/neutron/tests/unit/plugins/ml2/drivers/test_type_gre.py b/neutron/tests/unit/plugins/ml2/drivers/test_type_gre.py index ec4d342012b..0471c68ec41 100644 --- a/neutron/tests/unit/plugins/ml2/drivers/test_type_gre.py +++ b/neutron/tests/unit/plugins/ml2/drivers/test_type_gre.py @@ -13,13 +13,6 @@ # License for the specific language governing permissions and limitations # under the License. -import mock - -from oslo_db import exception as db_exc -from sqlalchemy.orm import exc as sa_exc -import testtools - -from neutron.db import api as db_api from neutron.plugins.common import constants as p_const from neutron.plugins.ml2 import config from neutron.plugins.ml2.drivers import type_gre @@ -62,32 +55,6 @@ class GreTypeTest(base_type_tunnel.TunnelTypeTestMixin, elif endpoint['ip_address'] == base_type_tunnel.TUNNEL_IP_TWO: self.assertEqual(base_type_tunnel.HOST_TWO, endpoint['host']) - def test_sync_allocations_entry_added_during_session(self): - with mock.patch.object(self.driver, '_add_allocation', - side_effect=db_exc.DBDuplicateEntry) as ( - mock_add_allocation): - self.driver.sync_allocations() - self.assertTrue(mock_add_allocation.called) - - def test__add_allocation_not_existing(self): - session = db_api.get_session() - _add_allocation(session, gre_id=1) - self.driver._add_allocation(session, {1, 2}) - _get_allocation(session, 2) - - def test__add_allocation_existing_allocated_is_kept(self): - session = db_api.get_session() - _add_allocation(session, gre_id=1, allocated=True) - self.driver._add_allocation(session, {2}) - _get_allocation(session, 1) - - def test__add_allocation_existing_not_allocated_is_removed(self): - session = db_api.get_session() - _add_allocation(session, gre_id=1) - 
self.driver._add_allocation(session, {2}) - with testtools.ExpectedException(sa_exc.NoResultFound): - _get_allocation(session, 1) - def test_get_mtu(self): config.cfg.CONF.set_override('segment_mtu', 1500, group='ml2') config.cfg.CONF.set_override('path_mtu', 1475, group='ml2') diff --git a/neutron/tests/unit/plugins/ml2/test_ext_portsecurity.py b/neutron/tests/unit/plugins/ml2/test_ext_portsecurity.py index 0def93842e3..e6ea22e81fe 100644 --- a/neutron/tests/unit/plugins/ml2/test_ext_portsecurity.py +++ b/neutron/tests/unit/plugins/ml2/test_ext_portsecurity.py @@ -13,7 +13,9 @@ # License for the specific language governing permissions and limitations # under the License. +from neutron import context from neutron.extensions import portsecurity as psec +from neutron import manager from neutron.plugins.ml2 import config from neutron.tests.unit.extensions import test_portsecurity as test_psec from neutron.tests.unit.plugins.ml2 import test_plugin @@ -29,6 +31,25 @@ class PSExtDriverTestCase(test_plugin.Ml2PluginV2TestCase, group='ml2') super(PSExtDriverTestCase, self).setUp() + def test_create_net_port_security_default(self): + _core_plugin = manager.NeutronManager.get_plugin() + admin_ctx = context.get_admin_context() + _default_value = (psec.EXTENDED_ATTRIBUTES_2_0['networks'] + [psec.PORTSECURITY]['default']) + args = {'network': + {'name': 'test', + 'tenant_id': '', + 'shared': False, + 'admin_state_up': True, + 'status': 'ACTIVE'}} + try: + network = _core_plugin.create_network(admin_ctx, args) + _value = network[psec.PORTSECURITY] + finally: + if network: + _core_plugin.delete_network(admin_ctx, network['id']) + self.assertEqual(_default_value, _value) + def test_create_port_with_secgroup_none_and_port_security_false(self): if self._skip_security_group: self.skipTest("Plugin does not support security groups") diff --git a/neutron/tests/unit/plugins/ml2/test_plugin.py b/neutron/tests/unit/plugins/ml2/test_plugin.py index 7f8fe7dbfe7..7766e6cfba7 100644 --- 
a/neutron/tests/unit/plugins/ml2/test_plugin.py +++ b/neutron/tests/unit/plugins/ml2/test_plugin.py @@ -1489,10 +1489,7 @@ class TestFaultyMechansimDriver(Ml2PluginV2FaultyDriverTestCase): data = {'port': {'name': new_name}} req = self.new_update_request('ports', data, port_id) res = req.get_response(self.api) - self.assertEqual(500, res.status_int) - error = self.deserialize(self.fmt, res) - self.assertEqual('MechanismDriverError', - error['NeutronError']['type']) + self.assertEqual(200, res.status_int) # Test if other mechanism driver was called self.assertTrue(upp.called) port = self._show('ports', port_id) @@ -1500,6 +1497,56 @@ class TestFaultyMechansimDriver(Ml2PluginV2FaultyDriverTestCase): self._delete('ports', port['port']['id']) + def test_update_dvr_router_interface_port(self): + """Test validate dvr router interface update succeeds.""" + host_id = 'host' + binding = models.DVRPortBinding( + port_id='port_id', + host=host_id, + router_id='old_router_id', + vif_type=portbindings.VIF_TYPE_OVS, + vnic_type=portbindings.VNIC_NORMAL, + status=constants.PORT_STATUS_DOWN) + with mock.patch.object( + mech_test.TestMechanismDriver, + 'update_port_postcommit', + side_effect=ml2_exc.MechanismDriverError) as port_post,\ + mock.patch.object( + mech_test.TestMechanismDriver, + 'update_port_precommit') as port_pre,\ + mock.patch.object(ml2_db, + 'get_dvr_port_bindings') as dvr_bindings: + dvr_bindings.return_value = [binding] + port_pre.return_value = True + with self.network() as network: + with self.subnet(network=network) as subnet: + subnet_id = subnet['subnet']['id'] + data = {'port': { + 'network_id': network['network']['id'], + 'tenant_id': + network['network']['tenant_id'], + 'name': 'port1', + 'device_owner': + 'network:router_interface_distributed', + 'admin_state_up': 1, + 'fixed_ips': + [{'subnet_id': subnet_id}]}} + port_req = self.new_create_request('ports', data) + port_res = port_req.get_response(self.api) + self.assertEqual(201, port_res.status_int) + 
port = self.deserialize(self.fmt, port_res) + port_id = port['port']['id'] + new_name = 'a_brand_new_name' + data = {'port': {'name': new_name}} + req = self.new_update_request('ports', data, port_id) + res = req.get_response(self.api) + self.assertEqual(200, res.status_int) + self.assertTrue(dvr_bindings.called) + self.assertTrue(port_pre.called) + self.assertTrue(port_post.called) + port = self._show('ports', port_id) + self.assertEqual(new_name, port['port']['name']) + class TestMl2PluginCreateUpdateDeletePort(base.BaseTestCase): def setUp(self): diff --git a/neutron/tests/unit/plugins/ml2/test_rpc.py b/neutron/tests/unit/plugins/ml2/test_rpc.py index f0e1a360322..72775b9fe80 100644 --- a/neutron/tests/unit/plugins/ml2/test_rpc.py +++ b/neutron/tests/unit/plugins/ml2/test_rpc.py @@ -22,6 +22,7 @@ import collections import mock from oslo_config import cfg from oslo_context import context as oslo_context +import oslo_messaging from sqlalchemy.orm import exc from neutron.agent import rpc as agent_rpc @@ -134,27 +135,53 @@ class RpcCallbacksTestCase(base.BaseTestCase): self.callbacks.get_device_details(mock.Mock()) self.assertTrue(self.plugin.update_port_status.called) - def test_get_devices_details_list(self): + def _test_get_devices_list(self, callback, side_effect, expected): devices = [1, 2, 3, 4, 5] kwargs = {'host': 'fake_host', 'agent_id': 'fake_agent_id'} with mock.patch.object(self.callbacks, 'get_device_details', - side_effect=devices) as f: - res = self.callbacks.get_devices_details_list('fake_context', - devices=devices, - **kwargs) - self.assertEqual(devices, res) + side_effect=side_effect) as f: + res = callback('fake_context', devices=devices, **kwargs) + self.assertEqual(expected, res) self.assertEqual(len(devices), f.call_count) calls = [mock.call('fake_context', device=i, cached_networks={}, **kwargs) for i in devices] f.assert_has_calls(calls) + def test_get_devices_details_list(self): + devices = [1, 2, 3, 4, 5] + expected = devices + callback = 
self.callbacks.get_devices_details_list + self._test_get_devices_list(callback, devices, expected) + def test_get_devices_details_list_with_empty_devices(self): with mock.patch.object(self.callbacks, 'get_device_details') as f: res = self.callbacks.get_devices_details_list('fake_context') self.assertFalse(f.called) self.assertEqual([], res) + def test_get_devices_details_list_and_failed_devices(self): + devices = [1, 2, 3, 4, 5] + expected = {'devices': devices, 'failed_devices': []} + callback = ( + self.callbacks.get_devices_details_list_and_failed_devices) + self._test_get_devices_list(callback, devices, expected) + + def test_get_devices_details_list_and_failed_devices_failures(self): + devices = [1, Exception('testdevice'), 3, + Exception('testdevice'), 5] + expected = {'devices': [1, 3, 5], 'failed_devices': [2, 4]} + callback = ( + self.callbacks.get_devices_details_list_and_failed_devices) + self._test_get_devices_list(callback, devices, expected) + + def test_get_devices_details_list_and_failed_devices_empty_dev(self): + with mock.patch.object(self.callbacks, 'get_device_details') as f: + res = self.callbacks.get_devices_details_list_and_failed_devices( + 'fake_context') + self.assertFalse(f.called) + self.assertEqual({'devices': [], 'failed_devices': []}, res) + def _test_update_device_not_bound_to_host(self, func): self.plugin.port_bound_to_host.return_value = False self.plugin._device_to_port_id.return_value = 'fake_port_id' @@ -192,6 +219,64 @@ class RpcCallbacksTestCase(base.BaseTestCase): self.callbacks.update_device_down( mock.Mock(), device='fake_device')) + def _test_update_device_list(self, devices_up_side_effect, + devices_down_side_effect, expected): + devices_up = [1, 2, 3] + devices_down = [4, 5] + kwargs = {'host': 'fake_host', 'agent_id': 'fake_agent_id'} + with mock.patch.object(self.callbacks, 'update_device_up', + side_effect=devices_up_side_effect) as f_up, \ + mock.patch.object(self.callbacks, 'update_device_down', + 
side_effect=devices_down_side_effect) as f_down: + res = self.callbacks.update_device_list( + 'fake_context', devices_up=devices_up, + devices_down=devices_down, **kwargs) + self.assertEqual(expected, res) + self.assertEqual(len(devices_up), f_up.call_count) + self.assertEqual(len(devices_down), f_down.call_count) + + def test_update_device_list_no_failure(self): + devices_up_side_effect = [1, 2, 3] + devices_down_side_effect = [ + {'device': 4, 'exists': True}, + {'device': 5, 'exists': True}] + expected = {'devices_up': devices_up_side_effect, + 'failed_devices_up': [], + 'devices_down': + [{'device': 4, 'exists': True}, + {'device': 5, 'exists': True}], + 'failed_devices_down': []} + self._test_update_device_list(devices_up_side_effect, + devices_down_side_effect, + expected) + + def test_update_device_list_failed_devices(self): + + devices_up_side_effect = [1, Exception('testdevice'), 3] + devices_down_side_effect = [{'device': 4, 'exists': True}, + Exception('testdevice')] + expected = {'devices_up': [1, 3], + 'failed_devices_up': [2], + 'devices_down': + [{'device': 4, 'exists': True}], + 'failed_devices_down': [5]} + + self._test_update_device_list(devices_up_side_effect, + devices_down_side_effect, + expected) + + def test_update_device_list_empty_devices(self): + + expected = {'devices_up': [], + 'failed_devices_up': [], + 'devices_down': [], + 'failed_devices_down': []} + + kwargs = {'host': 'fake_host', 'agent_id': 'fake_agent_id'} + res = self.callbacks.update_device_list( + 'fake_context', devices_up=[], devices_down=[], **kwargs) + self.assertEqual(expected, res) + class RpcApiTestCase(base.BaseTestCase): @@ -314,3 +399,73 @@ class RpcApiTestCase(base.BaseTestCase): device='fake_device', agent_id='fake_agent_id', host='fake_host') + + def test_update_device_list(self): + rpcapi = agent_rpc.PluginApi(topics.PLUGIN) + self._test_rpc_api(rpcapi, None, + 'update_device_list', rpc_method='call', + devices_up=['fake_device1', 'fake_device2'], + 
devices_down=['fake_device3', 'fake_device4'], + agent_id='fake_agent_id', + host='fake_host', + version='1.5') + + def test_update_device_list_unsupported(self): + rpcapi = agent_rpc.PluginApi(topics.PLUGIN) + ctxt = oslo_context.RequestContext('fake_user', 'fake_project') + devices_up = ['fake_device1', 'fake_device2'] + devices_down = ['fake_device3', 'fake_device4'] + expected_ret_val = {'devices_up': ['fake_device2'], + 'failed_devices_up': ['fake_device1'], + 'devices_down': [ + {'device': 'fake_device3', 'exists': True}], + 'failed_devices_down': ['fake_device4']} + rpcapi.update_device_up = mock.Mock( + side_effect=[Exception('fake_device1 fails'), None]) + rpcapi.update_device_down = mock.Mock( + side_effect=[{'device': 'fake_device3', 'exists': True}, + Exception('fake_device4 fails')]) + with mock.patch.object(rpcapi.client, 'call'),\ + mock.patch.object(rpcapi.client, 'prepare') as prepare_mock: + prepare_mock.side_effect = oslo_messaging.UnsupportedVersion( + 'test') + res = rpcapi.update_device_list(ctxt, devices_up, devices_down, + 'fake_agent_id', 'fake_host') + self.assertEqual(expected_ret_val, res) + + def test_get_devices_details_list_and_failed_devices(self): + rpcapi = agent_rpc.PluginApi(topics.PLUGIN) + self._test_rpc_api(rpcapi, None, + 'get_devices_details_list_and_failed_devices', + rpc_method='call', + devices=['fake_device1', 'fake_device2'], + agent_id='fake_agent_id', + host='fake_host', + version='1.5') + + def test_devices_details_list_and_failed_devices(self): + rpcapi = agent_rpc.PluginApi(topics.PLUGIN) + self._test_rpc_api(rpcapi, None, + 'get_devices_details_list_and_failed_devices', + rpc_method='call', + devices=['fake_device1', 'fake_device2'], + agent_id='fake_agent_id', host='fake_host', + version='1.5') + + def test_get_devices_details_list_and_failed_devices_unsupported(self): + rpcapi = agent_rpc.PluginApi(topics.PLUGIN) + ctxt = oslo_context.RequestContext('fake_user', 'fake_project') + devices = ['fake_device1', 
'fake_device2'] + dev2_details = {'device': 'fake_device2', 'network_id': 'net_id', + 'port_id': 'port_id', 'admin_state_up': True} + expected_ret_val = {'devices': [dev2_details], + 'failed_devices': ['fake_device1']} + rpcapi.get_device_details = mock.Mock( + side_effect=[Exception('fake_device1 fails'), dev2_details]) + with mock.patch.object(rpcapi.client, 'call'),\ + mock.patch.object(rpcapi.client, 'prepare') as prepare_mock: + prepare_mock.side_effect = oslo_messaging.UnsupportedVersion( + 'test') + res = rpcapi.get_devices_details_list_and_failed_devices( + ctxt, devices, 'fake_agent_id', 'fake_host') + self.assertEqual(expected_ret_val, res) diff --git a/neutron/tests/unit/plugins/ml2/test_security_group.py b/neutron/tests/unit/plugins/ml2/test_security_group.py index 897cadf58aa..a9b92201371 100644 --- a/neutron/tests/unit/plugins/ml2/test_security_group.py +++ b/neutron/tests/unit/plugins/ml2/test_security_group.py @@ -117,7 +117,7 @@ class TestMl2SecurityGroups(Ml2SecurityGroupsTestCase, plugin.get_ports_from_devices(self.ctx, ['%s%s' % (const.TAP_DEVICE_PREFIX, i) for i in range(ports_to_query)]) - all_call_args = map(lambda x: x[1][1], get_mock.mock_calls) + all_call_args = [x[1][1] for x in get_mock.mock_calls] last_call_args = all_call_args.pop() # all but last should be getting MAX_PORTS_PER_QUERY ports self.assertTrue( diff --git a/neutron/tests/unit/plugins/opencontrail/test_contrail_plugin.py b/neutron/tests/unit/plugins/opencontrail/test_contrail_plugin.py index 17df9fcab35..b5ca8d18e1d 100644 --- a/neutron/tests/unit/plugins/opencontrail/test_contrail_plugin.py +++ b/neutron/tests/unit/plugins/opencontrail/test_contrail_plugin.py @@ -193,6 +193,8 @@ class KeyStoneInfo(object): class ContrailPluginTestCase(test_plugin.NeutronDbPluginV2TestCase): _plugin_name = ('%s.NeutronPluginContrailCoreV2' % CONTRAIL_PKG_PATH) + _fetch = ('neutron.ipam.drivers.neutrondb_ipam.driver.NeutronDbSubnet' + '._fetch_subnet') def setUp(self, plugin=None, 
ext_mgr=None): if 'v6' in self._testMethodName: @@ -201,6 +203,7 @@ class ContrailPluginTestCase(test_plugin.NeutronDbPluginV2TestCase): self.skipTest("OpenContrail Plugin does not support subnet pools.") cfg.CONF.keystone_authtoken = KeyStoneInfo() mock.patch('requests.post').start().side_effect = FAKE_SERVER.request + mock.patch(self._fetch).start().side_effect = FAKE_SERVER._get_subnet super(ContrailPluginTestCase, self).setUp(self._plugin_name) diff --git a/neutron/plugins/metaplugin/proxy_neutron_plugin.py b/neutron/tests/unit/quota/__init__.py similarity index 57% rename from neutron/plugins/metaplugin/proxy_neutron_plugin.py rename to neutron/tests/unit/quota/__init__.py index 353dee242ac..3fd44693afd 100644 --- a/neutron/plugins/metaplugin/proxy_neutron_plugin.py +++ b/neutron/tests/unit/quota/__init__.py @@ -1,5 +1,4 @@ -# Copyright 2012, Nachi Ueno, NTT MCL, Inc. -# All Rights Reserved. +# Copyright (c) 2015 OpenStack Foundation. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain @@ -13,7 +12,17 @@ # License for the specific language governing permissions and limitations # under the License. -from metaplugin.plugin import proxy_neutron_plugin +import sqlalchemy as sa + +from neutron.db import model_base +from neutron.db import models_v2 + +# Model classes for test resources -ProxyPluginV2 = proxy_neutron_plugin.ProxyPluginV2 +class MehModel(model_base.BASEV2, models_v2.HasTenant): + meh = sa.Column(sa.String(8), primary_key=True) + + +class OtherMehModel(model_base.BASEV2, models_v2.HasTenant): + othermeh = sa.Column(sa.String(8), primary_key=True) diff --git a/neutron/tests/unit/quota/test_resource.py b/neutron/tests/unit/quota/test_resource.py new file mode 100644 index 00000000000..7f668539807 --- /dev/null +++ b/neutron/tests/unit/quota/test_resource.py @@ -0,0 +1,254 @@ +# Copyright (c) 2015 OpenStack Foundation. 
All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. + +import random + +import mock +from oslo_config import cfg + +from neutron import context +from neutron.db import api as db_api +from neutron.db.quota import api as quota_api +from neutron.quota import resource +from neutron.tests import base +from neutron.tests.unit import quota as test_quota +from neutron.tests.unit import testlib_api + + +meh_quota_flag = 'quota_meh' +meh_quota_opts = [cfg.IntOpt(meh_quota_flag, default=99)] +random.seed() + + +class TestResource(base.DietTestCase): + """Unit tests for neutron.quota.resource.BaseResource""" + + def test_create_resource_without_plural_name(self): + res = resource.BaseResource('foo', None) + self.assertEqual('foos', res.plural_name) + res = resource.BaseResource('foy', None) + self.assertEqual('foies', res.plural_name) + + def test_create_resource_with_plural_name(self): + res = resource.BaseResource('foo', None, + plural_name='foopsies') + self.assertEqual('foopsies', res.plural_name) + + def test_resource_default_value(self): + res = resource.BaseResource('foo', 'foo_quota') + with mock.patch('oslo_config.cfg.CONF') as mock_cfg: + mock_cfg.QUOTAS.foo_quota = 99 + self.assertEqual(99, res.default) + + def test_resource_negative_default_value(self): + res = resource.BaseResource('foo', 'foo_quota') + with mock.patch('oslo_config.cfg.CONF') as mock_cfg: + mock_cfg.QUOTAS.foo_quota = -99 + self.assertEqual(-1, res.default) + + +class 
TestTrackedResource(testlib_api.SqlTestCaseLight): + + def _add_data(self, tenant_id=None): + session = db_api.get_session() + with session.begin(): + tenant_id = tenant_id or self.tenant_id + session.add(test_quota.MehModel( + meh='meh_%d' % random.randint(0, 10000), + tenant_id=tenant_id)) + session.add(test_quota.MehModel( + meh='meh_%d' % random.randint(0, 10000), + tenant_id=tenant_id)) + + def _delete_data(self): + session = db_api.get_session() + with session.begin(): + query = session.query(test_quota.MehModel).filter_by( + tenant_id=self.tenant_id) + for item in query: + session.delete(item) + + def _update_data(self): + session = db_api.get_session() + with session.begin(): + query = session.query(test_quota.MehModel).filter_by( + tenant_id=self.tenant_id) + for item in query: + item['meh'] = 'meh-%s' % item['meh'] + session.add(item) + + def setUp(self): + base.BaseTestCase.config_parse() + cfg.CONF.register_opts(meh_quota_opts, 'QUOTAS') + self.addCleanup(cfg.CONF.reset) + self.resource = 'meh' + self.other_resource = 'othermeh' + self.tenant_id = 'meh' + self.context = context.Context( + user_id='', tenant_id=self.tenant_id, is_admin=False) + super(TestTrackedResource, self).setUp() + + def _register_events(self, res): + res.register_events() + self.addCleanup(res.unregister_events) + + def _create_resource(self): + res = resource.TrackedResource( + self.resource, test_quota.MehModel, meh_quota_flag) + self._register_events(res) + return res + + def _create_other_resource(self): + res = resource.TrackedResource( + self.other_resource, test_quota.OtherMehModel, meh_quota_flag) + self._register_events(res) + return res + + def test_count_first_call_with_dirty_false(self): + quota_api.set_quota_usage( + self.context, self.resource, self.tenant_id, in_use=1) + res = self._create_resource() + self._add_data() + # explicitly set dirty flag to False + quota_api.set_all_quota_usage_dirty( + self.context, self.resource, dirty=False) + # Expect correct count to 
be returned anyway since the first call to + # count() always resyncs with the db + self.assertEqual(2, res.count(self.context, None, self.tenant_id)) + + def _test_count(self): + res = self._create_resource() + quota_api.set_quota_usage( + self.context, res.name, self.tenant_id, in_use=0) + self._add_data() + return res + + def test_count_with_dirty_false(self): + res = self._test_count() + res.count(self.context, None, self.tenant_id) + # At this stage count has been invoked, and the dirty flag should be + # false. Another invocation of count should not query the model class + set_quota = 'neutron.db.quota.api.set_quota_usage' + with mock.patch(set_quota) as mock_set_quota: + self.assertEqual(0, mock_set_quota.call_count) + self.assertEqual(2, res.count(self.context, + None, + self.tenant_id)) + + def test_count_with_dirty_true_resync(self): + res = self._test_count() + # Expect correct count to be returned, which also implies + # set_quota_usage has been invoked with the correct parameters + self.assertEqual(2, res.count(self.context, + None, + self.tenant_id, + resync_usage=True)) + + def test_count_with_dirty_true_resync_calls_set_quota_usage(self): + res = self._test_count() + set_quota_usage = 'neutron.db.quota.api.set_quota_usage' + with mock.patch(set_quota_usage) as mock_set_quota_usage: + quota_api.set_quota_usage_dirty(self.context, + self.resource, + self.tenant_id) + res.count(self.context, None, self.tenant_id, + resync_usage=True) + mock_set_quota_usage.assert_called_once_with( + self.context, self.resource, self.tenant_id, in_use=2) + + def test_count_with_dirty_true_no_usage_info(self): + res = self._create_resource() + self._add_data() + # Invoke count without having usage info in DB - Expect correct + # count to be returned + self.assertEqual(2, res.count(self.context, None, self.tenant_id)) + + def test_count_with_dirty_true_no_usage_info_calls_set_quota_usage(self): + res = self._create_resource() + self._add_data() + set_quota_usage = 
'neutron.db.quota.api.set_quota_usage' + with mock.patch(set_quota_usage) as mock_set_quota_usage: + quota_api.set_quota_usage_dirty(self.context, + self.resource, + self.tenant_id) + res.count(self.context, None, self.tenant_id, resync_usage=True) + mock_set_quota_usage.assert_called_once_with( + self.context, self.resource, self.tenant_id, in_use=2) + + def test_add_delete_data_triggers_event(self): + res = self._create_resource() + other_res = self._create_other_resource() + # Validate dirty tenants since mock does not work well with sqlalchemy + # event handlers. + self._add_data() + self._add_data('someone_else') + self.assertEqual(2, len(res._dirty_tenants)) + # Also, the dirty flag should not be set for other resources + self.assertEqual(0, len(other_res._dirty_tenants)) + self.assertIn(self.tenant_id, res._dirty_tenants) + self.assertIn('someone_else', res._dirty_tenants) + + def test_delete_data_triggers_event(self): + res = self._create_resource() + self._add_data() + self._add_data('someone_else') + # Artificially clear _dirty_tenants + res._dirty_tenants.clear() + self._delete_data() + # We did not delete "someone_else", so expect only a single dirty + # tenant + self.assertEqual(1, len(res._dirty_tenants)) + self.assertIn(self.tenant_id, res._dirty_tenants) + + def test_update_does_not_trigger_event(self): + res = self._create_resource() + self._add_data() + self._add_data('someone_else') + # Artificially clear _dirty_tenants + res._dirty_tenants.clear() + self._update_data() + self.assertEqual(0, len(res._dirty_tenants)) + + def test_mark_dirty(self): + res = self._create_resource() + self._add_data() + self._add_data('someone_else') + set_quota_usage = 'neutron.db.quota.api.set_quota_usage_dirty' + with mock.patch(set_quota_usage) as mock_set_quota_usage: + res.mark_dirty(self.context) + self.assertEqual(2, mock_set_quota_usage.call_count) + mock_set_quota_usage.assert_any_call( + self.context, self.resource, self.tenant_id) + 
mock_set_quota_usage.assert_any_call( + self.context, self.resource, 'someone_else') + + def test_mark_dirty_no_dirty_tenant(self): + res = self._create_resource() + set_quota_usage = 'neutron.db.quota.api.set_quota_usage_dirty' + with mock.patch(set_quota_usage) as mock_set_quota_usage: + res.mark_dirty(self.context) + self.assertFalse(mock_set_quota_usage.call_count) + + def test_resync(self): + res = self._create_resource() + self._add_data() + res.mark_dirty(self.context) + # self.tenant_id now is out of sync + set_quota_usage = 'neutron.db.quota.api.set_quota_usage' + with mock.patch(set_quota_usage) as mock_set_quota_usage: + res.resync(self.context, self.tenant_id) + # and now it should be in sync + self.assertNotIn(self.tenant_id, res._out_of_sync_tenants) + mock_set_quota_usage.assert_called_once_with( + self.context, self.resource, self.tenant_id, in_use=2) diff --git a/neutron/tests/unit/quota/test_resource_registry.py b/neutron/tests/unit/quota/test_resource_registry.py new file mode 100644 index 00000000000..6d1d272060f --- /dev/null +++ b/neutron/tests/unit/quota/test_resource_registry.py @@ -0,0 +1,159 @@ +# Copyright (c) 2015 OpenStack Foundation. All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); you may +# not use this file except in compliance with the License. You may obtain +# a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT +# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the +# License for the specific language governing permissions and limitations +# under the License. 
+
+import mock
+
+from oslo_config import cfg
+
+from neutron import context
+from neutron.quota import resource
+from neutron.quota import resource_registry
+from neutron.tests import base
+from neutron.tests.unit import quota as test_quota
+
+
+class TestResourceRegistry(base.DietTestCase):
+
+    def setUp(self):
+        super(TestResourceRegistry, self).setUp()
+        self.registry = resource_registry.ResourceRegistry.get_instance()
+        # clean up the registry at every test
+        self.registry.unregister_resources()
+
+    def test_set_tracked_resource_new_resource(self):
+        self.registry.set_tracked_resource('meh', test_quota.MehModel)
+        self.assertEqual(test_quota.MehModel,
+                         self.registry._tracked_resource_mappings['meh'])
+
+    def test_set_tracked_resource_existing_with_override(self):
+        self.test_set_tracked_resource_new_resource()
+        self.registry.set_tracked_resource('meh', test_quota.OtherMehModel,
+                                           override=True)
+        # Override is set to True, the model class should change
+        self.assertEqual(test_quota.OtherMehModel,
+                         self.registry._tracked_resource_mappings['meh'])
+
+    def test_set_tracked_resource_existing_no_override(self):
+        self.test_set_tracked_resource_new_resource()
+        self.registry.set_tracked_resource('meh', test_quota.OtherMehModel)
+        # Override is set to False, the model class should not change
+        self.assertEqual(test_quota.MehModel,
+                         self.registry._tracked_resource_mappings['meh'])
+
+    def _test_register_resource_by_name(self, resource_name, expected_type):
+        self.assertNotIn(resource_name, self.registry._resources)
+        self.registry.register_resource_by_name(resource_name)
+        self.assertIn(resource_name, self.registry._resources)
+        self.assertIsInstance(self.registry.get_resource(resource_name),
+                              expected_type)
+
+    def test_register_resource_by_name_tracked(self):
+        self.test_set_tracked_resource_new_resource()
+        self._test_register_resource_by_name('meh', resource.TrackedResource)
+
+    def test_register_resource_by_name_not_tracked(self):
+
self._test_register_resource_by_name('meh', resource.CountableResource)
+
+    def test_register_resource_by_name_with_tracking_disabled_by_config(self):
+        cfg.CONF.set_override('track_quota_usage', False,
+                              group='QUOTAS')
+        # DietTestCase does not automatically clean up configuration overrides
+        self.addCleanup(cfg.CONF.reset)
+        self.registry.set_tracked_resource('meh', test_quota.MehModel)
+        self.assertNotIn(
+            'meh', self.registry._tracked_resource_mappings)
+        self._test_register_resource_by_name('meh', resource.CountableResource)
+
+
+class TestAuxiliaryFunctions(base.DietTestCase):
+
+    def setUp(self):
+        super(TestAuxiliaryFunctions, self).setUp()
+        self.registry = resource_registry.ResourceRegistry.get_instance()
+        # clean up the registry at every test
+        self.registry.unregister_resources()
+
+    def test_resync_tracking_disabled(self):
+        cfg.CONF.set_override('track_quota_usage', False,
+                              group='QUOTAS')
+        # DietTestCase does not automatically clean up configuration overrides
+        self.addCleanup(cfg.CONF.reset)
+        with mock.patch('neutron.quota.resource.'
+                        'TrackedResource.resync') as mock_resync:
+            self.registry.set_tracked_resource('meh', test_quota.MehModel)
+            self.registry.register_resource_by_name('meh')
+            resource_registry.resync_resource(mock.ANY, 'meh', 'tenant_id')
+            self.assertEqual(0, mock_resync.call_count)
+
+    def test_resync_tracked_resource(self):
+        with mock.patch('neutron.quota.resource.'
+                        'TrackedResource.resync') as mock_resync:
+            self.registry.set_tracked_resource('meh', test_quota.MehModel)
+            self.registry.register_resource_by_name('meh')
+            resource_registry.resync_resource(mock.ANY, 'meh', 'tenant_id')
+            mock_resync.assert_called_once_with(mock.ANY, 'tenant_id')
+
+    def test_resync_non_tracked_resource(self):
+        with mock.patch('neutron.quota.resource.'
+                        'TrackedResource.resync') as mock_resync:
+            self.registry.register_resource_by_name('meh')
+            resource_registry.resync_resource(mock.ANY, 'meh', 'tenant_id')
+            self.assertEqual(0, mock_resync.call_count)
+
+    def test_set_resources_dirty_invoked_with_tracking_disabled(self):
+        cfg.CONF.set_override('track_quota_usage', False,
+                              group='QUOTAS')
+        # DietTestCase does not automatically clean up configuration overrides
+        self.addCleanup(cfg.CONF.reset)
+        with mock.patch('neutron.quota.resource.'
+                        'TrackedResource.mark_dirty') as mock_mark_dirty:
+            self.registry.set_tracked_resource('meh', test_quota.MehModel)
+            self.registry.register_resource_by_name('meh')
+            resource_registry.set_resources_dirty(mock.ANY)
+            self.assertEqual(0, mock_mark_dirty.call_count)
+
+    def test_set_resources_dirty_no_dirty_resource(self):
+        ctx = context.Context('user_id', 'tenant_id',
+                              is_admin=False, is_advsvc=False)
+        with mock.patch('neutron.quota.resource.'
+                        'TrackedResource.mark_dirty') as mock_mark_dirty:
+            self.registry.set_tracked_resource('meh', test_quota.MehModel)
+            self.registry.register_resource_by_name('meh')
+            res = self.registry.get_resource('meh')
+            # This ensures dirty is false
+            res._dirty_tenants.clear()
+            resource_registry.set_resources_dirty(ctx)
+            self.assertEqual(0, mock_mark_dirty.call_count)
+
+    def test_set_resources_dirty_no_tracked_resource(self):
+        ctx = context.Context('user_id', 'tenant_id',
+                              is_admin=False, is_advsvc=False)
+        with mock.patch('neutron.quota.resource.'
+                        'TrackedResource.mark_dirty') as mock_mark_dirty:
+            self.registry.register_resource_by_name('meh')
+            resource_registry.set_resources_dirty(ctx)
+            self.assertEqual(0, mock_mark_dirty.call_count)
+
+    def test_set_resources_dirty(self):
+        ctx = context.Context('user_id', 'tenant_id',
+                              is_admin=False, is_advsvc=False)
+        with mock.patch('neutron.quota.resource.'
+ 'TrackedResource.mark_dirty') as mock_mark_dirty: + self.registry.set_tracked_resource('meh', test_quota.MehModel) + self.registry.register_resource_by_name('meh') + res = self.registry.get_resource('meh') + # This ensures dirty is true + res._dirty_tenants.add('tenant_id') + resource_registry.set_resources_dirty(ctx) + mock_mark_dirty.assert_called_once_with(ctx, nested=True) diff --git a/neutron/tests/unit/scheduler/test_dhcp_agent_scheduler.py b/neutron/tests/unit/scheduler/test_dhcp_agent_scheduler.py index 6dc8b68e969..75493454b23 100644 --- a/neutron/tests/unit/scheduler/test_dhcp_agent_scheduler.py +++ b/neutron/tests/unit/scheduler/test_dhcp_agent_scheduler.py @@ -289,7 +289,6 @@ class DHCPAgentWeightSchedulerTestCase(TestDhcpSchedulerBaseTestCase): def test_scheduler_one_agents_per_network(self): self._save_networks(['1111']) helpers.register_dhcp_agent(HOST_C) - helpers.register_dhcp_agent(HOST_C) self.plugin.network_scheduler.schedule(self.plugin, self.ctx, {'id': '1111'}) agents = self.plugin.get_dhcp_agents_hosting_networks(self.ctx, diff --git a/neutron/tests/unit/scheduler/test_l3_agent_scheduler.py b/neutron/tests/unit/scheduler/test_l3_agent_scheduler.py index a0d22f7c826..a8f5c15dd44 100644 --- a/neutron/tests/unit/scheduler/test_l3_agent_scheduler.py +++ b/neutron/tests/unit/scheduler/test_l3_agent_scheduler.py @@ -256,7 +256,7 @@ class L3SchedulerBaseTestCase(base.BaseTestCase): '_router_has_binding', return_value=has_binding) as mock_has_binding,\ mock.patch.object(self.scheduler, - '_create_ha_router_binding') as mock_bind: + 'create_ha_port_and_bind') as mock_bind: self.scheduler._bind_routers(mock.ANY, mock.ANY, routers, agent) mock_has_binding.assert_called_once_with(mock.ANY, 'foo_router', 'foo_agent') @@ -1421,6 +1421,9 @@ class L3HATestCaseMixin(testlib_api.SqlTestCase, self.plugin = L3HAPlugin() self.setup_coreplugin('neutron.plugins.ml2.plugin.Ml2Plugin') + cfg.CONF.set_override('service_plugins', + ['neutron.services.l3_router.' 
+                                 'l3_router_plugin.L3RouterPlugin'])
         mock.patch.object(l3_hamode_db.L3_HA_NAT_db_mixin,
                           '_notify_ha_interfaces_updated').start()
 
@@ -1495,12 +1498,14 @@ class L3_HA_scheduler_db_mixinTestCase(L3HATestCaseMixin):
 
 class L3AgentSchedulerDbMixinTestCase(L3HATestCaseMixin):
 
-    def test_reschedule_ha_routers_from_down_agents(self):
+    def _setup_ha_router(self):
         router = self._create_ha_router()
         self.plugin.schedule_router(self.adminContext, router['id'])
-        agents = self.plugin.get_l3_agents_hosting_routers(
-            self.adminContext, [router['id']],
-            admin_state_up=True)
+        agents = self._get_agents_scheduled_for_router(router)
+        return router, agents
+
+    def test_reschedule_ha_routers_from_down_agents(self):
+        agents = self._setup_ha_router()[1]
         self.assertEqual(2, len(agents))
         self._set_l3_agent_dead(self.agent_id1)
         with mock.patch.object(self.plugin, 'reschedule_router') as reschedule:
@@ -1538,6 +1543,68 @@ class L3AgentSchedulerDbMixinTestCase(L3HATestCaseMixin):
         self.assertEqual({'agents': []},
                          self.plugin._get_agents_dict_for_router([]))
 
+    def test_manual_add_ha_router_to_agent(self):
+        cfg.CONF.set_override('max_l3_agents_per_router', 2)
+        router, agents = self._setup_ha_router()
+        self.assertEqual(2, len(agents))
+        agent = helpers.register_l3_agent(host='myhost_3')
+        # We allow exceeding max_l3_agents_per_router via manual scheduling
+        self.plugin.add_router_to_l3_agent(
+            self.adminContext, agent.id, router['id'])
+        agents = self._get_agents_scheduled_for_router(router)
+        self.assertIn(agent.id, [_agent.id for _agent in agents])
+        self.assertEqual(3, len(agents))
+
+    def test_manual_remove_ha_router_from_agent(self):
+        router, agents = self._setup_ha_router()
+        self.assertEqual(2, len(agents))
+        agent = agents.pop()
+        # Remove router from agent and make sure it is removed
+        self.plugin.remove_router_from_l3_agent(
+            self.adminContext, agent.id, router['id'])
+        agents = self._get_agents_scheduled_for_router(router)
+        self.assertEqual(1, len(agents))
+
self.assertNotIn(agent.id, [_agent.id for _agent in agents]) + + def test_manual_remove_ha_router_from_all_agents(self): + router, agents = self._setup_ha_router() + self.assertEqual(2, len(agents)) + agent = agents.pop() + self.plugin.remove_router_from_l3_agent( + self.adminContext, agent.id, router['id']) + agent = agents.pop() + self.plugin.remove_router_from_l3_agent( + self.adminContext, agent.id, router['id']) + agents = self._get_agents_scheduled_for_router(router) + self.assertEqual(0, len(agents)) + + def _get_agents_scheduled_for_router(self, router): + return self.plugin.get_l3_agents_hosting_routers( + self.adminContext, [router['id']], + admin_state_up=True) + + def test_delete_ha_interfaces_from_agent(self): + router, agents = self._setup_ha_router() + agent = agents.pop() + self.plugin.remove_router_from_l3_agent( + self.adminContext, agent.id, router['id']) + session = self.adminContext.session + db = l3_hamode_db.L3HARouterAgentPortBinding + results = session.query(db).filter_by( + router_id=router['id']) + results = [binding.l3_agent_id for binding in results.all()] + self.assertNotIn(agent.id, results) + + def test_add_ha_interface_to_l3_agent(self): + agent = self.plugin.get_agents_db(self.adminContext)[0] + router = self._create_ha_router() + self.plugin.add_router_to_l3_agent(self.adminContext, agent.id, + router['id']) + # Verify agent has HA interface + ha_ports = self.plugin.get_ha_router_port_bindings(self.adminContext, + [router['id']]) + self.assertIn(agent.id, [ha_port.l3_agent_id for ha_port in ha_ports]) + class L3HAChanceSchedulerTestCase(L3HATestCaseMixin): diff --git a/neutron/tests/unit/services/l3_router/test_l3_apic.py b/neutron/tests/unit/services/l3_router/test_l3_apic.py deleted file mode 100644 index 24431b5d870..00000000000 --- a/neutron/tests/unit/services/l3_router/test_l3_apic.py +++ /dev/null @@ -1,134 +0,0 @@ -# Copyright (c) 2014 Cisco Systems -# All Rights Reserved. 
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-#    http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-import sys
-
-import mock
-
-sys.modules["apicapi"] = mock.Mock()
-
-from neutron.plugins.ml2.drivers.cisco.apic import mechanism_apic as md
-from neutron.services.l3_router import l3_apic
-from neutron.tests.unit.plugins.ml2.drivers.cisco.apic import base as mocked
-from neutron.tests.unit import testlib_api
-
-
-TENANT = 'tenant1'
-TENANT_CONTRACT = 'abcd'
-ROUTER = 'router1'
-SUBNET = 'subnet1'
-NETWORK = 'network1'
-PORT = 'port1'
-NETWORK_NAME = 'one_network'
-NETWORK_EPG = 'one_network-epg'
-TEST_SEGMENT1 = 'test-segment1'
-SUBNET_GATEWAY = '10.3.2.1'
-SUBNET_CIDR = '10.3.1.0/24'
-SUBNET_NETMASK = '24'
-
-
-class FakeContext(object):
-    def __init__(self):
-        self.tenant_id = None
-
-
-class FakeContract(object):
-    def __init__(self):
-        self.contract_id = '123'
-
-
-class FakeEpg(object):
-    def __init__(self):
-        self.epg_id = 'abcd_epg'
-
-
-class FakePort(object):
-    def __init__(self):
-        self.id = 'Fake_port_id'
-        self.network_id = NETWORK
-        self.subnet_id = SUBNET
-
-
-class TestCiscoApicL3Plugin(testlib_api.SqlTestCase,
-                            mocked.ControllerMixin,
-                            mocked.ConfigMixin):
-    def setUp(self):
-        super(TestCiscoApicL3Plugin, self).setUp()
-        mock.patch('neutron.plugins.ml2.drivers.cisco.apic.apic_model.'
-                   'ApicDbModel').start()
-        mocked.ControllerMixin.set_up_mocks(self)
-        mocked.ConfigMixin.set_up_mocks(self)
-        self.plugin = l3_apic.ApicL3ServicePlugin()
-        md.APICMechanismDriver.get_router_synchronizer = mock.Mock()
-        self.context = FakeContext()
-        self.context.tenant_id = TENANT
-        self.interface_info = {'subnet': {'subnet_id': SUBNET},
-                               'port': {'port_id': PORT}}
-        self.subnet = {'network_id': NETWORK, 'tenant_id': TENANT}
-        self.port = {'tenant_id': TENANT,
-                     'network_id': NETWORK,
-                     'fixed_ips': [{'subnet_id': SUBNET}]}
-        self.plugin.name_mapper = mock.Mock()
-        l3_apic.apic_mapper.mapper_context = self.fake_transaction
-        self.plugin.name_mapper.tenant.return_value = mocked.APIC_TENANT
-        self.plugin.name_mapper.network.return_value = mocked.APIC_NETWORK
-        self.plugin.name_mapper.subnet.return_value = mocked.APIC_SUBNET
-        self.plugin.name_mapper.port.return_value = mocked.APIC_PORT
-        self.plugin.name_mapper.router.return_value = mocked.APIC_ROUTER
-        self.plugin.name_mapper.app_profile.return_value = mocked.APIC_AP
-
-        self.contract = FakeContract()
-        self.plugin.get_router = mock.Mock(
-            return_value={'id': ROUTER, 'admin_state_up': True})
-        self.plugin.manager = mock.Mock()
-        self.plugin.manager.apic.transaction = self.fake_transaction
-
-        self.plugin.get_subnet = mock.Mock(return_value=self.subnet)
-        self.plugin.get_network = mock.Mock(return_value=self.interface_info)
-        self.plugin.get_port = mock.Mock(return_value=self.port)
-        mock.patch('neutron.db.l3_dvr_db.L3_NAT_with_dvr_db_mixin.'
-                   '_core_plugin').start()
-        mock.patch('neutron.db.l3_dvr_db.L3_NAT_with_dvr_db_mixin.'
-                   'add_router_interface').start()
-        mock.patch('neutron.db.l3_dvr_db.L3_NAT_with_dvr_db_mixin.'
-                   'remove_router_interface').start()
-        mock.patch('oslo_utils.excutils.save_and_reraise_exception').start()
-
-    def _test_add_router_interface(self, interface_info):
-        mgr = self.plugin.manager
-        self.plugin.add_router_interface(self.context, ROUTER, interface_info)
-        mgr.create_router.assert_called_once_with(mocked.APIC_ROUTER,
-                                                  transaction='transaction')
-        mgr.add_router_interface.assert_called_once_with(
-            mocked.APIC_TENANT, mocked.APIC_ROUTER, mocked.APIC_NETWORK)
-
-    def _test_remove_router_interface(self, interface_info):
-        mgr = self.plugin.manager
-        self.plugin.remove_router_interface(self.context, ROUTER,
-                                            interface_info)
-        mgr.remove_router_interface.assert_called_once_with(
-            mocked.APIC_TENANT, mocked.APIC_ROUTER, mocked.APIC_NETWORK)
-
-    def test_add_router_interface_subnet(self):
-        self._test_add_router_interface(self.interface_info['subnet'])
-
-    def test_add_router_interface_port(self):
-        self._test_add_router_interface(self.interface_info['port'])
-
-    def test_remove_router_interface_subnet(self):
-        self._test_remove_router_interface(self.interface_info['subnet'])
-
-    def test_remove_router_interface_port(self):
-        self._test_remove_router_interface(self.interface_info['port'])
diff --git a/neutron/tests/unit/services/test_provider_configuration.py b/neutron/tests/unit/services/test_provider_configuration.py
index c5a5a3eb336..a10b6f783a9 100644
--- a/neutron/tests/unit/services/test_provider_configuration.py
+++ b/neutron/tests/unit/services/test_provider_configuration.py
@@ -128,9 +128,9 @@ class ProviderConfigurationTestCase(base.BaseTestCase):
                 'default': False}
         pconf.add_provider(prov)
         self.assertEqual(len(pconf.providers), 1)
-        self.assertEqual(pconf.providers.keys(),
+        self.assertEqual(list(pconf.providers.keys()),
                          [(constants.LOADBALANCER, 'name')])
-        self.assertEqual(pconf.providers.values(),
+        self.assertEqual(list(pconf.providers.values()),
                          [{'driver': 'path', 'default': False}])
 
     def test_add_duplicate_provider(self):
diff --git a/neutron/tests/unit/test_context.py b/neutron/tests/unit/test_context.py
index 1ecf338a22f..9dfc7f30662 100644
--- a/neutron/tests/unit/test_context.py
+++ b/neutron/tests/unit/test_context.py
@@ -34,7 +34,7 @@ class TestNeutronContext(base.BaseTestCase):
         self.assertEqual('user_id', ctx.user_id)
         self.assertEqual('tenant_id', ctx.project_id)
         self.assertEqual('tenant_id', ctx.tenant_id)
-        self.assertThat(ctx.request_id, matchers.StartsWith('req-'))
+        self.assertThat(ctx.request_id, matchers.StartsWith(b'req-'))
         self.assertEqual('user_id', ctx.user)
         self.assertEqual('tenant_id', ctx.tenant)
         self.assertIsNone(ctx.user_name)
diff --git a/neutron/tests/unit/test_manager.py b/neutron/tests/unit/test_manager.py
index 59ecb58a88e..2020804fd4f 100644
--- a/neutron/tests/unit/test_manager.py
+++ b/neutron/tests/unit/test_manager.py
@@ -13,8 +13,6 @@
 # License for the specific language governing permissions and limitations
 # under the License.
 
-import types
-
 import fixtures
 from oslo_config import cfg
 from oslo_log import log as logging
@@ -56,7 +54,7 @@ class NeutronManagerTestCase(base.BaseTestCase):
 
         plugin = mgr.get_service_plugins()[constants.DUMMY]
         self.assertIsInstance(
-            plugin, (dummy_plugin.DummyServicePlugin, types.ClassType),
+            plugin, dummy_plugin.DummyServicePlugin,
             "loaded plugin should be of type neutronDummyPlugin")
 
     def test_service_plugin_by_name_is_loaded(self):
@@ -66,7 +64,7 @@ class NeutronManagerTestCase(base.BaseTestCase):
 
         plugin = mgr.get_service_plugins()[constants.DUMMY]
         self.assertIsInstance(
-            plugin, (dummy_plugin.DummyServicePlugin, types.ClassType),
+            plugin, dummy_plugin.DummyServicePlugin,
             "loaded plugin should be of type neutronDummyPlugin")
 
     def test_multiple_plugins_specified_for_service_type(self):
@@ -107,7 +105,7 @@ class NeutronManagerTestCase(base.BaseTestCase):
                               "MultiServiceCorePlugin")
         mgr = manager.NeutronManager.get_instance()
         svc_plugins = mgr.get_service_plugins()
-        self.assertEqual(3, len(svc_plugins))
+        self.assertEqual(4, len(svc_plugins))
         self.assertIn(constants.CORE, svc_plugins.keys())
         self.assertIn(constants.LOADBALANCER, svc_plugins.keys())
         self.assertIn(constants.DUMMY, svc_plugins.keys())
diff --git a/neutron/tests/unit/tests/test_base.py b/neutron/tests/unit/tests/test_base.py
index 355c2cf6506..8a2bb555cb6 100644
--- a/neutron/tests/unit/tests/test_base.py
+++ b/neutron/tests/unit/tests/test_base.py
@@ -16,28 +16,36 @@
 """Tests to test the test framework"""
 
 import sys
+import unittest2
 
 from neutron.tests import base
 
 
-class SystemExitTestCase(base.BaseTestCase):
+class SystemExitTestCase(base.DietTestCase):
+    # Embedded to hide from the regular test discovery
+    class MyTestCase(base.DietTestCase):
+        def __init__(self, exitcode):
+            super(SystemExitTestCase.MyTestCase, self).__init__()
+            self.exitcode = exitcode
 
-    def setUp(self):
-        def _fail_SystemExit(exc_info):
-            if isinstance(exc_info[1], SystemExit):
-                self.fail("A SystemExit was allowed out")
-        super(SystemExitTestCase, self).setUp()
-        # add the handler last so reaching it means the handler in BaseTestCase
-        # didn't do it's job
-        self.addOnException(_fail_SystemExit)
+        def runTest(self):
+            if self.exitcode is not None:
+                sys.exit(self.exitcode)
 
-    def run(self, *args, **kwargs):
-        exc = self.assertRaises(AssertionError,
-                                super(SystemExitTestCase, self).run,
-                                *args, **kwargs)
-        # this message should be generated when SystemExit is raised by a test
-        self.assertIn('A SystemExit was raised during the test.', str(exc))
+    def test_no_sysexit(self):
+        result = self.MyTestCase(exitcode=None).run()
+        self.assertTrue(result.wasSuccessful())
 
-    def test_system_exit(self):
-        # this should generate a failure that mentions SystemExit was used
-        sys.exit(1)
+    def test_sysexit(self):
+        expectedFails = [self.MyTestCase(exitcode) for exitcode in (0, 1)]
+
+        suite = unittest2.TestSuite(tests=expectedFails)
+        result = self.defaultTestResult()
+        try:
+            suite.run(result)
+        except SystemExit:
+            self.fail('SystemExit escaped!')
+
+        self.assertEqual([], result.errors)
+        self.assertItemsEqual(set(id(t) for t in expectedFails),
+                              set(id(t) for (t, traceback) in result.failures))
diff --git a/neutron/wsgi.py b/neutron/wsgi.py
index a207c35d24f..dd71a9b907c 100644
--- a/neutron/wsgi.py
+++ b/neutron/wsgi.py
@@ -19,6 +19,7 @@ Utility methods for working with WSGI servers
 from __future__ import print_function
 
 import errno
+import logging as std_logging
 import os
 import socket
 import ssl
@@ -238,6 +239,8 @@ class Server(object):
         if workers < 1:
             # The API service should run in the current process.
             self._server = service
+            # Dump the initial option values
+            cfg.CONF.log_opt_values(LOG, std_logging.DEBUG)
             service.start()
             systemd.notify_once()
         else:
diff --git a/openstack-common.conf b/openstack-common.conf
index fbc952c8fae..082a4a4fa95 100644
--- a/openstack-common.conf
+++ b/openstack-common.conf
@@ -1,7 +1,6 @@
 [DEFAULT]
 # The list of modules to copy from oslo-incubator.git
 module=cache
-module=fileutils
 # The following module is not synchronized by update.sh script since it's
 # located in tools/ not neutron/openstack/common/. Left here to make it
 # explicit that we still ship code from incubator here
diff --git a/requirements.txt b/requirements.txt
index 8d5041c38ab..823f597ceb1 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -1,18 +1,19 @@
 # The order of packages is significant, because pip processes them in the order
 # of appearance. Changing the order has an impact on the overall integration
 # process, which may cause wedges in the gate later.
-pbr<2.0,>=0.11
+pbr<2.0,>=1.3
 Paste
 PasteDeploy>=1.5.0
-Routes!=2.0,>=1.12.3
+Routes!=2.0,!=2.1,>=1.12.3;python_version=='2.7'
+Routes!=2.0,>=1.12.3;python_version!='2.7'
 debtcollector>=0.3.0 # Apache-2.0
 eventlet>=0.17.4
 greenlet>=0.3.2
 httplib2>=0.7.5
 requests>=2.5.2
 Jinja2>=2.6 # BSD License (3 clause)
-keystonemiddleware>=1.5.0
+keystonemiddleware>=2.0.0
 netaddr>=0.7.12
 python-neutronclient<3,>=2.3.11
 retrying!=1.3.0,>=1.2.3 # Apache-2.0
@@ -22,19 +23,19 @@ python-keystoneclient>=1.6.0
 alembic>=0.7.2
 six>=1.9.0
 stevedore>=1.5.0 # Apache-2.0
-oslo.concurrency>=2.1.0 # Apache-2.0
+oslo.concurrency>=2.3.0 # Apache-2.0
 oslo.config>=1.11.0 # Apache-2.0
 oslo.context>=0.2.0 # Apache-2.0
-oslo.db>=1.10.0 # Apache-2.0
+oslo.db>=1.12.0 # Apache-2.0
 oslo.i18n>=1.5.0 # Apache-2.0
-oslo.log>=1.2.0 # Apache-2.0
-oslo.messaging!=1.12.0,>=1.8.0 # Apache-2.0
-oslo.middleware!=2.0.0,>=1.2.0 # Apache-2.0
+oslo.log>=1.6.0 # Apache-2.0
+oslo.messaging!=1.17.0,!=1.17.1,>=1.16.0 # Apache-2.0
+oslo.middleware>=2.4.0 # Apache-2.0
 oslo.policy>=0.5.0 # Apache-2.0
 oslo.rootwrap>=2.0.0 # Apache-2.0
 oslo.serialization>=1.4.0 # Apache-2.0
 oslo.service>=0.1.0 # Apache-2.0
-oslo.utils>=1.6.0 # Apache-2.0
+oslo.utils>=1.9.0 # Apache-2.0
 python-novaclient>=2.22.0
diff --git a/setup.cfg b/setup.cfg
index e3f0f05bb9e..6eb5adf5a13 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -58,7 +58,6 @@ data_files =
         etc/neutron/plugins/cisco/cisco_vpn_agent.ini
     etc/neutron/plugins/embrane = etc/neutron/plugins/embrane/heleos_conf.ini
     etc/neutron/plugins/ibm = etc/neutron/plugins/ibm/sdnve_neutron_plugin.ini
-    etc/neutron/plugins/metaplugin = etc/neutron/plugins/metaplugin/metaplugin.ini
     etc/neutron/plugins/midonet = etc/neutron/plugins/midonet/midonet.ini
     etc/neutron/plugins/ml2 =
         etc/neutron/plugins/bigswitch/restproxy.ini
@@ -112,8 +111,6 @@ console_scripts =
     neutron-metering-agent = neutron.cmd.eventlet.services.metering_agent:main
    neutron-sriov-nic-agent = neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent:main
    neutron-sanity-check = neutron.cmd.sanity_check:main
-    neutron-cisco-apic-service-agent = neutron.plugins.ml2.drivers.cisco.apic.apic_topology:service_main
-    neutron-cisco-apic-host-agent = neutron.plugins.ml2.drivers.cisco.apic.apic_topology:agent_main
 neutron.core_plugins =
     bigswitch = neutron.plugins.bigswitch.plugin:NeutronRestProxyV2
     brocade = neutron.plugins.brocade.NeutronPlugin:BrocadePluginV2
@@ -124,7 +121,6 @@ neutron.core_plugins =
     ml2 = neutron.plugins.ml2.plugin:Ml2Plugin
     nec = neutron.plugins.nec.nec_plugin:NECPluginV2
     nuage = neutron.plugins.nuage.plugin:NuagePlugin
-    metaplugin = neutron.plugins.metaplugin.meta_neutron_plugin:MetaPluginV2
     oneconvergence = neutron.plugins.oneconvergence.plugin:OneConvergencePluginV2
     plumgrid = neutron.plugins.plumgrid.plumgrid_plugin.plumgrid_plugin:NeutronPluginPLUMgridV2
     vmware = neutron.plugins.vmware.plugin:NsxMhPlugin
@@ -177,8 +173,6 @@ neutron.ml2.mechanism_drivers =
     ncs = neutron.plugins.ml2.drivers.cisco.ncs.driver:NCSMechanismDriver
     cisco_ncs = neutron.plugins.ml2.drivers.cisco.ncs.driver:NCSMechanismDriver
     cisco_nexus = neutron.plugins.ml2.drivers.cisco.nexus.mech_cisco_nexus:CiscoNexusMechanismDriver
-    cisco_apic = neutron.plugins.ml2.drivers.cisco.apic.mechanism_apic:APICMechanismDriver
-    cisco_n1kv = neutron.plugins.ml2.drivers.cisco.n1kv.mech_cisco_n1kv:N1KVMechanismDriver
     cisco_ucsm = neutron.plugins.ml2.drivers.cisco.ucsm.mech_cisco_ucsm:CiscoUcsmMechanismDriver
     l2population = neutron.plugins.ml2.drivers.l2pop.mech_driver:L2populationMechanismDriver
     bigswitch = neutron.plugins.ml2.drivers.mech_bigswitch.driver:BigSwitchMechanismDriver
@@ -195,7 +189,6 @@ neutron.ml2.extension_drivers =
     test = neutron.tests.unit.plugins.ml2.drivers.ext_test:TestExtensionDriver
     testdb = neutron.tests.unit.plugins.ml2.drivers.ext_test:TestDBExtensionDriver
     port_security = neutron.plugins.ml2.extensions.port_security:PortSecurityExtensionDriver
-    cisco_n1kv_ext = neutron.plugins.ml2.drivers.cisco.n1kv.n1kv_ext_driver:CiscoN1kvExtensionDriver
 neutron.openstack.common.cache.backends =
     memory = neutron.openstack.common.cache._backends.memory:MemoryBackend
 neutron.ipam_drivers =
diff --git a/setup.py b/setup.py
index 056c16c2b8f..d8080d05c86 100644
--- a/setup.py
+++ b/setup.py
@@ -25,5 +25,5 @@ except ImportError:
     pass
 
 setuptools.setup(
-    setup_requires=['pbr'],
+    setup_requires=['pbr>=1.3'],
     pbr=True)
diff --git a/test-requirements.txt b/test-requirements.txt
index 6693ab22eed..d26812f817d 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -6,7 +6,7 @@ hacking<0.11,>=0.10.0
 cliff>=1.13.0 # Apache-2.0
 coverage>=3.6
 fixtures>=1.3.1
-mock>=1.0
+mock>=1.2
 python-subunit>=0.0.18
 requests-mock>=0.6.0 # Apache-2.0
 sphinx!=1.2.0,!=1.3b1,<1.3,>=1.1.2
@@ -15,7 +15,8 @@ testrepository>=0.0.18
 testtools>=1.4.0
 testscenarios>=0.4
 WebTest>=2.0
-oslotest>=1.5.1 # Apache-2.0
+oslotest>=1.9.0 # Apache-2.0
 os-testr>=0.1.0
-tempest-lib>=0.5.0
+tempest-lib>=0.6.1
 ddt>=0.7.0
+pylint==1.4.4 # GNU GPL v2
diff --git a/tox.ini b/tox.ini
index f473cd350a8..fdc2dbe47ad 100644
--- a/tox.ini
+++ b/tox.ini
@@ -78,14 +78,13 @@ downloadcache = ~/cache/pip
 basepython = python2.7
 deps =
   {[testenv]deps}
-  pylint
 commands=
   # If it is easier to add a check via a shell script, consider adding it in this file
   sh ./tools/misc-sanity-checks.sh
   {toxinidir}/tools/check_unit_test_structure.sh
   # Checks for coding and style guidelines
   flake8
-  # sh ./tools/coding-checks.sh --pylint '{posargs}'
+  sh ./tools/coding-checks.sh --pylint '{posargs}'
   neutron-db-manage --config-file neutron/tests/etc/neutron.conf check_migration
 whitelist_externals = sh
@@ -104,14 +103,19 @@ commands = sphinx-build -W -b html doc/source doc/build/html
 
 [testenv:py34]
 commands = python -m testtools.run \
+    neutron.tests.unit.test_context \
     neutron.tests.unit.services.metering.drivers.test_iptables \
-    neutron.tests.unit.services.l3_router.test_l3_apic \
+    neutron.tests.unit.services.metering.agents.test_metering_agent \
+    neutron.tests.unit.services.test_provider_configuration \
     neutron.tests.unit.plugins.ml2.drivers.mech_sriov.agent.test_sriov_nic_agent \
+    neutron.tests.unit.plugins.ml2.drivers.mech_sriov.agent.test_eswitch_manager \
+    neutron.tests.unit.plugins.ml2.drivers.mech_sriov.agent.common.test_config \
     neutron.tests.unit.plugins.ml2.drivers.mech_sriov.agent.test_pci_lib \
     neutron.tests.unit.plugins.ml2.drivers.openvswitch.agent.ovs_test_base \
     neutron.tests.unit.plugins.ml2.drivers.openvswitch.agent.openflow.ovs_ofctl.test_br_phys \
     neutron.tests.unit.plugins.ml2.drivers.openvswitch.agent.openflow.ovs_ofctl.test_br_int \
     neutron.tests.unit.plugins.ml2.drivers.openvswitch.agent.openflow.ovs_ofctl.test_br_tun \
+    neutron.tests.unit.plugins.ml2.drivers.openvswitch.agent.test_agent_scheduler \
     neutron.tests.unit.plugins.brocade.test_brocade_db \
     neutron.tests.unit.plugins.brocade.test_brocade_vlan \
     neutron.tests.unit.plugins.oneconvergence.test_nvsd_agent \
@@ -121,10 +125,13 @@ commands = python -m testtools.run \
     neutron.tests.unit.plugins.ibm.test_sdnve_api \
     neutron.tests.unit.plugins.ml2.test_db \
     neutron.tests.unit.plugins.ml2.test_driver_context \
+    neutron.tests.unit.plugins.ml2.test_port_binding \
+    neutron.tests.unit.plugins.ml2.test_extension_driver_api \
     neutron.tests.unit.plugins.ml2.test_rpc \
     neutron.tests.unit.plugins.ml2.drivers.mlnx.test_mech_mlnx \
     neutron.tests.unit.plugins.ml2.drivers.openvswitch.mech_driver.test_mech_openvswitch \
     neutron.tests.unit.plugins.ml2.drivers.linuxbridge.mech_driver.test_mech_linuxbridge \
+    neutron.tests.unit.plugins.ml2.drivers.linuxbridge.agent.test_linuxbridge_neutron_agent \
     neutron.tests.unit.plugins.ml2.drivers.base_type_tunnel \
     neutron.tests.unit.plugins.ml2.drivers.opendaylight.test_driver \
     neutron.tests.unit.plugins.ml2.drivers.ext_test \
@@ -136,22 +143,24 @@ commands = python -m testtools.run \
     neutron.tests.unit.plugins.ml2.drivers.arista.test_mechanism_arista \
     neutron.tests.unit.plugins.ml2.drivers.test_type_local \
     neutron.tests.unit.plugins.ml2.drivers.mechanism_logger \
-    neutron.tests.unit.plugins.ml2.drivers.cisco.apic.test_apic_sync \
-    neutron.tests.unit.plugins.ml2.drivers.cisco.apic.base \
-    neutron.tests.unit.plugins.ml2.drivers.cisco.apic.test_apic_topology \
     neutron.tests.unit.plugins.ml2.drivers.test_type_flat \
     neutron.tests.unit.plugins.ml2.drivers.test_type_vlan \
     neutron.tests.unit.plugins.ml2.drivers.mechanism_test \
     neutron.tests.unit.plugins.ml2.drivers.l2pop.rpc_manager.l2population_rpc_base \
     neutron.tests.unit.plugins.ml2.extensions.fake_extension \
     neutron.tests.unit.plugins.ml2.drivers.l2pop.rpc_manager.test_l2population_rpc \
+    neutron.tests.unit.plugins.ml2.drivers.l2pop.test_mech_driver \
     neutron.tests.unit.plugins.cisco.n1kv.test_n1kv_db \
     neutron.tests.unit.plugins.cisco.n1kv.fake_client \
     neutron.tests.unit.plugins.cisco.test_network_db \
+    neutron.tests.unit.scheduler.test_l3_agent_scheduler \
     neutron.tests.unit.scheduler.test_dhcp_agent_scheduler \
+    neutron.tests.unit.db.test_ipam_backend_mixin \
     neutron.tests.unit.db.test_l3_dvr_db \
+    neutron.tests.unit.db.test_l3_hamode_db \
     neutron.tests.unit.db.test_migration \
     neutron.tests.unit.db.test_agents_db \
+    neutron.tests.unit.db.quota.test_driver \
     neutron.tests.unit.db.test_dvr_mac_db \
     neutron.tests.unit.debug.test_commands \
     neutron.tests.unit.tests.test_post_mortem_debug \
@@ -165,6 +174,7 @@ commands = python -m testtools.run \
     neutron.tests.unit.api.rpc.handlers.test_securitygroups_rpc \
     neutron.tests.unit.api.rpc.handlers.test_dvr_rpc \
     neutron.tests.unit.api.rpc.agentnotifiers.test_dhcp_rpc_agent_api \
+    neutron.tests.unit.api.v2.test_attributes \
     neutron.tests.unit.agent.metadata.test_driver \
     neutron.tests.unit.agent.test_rpc \
     neutron.tests.unit.agent.test_securitygroups_rpc \
@@ -175,8 +185,10 @@ commands = python -m testtools.run \
     neutron.tests.unit.agent.l3.test_router_processing_queue \
     neutron.tests.unit.agent.l3.test_namespace_manager \
     neutron.tests.unit.agent.l3.test_dvr_fip_ns \
+    neutron.tests.unit.agent.ovsdb.native.test_helpers \
     neutron.tests.unit.agent.common.test_config \
     neutron.tests.unit.agent.common.test_polling \
+    neutron.tests.unit.agent.common.test_utils \
     neutron.tests.unit.agent.linux.test_ip_lib \
     neutron.tests.unit.agent.linux.test_keepalived \
     neutron.tests.unit.agent.linux.test_daemon \
@@ -190,10 +202,15 @@ commands = python -m testtools.run \
     neutron.tests.unit.agent.linux.test_ip_monitor \
     neutron.tests.unit.agent.linux.test_iptables_manager \
     neutron.tests.unit.agent.linux.test_external_process \
+    neutron.tests.unit.agent.linux.test_dhcp \
+    neutron.tests.unit.agent.linux.test_async_process \
     neutron.tests.unit.agent.linux.test_ovsdb_monitor \
     neutron.tests.unit.agent.linux.test_bridge_lib \
     neutron.tests.unit.agent.linux.test_ip_link_support \
     neutron.tests.unit.agent.linux.test_interface \
+    neutron.tests.unit.agent.dhcp.test_agent \
+    neutron.tests.unit.test_manager \
+    neutron.tests.unit.test_service \
     neutron.tests.unit.test_auth \
     neutron.tests.unit.test_policy \
     neutron.tests.unit.extensions.v2attributes \
@@ -203,18 +220,21 @@ commands = python -m testtools.run \
     neutron.tests.unit.extensions.base \
     neutron.tests.unit.extensions.foxinsocks \
     neutron.tests.unit.extensions.extensionattribute \
+    neutron.tests.unit.extensions.test_servicetype \
     neutron.tests.unit.extensions.test_portsecurity \
+    neutron.tests.unit.extensions.test_providernet \
     neutron.tests.unit.callbacks.test_manager \
     neutron.tests.unit.hacking.test_checks \
     neutron.tests.unit.common.test_config \
     neutron.tests.unit.common.test_rpc \
-    neutron.tests.unit.common.test_log \
     neutron.tests.unit.common.test_ipv6_utils \
     neutron.tests.unit.cmd.test_ovs_cleanup \
     neutron.tests.unit.cmd.test_netns_cleanup \
     neutron.tests.unit.ipam.drivers.neutrondb_ipam.test_db_api \
     neutron.tests.unit.ipam.drivers.neutrondb_ipam.test_driver \
     neutron.tests.unit.ipam.test_subnet_alloc \
+    neutron.tests.unit.ipam.test_utils \
+    neutron.tests.unit.ipam.test_requests \
     neutron.tests.unit.notifiers.test_nova \
     neutron.tests.unit.notifiers.test_batch_notifier