This function isn't necessary. The JSON encoding of a
named tuple already turns into a plain list:

    ports = [l2pop_rpc.PortInfo('abcdef', '188.8.131.52')]
    json.dumps(ports) == json.dumps([(mac, ip) for (mac, ip) in ports])
An argument could be made that PortInfo could later grow a field
that we would not want to serialize, in order to remain backward
compatible. However, doing so would break every construction of
PortInfo objects on the agents once they got the updated PortInfo
code that requires the new field. So there is currently no way to
add a new field to PortInfo without breaking either legacy clients
or new clients.
Given that, let's stop doing the JSON encoder's job.
This patch also adds a sanity unit test to make sure the json
serialization method used in oslo does not break on the named tuples.
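The equivalence above can be checked directly; a minimal sketch, with a plain collections.namedtuple standing in for l2pop_rpc.PortInfo:

```python
import json
from collections import namedtuple

# Stand-in for l2pop_rpc.PortInfo: a named tuple of (mac_address, ip_address).
PortInfo = namedtuple('PortInfo', ['mac_address', 'ip_address'])

ports = [PortInfo('abcdef', '188.8.131.52')]

# json.dumps serializes a named tuple exactly like the equivalent plain
# tuple, i.e. as a JSON list -- no custom encoder needed.
assert json.dumps(ports) == json.dumps([(mac, ip) for (mac, ip) in ports])
print(json.dumps(ports))  # → [["abcdef", "188.8.131.52"]]
```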
If the required extensions are missing, we currently log an error
that is, in practice, ignored. The unfulfilled requirement will
almost certainly lead to other failures later, so we might as well
fail fast.
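Failing fast can be as simple as raising at startup instead of logging. An illustrative sketch, not Neutron's actual code; the function name, exception type, and message are assumptions:

```python
def check_required_extensions(required, loaded):
    """Fail fast if any required extension is missing.

    Raising here with a clear message beats logging an error that will
    be ignored while later operations fail in confusing ways.
    """
    missing = set(required) - set(loaded)
    if missing:
        raise RuntimeError(
            'Required extensions are missing: %s' % ', '.join(sorted(missing)))

# All requirements fulfilled: passes silently.
check_required_extensions(['router', 'dns-integration'],
                          ['router', 'dns-integration'])
```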
This patch also cleans up some <barf>dns-integration nonsense</barf>
within the ML2 framework: the extension must not be declared statically,
as it is loaded by the extension manager, and this fixes the lousy
unit tests we have to live with. As for the db base plugin, some cleanup
is still overdue, but it will have to be taken care of in a follow-up
patch.
The pair (router_id, l3_agent_id) is expected to be unique in the
ha_router_agent_port_bindings table. Since it turned out that
duplicates could be added, this change adds a UniqueConstraint for
those columns.
Having duplicates is odd and leads to problems during sync_routers.
DBReferenceError will be caught in create_ha_port_and_bind, as
L3HARouterAgentPortBinding rows are created with l3_agent_id=None
in _create_ha_port_binding (l3_hamode_db.py)
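The effect of the new constraint can be illustrated with plain SQL; this is a sketch using sqlite3 and a trimmed-down stand-in table, not the actual Alembic migration:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
# Trimmed-down stand-in for ha_router_agent_port_bindings with the
# unique constraint on (router_id, l3_agent_id).
conn.execute("""
    CREATE TABLE ha_router_agent_port_bindings (
        port_id TEXT PRIMARY KEY,
        router_id TEXT NOT NULL,
        l3_agent_id TEXT,
        UNIQUE (router_id, l3_agent_id)
    )
""")
conn.execute(
    "INSERT INTO ha_router_agent_port_bindings VALUES ('p1', 'r1', 'a1')")
try:
    # A second binding of the same router to the same agent is now rejected.
    conn.execute(
        "INSERT INTO ha_router_agent_port_bindings VALUES ('p2', 'r1', 'a1')")
except sqlite3.IntegrityError as exc:
    print('duplicate rejected:', exc)
```

Note that rows with a NULL l3_agent_id (as created in _create_ha_port_binding) do not collide under a SQL UNIQUE constraint, since NULL never compares equal to NULL.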
A while ago we copied Tempest networking API tests into the
Neutron repo, and along came thousands of lines of Tempest
testing infrastructure (neutron.tests.tempest). For a while we
periodically refreshed our fork via:
I think it's time we move away from that model by eliminating
the fork. We do this by deleting unused code and importing the
rest from tempest_lib. There's some Tempest code still not
moved from Tempest to tempest_lib in tempest.common. I think
it's preferable to import that code than to copy it, and Tempest
cores mostly agree. Manila and Ironic also do the same.
To be able to import from tempest I added it as a requirement:
Since Tempest is not on PyPI, I had to get it from git. Only the api
tests environment needs Tempest, so instead of adding it to
test-requirements, I added it specifically to the api and
neutron.tests.tempest.test and neutron.tests.tempest.common.*
still remain. These are tightly coupled with one another, and
sadly, since Neutron forked the Tempest code, Tempest has made significant
changes to those files that also require changes to the test files.
I aim to get rid of the Neutron fork of these files in a follow-up
patch.
Also fixed import grouping in test files so that it's std libs,
3rd party libs, and then Neutron code.
* Removed neutron.tests.tempest.config:
- We only added one option after the fork. I created a new group
called 'neutron_plugin_options' and moved the new option to that
group. This is in preparation for the Tempest plugin architecture,
where you're supposed to add new config options to a new group
and not to existing configuration groups. Note that this is
obviously a backward incompatible change, but it's to an option
added in the same cycle.
* Removed neutron.tests.tempest.test and neutron.tests.tempest.common.
- This introduced an API change to the way we access Keystone,
which required mechanical changes to a few tests (create_tenant
calls need a different client now).
- The way Tempest manages primary, admin and alternative tenant
credentials was changed after we forked, which required another
mechanical change to a few tests.
* Cut all of the Keystone clients we don't need. We only need
to create/delete tenants, the other clients were used in Tempest by
actual Keystone tests.
* Changed neutron.tests.api.base.BaseNetworkTest:
- Re-implemented get_client_manager so that it returns the Neutron
clients manager and not the one in the Tempest repo.
- Updated it from the Tempest repo so that it uses the new way
to manage credentials (since it now uses the Tempest test base
class and not our out-of-date forked copy).
We need the relationship between port and floating IP because the
quota update happens on the "after_delete" event, and the current
cascade removal of the floating IP does not trigger "after_delete"
for the floating IP. A cascade on the ORM-level "delete" must be added.
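The distinction can be sketched without SQLAlchemy: a DB-level cascade removes rows behind the ORM's back, so no per-object event fires, while an ORM-level "delete" cascade removes each child through the session and triggers the callback. All names below are made up for illustration:

```python
# Sketch of why ORM-level "delete" cascade matters: quota is updated in
# an "after_delete" callback, which only fires when each floating IP is
# deleted through the ORM, not by a database-level cascade.
deleted_events = []

def after_delete(obj):
    deleted_events.append(obj)  # here the quota update would happen

class Session:
    def delete_port_db_cascade(self, port, floating_ips):
        # DB-level cascade: children vanish inside the database; the ORM
        # never sees them, so no events fire.
        floating_ips.clear()

    def delete_port_orm_cascade(self, port, floating_ips):
        # ORM-level cascade: each floating IP is deleted individually,
        # so "after_delete" fires once per object.
        for fip in list(floating_ips):
            floating_ips.remove(fip)
            after_delete(fip)

session = Session()
fips = ['fip-1', 'fip-2']
session.delete_port_orm_cascade('port-1', fips)
print(deleted_events)  # → ['fip-1', 'fip-2']

# With the DB-level cascade, no event fires and quota goes stale.
before = len(deleted_events)
session.delete_port_db_cascade('port-2', ['fip-3'])
assert len(deleted_events) == before
```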
These features have their required extensions mixed up. There is
no reason why subnet pools (a core extension) should depend on a
non-core extension like router. DNS, on the other hand, does
indeed depend on it.
OVS agent tunnel interfaces are named via:
'%s-%s' % (tunnel_type, destination_ip)
This means that the tunnel interface name is not unique if
two OVS agents on the same machine try to form a tunnel with a
third agent. This happens during full stack tests that start
multiple copies of the OVS agent on the test machine.
Thus, for full stack tests, to make sure that the tunnel
interface names created by ovs agents are globally unique, they
will have the following format:
'%s-%s-%s' % (tunnel_type, hash of source IP, hash of dest IP)
Since this patch centralizes the formation of the tunnel interface
name in a dedicated method that is monkey patched by the full stack
framework, a unit test has been added for this method.
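A minimal sketch of such a naming method; the hash function, length budget, and helper name are assumptions, not the exact Neutron implementation (Linux caps interface names at 15 characters):

```python
import hashlib

# Linux network interface names are limited to 15 characters
# (IFNAMSIZ - 1), so each IP hash is truncated to fit.
MAX_DEV_NAME_LEN = 15

def get_tunnel_name(tunnel_type, local_ip, remote_ip):
    # e.g. 'vxlan' leaves (15 - 5 - 2) // 2 = 4 hex chars per hash.
    hashlen = (MAX_DEV_NAME_LEN - len(tunnel_type) - 2) // 2
    src_hash = hashlib.sha1(local_ip.encode()).hexdigest()[:hashlen]
    dst_hash = hashlib.sha1(remote_ip.encode()).hexdigest()[:hashlen]
    return '%s-%s-%s' % (tunnel_type, src_hash, dst_hash)

# Two agents on the same test host now get distinct names for tunnels
# to the same destination, because their source IPs differ.
n1 = get_tunnel_name('vxlan', '10.0.0.1', '10.0.0.3')
n2 = get_tunnel_name('vxlan', '10.0.0.2', '10.0.0.3')
assert n1 != n2
assert len(n1) <= MAX_DEV_NAME_LEN
```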
Co-Authored-By: Mathieu Rohon <email@example.com>
The goal is to extract the common agent code from the linuxbridge agent
to share this code with other agents (e.g. sriov and the new macvtap
agent). This is a first step in the direction of a so-called modular
l2 agent.
Therefore all linuxbridge implementation specifics are moved into the
LinuxBridgeManager class. The manager class will be passed as an
argument into the common agent loop instead of being instantiated in
its constructor. In addition, the network_maps and the updated_devices
map have been moved into the rpc class.
A clear manager interface has been defined for the communication
between the common agent loop and the impl specific manager class.
In a follow up patchset, the common agent loop will be moved into a
new file. This has not yet happened to simplify tracking the code
changes during review.
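The shape of such a split can be sketched as an abstract manager interface consumed by a generic loop; the class and method names here are illustrative, not the final interface:

```python
import abc

class CommonAgentManager(abc.ABC):
    """Implementation-specific half of the agent (e.g. linuxbridge,
    sriov, macvtap); the common loop only talks to this interface."""

    @abc.abstractmethod
    def get_all_devices(self):
        """Return the set of devices currently present on the host."""

    @abc.abstractmethod
    def ensure_port_admin_state(self, device, admin_state_up):
        """Apply the port's admin state to the backing device."""

class CommonAgentLoop:
    # The manager is injected, not instantiated here, so the same loop
    # can drive any implementation.
    def __init__(self, manager):
        self.manager = manager

    def scan_devices(self, previous):
        current = self.manager.get_all_devices()
        return {'added': current - previous, 'removed': previous - current}

class FakeManager(CommonAgentManager):
    """Trivial implementation to exercise the loop."""
    def __init__(self, devices):
        self.devices = devices
    def get_all_devices(self):
        return set(self.devices)
    def ensure_port_admin_state(self, device, admin_state_up):
        pass

loop = CommonAgentLoop(FakeManager({'tap1', 'tap2'}))
diff = loop.scan_devices(previous={'tap1'})
print(diff)  # → {'added': {'tap2'}, 'removed': set()}
```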
Currently, the autonested_transaction method in db.api decides whether
begin(nested=True) or begin(subtransactions=True) should be used by
catching an exception. Instead, it can simply check whether there is
an active session.
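The change can be sketched like this; a simplification with a fake session class, whereas the real code works on SQLAlchemy sessions via oslo.db:

```python
import contextlib

class FakeSession:
    """Tiny stand-in for a SQLAlchemy session, just enough for the sketch."""
    def __init__(self, active=False):
        self.is_active = active
        self.calls = []

    @contextlib.contextmanager
    def begin(self, nested=False, subtransactions=False):
        self.calls.append('nested' if nested else 'subtransactions')
        yield

@contextlib.contextmanager
def autonested_transaction(session):
    # Decide up front instead of catching an exception: if a transaction
    # is already active, open a SAVEPOINT (nested=True); otherwise start
    # a normal subtransaction.
    kwargs = {'nested': True} if session.is_active else {'subtransactions': True}
    with session.begin(**kwargs):
        yield

sess = FakeSession(active=True)
with autonested_transaction(sess):
    pass
print(sess.calls)  # → ['nested']
```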
The method name and docstring are confusing: the method does not
actually delete any namespaces, it just returns info about which dvr
routers should be removed from which agents.
The patch renames the method and updates its docstring accordingly.
During a port list operation, a port and its parent network
may be concurrently deleted from the database after they have
been retrieved from the DB but before policy is enforced.
Then when the policy engine tries to do a get_network to check
network ownership for a port on a network that no longer exists,
it will encounter a NetworkNotFound exception from the core plugin.
This exception was being propagated all the way up, turning the whole
API operation into a 404, which made no sense in the context of a
list operation.
This patch adjusts the logic to catch any NotFound exceptions during
this processing and convert them into a RetryRequest to trigger the
API to restart the operation. At this point the objects will be gone
from the database so the problematic items will not be passed to the
policy engine for enforcement.
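The adjusted logic looks roughly like this; a sketch with stand-in exception classes, whereas the real code uses Neutron's NotFound exceptions and oslo.db's RetryRequest:

```python
class NotFound(Exception):
    """Stand-in for Neutron's NotFound (e.g. NetworkNotFound)."""

class RetryRequest(Exception):
    """Stand-in for oslo.db's RetryRequest: tells the API layer to
    restart the whole operation."""
    def __init__(self, inner):
        self.inner = inner

def enforce_policy_on_items(items, enforce):
    results = []
    for item in items:
        try:
            results.append(enforce(item))
        except NotFound as exc:
            # The object (or its parent network) vanished between the DB
            # read and policy enforcement: restart the operation instead
            # of returning a bogus 404 for the whole list. On retry the
            # deleted items are gone from the DB and never reach the
            # policy engine.
            raise RetryRequest(exc)
    return results

def enforce(item):
    if item == 'gone':
        raise NotFound(item)
    return item

try:
    enforce_policy_on_items(['ok', 'gone'], enforce)
except RetryRequest as exc:
    print('retrying after:', exc.inner)
```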
Change I192793aa433606a1508f24564c54164f961018d9 introduced a way
to handle DB objects that don't have 'id' as a primary field.
However, create_object should be modified not to add the 'id' field
if it is not present in the model.
During a floating IP association, the tenant making the request
may not always be the owner of the router. To make the association,
Neutron needs to query the router details internally but needs to
use an elevated context to do so. Otherwise, the user sees a
cryptic error stating that the router doesn't exist.
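The fix pattern is to perform the internal lookup with an elevated (admin) context rather than the caller's own. A sketch: `elevated()` mirrors the behavior of Neutron's context API, but the context class, router store, and function names here are made up:

```python
class Context:
    """Minimal stand-in for a request context."""
    def __init__(self, tenant_id, is_admin=False):
        self.tenant_id = tenant_id
        self.is_admin = is_admin

    def elevated(self):
        # Return an admin copy of this context.
        return Context(self.tenant_id, is_admin=True)

ROUTERS = {'r1': {'tenant_id': 'owner-tenant'}}

class RouterNotFound(Exception):
    pass

def get_router(context, router_id):
    router = ROUTERS.get(router_id)
    # Non-admin callers only see their own routers, so a foreign router
    # looks like it does not exist at all.
    if router is None or (not context.is_admin
                          and router['tenant_id'] != context.tenant_id):
        raise RouterNotFound(router_id)
    return router

def associate_floatingip(context, router_id):
    # Internal lookup with an elevated context: a tenant who does not
    # own the router no longer gets a cryptic "router not found".
    return get_router(context.elevated(), router_id)
```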
The Alembic migration documentation for developers is updated to explain
the need to update neutron/db/migration/models/head.py when adding
new models.
After the upgrade of Gerrit the message field works as expected
so the review tool can use it instead of filtering using the
topic. This implies that having the bug number in the commit
message is enough for the patch to be included in the dashboard.
The current values of min:3 and max:10 mean radvd is sending
an RA about every 7 seconds, which can be excessive when we
have thousands of routers. Let's relax it by 10x, since most
VMs will send a Router Solicitation at boot, obviating the need
for a small interval.
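The arithmetic behind the numbers: radvd sends unsolicited RAs at a random interval between the configured min and max, so the mean gap is roughly their average.

```python
# Old defaults: MinRtrAdvInterval=3, MaxRtrAdvInterval=10.
old_mean = (3 + 10) / 2      # ~6.5s, "about every 7 seconds"

# Relaxed by 10x: 30 and 100.
new_mean = (30 + 100) / 2    # 65s between RAs on average

print(old_mean, new_mean)  # → 6.5 65.0
```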