rpc_workers can be set < 1 with the 'ovn' backend when no
agents other than the OVN agents are running to consume
these RPC notifications.
Add and apply a disable_notifications decorator on methods
which do RPC cast calls to agents; the decorator makes the
decorated method execute only when rpc_workers >= 1. This
patch does not change the default behavior; it utilizes the
rpc_workers config option to enable RPC notifications on
resource updates only when rpc_workers >= 1.
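A minimal sketch of such a decorator, assuming a standard
oslo.config setup (names are illustrative, not the exact
implementation):

    import functools

    from oslo_config import cfg


    def disable_notifications(function):
        """Skip agent RPC casts when no RPC workers consume them."""
        @functools.wraps(function)
        def wrapper(self, *args, **kwargs):
            # Only cast to agents when at least one RPC worker is
            # configured to consume the notifications.
            if (cfg.CONF.rpc_workers or 0) >= 1:
                return function(self, *args, **kwargs)
        return wrapper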
Also set rpc_workers=0 in the OVN jobs to cover this scenario.
Closes-Bug: #1889737
Closes-Bug: #1992352
Change-Id: I700fe2cd422bc1eb8b5144ec116e7f0a60238419
Running with a stricter .pylintrc generates a lot of
C0330 warnings (hanging/continued indentation). Fix
the ones in neutron/api.
Trivialfix
Change-Id: I1258b04f64a18036407e1d9de9ddca7472af0d11
When the segments plugin is enabled, we should return segment
details as they are part of the network.
Partial-Bug: #1956435
Partial-Bug: #1764738
Signed-off-by: Sahid Orentino Ferdjaoui <sahid.ferdjaoui@industrialdiscipline.com>
Change-Id: I1dab155bc812f8764d22e78ebb7d80aaaad65515
After a port update, the DHCP agent will be notified about the
changes only if one of the port's DHCP-related attributes has
changed. Such fields are:
* fixed_ips,
* MAC address,
* dns_domain,
* dns_name,
* dns_assignment,
* extra_dhcp_opts.
In other cases there is no reason to send notifications to the
agent. This results in fewer notifications to the DHCP agent and
fewer chances of a race condition between the DHCP and L2 agents
while switching ports from the DOWN to ACTIVE status and sending
notifications to nova.
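A minimal sketch of the attribute check described above (the
helper name and dict access are illustrative):

    DHCP_RELEVANT_ATTRS = ('fixed_ips', 'mac_address', 'dns_domain',
                           'dns_name', 'dns_assignment',
                           'extra_dhcp_opts')


    def _is_dhcp_notification_needed(original_port, updated_port):
        """Return True only if a DHCP-related attribute changed."""
        return any(original_port.get(attr) != updated_port.get(attr)
                   for attr in DHCP_RELEVANT_ATTRS)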
Closes-Bug: #1982367
Change-Id: If7990bdec435af76ad2e88fd4ea2bc24a255fd5a
Use the importlib.util module instead of imp, since the latter
is deprecated:
DeprecationWarning: the imp module is deprecated in favour of
importlib; see the module's documentation for alternative uses
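A minimal sketch of the replacement, using the documented
importlib.util API:

    import importlib.util


    def load_module_from_path(name, path):
        """Load a module from a file path without the imp module."""
        spec = importlib.util.spec_from_file_location(name, path)
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)
        return module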
Had to change the test setup to call super() first to get around
a config issue I was seeing locally that caused the entire class
of tests to fail:
oslo_config.cfg.NoSuchOptError: no such option api_extensions_path
in group [DEFAULT]
Closes-bug: #1981077
Change-Id: Ic171028a661c3f9f83f6758a57aaeab4450aa907
When an HA router isn't active on any L3 agent, the
_ensure_host_set_on_port method shouldn't try to update the port's
host to the host from which the RPC message was sent, as this can
be a host on which the router is in "standby" mode.
This method should only update the port's host to the router's
"active_host" if such an active_host has already been found.
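A minimal sketch of the guard, with hypothetical helper names for
the lookup and the port update:

    def _ensure_host_set_on_port(self, context, router_id, port):
        active_host = self._get_router_active_host(context, router_id)
        # Only bind the port to a confirmed active host; never to
        # the sender, which may host a standby instance.
        if active_host:
            self._update_port_host(context, port['id'], active_host)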
Depends-On: https://review.opendev.org/c/openstack/requirements/+/841489
Closes-Bug: #1973162
Closes-Bug: #1942190
Change-Id: Ib3945d294601b35f9b268c25841cd284b52c4ca3
convert_to_sanitized_binding_profile_allocation was added to Neutron
temporarily before [1] was merged and released in neutron-lib.
[1]: https://review.opendev.org/c/openstack/neutron-lib/+/813650
Related-Bug: #1922237
Change-Id: I953b96d97076cd6a80fff6e97e2fd956da737d46
When the new default policy rules and scope enforcement are
enabled, Neutron needs to properly handle not only the
PolicyNotAuthorized exception from the oslo_policy module but also
the InvalidScope exception.
This patch adds handling of that exception to the neutron policy
modules.
In the check() method from the neutron.policy module we call the
ENFORCER.enforce() method with do_raise=False, which means that
PolicyNotAuthorized isn't raised. Unfortunately it seems that
there is a bug in the oslo.policy module and InvalidScope is
raised even with do_raise=False.
For now, let's work around it in Neutron by properly handling the
InvalidScope exception in the check() method.
This workaround can be removed when bug [1] is fixed in
oslo.policy.
[1] https://bugs.launchpad.net/oslo.policy/+bug/1965315
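A minimal sketch of the workaround in a check()-style helper
(ENFORCER is the module-level policy enforcer mentioned above):

    from oslo_policy import policy as oslo_policy


    def check(context, action, target):
        try:
            return ENFORCER.enforce(action, target, context,
                                    do_raise=False)
        except oslo_policy.InvalidScope:
            # oslo.policy may raise InvalidScope even with
            # do_raise=False (bug [1] above); treat it as a
            # failed check.
            return False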
Partial-Bug: #1959333
Change-Id: I973f8896248c8222031c53343bb53ce48254da74
The agent-side code needs to consider three scenarios:
1. Non-DVR router: all related rules are applied in the
qrouter namespace.
2. DVR router with the local agent mode dvr_no_external:
all related rules are applied in the snat namespace.
3. DVR router with the local agent mode dvr: all related
rules are applied in the fip namespace.
Change-Id: Ie8729586d318be4a673858021a0116e09e193522
Partial-Bug: #1877301
The host parameter is needed there to filter subnets per segment
when the segments plugin is enabled.
When the DHCP agent requests information about networks and the
segments plugin is enabled, the subnets which belong to the
network are filtered based on the host passed as an argument to
the get_network_info() method.
But we never passed the host to that method, even when we should,
e.g. during the full sync of the DHCP agent, when it requests
details about each network.
This patch fixes that issue by passing the host parameter to that
method.
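A minimal sketch of the agent-side call (the RPC proxy attribute
names follow the usual DHCP agent conventions but are assumptions
here):

    # Pass the agent's host so the server can filter subnets per
    # segment for that host.
    network = self.plugin_rpc.get_network_info(
        network_id, host=self.conf.host)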
Closes-Bug: #1958955
Change-Id: Ib5eef501493f6735a47ea085196242a5807c4565
In patch [1] the get_network_info method was refactored, and that
causes a NameError in the DHCP agent when there is a "network"
object passed in kwargs and the network has subnets with segments.
See the related bug for details.
[1] https://review.opendev.org/c/openstack/neutron/+/820190
Closes-Bug: #1958955
Change-Id: Iad8d85c79f8b11a24b1bb1ca44c776e909b610c3
The OVS agent part of the Local IP feature was divided into
2 parts to make it easier for reviewers:
1. This patch adds the agent extension skeleton and sets up the
server <-> agent RPC communication mechanism via push
notifications of LocalIPAssociation objects create/delete.
It also shows how the extension treats those changes.
It may be called the extension "frontend".
2. The agent extension flows patch (the next one) deals with OVS
flows and can be called the extension "backend".
Partial-Bug: #1930200
Change-Id: I31cb4062b6a21b71c739ab202c60aa7002e4d36e
With the introduction of the port-resource-request-groups
extension, the format of binding-profile.allocation has changed.
Since the DB may contain port bindings that were created before
the introduction of the new format, it's necessary to perform an
upgrade check and sanitize the rows that are still using the older
format.
Partial-Bug: #1922237
See-Also: https://review.opendev.org/785236
Change-Id: I95e9e1bc553ac499d75c9280e45dfea61d135279
The goal of [1] is to continue the operation in case of failure
when removing the quota reservation. Any expired reservation will
be removed automatically by any driver.
If the DB transaction fails, it should affect only the reservation
being deleted. This is why this patch isolates the
"remove_reservation" method and guarantees it is called outside an
active DB session. That guarantees that, in case of failure, no
other DB operation will be affected.
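A minimal sketch of the isolation, assuming an enginefacade-style
independent transaction (the row-deletion helper is hypothetical):

    from neutron_lib.db import api as db_api


    def remove_reservation(context, reservation_id):
        # An independent writer transaction ensures a failure here
        # cannot roll back any caller's active DB session.
        with db_api.CONTEXT_WRITER.independent.using(context):
            _delete_reservation_row(context, reservation_id)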
This patch also partially reverts [2] but still checks the
security group rule quota when a new security group is created.
Instead of creating and releasing a quota reservation for the
security group rules created, now only the available quota limit
is checked before creating them. That won't prevent another
operation from creating security group rules in parallel and
exceeding the available quota. However, this is not guaranteed
with the current quota driver either.
[1]https://review.opendev.org/c/openstack/neutron/+/805031
[2]https://review.opendev.org/c/openstack/neutron/+/701565
Closes-Bug: #1943714
Change-Id: Id73368576a948f78a043d7cf0be16661a65626a9
This patch implements support for CRUD operations for QoS minimum
packet rate, for example:
DELETE /qos/policies/$POLICY_ID/minimum_packet_rate_rules/$RULE_ID
Placement or dataplane enforcement is not implemented yet.
Partial-Bug: #1922237
See-Also: https://review.opendev.org/785236
Change-Id: Ie994bdab62bab33737f25287e568519c782dea9a
It seems that using the default singleton=True in
routes.middleware.RoutesMiddleware, which leads to the use of a
thread-local RequestConfig singleton object, does not work well
with the eventlet monkey patching of the threading library which
we do in Neutron.
As a result it leaks memory in the neutron-api workers every time
a user makes an API request to a nonexistent API endpoint.
To avoid that memory leak, let's use singleton=False in that
RoutesMiddleware object, at least until the problem with the
thread-local singleton and eventlet monkey patching is solved.
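A minimal sketch of the change, using the documented
RoutesMiddleware constructor arguments (dispatch_func and mapper
stand in for the application's dispatcher and route mapper):

    import routes.middleware

    # Avoid the thread-local RequestConfig singleton that leaks
    # under eventlet monkey patching.
    router = routes.middleware.RoutesMiddleware(
        dispatch_func, mapper, singleton=False)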
Closes-Bug: #1942179
Change-Id: Id3a529248d3984506f0166bdc32e334127a01b7b
This parameter, sent by the DHCP agent, is needed to remove the
workaround method "_get_network_lock_id".
The removal of this method will be done in [1] in the Y release.
Related-Bug: #1732456
[1]https://review.opendev.org/c/openstack/neutron/+/800967
Change-Id: Ibd7fed33d314e901c69da33f42029f8ea67df98d
The quota driver ``ConfDriver`` was deprecated in the Liberty
release. ``NullQuotaDriver`` is created for testing, although it
could be used in production if no quota enforcement is needed.
However, because the Quota engine is not pluggable (it is an
extension that is always loaded), it could be interesting to make
it pluggable like any other plugin.
This patch also creates a Quota engine driver API class that
should be used by any Quota engine driver. Currently it is used in
the three in-tree drivers implemented: ``NullQuotaDriver``,
``DbQuotaDriver`` and ``DbQuotaNoLockDriver``.
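A minimal sketch of such a driver API class (the method set shown
is illustrative, not the full interface):

    import abc


    class QuotaDriverAPI(abc.ABC):
        """Interface that every Quota engine driver implements."""

        @abc.abstractmethod
        def get_tenant_quotas(self, context, resources, project_id):
            """Return the quota limits for a given project."""

        @abc.abstractmethod
        def limit_check(self, context, project_id, resources, values):
            """Check requested values against the quota limits."""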
Change-Id: Ib4af80e18fac52b9f68f26c84a215415e63c2822
Closes-Bug: #1928211
This patch switches over to callback payloads for PORT
AFTER_DELETE events.
Some shims were removed.
Change-Id: If69e37b84fe1b027777b1d673b3d08a6651a979e
This reverts commit 062336e59b.
Now that we have a proper fix for system_scope='all' in the
elevated context in neutron-lib, we can revert the temporary fix
made at the end of the Wallaby cycle.
Related-Bug: #1920001
Conflicts:
neutron/api/rpc/agentnotifiers/dhcp_rpc_agent_api.py
neutron/common/utils.py
neutron/db/address_group_db.py
neutron/services/segments/db.py
Change-Id: Ife9b647b403bdd76a8a99984ea8858bf95c96bc3
This patch switches the code over to the payload style of callbacks [1]
for PORT AFTER_CREATE events. In addition it adds a branch/shim to the
dhcp_rpc_agent_api to support both payload and kwarg style callbacks.
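A minimal sketch of a payload-style publish for this event, using
the neutron-lib callbacks API documented in [1] below:

    from neutron_lib.callbacks import events
    from neutron_lib.callbacks import registry
    from neutron_lib.callbacks import resources

    # Publish the AFTER_CREATE event with a DBEventPayload instead
    # of loose kwargs.
    registry.publish(resources.PORT, events.AFTER_CREATE, self,
                     payload=events.DBEventPayload(
                         context, resource_id=port['id'],
                         states=(port,)))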
NeutronLibImpact
[1]
https://docs.openstack.org/neutron-lib/latest/contributor/callbacks.html
Change-Id: I25d43d4f8f2390b07e0d11c631f894d88669bbe0
This patch switches the code over to the payload style of callbacks [1]
for ROUTER_INTERFACE events for those that are not using them yet.
The unit tests are also updated where needed to account for the
payload style callbacks and publish() method. In addition, a few
callback methods that use the retry_if_session_inactive() decorator are
separated out from the callback so that the context can still be
passed and detected by retry_if_session_inactive logic.
NeutronLibImpact
[1]
https://docs.openstack.org/neutron-lib/latest/contributor/callbacks.html
Change-Id: I8d9f8296952dfb10fcccd6afd72e90a5d4f379eb
This patch switches over to the payload style of callbacks for
NETWORK based events. As part of this change a few shims are needed
to handle cases where some callbacks don't yet use payloads and others
do. Once we move over to payloads for all callbacks the shims can be
removed.
NeutronLibImpact
Change-Id: I889364b5d184d47a79fe6ed604ce13a4b334acfa
Add enable_dhcp to make a filter that avoids unnecessary net_info
data transfer through RPC.
Change-Id: Ibcef366f5b1f4b7da4f47f1f538a17111da0faa1
Closes-Bug: #1552614
A DHCP notification is sent after each create/update/delete of a
network, subnet or port.
This notification currently has to retrieve the network from the
DB almost every time, which is quite a heavy DB request and hence
affects the performance of port and subnet CRUD.
This patch suggests 2 optimizations (sketched below):
- do not fetch the network if not needed (only fetch when
scheduling is needed)
- for port and subnet AFTER_CREATE events, pass the network dict
from the plugin
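A minimal sketch of the first optimization (the helper names are
illustrative):

    def _get_network(self, context, payload, network_id,
                     schedule_needed):
        # Reuse the network dict passed by the plugin when
        # available; only hit the DB when agent scheduling
        # actually requires it.
        network = payload.get('network')
        if network is None and schedule_needed:
            network = self.plugin.get_network(context, network_id)
        return network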
According to Rally tests these changes improve performance:
- port create ~20%
- port update ~20%
- subnet create ~15%
- port delete and subnet update/delete - not tested
Closes-Bug: #1923161
Change-Id: I0ab836ac09225f4f3ad435e9ceaf315018855d52
In the case when enforce_new_defaults is set to True and the new
policy rules are used, the context.is_admin flag isn't really
working as it was with the old rules.
But when an elevated context is needed, it means that we need a
context which has full rights to the system. So we should also set
the "system_scope" parameter to "all" to be sure that system scope
queries can always be done with such an elevated context.
This is needed e.g. when an elevated context is used to get some
data from the DB. In such a case we need the DB query to not be
scoped to a single project_id, and with the new defaults, to
achieve that, system_scope has to be set to "all".
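A minimal sketch of the temporary helper (a simplified version of
what such an elevation could look like):

    def get_elevated_context(context):
        admin_context = context.elevated()
        # With enforce_new_defaults, is_admin alone no longer
        # grants access across projects; the context must also be
        # scoped to the whole system.
        admin_context.system_scope = 'all'
        return admin_context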
The proper fix for that should be done in neutron-lib and is
already proposed in [1], but as we have frozen the neutron-lib
version for stable/wallaby, this neutron patch is a temporary fix
for the issue.
We can revert this patch as soon as we are in the Xena development
cycle and [1] is merged and released.
[1] https://review.opendev.org/c/openstack/neutron-lib/+/781625
Related-Bug: #1920001
Change-Id: I0068c1de09f5c6fae5bb5cd0d6f26f451e701939
Support security group rules with remote_address_group_id in the
openvswitch firewall. This change reuses most of the firewall
functions handling remote security groups to also process remote
address groups. The conjunctive flows for a rule with
remote_address_group_id are similar to those with remote_group_id
but have different conj_ids.
Change-Id: I8c69e62ba56b0d3204e9c12df3133126071b92f7
Implements: blueprint address-groups-in-sg-rules
When processing port events (create, update, delete), the port
provisioning (port creation) has priority over the other events
[1]. As reported in the related bug, if a port deletion with an IP
address and another port creation with the same IP address arrive
at the DHCP agent, those events can be processed in the same
queue. Because of the creation event priority, even when this
event arrived after the deletion event, it will be processed
first. That clashes with the DHCP agent cache, which contains a
port (not yet deleted) with the same IP address, and that triggers
an unwanted resync.
This patch implements specific logic to store the events in
"ResourceProcessingQueue" (which uses "PriorityQueue" [2]). When a
port event arrives, the event comparison method checks the
(subnet, fixed_ips) tuple set of both elements. If there is a
coincidence, that means those ports are the same or are using the
same IP addresses (the race condition explained in the bug).
In this case, the priority is defined only by the timestamp; that
means the events are processed in order of arrival.
Because the Neutron server does not allow two ports in the same
subnet with the same IP address, the order of the events is
guaranteed. In the case explained in the bug, the deletion event
will be processed first.
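A minimal sketch of the comparison logic (the class and attribute
names are illustrative):

    import functools


    @functools.total_ordering
    class PortEvent:
        def __init__(self, priority, timestamp, fixed_ips):
            self.priority = priority
            self.timestamp = timestamp
            # Set of (subnet_id, ip_address) tuples for the port.
            self.fixed_ips = fixed_ips

        def __eq__(self, other):
            return ((self.priority, self.timestamp) ==
                    (other.priority, other.timestamp))

        def __lt__(self, other):
            if self.fixed_ips & other.fixed_ips:
                # Same port, or the same IP address: process in
                # order of arrival, ignoring the event priority.
                return self.timestamp < other.timestamp
            return ((self.priority, self.timestamp) <
                    (other.priority, other.timestamp))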
[1]https://review.opendev.org/c/openstack/neutron/+/626830
[2]https://docs.python.org/3/library/queue.html#queue.PriorityQueue
Closes-Bug: #1913723
Change-Id: I89438feae3c0244f6da5e6a2a035d45b956ac247
This change adds code to retrieve, for the agent, the security
group ids affected by an update or deletion of an address group.
It also adds event notifications for adding and removing addresses
from address groups.
Co-authored-by: Hang Yang <hangyang@verizonmedia.com>
Change-Id: I34766b96cb775356664f5e0d48a08a22ac6898e2
A router HA port may be deleted concurrently while the plugin is
trying to update it. This patch catches the known exceptions, as
sketched below.
We should not use `plugin.update_port_statuses` because:
1. plugin.update_port_statuses hides all exceptions, whether or
not the port exists.
2. The code just needs to catch the port-not-found error, but let
all other exceptions be raised if the port still exists.
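A minimal sketch of the narrow exception handling (the exact
exception set and plugin call in the patch may differ):

    from neutron_lib import exceptions as n_exc
    from sqlalchemy.orm import exc as orm_exc

    try:
        self._core_plugin.update_port(context, port_id, port_body)
    except (n_exc.PortNotFound, orm_exc.StaleDataError):
        # The HA port was deleted concurrently; nothing to update.
        pass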
Closes-Bug: #1906375
Change-Id: Id5d9c99be3bd6854568d2b1baa86c25c0cfd4756
A partially upgraded Neutron cluster, where neutron-server has a
newer version while the Neutron agents do not, does not run well
after an RPC data structure upgrade. This patch bumps the security
group related RPC version between neutron-server and the agents.
A partially upgraded Neutron cluster will now explicitly raise an
error; the RPC versions should be aligned.
Closes-bug: #1903531
Related-bug: #1867119
Change-Id: I6cb2ba05fa3337be46eb01f2d9f869efa41e4db6