Today Nova updates the mac_address of a direct-physical port to reflect
the MAC address of the physical device the port is bound to, but this
can only be done before the port is bound. As a consequence, during a
migration Nova is not able to update the MAC when the port is bound to
a different physical device on the destination host.
This patch extends port binding logic for direct-physical ports to allow
providing the MAC address of the physical device via the binding profile.
If it is provided then Neutron overwrites the value of the mac_address
field of the port with the value from the active binding profile.
Also, when the port is unbound or the MAC address is removed from the
active binding profile, Neutron resets the mac_address field of the
port to a generated MAC to avoid duplicate MAC issues when another
port is bound to the same physical device.
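A hypothetical sketch of the binding profile carrying the device MAC
(the key name used here is an assumption of this example, not taken
from this patch):

    binding_profile = {
        # MAC of the physical device the port is bound to (key assumed)
        "device_mac_address": "52:54:00:aa:bb:cc",
    }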
The shim API extension for this change is being proposed in
I54b4c85ffc4856fba7ad5e9e29f77f74815e1275 in neutron-lib.
Depends-On: https://review.opendev.org/c/openstack/neutron-lib/+/831935
Closes-Bug: #1942329
Change-Id: Ib0638f5db69cb92daf6932890cb89e83cf84f295
When OVN is clustered, the connection can be set to multiple
addresses, and in that case the inactivity probe cannot currently be
set correctly. This patch fixes it.
Closes-bug: #1958364
Change-Id: I5f83d6f47dc60b849cca5830ec3f77c15a446530
Enabled ``DbQuotaDriverNull`` as a production quota driver. This
driver neither enforces any quota nor has access
to the database. When using this quota driver, the API will return
the default empty values expected from the ``QuotaDriverAPI`` class.
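A minimal illustration of selecting this driver in neutron.conf (the
option name and class path are assumed from the current in-tree
layout):

    [QUOTAS]
    quota_driver = neutron.db.quota.driver_null.DbQuotaDriverNull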
Closes-bug: #1960032
Change-Id: Iafa24753e657746a8b8165b5a63c17de9a9ba791
Signed-off-by: Jakub Libosvar <libosvar@redhat.com>
Co-Authored-By: Rodolfo Alonso Hernandez <ralonsoh@redhat.com>
The validation is intended mostly for tests and doesn't make much
sense when running the migration in production, because there are
likely already running workloads. This patch changes the default to
False so migration validation must be explicitly requested.
Change-Id: I5470f61a5e0b55bf682526208c3f57dc0ca6ffd5
Signed-off-by: Jakub Libosvar <libosvar@redhat.com>
Extension "uplink-status-propagation" does not allow to modify existing
ports. This extension only enables the creation of new ports with
this new flag.
Similar to [1], this patch changes the default behaviour of the
exiting ports: if no "propagate_uplink_status" flag is present, "True"
is returned now. The aim of this change is to enable this feature for
all existing ports, that is usually the aim of an administrator when
enables this extension.
[1] https://bugs.launchpad.net/neutron/+bug/1888487
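For illustration, after this change an existing port created before
the extension was enabled is expected to report the attribute as
enabled (response trimmed, values illustrative):

    GET /v2.0/ports/{port_id}
    {
        "port": {
            "propagate_uplink_status": true
        }
    }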
Closes-Bug: #1967881
Related-Bug: #1888487
Change-Id: Ica5b76e0a9a5ae12f764c66be259d7f3cd5b248b
This patch implements router gateway IP QoS based on meters, using
the existing plugin and extension; only the driver side is different.
Closes-Bug: #1893625
Co-Authored-By: zhanghao <hao.zhang.am.i@gmail.com>
Co-Authored-By: Rodolfo Alonso Hernandez <ralonsoh@redhat.com>
Change-Id: I46864b9234af64f190f6b6daebfd94d2e3bd0c17
Add file to the reno documentation build to show release notes for
stable/yoga.
Use pbr instruction to increment the minor version number
automatically so that master versions are higher than the versions on
stable/yoga.
Sem-Ver: feature
Change-Id: I83a7081a2aaaa0cc4812ba823a9a91f48149556c
Added support for filtering the QoS rule type list command.
Two new filter flags are added:
- all_supported: if True, the listing call will print all QoS rule
types supported by at least one loaded mechanism driver.
- all_rules: if True, the listing call will print all QoS rule types
supported by the Neutron server.
Both filter flags are mutually exclusive and optional.
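As an illustration (query form assumed; the flag names come from this
change), the filters would be used on the existing rule type listing
endpoint:

    GET /v2.0/qos/rule-types?all_supported=True
    GET /v2.0/qos/rule-types?all_rules=True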
Depends-On: https://review.opendev.org/c/openstack/neutron-lib/+/827533
Closes-Bug: #1959749
Change-Id: I41eaab177e121316c3daec34b309c266e2f81979
Traditionally it has been the CMS's (in OpenStack's case, Nova's)
responsibility to create Virtual Interfaces (VIFs) as part of the
instance life cycle, and subsequently manage plug/unplug operations
on the Open vSwitch integration bridge.
With the advent of SmartNIC DPUs, which are connected to multiple
distinct CPUs, we can have a topology where the instance runs on one
host while Open vSwitch and OVN run on a different host, the
SmartNIC DPU control plane CPU.
One of the main use cases for having this topology is security
where we treat the hypervisor host as untrusted and prohibit
direct communication between the hypervisor host and the SmartNIC
DPU control plane host. In addition to that control facilities
such as switchdev devices are only visible from the SmartNIC DPU
control plane CPUs.
Adds support for binding ports of type VNIC_REMOTE_MANAGED by
looking up the chassis based on the serial number that Nova provides
in the binding_profile.
The information required by the OVN controller to successfully look
up and plug the representor port is provided as options on the LSP,
as defined by the representor plug provider documentation [0][1].
0: https://docs.ovn.org/en/stable/topics/vif-plug-providers/vif-plug-providers.html
1: https://github.com/ovn-org/ovn-vif/blob/main/Documentation/topics/vif-plug-providers/vif-plug-representor.rst
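A rough, hypothetical illustration of the binding profile Nova is
expected to pass for such a port (the exact key set shown here is an
assumption of this example, not taken from this patch):

    binding_profile = {
        # serial number used to look up the OVN chassis
        "card_serial_number": "AB0123456789",
        # additional keys assumed for representor plugging
        "pf_mac_address": "0c:42:a1:00:00:01",
        "vf_num": 1,
    }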
Partial-Bug: #1932154
Depends-On: I496db96ea40da3bee5b81bcee1edc79e1f46b541
Depends-On: I83a128a260acdd8bf78fede566af6881b8b82a9c
Change-Id: Icc6c2d0f7f8f5cc94997db6244175a8e8884789f
Introduce a new API extension to enable GET, PUT and DELETE
operations on QoS minimum packet rate rule without specifying
policy ID.
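Hypothetically, assuming the new alias resource follows the pattern
of the existing QoS rule alias extensions (the path below is an
assumption, not the definitive URL), a rule could then be addressed
directly:

    GET /v2.0/qos/alias-minimum-packet-rate-rules/{rule_id}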
Partial-Bug: #1922237
See-Also: https://review.opendev.org/785236
Change-Id: Ia083b5ac98c9e18ddbcdd2e0fc46f2f8432a628c
Now "L3AgentExtensionsManager" lists loaded extension, checking if
they inherit from "neutron_lib.agent.l3_extension.L3AgentExtension".
If any extension does not, the L3 agent raises an exception and exits.
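A minimal sketch of the kind of check described above (function and
variable names are illustrative, not the actual manager code):

    from neutron_lib.agent import l3_extension

    def check_l3_extensions(extensions):
        # Fail fast if a loaded extension does not implement the
        # L3AgentExtension interface.
        for ext in extensions:
            if not isinstance(ext.obj, l3_extension.L3AgentExtension):
                raise RuntimeError(
                    "Extension %s does not inherit from "
                    "L3AgentExtension" % ext.name)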
Closes-Bug: #1951569
Change-Id: I3ce4858cef9b3a3d7eab005dd1ad2bb3b5ef6ef3
Floating IPs now have information about the QoS policy of the
external network. The OVN QoS extension will use this network QoS
policy if there is no floating IP QoS policy.
Partial-Bug: #1950454
Change-Id: I380a130d97e8bfe54caa5f3a129877507d1ce2a6
Added a check for the OVN SB schema, looking for "virtual_parent" in
the "Port_Binding" table (added in OVN SB schema 2.5).
This patch removes the code supporting OVN without virtual ports.
It is assumed that the "virtual_parent" field is present in the
"Port_Binding" table.
Closes-Bug: #1949496
Change-Id: I3d01f58dca570537b5e754b331ca4809a7161ae2
The [agent] veth_mtu parameter has been unused since the [ovs]
use_veth_interconnection parameter was removed by [1] during Wallaby.
[1] https://review.opendev.org/c/openstack/neutron/+/759947
This change formally deprecates the parameter so that we can remove
it in a future cycle.
Change-Id: Ib85959fbc06928a49df7ea104eae3aca3f04e091
Closes-Bug: #1957180
Neither Neutron nor other active projects use the "PortBindingMixin"
class or the "portbindingports" table anymore.
Closes-Bug: #1956980
Change-Id: I34424a271f6c66cd99852c6109a96a4dcf374913
Added a check for the OVN NB schema, looking for the "options" field
in the "NAT" table (added in OVN NB schema 5.17).
This patch removes the code supporting OVN without stateless NAT
rules. It is assumed that the "options" field in the "NAT" table is
always present.
Closes-Bug: #1949494
Change-Id: Ib3b6dd68009ab635627168b11626d7e7c548ee2f
Same as in other ML2 plugins (OVS, Linux Bridge), the OVN mechanism
driver should allow only one physical network per bridge. The rule
"one network, one bridge" should be present in OVN too.
By allowing only one physical network per bridge, Neutron prevents
having two networks with subnets with the same CIDR in the same bridge.
Currently this is possible and the CIDR clash is not prevented (nor
should it be by the API). This architectural limitation prevents this
situation.
This limitation is already present in deployment tools such as
TripleO.
Closes-Bug: #1956476
Change-Id: I74a2ca9a344a93219deb94d60247478ee3200659
When ovn/dns_servers contains IPv6 DNS nameservers, they were also
added to the IPv4 DHCP options, and because of this an invalid
nameserver (the last 4 octets of an IPv6 address) was set in the
instances.
This patch filters IPv4/IPv6 DNS nameservers and sets the
dhcpv4/dhcpv6 options accordingly.
Also, when dns_nameservers are not set for IPv6 subnets, get them
from the ovn/dns_servers config or the system nameservers, just like
it is done for IPv4 subnets. Updated get_system_dns_resolvers to pick
both valid IPv4 and IPv6 IPs; this also requires bumping the
oslo.utils minimum version to 4.8.0 to use the strict option for
IPv4 [1].
Additionally, fix some unit tests which were setting IPv4 DNS
nameservers on IPv6 subnets, which is not allowed by the API.
[1] https://github.com/openstack/oslo.utils/commit/3288539
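A minimal sketch of the filtering idea (helper name is illustrative;
the strict IPv4 check is the oslo.utils 4.8.0 feature referenced
above):

    from oslo_utils import netutils

    def split_dns_servers(dns_servers):
        # Keep only well-formed IPv4 addresses for the DHCPv4 options
        # and IPv6 addresses for the DHCPv6 options.
        v4 = [ip for ip in dns_servers
              if netutils.is_valid_ipv4(ip, strict=True)]
        v6 = [ip for ip in dns_servers if netutils.is_valid_ipv6(ip)]
        return v4, v6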
Closes-Bug: #1951816
Change-Id: I9f914e721201072e43a8c6c266ed97ca85fcc13d
It was added temporarily to keep compatibility with 3rd party code
that uses the Neutron interface driver, but it was said that the old,
deprecated way of calling the "plug_new" method would be removed
after the "W" release. Now we are well past the "W" release, so it's
time to do some cleaning there.
Related-Bug: #1879307
Change-Id: I03214079f752c7efe6611f2e928f32652fe681bc
Today the sriov qos service plugin blindly blocks creating ports
with minimum bandwidth qos and the direct_physical vnic_type. This
was originally added when only dataplane enforcement was in the scope
of the qos service plugin. However, in the last several releases we
added placement enforcement for this qos rule regardless of the
vnic_type. So blindly blocking the port creation now prevents using
the placement enforcement for this rule for direct_physical ports.
This patch removes this limitation by marking minimum bandwidth as a
supported rule for the sriov qos service plugin. The limitation that
data plane enforcement is not supported for this rule remains. The
agent will not even try to apply any kind of rules to these ports, as
port binding is not forwarded to the sriov agent at all.
The documentation is extended to explain that placement enforcement
now works while data plane enforcement is still not supported.
This is somewhat similar to the case when support for the egress
direction was added to the minimum bandwidth rule, while the sriov
data plane enforcement was not (and could not be) implemented for
this direction in the sriov agent. Today the sriov agent simply
ignores the egress direction rules of the minimum bandwidth qos rule
when applying the data plane enforcement.
Closes-Bug: #1949877
Change-Id: I20ad32eac414ff90b551bff940d92cbcfa848101
While writing the 'ndp_proxy' service plugin, I found I couldn't get
enough information about the router from the callback system (such
as the original request body sent by the user). So, to make it easier
to write service plugins related to the router plugin, I commit this
patch.
This patch proposes two changes to the router callback publish events
(see the sketch below):
1. Add a 'request_body' parameter to some events' payloads
2. Add a 'BEFORE_UPDATE' event for the router gateway
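A minimal sketch of how a service plugin could consume these events
(handler name is illustrative; exposing 'request_body' on the payload
is assumed here to be what this change adds):

    from neutron_lib.callbacks import events, registry, resources

    def handle_gateway_before_update(resource, event, trigger, payload):
        # The original API request body is now carried on the payload.
        request_body = payload.request_body
        # plugin-specific validation/processing would go here

    registry.subscribe(handle_gateway_before_update,
                       resources.ROUTER_GATEWAY,
                       events.BEFORE_UPDATE)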
Related-bug: #1877301
Change-Id: I5f6a4e6f0b7c5feb794ddb7efbd07d01bad91af8
Neutron allows deleting the only IP of a router port, but the OVN NB
DB doesn't, since it expects that the network value of a port is
greater than 0. This should not be possible since it causes the DBs
to be out of sync.
It is necessary to check, in BEFORE_UPDATE, whether the port being
updated is router-owned and whether it will still have an IP after
the update. If not, an error needs to be raised.
Closes-Bug: #1948457
Change-Id: I206c31201470f178efdde8839622be7900c6ae3e
This patch does *not* implement dataplane enforcement.
The QoS minimum packet rate rule is enabled in the OVS backend
driver, and empty create/delete/update methods are added to enable
placement enforcement.
Partial-Bug: #1922237
See-Also: https://review.opendev.org/785236
Change-Id: Ie283ad3a4ec433c88ac23f798908cd143159394b