Add support in SR-IOV mechanism driver for VNIC type
"accelerator-direct".
This VNIC type refers to any device that supports hardware
acceleration of any kind and is provided by Cyborg [1].
NOTE: "accelerator-direct-physical" is not supported yet.
[1] https://wiki.openstack.org/wiki/Cyborg
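For example, a port with this VNIC type could be requested as follows
(illustrative only; names are placeholders):
    openstack port create --network <network> \
        --vnic-type accelerator-direct accel-port0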
Change-Id: I529e6a2a445a140dca7686976efeefcd13d333f0
Closes-Bug: #1909100
This patch enables the "mcast_flood_reports" and "mcast_flood" (on provnet
ports only) options on the Logical Switch Ports in the OVN driver. Without
these options, ovn-controller will consume the IGMP queries and won't
forward them to the LSP ports, meaning that external IGMP queries will never
reach the VMs.
In talks with the core OVN team, it was suggested [0] to enable the
"mcast_flood_reports" option by default in the OVN driver (at least until
this is fixed in core OVN) as a workaround for this problem. And, to avoid
having to update all ports (which can be many) based on the
igmp_snooping_enable configuration option, we always set
"mcast_flood_reports" to "true" on the LSPs. This won't cause any harm
(also confirmed by core OVN developers [0]) since it is ignored if
multicast snooping is disabled.
[0] https://bugzilla.redhat.com/show_bug.cgi?id=1933990#c3
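The driver-side change is roughly equivalent to setting the following
LSP options by hand (sketch only; the port name is a placeholder, and
mcast_flood is only set on provnet ports):
    ovn-nbctl lsp-set-options <lsp-name> mcast_flood_reports=true mcast_flood=true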
Closes-Bug: #1918108
Change-Id: I99a60b9af94b8208b5818b035e189822981bb269
Signed-off-by: Lucas Alvares Gomes <lucasagomes@gmail.com>
This is patchset 2 of 2 for OVN driver handling of security-group-logging.
It includes the core changes and tests for this feature.
This feature requires OVN 20.12 [0] or newer. Functional tests will be
skipped for unsupported versions.
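With the feature enabled, security group logging can be requested, for
example, with (illustrative only; names are placeholders):
    openstack network log create --resource-type security_group \
        --resource <security-group> --event ALL sg-log0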
Related-Bug: #1468366
Closes-Bug: #1914757
[0]: 880dca99ea
Change-Id: Ic86fa70eb34c9b178267b80de1f8883a3ef03e98
Signed-off-by: Flavio Fernandes <flaviof@redhat.com>
It is now possible to define a gateway IP when creating a subnet using a
subnet pool. The IPAM subnet generator retrieves the available IP
ranges in the subnet pool and generates a list of candidate subnets
with the defined prefix length. If the gateway IP can be allocated in
one of those candidate subnets, IPAM returns a valid IpamSubnet
that will be used to create a Neutron subnet.
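For example (illustrative only; names are placeholders):
    openstack subnet create --subnet-pool <pool> --prefix-length 26 \
        --gateway <ip-available-in-the-pool> --network <network> subnet-with-gw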
Closes-Bug: #1904436
Change-Id: Ib1d1f591c4d0f59ebff3ddcb3be7b10b0b5e67dc
Ports with device_owner like:
* floating_ip,
* DHCP,
* some types of router ports, like: HA interface,
* distributed ports,
don't need to be configured in the dnsmasq file.
So there is no need to reload dnsmasq every time such a port is
added to or updated in the network.
This patch skips the reload in such cases, which should reduce the load
on the Neutron DHCP agent.
Closes-Bug: #1913269
Change-Id: I63221507713b941c261cdf88781133149da8ab8d
This change adds VNIC type vDPA ("vdpa") to the list of
supported VNIC types for the OVS and OVN mech drivers.
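For example, a vDPA port could be requested as follows (illustrative
only; names are placeholders):
    openstack port create --network <network> --vnic-type vdpa vdpa-port0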
Depends-On: https://review.opendev.org/#/c/760043/
Change-Id: If22aedc147f7e2256f8f8ad3bebb80b6bb2f6d3d
Support security group rules with remote_address_group_id in the openvswitch
firewall. This change reuses most of the firewall functions handling remote
security groups to also process remote address groups. The conjunctive flows
for a rule with remote_address_group_id are similar to those with
remote_group_id but have different conj_ids.
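Such a rule could be created, for example, with (illustrative only;
names are placeholders):
    openstack address group create --address 10.0.0.0/24 ag0
    openstack security group rule create --protocol tcp --dst-port 22 \
        --remote-address-group ag0 <security-group>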
Change-Id: I8c69e62ba56b0d3204e9c12df3133126071b92f7
Implements: blueprint address-groups-in-sg-rules
Replace rootwrap execution with privsep context execution.
This series of patches will progressively replace all
rootwrap calls.
This patch replaces some "IpNetnsCommand" command execution
methods.
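A minimal sketch of the pattern (hypothetical context and helper, not
the exact Neutron code):

    from oslo_privsep import capabilities as caps
    from oslo_privsep import priv_context

    # Hypothetical privsep context, analogous to neutron.privileged.default
    default = priv_context.PrivContext(
        __name__,
        cfg_section='privsep',
        pypath=__name__ + '.default',
        capabilities=[caps.CAP_SYS_ADMIN, caps.CAP_NET_ADMIN],
    )

    @default.entrypoint
    def list_netns_links(namespace):
        # Runs inside the privileged daemon instead of shelling out
        # through rootwrap ("ip netns exec ... ip link").
        import pyroute2
        with pyroute2.NetNS(namespace) as ns:
            return [link.get_attr('IFLA_IFNAME') for link in ns.get_links()]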
Change-Id: Ic5fdf221a2a2cd0951539b0e040d2a941feee287
Story: #2007686
Task: #41558
Added a new port extension: device profile (``port_device_profile``).
This extension adds the "device_profile" parameter to the "port" API
and specifies the device profile per port. This parameter is a
string.
This parameter is passed to Nova and Nova retrieves the requested
device profile from Cyborg. Reference:
https://docs.openstack.org/api-ref/accelerator/v2/index.html#device-profiles
For backwards compatibility, this parameter will be "None" by
default.
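A minimal request sketch (illustrative only; values are placeholders):
    POST /v2.0/ports
    {
        "port": {
            "network_id": "<network-uuid>",
            "device_profile": "<cyborg-device-profile-name>"
        }
    }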
Closes-Bug: #1906602
Depends-On: https://review.opendev.org/c/openstack/neutron-lib/+/767586
Change-Id: I1202a8388e64ae4270ef4ca118993504ae7c1731
Do not report the OVS agent state when OVS is dead, and let
neutron-server mark the service as down, so that the cluster
admin can determine there is a problem with the given OVS agent.
Change-Id: Ib4b06c7877a7343f4204d4f4f5863931717ff507
Closes-Bug: #1910946
This adds support for deleting OVN controller/metadata agents.
Behavior is undefined if the agents are still actually up as per
the Agent API docs.
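For example, a stale agent record can then be removed with
(illustrative only):
    openstack network agent delete <agent-uuid>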
As part of this, it is necessary to be able to tell all workers
that the agent is gone. This can't be done by deleting the
Chassis, because ovn-controller deletes the Chassis if it is
stopped gracefully and we need to still display those agents as
down until ovn-controller is restarted. This also means we can't
write a value to the Chassis marking the agent as 'deleted'
because the Chassis may not be there. And of course you can't
use the cache because then other workers won't see that the
agent is deleted.
Due to the hash ring implementation, we also cannot naively just
send some pre-defined event that all workers can listen for to
update their status of the agent. Only one worker would process
the event. So we need some kind of GLOBAL event type that is
processed by all workers.
When the hash ring implementation was done, the agent API
implementation was redesigned to work around moving from having
a single OVN Worker to having distributed events. That
implementation relied on marking the agents 'alive' in the
OVSDB. With large numbers of Chassis entries, this induces
significant load, with 2 DB writes per Chassis per
cfg.CONF.agent_down_time / 2 seconds (37 by default).
This patch reverts that change and goes back to using events
to store agent information in the cache, but adds support for
"GLOBAL" events that are run on each worker that uses a particular
connection.
Change-Id: I4581848ad3e176fa576f80a752f2f062c974c2d1
Until this patch the metadata namespace had the format
'ovnmeta-<OVN datapath uuid>', which made it hard to relate it
to the Neutron network for which metadata is provisioned. The
same applies to the name of the tap interface that connects the
network namespace to the integration bridge.
This patch changes the naming convention, and provides an
upgrade path, to include the Neutron network ID and make
debugging and troubleshooting easier for developers and
operators.
The new name pattern is:
'ovnmeta-<Neutron network uuid>'
Please note that with this patch the old namespaces will
be deleted and new ones will be created.
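With the new convention the namespace on a compute node can be matched
directly to its network, for example (illustrative only):
    $ ip netns list | grep ovnmeta-
    ovnmeta-<neutron-network-uuid>
    $ openstack network show <neutron-network-uuid>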
Signed-off-by: Daniel Alvarez <dalvarez@redhat.com>
Change-Id: Ic8ffa9c4437aab6fb1878b3a1ebf2c3ab86e3d0c
As per the community goal of migrating the policy file format
from JSON to YAML [1], we need to do two things:
1. Change the default value of the '[oslo_policy] policy_file'
config option from 'policy.json' to 'policy.yaml', with
upgrade checks.
2. Deprecate the JSON-formatted policy file on the project side
via warnings in the docs and releasenotes.
Also replace policy.json with policy.yaml references in docs and tests.
[1] https://governance.openstack.org/tc/goals/selected/wallaby/migrate-policy-format-from-json-to-yaml.html
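After the migration, the relevant neutron.conf setting looks like
(only needed when overriding the new default):
    [oslo_policy]
    policy_file = policy.yaml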
Change-Id: I0dbb8484e749e645627756e88ec79c1b26a6414a
A new API extension was added in [1] to extend security group rules with
a read-only "normalized_cidr" attribute.
This patch implements this API extension in the Neutron ML2 plugin and
extends security group rules with the "normalized_cidr" value.
[1] https://review.opendev.org/#/c/743630/
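For illustration (assumed semantics of the normalization): a rule
created with remote_ip_prefix "10.0.0.10/24" would be reported with
normalized_cidr "10.0.0.0/24", i.e. the CIDR with the host bits
cleared.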
Related-Bug: #1869129
Change-Id: I65584817a22f952da8da979ab68cd6cfaa2143be
The revision_number attribute is needed to support address group
updates in the rpc resource_cache.
Change-Id: I6355c9394c23f7d94496e9c06e061d6d3fd4128d
Implements: blueprint address-groups-in-sg-rules
This change is needed for enabling floating IPs on routed networks.
To be able to create a subnet that spans all segments of a routed
network, a special subnet service type of 'network:routed' is used to
denote such a subnet.
To create floating IPs on a routed network, the subnet must be created
with a service_type of 'network:routed'. After the subnet has been
created, floating IPs can be allocated and associated as before.
See the design spec https://review.opendev.org/#/c/486450/ for
reference.
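For example, such a subnet could be created with (illustrative only;
names are placeholders):
    openstack subnet create --network <routed-network> \
        --subnet-range 203.0.113.0/24 \
        --service-type network:routed fip-subnet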
One caveat for this approach is that it requires the underlying
infrastructure to be aware of and able to route /32 host routes
for the floating IP. This implies that in practice, use of the
'network:routed' service type should be done in conjunction with
one or both of the following:
1. Third-party SDN backend that handles this service type in its
own way
2. neutron-dynamic-routing and the BGP service plugin for announcing
the appropriate next-hops for floating IPs. This is compatible
with DVR and non-DVR environments.
Depends-On: Ibde33bdacba6bd1e9c41cc69d0054bf55e1e6454
Change-Id: I9ae9d193b885364d5a4d90538880d8e9fbc8df74
Co-Author: Thomas Goirand <zigo@debian.org>
Partial-Bug: #1667329
Prior to this patch the IGMP configuration for ML2/OVN was inconsistent
with the configuration option description and also with the ML2/OVS
driver, because it flooded traffic to unregistered VMs [0].
The "igmp_snooping_enable" configuration option says:
"Setting this option to True will also enable Open vSwitch
mcast-snooping-disable-flood-unregistered flag. This option will disable
flooding of unregistered multicast packets to all ports."
But, in ML2/OVN that behavior was inconsistent prior to this patch
because it allowed traffic to flood to unregistered VMs. This patch
fixes it.
[0] https://opendev.org/openstack/neutron/src/branch/master/neutron/conf/agent/ovs_conf.py#L36-L47
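The option itself is unchanged; enabling it still looks like this
(sketch only; the exact configuration file and section depend on the
deployment, the option is registered in the OVS options group):
    [ovs]
    igmp_snooping_enable = True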
Change-Id: I5cbe09e26120905b29351d61bbadb30b5dd14938
Closes-Bug: #1904399
Signed-off-by: Lucas Alvares Gomes <lucasagomes@gmail.com>
This patch adds support for vlan_transparent in the OVN mechanism
driver. So OVN is now the second mech driver, after linuxbridge, which
can be used with vlan_transparent networks.
It just adds the "vlan-passthru" option to the Logical Switch's "other
config".
This also needs changes in core OVN which are currently available only
in the OVN master branch. See [1] for details.
[1] https://patchwork.ozlabs.org/project/ovn/patch/20201110023449.194642-1-ihrachys@redhat.com/
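The driver-side change is roughly equivalent to (sketch only; the
switch name is a placeholder):
    ovn-nbctl set Logical_Switch <switch> other_config:vlan-passthru=true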
Change-Id: I76b8ba959398dcffff112d26ae7d81ff428be992
In the RULES_INGRESS_TABLE (table 82) there is a rule to allow established
and related connections. The current rule sends the packet directly to the
destination port without doing MAC learning. This causes OVS to age out the
destination MAC of the remote VM, so the traffic ends up being handled by a
flood rule. For the normal case this is fine, as it avoids high CPU usage.
OVS hardware offload reduces CPU usage by moving some of the packet
processing to the NIC, but the flood rule is not offloaded; therefore it is
preferable to use the NORMAL action to avoid the flood rule.
We also keep the same logic as today when using explicitly_egress_direct=True,
which avoids the NORMAL action in the entire pipeline.
Closes-Bug: #1897637
Change-Id: I9b611d62be5d0529e8b35e3d8280baa5be54bc2b
Both drivers have different approaches when it comes to the metadata
agent; for one, the metadata agent for ML2/OVN runs on the compute nodes
(it's distributed) instead of the controller nodes.
The previous default of "<# of CPUs> / 2" did not make sense for ML2/OVN
and, if left unchanged, could result in scaling problems because of the
number of connections to the OVSDB Southbound database, as seen in
this email thread for example [0].
This patch puts a placeholder value (None) in the default field of
the "metadata_workers" config option by not setting it immediately and then
conditionally sets the default value based on each driver:
* ML2/OVS defaults to <# CPUs> // 2, as before.
* ML2/OVN defaults to 2, as suggested in the bug description and also
what's default in TripleO for the OVN driver.
[0] http://lists.openstack.org/pipermail/openstack-discuss/2020-September/016960.html
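Operators can still override the computed default in the OVN metadata
agent configuration, for example (illustrative snippet; the file name
depends on the deployment):
    [DEFAULT]
    metadata_workers = 2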
Change-Id: I60d5dfef38dc130b47668604c04299b9d23b59b6
Closes-Bug: #1893656
Signed-off-by: Lucas Alvares Gomes <lucasagomes@gmail.com>
When "uplink-status-propagation" extension is enabled, new ports
created will default the value of "propagate_uplink_status" to True.
Closes-Bug: #1888487
Change-Id: If1e533a61aeebbb4761d669c516fe86a4381765c
In the spec we said:
"""
When the metadata proxy processes a request, it gathers the L2 addresses
of a VM, and the source interface, and passes it to the metadata service.
The Metadata service, instead of using the VM IP, uses the "VM MAC" and
"Gateway MAC" to identify the instance.
"""
But since we switched from the home-grown metadata-ns-proxy to haproxy
we no longer control some of the headers included, like X-Forwarded-For.
haproxy allows us to turn X-Forwarded-For on or off, but it cannot
give us an X-Forwarded-For-MAC header.
Instead it seems we have to rely on the source address being the IPv6
link local address generated from the NIC's MAC address as specified
in RFC 4291:
https://tools.ietf.org/html/rfc4291#section-2.5.6
https://tools.ietf.org/html/rfc4291#appendix-A
Note that means you cannot use IPv6 Privacy Extensions:
https://tools.ietf.org/html/rfc4941
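An illustrative sketch of the EUI-64 derivation (RFC 4291, appendix A)
that lets the MAC be recovered from the link-local source address:

    def mac_to_link_local(mac):
        # Build the modified EUI-64 interface identifier from the MAC.
        octets = [int(b, 16) for b in mac.split(':')]
        octets[0] ^= 0x02  # flip the universal/local bit
        eui64 = octets[:3] + [0xff, 0xfe] + octets[3:]
        groups = ['%02x%02x' % (eui64[i], eui64[i + 1]) for i in range(0, 8, 2)]
        return 'fe80::' + ':'.join(groups)

    # e.g. 'fa:16:3e:aa:bb:cc' -> 'fe80::f816:3eff:feaa:bbcc'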
Change-Id: Ife592fcfc69e26f61ec1f45c06821cb025cc7cf2
Closes-Bug: #1460177
There is no real reason we should be using some of the
terms we do; they're outdated, and we're behind other
open-source projects in this respect. Let's switch to
using more inclusive terms in all possible places.
Change-Id: I99913107e803384b34cbd5ca588451b1cf64d594
Patch [1] added the "no_track" option to the keepalived config file which
is generated by the L3 agent in HA mode.
This was added to properly handle keepalived 2.x and interfaces which
are in DOWN state on the backup nodes.
But this "no_track" option is not compatible with the keepalived 1.x series
which is available e.g. on Ubuntu 18.04.
As there is no easy way to automatically check whether keepalived supports
this config flag, this patch introduces a new config option
"keepalived_use_no_track".
If this config option is set to False, the neutron L3 agent will not
add "no_track" to the keepalived config.
As the master branch is moving to gate on Ubuntu 20.04, where keepalived 2.x
is already available, the default value of this new config option is set to
True.
[1] https://review.opendev.org/#/c/721799/
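On systems with keepalived 1.x the option can be disabled in the L3
agent configuration, for example (illustrative snippet):
    [DEFAULT]
    keepalived_use_no_track = False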
Change-Id: I2dfdb9f56de28d56ca0f240ff34fa7c3a12e339b
Closes-Bug: #1890400