The neutron-lib commit I360545b6ee4291547e0c5c8e668ad03d3efa4725 moved
the externally consumed globals from neutron.common.constants into neutron-lib.
With the exception of PROVISIONAL_IPV6_PD_PREFIX, all other constants
in neutron.common.constants should only be used within neutron, and will
hopefully remain that way. External consumers needing access to other
common constants should move them into neutron-lib first.
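For illustration, an out-of-tree consumer would now import the shared
constants from neutron_lib; a minimal sketch (the status check is just an
example):

    from neutron_lib import constants as lib_const

    def is_active(port):
        # externally consumed constants, e.g. port status values,
        # now come from neutron-lib
        return port['status'] == lib_const.PORT_STATUS_ACTIVE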
NeutronLibImpact
Change-Id: Ie4bcffccf626a6e1de84af01f3487feb825f8b65
Add support for QoS ingress bandwidth limiting in the
openvswitch agent.
It uses the default OVS QoS policies as the bandwidth limiting
mechanism.
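For reference, the OVS-level setup this relies on is roughly the
following (a sketch only; the port name, rates and the use of subprocess
are illustrative, the agent drives OVSDB through its own library):

    import subprocess

    port = 'tap1234'                       # hypothetical instance port
    max_rate, burst = '1000000', '800000'  # bits/s and bits

    # attach a linux-htb QoS record with a default queue to the port
    subprocess.check_call([
        'ovs-vsctl',
        'set', 'port', port, 'qos=@newqos', '--',
        '--id=@newqos', 'create', 'qos', 'type=linux-htb',
        'other-config:max-rate=%s' % max_rate, 'queues:0=@q0', '--',
        '--id=@q0', 'create', 'queue',
        'other-config:max-rate=%s' % max_rate,
        'other-config:burst=%s' % burst,
    ])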
DocImpact: Ingress bandwidth limit in QoS supported by
Openvswitch agent
Change-Id: I9d94e27db5d574b61061689dc99f12f095625ca0
Partial-Bug: #1560961
In some cases we would want to refrain from cleaning up specific
openvswitch ports.
In Octavia, the health manager service uses a predefined[1]
openvswitch port which gets deleted by the ovs_cleanup script during the
boot process.
That port is created by the operating system NIC configuration file
(by using OVS_EXTRA[2]), but due to the order of actions in the boot
process, the ovs_cleanup script gets invoked by systemd only at a later
stage. As a result, the port is deleted on every boot and the Octavia
health manager service fails to bind.
This patch takes advantage of the 'external_ids' column that already
exists for ovs ports, in order to filter out ports we would like to
skip. We filter those ports by adding 'skip_cleanup' to the
'external_ids' column.
It is important to note that this will only work if we append the following
to the port creation command: -- set Interface o-hm0 external_ids:skip_cleanup=true
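A minimal sketch of the filtering idea, assuming ovs_lib-style helpers
on the bridge object (method names as in neutron's ovs_lib, but treat
this as illustrative):

    def ports_to_cleanup(bridge):
        for port in bridge.get_port_name_list():
            external_ids = bridge.db_get_val(
                'Interface', port, 'external_ids') or {}
            if external_ids.get('skip_cleanup') == 'true':
                # the operator tagged this port; leave it alone
                continue
            yield port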
Related-Bug: #1685223
[1] http://git.openstack.org/cgit/openstack/octavia/tree/devstack/plugin.sh?h=stable/ocata#n190
[2] https://github.com/osrg/openvswitch/blob/master/rhel/README.RHEL#L102
Change-Id: If483d0ee027596999370ab0d21b1743d4ef16acb
Objects must use project_id and not tenant_id. The object framework
ensures that tenant_id is added as an extra field for backward
compatibility.
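Purely as an illustration of the backward-compatibility shape (this is
not the real object framework code):

    class _ProjectIdAlias(object):
        @property
        def tenant_id(self):
            # legacy readers keep working; project_id is canonical
            return self.project_id

    class _PolicyStub(_ProjectIdAlias):
        def __init__(self, project_id):
            self.project_id = project_id

    assert _PolicyStub('abc').tenant_id == 'abc'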
This patch reverts the workaround implemented in change
I4ec9340094bc51cd8aa6e5112bf8114aa26c2982 and implements a proper fix
by explicitly updating the objects.
Co-Authored-By: Artur Korzeniewski <artur.korzeniewski@intel.com>
Co-Authored-By: Darek Smigiel <smigiel.dariusz@gmail.com>
Closes-Bug: #1630748
Change-Id: Iab90bcab41655b2e210aea0e7581eb00b94ce5e5
The Linuxbridge agent uses iptables rules in the POSTROUTING chain
of the mangle table to mark outgoing packets with the
DSCP mark value configured by the user in the QoS policy.
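The installed rule is roughly of the following shape (device name and
DSCP value are made up, and the agent builds its rules via its iptables
manager, so the exact chain layout and match options may differ):

    device, dscp_mark = 'tap0123', 26
    rule = ('iptables -t mangle -A POSTROUTING '
            '-m physdev --physdev-in %s --physdev-is-bridged '
            '-j DSCP --set-dscp %d' % (device, dscp_mark))
    print(rule)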
DocImpact: DSCP Marking rule support is extended to the
Linuxbridge L2 agent
Closes-Bug: #1644369
Change-Id: I47e44cb2e67ab73bd5ee0aa4cca47cb3d07e43f3
Maintaining the context is important for keeping the request ID
and, consequently, operator/developer sanity while debugging.
The resource_type is also helpful to have since a function could be
subscribed for multiple resources.
This maintains, but deprecates, the existing 'subscribe' method for
backwards compatibility with callbacks that don't support receiving
the context and resource type. A new 'register' method is added
for callbacks that can receive the context and resource type.
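The consumer-side callback ends up looking roughly like this (argument
names and ordering are illustrative, not the exact signature):

    def handle_qos_policy(context, resource_type, resource, event_type):
        # the context carries the request ID for log correlation, and
        # resource_type disambiguates callbacks registered for several
        # resource types
        print(getattr(context, 'request_id', None),
              resource_type, event_type, resource)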
Change-Id: I06c8302951c99039b532acd9f2a68d5b989fdab5
There are usage patterns which would benefit from having
the capability to send a list of resources in bulk instead
of using individual fanout messages.
From now on, the rpc callback subscriber receives a list of
resources (single or multiple), and the pushers must always
push a list.
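An illustrative sketch of the new calling convention (names made up):

    def push(context, resources, event_type):
        # pushers must now always hand over a list, even for one object
        if not isinstance(resources, list):
            raise TypeError('push() expects a list of resources')
        for resource in resources:
            print('pushing %s %s' % (event_type, resource))

    push({'request_id': 'req-1'}, ['qos-policy-1'], 'updated')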
Backwards compatibility for QoSPolicy consumers in mitaka
is provided by calling push with the "resource" parameter for
single-item lists during one release cycle. That will be
dropped when Ocata opens.
Partially-implements: blueprint vlan-aware-vms
Change-Id: I1117925360a29ecbd1902fa527b2f24f94ce81ec
In the implementation of the DSCP QoS rule, the QosOVSAgentDriver uses the
wrong method to modify br-int flows: it uses br_int.mod_flows() whereas
it should use br_int.mod_flow().
This patch fixes that and also adds verification of DSCP updates,
as we already have for bandwidth limits, to exercise that code path and
avoid regressions.
Change-Id: I685ac373701ff8407fd7fbf649e17a2f7dfc0008
Closes-Bug: #1564820
This patch adds the front end and back end implementation of QoS DSCP.
Associated patches that are dependent on this one:
* python-neutronclient: https://review.openstack.org/#/c/254280
* openstack-manuals: https://review.openstack.org/#/c/273638
* API Guide: https://review.openstack.org/#/c/275253
* Heat:
  * Spec: https://review.openstack.org/#/c/272173
  * QoSDscpMarkingRule resource: https://review.openstack.org/#/c/277567
* Fullstack tests: https://review.openstack.org/#/c/288392/
APIImpact - The API now supports marking the DSCP field of traffic
egressing from a VM with a valid DSCP value.
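A hedged sketch of what the new request looks like (endpoint, token and
policy id are placeholders):

    import json
    import requests

    url = ('http://controller:9696/v2.0/qos/policies/'
           '<policy-id>/dscp_marking_rules')
    body = {'dscp_marking_rule': {'dscp_mark': 26}}
    resp = requests.post(url,
                         headers={'Content-Type': 'application/json',
                                  'X-Auth-Token': '<token>'},
                         data=json.dumps(body))
    print(resp.status_code, resp.text)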
Co-Authored-By: Nate Johnston <nate_johnston@cable.comcast.com>
Co-Authored-By: Victor Howard <victor.r.howard@gmail.com>
Co-Authored-By: Margaret Frances <margaret_frances@cable.comcast.com>
Co-Authored-By: James Reeves <james.reeves5546@gmail.com>
Co-Authored-By: John Schwarz <jschwarz@redhat.com>
Needed-By: I25ad60c1b9a66e568276a772b8c496987d9f8299
Needed-By: I881b8f5bc9024c20275bc56062de72a1c70c8321
Needed-By: I48ead4b459183db795337ab729830a1b3c0022da
Needed-By: Ib92b172dce48276b90ec75ee5880ddd69040d7c8
Needed-By: I4eb21495e84feea46880caf3360759263e1e8f95
Needed-By: I0ab6a1a0d1430c5791fea1d5b54106c6cc93b937
Partial-Bug: #1468353
Change-Id: Ic3baefe176df05f049a2e06529c58fd65fe6b419
Network devices, like internal router legs or dhcp ports,
should not be affected by bandwidth limiting rules.
This patch disables the application of network-attached policies
to network/neutron-owned ports.
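A minimal sketch of the intended check (the prefix is the one neutron
uses for network-owned ports; the helper name is made up):

    NETWORK_OWNED_PREFIX = 'network:'  # e.g. network:dhcp, network:router_interface

    def is_network_owned(port):
        return port.get('device_owner', '').startswith(NETWORK_OWNED_PREFIX)

    # network-attached policies are skipped for such ports
    assert is_network_owned({'device_owner': 'network:dhcp'})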
Closes-bug: #1486039
DocImpact
Change-Id: I75d80227f1e6c4b3f5fa7762b8dc3b0c0f1abd46
It turns out that the Queue + QoS + linux-htb implementation was actually
limiting ingress traffic rather than egress. So this patch switches the
implementation to the OVS ingress_policing_rate and ingress_policing_burst
parameters of the Interface table.
Later we may want to revise this to make TC & queueing possible,
but this is good enough for egress limiting.
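The Interface-table knobs this maps to look like the following (port
name and values are made up; rate is in kbps, burst in kb):

    import subprocess

    port = 'tap0456'
    subprocess.check_call([
        'ovs-vsctl', 'set', 'interface', port,
        'ingress_policing_rate=10000',   # kbps
        'ingress_policing_burst=1000',   # kb
    ])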
Also, removed the _update_bandwidth_limit del+set in the OvS QoS driver for
bandwidth limit rule updates, since that is not needed anymore.
Change-Id: Ie802a235ae19bf679ba638563ac7377337448f2a
Partially-Implements: ml2-qos
Creates a port with a QoS policy attached, subsequently modifies the
bandwidth limit rule in the policy, and then verifies that the
new limits are applied to the port.
Change-Id: I23fe45ef08618ad91567feb1707028e0a0bfe0d6
Partially-Implements: ml2-qos
This functional test spawns the OVS agent with bandwidth limit rules in
a policy attached to ports, then asserts that the low-level OVS
bandwidth limits are set for each port.
To make this possible we refactor and extract the base OVS agent test
framework into neutron.tests.functional.agent.l2.base.
Partially-Implements: blueprint ml2-qos
Change-Id: Ie5424a257b9ca07afa72a39ae6f1551d6ad351e7