This change moves ConfigDict[1] and ConfigFileFixture[1] so that they can
be reused in fullstack and functional tests (in a follow-up change).
[1] neutron.tests.fullstack.resources.config
Related-bug: #1497027
Change-Id: I16157b0b739c9e620f23c984187b492bf5af2e60
This test case may fail intermittently with a 'Connection refused' error.
This could be because the metadata proxy setup is not fully complete at
the time the request is issued; in fact there is no synchronization
between the router being up and the metadata request being issued, which
may well explain the occasional but rare failures.
In order to rule out this possibility and stabilize the test, let's retry
on connection refused only. If the test keeps failing, the next step would
be to dump the contents of iptables to figure out why the error occurs.
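As an illustration only, a minimal sketch of the retry idea; the
issue_request callable and the attempt/delay values are assumptions,
not the test's actual code:

    import errno
    import socket
    import time

    def request_with_retries(issue_request, attempts=5, delay=1):
        # Retry only on 'Connection refused'; any other error is a
        # real failure and is re-raised immediately.
        for attempt in range(attempts):
            try:
                return issue_request()
            except socket.error as e:
                if e.errno != errno.ECONNREFUSED:
                    raise
                if attempt == attempts - 1:
                    raise
                time.sleep(delay)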
Closes-bug: #1461172
Change-Id: I65a5bf4fbbcad6ba93a46d36cabe7844ff528d8d
This patch deals with the lock wait timeout and the deadlock errors
observed under high concurrency (api_workers >= 4) with the pymysql
driver. It includes the following changes:
- Stop setting dirty status for resource usage when creating a
  reservation, as usage of reserved resources is no longer tracked;
- Add a variable, increasing delay when retrying make_reservation
  upon a DBDeadlock error, in order to reduce the chance of further
  collisions (see the sketch after this list);
- Enable transaction retry upon DBDeadlock errors for set_quota_usage;
- Do not resync quota usage while making a reservation. This puts a lot
  of stress on the database and is also wasteful, since resource usage
  is very likely to change again once the transaction is committed;
- Use autonested_transaction to simplify the logic around when the
  nested flag should be used.
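A minimal sketch of the second and third items, assuming hypothetical
set_quota_usage/make_reservation_once bodies; wrap_db_retry is the
oslo.db helper, while the jittered delay keeps colliding API workers
from retrying in lockstep:

    import random
    import time

    from oslo_db import api as oslo_db_api
    from oslo_db import exception as db_exc

    # Third item: let oslo.db retry the whole transaction when the
    # database reports a deadlock.
    @oslo_db_api.wrap_db_retry(max_retries=5, retry_on_deadlock=True)
    def set_quota_usage(context, resource, tenant_id, in_use):
        pass  # body elided; runs inside a DB transaction

    # Second item: retry make_reservation with an increasing,
    # jittered delay to reduce the chance of a further collision.
    def make_reservation_with_retries(make_reservation_once, attempts=5):
        for attempt in range(attempts):
            try:
                return make_reservation_once()
            except db_exc.DBDeadlock:
                if attempt == attempts - 1:
                    raise
                time.sleep((attempt + 1) * 0.5 + random.random() * 0.2)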
Change-Id: I7a335f9ebea3c0d6fee6e6b757554e045a66075c
Closes-Bug: #1486134
Related-Blueprint: better-quotas
When handling a port delete we need the PCI slot details to reset the VF
rate. The problem is that when the VM is deleted, libvirt returns the VF
to the hypervisor and the eswitch manager marks the pci_slot as
unassigned, so the agent cannot tell from the MAC which PCI slot (VF) to
reset. Moreover, newer libvirt versions reset the MAC when deleting the
VM, making the lookup impossible altogether.
The solution is to keep the PCI slot details locally in the agent: upon a
removal event the pci_slot cannot be fetched from the neutron server, as
it can for create/update events, because the port has already been
removed from neutron.
This patch pairs the MAC and pci_slot for a device (VF), so when calling
the extension port delete API we have the pci_slot and can reset the VF
rate. It also adds a mapping from MAC to port_id, so we can pass the
port_id when calling the extension port delete API.
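A minimal sketch of the agent-side bookkeeping, using hypothetical
names (the real agent structures may differ); the mappings are filled
on create/update, while the server still knows the port, and consumed
on delete:

    class DeviceMappings(object):
        """Cache per-VF details the server cannot provide on delete."""

        def __init__(self):
            self.mac_to_pci_slot = {}
            self.mac_to_port_id = {}

        def on_port_updated(self, mac, pci_slot, port_id):
            # Create/update: the port still exists on the server, so
            # record the details we will need later.
            self.mac_to_pci_slot[mac] = pci_slot
            self.mac_to_port_id[mac] = port_id

        def on_port_deleted(self, mac, extension_api):
            # The port is already gone server-side; use the cached
            # pci_slot to reset the VF rate and pass the port_id to
            # the extension port delete API.
            pci_slot = self.mac_to_pci_slot.pop(mac, None)
            port_id = self.mac_to_port_id.pop(mac, None)
            if pci_slot is not None:
                extension_api.delete_port({'mac_address': mac,
                                           'port_id': port_id,
                                           'pci_slot': pci_slot})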
Partially-Implements: blueprint ml2-sriov-qos-with-bwlimiting
Closes-Bug: #1492909
Change-Id: Icc3a9599c6d7a4de9c56b452dfab7909c8d0a576
The new option for the OVS agent makes it possible to set or unset the
csum option for VXLAN/GRE tunnels. The default remains False.
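Assuming the option is exposed as tunnel_csum in the agent section of
the OVS agent configuration file (both the section and the option name
are assumptions here), enabling it would look roughly like:

    [agent]
    # Set the tunnel header checksum on outgoing IP packets carrying
    # GRE/VXLAN traffic. Defaults to False (previous behaviour).
    tunnel_csum = True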
Change-Id: I18dcd8946b585e70f8890a5c222ea37059c4a0c5
Implements: bp ovs-tunnel-csum-option
Closes-bug: #1492111
Currently the rpc loop processes ports only when polling is required
(a message from the ovsdb monitor), or when there are port_updated or
security group notifications from the server.
If only port_deleted notifications arrive, port processing is not
triggered during the rpc loop.
This may lead to the agent accumulating a large backlog of deleted ports
and processing all of them at once during the next iteration in which
polling is required or any notification arrives from the server, which
can be quite heavy: the agent is unresponsive while it processes the
deleted ports.
The patch makes port deletion processing more gradual, as sketched below.
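A simplified, hypothetical shape of the loop; the names are
illustrative, and the point is that deletions are drained every
iteration instead of piling up until the next full pass:

    def rpc_loop(self):
        while self._agent_running():
            if self._polling_required() or self._updated_ports:
                # Full pass: scan ovsdb, then process added/updated
                # and deleted ports together.
                self._scan_and_process_ports()
            elif self._deleted_ports:
                # Previously, deletions alone did not trigger port
                # processing; handle them on every iteration so no
                # large backlog builds up.
                self._process_deleted_ports(self._deleted_ports)
                self._deleted_ports = set()
            self._wait_for_next_iteration()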
Closes-Bug: #1491922
Change-Id: I0e1f6dfbf5b56fb18a978d6214e1768560d8ac98
This patch follows the previous patch (listed as a dependency) and moves
the remaining Cisco db models from neutron to networking-cisco.
The patch deletes l3_model and cisco_router_plugin, along with their
associated config and helper files, from neutron.
Change-Id: I5b71e1dfb683e633e1cd11386dfb7c2ed7cc7d62
Partial-Bug: #1489609
This patch removes the Cisco meta plugin and the Cisco
Nexus1000V monolithic plugin as they were deprecated in the
previous cycle.
Closes-bug: #1473217
Change-Id: Id170b9512b2f52a971264336d83b083d487359ee
The async_ping function returns a callable that returns True once all
ping futures are done. Since those futures run for 10 seconds, there was
no chance of the callable returning True when checked immediately.
The test was thus bailing out without calling bridge reset even a single
time, effectively leaving the feature untested in the gate.
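A minimal sketch of the failure mode and the fix, with a hypothetical
wait helper; the point is to poll the done-checker until a deadline
instead of evaluating it once:

    import time

    def wait_until_true(predicate, timeout=15, sleep=0.5):
        # Poll until the predicate turns True or the deadline passes.
        deadline = time.time() + timeout
        while not predicate():
            if time.time() > deadline:
                raise RuntimeError('predicate never became true')
            time.sleep(sleep)

    # pings_done = async_ping(namespace, ips)  # returns a callable
    # Broken: if not pings_done(): return      # always bails at t=0
    # Fixed:  wait_until_true(pings_done)      # pings run for ~10 s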
Another thing to note is that, for some reason, the patch fixed oslo
rootwrap errors in the test when executed locally. Since I still don't
understand how it can fix that issue, I mark the bug as related only and
will track logstash after it merges to see whether it applies the same
unknown magic to gate jobs too.
Change-Id: Iaa977abddf1a0c6af7e964f1a5cd545ffb79585a
Related-Bug: #1490051
If a port is deleted right before _bind_devices is called,
get_ports_attributes will throw an exception, since the row
corresponding to the port no longer exists in the OVS DB.
Avoid that by setting if_exists to True. The port will then be
processed as deleted by the agent in the following iteration.
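The shape of the fix; the table and column names below are
illustrative, and the point is the if_exists flag:

    # Before: raises if any row vanished between listing the ports
    # and issuing this call.
    port_info = self.int_br.get_ports_attributes(
        'Port', columns=['name', 'tag'], ports=port_names)

    # After: missing rows are skipped; the vanished port is then
    # picked up as deleted on the next rpc_loop iteration.
    port_info = self.int_br.get_ports_attributes(
        'Port', columns=['name', 'tag'], ports=port_names,
        if_exists=True)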
Change-Id: Ia6590d76f8683e6cba562cde3c39b051549f6c04
Closes-bug: #1489014
In some cases, under high load, the OVS agent has to delete a large
number of ports during rpc_loop. remove_devices_filter() performs an
iptables-save/restore for both IPv4 and IPv6, i.e. four system calls,
so it is very expensive and inefficient to call it for each port
individually.
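A minimal sketch of the batching idea, with hypothetical variable
names; one call over the whole batch pays the iptables-save/restore
cost once instead of once per port:

    # Expensive: four iptables system calls for every single port.
    for device in deleted_devices:
        self.sg_agent.remove_devices_filter([device])

    # Cheap: the same four system calls cover the whole batch.
    self.sg_agent.remove_devices_filter(deleted_devices)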
Closes-Bug: #1491922
Change-Id: I4cfb2dfcef5088436c7aaae22c8f66e1ea052311