When a VM boots before the Neutron L3 agent has prepared the router
namespace and the metadata proxy inside that router, the VM may be
unable to retrieve metadata from Nova, so e.g. its SSH key will not be
configured.
To work around that bug, this patch adds a check whether the guest OS
has finished booting and, if so, whether SSH to the VM is possible. If
SSH is not possible, Tobiko reboots the VM: if the failure was caused
by bug [1], the metadata service should be available during the second
boot and the SSH key should be configured properly.
[1] https://bugs.launchpad.net/neutron/+bug/1813787
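The workaround described above can be sketched roughly as follows. This is a minimal illustration, not Tobiko's actual implementation: `is_cloud_init_done`, `can_ssh` and `reboot_server` are hypothetical helpers standing in for the real fixtures.

```python
def ensure_ssh_after_boot(is_cloud_init_done, can_ssh, reboot_server):
    """Reboot the VM once if the guest finished booting but SSH fails.

    On a second boot the metadata proxy should already be in place, so
    the SSH key gets configured (see bug 1813787). All three arguments
    are hypothetical callables: two probes and one Nova action.
    """
    if is_cloud_init_done() and not can_ssh():
        reboot_server()
        return True   # rebooted as a workaround for the metadata race
    return False      # guest still booting, or SSH already works
```

The check deliberately triggers at most one reboot: if SSH still fails afterwards, the failure is not the metadata race and should surface as a test error.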
Related-Bug: #1813787
Change-Id: I1bd7b86e64ea7083f365ac84ebe1fff34cb52036
This reverts commit 0b9ab33c11.
Reason for revert: this change appears to require too many resources on the target cloud. Many migration failures were seen after it was merged. Let's get the jobs stable again before investigating this issue.
Change-Id: I8707c706cc72062ef94f3b038e3e858fc8c21ad3
So far, every Octavia test assumed the SINGLE topology.
This patch adds ACTIVE_STANDBY topology support to the Octavia tests
(only test_faults needed to be changed for now).
Change-Id: Id2df30599132aa4f97f0f3e4393650305213542f
This patch refactors the Octavia waiters module by splitting the
waiter functions so that each one has stronger cohesion.
Change-Id: Ic75539cf6f1a54d65cfb87ea312321d03a931a0f
This patch fixes the Octavia test_traffic test by calling
wait_for_members_to_be_reachable before sending traffic and asserting.
It also reverts workaround I9983a2ff04f56d07b407ef4151be2f2c57cb7b1e.
Change-Id: Ieb08763a8af7028023d7985a78ad459d5bdd0fb1
This patch fixes the Octavia "service unavailable" error by forcing
the check_members_balanced method to wait until the Octavia service is
ready, and only then allowing it to send traffic and assert.
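The wait-before-traffic idea can be sketched as a generic polling loop. This is an illustration only, assuming a hypothetical `is_ready` probe for the Octavia service; the real waiter lives in Tobiko's waiters module.

```python
import time


def wait_until_ready(is_ready, timeout=300.0, interval=5.0,
                     clock=time.monotonic, sleep=time.sleep):
    """Poll is_ready() until it returns True or the timeout expires.

    `is_ready` is a hypothetical probe of the Octavia service; `clock`
    and `sleep` are injectable so the loop can be tested without
    actually waiting.
    """
    deadline = clock() + timeout
    while not is_ready():
        if clock() >= deadline:
            raise TimeoutError('service did not become ready in time')
        sleep(interval)
```

Only once this returns does the caller start sending traffic and asserting on the balance of replies.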
Change-Id: Icfc8c95a5836dcf88636cfa1bdc5e5d68a6f51b4
So far, all Octavia modules used the class variable count_members.
Hard-coding the number of members in a variable and testing against it
may create conflicts in the future and is harder to maintain.
The tests should learn how many members exist from the Octavia API,
not from a test module variable.
This patch adds the list_members function and uses it in all Octavia
modules.
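The switch can be sketched as below: instead of a hard-coded class variable, the member count is derived from whatever the API returns. `client` and its `list_members` method are assumptions standing in for the actual Octavia client wrapper.

```python
def count_pool_members(client, pool_id):
    """Return how many members a load-balancer pool currently has.

    `client` is any object exposing list_members(pool_id) -> iterable
    of member records (a hypothetical stand-in for the Octavia client);
    the point is that the count comes from the API, not from a constant.
    """
    return len(list(client.list_members(pool_id)))
```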
Change-Id: Id7c5f13db2098643ccbe4fe9a4d0ff08784d2307
So far, we used a raw SSH command that runs curl to verify that the
LB members are balanced (in the Octavia validators module).
This patch uses the Tobiko curl module instead, a change which will
make the Octavia modules easier to maintain in the future.
To make the use of the Tobiko curl module possible, a new waiter
method has to be added which waits until the members are reachable.
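The balance check itself reduces to counting which backend answered each request. The sketch below is an assumption about the shape of that check, not Tobiko's code: `request_fn` stands in for an HTTP GET against the LB VIP (e.g. via the Tobiko curl helper) and is assumed to return a string identifying the member that served the request.

```python
import collections


def check_members_balanced(request_fn, expected_members, requests_count=20):
    """Send requests_count requests and verify every member replied.

    Returns a Counter of replies per member; raises AssertionError if
    some expected member never answered (i.e. traffic is not balanced
    across the whole pool).
    """
    replies = collections.Counter(request_fn() for _ in range(requests_count))
    missing = set(expected_members) - set(replies)
    if missing:
        raise AssertionError(f'members never replied: {missing}')
    return replies
```

Running this before the members are reachable would fail spuriously, which is exactly why the new waiter method has to run first.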
Change-Id: I98bd593422d7f7c8dde805fe0eb75293b5598dbe
So far, Octavia had support for only one provider: amphora.
This patch adds support for OVN provider.
Change-Id: I048cb34dc6db729e9277183d3697931a4901e1c7
The VM used by the Tobiko QoS stack is currently created with a trunk
port because it extends the VlanServerStackFixture class.
This patch prevents the creation of the trunk port for the QoS stack,
because QoS is not supported with trunk ports when ml2/ovs is
configured.
Change-Id: I9ea45424836eefb0988b17b8d204149e840463c0
So far, the TripleO reboot_method tried to re-activate the affected
servers (which are supposed to be shut off due to the compute reboot)
before they reached the SHUTOFF status (and hence no action would
occur).
This patch fixes the TripleO reboot method by making Nova wait for the
servers to be in SHUTOFF status before re-activating them.
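The fix amounts to polling the server status before acting on it. A minimal sketch, assuming a hypothetical `get_status` callable that wraps a Nova "show server" call:

```python
import time


def wait_for_server_status(get_status, status='SHUTOFF', timeout=300.0,
                           interval=3.0, sleep=time.sleep):
    """Poll until the server reaches the given status.

    Re-activating a server only makes sense once Nova reports it in
    SHUTOFF; acting earlier is a no-op, which was the original bug.
    `get_status` is a hypothetical stand-in for the Nova client call.
    """
    deadline = time.monotonic() + timeout
    while (current := get_status()) != status:
        if time.monotonic() >= deadline:
            raise TimeoutError(f'server stuck in {current}, wanted {status}')
        sleep(interval)
```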
Change-Id: Ic8d71fa0bf5f08ef15a53b6e500e3905e5886d26
- test start
- test stop
- test start after stop
- test delete
- test with multiple images (CentOS, CirrOS, Fedora, Ubuntu...)
Change-Id: I3fd1d388a207724474068a8136c18657f92c3674
This patch adds a compute node failover test:
it reboots the compute node hosting the Octavia amphora
and sends traffic to verify Octavia's functionality.
Change-Id: I7ca1a63d46af3107c79edd768aee6f3bff8c2b82
So far, the TripleO topology module only had methods to power nodes
on and off.
After a compute node is turned back on, some of the servers/VMs that
are hosted on it can be found in SHUTOFF status once the hypervisor
service starts.
This patch adds the ability to reboot a TripleO compute node, making
sure that all servers which were hosted on that node/hypervisor are
restarted if their status does not match their original status.
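The restore step can be sketched as: record each server's status before the reboot, then start any server that was ACTIVE but came back SHUTOFF. This is an illustration under assumed helper names (`get_status`, `start_server` standing in for Nova client calls), not the module's actual code.

```python
def restore_servers_after_reboot(servers_before, get_status, start_server):
    """Restart servers that lost their pre-reboot ACTIVE status.

    servers_before maps server id -> status recorded before the compute
    node reboot; get_status(server_id) and start_server(server_id) are
    hypothetical Nova helpers. Returns the ids that were restarted.
    """
    restarted = []
    for server_id, original_status in servers_before.items():
        if original_status == 'ACTIVE' and get_status(server_id) == 'SHUTOFF':
            start_server(server_id)
            restarted.append(server_id)
    return restarted
```

Servers that were already in SHUTOFF before the reboot are deliberately left alone, since their current status matches the original one.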
Change-Id: I4e15a0d4d739fac37aef150cc18d9dfd9251c37b