microstack/tests/test_cluster.py
Dmitrii Shcherbakov 780a4c4ead Use focal/core20/Ussuri/OVN & enable confinement
Major changes:

* Plumbing necessary for strict confinement with
  the microstack-support interface
  https://github.com/snapcore/snapd/pull/8926
  * Until the interface is merged, devmode will be used and kernel
    modules will be loaded via an auxiliary service.
* upgraded OpenStack components to Focal (20.04) and OpenStack Ussuri;
  * reworked the old patches;
  * added the Placement service since it is now separate;
  * addressed various build issues due to changes in snapcraft and
    built dependencies:
    * e.g. libvirt requires the build directory to be separate from
      the source directory, and LP: #1882255;
    * LP: #1882535 and https://github.com/pypa/pip/issues/8414
    * LP: #1882839
    * LP: #1885294
    * https://storyboard.openstack.org/#!/story/2007806
    * LP: #1864589
    * LP: #1777121
    * LP: #1881590
* ML2/OVS replaced with ML2/OVN;
  * dnsmasq is not used anymore;
  * neutron L3 and DHCP agents are not used anymore;
  * Linux network namespaces are only used for the
    neutron-ovn-metadata-agent;
  * ML2 DNS support is done via native OVN mechanisms;
  * added OVN-related database services (southbound and northbound
    DBs);
  * added OVN-related control plane services (ovn-controller,
    ovn-northd);
* core20 base support (bionic hosts are supported);
* the removal procedure now relies on the "remove" hook since `snap
  remove` cannot be used from the confined environment anymore;
* prerequisites for enabling AppArmor confinement for QEMU processes
  created by the confined libvirtd;
* Added the Spice html5 console proxy service to enable clients to
  retrieve and use it via
  `microstack.openstack console url show --spice <servername>`.
* Added missing Cinder templates and DB migrations for the Cinder DB.
* Added experimental support for a loop device-based LVM backend for
  Cinder (see the sketch after this list). Due to LP: #1892895 this is
  not recommended for production use except for tempest testing with
  an applied workaround;
  * includes iscsid and loading of the iscsi-tcp kernel module;
  * includes LIO and loading of relevant kernel modules;
  * an LVM PV is created on top of a loop device with a backing file
    present in $SNAP_COMMON/cinder-lvm.img;
  * a VG is created on top of the PV;
  * LVs are created by Cinder and exported via LIO over iSCSI to
    iscsid, which hot-plugs new SCSI devices. Those SCSI devices are
    then propagated by Nova to libvirt and QEMU during volume
    attachment;
* Added post-deployment testing via rally and tempest (via the
  microstack-test snap). A set of tests included in RefStack 2018.02
  is executed (except for object storage tests, which are skipped due
  to the lack of object storage support).
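
To make the loop device-based flow above concrete, here is a minimal
Python sketch of the PV/VG setup (illustration only: the backing file
path mirrors the one above, while the helper name, default size and VG
name are hypothetical and not the snap's actual implementation):

    import os
    import subprocess


    def setup_loop_lvm(backing_file, size='20G', vg_name='cinder-volumes'):
        """Create a loop-backed LVM volume group for Cinder to carve LVs from."""
        # Create a sparse backing file (e.g. $SNAP_COMMON/cinder-lvm.img).
        if not os.path.exists(backing_file):
            subprocess.check_call(['truncate', '-s', size, backing_file])
        # Attach a loop device to the file; --show prints the device name.
        loop_dev = subprocess.check_output(
            ['sudo', 'losetup', '--find', '--show', backing_file],
            universal_newlines=True).strip()
        # Make the loop device an LVM PV, then build the VG on top of it.
        subprocess.check_call(['sudo', 'pvcreate', loop_dev])
        subprocess.check_call(['sudo', 'vgcreate', vg_name, loop_dev])
        return loop_dev

Cinder then creates LVs inside that VG and exports them via LIO over
iSCSI, as described above.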

Change-Id: Ic70770095860a57d5e0a55a8a9451f9db6be7448
2020-09-25 13:20:12 +00:00


#!/usr/bin/env python
"""
test_cluster.py

This is a test to verify that we can set up a small, two node cluster.

The host running this test must have at least 16GB of RAM, four cpu
cores, a large amount of disk space, and the ability to run multipass
vms.

"""

import json
import os
import sys
import unittest

sys.path.append(os.getcwd())

from tests.framework import Framework, check, check_output, call  # noqa E402

os.environ['MULTIPASS'] = 'true'  # TODO better way to do this.

class TestCluster(Framework):

    def test_cluster(self):
        # After the setUp step, we should have a control node running
        # in a multipass vm. Let's look up its cluster password and ip
        # address.
        openstack = '/snap/bin/microstack.openstack'

        control_host = self.get_host()
        control_host.install()
        control_host.init(['--control'])

        control_prefix = control_host.prefix
        cluster_password = check_output(*control_prefix, 'sudo', 'snap',
                                        'get', 'microstack',
                                        'config.cluster.password')
        control_ip = check_output(*control_prefix, 'sudo', 'snap',
                                  'get', 'microstack',
                                  'config.network.control-ip')

        self.assertTrue(cluster_password)
        self.assertTrue(control_ip)
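        # Both values were written into the snap's configuration by
        # `microstack init --control` above; the compute node set up
        # below needs them to find and register with the control node.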
        compute_host = self.add_host()
        compute_host.install()
        compute_machine = compute_host.machine
        compute_prefix = compute_host.prefix

        # TODO add the following to args for init
        check(*compute_prefix, 'sudo', 'snap', 'set', 'microstack',
              'config.network.control-ip={}'.format(control_ip))

        check(*compute_prefix, 'sudo', 'microstack.init', '--compute',
              '--cluster-password', cluster_password, '--debug')

        # Verify that our services look set up properly on the compute
        # node.
        services = check_output(
            *compute_prefix, 'systemctl', 'status', 'snap.microstack.*',
            '--no-page')

        self.assertTrue('nova-compute' in services)
        self.assertFalse('keystone-' in services)

        check(*compute_prefix, '/snap/bin/microstack.launch', 'cirros',
              '--name', 'breakfast', '--retry',
              '--availability-zone', 'nova:{}'.format(compute_machine))
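        # The `nova:<machine>` availability zone argument pins the
        # instance to the new compute machine, so a successful launch
        # shows that the compute node can actually run workloads.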
        # TODO: verify horizon dashboard on control node.

        # Verify endpoints
        compute_ip = check_output(*compute_prefix, 'sudo', 'snap',
                                  'get', 'microstack',
                                  'config.network.compute-ip')
        self.assertNotEqual(compute_ip, control_ip)

        # Ping the instance
        ip = None
        servers = check_output(*compute_prefix, openstack,
                               'server', 'list', '--format', 'json')
        servers = json.loads(servers)
        for server in servers:
            if server['Name'] == 'breakfast':
                # 'Networks' is a comma separated string; the second
                # entry is the externally reachable (floating) address.
                ip = server['Networks'].split(",")[1].strip()
                break

        self.assertTrue(ip)
        pings = 1
        max_pings = 60  # ~1 minute
        # Ping the machine from the control node (we don't have
        # networking wired up for the other nodes).
        while not call(*control_prefix, 'ping', '-c1', '-w1', ip):
            pings += 1
            if pings > max_pings:
                self.fail(
                    'Max pings reached for instance on {}!'.format(
                        compute_machine))

        self.passed = True

        # Compute machine cleanup
        check('sudo', 'multipass', 'delete', compute_machine)

if __name__ == '__main__':
    # Run our tests, ignoring deprecation warnings and warnings about
    # unclosed sockets. (TODO: set up a selenium server so that we can
    # move from PhantomJS, which is deprecated, to Selenium headless.)
    unittest.main(warnings='ignore')