Use focal/core20/Ussuri/OVN & enable confinement

Major changes:

* Plumbing necessary for strict confinement with
  the microstack-support interface
  (https://github.com/snapcore/snapd/pull/8926);
  * until the interface is merged, devmode will be used and kernel
    modules will be loaded via an auxiliary service (a connection
    sketch follows the commit metadata below).
* upgraded OpenStack components to Focal (20.04) and OpenStack Ussuri;
  * reworked the old patches;
  * added the Placement service since it is now separate;
  * addressed various build issues due to changes in snapcraft and
    built dependencies:
    * e.g. libvirt requires the build directory to be separate from the
      source directory, and LP: #1882255;
    * LP: #1882535 and https://github.com/pypa/pip/issues/8414
    * LP: #1882839
    * LP: #1885294
    * https://storyboard.openstack.org/#!/story/2007806
    * LP: #1864589
    * LP: #1777121
    * LP: #1881590
* ML2/OVS replaced with ML2/OVN;
  * dnsmasq is not used anymore;
  * neutron l3 and DHCP agents are not used anymore;
  * Linux network namespaces are only used for
    neutron-ovn-metadata-agent;
  * ML2 DNS support is done via native OVN mechanisms;
  * added OVN-related database services (southbound and northbound DBs);
  * added OVN-related control plane services (ovn-controller, ovn-northd);
* core20 base support (bionic hosts are supported);
* the removal procedure now relies on the "remove" hook since `snap
  remove` cannot be used from the confined environment anymore;
* Added prerequisites for enabling AppArmor confinement of QEMU
  processes created by the confined libvirtd.
* Added the Spice html5 console proxy service to enable clients to
  retrieve and use it via
  `microstack.openstack console url show --spice <servername>`.
* Added missing Cinder templates and DB migrations for the Cinder DB.
* Added experimental support for a loop device-based LVM backend for
  Cinder. Due to LP: #1892895 this is not recommended for production
  use and is intended only for Tempest testing with a workaround
  applied (see the setup sketch after this list);
  * includes iscsid and iscsi-tcp kernel module loading;
  * includes LIO and loading of relevant kernel modules;
  * an LVM PV is created on top of a loop device with a backing file
    present in $SNAP_COMMON/cinder-lvm.img;
  * a VG is created on top of the PV;
  * LVs are created by Cinder and exported via LIO over iSCSI to iscsid,
    which hot-plugs new SCSI devices; those SCSI devices are then
    propagated by Nova to libvirt and QEMU during volume attachment;
* Added post-deployment testing with Rally and Tempest (via the
  microstack-test snap). A set of tests included in RefStack 2018.02
  is executed (object storage tests are excluded due to the lack of
  object storage support); see the verification sketch after this list.
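
For reference, the loop-device/LVM path described above corresponds
roughly to the following sequence (a simplified sketch: the actual
setup is handled by the snap's setup-lvm-loopdev service, presumably
driven by the config.cinder.* settings; the use of truncate to create
the backing file is an assumption made for illustration):

    # Create the 32G backing file (config.cinder.loop-device-file-size)
    # and attach it to a free loop device.
    truncate -s 32G /var/snap/microstack/common/cinder-lvm.img
    loopdev=$(losetup --show -f /var/snap/microstack/common/cinder-lvm.img)
    # Turn the loop device into an LVM PV and create the VG that Cinder
    # is configured to use (cinder-volumes by default).
    pvcreate "$loopdev"
    vgcreate cinder-volumes "$loopdev"
    # Cinder then creates LVs in this VG and exports them via LIO over
    # iSCSI; iscsid hot-plugs the resulting SCSI devices, which Nova
    # passes to libvirt/QEMU during volume attachment.

The verification added via the microstack-test snap can be run against
a deployed MicroStack roughly as follows (a sketch based on the
commands exercised by the updated TestBasics test; the deployment file
generated by MicroStack is first copied into the microstack-test
snap's private /tmp):

    sudo snap install microstack-test
    sudo mkdir -p /tmp/snap.microstack-test/tmp
    sudo cp /var/snap/microstack/common/etc/microstack.json \
        /tmp/snap.microstack-test/tmp/microstack.json
    microstack-test.rally db recreate
    microstack-test.rally deployment create \
        --filename /tmp/microstack.json --name snap_generated
    microstack-test.tempest-init
    microstack-test.rally verify start \
        --load-list /snap/microstack-test/current/2020.06-test-list.txt \
        --detailed --concurrency 2
    microstack-test.rally verify report \
        --type json --to /tmp/verification-report.json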

Change-Id: Ic70770095860a57d5e0a55a8a9451f9db6be7448
Author: Dmitrii Shcherbakov
Date:   2020-05-25 21:51:06 +00:00
Commit: 780a4c4ead (parent: e59d15eb58)
78 files changed, 1932 insertions(+), 546 deletions(-)
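
With strict confinement, the required interfaces are connected manually
after installation. A sketch based on the connections made by the
updated test scripts (microstack-support is left out until the
interface lands in snapd):

    sudo snap install --dangerous microstack_ussuri_amd64.snap
    for iface in libvirt netlink-audit firewall-control hardware-observe \
                 kernel-module-observe kvm log-observe mount-observe \
                 netlink-connector network-observe openvswitch-support \
                 process-control system-observe network-control \
                 system-trace block-devices raw-usb hugepages-control; do
        sudo snap connect microstack:$iface
    done
    sudo /snap/bin/microstack.init --auto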


@ -1,18 +1,18 @@
From 4d90b94a0a4ce3e7e69507c2c25a6981336c66a1 Mon Sep 17 00:00:00 2001
From: Pete Vander Giessen <pete.vandergiessen@canonical.com>
Date: Thu, 19 Sep 2019 13:18:50 +0000
Subject: [PATCH] Added SNAP_COMMON pathing
From 36c45710c8cc3bbdf86fe2513a07a0d5f0a5c3f9 Mon Sep 17 00:00:00 2001
From: Dmitrii Shcherbakov <dmitrii.shcherbakov@canonical.com>
Date: Mon, 8 Jun 2020 13:56:20 +0000
Subject: [PATCH] Use SNAP_COMMON paths
---
lib/python3.6/site-packages/openstack_dashboard/local/local_settings.py | 4 +++-
openstack_dashboard/local/local_settings.py | 4 +++-
openstack_dashboard/settings.py | 6 ++++--
2 files changed, 7 insertions(+), 3 deletions(-)
diff --git a/lib/python3.6/site-packages/openstack_dashboard/local/local_settings.py b/lib/python3.6/site-packages/openstack_dashboard/local/local_settings.py
index 5f1ab10cc..cef4e9485 100644
--- a/lib/python3.6/site-packages/openstack_dashboard/local/local_settings.py
+++ b/lib/python3.6/site-packages/openstack_dashboard/local/local_settings.py
@@ -10,6 +10,8 @@ from openstack_dashboard.settings import HORIZON_CONFIG
diff --git a/lib/python3.8/site-packages/openstack_dashboard/local/local_settings.py b/lib/python3.8/site-packages/openstack_dashboard/local/local_settings.py
index 2b084bf24..aad403c04 100644
--- a/lib/python3.8/site-packages/openstack_dashboard/local/local_settings.py
+++ b/lib/python3.8/site-packages/openstack_dashboard/local/local_settings.py
@@ -22,6 +22,8 @@ from openstack_dashboard.settings import HORIZON_CONFIG
DEBUG = True
@ -21,7 +21,7 @@ index 5f1ab10cc..cef4e9485 100644
# This setting controls whether or not compression is enabled. Disabling
# compression makes Horizon considerably slower, but makes it much easier
# to debug JS and CSS changes
@@ -62,7 +64,7 @@ DEBUG = True
@@ -74,7 +76,7 @@ DEBUG = True
# including on the login form.
#HORIZON_CONFIG["disable_password_reveal"] = False
@ -30,11 +30,11 @@ index 5f1ab10cc..cef4e9485 100644
# Set custom secret key:
# You can either set it to a specific value or you can let horizon generate a
diff --git a/openstack_dashboard/settings.py b/openstack_dashboard/settings.py
index 02cd17ef3..69380f460 100644
--- a/lib/python3.6/site-packages/openstack_dashboard/settings.py
+++ b/lib/python3.6/site-packages/openstack_dashboard/settings.py
@@ -55,6 +55,8 @@ if ROOT_PATH not in sys.path:
diff --git a/lib/python3.8/site-packages/openstack_dashboard/settings.py b/lib/python3.8/site-packages/openstack_dashboard/settings.py
index 81b8e45e1..5909bc8a8 100644
--- a/lib/python3.8/site-packages/openstack_dashboard/settings.py
+++ b/lib/python3.8/site-packages/openstack_dashboard/settings.py
@@ -50,6 +50,8 @@ if ROOT_PATH not in sys.path:
DEBUG = False
@ -43,7 +43,7 @@ index 02cd17ef3..69380f460 100644
ROOT_URLCONF = 'openstack_dashboard.urls'
HORIZON_CONFIG = {
@@ -216,7 +218,7 @@ USE_TZ = True
@@ -211,7 +213,7 @@ USE_TZ = True
DEFAULT_EXCEPTION_REPORTER_FILTER = 'horizon.exceptions.HorizonReporterFilter'
SECRET_KEY = None
@ -52,7 +52,7 @@ index 02cd17ef3..69380f460 100644
ADD_INSTALLED_APPS = []
@@ -265,7 +267,7 @@ else:
@@ -260,7 +262,7 @@ else:
)
# allow to drop settings snippets into a local_settings_dir


@ -1,4 +0,0 @@
#!/bin/bash
# Wrapper for dnsmasq
exec $SNAP/usr/sbin/dnsmasq-orig -u snap_daemon -g snap_daemon $@


@ -0,0 +1,77 @@
From a8df30a8a837c223945a13fe4cd9418084d8ed21 Mon Sep 17 00:00:00 2001
From: Dmitrii Shcherbakov <dmitrii.shcherbakov@canonical.com>
Date: Wed, 10 Jun 2020 20:14:32 +0000
Subject: [PATCH] drop setuid/setgid/initgroups
---
src/os/unix/ngx_process_cycle.c | 54 ---------------------------------
1 file changed, 54 deletions(-)
diff --git a/src/os/unix/ngx_process_cycle.c b/src/os/unix/ngx_process_cycle.c
index 5817a2c2..305c6823 100644
--- a/src/os/unix/ngx_process_cycle.c
+++ b/src/os/unix/ngx_process_cycle.c
@@ -825,60 +825,6 @@ ngx_worker_process_init(ngx_cycle_t *cycle, ngx_int_t worker)
}
}
- if (geteuid() == 0) {
- if (setgid(ccf->group) == -1) {
- ngx_log_error(NGX_LOG_EMERG, cycle->log, ngx_errno,
- "setgid(%d) failed", ccf->group);
- /* fatal */
- exit(2);
- }
-
- if (initgroups(ccf->username, ccf->group) == -1) {
- ngx_log_error(NGX_LOG_EMERG, cycle->log, ngx_errno,
- "initgroups(%s, %d) failed",
- ccf->username, ccf->group);
- }
-
-#if (NGX_HAVE_PR_SET_KEEPCAPS && NGX_HAVE_CAPABILITIES)
- if (ccf->transparent && ccf->user) {
- if (prctl(PR_SET_KEEPCAPS, 1, 0, 0, 0) == -1) {
- ngx_log_error(NGX_LOG_EMERG, cycle->log, ngx_errno,
- "prctl(PR_SET_KEEPCAPS, 1) failed");
- /* fatal */
- exit(2);
- }
- }
-#endif
-
- if (setuid(ccf->user) == -1) {
- ngx_log_error(NGX_LOG_EMERG, cycle->log, ngx_errno,
- "setuid(%d) failed", ccf->user);
- /* fatal */
- exit(2);
- }
-
-#if (NGX_HAVE_CAPABILITIES)
- if (ccf->transparent && ccf->user) {
- struct __user_cap_data_struct data;
- struct __user_cap_header_struct header;
-
- ngx_memzero(&header, sizeof(struct __user_cap_header_struct));
- ngx_memzero(&data, sizeof(struct __user_cap_data_struct));
-
- header.version = _LINUX_CAPABILITY_VERSION_1;
- data.effective = CAP_TO_MASK(CAP_NET_RAW);
- data.permitted = data.effective;
-
- if (syscall(SYS_capset, &header, &data) == -1) {
- ngx_log_error(NGX_LOG_EMERG, cycle->log, ngx_errno,
- "capset() failed");
- /* fatal */
- exit(2);
- }
- }
-#endif
- }
-
if (worker >= 0) {
cpu_affinity = ngx_get_cpu_affinity(worker);
--
2.17.1


@ -1,57 +0,0 @@
Description: Drop code where nginx drops privileges for worker
processes. While setuid is covered by the browser-support plug,
setgroups isn't covered by any plugs. This code isn't required
because in strict mode we run worker processes as root:root.
The seccomp violation follows:
= Seccomp =
Time: Jun 16 01:13:15
Log: auid=4294967295 uid=0 gid=0 ses=4294967295 pid=6087 comm="nginx"
exe="/snap/keystone/x1/usr/sbin/nginx" sig=31 arch=c000003e
116(setgroups) compat=0 ip=0x7f40e288af09 code=0x0
Syscall: setgroups
Suggestion:
* adjust program to not use 'setgroups' until per-snap user/groups
are supported (https://launchpad.net/bugs/1446748)
Author: Corey Bryant <corey.bryant@canonical.com>
Forwarded: no
---
src/os/unix/ngx_process_cycle.c | 22 ----------------------
1 file changed, 22 deletions(-)
diff --git a/src/os/unix/ngx_process_cycle.c b/src/os/unix/ngx_process_cycle.c
index 1710ea8..c428673 100644
--- a/src/os/unix/ngx_process_cycle.c
+++ b/src/os/unix/ngx_process_cycle.c
@@ -824,28 +824,6 @@ ngx_worker_process_init(ngx_cycle_t *cycle, ngx_int_t worker)
}
}
- if (geteuid() == 0) {
- if (setgid(ccf->group) == -1) {
- ngx_log_error(NGX_LOG_EMERG, cycle->log, ngx_errno,
- "setgid(%d) failed", ccf->group);
- /* fatal */
- exit(2);
- }
-
- if (initgroups(ccf->username, ccf->group) == -1) {
- ngx_log_error(NGX_LOG_EMERG, cycle->log, ngx_errno,
- "initgroups(%s, %d) failed",
- ccf->username, ccf->group);
- }
-
- if (setuid(ccf->user) == -1) {
- ngx_log_error(NGX_LOG_EMERG, cycle->log, ngx_errno,
- "setuid(%d) failed", ccf->user);
- /* fatal */
- exit(2);
- }
- }
-
if (worker >= 0) {
cpu_affinity = ngx_get_cpu_affinity(worker);
--
2.7.4


@ -0,0 +1,56 @@
From 84e8c808d146ef7d4a716bf951875f85fd7020c9 Mon Sep 17 00:00:00 2001
From: Dmitrii Shcherbakov <dmitrii.shcherbakov@canonical.com>
Date: Tue, 18 Aug 2020 19:07:37 +0000
Subject: [PATCH] Use a snap-specific abstract socket address
* open-iscsi is included into Ubuntu cloud images and, as a result,
sockets with names hard-coded in the source get created and owned by
systemd at the host level;
* iscsid checks for the LISTEN_FDS environment variable to determine
whether systemd passes the necessary socket file descriptors to it -
this does not happen since iscsid.socket service name differs from the
actual service name: snap.microstack.iscsid.service;
* snapd's support for the systemd socket activation feature is present
but abstract socket names are restricted to be prefixed with
snap.<snap-name>.<your-socket-name> - this means that open-scsi needs to
be patched since the abstract domain socket name isn't configurable and
is hard-coded at the compile time.
This patch alters the hard-coded abstract socket names in order to use
systemd socket activation via the means supported by snapd and to avoid
conflicts with an iscsid instance that might be used at the host where
this snap is installed.
---
usr/mgmt_ipc.h | 2 +-
usr/uip_mgmt_ipc.h | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/usr/mgmt_ipc.h b/usr/mgmt_ipc.h
index 55972ed..aa66419 100644
--- a/usr/mgmt_ipc.h
+++ b/usr/mgmt_ipc.h
@@ -23,7 +23,7 @@
#include "iscsi_if.h"
#include "config.h"
-#define ISCSIADM_NAMESPACE "ISCSIADM_ABSTRACT_NAMESPACE"
+#define ISCSIADM_NAMESPACE "snap.microstack.ISCSIADM_ABSTRACT_NAMESPACE"
#define PEERUSER_MAX 64
typedef enum iscsiadm_cmd {
diff --git a/usr/uip_mgmt_ipc.h b/usr/uip_mgmt_ipc.h
index 916113d..484e9f5 100644
--- a/usr/uip_mgmt_ipc.h
+++ b/usr/uip_mgmt_ipc.h
@@ -24,7 +24,7 @@
#include "initiator.h"
#include "transport.h"
-#define ISCSID_UIP_NAMESPACE "ISCSID_UIP_ABSTRACT_NAMESPACE"
+#define ISCSID_UIP_NAMESPACE "snap.microstack.ISCSID_UIP_ABSTRACT_NAMESPACE"
typedef enum iscsid_uip_cmd {
ISCSID_UIP_IPC_UNKNOWN = 0,
--
2.17.1

snap-overlay/bin/iscsid-start (new executable file)

@ -0,0 +1,19 @@
#!/bin/sh
mkdir -p $SNAP_COMMON/etc/iscsi/
INAME_FILE=$SNAP_COMMON/etc/iscsi/initiatorname.iscsi
if ! [ -f $INAME_FILE ]; then
# Generate a unique InitiatorName and save it
INAME=`iscsi-iname -p iqn.1993-08.org.debian:01`
echo "## DO NOT EDIT OR REMOVE THIS FILE!" > $INAME_FILE
echo "## If you remove this file, the iSCSI daemon will not start." >> $INAME_FILE
echo "## If you change the InitiatorName, existing access control lists" >> $INAME_FILE
echo "## may reject this initiator. The InitiatorName must be unique">> $INAME_FILE
echo "## for each iSCSI initiator. Do NOT duplicate iSCSI InitiatorNames." >> $INAME_FILE
printf "InitiatorName=$INAME\n" >> $INAME_FILE
chmod 600 $INAME_FILE
fi
exec $SNAP/sbin/iscsid -p $SNAP_COMMON/var/run/iscsid.pid --initiatorname=$INAME_FILE --config=$SNAP_COMMON/etc/iscsi/iscsid.conf

snap-overlay/bin/load-modules (new executable file)

@ -0,0 +1,5 @@
#!/bin/bash
set -ex
modprobe -a vhost vhost-net vhost-scsi vhost-vsock pci-stub vfio nbd dm-mod dm-thin-pool dm-snapshot iscsi-tcp target-core-mod


@ -13,9 +13,11 @@ snapctl set \
# Networking related settings.
snapctl set \
config.network.dns=1.1.1.1 \
config.network.dns-servers=1.1.1.1 \
config.network.dns-domain=microstack.example. \
config.network.ext-gateway=10.20.20.1 \
config.network.control-ip=10.20.20.1 \
config.network.node-fqdn=`hostname -f` \
config.network.compute-ip=10.20.20.1 \
config.network.ext-cidr=10.20.20.1/24 \
config.network.security-rules=true \
@ -30,11 +32,19 @@ snapctl set \
config.credentials.os-password=keystone \
config.credentials.key-pair="/home/{USER}/snap/{SNAP_NAME}/common/.ssh/id_microstack" \
config.credentials.nova-password=nova \
config.credentials.cinder-password=cinder \
config.credentials.neutron-password=neutron \
config.credentials.placement-password=placement \
config.credentials.glance-password=glance \
;
# Cinder volume backend config.
snapctl set \
config.cinder.setup-loop-based-cinder-lvm-backend=false \
config.cinder.loop-device-file-size=32G \
config.cinder.lvm-backend-volume-group=cinder-volumes \
;
# Host optimizations and fixes.
snapctl set \
config.host.ip-forwarding=false \
@ -45,12 +55,13 @@ snapctl set \
snapctl set \
config.services.control-plane=true \
config.services.hypervisor=true \
config.services.spice-console=true \
;
# Clustering roles
snapctl set \
cluster.role=control \
cluster.password=null \
config.cluster.role=control \
config.cluster.password=null \
;
# Uninstall stuff


@ -10,15 +10,30 @@
set -ex
extcidr=$(snapctl get config.network.ext-cidr)
controlip=$(snapctl get config.network.control-ip)
# Create external integration bridge
ovs-vsctl --retry --may-exist add-br br-ex
# NOTE(dmitriis): this needs to be reworked to allow for OVN + direct exit of traffic to
# the provider network from a compute node.
# Create an external bridge in the system datapath.
ovs-vsctl --retry --may-exist add-br br-ex -- set bridge br-ex datapath_type=system protocols=OpenFlow13,OpenFlow15
ovs-vsctl set open . external-ids:ovn-bridge-mappings=physnet1:br-ex
ovs-vsctl set open . external-ids:ovn-cms-options="enable-chassis-as-gw"
# Configure the settings used by self-configuration of ovn-controller.
ovs-vsctl set open . external-ids:ovn-encap-type=geneve -- set open . external-ids:ovn-encap-ip=$controlip
# Leave SB database connection details for ovn-controller to pick up.
ovs-vsctl set open . external-ids:ovn-remote='unix:/var/snap/microstack/common/run/ovn/ovnsb_db.sock'
# NOTE: system-id is a randomly-generated UUID (see the --system-id=random option for ovs-ctl)
# As it is generated automatically, we do not set it here.
# It can be retrieved by looking at `ovs-vsctl get open_vswitch . external-ids`.
# Configure br-ex
ip address add $extcidr dev br-ex || :
ip link set br-ex up || :
sudo iptables -w -t nat -A POSTROUTING -s $extcidr ! \
iptables-legacy -w -t nat -A POSTROUTING -s $extcidr ! \
-d $extcidr -j MASQUERADE || :
exit 0

snap-overlay/bin/target-start (new executable file)

@ -0,0 +1,4 @@
#!/bin/sh
# This wrapper is needed due to https://bugs.launchpad.net/snapd/+bug/1882839
$SNAP/usr/bin/targetcli restoreconfig clear_existing=True

snap-overlay/bin/target-stop (new executable file)

@ -0,0 +1,5 @@
#!/bin/sh
$SNAP/usr/bin/targetcli saveconfig
# This wrapper is needed due to https://bugs.launchpad.net/snapd/+bug/1882839
$SNAP/usr/bin/targetcli clearconfig confirm=True


@ -1,5 +0,0 @@
[DEFAULT]
interface_driver = openvswitch
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = True
dnsmasq_dns_servers = 1.1.1.1


@ -1,2 +0,0 @@
[DEFAULT]
interface_driver = openvswitch


@ -1,3 +0,0 @@
[DEFAULT]
nova_metadata_ip = 10.20.20.1
metadata_proxy_shared_secret = supersecret


@ -1,4 +1,10 @@
[DEFAULT]
core_plugin = ml2
service_plugins = router
service_plugins = ovn-router
allow_overlapping_ips = True
# Disable auto-scheduling of networks to DHCP agents since they are not used with OVN.
network_auto_schedule = False
[ovn]
ovn_metadata_enabled = True


@ -1,13 +1,18 @@
[ml2]
mechanism_drivers = openvswitch
extension_drivers = port_security,trunk,qos
tenant_network_types = geneve,gre,vxlan
mechanism_drivers = ovn
extension_drivers = port_security,qos
tenant_network_types = geneve
overlay_ip_version = 4
external_network_type = flat
[ml2_type_geneve]
vni_ranges = 1:65535
max_header_size = 40
[ml2_type_gre]
tunnel_id_ranges = 1:65535
[ml2_type_flat]
flat_networks = *
[ml2_type_vxlan]
vni_ranges = 1:65535
[ovn]
# TODO(dmitriis): replace the common path with a template.
ovn_nb_connection = unix:/var/snap/microstack/common/run/ovn/ovnnb_db.sock
ovn_sb_connection = unix:/var/snap/microstack/common/run/ovn/ovnsb_db.sock


@ -1,11 +0,0 @@
# Snap provided defaults for neutron-openvswitch-agent
[securitygroup]
enable_security_group = True
firewall_driver = openvswitch
[AGENT]
tunnel_types = geneve,vxlan,gre
[ovs]
local_ip = 127.0.0.1
bridge_mappings = physnet1:br-ex


@ -0,0 +1,3 @@
# Snap distribution defaults - do not change, override in $SNAP_COMMON/etc/cinder.conf.d
[database]
max_retries = -1


@ -116,7 +116,8 @@ table {
/* Login splash screen */
#splash {
background: url("/static/themes/ubuntu/img/image-background-pattern.png");
background: linear-gradient(to right, rgba(100, 100, 100, 0.2), transparent), url("/static/themes/ubuntu/img/image-background-pattern.png");
position: absolute;
width: 100vw;
.login {
background-color: $white;


@ -10,6 +10,7 @@ setup:
- "{snap_common}/etc/nginx/sites-enabled"
- "{snap_common}/etc/nginx/snap/sites-enabled"
- "{snap_common}/etc/glance/glance.conf.d"
- "{snap_common}/etc/placement/placement.conf.d"
- "{snap_common}/etc/horizon/horizon.conf.d"
- "{snap_common}/etc/horizon/local_settings.d"
- "{snap_common}/var/horizon/static"
@ -17,6 +18,7 @@ setup:
- "{snap_common}/etc/cinder/uwsgi/snap"
- "{snap_common}/etc/nova/uwsgi/snap"
- "{snap_common}/etc/horizon/uwsgi/snap"
- "{snap_common}/etc/placement/uwsgi/snap"
- "{snap_common}/etc/rabbitmq"
- "{snap_common}/fernet-keys"
- "{snap_common}/lib"
@ -25,6 +27,9 @@ setup:
- "{snap_common}/log"
- "{snap_common}/run"
- "{snap_common}/lib/instances"
- "{snap_common}/etc/apparmor.d/libvirt"
- "{snap_common}/etc/iscsi"
- "{snap_common}/etc/target"
templates:
keystone-nginx.conf.j2: "{snap_common}/etc/nginx/snap/sites-enabled/keystone.conf"
keystone-snap.conf.j2: "{snap_common}/etc/keystone/keystone.conf.d/keystone-snap.conf"
@ -33,28 +38,40 @@ setup:
nova-snap.conf.j2: "{snap_common}/etc/nova/nova.conf.d/nova-snap.conf"
nova-nginx.conf.j2: "{snap_common}/etc/nginx/snap/sites-enabled/nova.conf"
glance-snap.conf.j2: "{snap_common}/etc/glance/glance.conf.d/glance-snap.conf"
placement-nginx.conf.j2: "{snap_common}/etc/nginx/snap/sites-enabled/placement.conf"
placement-snap.conf.j2: "{snap_common}/etc/placement/placement.conf.d/placement-snap.conf"
cinder-nginx.conf.j2: "{snap_common}/etc/nginx/snap/sites-enabled/cinder.conf"
cinder-snap.conf.j2: "{snap_common}/etc/cinder/cinder.conf.d/cinder-snap.conf"
cinder.database.conf.j2: "{snap_common}/etc/cinder/cinder.conf.d/database.conf"
cinder.rabbitmq.conf.j2: "{snap_common}/etc/cinder/cinder.conf.d/rabbitmq.conf"
cinder.keystone.conf.j2: "{snap_common}/etc/cinder/cinder.conf.d/keystone.conf"
cinder-rootwrap.conf.j2: "{snap_common}/etc/cinder/rootwrap.conf"
horizon-snap.conf.j2: "{snap_common}/etc/horizon/horizon.conf.d/horizon-snap.conf"
horizon-nginx.conf.j2: "{snap_common}/etc/nginx/snap/sites-enabled/horizon.conf"
05_snap_tweaks.j2: "{snap_common}/etc/horizon/local_settings.d/_05_snap_tweaks.py"
libvirtd.conf.j2: "{snap_common}/libvirt/libvirtd.conf"
virtlogd.conf.j2: "{snap_common}/libvirt/virtlogd.conf"
microstack.rc.j2: "{snap_common}/etc/microstack.rc"
microstack.json.j2: "{snap_common}/etc/microstack.json"
glance.conf.d.keystone.conf.j2: "{snap_common}/etc/glance/glance.conf.d/keystone.conf"
placement.conf.d.keystone.conf.j2: "{snap_common}/etc/placement/placement.conf.d/keystone.conf"
nova.conf.d.keystone.conf.j2: "{snap_common}/etc/nova/nova.conf.d/keystone.conf"
nova.conf.d.database.conf.j2: "{snap_common}/etc/nova/nova.conf.d/database.conf"
nova.conf.d.rabbitmq.conf.j2: "{snap_common}/etc/nova/nova.conf.d/rabbitmq.conf"
nova.conf.d.nova-placement.conf.j2: "{snap_common}/etc/nova/nova.conf.d/nova-placement.conf"
nova.conf.d.glance.conf.j2: "{snap_common}/etc/nova/nova.conf.d/glance.conf"
nova.conf.d.neutron.conf.j2: "{snap_common}/etc/nova/nova.conf.d/neutron.conf"
nova.conf.d.placement.conf.j2: "{snap_common}/etc/nova/nova.conf.d/placement.conf"
nova.conf.d.console.conf.j2: "{snap_common}/etc/nova/nova.conf.d/console.conf"
keystone.database.conf.j2: "{snap_common}/etc/keystone/keystone.conf.d/database.conf"
glance.database.conf.j2: "{snap_common}/etc/glance/glance.conf.d/database.conf"
placement.conf.d.database.conf.j2: "{snap_common}/etc/placement/placement.conf.d/database.conf"
neutron.keystone.conf.j2: "{snap_common}/etc/neutron/neutron.conf.d/keystone.conf"
neutron.nova.conf.j2: "{snap_common}/etc/neutron/neutron.conf.d/nova.conf"
neutron.database.conf.j2: "{snap_common}/etc/neutron/neutron.conf.d/database.conf"
neutron.conf.d.rabbitmq.conf.j2: "{snap_common}/etc/neutron/neutron.conf.d/rabbitmq.conf"
neutron_ovn_metadata_agent.ini.j2: "{snap_common}/etc/neutron/neutron_ovn_metadata_agent.ini"
rabbitmq.conf.j2: "{snap_common}/etc/rabbitmq/rabbitmq.config"
iscsid.conf.j2: "{snap_common}/etc/iscsi/iscsid.conf"
# LMA stack templates
telegraf.conf.j2: "{snap_common}/etc/telegraf/telegraf-microstack.conf"
@ -63,17 +80,22 @@ setup:
chmod:
"{snap_common}/instances": 0755
"{snap_common}/etc/microstack.rc": 0644
"{snap_common}/etc/microstack.json": 0644
snap-config-keys:
ospassword: 'config.credentials.os-password'
nova_password: 'config.credentials.nova-password'
cinder_password: 'config.credentials.cinder-password'
neutron_password: 'config.credentials.neutron-password'
placement_password: 'config.credentials.placement-password'
glance_password: 'config.credentials.glance-password'
placement_password: 'config.credentials.placement-password'
control_ip: 'config.network.control-ip'
node_fqdn: 'config.network.node-fqdn'
compute_ip: 'config.network.compute-ip'
extgateway: 'config.network.ext-gateway'
extcidr: 'config.network.ext-cidr'
dns: 'config.network.dns'
dns_servers: 'config.network.dns-servers'
dns_domain: 'config.network.dns-domain'
dashboard_allowed_hosts: 'config.network.dashboard-allowed-hosts'
dashboard_port: 'config.network.ports.dashboard'
mysql_port: 'config.network.ports.mysql'
@ -83,6 +105,10 @@ setup:
monitoring_tag: 'config.monitoring.tag'
monitoring_ipmi: 'config.monitoring.ipmi'
alerting_tag: 'config.alerting.tag'
ovn_nb_connection: 'config.network.ovn-nb-connection'
ovn_sb_connection: 'config.network.ovn-sb-connection'
setup_loop_based_cinder_lvm_backend: 'config.cinder.setup-loop-based-cinder-lvm-backend'
lvm_backend_volume_group: 'config.cinder.lvm-backend-volume-group'
entry_points:
keystone-manage:
binary: "{snap}/bin/keystone-manage"
@ -116,19 +142,6 @@ entry_points:
- "{snap_common}/etc/nova/nova.conf"
config-dirs:
- "{snap_common}/etc/nova/nova.conf.d"
nova-uwsgi:
type: uwsgi
uwsgi-dir: "{snap_common}/etc/nova/uwsgi/snap"
uwsgi-dir-override: "{snap_common}/etc/nova/uwsgi"
config-files:
- "{snap}/etc/nova/nova.conf"
config-files-override:
- "{snap_common}/etc/nova/nova.conf"
config-dirs:
- "{snap_common}/etc/nova/nova.conf.d"
templates:
nova-placement-api.ini.j2:
"{snap_common}/etc/nova/uwsgi/snap/nova-placement-api.ini"
nova-conductor:
binary: "{snap}/bin/nova-conductor"
config-files:
@ -169,6 +182,17 @@ entry_points:
- "{snap_common}/etc/nova/nova.conf"
config-dirs:
- "{snap_common}/etc/nova/nova.conf.d"
nova-spicehtml5proxy:
binary: "{snap}/bin/nova-spicehtml5proxy"
config-files:
- "{snap}/etc/nova/nova.conf"
config-files-override:
- "{snap_common}/etc/nova/nova.conf"
config-dirs:
- "{snap_common}/etc/nova/nova.conf.d"
templates:
nova.conf.d.console.conf.j2:
"{snap_common}/etc/nova/nova.conf.d/console.conf"
neutron-db-manage:
binary: "{snap}/bin/neutron-db-manage"
config-files:
@ -215,36 +239,19 @@ entry_points:
- "{snap_common}/etc/neutron/neutron.conf"
config-dirs:
- "{snap_common}/etc/neutron/neutron.conf.d"
neutron-l3-agent:
binary: "{snap}/bin/neutron-l3-agent"
neutron-ovn-metadata-agent:
binary: "{snap}/bin/neutron-ovn-metadata-agent"
config-files:
- "{snap}/etc/neutron/neutron.conf"
- "{snap}/etc/neutron/l3_agent.ini"
- "{snap}/etc/neutron/neutron_ovn_metadata_agent.ini"
config-files-override:
- "{snap_common}/etc/neutron/neutron.conf"
- "{snap_common}/etc/neutron/l3_agent.ini"
config-dirs:
- "{snap_common}/etc/neutron/neutron.conf.d"
neutron-dhcp-agent:
binary: "{snap}/bin/neutron-dhcp-agent"
config-files:
- "{snap}/etc/neutron/neutron.conf"
- "{snap}/etc/neutron/dhcp_agent.ini"
config-files-override:
- "{snap_common}/etc/neutron/neutron.conf"
- "{snap_common}/etc/neutron/dhcp_agent.ini"
config-dirs:
- "{snap_common}/etc/neutron/neutron.conf.d"
neutron-metadata-agent:
binary: "{snap}/bin/neutron-metadata-agent"
config-files:
- "{snap}/etc/neutron/neutron.conf"
- "{snap}/etc/neutron/metadata_agent.ini"
config-files-override:
- "{snap_common}/etc/neutron/neutron.conf"
- "{snap_common}/etc/neutron/metadata_agent.ini"
- "{snap_common}/etc/neutron/neutron_ovn_metadata_agent.ini"
config-dirs:
- "{snap_common}/etc/neutron/neutron.conf.d"
templates:
neutron_ovn_metadata_agent.ini.j2:
"{snap_common}/etc/neutron/neutron_ovn_metadata_agent.ini"
glance-manage:
binary: "{snap}/bin/glance-manage"
config-files:
@ -269,6 +276,27 @@ entry_points:
- "{snap_common}/etc/glance/glance-api.conf"
config-dirs:
- "{snap_common}/etc/glance/glance.conf.d"
placement-uwsgi:
type: uwsgi
uwsgi-dir: "{snap_common}/etc/placement/uwsgi/snap"
uwsgi-dir-override: "{snap_common}/etc/placement/uwsgi"
config-files:
- "{snap}/etc/placement/placement.conf"
config-files-override:
- "{snap_common}/etc/placement/placement.conf"
config-dirs:
- "{snap_common}/etc/placement/placement.conf.d"
templates:
placement-api.ini.j2:
"{snap_common}/etc/placement/uwsgi/snap/placement-api.ini"
placement-manage:
binary: "{snap}/bin/placement-manage"
config-files:
- "{snap}/etc/placement/placement.conf"
config-files-override:
- "{snap_common}/etc/placement/placement.conf"
config-dirs:
- "{snap_common}/etc/placement/placement.conf.d"
cinder-backup:
binary: "{snap}/bin/cinder-backup"
config-files:


@ -0,0 +1,6 @@
[DEFAULT]
filters_path={{ snap }}/etc/cinder/rootwrap.d
use_syslog=False
syslog_log_facility=syslog
syslog_log_level=ERROR


@ -2,9 +2,26 @@
# Set state path to writable directory
state_path = {{ snap_common }}/lib
resource_query_filters_file = {{ snap }}/etc/cinder/resource_filters.json
# Set volume configuration file storage directory
volumes_dir = {{ snap_common }}/lib/volumes
my_ip = {{ compute_ip }}
rootwrap_config = {{ snap_common }}/etc/cinder/rootwrap.conf
enabled_backends = {% if setup_loop_based_cinder_lvm_backend %}lvm-loop-based-backend{% endif %}
{% if setup_loop_based_cinder_lvm_backend %}
[lvm-loop-based-backend]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
iscsi_protocol = iscsi
target_helper = lioadm
volume_group = {{ lvm_backend_volume_group }}
volume_backend_name=lvm-loop-based
{% endif %}
[oslo_concurrency]
# Oslo Concurrency lock path
lock_path = {{ snap_common }}/lock


@ -0,0 +1,2 @@
[database]
connection = mysql+pymysql://cinder:cinder@{{ control_ip }}:{{ mysql_port }}/cinder


@ -0,0 +1,13 @@
[DEFAULT]
auth_strategy = keystone
[keystone_authtoken]
auth_uri = http://{{ control_ip }}:5000
auth_url = http://{{ control_ip }}:5000
memcached_servers = {{ control_ip }}:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = {{ cinder_password }}


@ -0,0 +1,2 @@
[DEFAULT]
transport_url = rabbit://openstack:rabbitmq@{{ control_ip }}:{{ rabbit_port }}


@ -0,0 +1,22 @@
iscsid.startup = {{ snap }}/sbin/iscsid
node.startup = manual
node.leading_login = No
node.session.timeo.replacement_timeout = 120
node.conn[0].timeo.login_timeout = 15
node.conn[0].timeo.logout_timeout = 15
node.conn[0].timeo.noop_out_interval = 5
node.conn[0].timeo.noop_out_timeout = 5
node.session.err_timeo.abort_timeout = 15
node.session.err_timeo.lu_reset_timeout = 30
node.session.err_timeo.tgt_reset_timeout = 30
node.session.cmds_max = 128
node.session.queue_depth = 32
node.session.xmit_thread_priority = -20
node.session.iscsi.InitialR2T = No
node.session.iscsi.ImmediateData = Yes
node.session.iscsi.FirstBurstLength = 262144
node.session.iscsi.MaxBurstLength = 16776192
node.conn[0].iscsi.MaxRecvDataSegmentLength = 262144
node.conn[0].iscsi.MaxXmitDataSegmentLength = 0
discovery.sendtargets.iscsi.MaxRecvDataSegmentLength = 32768
node.session.nr_sessions = 1


@ -0,0 +1,26 @@
{
"openstack": {
"admin": {
"password": "{{ ospassword }}",
"project_domain_name": "default",
"project_name": "admin",
"user_domain_name": "default",
"username": "admin"
},
"api_info": {
"keystone": {
"service_type": "identityv3",
"version": 3
}
},
"auth_url": "http://{{ control_ip }}:5000",
"endpoint_type": null,
"https_cacert": "",
"https_cert": "",
"https_insecure": false,
"https_key": "",
"profiler_conn_str": null,
"profiler_hmac_key": null,
"region_name": ""
}
}


@ -4,6 +4,18 @@ state_path = {{ snap_common }}/lib
# Log to systemd journal
use_journal = True
{% if dns_domain %}
dns_domain = {{ dns_domain }}
{% endif %}
[oslo_concurrency]
# Oslo Concurrency lock path
lock_path = {{ snap_common }}/lock
[ovn]
{% if dns_servers %}
dns_servers= {{ dns_servers }}
{% endif %}
# TODO(dmitriis): enable once external bridge IP addressing for compute nodes is figured out.
# enable_distributed_floating_ip = True


@ -0,0 +1,13 @@
[DEFAULT]
metadata_proxy_shared_secret = supersecret
[ovs]
ovsdb_connection = unix:{{ snap_common }}/run/openvswitch/db.sock
[ovn]
{% if ovn_nb_connection %}
ovn_nb_connection = {{ ovn_nb_connection }}
{% endif %}
{% if ovn_sb_connection %}
ovn_sb_connection = {{ ovn_sb_connection }}
{% endif %}


@ -1,4 +1,4 @@
user root root;
user snap_daemon snap_daemon;
worker_processes auto;
pid {{ snap_common }}/run/nginx.pid;


@ -4,6 +4,20 @@ state_path = {{ snap_common }}/lib
# Log to systemd journal
use_journal = True
# Set a hostname to be an FQDN to avoid issues with port binding for
# which a hostname of a Nova node must match a hostname of an OVN chassis.
host = {{ node_fqdn }}
[oslo_concurrency]
# Oslo Concurrency lock path
lock_path = {{ snap_common }}/lock
[os_vif_ovs]
# Nova relies on os-vif for openvswitch interface plugging and needs a connection to
# OVSDB. This is done via a TCP connection to localhost by default so we override this to
# use a unix socket instead.
# See os-vif/releasenotes/notes/add-ovsdb-native-322fffb49c91503d.yaml
ovsdb_connection = unix:{{ snap_common }}/run/openvswitch/db.sock
[cinder]
os_region_name = microstack


@ -0,0 +1,18 @@
[DEFAULT]
web = {{ snap }}/usr/share/spice-html5
[vnc]
enabled = False
[spice]
# Proxy configuration (controller only).
html5proxy_host = 0.0.0.0
html5proxy_port = 6082
enabled = True
agent_enabled = True
html5proxy_base_url = http://{{ control_ip }}:6082/spice_auto.html
server_listen = {{ compute_ip }}
server_proxyclient_address = {{ compute_ip }}
keymap = en-us


@ -0,0 +1,11 @@
[placement]
auth_uri = http://{{ control_ip }}:5000
auth_url = http://{{ control_ip }}:5000
memcached_servers = {{ control_ip }}:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = {{ nova_password }}
os_region_name = RegionOne


@ -7,8 +7,8 @@ server_port=5666
#allowed_hosts=0.0.0.0/0
#allowed_hosts=10.0.0.0/8,127.0.0.1
nrpe_user=root
nrpe_group=root
nrpe_user=snap_daemon
nrpe_group=snap_daemon
dont_blame_nrpe=0
debug=0
pid_file={{ snap_common }}/run/nrpe.pid
@ -32,10 +32,6 @@ command[check_libvirtd]=python3 {{ snap }}/usr/lib/nagios/plugins/check_systemd.
command[check_memcached]=python3 {{ snap }}/usr/lib/nagios/plugins/check_systemd.py snap.microstack.memcached
command[check_mysqld]=python3 {{ snap }}/usr/lib/nagios/plugins/check_systemd.py snap.microstack.mysqld
command[check_neutron_api]=python3 {{ snap }}/usr/lib/nagios/plugins/check_systemd.py snap.microstack.neutron-api
command[check_neutron_dhcp_agent]=python3 {{ snap }}/usr/lib/nagios/plugins/check_systemd.py snap.microstack.neutron-dhcp-agent
command[check_neutron_l3_agent]=python3 {{ snap }}/usr/lib/nagios/plugins/check_systemd.py snap.microstack.neutron-l3-agent
command[check_neutron_metadata_agent]=python3 {{ snap }}/usr/lib/nagios/plugins/check_systemd.py snap.microstack.neutron-metadata-agent
command[check_neutron_openvswitch_agent]=python3 {{ snap }}/usr/lib/nagios/plugins/check_systemd.py snap.microstack.neutron-openvswitch-agent
command[check_nginx]=python3 {{ snap }}/usr/lib/nagios/plugins/check_systemd.py snap.microstack.nginx
command[check_nova_api]=python3 {{ snap }}/usr/lib/nagios/plugins/check_systemd.py snap.microstack.nova-api
command[check_nova_api_metadata]=python3 {{ snap }}/usr/lib/nagios/plugins/check_systemd.py snap.microstack.nova-api-metadata


@ -1,5 +1,5 @@
[uwsgi]
wsgi-file = {{ snap }}/bin/nova-placement-api
wsgi-file = {{ snap }}/bin/placement-api
uwsgi-socket = {{ snap_common }}/run/placement-api.sock
buffer-size = 65535
master = true
@ -8,3 +8,4 @@ processes = 4
thunder-lock = true
lazy-apps = true
home = {{ snap }}/usr
pyargv = {{ pyargv }}


@ -0,0 +1,10 @@
server {
listen 8778;
error_log syslog:server=unix:/dev/log;
access_log syslog:server=unix:/dev/log;
location / {
include {{ snap }}/usr/conf/uwsgi_params;
uwsgi_param SCRIPT_NAME '';
uwsgi_pass unix://{{ snap_common }}/run/placement-api.sock;
}
}


@ -0,0 +1,9 @@
[DEFAULT]
# Set state path to writable directory
state_path = {{ snap_common }}/lib
# Log to systemd journal
use_journal = True
[oslo_concurrency]
# Oslo Concurrency lock path
lock_path = {{ snap_common }}/lock


@ -0,0 +1,2 @@
[placement_database]
connection = mysql+pymysql://placement:placement@{{ control_ip }}:{{ mysql_port }}/placement


@ -1,9 +1,13 @@
[placement]
os_region_name = microstack
project_domain_name = default
project_name = service
auth_type = password
user_domain_name = default
[keystone_authtoken]
auth_uri = http://{{ control_ip }}:5000
auth_url = http://{{ control_ip }}:5000
memcached_servers = {{ control_ip }}:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = placement
password = {{ placement_password }}
[paste_deploy]
flavor = keystone


@ -0,0 +1,2 @@
[placement]
randomize_allocation_candidates = true


@ -0,0 +1,2 @@
include {{ snap_common }}/lib/volumes/*
default-driver iscsi

snap-wrappers/ovn/ovn-wrapper (new executable file)

@ -0,0 +1,25 @@
#!/bin/bash
set -e
export OVN_LOGDIR=${SNAP_COMMON}/log/ovn
export OVN_RUNDIR=${SNAP_COMMON}/run/ovn
export OVN_SYSCONFDIR=${SNAP_COMMON}/etc
export OVN_PKGDATADIR=${SNAP}/usr/local/share/ovn
export OVN_BINDIR=${SNAP}/bin
export OVN_SBINDIR=${SNAP}/sbin
mkdir -p ${OVN_LOGDIR}
mkdir -p ${OVN_RUNDIR}
mkdir -p ${OVN_SYSCONFDIR}/ovn
if [ `basename $1` = 'ovn-ctl' -a `snapctl get config.clustered` == 'true' ]
then
# TODO: replace this with a secure alternative once TLS is supported.
# Create an SB TCP socket to be used by remote ovn-controller and neutron-ovn-metadata
# agents.
exec $@ --db-sb-create-insecure-remote=yes
else
exec $@
fi


@ -5,12 +5,16 @@ set -e
export OVS_LOGDIR=${SNAP_COMMON}/log/openvswitch
export OVS_RUNDIR=${SNAP_COMMON}/run/openvswitch
export OVS_SYSCONFDIR=${SNAP_COMMON}/etc
export OVS_PKGDATADIR=${SNAP}/share/openvswitch
export OVS_PKGDATADIR=${SNAP}/usr/local/share/openvswitch
export OVS_BINDIR=${SNAP}/bin
export OVS_SBINDIR=${SNAP}/sbin
mkdir -p ${OVS_LOGDIR}
mkdir -p ${OVS_RUNDIR}
mkdir -p ${OVS_SYSCONFDIR}/openvswitch
exec $@
if [ `basename $1` = 'ovs-ctl' ]
then
mkdir -p ${OVS_LOGDIR}
mkdir -p ${OVS_RUNDIR}
mkdir -p ${OVS_SYSCONFDIR}/openvswitch
exec $@ --system-id=random
else
exec $@
fi


@ -19,7 +19,7 @@
# %CopyrightEnd%
#
ROOTDIR=$SNAP/usr/lib/erlang
BINDIR=$ROOTDIR/erts-9.2/bin
BINDIR=$ROOTDIR/erts-10.6.4/bin
EMU=beam
PROGNAME=`echo $0 | sed 's/.*\///'`
export EMU


@ -15,6 +15,10 @@
## Copyright (c) 2007-2014 GoPivotal, Inc. All rights reserved.
##
# Make sure files created by rabbitmq (including the .erlang.cookie file which
# needs to be restricted to the user only) are created with strict permissions.
umask 077
mkdir -p $SNAP_COMMON/lib/rabbitmq
cd $SNAP_COMMON/lib/rabbitmq


@ -1,10 +1,47 @@
#!/bin/bash
set -ex
# Initialize config
set-default-config
# TODO(dmitriis): disable other services and only enable them once the
# prerequisites are met instead of allowing snapd to start them and make them fail.
# snapd starts all non-disabled services by default which may lead to errors such as
# a module loading error in case of ovs-vswitchd. The sequence is as follows:
# 1. The snap is installed;
# 2. Non-disabled services are started;
# 3. Interfaces that do not have auto-connection enabled are manually connected by
# an operator (connecting openvswitch-support loads the openvswitch kernel module
# but auto-connection is not enabled for openvswitch-support).
snapctl stop --disable $SNAP_INSTANCE_NAME.ovsdb-server
snapctl stop --disable $SNAP_INSTANCE_NAME.ovn-ovsdb-server-sb
snapctl stop --disable $SNAP_INSTANCE_NAME.ovn-ovsdb-server-nb
snapctl stop --disable $SNAP_INSTANCE_NAME.ovs-vswitchd
snapctl stop --disable $SNAP_INSTANCE_NAME.ovn-northd
snapctl stop --disable $SNAP_INSTANCE_NAME.ovn-controller
snapctl stop --disable $SNAP_INSTANCE_NAME.iscsid
snapctl stop --disable $SNAP_INSTANCE_NAME.target
# No meaningful default backend is available yet.
snapctl stop --disable $SNAP_INSTANCE_NAME.cinder-backup
# Will only be enabled based on the answers during initialization.
snapctl stop --disable $SNAP_INSTANCE_NAME.setup-lvm-loopdev
# Will only be enabled if a backend is chosen to be configured by the user.
snapctl stop --disable $SNAP_INSTANCE_NAME.cinder-volume
mkdir -p $SNAP_DATA/lib/libvirt/images
mkdir -p ${SNAP_COMMON}/log/libvirt/qemu
# NOTE(dmitriis): there is currently no way to make sure this directory gets
# recreated on reboot which would normally be done via systemd-tmpfiles.
mkdir -p /run/lock/snap.$SNAP_INSTANCE_NAME
# Copy TEMPLATE.qemu into the common directory. Libvirt generates additional
# policy dynamically which is why its apparmor directory is writeable under $SNAP_COMMON.
# Also copy other abstractions that are used by this template.
rsync -rh $SNAP/etc/apparmor.d $SNAP_COMMON/etc
# MySQL snapshot for speedy install
# snapshot is a mysql data dir with
@ -24,4 +61,32 @@ done
# Make a place for our horizon config overrides to live
mkdir -p ${SNAP_COMMON}/etc/horizon/local_settings.d
# ----- OVN -----
# Lay out directories used for OVN configuration and persistent data
for dir in etc/ovn var/lib/ovn var/log/ovn var/run/ovn; do
if [ ! -d $SNAP_COMMON/$dir ]; then
mkdir -p $SNAP_COMMON/$dir
fi
done
# Prepare access to the hosting systems Open vSwitch instance
# NOTE end user must execute `snap connect ovn:openvswitch` for this to work
ln -s /var/run/openvswitch $SNAP_COMMON/var/run/openvswitch
# The `ovn-ctl` script does not have enough knobs for useful tailoring of
# execution of the `ovn-northd` daemon. Instead it provides a file to pass
# arguments directly to the `ovn-northd` process.
#
# We fill the `args_northd` with necessary defaults and link to the file
# `ovn-ctl` looks for.
#
# For other daemons the corrensponding args_* file is used to pass arguments to
# `ovn-ctl`.
cat << EOF > $SNAP_COMMON/args_northd
--ovnnb-db=unix:$SNAP_COMMON/run/ovn/ovnnb_db.sock
--ovnsb-db=unix:$SNAP_COMMON/run/ovn/ovnsb_db.sock
EOF
ln -s $SNAP_COMMON/args_northd $SNAP_COMMON/etc/ovn/ovn-northd-db-params.conf
# ----- END OVN -----
snap-openstack setup # Sets up templates for the first time.


@ -1,6 +1,9 @@
#!/bin/bash
set -ex
# Refresh the TEMPLATE.qemu apparmor profile and abstractions.
rsync -rh $SNAP/etc/apparmor.d $SNAP_COMMON/etc
if [ -z "$(snapctl get config)" ]; then
# [2019-11-15] Handle build 171 (beta) -> 182
@ -30,4 +33,7 @@ if [ -z "$(snapctl get config.network.ports.rabbit)" ]; then
snapctl set config.network.ports.rabbit=5672
fi
mkdir -p ${SNAP_COMMON}/log/libvirt/qemu
mkdir -p /run/lock/snap.$SNAP_INSTANCE_NAME
snap-openstack setup # Write any template changes.

snap/hooks/remove (new executable file)

@ -0,0 +1,79 @@
#!/usr/bin/env python3
import os
import sys
import logging
from subprocess import check_call, check_output, call, run
from pyroute2 import netns
logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)
if __name__ == '__main__':
# Work around the lack of modified LD_LIBRARY_PATH and PATH variables with
# snap-specific content.
snap_dir = os.environ['SNAP']
snap_libs = (f'{snap_dir}/lib:{snap_dir}/usr/lib:'
f'{snap_dir}/lib/x86_64-linux-gnu:'
f'{snap_dir}/usr/lib/x86_64-linux-gnu')
os.environ['LD_LIBRARY_PATH'] = snap_libs
check_call(['snapctl', 'start', 'microstack.ovsdb-server'])
check_call(['snapctl', 'start', 'microstack.ovs-vswitchd'])
logging.info('Attempting to remove br-ex.')
check_call(['ovs-vsctl', '--if-exists', 'del-br', 'br-ex'])
check_call(['snapctl', 'stop', 'microstack.ovsdb-server'])
check_call(['snapctl', 'stop', 'microstack.ovs-vswitchd'])
for ns in netns.listnetns():
if ns.startswith('ovnmeta-'):
logging.info(f'Removing the {ns} network namespace.')
netns.remove(ns)
# Need to expose targets prior to starting iscsid in order to properly log out
# of iSCSI sessions.
check_call(['snapctl', 'start', 'microstack.target'])
check_call(['snapctl', 'start', 'microstack.iscsid'])
check_call(['sync'])
# Assuming the QEMU processes have already been killed by snapd,
# log out of all targets prior to removing the snap to clean up
# the kernel state.
# TODO: be more selective about logging out since there may be sessions
# unrelated to MicroStack in the kernel.
# TODO: also clean up block devices by writing to
# /sys/class/block/<dev>/device/delete since those do not get deleted on
# session logout.
logging.info('Attempting to remove iscsi sessions from the kernel.')
res = run(['iscsiadm', '-m', 'node', '-u'])
# ISCSI_ERR_NO_OBJS_FOUND
if res.returncode == 21:
logging.debug('No iscsi sessions were found.')
elif res.returncode == 0:
logging.debug('Successfully logged the existing iscsi sessions out.')
else:
# Albeit this is an error condition we cannot do much in the remove
# hook to fix this besides logging since snapd does not stop the
# snap removal on error in the remove hook.
logging.error('Unexpected error code received from iscsiadm: '
f'{res.returncode}')
check_call(['snapctl', 'stop', 'microstack.iscsid'])
check_call(['snapctl', 'stop', 'microstack.target'])
# File-backed LVM resource cleanup (if present).
loop_file = f'{os.environ["SNAP_COMMON"]}/cinder-lvm.img'
allocated_loop_dev = check_output(
f'losetup -j {loop_file} | cut -d ":" -f 1', shell=True
).decode('utf-8').strip()
if allocated_loop_dev:
cinder_lvm_vg = check_output([
'snapctl', 'get', 'config.cinder.lvm-backend-volume-group']
).strip()
if not call(['vgdisplay', cinder_lvm_vg]):
check_call(['vgremove', '-f', cinder_lvm_vg])
if not call(['pvdisplay', allocated_loop_dev]):
check_call(['pvremove', '-f', allocated_loop_dev])
check_call(['losetup', '-d', allocated_loop_dev])

(File diff suppressed because it is too large.)


@ -36,8 +36,8 @@ do
esac
done
if [ ! -f microstack_stein_amd64.snap ]; then
echo "microstack_stein_amd64.snap not found."
if [ ! -f microstack_ussuri_amd64.snap ]; then
echo "microstack_ussuri_amd64.snap not found."
echo "Please run snapcraft before executing the tests."
exit 1
fi
@ -72,7 +72,7 @@ if [ "$PREFIX" == "multipass" ]; then
PREFIX="multipass exec $MACHINE --"
multipass launch --cpus 2 --mem 16G $DISTRO --name $MACHINE
multipass copy-files microstack_stein_amd64.snap $MACHINE:
multipass copy-files microstack_ussuri_amd64.snap $MACHINE:
HORIZON_IP=`multipass info $MACHINE | grep IPv4 | cut -d":" -f2 \
| tr -d '[:space:]'`
@ -80,11 +80,32 @@ fi
# Possibly install a release of the snap before running a test.
if [ "${UPGRADE_FROM}" != "none" ]; then
$PREFIX sudo snap install --classic --${UPGRADE_FROM} microstack
$PREFIX sudo snap install --${UPGRADE_FROM} microstack
fi
# Install the snap under test -- try again if the machine is not yet ready.
$PREFIX sudo snap install --classic --dangerous microstack*.snap
$PREFIX sudo snap install --dangerous microstack*.snap
$PREFIX sudo snap connect microstack:libvirt
$PREFIX sudo snap connect microstack:netlink-audit
$PREFIX sudo snap connect microstack:firewall-control
$PREFIX sudo snap connect microstack:hardware-observe
$PREFIX sudo snap connect microstack:kernel-module-observe
$PREFIX sudo snap connect microstack:kvm
$PREFIX sudo snap connect microstack:log-observe
$PREFIX sudo snap connect microstack:mount-observe
$PREFIX sudo snap connect microstack:netlink-connector
$PREFIX sudo snap connect microstack:network-observe
$PREFIX sudo snap connect microstack:openvswitch-support
$PREFIX sudo snap connect microstack:process-control
$PREFIX sudo snap connect microstack:system-observe
$PREFIX sudo snap connect microstack:network-control
$PREFIX sudo snap connect microstack:system-trace
$PREFIX sudo snap connect microstack:block-devices
$PREFIX sudo snap connect microstack:raw-usb
$PREFIX sudo snap connect microstack:hugepages-control
# $PREFIX sudo snap connect microstack:microstack-support
$PREFIX sudo /snap/bin/microstack.init --auto
# Comment out the above and uncomment below to install the version of


@ -82,7 +82,7 @@ class Host():
self.machine = ''
self.distro = os.environ.get('DISTRO') or 'bionic'
self.snap = os.environ.get('SNAP_FILE') or \
'microstack_stein_amd64.snap'
'microstack_ussuri_amd64.snap'
self.horizon_ip = '10.20.20.1'
self.host_type = 'localhost'
@ -91,17 +91,34 @@ class Host():
print("Booting a Multipass VM ...")
self.multipass()
self.microstack_test()
def install(self, snap=None, channel='dangerous'):
if snap is None:
snap = self.snap
print("Installing {}".format(snap))
check(*self.prefix, 'sudo', 'snap', 'install', '--devmode',
'--{}'.format(channel), snap)
check(*self.prefix, 'sudo', 'snap', 'install',
'--{}'.format(channel), '--devmode', snap)
def init(self, flag='auto'):
print("Initializing the snap with --{}".format(flag))
check(*self.prefix, 'sudo', 'microstack.init', '--{}'.format(flag))
# TODO: add microstack-support once it is merged into snapd.
connections = [
'microstack:libvirt', 'microstack:netlink-audit',
'microstack:firewall-control', 'microstack:hardware-observe',
'microstack:kernel-module-observe', 'microstack:kvm',
'microstack:log-observe', 'microstack:mount-observe',
'microstack:netlink-connector', 'microstack:network-observe',
'microstack:openvswitch-support', 'microstack:process-control',
'microstack:system-observe', 'microstack:network-control',
'microstack:system-trace', 'microstack:block-devices',
'microstack:raw-usb'
]
for connection in connections:
check('sudo', 'snap', 'connect', connection)
def init(self, args=['--auto']):
print(f"Initializing the snap with {args}")
check(*self.prefix, 'sudo', 'microstack.init', *args)
def multipass(self):
self.machine = petname.generate()
@ -119,6 +136,9 @@ class Host():
info = json.loads(info)
self.horizon_ip = info['info'][self.machine]['ipv4'][0]
def microstack_test(self):
check('sudo', 'snap', 'install', 'microstack-test')
def dump_logs(self):
# TODO: make unique log name
if check_output('whoami') == 'zuul':


@ -17,6 +17,7 @@ Web IDE.
import os
import sys
import time
import json
import unittest
sys.path.append(os.getcwd())
@ -35,7 +36,11 @@ class TestBasics(Framework):
"""
host = self.get_host()
host.install()
host.init()
host.init([
'--auto',
'--setup-loop-based-cinder-lvm-backend',
'--loop-device-file-size=32'
])
prefix = host.prefix
endpoints = check_output(
@ -71,13 +76,35 @@ class TestBasics(Framework):
# Check to verify that our bridge is there.
self.assertTrue('br-ex' in check_output(*prefix, 'ip', 'a'))
# Try to uninstall snap without sudo.
self.assertFalse(call(*prefix, '/snap/bin/microstack.remove',
'--purge', '--auto'))
check(*prefix, 'sudo', 'mkdir', '-p', '/tmp/snap.microstack-test/tmp')
check(*prefix, 'sudo', 'cp',
'/var/snap/microstack/common/etc/microstack.json',
'/tmp/snap.microstack-test/tmp/microstack.json')
check(*prefix, 'microstack-test.rally', 'db', 'recreate')
check(*prefix, 'microstack-test.rally', 'deployment', 'create',
'--filename', '/tmp/microstack.json',
'--name', 'snap_generated')
check(*prefix, 'microstack-test.tempest-init')
check(*prefix, 'microstack-test.rally', 'verify', 'start',
'--load-list',
'/snap/microstack-test/current/2020.06-test-list.txt',
'--detailed', '--concurrency', '2')
check(*prefix, 'microstack-test.rally', 'verify', 'report',
'--type', 'json', '--to',
'/tmp/verification-report.json')
report = json.loads(check_output(
*prefix, 'sudo', 'cat',
'/tmp/snap.microstack-test/tmp/verification-report.json'))
# Make sure there are no verification failures in the report.
failures = list(report['verifications'].values())[0]['failures']
self.assertEqual(failures, 0, 'Verification tests had failure.')
# Try to remove the snap without sudo.
self.assertFalse(
call(*prefix, 'snap', 'remove', '--purge', 'microstack'))
# Retry with sudo (should succeed).
check(*prefix, 'sudo', '/snap/bin/microstack.remove',
'--purge', '--auto')
check(*prefix, 'sudo', 'snap', 'remove', '--purge', 'microstack')
# Verify that MicroStack is gone.
self.assertFalse(call(*prefix, 'snap', 'list', 'microstack'))


@ -34,7 +34,7 @@ class TestCluster(Framework):
openstack = '/snap/bin/microstack.openstack'
control_host = self.get_host()
control_host.install()
control_host.init(flag='control')
control_host.init(['--control'])
control_prefix = control_host.prefix
cluster_password = check_output(*control_prefix, 'sudo', 'snap',


@ -26,7 +26,7 @@ class TestControlNode(Framework):
host = self.get_host()
host.install()
host.init(flag='control')
host.init(['--control'])
print("Checking output of services ...")
services = check_output(

tools/cluster/cluster/client.py (normal file → executable file)

@ -1,3 +1,5 @@
#!/usr/bin/env python3
import json
import requests


@ -34,6 +34,7 @@ import logging
import secrets
import string
import sys
import socket
from functools import wraps
@ -55,6 +56,15 @@ def requires_sudo(func):
return wrapper
def check_file_size_positive(value):
ival = int(value)
if ival < 1:
raise argparse.ArgumentTypeError(
f'The file size for a loop device'
f' must be larger than 1GB, current: {value}')
return ival
def parse_init_args():
parser = argparse.ArgumentParser()
parser.add_argument('--auto', '-a', action='store_true',
@ -63,6 +73,18 @@ def parse_init_args():
parser.add_argument('--compute', action='store_true')
parser.add_argument('--control', action='store_true')
parser.add_argument('--debug', action='store_true')
parser.add_argument(
'--setup-loop-based-cinder-lvm-backend',
action='store_true',
help='(experimental) set up a loop device-backed'
' LVM backend for Cinder.'
)
parser.add_argument(
'--loop-device-file-size',
type=check_file_size_positive, default=32,
help=('File size in GB (10^9) of a file to be exposed as a loop'
' device for the Cinder LVM backend.')
)
args = parser.parse_args()
return args
@ -100,6 +122,12 @@ def process_init_args(args):
if args.debug:
log.setLevel(logging.DEBUG)
check('snapctl', 'set',
f'config.cinder.setup-loop-based-cinder-lvm-backend='
f'{str(args.setup_loop_based_cinder_lvm_backend).lower()}')
check('snapctl', 'set',
f'config.cinder.loop-device-file-size={args.loop_device_file_size}G')
return auto
@ -110,7 +138,8 @@ def init() -> None:
question_list = [
questions.Clustering(),
questions.Dns(),
questions.DnsServers(),
questions.DnsDomain(),
questions.NetworkSettings(),
questions.OsPassword(), # TODO: turn this off if COMPUTE.
questions.ForceQemu(),
@ -120,11 +149,15 @@ def init() -> None:
questions.DashboardAccess(),
questions.RabbitMq(),
questions.DatabaseSetup(),
questions.PlacementSetup(),
questions.NovaHypervisor(),
questions.NovaControlPlane(),
questions.NovaSpiceConsoleSetup(),
questions.NeutronControlPlane(),
questions.GlanceSetup(),
questions.SecurityRules(),
questions.CinderSetup(),
questions.CinderVolumeLVMSetup(),
questions.PostSetup(),
questions.ExtraServicesQuestion(),
]
@ -160,7 +193,8 @@ def set_network_info() -> None:
check('snapctl', 'set', 'config.network.ext-gateway={}'.format(gate))
check('snapctl', 'set', 'config.network.ext-cidr={}'.format(cidr))
check('snapctl', 'set', 'config.network.control-ip={}'.format(ip))
check('snapctl', 'set', 'config.network.control-ip={}'.format(ip))
check('snapctl', 'set',
'config.network.node-fqdn={}'.format(socket.getfqdn()))
@requires_sudo


@ -28,7 +28,7 @@ from time import sleep
from os import path
from init.shell import (check, call, check_output, sql, nc_wait, log_wait,
restart, download)
start, restart, download, disable, enable)
from init.config import Env, log
from init.questions.question import Question
from init.questions import clustering, network, uninstall # noqa F401
@ -104,7 +104,7 @@ class Clustering(Question):
# Turn off cluster server
# TODO: it would be more secure to reverse this -- only enable
# to service if we are doing clustering.
check('snapctl', 'stop', '--disable', 'microstack.cluster-server')
disable('cluster-server')
class ConfigQuestion(Question):
@ -138,32 +138,36 @@ class ConfigQuestion(Question):
_env[key.strip()] = val.strip()
class Dns(Question):
"""Possibly override default dns."""
class DnsServers(ConfigQuestion):
"""Provide default DNS forwarders for MicroStack to use."""
_type = 'string'
_question = 'DNS to use'
config_key = 'config.network.dns'
_question = 'Upstream DNS servers to be used by instances (VMs)'
config_key = 'config.network.dns-servers'
def yes(self, answer: str):
"""Override the default dhcp_agent.ini file."""
file_path = '{SNAP_COMMON}/etc/neutron/dhcp_agent.ini'.format(**_env)
with open(file_path, 'w') as f:
f.write("""\
[DEFAULT]
interface_driver = openvswitch
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = True
dnsmasq_dns_servers = {answer}
""".format(answer=answer))
# Neutron is not actually started at this point, so we don't
# need to restart.
# TODO: This isn't idempotent, because it will behave
# differently if we re-run this script when neutron *is*
# started. Need to figure that out.
pass
class DnsDomain(ConfigQuestion):
"""An internal DNS domain to be used for ML2 DNS."""
_type = 'string'
_question = 'An internal DNS domain to be used for ML2 DNS'
config_key = 'config.network.dns-domain'
def yes(self, answer: str):
# Neutron is not actually started at this point, so we don't
# need to restart.
# TODO: This isn't idempotent, because it will behave
# differently if we re-run this script when neutron *is*
# started. Need to figure that out.
pass
class NetworkSettings(Question):
@ -174,19 +178,46 @@ class NetworkSettings(Question):
def yes(self, answer):
log.info('Configuring networking ...')
# OpenvSwitch services may not have started up properly
restart('ovsdb-server')
restart('ovs-vswitchd')
role = check_output('snapctl', 'get', 'config.cluster.role')
# Enable and start the services.
enable('ovsdb-server')
enable('ovs-vswitchd')
enable('ovn-ovsdb-server-sb')
enable('ovn-ovsdb-server-nb')
network.ExtGateway().ask()
network.ExtCidr().ask()
if role == 'control':
nb_conn = 'unix:{SNAP_COMMON}/run/ovn/ovnnb_db.sock'.format(**_env)
sb_conn = 'unix:{SNAP_COMMON}/run/ovn/ovnsb_db.sock'.format(**_env)
elif role == 'compute':
control_ip = check_output('snapctl', 'get',
'config.network.control-ip')
sb_conn = f'tcp:{control_ip}:6642'
# Not used by any compute node services.
nb_conn = ''
else:
raise Exception(f'Unexpected node role: {role}')
# Configure OVN SB and NB sockets based on the node role. For
# single-node deployments there is no need to use a TCP socket.
check_output('snapctl', 'set',
f'config.network.ovn-nb-connection={nb_conn}')
check_output('snapctl', 'set',
f'config.network.ovn-sb-connection={sb_conn}')
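# For illustration with assumed values: on a control node this yields
# nb_conn='unix:/var/snap/microstack/common/run/ovn/ovnnb_db.sock' (and the
# matching ovnsb_db.sock for sb_conn), while a compute node joining a control
# node at 10.20.20.1 ends up with sb_conn='tcp:10.20.20.1:6642' and an empty
# nb_conn.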
# Now that we have default or overridden values, set up the
# bridge and write all the proper values into our config
# files.
check('setup-br-ex')
check('snap-openstack', 'setup')
if role == 'control':
enable('ovn-northd')
enable('ovn-controller')
network.IpForwarding().ask()
@ -296,7 +327,7 @@ class RabbitMq(Question):
def no(self, answer: str):
log.info('Disabling local rabbit ...')
check('snapctl', 'stop', '--disable', 'microstack.rabbitmq-server')
disable('rabbitmq-server')
class DatabaseSetup(Question):
@ -315,16 +346,17 @@ class DatabaseSetup(Question):
def _create_dbs(self) -> None:
# TODO: actually use passwords here.
for db in ('neutron', 'nova', 'nova_api', 'nova_cell0', 'cinder',
'glance', 'keystone'):
sql("CREATE DATABASE IF NOT EXISTS {db};".format(db=db))
sql(
"GRANT ALL PRIVILEGES ON {db}.* TO {db}@{control_ip} \
IDENTIFIED BY '{db}';".format(db=db, **_env))
'glance', 'keystone', 'placement'):
sql("CREATE USER IF NOT EXISTS '{db}'@'{control_ip}'"
" IDENTIFIED BY '{db}';".format(db=db, **_env))
sql("CREATE DATABASE IF NOT EXISTS `{db}`;".format(db=db))
sql("GRANT ALL PRIVILEGES ON {db}.* TO '{db}'@'{control_ip}';"
"".format(db=db, **_env))
# Grant nova user access to cell0
sql(
"GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'{control_ip}' \
IDENTIFIED BY \'nova';".format(**_env))
"GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'{control_ip}';"
"".format(**_env))
def _bootstrap(self) -> None:
@ -337,7 +369,8 @@ class DatabaseSetup(Question):
'--bootstrap-password', _env['ospassword'],
'--bootstrap-admin-url', bootstrap_url,
'--bootstrap-internal-url', bootstrap_url,
'--bootstrap-public-url', bootstrap_url)
'--bootstrap-public-url', bootstrap_url,
'--bootstrap-region-id', 'microstack')
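# Note: the 'microstack' region id bootstrapped here matches the
# '--region microstack' used by the 'openstack endpoint create' calls in the
# questions below.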
def yes(self, answer: str) -> None:
"""Setup Databases.
@ -355,8 +388,8 @@ class DatabaseSetup(Question):
# Start keystone-uwsgi. We use snapctl, because systemd
# doesn't yet know about the service.
check('snapctl', 'start', 'microstack.nginx')
check('snapctl', 'start', 'microstack.keystone-uwsgi')
start('nginx')
start('keystone-uwsgi')
log.info('Configuring Keystone Fernet Keys ...')
check('snap-openstack', 'launch', 'keystone-manage',
@ -382,7 +415,7 @@ class DatabaseSetup(Question):
check('snapctl', 'set', 'database.ready=true')
log.info('Disabling local MySQL ...')
check('snapctl', 'stop', '--disable', 'microstack.mysqld')
disable('mysqld')
class NovaHypervisor(Question):
@ -404,11 +437,63 @@ class NovaHypervisor(Question):
'microstack', 'compute', endpoint,
'http://{compute_ip}:8774/v2.1'.format(**_env))
check('snapctl', 'start', 'microstack.nova-compute')
start('nova-compute')
def no(self, answer):
log.info('Disabling nova compute service ...')
check('snapctl', 'stop', '--disable', 'microstack.nova-compute')
disable('nova-compute')
class NovaSpiceConsoleSetup(Question):
"""Run the Spice HTML5 console proxy service"""
_type = 'boolean'
config_key = 'config.services.spice-console'
def yes(self, answer):
log.info('Configuring the Spice HTML5 console service...')
start('nova-spicehtml5proxy')
def no(self, answer):
log.info('Disabling the Spice HTML5 console service ...')
disable('nova-spicehtml5proxy')
class PlacementSetup(Question):
"""Setup Placement services."""
_type = 'boolean'
config_key = 'config.services.control-plane'
def yes(self, answer: str) -> None:
log.info('Configuring the Placement service...')
if not call('openstack', 'user', 'show', 'placement'):
check('openstack', 'user', 'create', '--domain', 'default',
'--password', 'placement', 'placement')
check('openstack', 'role', 'add', '--project', 'service',
'--user', 'placement', 'admin')
if not call('openstack', 'service', 'show', 'placement'):
check('openstack', 'service', 'create', '--name',
'placement', '--description', '"Placement API"',
'placement')
for endpoint in ['public', 'internal', 'admin']:
call('openstack', 'endpoint', 'create', '--region',
'microstack', 'placement', endpoint,
'http://{control_ip}:8778'.format(**_env))
start('placement-uwsgi')
log.info('Running Placement DB migrations...')
check('snap-openstack', 'launch', 'placement-manage', 'db', 'sync')
restart('placement-uwsgi')
def no(self, answer):
log.info('Disabling the Placement service...')
disable('placement-uwsgi')
class NovaControlPlane(Question):
@ -446,31 +531,14 @@ class NovaControlPlane(Question):
check('openstack', 'role', 'add', '--project',
'service', '--user', 'nova', 'admin')
if not call('openstack', 'user', 'show', 'placement'):
check('openstack', 'user', 'create', '--domain', 'default',
'--password', 'placement', 'placement')
check('openstack', 'role', 'add', '--project', 'service',
'--user', 'placement', 'admin')
if not call('openstack', 'service', 'show', 'placement'):
check('openstack', 'service', 'create', '--name',
'placement', '--description', '"Placement API"',
'placement')
for endpoint in ['public', 'internal', 'admin']:
call('openstack', 'endpoint', 'create', '--region',
'microstack', 'placement', endpoint,
'http://{control_ip}:8778'.format(**_env))
# Use snapctl to start nova services. We need to call them
# out manually, because systemd doesn't know about them yet.
# TODO: parse the output of `snapctl services` to get this
# list automagically.
for service in [
'microstack.nova-api',
]:
check('snapctl', 'start', service)
start('nova-api')
log.info('Running Nova API DB migrations'
' (this will take a lot of time)...')
check('snap-openstack', 'launch', 'nova-manage', 'api_db', 'sync')
if 'cell0' not in check_output('snap-openstack', 'launch',
@ -485,18 +553,19 @@ class NovaControlPlane(Question):
check('snap-openstack', 'launch', 'nova-manage', 'cell_v2',
'create_cell', '--name=cell1', '--verbose')
log.info('Running Nova DB migrations'
' (this will take a lot of time)...')
check('snap-openstack', 'launch', 'nova-manage', 'db', 'sync')
restart('nova-api')
restart('nova-compute')
for service in [
'microstack.nova-api-metadata',
'microstack.nova-conductor',
'microstack.nova-scheduler',
'microstack.nova-uwsgi',
'nova-api-metadata',
'nova-conductor',
'nova-scheduler',
]:
check('snapctl', 'start', service)
start(service)
nc_wait(_env['compute_ip'], '8774')
@ -509,13 +578,92 @@ class NovaControlPlane(Question):
log.info('Disabling nova control plane services ...')
for service in [
'microstack.nova-uwsgi',
'microstack.nova-api',
'microstack.nova-conductor',
'microstack.nova-scheduler',
'microstack.nova-api-metadata']:
'nova-api',
'nova-conductor',
'nova-scheduler',
'nova-api-metadata']:
disable(service)
check('snapctl', 'stop', '--disable', service)
class CinderSetup(Question):
"""Setup Placement services."""
_type = 'boolean'
config_key = 'config.services.control-plane'
def yes(self, answer: str) -> None:
log.info('Configuring the Cinder services...')
if not call('openstack', 'user', 'show', 'cinder'):
check('openstack', 'user', 'create', '--domain', 'default',
'--password', 'cinder', 'cinder')
check('openstack', 'role', 'add', '--project', 'service',
'--user', 'cinder', 'admin')
control_ip = _env['control_ip']
for endpoint in ['public', 'internal', 'admin']:
for api_version in ['v2', 'v3']:
if not call('openstack', 'service', 'show',
f'cinder{api_version}'):
check('openstack', 'service', 'create', '--name',
f'cinder{api_version}', '--description',
f'"Cinder {api_version} API"',
f'volume{api_version}')
if not check_output(
'openstack', 'endpoint', 'list',
'--service', f'volume{api_version}', '--interface',
endpoint):
check(
'openstack', 'endpoint', 'create', '--region',
'microstack', f'volume{api_version}', endpoint,
f'http://{control_ip}:8776/{api_version}/'
'$(project_id)s'
)
restart('cinder-uwsgi')
log.info('Running Cinder DB migrations...')
check('snap-openstack', 'launch', 'cinder-manage', 'db', 'sync')
restart('cinder-uwsgi')
restart('cinder-scheduler')
def no(self, answer):
log.info('Disabling Cinder services...')
for service in [
'cinder-uwsgi',
'cinder-scheduler',
'cinder-volume',
'cinder-backup']:
disable(service)
class CinderVolumeLVMSetup(Question):
"""Setup cinder-volume with LVM."""
_type = 'boolean'
config_key = 'config.cinder.setup-loop-based-cinder-lvm-backend'
_question = ('(experimental) Do you want to set up a loop device-backed LVM'
' volume backend for Cinder?')
interactive = True
def yes(self, answer: bool) -> None:
check('snapctl', 'set',
f'config.cinder.setup-loop-based-cinder-lvm-backend'
f'={str(answer).lower()}')
log.info('Setting up cinder-volume service with the LVM backend...')
enable('setup-lvm-loopdev')
enable('cinder-volume')
enable('target')
enable('iscsid')
def no(self, answer: bool) -> None:
check('snapctl', 'set', f'config.cinder.setup-loop-based-cinder-lvm-backend='
f'{str(answer).lower()}')
disable('setup-lvm-loopdev')
disable('cinder-volume')
disable('iscsid')
disable('target')
class NeutronControlPlane(Question):
@ -541,26 +689,16 @@ class NeutronControlPlane(Question):
'microstack', 'network', endpoint,
'http://{control_ip}:9696'.format(**_env))
for service in [
'microstack.neutron-api',
'microstack.neutron-dhcp-agent',
'microstack.neutron-l3-agent',
'microstack.neutron-metadata-agent',
'microstack.neutron-openvswitch-agent',
]:
check('snapctl', 'start', service)
start('neutron-api')
check('snap-openstack', 'launch', 'neutron-db-manage', 'upgrade',
'head')
for service in [
'microstack.neutron-api',
'microstack.neutron-dhcp-agent',
'microstack.neutron-l3-agent',
'microstack.neutron-metadata-agent',
'microstack.neutron-openvswitch-agent',
'neutron-api',
'neutron-ovn-metadata-agent',
]:
check('snapctl', 'restart', service)
restart(service)
nc_wait(_env['control_ip'], '9696')
@ -594,20 +732,23 @@ class NeutronControlPlane(Question):
neutron on this machine.
"""
# Make sure that the agent is running.
# Make sure the necessary services are enabled and started.
for service in [
'microstack.neutron-openvswitch-agent',
'ovs-vswitchd',
'ovsdb-server',
'ovn-controller',
'neutron-ovn-metadata-agent'
]:
check('snapctl', 'start', service)
enable(service)
# Disable the other services.
for service in [
'microstack.neutron-api',
'microstack.neutron-dhcp-agent',
'microstack.neutron-metadata-agent',
'microstack.neutron-l3-agent',
'neutron-api',
'ovn-northd',
'ovn-ovsdb-server-sb',
'ovn-ovsdb-server-nb',
]:
check('snapctl', 'stop', '--disable', service)
disable(service)
class GlanceSetup(Question):
@ -660,10 +801,10 @@ class GlanceSetup(Question):
'http://{compute_ip}:9292'.format(**_env))
for service in [
'microstack.glance-api',
'microstack.registry', # TODO rename to glance-registry
'glance-api',
'registry', # TODO rename to glance-registry
]:
check('snapctl', 'start', service)
start(service)
check('snap-openstack', 'launch', 'glance-manage', 'db_sync')
@ -677,8 +818,8 @@ class GlanceSetup(Question):
self._fetch_cirros()
def no(self, answer):
check('snapctl', 'stop', '--disable', 'microstack.glance-api')
check('snapctl', 'stop', '--disable', 'microstack.registry')
disable('glance-api')
disable('registry')
class SecurityRules(Question):
@ -725,9 +866,9 @@ class PostSetup(Question):
# TODO: fix issue.
restart('libvirtd')
restart('virtlogd')
restart('nova-compute')
# Start horizon
check('snapctl', 'start', 'microstack.horizon-uwsgi')
restart('horizon-uwsgi')
check('snapctl', 'set', 'initialized=true')
log.info('Complete. Marked microstack as initialized!')
@ -739,13 +880,13 @@ class SimpleServiceQuestion(Question):
log.info('enabling and starting ' + self.__class__.__name__)
for service in self.services:
check('snapctl', 'start', '--enable', service)
enable(service)
log.info(self.__class__.__name__ + ' enabled')
def no(self, answer):
for service in self.services:
check('snapctl', 'stop', '--disable', service)
disable(service)
class ExtraServicesQuestion(Question):

View File

@ -2,7 +2,7 @@ import sys
from init.config import Env, log
from init.questions.question import Question
from init.shell import check, call
from init.shell import call
_env = Env().get_env()
@ -29,7 +29,6 @@ class DeleteBridge(Question):
# TODO: cleanup system optimizations
# TODO: cleanup kernel modules?
# TODO: cleanup iptables rules
class RemoveMicrostack(Question):
@ -40,8 +39,4 @@ class RemoveMicrostack(Question):
def yes(self, answer):
"""Uninstall MicroStack, passing any command line options to snapd."""
log.info('Uninstalling MicroStack (this may take a while) ...')
check('snap', 'remove', '{SNAP_INSTANCE_NAME}'.format(**_env),
*ARGS)
log.info('MicroStack has been removed from your system!')

View File

@ -129,6 +129,16 @@ def log_wait(log: str, message: str) -> None:
sleep(1)
def start(service: str) -> None:
"""Start a microstack service.
:param service: the service(s) to be started. Can contain wild cards.
e.g. *rabbit*
"""
check('snapctl', 'start', 'microstack.{}'.format(service))
def restart(service: str) -> None:
"""Restart a microstack service.
@ -139,6 +149,16 @@ def restart(service: str) -> None:
check('snapctl', 'restart', 'microstack.{}'.format(service))
def enable(service: str) -> None:
"""Disable and mask a service.
:param service: the service(s) to be enabled. Can contain wild cards.
e.g. *rabbit*
"""
check('snapctl', 'start', '--enable', 'microstack.{}'.format(service))
def disable(service: str) -> None:
"""Disable and mask a service.

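# Usage sketch (assumed; mirrors the raw snapctl calls these helpers replace
# at their call sites in this change):
#   start('nginx')        -> snapctl start microstack.nginx
#   enable('ovn-northd')  -> snapctl start --enable microstack.ovn-northd
#   disable('mysqld')     -> snapctl stop --disable microstack.mysqld
#   restart('libvirtd')   -> snapctl restart microstack.libvirtd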
View File

@ -1,4 +1,5 @@
netaddr
# netaddr is pinned to match the upper-constraints.txt file of Ussuri
netaddr===0.7.19
netifaces
pymysql
pymysql==0.9.3
wget

View File

@ -9,7 +9,6 @@ setup(
'console_scripts': [
'microstack_init = init.main:init',
'set_network_info = init.main:set_network_info',
'microstack_remove = init.main:remove',
],
},
)

View File

@ -7,7 +7,7 @@ import mock
# TODO: drop in test runner and get rid of this line.
sys.path.append(os.getcwd()) # noqa
from init.questions.question import (Question, InvalidQuestion, InvalidAnswer)
from init.questions.question import (Question, InvalidQuestion, InvalidAnswer) # noqa
##############################################################################

View File

@ -9,9 +9,11 @@ sudo apt update
sudo apt install -y firefox-geckodriver python3-petname python3-selenium
# Setup snapd and snapcraft
# Install snapd if it isn't installed yet (needed to install the snapd snap itself).
sudo apt install -y snapd
# Build our snap!
sudo snap install snapd
sudo snap install --classic snapcraft
sudo snap install lxd
@ -22,4 +24,6 @@ newgrp lxd << END
set -ex
lxd init --auto
snapcraft --use-lxd
# Delete the build container to free the storage space on a test node.
lxc delete snapcraft-microstack
END

View File

@ -20,9 +20,49 @@ MACHINE=$(petname)
multipass launch --cpus 2 --mem 16G $DISTRO --name $MACHINE
# Install the snap
multipass copy-files microstack_stein_amd64.snap $MACHINE:
multipass copy-files microstack_ussuri_amd64.snap $MACHINE:
multipass exec $MACHINE -- \
sudo snap install --classic --dangerous microstack*.snap
sudo snap install --dangerous microstack*.snap
multipass exec $MACHINE -- \
sudo snap connect microstack:libvirt
multipass exec $MACHINE -- \
sudo snap connect microstack:netlink-audit
multipass exec $MACHINE -- \
sudo snap connect microstack:firewall-control
multipass exec $MACHINE -- \
sudo snap connect microstack:hardware-observe
multipass exec $MACHINE -- \
sudo snap connect microstack:kernel-module-observe
multipass exec $MACHINE -- \
sudo snap connect microstack:kvm
multipass exec $MACHINE -- \
sudo snap connect microstack:log-observe
multipass exec $MACHINE -- \
sudo snap connect microstack:mount-observe
multipass exec $MACHINE -- \
sudo snap connect microstack:netlink-connector
multipass exec $MACHINE -- \
sudo snap connect microstack:network-observe
multipass exec $MACHINE -- \
sudo snap connect microstack:openvswitch-support
multipass exec $MACHINE -- \
sudo snap connect microstack:process-control
multipass exec $MACHINE -- \
sudo snap connect microstack:system-observe
multipass exec $MACHINE -- \
sudo snap connect microstack:network-control
multipass exec $MACHINE -- \
sudo snap connect microstack:system-trace
multipass exec $MACHINE -- \
sudo snap connect microstack:block-devices
multipass exec $MACHINE -- \
sudo snap connect microstack:raw-usb
multipass exec $MACHINE -- \
sudo snap connect microstack:hugepages-control
# TODO: add the below once the interface is merged into snapd.
# multipass exec $MACHINE -- \
# sudo snap connect microstack:microstack-support
# Drop the user into a snap shell, as root.
multipass exec $MACHINE -- \

View File

@ -0,0 +1,28 @@
#!/bin/bash
set -ex
cinder_volumes_vg=`snapctl get config.cinder.lvm-backend-volume-group`
if [ `snapctl get config.cinder.setup-loop-based-cinder-lvm-backend` = 'true' ]
then
loop_file=$SNAP_COMMON/cinder-lvm.img
loop_file_size=`snapctl get config.cinder.loop-device-file-size`
# Create a file to hold an LVM PV+VG + LVs if it does not exist.
test -f $loop_file || fallocate -l $loop_file_size $loop_file
# Unless this file already has an associated loop device, associate a free loop device with it.
if [ -z "`losetup -j $loop_file`" ]
then
until losetup -f $loop_file
do
echo 'Waiting until the device cgroup entry is updated, see LP# 1892895'
sleep 1
done
fi
allocated_loop_dev=`losetup -j $loop_file | cut -d':' -f 1`
# Create a PV on the allocated loop device unless there is already one on it.
lvmdiskscan -l --config 'devices { filter = [ "a|'$allocated_loop_dev'|", "r|.*|" ] }' | grep -q '1 LVM' || (pvcreate $allocated_loop_dev && vgcreate $cinder_volumes_vg $allocated_loop_dev && exit 0)
fi
# Activate the logical volumes (relevant on node reboot).
lvchange -a y $cinder_volumes_vg
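# Verification sketch (illustrative only, not invoked anywhere): the backing
# file, loop device, PV and VG created above can be inspected with e.g.
#   losetup -j $SNAP_COMMON/cinder-lvm.img
#   pvs && vgs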