Update install page for Xena

Update the install-openstack page to Xena

Add screenshots for OVS bridge section on
the install-maas page

Include various small corrections

Change-Id: Iff1d382c19da37174da189496e7f63485b9268af
Peter Matulis 2021-10-07 17:55:19 -04:00
parent abe70d394d
commit e89b910f9e
8 changed files with 234 additions and 210 deletions


@@ -65,7 +65,7 @@ Sample output:
OS_REGION_NAME=RegionOne
OS_AUTH_VERSION=3
OS_CACERT=/home/ubuntu/snap/openstackclients/common/root-ca.crt
OS_AUTH_URL=https://10.0.0.162:5000/v3
OS_AUTH_URL=https://10.0.0.170:5000/v3
OS_PROJECT_DOMAIN_NAME=admin_domain
OS_AUTH_PROTOCOL=https
OS_USERNAME=admin
@@ -97,13 +97,13 @@ The output will look similar to this:
+----------------------------------+-----------+--------------+--------------+---------+-----------+------------------------------------------+
| ID | Region | Service Name | Service Type | Enabled | Interface | URL |
+----------------------------------+-----------+--------------+--------------+---------+-----------+------------------------------------------+
| 172dc2610f2a46cbbf64919a7b414266 | RegionOne | cinderv3 | volumev3 | True | admin | https://10.0.0.171:8776/v3/$(tenant_id)s |
| 60466514cde4401eaa810301bddb1d2c | RegionOne | glance | image | True | admin | https://10.0.0.167:9292 |
| 70be9abb201748078b6d91ff803ede86 | RegionOne | cinderv2 | volumev2 | True | admin | https://10.0.0.171:8776/v2/$(tenant_id)s |
| 835f368961744d3aa62b0b7ead24c5c4 | RegionOne | placement | placement | True | admin | https://10.0.0.165:8778 |
| 9478c33a71994f9daa4d79a5630f1784 | RegionOne | neutron | network | True | admin | https://10.0.0.161:9696 |
| bcff6b5d81474cb9884b8161865b1394 | RegionOne | keystone | identity | True | admin | https://10.0.0.162:35357/v3 |
| cb4dcb58607448c7981ddae79e8ca92d | RegionOne | nova | compute | True | admin | https://10.0.0.164:8774/v2.1 |
| 12011a63a8e24e2290986cf7d8c285db | RegionOne | cinderv3 | volumev3 | True | admin | https://10.0.0.179:8776/v3/$(tenant_id)s |
| 17a66b67744c42beb20135dca647a9a4 | RegionOne | keystone | identity | True | admin | https://10.0.0.170:35357/v3 |
| 296755b4627641379fd43095c5fab3ba | RegionOne | nova | compute | True | admin | https://10.0.0.172:8774/v2.1 |
| 682fd715c05f492fb0abc08f56e25439 | RegionOne | placement | placement | True | admin | https://10.0.0.173:8778 |
| 7b20063d208c40aa9d3e3d1152259868 | RegionOne | neutron | network | True | admin | https://10.0.0.169:9696 |
| a613af1a0d8349ee9329e1230e76b764 | RegionOne | cinderv2 | volumev2 | True | admin | https://10.0.0.179:8776/v2/$(tenant_id)s |
| b4fe417933704e8b86cfbca91811fcbf | RegionOne | glance | image | True | admin | https://10.0.0.175:9292 |
+----------------------------------+-----------+--------------+--------------+---------+-----------+------------------------------------------+
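As an illustrative aside (not part of the original page), the catalog entries can be cross-checked against ``OS_AUTH_URL`` with a few lines of standard-library Python; the addresses below are the sample values from the output above:

```python
from urllib.parse import urlparse

# Sample endpoint URLs taken from the catalog output above.
endpoints = {
    "keystone": "https://10.0.0.170:35357/v3",
    "nova": "https://10.0.0.172:8774/v2.1",
    "placement": "https://10.0.0.173:8778",
}

def endpoint_host_port(url):
    """Return the (host, port) pair of a catalog endpoint URL."""
    parsed = urlparse(url)
    return parsed.hostname, parsed.port

# The Keystone identity endpoint should share its host with OS_AUTH_URL.
auth_url = "https://10.0.0.170:5000/v3"
assert endpoint_host_port(endpoints["keystone"])[0] == endpoint_host_port(auth_url)[0]

for name, url in endpoints.items():
    print(name, *endpoint_host_port(url))
```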
If the endpoints aren't visible, it's likely your environment variables aren't
@@ -124,7 +124,7 @@ a Focal amd64 image:
.. code-block:: none
curl http://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64.img
curl http://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64.img \
--output ~/cloud-images/focal-amd64.img
Now import the image and call it 'focal-amd64':
@@ -168,7 +168,7 @@ subnet is '10.0.0.0/24':
openstack subnet create --network ext_net --no-dhcp \
--gateway 10.0.0.1 --subnet-range 10.0.0.0/24 \
--allocation-pool start=10.0.0.10,end=10.0.0.200 \
--allocation-pool start=10.0.0.40,end=10.0.0.99 \
ext_subnet
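A quick sanity check on the revised allocation pool (an aside, using Python's stdlib ``ipaddress`` module with the sample values from the command above):

```python
import ipaddress

# Values from the 'openstack subnet create' command above (sample cloud).
subnet = ipaddress.ip_network("10.0.0.0/24")
gateway = ipaddress.ip_address("10.0.0.1")
pool_start = ipaddress.ip_address("10.0.0.40")
pool_end = ipaddress.ip_address("10.0.0.99")

# The pool must sit inside the subnet and must not include the gateway.
assert pool_start in subnet and pool_end in subnet
assert not (pool_start <= gateway <= pool_end)

pool_size = int(pool_end) - int(pool_start) + 1
print(f"{pool_size} allocatable addresses")  # 60
```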
.. important::
@@ -225,14 +225,14 @@ environment:
echo $OS_AUTH_URL
The output for the last command for this example is
**https://10.0.0.162:5000/v3**.
**https://10.0.0.170:5000/v3**.
The contents of the file, say ``project1-rc``, will therefore look like this
(assuming the user password is 'ubuntu'):
.. code-block:: ini
export OS_AUTH_URL=https://10.0.0.162:5000/v3
export OS_AUTH_URL=https://10.0.0.170:5000/v3
export OS_USER_DOMAIN_NAME=domain1
export OS_USERNAME=user1
export OS_PROJECT_DOMAIN_NAME=domain1
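The rc file is just a sequence of ``export`` lines, so it can also be generated programmatically. A small sketch with a hypothetical ``write_rc_file`` helper (the trailing variables beyond those shown above are assumed from the example's user and password):

```python
import os
import tempfile

# Hypothetical helper that writes an OpenStack rc file from a mapping.
def write_rc_file(path, env):
    with open(path, "w") as f:
        for key, value in env.items():
            f.write(f"export {key}={value}\n")

# Values mirror the project1-rc example above (password 'ubuntu' assumed).
env = {
    "OS_AUTH_URL": "https://10.0.0.170:5000/v3",
    "OS_USER_DOMAIN_NAME": "domain1",
    "OS_USERNAME": "user1",
    "OS_PROJECT_DOMAIN_NAME": "domain1",
    "OS_PROJECT_NAME": "project1",
    "OS_PASSWORD": "ubuntu",
}

path = os.path.join(tempfile.mkdtemp(), "project1-rc")
write_rc_file(path, env)
print(open(path).read().splitlines()[0])  # export OS_AUTH_URL=https://10.0.0.170:5000/v3
```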
@@ -277,8 +277,8 @@ project-specific network with a private subnet. We'll also need a router to
link this network to the public network created earlier.
The non-admin user now creates a private internal network called 'user1_net'
and an accompanying subnet called 'user1_subnet' (the DNS server is pointing to
the MAAS server at 10.0.0.2):
and an accompanying subnet called 'user1_subnet' (the DNS server is the MAAS
server at 10.0.0.2):
.. code-block:: none
@@ -286,7 +286,7 @@ the MAAS server at 10.0.0.2):
openstack subnet create --network user1_net --dns-nameserver 10.0.0.2 \
--gateway 192.168.0.1 --subnet-range 192.168.0.0/24 \
--allocation-pool start=192.168.0.10,end=192.168.0.200 \
--allocation-pool start=192.168.0.10,end=192.168.0.199 \
user1_subnet
Now a router called 'user1_router' is created, added to the subnet, and told to
@@ -295,8 +295,8 @@ use the public external network as its gateway network:
.. code-block:: none
openstack router create user1_router
openstack router set --external-gateway ext_net user1_router
openstack router add subnet user1_router user1_subnet
openstack router set user1_router --external-gateway ext_net
Configure SSH and security groups
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -305,7 +305,7 @@ An SSH keypair needs to be imported into the cloud in order to access your
instances.
Generate one first if you do not yet have one. This command creates a
passphraseless keypair (remove the `-N` option to avoid that):
passphraseless keypair (remove the ``-N`` option to avoid that):
.. code-block:: none
@@ -377,7 +377,7 @@ The instance is ready when the output contains:
.
.
.
Ubuntu 20.04.2 LTS focal-1 ttyS0
Ubuntu 20.04.3 LTS focal-1 ttyS0
focal-1 login:


@@ -108,9 +108,9 @@ the environment. It should now look very similar to this:
.. code-block:: none
Model Controller Cloud/Region Version SLA Timestamp
openstack maas-controller mymaas/default 2.9.0 unsupported 01:51:00Z
openstack maas-controller mymaas/default 2.9.15 unsupported 15:56:13Z
Model "admin/openstack" is empty
Model "admin/openstack" is empty.
Next steps
----------


@@ -73,7 +73,7 @@ instructions`_ for details:
.. code-block:: none
sudo snap install maas-test-db
sudo snap install maas --channel=2.9/stable
sudo snap install maas --channel=3.0/stable
sudo maas init region+rack --maas-url http://10.0.0.2:5240/MAAS --database-uri maas-test-db:///
sudo maas createadmin --username admin --password ubuntu --email admin@example.com --ssh-import lp:<username>
sudo maas apikey --username admin > ~ubuntu/admin-api-key
@@ -200,8 +200,27 @@ Create OVS bridge
~~~~~~~~~~~~~~~~~
Create an Open vSwitch bridge from a network bond or a single interface. Here
we will do the latter with interface 'enp1s0'. The bridge will be named
'br-ex'.
we will do the latter with interface 'enp1s0':
.. figure:: ./media/ovs-bridge-1.png
:scale: 70%
:alt: Select interface to use for OVS bridge
.. role:: raw-html(raw)
:format: html
:raw-html:`<br />`
The bridge will be named 'br-ex':
.. figure:: ./media/ovs-bridge-2.png
:scale: 70%
:alt: OVS bridge configuration
.. role:: raw-html(raw)
:format: html
:raw-html:`<br />`
Multiple VLANs can be added to the bridge but in this example cloud a single
untagged VLAN is used.


@@ -12,107 +12,107 @@ installed from the instructions given on the :doc:`Install OpenStack
.. code-block:: console
Model Controller Cloud/Region Version SLA Timestamp
openstack maas-one maas-one/default 2.9.0 unsupported 01:35:20Z
Model Controller Cloud/Region Version SLA Timestamp
openstack maas-controller mymaas/default 2.9.15 unsupported 22:00:48Z
App Version Status Scale Charm Store Channel Rev OS Message
ceph-mon 16.2.0 active 3 ceph-mon charmstore stable 464 ubuntu Unit is ready and clustered
ceph-osd 16.2.0 active 4 ceph-osd charmstore stable 489 ubuntu Unit is ready (2 OSD)
ceph-radosgw 16.2.0 active 1 ceph-radosgw charmstore stable 398 ubuntu Unit is ready
cinder 18.0.0 active 1 cinder charmstore stable 436 ubuntu Unit is ready
cinder-ceph 18.0.0 active 1 cinder-ceph charmstore stable 352 ubuntu Unit is ready
cinder-mysql-router 8.0.23 active 1 mysql-router charmstore stable 48 ubuntu Unit is ready
dashboard-mysql-router 8.0.23 active 1 mysql-router charmstore stable 48 ubuntu Unit is ready
glance 22.0.0 active 1 glance charmstore stable 450 ubuntu Unit is ready
glance-mysql-router 8.0.23 active 1 mysql-router charmstore stable 48 ubuntu Unit is ready
keystone 19.0.0 active 1 keystone charmstore stable 542 ubuntu Application Ready
keystone-mysql-router 8.0.23 active 1 mysql-router charmstore stable 48 ubuntu Unit is ready
mysql-innodb-cluster 8.0.23 active 3 mysql-innodb-cluster charmstore stable 74 ubuntu Unit is ready: Mode: R/W, Cluster is ONLINE and can tolerate up to ONE failure.
ncc-mysql-router 8.0.23 active 1 mysql-router charmstore stable 48 ubuntu Unit is ready
neutron-api 18.0.0 active 1 neutron-api charmstore stable 471 ubuntu Unit is ready
neutron-api-mysql-router 8.0.23 active 1 mysql-router charmstore stable 48 ubuntu Unit is ready
neutron-api-plugin-ovn 18.0.0 active 1 neutron-api-plugin-ovn charmstore stable 40 ubuntu Unit is ready
nova-cloud-controller 23.0.0 active 1 nova-cloud-controller charmstore stable 521 ubuntu Unit is ready
nova-compute 23.0.0 active 3 nova-compute charmstore stable 539 ubuntu Unit is ready
ntp 3.5 active 4 ntp charmstore stable 45 ubuntu chrony: Ready
openstack-dashboard 19.2.0 active 1 openstack-dashboard charmstore stable 505 ubuntu Unit is ready
ovn-central 20.12.0 active 3 ovn-central charmstore stable 51 ubuntu Unit is ready (leader: ovnnb_db, ovnsb_db northd: active)
ovn-chassis 20.12.0 active 3 ovn-chassis charmstore stable 63 ubuntu Unit is ready
placement 5.0.0 active 1 placement charmstore stable 47 ubuntu Unit is ready
placement-mysql-router 8.0.23 active 1 mysql-router charmstore stable 48 ubuntu Unit is ready
rabbitmq-server 3.8.2 active 1 rabbitmq-server charmstore stable 406 ubuntu Unit is ready
vault 1.5.4 active 1 vault charmstore stable 141 ubuntu Unit is ready (active: true, mlock: disabled)
vault-mysql-router 8.0.23 active 1 mysql-router charmstore stable 48 ubuntu Unit is ready
App Version Status Scale Charm Store Channel Rev OS Message
ceph-mon 16.2.6 active 3 ceph-mon charmstore stable 482 ubuntu Unit is ready and clustered
ceph-osd 16.2.6 active 4 ceph-osd charmstore stable 502 ubuntu Unit is ready (1 OSD)
ceph-radosgw 16.2.6 active 1 ceph-radosgw charmstore stable 412 ubuntu Unit is ready
cinder 19.0.0 active 1 cinder charmstore stable 448 ubuntu Unit is ready
cinder-ceph 19.0.0 active 1 cinder-ceph charmstore stable 360 ubuntu Unit is ready
cinder-mysql-router 8.0.26 active 1 mysql-router charmstore stable 60 ubuntu Unit is ready
dashboard-mysql-router 8.0.26 active 1 mysql-router charmstore stable 60 ubuntu Unit is ready
glance 23.0.0 active 1 glance charmstore stable 473 ubuntu Unit is ready
glance-mysql-router 8.0.26 active 1 mysql-router charmstore stable 60 ubuntu Unit is ready
keystone 20.0.0 active 1 keystone charmstore stable 565 ubuntu Application Ready
keystone-mysql-router 8.0.26 active 1 mysql-router charmstore stable 60 ubuntu Unit is ready
mysql-innodb-cluster 8.0.26 active 3 mysql-innodb-cluster charmstore stable 88 ubuntu Unit is ready: Mode: R/O, Cluster is ONLINE and can tolerate up to ONE failure.
ncc-mysql-router 8.0.26 active 1 mysql-router charmstore stable 60 ubuntu Unit is ready
neutron-api 19.0.0 active 1 neutron-api charmstore stable 485 ubuntu Unit is ready
neutron-api-mysql-router 8.0.26 active 1 mysql-router charmstore stable 60 ubuntu Unit is ready
neutron-api-plugin-ovn 19.0.0 active 1 neutron-api-plugin-ovn charmstore stable 46 ubuntu Unit is ready
nova-cloud-controller 24.0.0 active 1 nova-cloud-controller charmstore stable 552 ubuntu Unit is ready
nova-compute 24.0.0 active 3 nova-compute charmstore stable 577 ubuntu Unit is ready
ntp 3.5 active 4 ntp charmhub stable 47 ubuntu chrony: Ready
openstack-dashboard 20.1.0 active 1 openstack-dashboard charmstore stable 513 ubuntu Unit is ready
ovn-central 21.09.0~git2... active 3 ovn-central charmstore stable 68 ubuntu Unit is ready
ovn-chassis 21.09.0~git2... active 3 ovn-chassis charmstore stable 86 ubuntu Unit is ready
placement 6.0.0 active 1 placement charmstore stable 64 ubuntu Unit is ready
placement-mysql-router 8.0.26 active 1 mysql-router charmstore stable 60 ubuntu Unit is ready
rabbitmq-server 3.8.2 active 1 rabbitmq-server charmstore stable 440 ubuntu Unit is ready
vault 1.5.9 active 1 vault charmstore stable 153 ubuntu Unit is ready (active: true, mlock: disabled)
vault-mysql-router 8.0.26 active 1 mysql-router charmstore stable 60 ubuntu Unit is ready
Unit Workload Agent Machine Public address Ports Message
ceph-mon/0 active idle 0/lxd/3 10.0.0.170 Unit is ready and clustered
ceph-mon/1 active idle 1/lxd/3 10.0.0.169 Unit is ready and clustered
ceph-mon/2* active idle 2/lxd/4 10.0.0.168 Unit is ready and clustered
ceph-osd/0* active idle 0 10.0.0.150 Unit is ready (2 OSD)
ntp/3 active idle 10.0.0.150 123/udp chrony: Ready
ceph-osd/1 active idle 1 10.0.0.151 Unit is ready (2 OSD)
ntp/2 active idle 10.0.0.151 123/udp chrony: Ready
ceph-osd/2 active idle 2 10.0.0.152 Unit is ready (2 OSD)
ntp/1 active idle 10.0.0.152 123/udp chrony: Ready
ceph-osd/3 active idle 3 10.0.0.153 Unit is ready (2 OSD)
ntp/0* active idle 10.0.0.153 123/udp chrony: Ready
ceph-radosgw/0* active idle 0/lxd/4 10.0.0.172 80/tcp Unit is ready
cinder/0* active idle 1/lxd/4 10.0.0.171 8776/tcp Unit is ready
cinder-ceph/0* active idle 10.0.0.171 Unit is ready
cinder-mysql-router/0* active idle 10.0.0.171 Unit is ready
glance/0* active idle 3/lxd/3 10.0.0.167 9292/tcp Unit is ready
glance-mysql-router/0* active idle 10.0.0.167 Unit is ready
keystone/0* active idle 0/lxd/2 10.0.0.162 5000/tcp Unit is ready
keystone-mysql-router/0* active idle 10.0.0.162 Unit is ready
mysql-innodb-cluster/0* active idle 0/lxd/0 10.0.0.154 Unit is ready: Mode: R/W, Cluster is ONLINE and can tolerate up to ONE failure.
mysql-innodb-cluster/1 active idle 1/lxd/0 10.0.0.155 Unit is ready: Mode: R/O, Cluster is ONLINE and can tolerate up to ONE failure.
mysql-innodb-cluster/2 active idle 2/lxd/0 10.0.0.156 Unit is ready: Mode: R/O, Cluster is ONLINE and can tolerate up to ONE failure.
neutron-api/0* active idle 1/lxd/2 10.0.0.161 9696/tcp Unit is ready
neutron-api-mysql-router/0* active idle 10.0.0.161 Unit is ready
neutron-api-plugin-ovn/0* active idle 10.0.0.161 Unit is ready
nova-cloud-controller/0* active idle 3/lxd/1 10.0.0.164 8774/tcp,8775/tcp Unit is ready
ncc-mysql-router/0* active idle 10.0.0.164 Unit is ready
nova-compute/0* active idle 1 10.0.0.151 Unit is ready
ovn-chassis/2 active idle 10.0.0.151 Unit is ready
nova-compute/1 active idle 2 10.0.0.152 Unit is ready
ovn-chassis/0* active idle 10.0.0.152 Unit is ready
nova-compute/2 active idle 3 10.0.0.153 Unit is ready
ovn-chassis/1 active idle 10.0.0.153 Unit is ready
openstack-dashboard/0* active idle 2/lxd/3 10.0.0.166 80/tcp,443/tcp Unit is ready
dashboard-mysql-router/0* active idle 10.0.0.166 Unit is ready
ovn-central/0* active idle 0/lxd/1 10.0.0.158 6641/tcp,6642/tcp Unit is ready (leader: ovnnb_db, ovnsb_db northd: active)
ovn-central/1 active idle 1/lxd/1 10.0.0.159 6641/tcp,6642/tcp Unit is ready
ovn-central/2 active idle 2/lxd/1 10.0.0.160 6641/tcp,6642/tcp Unit is ready
placement/0* active idle 3/lxd/2 10.0.0.165 8778/tcp Unit is ready
placement-mysql-router/0* active idle 10.0.0.165 Unit is ready
rabbitmq-server/0* active idle 2/lxd/2 10.0.0.163 5672/tcp Unit is ready
vault/0* active idle 3/lxd/0 10.0.0.157 8200/tcp Unit is ready (active: true, mlock: disabled)
vault-mysql-router/0* active idle 10.0.0.157 Unit is ready
ceph-mon/0* active idle 0/lxd/3 10.0.0.176 Unit is ready and clustered
ceph-mon/1 active idle 1/lxd/3 10.0.0.177 Unit is ready and clustered
ceph-mon/2 active idle 2/lxd/4 10.0.0.178 Unit is ready and clustered
ceph-osd/0 active idle 0 10.0.0.158 Unit is ready (1 OSD)
ntp/1 active idle 10.0.0.158 123/udp chrony: Ready
ceph-osd/1* active idle 1 10.0.0.159 Unit is ready (1 OSD)
ntp/2 active idle 10.0.0.159 123/udp chrony: Ready
ceph-osd/2 active idle 2 10.0.0.160 Unit is ready (1 OSD)
ntp/0* active idle 10.0.0.160 123/udp chrony: Ready
ceph-osd/3 active idle 3 10.0.0.161 Unit is ready (1 OSD)
ntp/3 active idle 10.0.0.161 123/udp chrony: Ready
ceph-radosgw/0* active idle 0/lxd/4 10.0.0.180 80/tcp Unit is ready
cinder/0* active idle 1/lxd/4 10.0.0.179 8776/tcp Unit is ready
cinder-ceph/0* active idle 10.0.0.179 Unit is ready
cinder-mysql-router/0* active idle 10.0.0.179 Unit is ready
glance/0* active idle 3/lxd/3 10.0.0.175 9292/tcp Unit is ready
glance-mysql-router/0* active idle 10.0.0.175 Unit is ready
keystone/0* active idle 0/lxd/2 10.0.0.170 5000/tcp Unit is ready
keystone-mysql-router/0* active idle 10.0.0.170 Unit is ready
mysql-innodb-cluster/0* active idle 0/lxd/0 10.0.0.162 Unit is ready: Mode: R/O, Cluster is ONLINE and can tolerate up to ONE failure.
mysql-innodb-cluster/1 active idle 1/lxd/0 10.0.0.163 Unit is ready: Mode: R/W, Cluster is ONLINE and can tolerate up to ONE failure.
mysql-innodb-cluster/2 active idle 2/lxd/0 10.0.0.165 Unit is ready: Mode: R/O, Cluster is ONLINE and can tolerate up to ONE failure.
neutron-api/0* active idle 1/lxd/2 10.0.0.169 9696/tcp Unit is ready
neutron-api-mysql-router/0* active idle 10.0.0.169 Unit is ready
neutron-api-plugin-ovn/0* active idle 10.0.0.169 Unit is ready
nova-cloud-controller/0* active idle 3/lxd/1 10.0.0.172 8774/tcp,8775/tcp Unit is ready
ncc-mysql-router/0* active idle 10.0.0.172 Unit is ready
nova-compute/0* active idle 1 10.0.0.159 Unit is ready
ovn-chassis/3 active idle 10.0.0.159 Unit is ready
nova-compute/1 active idle 2 10.0.0.160 Unit is ready
ovn-chassis/2 active idle 10.0.0.160 Unit is ready
nova-compute/2 active idle 3 10.0.0.161 Unit is ready
ovn-chassis/1* active idle 10.0.0.161 Unit is ready
openstack-dashboard/0* active idle 2/lxd/3 10.0.0.174 80/tcp,443/tcp Unit is ready
dashboard-mysql-router/0* active idle 10.0.0.174 Unit is ready
ovn-central/0 active idle 0/lxd/1 10.0.0.166 6641/tcp,6642/tcp Unit is ready
ovn-central/1 active idle 1/lxd/1 10.0.0.167 6641/tcp,6642/tcp Unit is ready
ovn-central/2* active idle 2/lxd/1 10.0.0.168 6641/tcp,6642/tcp Unit is ready (leader: ovnnb_db, ovnsb_db northd: active)
placement/0* active idle 3/lxd/2 10.0.0.173 8778/tcp Unit is ready
placement-mysql-router/0* active idle 10.0.0.173 Unit is ready
rabbitmq-server/0* active idle 2/lxd/2 10.0.0.171 5672/tcp Unit is ready
vault/0* active idle 3/lxd/0 10.0.0.164 8200/tcp Unit is ready (active: true, mlock: disabled)
vault-mysql-router/0* active idle 10.0.0.164 Unit is ready
Machine State DNS Inst id Series AZ Message
0 started 10.0.0.150 node4 focal default Deployed
0/lxd/0 started 10.0.0.154 juju-3d942c-0-lxd-0 focal default Container started
0/lxd/1 started 10.0.0.158 juju-3d942c-0-lxd-1 focal default Container started
0/lxd/2 started 10.0.0.162 juju-3d942c-0-lxd-2 focal default Container started
0/lxd/3 started 10.0.0.170 juju-3d942c-0-lxd-3 focal default Container started
0/lxd/4 started 10.0.0.172 juju-3d942c-0-lxd-4 focal default Container started
1 started 10.0.0.151 node1 focal default Deployed
1/lxd/0 started 10.0.0.155 juju-3d942c-1-lxd-0 focal default Container started
1/lxd/1 started 10.0.0.159 juju-3d942c-1-lxd-1 focal default Container started
1/lxd/2 started 10.0.0.161 juju-3d942c-1-lxd-2 focal default Container started
1/lxd/3 started 10.0.0.169 juju-3d942c-1-lxd-3 focal default Container started
1/lxd/4 started 10.0.0.171 juju-3d942c-1-lxd-4 focal default Container started
2 started 10.0.0.152 node2 focal default Deployed
2/lxd/0 started 10.0.0.156 juju-3d942c-2-lxd-0 focal default Container started
2/lxd/1 started 10.0.0.160 juju-3d942c-2-lxd-1 focal default Container started
2/lxd/2 started 10.0.0.163 juju-3d942c-2-lxd-2 focal default Container started
2/lxd/3 started 10.0.0.166 juju-3d942c-2-lxd-3 focal default Container started
2/lxd/4 started 10.0.0.168 juju-3d942c-2-lxd-4 focal default Container started
3 started 10.0.0.153 node3 focal default Deployed
3/lxd/0 started 10.0.0.157 juju-3d942c-3-lxd-0 focal default Container started
3/lxd/1 started 10.0.0.164 juju-3d942c-3-lxd-1 focal default Container started
3/lxd/2 started 10.0.0.165 juju-3d942c-3-lxd-2 focal default Container started
3/lxd/3 started 10.0.0.167 juju-3d942c-3-lxd-3 focal default Container started
0 started 10.0.0.158 node1 focal default Deployed
0/lxd/0 started 10.0.0.162 juju-c6e3fb-0-lxd-0 focal default Container started
0/lxd/1 started 10.0.0.166 juju-c6e3fb-0-lxd-1 focal default Container started
0/lxd/2 started 10.0.0.170 juju-c6e3fb-0-lxd-2 focal default Container started
0/lxd/3 started 10.0.0.176 juju-c6e3fb-0-lxd-3 focal default Container started
0/lxd/4 started 10.0.0.180 juju-c6e3fb-0-lxd-4 focal default Container started
1 started 10.0.0.159 node2 focal default Deployed
1/lxd/0 started 10.0.0.163 juju-c6e3fb-1-lxd-0 focal default Container started
1/lxd/1 started 10.0.0.167 juju-c6e3fb-1-lxd-1 focal default Container started
1/lxd/2 started 10.0.0.169 juju-c6e3fb-1-lxd-2 focal default Container started
1/lxd/3 started 10.0.0.177 juju-c6e3fb-1-lxd-3 focal default Container started
1/lxd/4 started 10.0.0.179 juju-c6e3fb-1-lxd-4 focal default Container started
2 started 10.0.0.160 node3 focal default Deployed
2/lxd/0 started 10.0.0.165 juju-c6e3fb-2-lxd-0 focal default Container started
2/lxd/1 started 10.0.0.168 juju-c6e3fb-2-lxd-1 focal default Container started
2/lxd/2 started 10.0.0.171 juju-c6e3fb-2-lxd-2 focal default Container started
2/lxd/3 started 10.0.0.174 juju-c6e3fb-2-lxd-3 focal default Container started
2/lxd/4 started 10.0.0.178 juju-c6e3fb-2-lxd-4 focal default Container started
3 started 10.0.0.161 node4 focal default Deployed
3/lxd/0 started 10.0.0.164 juju-c6e3fb-3-lxd-0 focal default Container started
3/lxd/1 started 10.0.0.172 juju-c6e3fb-3-lxd-1 focal default Container started
3/lxd/2 started 10.0.0.173 juju-c6e3fb-3-lxd-2 focal default Container started
3/lxd/3 started 10.0.0.175 juju-c6e3fb-3-lxd-3 focal default Container started
Relation provider Requirer Interface Type Message
ceph-mon:client cinder-ceph:ceph ceph-client regular


@@ -60,9 +60,9 @@ OpenStack release
do use this method).
As the :doc:`Overview <install-overview>` of the Installation section states,
OpenStack Wallaby will be deployed atop Ubuntu 20.04 LTS (Focal) cloud nodes.
In order to achieve this a cloud archive release of 'cloud:focal-wallaby' will
be used during the install of each OpenStack application. Note that some
OpenStack Xena will be deployed atop Ubuntu 20.04 LTS (Focal) cloud nodes. In
order to achieve this a cloud archive release of 'cloud:focal-xena' will be
used during the install of each OpenStack application. Note that some
applications are not part of the OpenStack project per se and therefore do not
apply (exceptionally, Ceph applications do use this method). Not using a more
recent OpenStack release in this way will result in an Ussuri deployment (i.e.
@@ -75,8 +75,7 @@ and how they are used when upgrading OpenStack.
.. important::
The chosen OpenStack release may impact the installation and configuration
instructions. **This guide assumes that OpenStack Wallaby is being
deployed.**
instructions. **This guide assumes that OpenStack Xena is being deployed.**
Installation progress
---------------------
@@ -125,13 +124,13 @@ The name of the block devices backing the OSDs is dependent upon the hardware
on the nodes. All possible devices across the nodes should be given as the
value for the ``osd-devices`` option (space-separated). Here, we'll be using
the same device on each cloud node: ``/dev/sdb``. File ``ceph-osd.yaml``
contains the configuration.
contains the configuration:
.. code-block:: yaml
ceph-osd:
osd-devices: /dev/sdb
source: cloud:focal-wallaby
source: cloud:focal-xena
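Since ``osd-devices`` takes all candidate devices as one space-separated string, composing it for heterogeneous nodes can be sketched as follows (the extra device names beyond ``/dev/sdb`` are assumptions for illustration):

```python
# Possible block devices across the cloud nodes (assumed names);
# the ceph-osd charm accepts them as one space-separated string.
candidate_devices = ["/dev/sdb", "/dev/sdc", "/dev/vdb"]
osd_devices = " ".join(candidate_devices)

# The resulting charm configuration, as a Python mapping.
config = {"ceph-osd": {"osd-devices": osd_devices, "source": "cloud:focal-xena"}}
print(config["ceph-osd"]["osd-devices"])  # /dev/sdb /dev/sdc /dev/vdb
```

Devices that do not exist on a given node are simply ignored by the charm on that node, which is why listing the union of devices is safe.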
To deploy the application we'll make use of the 'compute' tag that we placed on
each of these nodes on the :doc:`Install MAAS <install-maas>` page:
@@ -166,7 +165,7 @@ charm. We'll then scale-out the application to two other machines. File
enable-live-migration: true
enable-resize: true
migration-auth-type: ssh
openstack-origin: cloud:focal-wallaby
openstack-origin: cloud:focal-xena
The initial node must be targeted by machine since there are no more free Juju
machines (MAAS nodes) available. This means we're placing multiple services on
@@ -236,18 +235,18 @@ status` should look similar to this:
.. code-block:: console
Unit Workload Agent Machine Public address Ports Message
ceph-osd/0* blocked idle 0 10.0.0.150 Missing relation: monitor
ceph-osd/1 blocked idle 1 10.0.0.151 Missing relation: monitor
ceph-osd/2 blocked idle 2 10.0.0.152 Missing relation: monitor
ceph-osd/3 blocked idle 3 10.0.0.153 Missing relation: monitor
mysql-innodb-cluster/0* active idle 0/lxd/0 10.0.0.154 Unit is ready: Mode: R/W, Cluster is ONLINE and can tolerate up to ONE failure.
mysql-innodb-cluster/1 active idle 1/lxd/0 10.0.0.155 Unit is ready: Mode: R/O, Cluster is ONLINE and can tolerate up to ONE failure.
mysql-innodb-cluster/2 active idle 2/lxd/0 10.0.0.156 Unit is ready: Mode: R/O, Cluster is ONLINE and can tolerate up to ONE failure.
nova-compute/0* blocked idle 1 10.0.0.151 Missing relations: messaging, image
nova-compute/1 blocked idle 2 10.0.0.152 Missing relations: messaging, image
nova-compute/2 blocked idle 3 10.0.0.153 Missing relations: messaging, image
vault/0* active idle 3/lxd/0 10.0.0.157 8200/tcp Unit is ready (active: true, mlock: disabled)
vault-mysql-router/0* active idle 10.0.0.157 Unit is ready
ceph-osd/0 blocked idle 0 10.0.0.158 Missing relation: monitor
ceph-osd/1* blocked idle 1 10.0.0.159 Missing relation: monitor
ceph-osd/2 blocked idle 2 10.0.0.160 Missing relation: monitor
ceph-osd/3 blocked idle 3 10.0.0.161 Missing relation: monitor
mysql-innodb-cluster/0* active idle 0/lxd/0 10.0.0.162 Unit is ready: Mode: R/W, Cluster is ONLINE and can tolerate up to ONE failure.
mysql-innodb-cluster/1 active idle 1/lxd/0 10.0.0.163 Unit is ready: Mode: R/O, Cluster is ONLINE and can tolerate up to ONE failure.
mysql-innodb-cluster/2 active idle 2/lxd/0 10.0.0.165 Unit is ready: Mode: R/O, Cluster is ONLINE and can tolerate up to ONE failure.
nova-compute/0* blocked idle 1 10.0.0.159 Missing relations: messaging, image
nova-compute/1 blocked idle 2 10.0.0.160 Missing relations: messaging, image
nova-compute/2 blocked idle 3 10.0.0.161 Missing relations: image, messaging
vault/0* active idle 3/lxd/0 10.0.0.164 8200/tcp Unit is ready (active: true, mlock: disabled)
vault-mysql-router/0* active idle 10.0.0.164 Unit is ready
Cloud applications are TLS-enabled via the ``vault:certificates`` relation.
Below we start with the cloud database. Although the latter has a self-signed
@@ -280,9 +279,9 @@ File ``neutron.yaml`` contains the configuration necessary for three of them:
neutron-security-groups: true
flat-network-providers: physnet1
worker-multiplier: 0.25
openstack-origin: cloud:focal-wallaby
openstack-origin: cloud:focal-xena
ovn-central:
source: cloud:focal-wallaby
source: cloud:focal-xena
The ``bridge-interface-mappings`` setting impacts the OVN Chassis and refers to
a mapping of OVS bridge to network interface. As described in the :ref:`Create
@@ -348,7 +347,7 @@ The keystone application will be containerised on machine 0. File
keystone:
worker-multiplier: 0.25
openstack-origin: cloud:focal-wallaby
openstack-origin: cloud:focal-xena
To deploy:
@@ -393,31 +392,35 @@ look similar to this:
.. code-block:: console
Unit Workload Agent Machine Public address Ports Message
ceph-osd/0* blocked idle 0 10.0.0.150 Missing relation: monitor
ceph-osd/1 blocked idle 1 10.0.0.151 Missing relation: monitor
ceph-osd/2 blocked idle 2 10.0.0.152 Missing relation: monitor
ceph-osd/3 blocked idle 3 10.0.0.153 Missing relation: monitor
keystone/0* active idle 0/lxd/2 10.0.0.162 5000/tcp Unit is ready
keystone-mysql-router/0* active idle 10.0.0.162 Unit is ready
mysql-innodb-cluster/0* active idle 0/lxd/0 10.0.0.154 Unit is ready: Mode: R/W, Cluster is ONLINE and can tolerate up to ONE failure.
mysql-innodb-cluster/1 active idle 1/lxd/0 10.0.0.155 Unit is ready: Mode: R/O, Cluster is ONLINE and can tolerate up to ONE failure.
mysql-innodb-cluster/2 active idle 2/lxd/0 10.0.0.156 Unit is ready: Mode: R/O, Cluster is ONLINE and can tolerate up to ONE failure.
neutron-api/0* active idle 1/lxd/2 10.0.0.161 9696/tcp Unit is ready
neutron-api-mysql-router/0* active idle 10.0.0.161 Unit is ready
neutron-api-plugin-ovn/0* active idle 10.0.0.161 Unit is ready
nova-compute/0* blocked idle 1 10.0.0.151 Missing relations: image
ovn-chassis/2 active idle 10.0.0.151 Unit is ready
nova-compute/1 blocked idle 2 10.0.0.152 Missing relations: image
ovn-chassis/0* active idle 10.0.0.152 Unit is ready
nova-compute/2 blocked idle 3 10.0.0.153 Missing relations: image
ovn-chassis/1 active idle 10.0.0.153 Unit is ready
ovn-central/0* active idle 0/lxd/1 10.0.0.158 6641/tcp,6642/tcp Unit is ready (leader: ovnnb_db, ovnsb_db northd: active)
ovn-central/1 active idle 1/lxd/1 10.0.0.159 6641/tcp,6642/tcp Unit is ready
ovn-central/2 active idle 2/lxd/1 10.0.0.160 6641/tcp,6642/tcp Unit is ready
rabbitmq-server/0* active idle 2/lxd/2 10.0.0.163 5672/tcp Unit is ready
vault/0* active idle 3/lxd/0 10.0.0.157 8200/tcp Unit is ready (active: true, mlock: disabled)
vault-mysql-router/0* active idle 10.0.0.157 Unit is ready
ceph-osd/0 blocked idle 0 10.0.0.158 Missing relation: monitor
ceph-osd/1* blocked idle 1 10.0.0.159 Missing relation: monitor
ceph-osd/2 blocked idle 2 10.0.0.160 Missing relation: monitor
ceph-osd/3 blocked idle 3 10.0.0.161 Missing relation: monitor
keystone/0* active idle 0/lxd/2 10.0.0.170 5000/tcp Unit is ready
keystone-mysql-router/0* active idle 10.0.0.170 Unit is ready
mysql-innodb-cluster/0* active idle 0/lxd/0 10.0.0.162 Unit is ready: Mode: R/O, Cluster is ONLINE and can tolerate up to ONE failure.
mysql-innodb-cluster/1 active idle 1/lxd/0 10.0.0.163 Unit is ready: Mode: R/W, Cluster is ONLINE and can tolerate up to ONE failure.
mysql-innodb-cluster/2 active idle 2/lxd/0 10.0.0.165 Unit is ready: Mode: R/O, Cluster is ONLINE and can tolerate up to ONE failure.
neutron-api/0* active idle 1/lxd/2 10.0.0.169 9696/tcp Unit is ready
neutron-api-mysql-router/0* active idle 10.0.0.169 Unit is ready
neutron-api-plugin-ovn/0* active idle 10.0.0.169 Unit is ready
nova-compute/0* blocked idle 1 10.0.0.159 Missing relations: image
ovn-chassis/3 active idle 10.0.0.159 Unit is ready
nova-compute/1 blocked idle 2 10.0.0.160 Missing relations: image
ovn-chassis/2 active idle 10.0.0.160 Unit is ready
nova-compute/2 blocked idle 3 10.0.0.161 Missing relations: image
ovn-chassis/1* active idle 10.0.0.161 Unit is ready
ovn-central/0 active idle 0/lxd/1 10.0.0.166 6641/tcp,6642/tcp Unit is ready
ovn-central/1 active idle 1/lxd/1 10.0.0.167 6641/tcp,6642/tcp Unit is ready
ovn-central/2* active idle 2/lxd/1 10.0.0.168 6641/tcp,6642/tcp Unit is ready (leader: ovnnb_db, ovnsb_db northd: active)
rabbitmq-server/0* active idle 2/lxd/2 10.0.0.171 5672/tcp Unit is ready
vault/0* active idle 3/lxd/0 10.0.0.164 8200/tcp Unit is ready (active: true, mlock: disabled)
vault-mysql-router/0* active idle 10.0.0.164 Unit is ready
Nova cloud controller
~~~~~~~~~~~~~~~~~~~~~
@@ -432,7 +435,7 @@ the configuration:
nova-cloud-controller:
network-manager: Neutron
worker-multiplier: 0.25
openstack-origin: cloud:focal-wallaby
openstack-origin: cloud:focal-xena
To deploy:
@@ -474,7 +477,7 @@ The placement application will be containerised on machine 3 with the
placement:
worker-multiplier: 0.25
openstack-origin: cloud:focal-wallaby
openstack-origin: cloud:focal-xena
To deploy:
@@ -506,7 +509,7 @@ The openstack-dashboard application (Horizon) will be containerised on machine
.. code-block:: none
juju deploy --to lxd:2 --config openstack-origin=cloud:focal-wallaby openstack-dashboard
juju deploy --to lxd:2 --config openstack-origin=cloud:focal-xena openstack-dashboard
Join openstack-dashboard to the cloud database:
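The join commands are not shown in this hunk. Following the mysql-router pattern used for the other services (the ``dashboard-mysql-router`` name matches the later status output, but the exact relation endpoints here are an assumption):

.. code-block:: none

   # Hypothetical sketch: front the dashboard database access with mysql-router
   juju deploy mysql-router dashboard-mysql-router
   juju add-relation dashboard-mysql-router:db-router mysql-innodb-cluster:db-router
   juju add-relation dashboard-mysql-router:shared-db openstack-dashboard:shared-db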
@ -539,7 +542,7 @@ charm. File ``glance.yaml`` contains the configuration:
glance:
worker-multiplier: 0.25
openstack-origin: cloud:focal-wallaby
openstack-origin: cloud:focal-xena
To deploy:
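The deploy command is cut by the diff context. A sketch under the assumption that the configuration is saved as ``glance.yaml`` and that machine 3 hosts the container (the status output below shows glance on ``3/lxd/3``):

.. code-block:: none

   # Hypothetical example: containerise glance on machine 3
   juju deploy --to lxd:3 --config glance.yaml glance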
@ -570,38 +573,40 @@ look similar to this:
.. code-block:: console
Unit Workload Agent Machine Public address Ports Message
ceph-osd/0* blocked idle 0 10.0.0.150 Missing relation: monitor
ceph-osd/1 blocked idle 1 10.0.0.151 Missing relation: monitor
ceph-osd/2 blocked idle 2 10.0.0.152 Missing relation: monitor
ceph-osd/3 blocked idle 3 10.0.0.153 Missing relation: monitor
glance/0* active idle 3/lxd/3 10.0.0.167 9292/tcp Unit is ready
glance-mysql-router/0* active idle 10.0.0.167 Unit is ready
keystone/0* active idle 0/lxd/2 10.0.0.162 5000/tcp Unit is ready
keystone-mysql-router/0* active idle 10.0.0.162 Unit is ready
mysql-innodb-cluster/0* active idle 0/lxd/0 10.0.0.154 Unit is ready: Mode: R/W, Cluster is ONLINE and can tolerate up to ONE failure.
mysql-innodb-cluster/1 active idle 1/lxd/0 10.0.0.155 Unit is ready: Mode: R/O, Cluster is ONLINE and can tolerate up to ONE failure.
mysql-innodb-cluster/2 active idle 2/lxd/0 10.0.0.156 Unit is ready: Mode: R/O, Cluster is ONLINE and can tolerate up to ONE failure.
neutron-api/0* active idle 1/lxd/2 10.0.0.161 9696/tcp Unit is ready
neutron-api-mysql-router/0* active idle 10.0.0.161 Unit is ready
neutron-api-plugin-ovn/0* active idle 10.0.0.161 Unit is ready
nova-cloud-controller/0* active idle 3/lxd/1 10.0.0.164 8774/tcp,8775/tcp Unit is ready
ncc-mysql-router/0* active idle 10.0.0.164 Unit is ready
nova-compute/0* active idle 1 10.0.0.151 Unit is ready
ovn-chassis/2 active idle 10.0.0.151 Unit is ready
nova-compute/1 active idle 2 10.0.0.152 Unit is ready
ovn-chassis/0* active idle 10.0.0.152 Unit is ready
nova-compute/2 active idle 3 10.0.0.153 Unit is ready
ovn-chassis/1 active idle 10.0.0.153 Unit is ready
openstack-dashboard/0* active idle 2/lxd/3 10.0.0.166 80/tcp,443/tcp Unit is ready
dashboard-mysql-router/0* active idle 10.0.0.166 Unit is ready
ovn-central/0* active idle 0/lxd/1 10.0.0.158 6641/tcp,6642/tcp Unit is ready (leader: ovnnb_db, ovnsb_db northd: active)
ovn-central/1 active idle 1/lxd/1 10.0.0.159 6641/tcp,6642/tcp Unit is ready
ovn-central/2 active idle 2/lxd/1 10.0.0.160 6641/tcp,6642/tcp Unit is ready
placement/0* active idle 3/lxd/2 10.0.0.165 8778/tcp Unit is ready
placement-mysql-router/0* active idle 10.0.0.165 Unit is ready
rabbitmq-server/0* active idle 2/lxd/2 10.0.0.163 5672/tcp Unit is ready
vault/0* active idle 3/lxd/0 10.0.0.157 8200/tcp Unit is ready (active: true, mlock: disabled)
vault-mysql-router/0* active idle 10.0.0.157 Unit is ready
ceph-osd/0 blocked idle 0 10.0.0.158 Missing relation: monitor
ceph-osd/1* blocked idle 1 10.0.0.159 Missing relation: monitor
ceph-osd/2 blocked idle 2 10.0.0.160 Missing relation: monitor
ceph-osd/3 blocked idle 3 10.0.0.161 Missing relation: monitor
glance/0* active idle 3/lxd/3 10.0.0.175 9292/tcp Unit is ready
glance-mysql-router/0* active idle 10.0.0.175 Unit is ready
keystone/0* active idle 0/lxd/2 10.0.0.170 5000/tcp Unit is ready
keystone-mysql-router/0* active idle 10.0.0.170 Unit is ready
mysql-innodb-cluster/0* active idle 0/lxd/0 10.0.0.162 Unit is ready: Mode: R/O, Cluster is ONLINE and can tolerate up to ONE failure.
mysql-innodb-cluster/1 active idle 1/lxd/0 10.0.0.163 Unit is ready: Mode: R/W, Cluster is ONLINE and can tolerate up to ONE failure.
mysql-innodb-cluster/2 active idle 2/lxd/0 10.0.0.165 Unit is ready: Mode: R/O, Cluster is ONLINE and can tolerate up to ONE failure.
neutron-api/0* active idle 1/lxd/2 10.0.0.169 9696/tcp Unit is ready
neutron-api-mysql-router/0* active idle 10.0.0.169 Unit is ready
neutron-api-plugin-ovn/0* active idle 10.0.0.169 Unit is ready
nova-cloud-controller/0* active idle 3/lxd/1 10.0.0.172 8774/tcp,8775/tcp Unit is ready
ncc-mysql-router/0* active idle 10.0.0.172 Unit is ready
nova-compute/0* active idle 1 10.0.0.159 Unit is ready
ovn-chassis/3 active idle 10.0.0.159 Unit is ready
nova-compute/1 active idle 2 10.0.0.160 Unit is ready
ovn-chassis/2 active idle 10.0.0.160 Unit is ready
nova-compute/2 active idle 3 10.0.0.161 Unit is ready
ovn-chassis/1* active idle 10.0.0.161 Unit is ready
openstack-dashboard/0* active idle 2/lxd/3 10.0.0.174 80/tcp,443/tcp Unit is ready
dashboard-mysql-router/0* active idle 10.0.0.174 Unit is ready
ovn-central/0 active idle 0/lxd/1 10.0.0.166 6641/tcp,6642/tcp Unit is ready
ovn-central/1 active idle 1/lxd/1 10.0.0.167 6641/tcp,6642/tcp Unit is ready
ovn-central/2* active idle 2/lxd/1 10.0.0.168 6641/tcp,6642/tcp Unit is ready (leader: ovnnb_db, ovnsb_db northd: active)
placement/0* active idle 3/lxd/2 10.0.0.173 8778/tcp Unit is ready
placement-mysql-router/0* active idle 10.0.0.173 Unit is ready
rabbitmq-server/0* active idle 2/lxd/2 10.0.0.171 5672/tcp Unit is ready
vault/0* active idle 3/lxd/0 10.0.0.164 8200/tcp Unit is ready (active: true, mlock: disabled)
vault-mysql-router/0* active idle 10.0.0.164 Unit is ready
Ceph monitor
~~~~~~~~~~~~
@ -612,9 +617,9 @@ The ceph-mon application will be containerised on machines 0, 1, and 2 with the
.. code-block:: yaml
ceph-mon:
expected-osd-count: 3
expected-osd-count: 4
monitor-count: 3
source: cloud:focal-wallaby
source: cloud:focal-xena
.. code-block:: none
@ -648,7 +653,7 @@ charm. File ``cinder.yaml`` contains the configuration:
block-device: None
glance-api-version: 2
worker-multiplier: 0.25
openstack-origin: cloud:focal-wallaby
openstack-origin: cloud:focal-xena
To deploy:
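The deploy command for cinder is elided here. As a sketch only (the target machine is not visible in this chunk, so it is left as a placeholder; ``cinder.yaml`` is the file named by the surrounding hunk):

.. code-block:: none

   # Hypothetical example: containerise cinder using the cinder.yaml
   # configuration; substitute the intended machine number
   juju deploy --to lxd:<machine> --config cinder.yaml cinder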
@ -705,7 +710,7 @@ The ceph-radosgw application will be containerised on machine 0 with the
.. code-block:: none
juju deploy --to lxd:0 --config source=cloud:focal-wallaby ceph-radosgw
juju deploy --to lxd:0 --config source=cloud:focal-xena ceph-radosgw
A single relation is needed:
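The relation command is outside this hunk. The likely pairing, given the ceph-mon application deployed earlier in the guide (the endpoint names are an assumption, not confirmed by this diff):

.. code-block:: none

   # Hypothetical sketch: relate the RADOS Gateway to the Ceph monitors
   juju add-relation ceph-radosgw:mon ceph-mon:radosgw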


@ -7,15 +7,15 @@ multi-node OpenStack cloud with `MAAS`_, `Juju`_, and `OpenStack Charms`_. For
easy adoption the cloud will be minimal. Nevertheless, it will be capable of
both performing some real work and scaling to fit more ambitious projects. High
availability will not be implemented beyond natively HA applications (Ceph,
MySQL8, OVN, Swift, and RabbitMQ).
MySQL, OVN, Swift, and RabbitMQ).
The software versions used in this guide are as follows:
* **Ubuntu 20.04 LTS (Focal)** for the MAAS server, Juju client, Juju
controller, and all cloud nodes (including containers)
* **MAAS 2.9.2**
* **Juju 2.9.0**
* **OpenStack Wallaby**
* **MAAS 3.0.0**
* **Juju 2.9.15**
* **OpenStack Xena**
Proceed to the :doc:`Install MAAS <install-maas>` page to begin your
installation journey. Hardware requirements are also listed there.

(Two new binary screenshots added for the install-maas page, 34 KiB and 63 KiB; images not shown in this diff.)