Refresh install openstack page to Wallaby

Perform a manual install and update 'juju status'
output.

Update software versions:

* OpenStack Wallaby
* Juju 2.9.0
* MAAS 2.9.2

Add 'worker-multiplier', 'expected-osd-count', and
'monitor-count' options to the appropriate charms
as per the openstack-base bundle
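For context, these charm options take roughly the following shape in a config
file passed to :command:`juju deploy`. This is an illustrative sketch only —
the values loosely follow the openstack-base bundle and are not necessarily
the exact ones introduced by this commit:

```yaml
# Hypothetical config fragments illustrating the options named above.
ceph-mon:
  monitor-count: 3         # number of ceph-mon units expected in the cluster
  expected-osd-count: 3    # delay bootstrap until this many OSDs have joined
neutron-api:
  worker-multiplier: 0.25  # scale API worker processes to 0.25 x CPU cores
```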

Reword Dashboard section as per recent openstack-base
bundle commit

Add source option to ceph-radosgw deployment
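The ceph-radosgw change amounts to supplying a 'source' option at deploy time.
A sketch of such a config fragment — the file name is hypothetical, and the
release value mirrors the cloud archive used for the other Ceph applications
in this guide:

```yaml
# Hypothetical ceph-radosgw.yaml fragment.
ceph-radosgw:
  source: cloud:focal-wallaby
```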

Minor rewording

Change-Id: I6314e113a4d81ee7990650a3b3ff675a57d41e72
Author: Peter Matulis
Date: 2021-05-02 21:56:29 -04:00
parent 8eabfca257
commit 3b73c33af9
4 changed files with 246 additions and 213 deletions


@@ -108,7 +108,7 @@ the environment. It should now look very similar to this:

 .. code-block:: none

 Model      Controller       Cloud/Region    Version  SLA          Timestamp
-openstack  maas-controller  mymaas/default  2.8.7    unsupported  04:28:49Z
+openstack  maas-controller  mymaas/default  2.9.0    unsupported  01:51:00Z

 Model "admin/openstack" is empty


@@ -12,107 +12,107 @@ installed from the instructions given on the :doc:`Install OpenStack

 .. code-block:: console

 Model      Controller       Cloud/Region      Version  SLA          Timestamp
-openstack  maas-controller  mymaas/default    2.8.7    unsupported  01:12:49Z
+openstack  maas-one         maas-one/default  2.9.0    unsupported  01:35:20Z

-App  Version  Status  Scale  Charm  Store  Rev  OS  Notes
+App  Version  Status  Scale  Charm  Store  Channel  Rev  OS  Message
-ceph-mon  15.2.5  active  3  ceph-mon  jujucharms  50  ubuntu
+ceph-mon  16.2.0  active  3  ceph-mon  charmstore  stable  464  ubuntu  Unit is ready and clustered
-ceph-osd  15.2.5  active  4  ceph-osd  jujucharms  306  ubuntu
+ceph-osd  16.2.0  active  4  ceph-osd  charmstore  stable  489  ubuntu  Unit is ready (2 OSD)
-ceph-radosgw  15.2.5  active  1  ceph-radosgw  jujucharms  291  ubuntu
+ceph-radosgw  16.2.0  active  1  ceph-radosgw  charmstore  stable  398  ubuntu  Unit is ready
-cinder  17.0.0  active  1  cinder  jujucharms  306  ubuntu
+cinder  18.0.0  active  1  cinder  charmstore  stable  436  ubuntu  Unit is ready
-cinder-ceph  17.0.0  active  1  cinder-ceph  jujucharms  258  ubuntu
+cinder-ceph  18.0.0  active  1  cinder-ceph  charmstore  stable  352  ubuntu  Unit is ready
-cinder-mysql-router  8.0.22  active  1  mysql-router  jujucharms  4  ubuntu
+cinder-mysql-router  8.0.23  active  1  mysql-router  charmstore  stable  48  ubuntu  Unit is ready
-dashboard-mysql-router  8.0.22  active  1  mysql-router  jujucharms  4  ubuntu
+dashboard-mysql-router  8.0.23  active  1  mysql-router  charmstore  stable  48  ubuntu  Unit is ready
-glance  21.0.0  active  1  glance  jujucharms  301  ubuntu
+glance  22.0.0  active  1  glance  charmstore  stable  450  ubuntu  Unit is ready
-glance-mysql-router  8.0.22  active  1  mysql-router  jujucharms  4  ubuntu
+glance-mysql-router  8.0.23  active  1  mysql-router  charmstore  stable  48  ubuntu  Unit is ready
-keystone  18.0.0  active  1  keystone  jujucharms  319  ubuntu
+keystone  19.0.0  active  1  keystone  charmstore  stable  542  ubuntu  Application Ready
-keystone-mysql-router  8.0.22  active  1  mysql-router  jujucharms  4  ubuntu
+keystone-mysql-router  8.0.23  active  1  mysql-router  charmstore  stable  48  ubuntu  Unit is ready
-mysql-innodb-cluster  8.0.22  active  3  mysql-innodb-cluster  jujucharms  3  ubuntu
+mysql-innodb-cluster  8.0.23  active  3  mysql-innodb-cluster  charmstore  stable  74  ubuntu  Unit is ready: Mode: R/W, Cluster is ONLINE and can tolerate up to ONE failure.
-ncc-mysql-router  8.0.22  active  1  mysql-router  jujucharms  4  ubuntu
+ncc-mysql-router  8.0.23  active  1  mysql-router  charmstore  stable  48  ubuntu  Unit is ready
-neutron-api  17.0.0  active  1  neutron-api  jujucharms  290  ubuntu
+neutron-api  18.0.0  active  1  neutron-api  charmstore  stable  471  ubuntu  Unit is ready
-neutron-api-mysql-router  8.0.22  active  1  mysql-router  jujucharms  4  ubuntu
+neutron-api-mysql-router  8.0.23  active  1  mysql-router  charmstore  stable  48  ubuntu  Unit is ready
-neutron-api-plugin-ovn  17.0.0  active  1  neutron-api-plugin-ovn  jujucharms  2  ubuntu
+neutron-api-plugin-ovn  18.0.0  active  1  neutron-api-plugin-ovn  charmstore  stable  40  ubuntu  Unit is ready
-nova-cloud-controller  22.0.0  active  1  nova-cloud-controller  jujucharms  349  ubuntu
+nova-cloud-controller  23.0.0  active  1  nova-cloud-controller  charmstore  stable  521  ubuntu  Unit is ready
-nova-compute  22.0.0  active  3  nova-compute  jujucharms  323  ubuntu
+nova-compute  23.0.0  active  3  nova-compute  charmstore  stable  539  ubuntu  Unit is ready
-ntp  3.5  active  4  ntp  jujucharms  41  ubuntu
+ntp  3.5  active  4  ntp  charmstore  stable  45  ubuntu  chrony: Ready
-openstack-dashboard  18.6.1  active  1  openstack-dashboard  jujucharms  309  ubuntu
+openstack-dashboard  19.2.0  active  1  openstack-dashboard  charmstore  stable  505  ubuntu  Unit is ready
-ovn-central  20.03.1  active  3  ovn-central  jujucharms  2  ubuntu
+ovn-central  20.12.0  active  3  ovn-central  charmstore  stable  51  ubuntu  Unit is ready (leader: ovnnb_db, ovnsb_db northd: active)
-ovn-chassis  20.03.1  active  3  ovn-chassis  jujucharms  7  ubuntu
+ovn-chassis  20.12.0  active  3  ovn-chassis  charmstore  stable  63  ubuntu  Unit is ready
-placement  4.0.0  active  1  placement  jujucharms  15  ubuntu
+placement  5.0.0  active  1  placement  charmstore  stable  47  ubuntu  Unit is ready
-placement-mysql-router  8.0.22  active  1  mysql-router  jujucharms  4  ubuntu
+placement-mysql-router  8.0.23  active  1  mysql-router  charmstore  stable  48  ubuntu  Unit is ready
-rabbitmq-server  3.8.2  active  1  rabbitmq-server  jujucharms  106  ubuntu
+rabbitmq-server  3.8.2  active  1  rabbitmq-server  charmstore  stable  406  ubuntu  Unit is ready
-vault  1.5.4  active  1  vault  jujucharms  41  ubuntu
+vault  1.5.4  active  1  vault  charmstore  stable  141  ubuntu  Unit is ready (active: true, mlock: disabled)
-vault-mysql-router  8.0.22  active  1  mysql-router  jujucharms  4  ubuntu
+vault-mysql-router  8.0.23  active  1  mysql-router  charmstore  stable  48  ubuntu  Unit is ready

 Unit  Workload  Agent  Machine  Public address  Ports  Message
-ceph-mon/0*  active  idle  0/lxd/3  10.0.0.191  Unit is ready and clustered
+ceph-mon/0  active  idle  0/lxd/3  10.0.0.170  Unit is ready and clustered
-ceph-mon/1  active  idle  1/lxd/3  10.0.0.189  Unit is ready and clustered
+ceph-mon/1  active  idle  1/lxd/3  10.0.0.169  Unit is ready and clustered
-ceph-mon/2  active  idle  2/lxd/4  10.0.0.190  Unit is ready and clustered
+ceph-mon/2*  active  idle  2/lxd/4  10.0.0.168  Unit is ready and clustered
-ceph-osd/0*  active  idle  0  10.0.0.171  Unit is ready (1 OSD)
+ceph-osd/0*  active  idle  0  10.0.0.150  Unit is ready (2 OSD)
-  ntp/1  active  idle  10.0.0.171  123/udp  chrony: Ready
+  ntp/3  active  idle  10.0.0.150  123/udp  chrony: Ready
-ceph-osd/1  active  idle  1  10.0.0.172  Unit is ready (1 OSD)
+ceph-osd/1  active  idle  1  10.0.0.151  Unit is ready (2 OSD)
-  ntp/0*  active  idle  10.0.0.172  123/udp  chrony: Ready
+  ntp/2  active  idle  10.0.0.151  123/udp  chrony: Ready
-ceph-osd/2  active  idle  2  10.0.0.173  Unit is ready (1 OSD)
+ceph-osd/2  active  idle  2  10.0.0.152  Unit is ready (2 OSD)
-  ntp/3  active  idle  10.0.0.173  123/udp  chrony: Ready
+  ntp/1  active  idle  10.0.0.152  123/udp  chrony: Ready
-ceph-osd/3  active  idle  3  10.0.0.174  Unit is ready (1 OSD)
+ceph-osd/3  active  idle  3  10.0.0.153  Unit is ready (2 OSD)
-  ntp/2  active  idle  10.0.0.174  123/udp  chrony: Ready
+  ntp/0*  active  idle  10.0.0.153  123/udp  chrony: Ready
-ceph-radosgw/0*  active  idle  0/lxd/4  10.0.0.193  80/tcp  Unit is ready
+ceph-radosgw/0*  active  idle  0/lxd/4  10.0.0.172  80/tcp  Unit is ready
-cinder/0*  active  idle  1/lxd/4  10.0.0.192  8776/tcp  Unit is ready
+cinder/0*  active  idle  1/lxd/4  10.0.0.171  8776/tcp  Unit is ready
-  cinder-ceph/0*  active  idle  10.0.0.192  Unit is ready
+  cinder-ceph/0*  active  idle  10.0.0.171  Unit is ready
-  cinder-mysql-router/0*  active  idle  10.0.0.192  Unit is ready
+  cinder-mysql-router/0*  active  idle  10.0.0.171  Unit is ready
-glance/0*  active  idle  3/lxd/3  10.0.0.188  9292/tcp  Unit is ready
+glance/0*  active  idle  3/lxd/3  10.0.0.167  9292/tcp  Unit is ready
-  glance-mysql-router/0*  active  idle  10.0.0.188  Unit is ready
+  glance-mysql-router/0*  active  idle  10.0.0.167  Unit is ready
-keystone/0*  active  idle  0/lxd/2  10.0.0.29  5000/tcp  Unit is ready
+keystone/0*  active  idle  0/lxd/2  10.0.0.162  5000/tcp  Unit is ready
-  keystone-mysql-router/0*  active  idle  10.0.0.29  Unit is ready
+  keystone-mysql-router/0*  active  idle  10.0.0.162  Unit is ready
-mysql-innodb-cluster/0*  active  idle  0/lxd/0  10.0.0.175  Unit is ready: Mode: R/W
+mysql-innodb-cluster/0*  active  idle  0/lxd/0  10.0.0.154  Unit is ready: Mode: R/W, Cluster is ONLINE and can tolerate up to ONE failure.
-mysql-innodb-cluster/1  active  idle  1/lxd/0  10.0.0.176  Unit is ready: Mode: R/O
+mysql-innodb-cluster/1  active  idle  1/lxd/0  10.0.0.155  Unit is ready: Mode: R/O, Cluster is ONLINE and can tolerate up to ONE failure.
-mysql-innodb-cluster/2  active  idle  2/lxd/0  10.0.0.177  Unit is ready: Mode: R/O
+mysql-innodb-cluster/2  active  idle  2/lxd/0  10.0.0.156  Unit is ready: Mode: R/O, Cluster is ONLINE and can tolerate up to ONE failure.
-neutron-api/0*  active  idle  1/lxd/2  10.0.0.182  9696/tcp  Unit is ready
+neutron-api/0*  active  idle  1/lxd/2  10.0.0.161  9696/tcp  Unit is ready
-  neutron-api-mysql-router/0*  active  idle  10.0.0.182  Unit is ready
+  neutron-api-mysql-router/0*  active  idle  10.0.0.161  Unit is ready
-  neutron-api-plugin-ovn/0*  active  idle  10.0.0.182  Unit is ready
+  neutron-api-plugin-ovn/0*  active  idle  10.0.0.161  Unit is ready
-nova-cloud-controller/0*  active  idle  3/lxd/1  10.0.0.185  8774/tcp,8775/tcp  Unit is ready
+nova-cloud-controller/0*  active  idle  3/lxd/1  10.0.0.164  8774/tcp,8775/tcp  Unit is ready
-  ncc-mysql-router/0*  active  idle  10.0.0.185  Unit is ready
+  ncc-mysql-router/0*  active  idle  10.0.0.164  Unit is ready
-nova-compute/0*  active  idle  1  10.0.0.172  Unit is ready
+nova-compute/0*  active  idle  1  10.0.0.151  Unit is ready
-  ovn-chassis/0*  active  idle  10.0.0.172  Unit is ready
+  ovn-chassis/2  active  idle  10.0.0.151  Unit is ready
-nova-compute/1  active  idle  2  10.0.0.173  Unit is ready
+nova-compute/1  active  idle  2  10.0.0.152  Unit is ready
-  ovn-chassis/2  active  idle  10.0.0.173  Unit is ready
+  ovn-chassis/0*  active  idle  10.0.0.152  Unit is ready
-nova-compute/2  active  idle  3  10.0.0.174  Unit is ready
+nova-compute/2  active  idle  3  10.0.0.153  Unit is ready
-  ovn-chassis/1  active  idle  10.0.0.174  Unit is ready
+  ovn-chassis/1  active  idle  10.0.0.153  Unit is ready
-openstack-dashboard/0*  active  idle  2/lxd/3  10.0.0.187  80/tcp,443/tcp  Unit is ready
+openstack-dashboard/0*  active  idle  2/lxd/3  10.0.0.166  80/tcp,443/tcp  Unit is ready
-  dashboard-mysql-router/0*  active  idle  10.0.0.187  Unit is ready
+  dashboard-mysql-router/0*  active  idle  10.0.0.166  Unit is ready
-ovn-central/0  active  idle  0/lxd/1  10.0.0.181  6641/tcp,6642/tcp  Unit is ready (leader: ovnnb_db)
+ovn-central/0*  active  idle  0/lxd/1  10.0.0.158  6641/tcp,6642/tcp  Unit is ready (leader: ovnnb_db, ovnsb_db northd: active)
-ovn-central/1  active  idle  1/lxd/1  10.0.0.179  6641/tcp,6642/tcp  Unit is ready
+ovn-central/1  active  idle  1/lxd/1  10.0.0.159  6641/tcp,6642/tcp  Unit is ready
-ovn-central/2*  active  idle  2/lxd/1  10.0.0.180  6641/tcp,6642/tcp  Unit is ready (leader: ovnsb_db northd: active)
+ovn-central/2  active  idle  2/lxd/1  10.0.0.160  6641/tcp,6642/tcp  Unit is ready
-placement/0*  active  idle  3/lxd/2  10.0.0.186  8778/tcp  Unit is ready
+placement/0*  active  idle  3/lxd/2  10.0.0.165  8778/tcp  Unit is ready
-  placement-mysql-router/0*  active  idle  10.0.0.186  Unit is ready
+  placement-mysql-router/0*  active  idle  10.0.0.165  Unit is ready
-rabbitmq-server/0*  active  idle  2/lxd/2  10.0.0.184  5672/tcp  Unit is ready
+rabbitmq-server/0*  active  idle  2/lxd/2  10.0.0.163  5672/tcp  Unit is ready
-vault/0*  active  idle  3/lxd/0  10.0.0.178  8200/tcp  Unit is ready (active: true, mlock: disabled)
+vault/0*  active  idle  3/lxd/0  10.0.0.157  8200/tcp  Unit is ready (active: true, mlock: disabled)
-  vault-mysql-router/0*  active  idle  10.0.0.178  Unit is ready
+  vault-mysql-router/0*  active  idle  10.0.0.157  Unit is ready

 Machine  State  DNS  Inst id  Series  AZ  Message
-0  started  10.0.0.171  node2  focal  default  Deployed
+0  started  10.0.0.150  node4  focal  default  Deployed
-0/lxd/0  started  10.0.0.175  juju-bdbf2c-0-lxd-0  focal  default  Container started
+0/lxd/0  started  10.0.0.154  juju-3d942c-0-lxd-0  focal  default  Container started
-0/lxd/1  started  10.0.0.181  juju-bdbf2c-0-lxd-1  focal  default  Container started
+0/lxd/1  started  10.0.0.158  juju-3d942c-0-lxd-1  focal  default  Container started
-0/lxd/2  started  10.0.0.29  juju-bdbf2c-0-lxd-2  focal  default  Container started
+0/lxd/2  started  10.0.0.162  juju-3d942c-0-lxd-2  focal  default  Container started
-0/lxd/3  started  10.0.0.191  juju-bdbf2c-0-lxd-3  focal  default  Container started
+0/lxd/3  started  10.0.0.170  juju-3d942c-0-lxd-3  focal  default  Container started
-0/lxd/4  started  10.0.0.193  juju-bdbf2c-0-lxd-4  focal  default  Container started
+0/lxd/4  started  10.0.0.172  juju-3d942c-0-lxd-4  focal  default  Container started
-1  started  10.0.0.172  node1  focal  default  Deployed
+1  started  10.0.0.151  node1  focal  default  Deployed
-1/lxd/0  started  10.0.0.176  juju-bdbf2c-1-lxd-0  focal  default  Container started
+1/lxd/0  started  10.0.0.155  juju-3d942c-1-lxd-0  focal  default  Container started
-1/lxd/1  started  10.0.0.179  juju-bdbf2c-1-lxd-1  focal  default  Container started
+1/lxd/1  started  10.0.0.159  juju-3d942c-1-lxd-1  focal  default  Container started
-1/lxd/2  started  10.0.0.182  juju-bdbf2c-1-lxd-2  focal  default  Container started
+1/lxd/2  started  10.0.0.161  juju-3d942c-1-lxd-2  focal  default  Container started
-1/lxd/3  started  10.0.0.189  juju-bdbf2c-1-lxd-3  focal  default  Container started
+1/lxd/3  started  10.0.0.169  juju-3d942c-1-lxd-3  focal  default  Container started
-1/lxd/4  started  10.0.0.192  juju-bdbf2c-1-lxd-4  focal  default  Container started
+1/lxd/4  started  10.0.0.171  juju-3d942c-1-lxd-4  focal  default  Container started
-2  started  10.0.0.173  node3  focal  default  Deployed
+2  started  10.0.0.152  node2  focal  default  Deployed
-2/lxd/0  started  10.0.0.177  juju-bdbf2c-2-lxd-0  focal  default  Container started
+2/lxd/0  started  10.0.0.156  juju-3d942c-2-lxd-0  focal  default  Container started
-2/lxd/1  started  10.0.0.180  juju-bdbf2c-2-lxd-1  focal  default  Container started
+2/lxd/1  started  10.0.0.160  juju-3d942c-2-lxd-1  focal  default  Container started
-2/lxd/2  started  10.0.0.184  juju-bdbf2c-2-lxd-2  focal  default  Container started
+2/lxd/2  started  10.0.0.163  juju-3d942c-2-lxd-2  focal  default  Container started
-2/lxd/3  started  10.0.0.187  juju-bdbf2c-2-lxd-3  focal  default  Container started
+2/lxd/3  started  10.0.0.166  juju-3d942c-2-lxd-3  focal  default  Container started
-2/lxd/4  started  10.0.0.190  juju-bdbf2c-2-lxd-4  focal  default  Container started
+2/lxd/4  started  10.0.0.168  juju-3d942c-2-lxd-4  focal  default  Container started
-3  started  10.0.0.174  node4  focal  default  Deployed
+3  started  10.0.0.153  node3  focal  default  Deployed
-3/lxd/0  started  10.0.0.178  juju-bdbf2c-3-lxd-0  focal  default  Container started
+3/lxd/0  started  10.0.0.157  juju-3d942c-3-lxd-0  focal  default  Container started
-3/lxd/1  started  10.0.0.185  juju-bdbf2c-3-lxd-1  focal  default  Container started
+3/lxd/1  started  10.0.0.164  juju-3d942c-3-lxd-1  focal  default  Container started
-3/lxd/2  started  10.0.0.186  juju-bdbf2c-3-lxd-2  focal  default  Container started
+3/lxd/2  started  10.0.0.165  juju-3d942c-3-lxd-2  focal  default  Container started
-3/lxd/3  started  10.0.0.188  juju-bdbf2c-3-lxd-3  focal  default  Container started
+3/lxd/3  started  10.0.0.167  juju-3d942c-3-lxd-3  focal  default  Container started

 Relation provider  Requirer  Interface  Type  Message
 ceph-mon:client  cinder-ceph:ceph  ceph-client  regular
@@ -178,6 +178,7 @@ installed from the instructions given on the :doc:`Install OpenStack

 vault:certificates  cinder:certificates  tls-certificates  regular
 vault:certificates  glance:certificates  tls-certificates  regular
 vault:certificates  keystone:certificates  tls-certificates  regular
+vault:certificates  mysql-innodb-cluster:certificates  tls-certificates  regular
 vault:certificates  neutron-api-plugin-ovn:certificates  tls-certificates  regular
 vault:certificates  neutron-api:certificates  tls-certificates  regular
 vault:certificates  nova-cloud-controller:certificates  tls-certificates  regular


@@ -60,8 +60,8 @@ OpenStack release

 do use this method).

 As the :doc:`Overview <install-overview>` of the Installation section states,
-OpenStack Victoria will be deployed atop Ubuntu 20.04 LTS (Focal) cloud nodes.
+OpenStack Wallaby will be deployed atop Ubuntu 20.04 LTS (Focal) cloud nodes.
-In order to achieve this a cloud archive release of 'cloud:focal-victoria' will
+In order to achieve this a cloud archive release of 'cloud:focal-wallaby' will
 be used during the install of each OpenStack application. Note that some
 applications are not part of the OpenStack project per se and therefore do not
 apply (exceptionally, Ceph applications do use this method). Not using a more
@@ -75,7 +75,7 @@ releases and how they are used when upgrading OpenStack.

 .. important::

 The chosen OpenStack release may impact the installation and configuration
-instructions. **This guide assumes that OpenStack Victoria is being
+instructions. **This guide assumes that OpenStack Wallaby is being
 deployed.**

 Installation progress
@@ -131,7 +131,7 @@ contains the configuration.

 ceph-osd:
   osd-devices: /dev/sdb
-  source: cloud:focal-victoria
+  source: cloud:focal-wallaby

 To deploy the application we'll make use of the 'compute' tag that we placed on
 each of these nodes on the :doc:`Install MAAS <install-maas>` page:
@@ -162,10 +162,11 @@ charm. We'll then scale-out the application to two other machines. File

 .. code-block:: yaml

 nova-compute:
+  config-flags: default_ephemeral_format=ext4
   enable-live-migration: true
   enable-resize: true
   migration-auth-type: ssh
-  openstack-origin: cloud:focal-victoria
+  openstack-origin: cloud:focal-wallaby

 The initial node must be targeted by machine since there are no more free Juju
 machines (MAAS nodes) available. This means we're placing multiple services on
@@ -221,13 +222,13 @@ Here are the corresponding commands for Vault:

 juju add-relation vault-mysql-router:db-router mysql-innodb-cluster:db-router
 juju add-relation vault-mysql-router:shared-db vault:shared-db

-Vault now needs to be initialised and unsealed. The vault charm will also need
-to be authorised to carry out certain tasks. These steps are covered in the
-`vault charm`_ documentation. Perform them now.
+Vault must now be initialised and unsealed. The vault charm will also need to
+be authorised to carry out certain tasks. These steps are covered in the `vault
+charm`_ documentation. Perform them now.

-Vault must now be provided with a CA certificate in order for it to issue
-certificates to cloud API services. This is covered on the :ref:`Managing TLS
-certificates <add_ca_certificate>` page. Do this now.
+Provide Vault with a CA certificate so it can issue certificates to cloud API
+services. This is covered on the :ref:`Managing TLS certificates
+<add_ca_certificate>` page. Do this now.

 Once the above is completed the Unit section output to command :command:`juju
 status` should look similar to this:
@@ -235,18 +236,18 @@ status` should look similar to this:

 .. code-block:: console

 Unit  Workload  Agent  Machine  Public address  Ports  Message
-ceph-osd/0*  blocked  idle  0  10.0.0.171  Missing relation: monitor
+ceph-osd/0*  blocked  idle  0  10.0.0.150  Missing relation: monitor
-ceph-osd/1  blocked  idle  1  10.0.0.172  Missing relation: monitor
+ceph-osd/1  blocked  idle  1  10.0.0.151  Missing relation: monitor
-ceph-osd/2  blocked  idle  2  10.0.0.173  Missing relation: monitor
+ceph-osd/2  blocked  idle  2  10.0.0.152  Missing relation: monitor
-ceph-osd/3  blocked  idle  3  10.0.0.174  Missing relation: monitor
+ceph-osd/3  blocked  idle  3  10.0.0.153  Missing relation: monitor
-mysql-innodb-cluster/0*  active  idle  0/lxd/0  10.0.0.175  Unit is ready: Mode: R/W
+mysql-innodb-cluster/0*  active  idle  0/lxd/0  10.0.0.154  Unit is ready: Mode: R/W, Cluster is ONLINE and can tolerate up to ONE failure.
-mysql-innodb-cluster/1  active  idle  1/lxd/0  10.0.0.176  Unit is ready: Mode: R/O
+mysql-innodb-cluster/1  active  idle  1/lxd/0  10.0.0.155  Unit is ready: Mode: R/O, Cluster is ONLINE and can tolerate up to ONE failure.
-mysql-innodb-cluster/2  active  idle  2/lxd/0  10.0.0.177  Unit is ready: Mode: R/O
+mysql-innodb-cluster/2  active  idle  2/lxd/0  10.0.0.156  Unit is ready: Mode: R/O, Cluster is ONLINE and can tolerate up to ONE failure.
-nova-compute/0*  blocked  idle  1  10.0.0.172  Missing relations: messaging, image
+nova-compute/0*  blocked  idle  1  10.0.0.151  Missing relations: messaging, image
-nova-compute/1  blocked  idle  2  10.0.0.173  Missing relations: messaging, image
+nova-compute/1  blocked  idle  2  10.0.0.152  Missing relations: messaging, image
-nova-compute/2  blocked  idle  3  10.0.0.174  Missing relations: messaging, image
+nova-compute/2  blocked  idle  3  10.0.0.153  Missing relations: messaging, image
-vault/0*  active  idle  3/lxd/0  10.0.0.178  8200/tcp  Unit is ready (active: true, mlock: disabled)
+vault/0*  active  idle  3/lxd/0  10.0.0.157  8200/tcp  Unit is ready (active: true, mlock: disabled)
-  vault-mysql-router/0*  active  idle  10.0.0.178  Unit is ready
+  vault-mysql-router/0*  active  idle  10.0.0.157  Unit is ready

 Cloud applications are TLS-enabled via the ``vault:certificates`` relation.
 Below we start with the cloud database. Although the latter has a self-signed
@@ -278,9 +279,10 @@ File ``neutron.yaml`` contains the configuration necessary for three of them:

 neutron-api:
   neutron-security-groups: true
   flat-network-providers: physnet1
-  openstack-origin: cloud:focal-victoria
+  worker-multiplier: 0.25
+  openstack-origin: cloud:focal-wallaby
 ovn-central:
-  source: cloud:focal-victoria
+  source: cloud:focal-wallaby

 The ``bridge-interface-mappings`` setting refers to a network interface that
 the OVN Chassis will bind to. In the above example it is 'eth1' and it should
@@ -341,11 +343,20 @@ Join neutron-api to the cloud database:

 Keystone
 ~~~~~~~~

-The keystone application will be containerised on machine 0:
+The keystone application will be containerised on machine 0. File
+``keystone.yaml`` contains the configuration:
+
+.. code-block:: yaml
+
+keystone:
+  worker-multiplier: 0.25
+  openstack-origin: cloud:focal-wallaby
+
+To deploy:

 .. code-block:: none

-juju deploy --to lxd:0 --config openstack-origin=cloud:focal-victoria keystone
+juju deploy --to lxd:0 --config keystone.yaml keystone

 Join keystone to the cloud database:
@@ -385,30 +396,30 @@ look similar to this:

 .. code-block:: console

 Unit  Workload  Agent  Machine  Public address  Ports  Message
-ceph-osd/0*  blocked  idle  0  10.0.0.171  Missing relation: monitor
+ceph-osd/0*  blocked  idle  0  10.0.0.150  Missing relation: monitor
-ceph-osd/1  blocked  idle  1  10.0.0.172  Missing relation: monitor
+ceph-osd/1  blocked  idle  1  10.0.0.151  Missing relation: monitor
-ceph-osd/2  blocked  idle  2  10.0.0.173  Missing relation: monitor
+ceph-osd/2  blocked  idle  2  10.0.0.152  Missing relation: monitor
-ceph-osd/3  blocked  idle  3  10.0.0.174  Missing relation: monitor
+ceph-osd/3  blocked  idle  3  10.0.0.153  Missing relation: monitor
-keystone/0*  active  idle  0/lxd/2  10.0.0.183  5000/tcp  Unit is ready
+keystone/0*  active  idle  0/lxd/2  10.0.0.162  5000/tcp  Unit is ready
-  keystone-mysql-router/0*  active  idle  10.0.0.183  Unit is ready
+  keystone-mysql-router/0*  active  idle  10.0.0.162  Unit is ready
-mysql-innodb-cluster/0*  active  idle  0/lxd/0  10.0.0.175  Unit is ready: Mode: R/W
+mysql-innodb-cluster/0*  active  idle  0/lxd/0  10.0.0.154  Unit is ready: Mode: R/W, Cluster is ONLINE and can tolerate up to ONE failure.
-mysql-innodb-cluster/1  active  idle  1/lxd/0  10.0.0.176  Unit is ready: Mode: R/O
+mysql-innodb-cluster/1  active  idle  1/lxd/0  10.0.0.155  Unit is ready: Mode: R/O, Cluster is ONLINE and can tolerate up to ONE failure.
-mysql-innodb-cluster/2  active  idle  2/lxd/0  10.0.0.177  Unit is ready: Mode: R/O
+mysql-innodb-cluster/2  active  idle  2/lxd/0  10.0.0.156  Unit is ready: Mode: R/O, Cluster is ONLINE and can tolerate up to ONE failure.
-neutron-api/0*  active  idle  1/lxd/2  10.0.0.182  9696/tcp  Unit is ready
+neutron-api/0*  active  idle  1/lxd/2  10.0.0.161  9696/tcp  Unit is ready
-  neutron-api-mysql-router/0*  active  idle  10.0.0.182  Unit is ready
+  neutron-api-mysql-router/0*  active  idle  10.0.0.161  Unit is ready
-  neutron-api-plugin-ovn/0*  active  idle  10.0.0.182  Unit is ready
+  neutron-api-plugin-ovn/0*  active  idle  10.0.0.161  Unit is ready
-nova-compute/0*  blocked  idle  1  10.0.0.172  Missing relations: image
+nova-compute/0*  blocked  idle  1  10.0.0.151  Missing relations: image
-  ovn-chassis/0*  active  idle  10.0.0.172  Unit is ready
+  ovn-chassis/2  active  idle  10.0.0.151  Unit is ready
-nova-compute/1  blocked  idle  2  10.0.0.173  Missing relations: image
+nova-compute/1  blocked  idle  2  10.0.0.152  Missing relations: image
-  ovn-chassis/2  active  idle  10.0.0.173  Unit is ready
+  ovn-chassis/0*  active  idle  10.0.0.152  Unit is ready
-nova-compute/2  blocked  idle  3  10.0.0.174  Missing relations: image
+nova-compute/2  blocked  idle  3  10.0.0.153  Missing relations: image
-  ovn-chassis/1  active  idle  10.0.0.174  Unit is ready
+  ovn-chassis/1  active  idle  10.0.0.153  Unit is ready
-ovn-central/0  active  idle  0/lxd/1  10.0.0.181  6641/tcp,6642/tcp  Unit is ready
+ovn-central/0*  active  idle  0/lxd/1  10.0.0.158  6641/tcp,6642/tcp  Unit is ready (leader: ovnnb_db, ovnsb_db northd: active)
-ovn-central/1  active  idle  1/lxd/1  10.0.0.179  6641/tcp,6642/tcp  Unit is ready
+ovn-central/1  active  idle  1/lxd/1  10.0.0.159  6641/tcp,6642/tcp  Unit is ready
-ovn-central/2*  active  idle  2/lxd/1  10.0.0.180  6641/tcp,6642/tcp  Unit is ready (leader: ovnnb_db, ovnsb_db northd: active)
+ovn-central/2  active  idle  2/lxd/1  10.0.0.160  6641/tcp,6642/tcp  Unit is ready
-rabbitmq-server/0*  active  idle  2/lxd/2  10.0.0.184  5672/tcp  Unit is ready
+rabbitmq-server/0*  active  idle  2/lxd/2  10.0.0.163  5672/tcp  Unit is ready
-vault/0*  active  idle  3/lxd/0  10.0.0.178  8200/tcp  Unit is ready (active: true, mlock: disabled)
+vault/0*  active  idle  3/lxd/0  10.0.0.157  8200/tcp  Unit is ready (active: true, mlock: disabled)
-  vault-mysql-router/0*  active  idle  10.0.0.178  Unit is ready
+  vault-mysql-router/0*  active  idle  10.0.0.157  Unit is ready

 Nova cloud controller
 ~~~~~~~~~~~~~~~~~~~~~
@@ -422,7 +433,8 @@ the configuration:

 nova-cloud-controller:
   network-manager: Neutron
-  openstack-origin: cloud:focal-victoria
+  worker-multiplier: 0.25
+  openstack-origin: cloud:focal-wallaby

 To deploy:
@@ -458,11 +470,19 @@ Placement

 ~~~~~~~~~

 The placement application will be containerised on machine 3 with the
-`placement`_ charm:
+`placement`_ charm. File ``placement.yaml`` contains the configuration:
+
+.. code-block:: yaml
+
+placement:
+  worker-multiplier: 0.25
+  openstack-origin: cloud:focal-wallaby
+
+To deploy:

 .. code-block:: none

-juju deploy --to lxd:3 --config openstack-origin=cloud:focal-victoria placement
+juju deploy --to lxd:3 --config placement.yaml placement

 Join placement to the cloud database:
@@ -488,7 +508,7 @@ The openstack-dashboard application (Horizon) will be containerised on machine

 .. code-block:: none

-juju deploy --to lxd:2 --config openstack-origin=cloud:focal-victoria openstack-dashboard
+juju deploy --to lxd:2 --config openstack-origin=cloud:focal-wallaby openstack-dashboard

 Join openstack-dashboard to the cloud database:
@@ -515,11 +535,19 @@ Glance

 ~~~~~~

 The glance application will be containerised on machine 3 with the `glance`_
-charm:
+charm. File ``glance.yaml`` contains the configuration:
+
+.. code-block:: yaml
+
+glance:
+  worker-multiplier: 0.25
+  openstack-origin: cloud:focal-wallaby
+
+To deploy:

 .. code-block:: none

-juju deploy --to lxd:3 --config openstack-origin=cloud:focal-victoria glance
+juju deploy --to lxd:3 --config glance.yaml glance

 Join glance to the cloud database:
@ -544,48 +572,55 @@ look similar to this:
.. code-block:: console

   Unit                           Workload  Agent  Machine  Public address  Ports              Message
   ceph-osd/0*                    blocked   idle   0        10.0.0.150                         Missing relation: monitor
   ceph-osd/1                     blocked   idle   1        10.0.0.151                         Missing relation: monitor
   ceph-osd/2                     blocked   idle   2        10.0.0.152                         Missing relation: monitor
   ceph-osd/3                     blocked   idle   3        10.0.0.153                         Missing relation: monitor
   glance/0*                      active    idle   3/lxd/3  10.0.0.167      9292/tcp           Unit is ready
     glance-mysql-router/0*       active    idle            10.0.0.167                         Unit is ready
   keystone/0*                    active    idle   0/lxd/2  10.0.0.162      5000/tcp           Unit is ready
     keystone-mysql-router/0*     active    idle            10.0.0.162                         Unit is ready
   mysql-innodb-cluster/0*        active    idle   0/lxd/0  10.0.0.154                         Unit is ready: Mode: R/W, Cluster is ONLINE and can tolerate up to ONE failure.
   mysql-innodb-cluster/1         active    idle   1/lxd/0  10.0.0.155                         Unit is ready: Mode: R/O, Cluster is ONLINE and can tolerate up to ONE failure.
   mysql-innodb-cluster/2         active    idle   2/lxd/0  10.0.0.156                         Unit is ready: Mode: R/O, Cluster is ONLINE and can tolerate up to ONE failure.
   neutron-api/0*                 active    idle   1/lxd/2  10.0.0.161      9696/tcp           Unit is ready
     neutron-api-mysql-router/0*  active    idle            10.0.0.161                         Unit is ready
     neutron-api-plugin-ovn/0*    active    idle            10.0.0.161                         Unit is ready
   nova-cloud-controller/0*       active    idle   3/lxd/1  10.0.0.164      8774/tcp,8775/tcp  Unit is ready
     ncc-mysql-router/0*          active    idle            10.0.0.164                         Unit is ready
   nova-compute/0*                active    idle   1        10.0.0.151                         Unit is ready
     ovn-chassis/2                active    idle            10.0.0.151                         Unit is ready
   nova-compute/1                 active    idle   2        10.0.0.152                         Unit is ready
     ovn-chassis/0*               active    idle            10.0.0.152                         Unit is ready
   nova-compute/2                 active    idle   3        10.0.0.153                         Unit is ready
     ovn-chassis/1                active    idle            10.0.0.153                         Unit is ready
   openstack-dashboard/0*         active    idle   2/lxd/3  10.0.0.166      80/tcp,443/tcp     Unit is ready
     dashboard-mysql-router/0*    active    idle            10.0.0.166                         Unit is ready
   ovn-central/0*                 active    idle   0/lxd/1  10.0.0.158      6641/tcp,6642/tcp  Unit is ready (leader: ovnnb_db, ovnsb_db northd: active)
   ovn-central/1                  active    idle   1/lxd/1  10.0.0.159      6641/tcp,6642/tcp  Unit is ready
   ovn-central/2                  active    idle   2/lxd/1  10.0.0.160      6641/tcp,6642/tcp  Unit is ready
   placement/0*                   active    idle   3/lxd/2  10.0.0.165      8778/tcp           Unit is ready
     placement-mysql-router/0*    active    idle            10.0.0.165                         Unit is ready
   rabbitmq-server/0*             active    idle   2/lxd/2  10.0.0.163      5672/tcp           Unit is ready
   vault/0*                       active    idle   3/lxd/0  10.0.0.157      8200/tcp           Unit is ready (active: true, mlock: disabled)
     vault-mysql-router/0*        active    idle            10.0.0.157                         Unit is ready
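While waiting for the model to settle, it can help to filter this table for units that are not yet active. A small sketch, run against a captured sample of the Unit section rather than a live cloud (the sample rows below are illustrative):

```shell
# Captured sample of the Unit section (illustrative rows, not live output)
cat > /tmp/units.txt <<'EOF'
ceph-osd/0*  blocked  idle  0        10.0.0.150  Missing relation: monitor
glance/0*    active   idle  3/lxd/3  10.0.0.167  Unit is ready
vault/0*     active   idle  3/lxd/0  10.0.0.157  Unit is ready
EOF

# Print the name and workload status of every unit that is not 'active'
awk '$2 != "active" {print $1, $2}' /tmp/units.txt
```

Against a live model the same awk filter can be fed directly from ``juju status``.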
Ceph monitor
~~~~~~~~~~~~

The ceph-mon application will be containerised on machines 0, 1, and 2 with the
`ceph-mon`_ charm. File ``ceph-mon.yaml`` contains the configuration:
.. code-block:: yaml

   ceph-mon:
     expected-osd-count: 3
     monitor-count: 3
     source: cloud:focal-wallaby
.. code-block:: none

   juju deploy -n 3 --to lxd:0,lxd:1,lxd:2 --config ceph-mon.yaml ceph-mon
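Since the options live in a plain file, they can be spot-checked from the shell before being handed to ``juju deploy``; the awk extraction below is purely illustrative:

```shell
# Recreate the configuration file described above
cat > /tmp/ceph-mon.yaml <<'EOF'
ceph-mon:
  expected-osd-count: 3
  monitor-count: 3
  source: cloud:focal-wallaby
EOF

# Extract one option value as a quick sanity check
awk -F': ' '/monitor-count/ {print $2}' /tmp/ceph-mon.yaml   # → 3
```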
Three relations can be added at this time:
The cinder application is deployed with the `cinder`_
charm. File ``cinder.yaml`` contains the configuration:
.. code-block:: yaml

   cinder:
     block-device: None
     glance-api-version: 2
     worker-multiplier: 0.25
     openstack-origin: cloud:focal-wallaby
To deploy:
The ceph-radosgw application will be containerised on machine 0 with the
`ceph-radosgw`_ charm:
.. code-block:: none

   juju deploy --to lxd:0 --config source=cloud:focal-wallaby ceph-radosgw
A single relation is needed:
Obtain the address in this way:

.. code-block:: none

   juju status --format=yaml openstack-dashboard | grep public-address | awk '{print $2}' | head -1
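The grep/awk pipeline can be tried offline against a captured copy of the YAML. The snippet below is a minimal, assumed shape of the ``juju status --format=yaml`` output, not a verbatim dump:

```shell
# Minimal, assumed shape of the status YAML (illustrative only)
cat > /tmp/dashboard-status.yaml <<'EOF'
applications:
  openstack-dashboard:
    units:
      openstack-dashboard/0:
        public-address: 10.0.0.166
EOF

# Same extraction pipeline as the live command
grep public-address /tmp/dashboard-status.yaml | awk '{print $2}' | head -1
```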
In this example, the address is '10.0.0.166'.

The password can be queried from Keystone:

.. code-block:: none

   juju run --unit keystone/leader leader-get admin_passwd
The dashboard URL then becomes:

**http://10.0.0.166/horizon**
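When scripting, the address and the path can be combined in the obvious way (the address here is the example value, and would normally come from the status query):

```shell
DASHBOARD_IP=10.0.0.166   # normally captured from 'juju status'
echo "http://${DASHBOARD_IP}/horizon"   # → http://10.0.0.166/horizon
```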
The final credentials needed to log in are:

| User Name: **admin**
| Password: ********************
| Domain: **admin_domain**
|
Once logged in you should see something like this:
.. figure:: ./media/install-openstack_horizon.png
   :scale: 70%
   :alt: Horizon dashboard
VM consoles
~~~~~~~~~~~

Enable a remote access protocol such as novnc (or spice) if you want to
connect to VM consoles from within the dashboard:
.. code-block:: none

   juju config nova-cloud-controller console-access-protocol=novnc
Next steps
----------

You have successfully deployed OpenStack using Juju and MAAS. The next step is
to render the cloud functional for users. This will involve setting up
networks, images, and a user environment. Go to :doc:`Configure OpenStack
<configure-openstack>` now.
The software versions used in this guide are as follows:
* **Ubuntu 20.04 LTS (Focal)** for the MAAS server, Juju client, Juju
  controller, and all cloud nodes (including containers)
* **MAAS 2.9.2**
* **Juju 2.9.0**
* **OpenStack Wallaby**
Proceed to the :doc:`Install MAAS <install-maas>` page to begin your
installation journey. Hardware requirements are also listed there.