Update to Jammy-Zed

I was forced to use an environment different from what I
normally use, hence the changes to infrastructure values.

Also closes a bug.

Closes-Bug: #2006729
Change-Id: I5d05c7d532b3433c8c53061f8a1727ab6c81e6e9
This commit is contained in:
Peter Matulis 2023-02-01 16:44:47 -05:00
parent 289f61afc9
commit 7a15d3a4b2
6 changed files with 251 additions and 259 deletions

View File

@ -62,18 +62,18 @@ Sample output:
.. code-block:: console
OS_AUTH_URL=https://10.246.114.25:5000/v3
OS_USERNAME=admin
OS_AUTH_TYPE=password
OS_USER_DOMAIN_NAME=admin_domain
OS_PROJECT_NAME=admin
OS_PASSWORD=aegoaquoo1veZae6
OS_PROJECT_DOMAIN_NAME=admin_domain
OS_AUTH_VERSION=3
OS_IDENTITY_API_VERSION=3
OS_REGION_NAME=RegionOne
OS_AUTH_PROTOCOL=https
OS_CACERT=/home/ubuntu/snap/openstackclients/common/root-ca.crt
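Before running client commands, it can help to verify that the key variables are actually set in the current shell. A minimal sketch (the variable list mirrors the sample output above; adjust it to your environment):

```shell
# Check that each required OpenStack client variable is non-empty.
required="OS_AUTH_URL OS_USERNAME OS_PASSWORD OS_PROJECT_NAME \
OS_USER_DOMAIN_NAME OS_PROJECT_DOMAIN_NAME OS_REGION_NAME OS_CACERT"
missing=0
for v in $required; do
    if [ -z "$(eval "echo \$$v")" ]; then
        echo "unset: $v"
        missing=1
    fi
done
if [ "$missing" -eq 0 ]; then echo "all set"; else echo "source your rc file first"; fi
```

If any variable is reported unset, source the admin rc file again before continuing.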
Perform actions as the admin user
---------------------------------
@ -94,19 +94,19 @@ The output will look similar to this:
.. code-block:: console
+----------------------------------+-----------+--------------+--------------+---------+-----------+---------------------------------------------+
| ID | Region | Service Name | Service Type | Enabled | Interface | URL |
+----------------------------------+-----------+--------------+--------------+---------+-----------+---------------------------------------------+
| 3c48cac8e70e47698d38d1611fc6e224 | RegionOne | keystone | identity | True | admin | https://10.246.114.25:35357/v3 |
| 5ba390395df64adf89f45f32d27969ae | RegionOne | cinderv3 | volumev3 | True | admin | https://10.246.114.43:8776/v3/$(tenant_id)s |
| 761629b7f09547cc8b84de5b207b3726 | RegionOne | glance | image | True | admin | https://10.246.114.19:9292 |
| b58ea16e6e2e4919ba5ace59e376c361 | RegionOne | nova | compute | True | admin | https://10.246.114.37:8774/v2.1 |
| cca67377a66d4900820141284c93c52d | RegionOne | placement | placement | True | admin | https://10.246.114.38:8778 |
| ff4947f47e5f480fb8ba90dbde673c6f | RegionOne | neutron | network | True | admin | https://10.246.114.24:9696 |
+----------------------------------+-----------+--------------+--------------+---------+-----------+---------------------------------------------+
If the endpoints aren't displayed, it's likely your environment variables
aren't set correctly.
.. note::
@ -125,8 +125,8 @@ a Jammy amd64 image:
mkdir ~/cloud-images
wget http://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img \
-O ~/cloud-images/jammy-amd64.img
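Optionally, the download can be verified against the checksum list published alongside the image (a sketch; it assumes network access to the same directory on cloud-images.ubuntu.com, and renames the entry to match the local filename):

```none
wget -q http://cloud-images.ubuntu.com/jammy/current/SHA256SUMS -O ~/cloud-images/SHA256SUMS
cd ~/cloud-images
grep 'jammy-server-cloudimg-amd64.img$' SHA256SUMS | \
    sed 's|jammy-server-cloudimg-amd64.img|jammy-amd64.img|' | sha256sum -c -
```

A final line of ``jammy-amd64.img: OK`` indicates an intact download.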
Now import the image and call it 'jammy-amd64':
@ -163,13 +163,13 @@ page:
Create the subnet, here called 'ext_subnet', for the above network. The values
used are based on the local environment. For instance, recall that our MAAS
subnet is '10.246.112.0/21':
.. code-block:: none
openstack subnet create --network ext_net --no-dhcp \
--gateway 10.246.112.1 --subnet-range 10.246.112.0/21 \
--allocation-pool start=10.246.116.23,end=10.246.116.87 \
ext_subnet
.. important::
@ -201,11 +201,11 @@ Sample output from the last command:
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| default_project_id | a67881c23bc840928b89054f35a6210e |
| domain_id | 228443ef0e054a89a36d75261b6531e9 |
| enabled | True |
| id | 37a3ab572ea14e659f1d885d44147b8a |
| name | user1 |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
@ -214,7 +214,7 @@ We'll use the user's ID to assign her the 'Member' role:
.. code-block:: none
openstack role add --user 37a3ab572ea14e659f1d885d44147b8a \
--project project1 Member
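The same assignment can also be made by name rather than ID, which avoids copying IDs around. A sketch using the names from this example (the ``--user-domain`` and ``--project-domain`` options disambiguate names that may repeat across domains):

```none
openstack role add --user user1 --user-domain domain1 \
    --project project1 --project-domain domain1 Member
```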
Create an OpenStack user authentication file for user 'user1'. All we're
@ -226,14 +226,14 @@ environment:
echo $OS_AUTH_URL
The output for the last command for this example is
**https://10.246.114.25:5000/v3**.
The contents of the file, say ``project1-rc``, will therefore look like this
(assuming the user password is 'ubuntu'):
.. code-block:: ini
export OS_AUTH_URL=https://10.246.114.25:5000/v3
export OS_USER_DOMAIN_NAME=domain1
export OS_USERNAME=user1
export OS_PROJECT_DOMAIN_NAME=domain1
@ -279,15 +279,15 @@ link this network to the public network created earlier.
The non-admin user now creates a private internal network called 'user1_net'
and an accompanying subnet called 'user1_subnet' (here the DNS server is the
MAAS server at 10.246.112.3, but adjust to local conditions):
.. code-block:: none
openstack network create --internal user1_net
openstack subnet create --network user1_net --dns-nameserver 10.246.112.3 \
--subnet-range 192.168.0.0/24 \
--allocation-pool start=192.168.0.10,end=192.168.0.99 \
user1_subnet
Now a router called 'user1_router' is created, added to the subnet, and told to
@ -358,11 +358,11 @@ Sample output:
.. code-block:: console
+--------------------------------------+---------+--------+---------------------------------------+-------------+----------+
| ID | Name | Status | Networks | Image | Flavor |
+--------------------------------------+---------+--------+---------------------------------------+-------------+----------+
| 627a33c8-3c55-4878-bce3-3c12fc04e4b9 | jammy-1 | ACTIVE | user1_net=192.168.0.98, 10.246.116.39 | jammy-amd64 | m1.small |
+--------------------------------------+---------+--------+---------------------------------------+-------------+----------+
The first address listed is in the private network and the second one is in the
public network:
@ -380,7 +380,7 @@ The instance is ready when the output contains:
.
.
.
Ubuntu 22.04.1 LTS jammy-1 ttyS0
jammy-1 login:

View File

@ -25,9 +25,9 @@ The software versions used in this guide are as follows:
* **Ubuntu 22.04 LTS (Jammy)** for the Juju client, Juju controller, and all
cloud nodes (including containers)
* **MAAS 3.2.6**
* **Juju 2.9.38**
* **OpenStack Zed**
Cloud description
-----------------

View File

@ -28,7 +28,7 @@ this via a cloud definition file, such as ``maas-cloud.yaml``:
maas-one:
type: maas
auth-types: [oauth1]
endpoint: http://10.246.112.3:5240/MAAS
We've called the cloud 'maas-one' and its endpoint is based on the IP address
of the MAAS system.
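With the definition file in place, the cloud can be registered with the Juju client. A sketch (it assumes ``maas-cloud.yaml`` is in the current directory; ``--client`` targets the local client, as in Juju 2.9):

```none
juju add-cloud --client maas-one maas-cloud.yaml
```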
@ -79,7 +79,7 @@ call it 'maas-controller':
.. code-block:: none
juju bootstrap --bootstrap-series=jammy --constraints tags=juju maas-one maas-controller
The ``--constraints`` option allows us to effectively select a node in the MAAS
cluster. Recall that we attached a tag of 'juju' to the lower-resourced MAAS
@ -108,7 +108,7 @@ the environment. It should now look very similar to this:
.. code-block:: none
Model Controller Cloud/Region Version SLA Timestamp
openstack maas-controller maas-one/default 2.9.38 unsupported 20:28:32Z
Model "admin/openstack" is empty.

View File

@ -53,8 +53,8 @@ MAAS is also considered to be the sole provider of DHCP and DNS for the network
hosting the MAAS cluster.
The MAAS system's single network interface resides on subnet
**10.246.112.0/21** and the system itself has an assigned IP address of
**10.246.112.3**.
.. attention::
@ -72,8 +72,8 @@ instructions`_ for details:
.. code-block:: none
sudo snap install maas-test-db
sudo snap install maas --channel=3.2/stable
sudo maas init region+rack --maas-url http://10.246.112.3:5240/MAAS --database-uri maas-test-db:///
sudo maas createadmin --username admin --password ubuntu --email admin@example.com --ssh-import lp:<username>
sudo maas apikey --username admin > ~ubuntu/admin-api-key
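The saved API key can then be used to log the MAAS CLI in (a sketch; the profile name 'admin' is arbitrary):

```none
maas login admin http://10.246.112.3:5240/MAAS "$(cat ~ubuntu/admin-api-key)"
```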
@ -107,11 +107,11 @@ MAAS administrator are:
| Password: **ubuntu**
|
In this example, the address of the MAAS system is 10.246.112.3.
The web UI URL then becomes:
**http://10.246.112.3:5240/MAAS**
You will be whisked through an on-boarding process when you access the web UI
for the first time. Recall that we require 22.04 LTS AMD64 images.

View File

@ -13,101 +13,101 @@ installed from the instructions given on the :doc:`Install OpenStack
.. code-block:: console
Model Controller Cloud/Region Version SLA Timestamp
openstack maas-controller maas-one/default 2.9.38 unsupported 18:51:46Z
App Version Status Scale Charm Channel Rev Exposed Message
ceph-mon 17.2.0 active 3 ceph-mon quincy/stable 149 no Unit is ready and clustered
ceph-osd 17.2.0 active 4 ceph-osd quincy/stable 541 no Unit is ready (4 OSD)
ceph-radosgw 17.2.0 active 1 ceph-radosgw quincy/stable 542 no Unit is ready
cinder 21.1.0 active 1 cinder zed/stable 594 no Unit is ready
cinder-ceph 21.1.0 active 1 cinder-ceph zed/stable 513 no Unit is ready
cinder-mysql-router 8.0.32 active 1 mysql-router 8.0/stable 35 no Unit is ready
dashboard-mysql-router 8.0.32 active 1 mysql-router 8.0/stable 35 no Unit is ready
glance 25.0.0 active 1 glance zed/stable 560 no Unit is ready
glance-mysql-router 8.0.32 active 1 mysql-router 8.0/stable 35 no Unit is ready
keystone 22.0.0 active 1 keystone zed/stable 591 no Application Ready
keystone-mysql-router 8.0.32 active 1 mysql-router 8.0/stable 35 no Unit is ready
mysql-innodb-cluster 8.0.32 active 3 mysql-innodb-cluster 8.0/stable 39 no Unit is ready: Mode: R/O, Cluster is ONLINE and can tolerate up to ONE failure.
ncc-mysql-router 8.0.32 active 1 mysql-router 8.0/stable 35 no Unit is ready
neutron-api 21.0.0 active 1 neutron-api zed/stable 546 no Unit is ready
neutron-api-mysql-router 8.0.32 active 1 mysql-router 8.0/stable 35 no Unit is ready
neutron-api-plugin-ovn 21.0.0 active 1 neutron-api-plugin-ovn zed/stable 45 no Unit is ready
nova-cloud-controller 26.1.0 active 1 nova-cloud-controller zed/stable 633 no Unit is ready
nova-compute 26.1.0 active 3 nova-compute zed/stable 626 no Unit is ready
openstack-dashboard 23.0.0 active 1 openstack-dashboard zed/stable 564 no Unit is ready
ovn-central 22.09.0 active 3 ovn-central 22.09/stable 75 no Unit is ready (leader: ovnnb_db, ovnsb_db)
ovn-chassis 22.09.0 active 3 ovn-chassis 22.09/stable 109 no Unit is ready
placement 8.0.0 active 1 placement zed/stable 67 no Unit is ready
placement-mysql-router 8.0.32 active 1 mysql-router 8.0/stable 35 no Unit is ready
rabbitmq-server 3.9.13 active 1 rabbitmq-server 3.9/stable 154 no Unit is ready
vault 1.8.8 active 1 vault 1.8/stable 86 no Unit is ready (active: true, mlock: disabled)
vault-mysql-router 8.0.32 active 1 mysql-router 8.0/stable 35 no Unit is ready
Unit Workload Agent Machine Public address Ports Message
ceph-mon/0 active idle 0/lxd/4 10.246.114.20 Unit is ready and clustered
ceph-mon/1* active idle 1/lxd/4 10.246.114.22 Unit is ready and clustered
ceph-mon/2 active idle 2/lxd/5 10.246.114.21 Unit is ready and clustered
ceph-osd/0 active idle 0 10.246.114.17 Unit is ready (4 OSD)
ceph-osd/1* active idle 1 10.246.114.7 Unit is ready (4 OSD)
ceph-osd/2 active idle 2 10.246.114.11 Unit is ready (4 OSD)
ceph-osd/3 active idle 3 10.246.114.31 Unit is ready (2 OSD)
ceph-radosgw/0* active idle 0/lxd/5 10.246.114.44 80/tcp Unit is ready
cinder/0* active idle 1/lxd/5 10.246.114.43 8776/tcp Unit is ready
cinder-ceph/0* active idle 10.246.114.43 Unit is ready
cinder-mysql-router/0* active idle 10.246.114.43 Unit is ready
glance/0* active idle 3/lxd/3 10.246.114.19 9292/tcp Unit is ready
glance-mysql-router/0* active idle 10.246.114.19 Unit is ready
keystone/0* active idle 0/lxd/3 10.246.114.25 5000/tcp Unit is ready
keystone-mysql-router/0* active idle 10.246.114.25 Unit is ready
mysql-innodb-cluster/3* active idle 0/lxd/1 10.246.114.12 Unit is ready: Mode: R/O, Cluster is ONLINE and can tolerate up to ONE failure.
mysql-innodb-cluster/4 active idle 1/lxd/1 10.246.114.15 Unit is ready: Mode: R/W, Cluster is ONLINE and can tolerate up to ONE failure.
mysql-innodb-cluster/5 active idle 2/lxd/1 10.246.114.14 Unit is ready: Mode: R/O, Cluster is ONLINE and can tolerate up to ONE failure.
neutron-api/0* active idle 1/lxd/3 10.246.114.24 9696/tcp Unit is ready
neutron-api-mysql-router/0* active idle 10.246.114.24 Unit is ready
neutron-api-plugin-ovn/0* active idle 10.246.114.24 Unit is ready
nova-cloud-controller/0* active idle 3/lxd/1 10.246.114.37 8774/tcp,8775/tcp Unit is ready
ncc-mysql-router/0* active idle 10.246.114.37 Unit is ready
nova-compute/0* active idle 1 10.246.114.7 Unit is ready
ovn-chassis/0* active idle 10.246.114.7 Unit is ready
nova-compute/1 active idle 2 10.246.114.11 Unit is ready
ovn-chassis/1 active idle 10.246.114.11 Unit is ready
nova-compute/2 active idle 3 10.246.114.31 Unit is ready
ovn-chassis/2 active idle 10.246.114.31 Unit is ready
openstack-dashboard/0* active idle 2/lxd/4 10.246.114.39 80/tcp,443/tcp Unit is ready
dashboard-mysql-router/0* active idle 10.246.114.39 Unit is ready
ovn-central/0* active idle 0/lxd/2 10.246.114.29 6641/tcp,6642/tcp Unit is ready (leader: ovnnb_db, ovnsb_db)
ovn-central/1 active idle 1/lxd/2 10.246.114.52 6641/tcp,6642/tcp Unit is ready
ovn-central/2 active idle 2/lxd/2 10.246.114.51 6641/tcp,6642/tcp Unit is ready (northd: active)
placement/0* active idle 3/lxd/2 10.246.114.38 8778/tcp Unit is ready
placement-mysql-router/0* active idle 10.246.114.38 Unit is ready
rabbitmq-server/0* active idle 2/lxd/3 10.246.114.26 5672/tcp,15672/tcp Unit is ready
vault/0* active idle 3/lxd/0 10.246.114.28 8200/tcp Unit is ready (active: true, mlock: disabled)
vault-mysql-router/0* active idle 10.246.114.28 Unit is ready
Machine State Address Inst id Series AZ Message
0 started 10.246.114.17 node-laveran jammy default Deployed
0/lxd/1 started 10.246.114.12 juju-57df23-0-lxd-1 jammy default Container started
0/lxd/2 started 10.246.114.29 juju-57df23-0-lxd-2 jammy default Container started
0/lxd/3 started 10.246.114.25 juju-57df23-0-lxd-3 jammy default Container started
0/lxd/4 started 10.246.114.20 juju-57df23-0-lxd-4 jammy default Container started
0/lxd/5 started 10.246.114.44 juju-57df23-0-lxd-5 jammy default Container started
1 started 10.246.114.7 node-mees jammy default Deployed
1/lxd/1 started 10.246.114.15 juju-57df23-1-lxd-1 jammy default Container started
1/lxd/2 started 10.246.114.52 juju-57df23-1-lxd-2 jammy default Container started
1/lxd/3 started 10.246.114.24 juju-57df23-1-lxd-3 jammy default Container started
1/lxd/4 started 10.246.114.22 juju-57df23-1-lxd-4 jammy default Container started
1/lxd/5 started 10.246.114.43 juju-57df23-1-lxd-5 jammy default Container started
2 started 10.246.114.11 node-fontana jammy default Deployed
2/lxd/1 started 10.246.114.14 juju-57df23-2-lxd-1 jammy default Container started
2/lxd/2 started 10.246.114.51 juju-57df23-2-lxd-2 jammy default Container started
2/lxd/3 started 10.246.114.26 juju-57df23-2-lxd-3 jammy default Container started
2/lxd/4 started 10.246.114.39 juju-57df23-2-lxd-4 jammy default Container started
2/lxd/5 started 10.246.114.21 juju-57df23-2-lxd-5 jammy default Container started
3 started 10.246.114.31 node-sparky jammy default Deployed
3/lxd/0 started 10.246.114.28 juju-57df23-3-lxd-0 jammy default Container started
3/lxd/1 started 10.246.114.37 juju-57df23-3-lxd-1 jammy default Container started
3/lxd/2 started 10.246.114.38 juju-57df23-3-lxd-2 jammy default Container started
3/lxd/3 started 10.246.114.19 juju-57df23-3-lxd-3 jammy default Container started
Relation provider Requirer Interface Type Message
ceph-mon:client cinder-ceph:ceph ceph-client regular
@ -155,6 +155,7 @@ installed from the instructions given on the :doc:`Install OpenStack
nova-compute:cloud-compute nova-cloud-controller:cloud-compute nova-compute regular
nova-compute:compute-peer nova-compute:compute-peer nova peer
openstack-dashboard:cluster openstack-dashboard:cluster openstack-dashboard-ha peer
ovn-central:coordinator ovn-central:coordinator coordinator peer
ovn-central:ovsdb ovn-chassis:ovsdb ovsdb regular
ovn-central:ovsdb-cms neutron-api-plugin-ovn:ovsdb-cms ovsdb-cms regular
ovn-central:ovsdb-peer ovn-central:ovsdb-peer ovsdb-cluster peer

View File

@ -12,7 +12,7 @@ to review these pertinent sections of the Juju documentation before continuing:
* `Deploying applications`_
* `Deploying to specific machines`_
* `Managing relations`_ (integrations)
.. TODO
Cloud topology section goes here (modelled on openstack-base README)
@ -29,9 +29,9 @@ The cloud deployment involves two levels of software:
* charm payload (e.g. Keystone service)
A charm's software version (its revision) is expressed via its channel (e.g.
'zed/stable'). Its payload version is auto-configured based on the channel,
but it can be overridden via the ``source`` configuration option (e.g. when an
internal mirror or PPA is needed). See the Charm Guide for more information:
* :doc:`cg:project/charm-delivery` for charm channels
* :doc:`cg:concepts/software-sources` for charm payload
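As an illustration, such an override would appear in a charm's configuration like this (the values are illustrative only; by default no ``source`` needs to be set, since the payload follows the channel):

```yaml
# Hypothetical override: point the charm's payload at an explicit
# software source instead of the default derived from its channel.
ceph-mon:
  source: cloud:jammy-zed
```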
@ -45,20 +45,9 @@ the Charm Guide for more information:
OpenStack release
-----------------
OpenStack Zed will be deployed atop Ubuntu 22.04 LTS (Jammy) cloud nodes. In
order to achieve this, charm channels appropriate for the chosen OpenStack
release will be used (see :doc:`cg:project/charm-delivery`).
See :ref:`cg:perform_the_upgrade` in the Charm Guide for more details on cloud
archive releases and how they are used when upgrading OpenStack.
@ -66,7 +55,7 @@ archive releases and how they are used when upgrading OpenStack.
.. important::
The chosen OpenStack release may impact the installation and configuration
instructions. **This guide assumes that OpenStack Zed is being deployed.**
Installation progress
---------------------
@ -113,14 +102,13 @@ The ceph-osd application is deployed to four nodes with the `ceph-osd`_ charm.
The name of the block devices backing the OSDs is dependent upon the hardware
on the nodes. All possible devices across the nodes should be given as the
value for the ``osd-devices`` option (space-separated). Here, we'll be using
the same devices on each node: ``/dev/sda``, ``/dev/sdb``, ``/dev/sdc``, and
``/dev/sdd``. File ``ceph-osd.yaml`` contains the configuration:
.. code-block:: yaml
ceph-osd:
osd-devices: /dev/sda /dev/sdb /dev/sdc /dev/sdd
To deploy the application we'll make use of the 'compute' tag that we placed on
each of these nodes on the :doc:`Install MAAS <install-maas>` page:
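A deploy command along these lines would follow (a sketch; the node count and tag come from this example's environment, and the channel matches the ceph-osd entry shown in the status output):

```none
juju deploy -n 4 --channel quincy/stable --config ceph-osd.yaml \
    --constraints tags=compute ceph-osd
```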
@ -155,7 +143,6 @@ The nova-compute application is deployed to three nodes with the
enable-resize: true
migration-auth-type: ssh
virt-type: qemu
The nodes must be targeted by machine ID since there are no more free Juju
machines (MAAS nodes) available. This means we're placing multiple services on
@ -163,7 +150,7 @@ our nodes. We've chosen machines 1, 2, and 3. To deploy:
.. code-block:: none
juju deploy -n 3 --to 1,2,3 --channel zed/stable --config nova-compute.yaml nova-compute
.. note::
@ -192,7 +179,7 @@ communication between cloud applications. It will be containerised on machine
.. code-block:: none
juju deploy --to lxd:3 --channel 1.8/stable vault
This is the first application to be joined with the cloud database that was set
up in the previous section. The process is:
@ -225,18 +212,18 @@ status` should look similar to this:
.. code-block:: console
Unit Workload Agent Machine Public address Ports Message
ceph-osd/0 blocked idle 0 10.246.114.17 Missing relation: monitor
ceph-osd/1* blocked idle 1 10.246.114.7 Missing relation: monitor
ceph-osd/2 blocked idle 2 10.246.114.11 Missing relation: monitor
ceph-osd/3 blocked idle 3 10.246.114.31 Missing relation: monitor
mysql-innodb-cluster/3* active idle 0/lxd/1 10.246.114.12 Unit is ready: Mode: R/W, Cluster is ONLINE and can tolerate up to ONE failure.
mysql-innodb-cluster/4 active idle 1/lxd/1 10.246.114.15 Unit is ready: Mode: R/O, Cluster is ONLINE and can tolerate up to ONE failure.
mysql-innodb-cluster/5 active idle 2/lxd/1 10.246.114.14 Unit is ready: Mode: R/O, Cluster is ONLINE and can tolerate up to ONE failure.
nova-compute/0* blocked idle 1 10.246.114.7 Missing relations: messaging, image
nova-compute/1 blocked idle 2 10.246.114.11 Missing relations: messaging, image
nova-compute/2 blocked idle 3 10.246.114.31 Missing relations: image, messaging
vault/0* active idle 3/lxd/0 10.246.114.28 8200/tcp Unit is ready (active: true, mlock: disabled)
vault-mysql-router/0* active idle 10.246.114.28 Unit is ready
Cloud applications are TLS-enabled via the ``vault:certificates`` relation.
Below we start with the cloud database. Although the latter has a self-signed
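In sketch form, enabling TLS for an application is a single relation against Vault, shown here for the cloud database:

.. code-block:: none

   juju add-relation mysql-innodb-cluster:certificates vault:certificates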
Neutron networking is implemented with four applications:

* neutron-api
* neutron-api-plugin-ovn (subordinate)
* ovn-central
* ovn-chassis (subordinate)
File ``neutron.yaml`` contains the configuration necessary (only two of them
require configuration):
.. code-block:: yaml
neutron-api:
neutron-security-groups: true
flat-network-providers: physnet1
The ``bridge-interface-mappings`` setting impacts the OVN Chassis and refers to
a mapping of OVS bridge to network interface. As described in the :ref:`Create
OVS bridge <ovs_bridge>` section on the :doc:`Install MAAS <install-maas>`
page, for this example it is 'br-ex:enp1s0'.
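As a sketch, once ovn-chassis is deployed the same mapping can be inspected or changed at runtime with ``juju config``, rather than through ``neutron.yaml``:

.. code-block:: none

   juju config ovn-chassis bridge-interface-mappings='br-ex:enp1s0'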
.. note::
They will be containerised on machines 0, 1, and 2. To deploy:
.. code-block:: none
juju deploy -n 3 --to lxd:0,lxd:1,lxd:2 --channel 22.09/stable ovn-central
The neutron-api application will be containerised on machine 1:
.. code-block:: none
juju deploy --to lxd:1 --channel zed/stable --config neutron.yaml neutron-api
Deploy the subordinate charm applications:
.. code-block:: none
juju deploy --channel zed/stable neutron-api-plugin-ovn
juju deploy --channel 22.09/stable --config neutron.yaml ovn-chassis
Add the necessary relations:
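As an illustrative sketch, the relations typically cover the OVN plumbing plus the ``vault:certificates`` joins; the endpoint names below are taken from the respective charms and should be verified against them:

.. code-block:: none

   juju add-relation neutron-api-plugin-ovn:neutron-plugin neutron-api:neutron-plugin-api-subordinate
   juju add-relation neutron-api-plugin-ovn:ovsdb-cms ovn-central:ovsdb-cms
   juju add-relation ovn-chassis:ovsdb ovn-central:ovsdb
   juju add-relation ovn-chassis:nova-compute nova-compute:neutron-plugin
   juju add-relation neutron-api:certificates vault:certificates
   juju add-relation ovn-central:certificates vault:certificates
   juju add-relation ovn-chassis:certificates vault:certificates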
The keystone application will be containerised on machine 0 with the keystone charm. To deploy:
.. code-block:: none
juju deploy --to lxd:0 --channel zed/stable keystone
Join keystone to the cloud database:
The output of ``juju status`` should look similar to this:
.. code-block:: console
Unit Workload Agent Machine Public address Ports Message
ceph-osd/0 blocked idle 0 10.246.114.17 Missing relation: monitor
ceph-osd/1* blocked idle 1 10.246.114.7 Missing relation: monitor
ceph-osd/2 blocked idle 2 10.246.114.11 Missing relation: monitor
ceph-osd/3 blocked idle 3 10.246.114.31 Missing relation: monitor
keystone/0* active idle 0/lxd/3 10.246.114.25 5000/tcp Unit is ready
keystone-mysql-router/0* active idle 10.246.114.25 Unit is ready
mysql-innodb-cluster/3* active idle 0/lxd/1 10.246.114.12 Unit is ready: Mode: R/O, Cluster is ONLINE and can tolerate up to ONE failure.
mysql-innodb-cluster/4 active idle 1/lxd/1 10.246.114.15 Unit is ready: Mode: R/W, Cluster is ONLINE and can tolerate up to ONE failure.
mysql-innodb-cluster/5 active idle 2/lxd/1 10.246.114.14 Unit is ready: Mode: R/O, Cluster is ONLINE and can tolerate up to ONE failure.
neutron-api/0* active idle 1/lxd/3 10.246.114.24 9696/tcp Unit is ready
neutron-api-mysql-router/0* active idle 10.246.114.24 Unit is ready
neutron-api-plugin-ovn/0* active idle 10.246.114.24 Unit is ready
nova-compute/0* blocked idle 1 10.246.114.7 Missing relations: image
ovn-chassis/0* active idle 10.246.114.7 Unit is ready
nova-compute/1 blocked idle 2 10.246.114.11 Missing relations: image
ovn-chassis/1 active idle 10.246.114.11 Unit is ready
nova-compute/2 blocked idle 3 10.246.114.31 Missing relations: image
ovn-chassis/2 active idle 10.246.114.31 Unit is ready
ovn-central/0* active idle 0/lxd/2 10.246.114.29 6641/tcp,6642/tcp Unit is ready (leader: ovnnb_db, ovnsb_db)
ovn-central/1 active idle 1/lxd/2 10.246.114.52 6641/tcp,6642/tcp Unit is ready
ovn-central/2 active idle 2/lxd/2 10.246.114.51 6641/tcp,6642/tcp Unit is ready (northd: active)
rabbitmq-server/0* active idle 2/lxd/3 10.246.114.26 5672/tcp,15672/tcp Unit is ready
vault/0* active idle 3/lxd/0 10.246.114.28 8200/tcp Unit is ready (active: true, mlock: disabled)
vault-mysql-router/0* active idle 10.246.114.28 Unit is ready
Nova cloud controller
~~~~~~~~~~~~~~~~~~~~~
The nova-cloud-controller application, which includes the nova-scheduler, nova-api, and nova-conductor services, will be containerised on machine 3 with the nova-cloud-controller charm. File ``ncc.yaml`` contains the configuration:
nova-cloud-controller:
network-manager: Neutron
To deploy:
.. code-block:: none
juju deploy --to lxd:3 --channel yoga/stable --config ncc.yaml nova-cloud-controller
juju deploy --to lxd:3 --channel zed/stable --config ncc.yaml nova-cloud-controller
Join nova-cloud-controller to the cloud database:
The placement application will be containerised on machine 3 with the placement charm. To deploy:
.. code-block:: none
juju deploy --to lxd:3 --channel zed/stable placement
Join placement to the cloud database:
The openstack-dashboard application (Horizon) will be containerised on machine 2 with the openstack-dashboard charm. To deploy:
.. code-block:: none
juju deploy --to lxd:2 --channel zed/stable openstack-dashboard
Join openstack-dashboard to the cloud database:
The glance application will be containerised on machine 3 with the glance charm. To deploy:
.. code-block:: none
juju deploy --to lxd:3 --channel zed/stable glance
Join glance to the cloud database:
The output of ``juju status`` should look similar to this:
.. code-block:: console
Unit Workload Agent Machine Public address Ports Message
ceph-osd/0 blocked idle 0 10.246.114.17 Missing relation: monitor
ceph-osd/1* blocked idle 1 10.246.114.7 Missing relation: monitor
ceph-osd/2 blocked idle 2 10.246.114.11 Missing relation: monitor
ceph-osd/3 blocked idle 3 10.246.114.31 Missing relation: monitor
glance/0* active idle 3/lxd/3 10.246.114.19 9292/tcp Unit is ready
glance-mysql-router/0* active idle 10.246.114.19 Unit is ready
keystone/0* active idle 0/lxd/3 10.246.114.25 5000/tcp Unit is ready
keystone-mysql-router/0* active idle 10.246.114.25 Unit is ready
mysql-innodb-cluster/3* active idle 0/lxd/1 10.246.114.12 Unit is ready: Mode: R/O, Cluster is ONLINE and can tolerate up to ONE failure.
mysql-innodb-cluster/4 active idle 1/lxd/1 10.246.114.15 Unit is ready: Mode: R/W, Cluster is ONLINE and can tolerate up to ONE failure.
mysql-innodb-cluster/5 active idle 2/lxd/1 10.246.114.14 Unit is ready: Mode: R/O, Cluster is ONLINE and can tolerate up to ONE failure.
neutron-api/0* active idle 1/lxd/3 10.246.114.24 9696/tcp Unit is ready
neutron-api-mysql-router/0* active idle 10.246.114.24 Unit is ready
neutron-api-plugin-ovn/0* active idle 10.246.114.24 Unit is ready
nova-cloud-controller/0* active idle 3/lxd/1 10.246.114.37 8774/tcp,8775/tcp Unit is ready
ncc-mysql-router/0* active idle 10.246.114.37 Unit is ready
nova-compute/0* active idle 1 10.246.114.7 Unit is ready
ovn-chassis/0* active idle 10.246.114.7 Unit is ready
nova-compute/1 active idle 2 10.246.114.11 Unit is ready
ovn-chassis/1 active idle 10.246.114.11 Unit is ready
nova-compute/2 active idle 3 10.246.114.31 Unit is ready
ovn-chassis/2 active idle 10.246.114.31 Unit is ready
openstack-dashboard/0* active idle 2/lxd/4 10.246.114.39 80/tcp,443/tcp Unit is ready
dashboard-mysql-router/0* active idle 10.246.114.39 Unit is ready
ovn-central/0* active idle 0/lxd/2 10.246.114.29 6641/tcp,6642/tcp Unit is ready (leader: ovnnb_db, ovnsb_db)
ovn-central/1 active idle 1/lxd/2 10.246.114.52 6641/tcp,6642/tcp Unit is ready
ovn-central/2 active idle 2/lxd/2 10.246.114.51 6641/tcp,6642/tcp Unit is ready (northd: active)
placement/0* active idle 3/lxd/2 10.246.114.38 8778/tcp Unit is ready
placement-mysql-router/0* active idle 10.246.114.38 Unit is ready
rabbitmq-server/0* active idle 2/lxd/3 10.246.114.26 5672/tcp,15672/tcp Unit is ready
vault/0* active idle 3/lxd/0 10.246.114.28 8200/tcp Unit is ready (active: true, mlock: disabled)
vault-mysql-router/0* active idle 10.246.114.28 Unit is ready
Ceph monitor
~~~~~~~~~~~~
The ceph-mon application will be containerised on machines 0, 1, and 2 with the ceph-mon charm. File ``ceph-mon.yaml`` contains the configuration:
ceph-mon:
expected-osd-count: 4
monitor-count: 3
The above informs the MON cluster that it consists of three nodes and that
it should expect at least four OSDs (disks).
To deploy:
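A hedged sketch, assuming the configuration above is saved as ``ceph-mon.yaml`` and the ``quincy/stable`` channel that pairs with OpenStack Zed:

.. code-block:: none

   juju deploy -n 3 --to lxd:0,lxd:1,lxd:2 --channel quincy/stable --config ceph-mon.yaml ceph-mon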
The cinder application will be containerised on machine 1 with the cinder charm. File ``cinder.yaml`` contains the configuration:
cinder:
block-device: None
glance-api-version: 2
To deploy:
.. code-block:: none
juju deploy --to lxd:1 --channel zed/stable --config cinder.yaml cinder
Join cinder to the cloud database:
Cinder is backed by Ceph rather than by local storage (hence ``block-device:
None`` in the configuration file). This will be implemented via the
cinder-ceph subordinate charm. To deploy:
.. code-block:: none
juju deploy --channel zed/stable cinder-ceph
Three relations need to be added:
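In sketch form, the endpoint names below follow the cinder-ceph charm's interfaces and should be verified against it:

.. code-block:: none

   juju add-relation cinder-ceph:storage-backend cinder:storage-backend
   juju add-relation cinder-ceph:ceph ceph-mon:client
   juju add-relation cinder-ceph:ceph-access nova-compute:ceph-access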
A single relation is needed:
juju add-relation ceph-radosgw:mon ceph-mon:radosgw
.. COMMENT (still: Feb 14, 2023)
At the time of writing a jammy-aware ntp charm was not available.
NTP
~~~
Obtain the address in this way:
juju status --format=yaml openstack-dashboard | grep public-address | awk '{print $2}' | head -1
In this example, the address is '10.246.114.39'.
The password can be queried from Keystone:
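A common way to do this, assuming the admin password is held by the keystone leader unit:

.. code-block:: none

   juju run --unit keystone/leader leader-get admin_passwd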
The dashboard URL then becomes:
**http://10.246.114.39/horizon**
The final credentials needed to log in are: