Merge "Update guide to Jammy Yoga"
commit 2fa962231e
@@ -25,7 +25,7 @@ command line. Install them now:

 .. code-block:: none

-   sudo snap install openstackclients --classic
+   sudo snap install openstackclients

 Create the admin user environment
 ---------------------------------
@@ -65,7 +65,7 @@ Sample output:

    OS_REGION_NAME=RegionOne
    OS_AUTH_VERSION=3
    OS_CACERT=/home/ubuntu/snap/openstackclients/common/root-ca.crt
-   OS_AUTH_URL=https://10.0.0.170:5000/v3
+   OS_AUTH_URL=https://10.0.0.174:5000/v3
    OS_PROJECT_DOMAIN_NAME=admin_domain
    OS_AUTH_PROTOCOL=https
    OS_USERNAME=admin
@@ -97,13 +97,12 @@ The output will look similar to this:

    +----------------------------------+-----------+--------------+--------------+---------+-----------+------------------------------------------+
    | ID                               | Region    | Service Name | Service Type | Enabled | Interface | URL                                      |
    +----------------------------------+-----------+--------------+--------------+---------+-----------+------------------------------------------+
-   | 12011a63a8e24e2290986cf7d8c285db | RegionOne | cinderv3     | volumev3     | True    | admin     | https://10.0.0.179:8776/v3/$(tenant_id)s |
-   | 17a66b67744c42beb20135dca647a9a4 | RegionOne | keystone     | identity     | True    | admin     | https://10.0.0.170:35357/v3              |
-   | 296755b4627641379fd43095c5fab3ba | RegionOne | nova         | compute      | True    | admin     | https://10.0.0.172:8774/v2.1             |
-   | 682fd715c05f492fb0abc08f56e25439 | RegionOne | placement    | placement    | True    | admin     | https://10.0.0.173:8778                  |
-   | 7b20063d208c40aa9d3e3d1152259868 | RegionOne | neutron      | network      | True    | admin     | https://10.0.0.169:9696                  |
-   | a613af1a0d8349ee9329e1230e76b764 | RegionOne | cinderv2     | volumev2     | True    | admin     | https://10.0.0.179:8776/v2/$(tenant_id)s |
-   | b4fe417933704e8b86cfbca91811fcbf | RegionOne | glance       | image        | True    | admin     | https://10.0.0.175:9292                  |
+   | 153cac31650f4c3db2d4ed38cb21af5d | RegionOne | nova         | compute      | True    | admin     | https://10.0.0.176:8774/v2.1             |
+   | 163ea3aef1cb4e2cab7900a092437b8e | RegionOne | neutron      | network      | True    | admin     | https://10.0.0.173:9696                  |
+   | 2ae599431cf641618da754446c827983 | RegionOne | keystone     | identity     | True    | admin     | https://10.0.0.174:35357/v3              |
+   | 42befdb50fd84719a7e1c1f60d5ead42 | RegionOne | cinderv3     | volumev3     | True    | admin     | https://10.0.0.183:8776/v3/$(tenant_id)s |
+   | d73168f18aba40efa152e304249d95ab | RegionOne | placement    | placement    | True    | admin     | https://10.0.0.177:8778                  |
+   | f63768a3b71f415680b45835832b7860 | RegionOne | glance       | image        | True    | admin     | https://10.0.0.179:9292                  |
    +----------------------------------+-----------+--------------+--------------+---------+-----------+------------------------------------------+

 If the endpoints aren't visible, it's likely your environment variables aren't
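As a side note, each catalog entry above is a plain `scheme://host:port/path` URL. A quick way to sanity-check one (for instance, that TLS is in use once Vault has issued certificates) is with Python's standard library; this sketch uses one of the sample endpoints shown above:

```python
from urllib.parse import urlsplit

# One of the sample admin endpoints from the catalog above.
url = "https://10.0.0.174:35357/v3"
parts = urlsplit(url)

# scheme should be 'https' when the cloud's APIs are TLS-protected
print(parts.scheme, parts.hostname, parts.port)
```

The same check works for any of the listed endpoints; a plain `http` scheme would suggest certificates have not been issued yet.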
@@ -120,20 +119,22 @@ Create an image and flavor
 ~~~~~~~~~~~~~~~~~~~~~~~~~~

 Import a boot image into Glance to create server instances with. Here we import
-a Focal amd64 image:
+a Jammy amd64 image:

 .. code-block:: none

-   curl http://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64.img \
-      --output ~/cloud-images/focal-amd64.img
+   mkdir ~/cloud-images
+
+   curl http://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img \
+      --output ~/cloud-images/jammy-amd64.img

-Now import the image and call it 'focal-amd64':
+Now import the image and call it 'jammy-amd64':

 .. code-block:: none

    openstack image create --public --container-format bare \
-      --disk-format qcow2 --file ~/cloud-images/focal-amd64.img \
-      focal-amd64
+      --disk-format qcow2 --file ~/cloud-images/jammy-amd64.img \
+      jammy-amd64

 Create at least one flavor to define a hardware profile for new instances. Here
 we create one called 'm1.small':
@@ -232,7 +233,7 @@ The contents of the file, say ``project1-rc``, will therefore look like this

 .. code-block:: ini

-   export OS_AUTH_URL=https://10.0.0.170:5000/v3
+   export OS_AUTH_URL=https://10.0.0.174:5000/v3
    export OS_USER_DOMAIN_NAME=domain1
    export OS_USERNAME=user1
    export OS_PROJECT_DOMAIN_NAME=domain1
@@ -264,7 +265,7 @@ Perform a cloud query to ensure the user environment is functioning correctly:

    +--------------------------------------+-------------+--------+
    | ID                                   | Name        | Status |
    +--------------------------------------+-------------+--------+
-   | 82517c74-1226-4dab-8a6b-59b4fe07f681 | focal-amd64 | active |
+   | 82517c74-1226-4dab-8a6b-59b4fe07f681 | jammy-amd64 | active |
    +--------------------------------------+-------------+--------+

 The image that was previously imported by the admin user should be returned.
@@ -277,15 +278,15 @@ project-specific network with a private subnet. We'll also need a router to
 link this network to the public network created earlier.

 The non-admin user now creates a private internal network called 'user1_net'
-and an accompanying subnet called 'user1_subnet' (the DNS server is the MAAS
-server at 10.0.0.2):
+and an accompanying subnet called 'user1_subnet' (here the DNS server is the
+MAAS server at 10.0.0.2, but adjust to local conditions):

 .. code-block:: none

    openstack network create --internal user1_net

    openstack subnet create --network user1_net --dns-nameserver 10.0.0.2 \
-      --gateway 192.168.0.1 --subnet-range 192.168.0/24 \
+      --subnet-range 192.168.0/24 \
       --allocation-pool start=192.168.0.10,end=192.168.0.199 \
       user1_subnet

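A quick way to verify that the subnet arguments above are mutually consistent is Python's `ipaddress` module. This is a sketch only; it assumes the shorthand `192.168.0/24` expands to `192.168.0.0/24` and that the MAAS server's network is separate from the tenant subnet:

```python
import ipaddress

# Values from the 'user1_subnet' command above
# (the 192.168.0/24 shorthand expands to 192.168.0.0/24).
subnet = ipaddress.ip_network("192.168.0.0/24")
pool_start = ipaddress.ip_address("192.168.0.10")
pool_end = ipaddress.ip_address("192.168.0.199")
dns = ipaddress.ip_address("10.0.0.2")

# The allocation pool must sit inside the subnet, in order.
assert pool_start in subnet and pool_end in subnet and pool_start < pool_end
# The MAAS DNS server lives outside the tenant subnet (assumption: it is
# on the external 10.0.0.0/24 network described earlier in the guide).
assert dns not in subnet

print("allocation pool fits inside", subnet)
```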
@@ -309,6 +310,8 @@ passphraseless keypair (remove the ``-N`` option to avoid that):

 .. code-block:: none

+   mkdir ~/cloud-keys
+
    ssh-keygen -q -N '' -f ~/cloud-keys/user1-key

 To import a keypair:
@@ -329,20 +332,20 @@ We do the latter by creating a group called 'Allow_SSH':
 Create and access an instance
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

-Create a Focal amd64 instance called 'focal-1':
+Create a Jammy amd64 instance called 'jammy-1':

 .. code-block:: none

-   openstack server create --image focal-amd64 --flavor m1.small \
+   openstack server create --image jammy-amd64 --flavor m1.small \
      --key-name user1 --network user1_net --security-group Allow_SSH \
-     focal-1
+     jammy-1

 Request and assign a floating IP address to the new instance:

 .. code-block:: none

    FLOATING_IP=$(openstack floating ip create -f value -c floating_ip_address ext_net)
-   openstack server add floating ip focal-1 $FLOATING_IP
+   openstack server add floating ip jammy-1 $FLOATING_IP

 Ask for a listing of all instances within the context of the current project
 ('project1'):
@@ -358,7 +361,7 @@ Sample output:

    +--------------------------------------+---------+--------+-------------------------------------+-------------+----------+
    | ID                                   | Name    | Status | Networks                            | Image       | Flavor   |
    +--------------------------------------+---------+--------+-------------------------------------+-------------+----------+
-   | 687b96d0-ab22-459b-935b-a9d0b7e9964c | focal-1 | ACTIVE | user1_net=192.168.0.154, 10.0.0.187 | focal-amd64 | m1.small |
+   | 687b96d0-ab22-459b-935b-a9d0b7e9964c | jammy-1 | ACTIVE | user1_net=192.168.0.154, 10.0.0.187 | jammy-amd64 | m1.small |
    +--------------------------------------+---------+--------+-------------------------------------+-------------+----------+

 The first address listed is in the private network and the second one is in the
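The two addresses reported for the instance above can be told apart programmatically: the fixed address belongs to the tenant subnet and the floating address to the external network. A sketch with Python's `ipaddress` (assuming, as earlier in the guide, a 192.168.0.0/24 tenant subnet and a 10.0.0.0/24 external network):

```python
import ipaddress

# The two addresses reported for the instance above.
fixed = ipaddress.ip_address("192.168.0.154")
floating = ipaddress.ip_address("10.0.0.187")

# Assumed network ranges from earlier steps in this guide.
tenant_net = ipaddress.ip_network("192.168.0.0/24")    # user1_subnet
external_net = ipaddress.ip_network("10.0.0.0/24")     # ext_net (assumption)

assert fixed in tenant_net and floating in external_net
print("fixed:", fixed, "floating:", floating)
```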
@@ -368,7 +371,7 @@ You can monitor the booting of the instance with this command:

 .. code-block:: none

-   openstack console log show focal-1
+   openstack console log show jammy-1

 The instance is ready when the output contains:

@@ -377,9 +380,9 @@ The instance is ready when the output contains:
    .
    .
    .
-   Ubuntu 20.04.3 LTS focal-1 ttyS0
+   Ubuntu 22.04 LTS jammy-1 ttyS0

-   focal-1 login:
+   jammy-1 login:

 Connect to the instance in this way:
@@ -392,10 +395,14 @@ Next steps

 You now have a functional OpenStack cloud managed by MAAS-backed Juju.

-Go on to read the many Charmed OpenStack topics in this guide or consider the
-`OpenStack Administrator Guides`_ for upstream OpenStack administrative help.
+As next steps, consider browsing these documentation sources:
+
+* `OpenStack Charm Guide`_: the primary source of information for OpenStack
+  charms
+* `OpenStack Administrator Guides`_: upstream OpenStack administrative help

 .. LINKS
 .. _openstack-bundles: https://github.com/openstack-charmers/openstack-bundles
 .. _Reserved IP range: https://maas.io/docs/concepts-and-terms#heading--ip-ranges
+.. _OpenStack Charm Guide: https://docs.openstack.org/charm-guide
 .. _OpenStack Administrator Guides: http://docs.openstack.org/user-guide-admin/content
@@ -22,22 +22,22 @@ Add MAAS to Juju
 Add the MAAS cluster so Juju will be able to manage it as a cloud. We'll do
 this via a cloud definition file, such as ``maas-cloud.yaml``:

-.. code-block:: ini
+.. code-block:: yaml

    clouds:
-     mymaas:
+     maas-one:
       type: maas
       auth-types: [oauth1]
       endpoint: http://10.0.0.2:5240/MAAS

-We've called the cloud 'mymaas' and its endpoint is based on the IP address of
-the MAAS system.
+We've called the cloud 'maas-one' and its endpoint is based on the IP address
+of the MAAS system.

 The cloud is added in this way:

 .. code-block:: none

-   juju add-cloud --client -f maas-cloud.yaml mymaas
+   juju add-cloud --client -f maas-cloud.yaml maas-one

 View the updated list of clouds known to the current Juju client with the
 :command:`juju clouds --client` command.
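For reference, the cloud definition file above is plain YAML whose keys nest as `clouds → <cloud name> → properties`. The equivalent data structure, mirrored as a Python dict (a sketch only, using the same names and endpoint as the file above):

```python
# Mirror of maas-cloud.yaml as a nested dict, for illustration.
maas_cloud = {
    "clouds": {
        "maas-one": {
            "type": "maas",
            "auth-types": ["oauth1"],
            "endpoint": "http://10.0.0.2:5240/MAAS",
        }
    }
}

cloud = maas_cloud["clouds"]["maas-one"]
print(cloud["type"], cloud["endpoint"])
```

The cloud name key (`maas-one` here) is what gets passed to `juju add-cloud`.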
@@ -48,15 +48,15 @@ Add the MAAS credentials
 Add the MAAS credentials so Juju can interact with the newly added cloud.
 We'll again use a file to import our information, such as ``maas-creds.yaml``:

-.. code-block:: ini
+.. code-block:: yaml

    credentials:
-     mymaas:
+     maas-one:
       anyuser:
        auth-type: oauth1
        maas-oauth: LGJ8svffZZ5kSdeA8E:9kVM7jJpHGG6J9apk3:KE65tLnjpPuqVHZ6vb97T8VWfVB9tM3j

-We've included the name of the cloud 'mymaas' and a new user 'anyuser'. The
+We've included the name of the cloud 'maas-one' and a new user 'anyuser'. The
 long key is the MAAS API key for the MAAS 'admin' user. This key was placed in
 file ``~/admin-api-key`` on the MAAS system during the :ref:`Install MAAS
 <install_maas>` step on the previous page. It can also be obtained from the
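The long `maas-oauth` value above is a MAAS API key, which is three colon-separated OAuth1 components (consumer key, token key, token secret). A small sketch splitting the sample key from the file above into its parts:

```python
# The sample MAAS API key from maas-creds.yaml above.
api_key = "LGJ8svffZZ5kSdeA8E:9kVM7jJpHGG6J9apk3:KE65tLnjpPuqVHZ6vb97T8VWfVB9tM3j"

# MAAS API keys are <consumer key>:<token key>:<token secret>.
consumer_key, token_key, token_secret = api_key.split(":")
print(consumer_key, token_key, sep="\n")
```

Juju consumes the whole key as one string; the split is only relevant when talking to the MAAS API directly with an OAuth1 client.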
@@ -66,7 +66,7 @@ The credentials are added in this way:

 .. code-block:: none

-   juju add-credential --client -f maas-creds.yaml mymaas
+   juju add-credential --client -f maas-creds.yaml maas-one

 View the updated list of credentials known to the current Juju client with the
 :command:`juju credentials --client --show-secrets --format yaml` command.
@@ -74,12 +74,12 @@ View the updated list of credentials known to the current Juju client with the
 Create the Juju controller
 --------------------------

-Create the controller (using the 'focal' series) for the 'mymaas' cloud, and
+Create the controller (using the 'jammy' series) for the 'maas-one' cloud, and
 call it 'maas-controller':

 .. code-block:: none

-   juju bootstrap --bootstrap-series=focal --constraints tags=juju mymaas maas-controller
+   juju bootstrap --bootstrap-series=jammy --constraints tags=juju maas-one maas-controller

 The ``--constraints`` option allows us to effectively select a node in the MAAS
 cluster. Recall that we attached a tag of 'juju' to the lower-resourced MAAS
@@ -96,11 +96,17 @@ Create the model

 The OpenStack deployment will be placed in its own Juju model for
 organisational purposes. Create the model 'openstack' and specify our desired
-series of 'focal':
+series of 'jammy':

 .. code-block:: none

-   juju add-model --config default-series=focal openstack
+   juju add-model --config default-series=jammy openstack

+.. note::
+
+   Due to Juju issue `LP #1966664`_, a model's default series is not honoured.
+   Consequently, the series will be explicitly requested during the deployment
+   of each principal application.
+
 The output of the :command:`juju status` command summarises the Juju aspect of
 the environment. It should now look very similar to this:
@@ -108,7 +114,7 @@ the environment. It should now look very similar to this:
 .. code-block:: none

    Model      Controller       Cloud/Region      Version  SLA          Timestamp
-   openstack  maas-controller  mymaas/default    2.9.15   unsupported  15:56:13Z
+   openstack  maas-controller  maas-one/default  2.9.29   unsupported  20:28:32Z

    Model "admin/openstack" is empty.
@@ -122,3 +128,4 @@ the OpenStack applications and adding relations between them. Go to
 .. LINKS
 .. _Juju: https://juju.is
 .. _MAAS: https://maas.io
+.. _LP #1966664: https://bugs.launchpad.net/juju/+bug/1966664
@@ -59,7 +59,7 @@ The MAAS system's single network interface resides on subnet

 .. attention::

-   The MAAS-provisioned nodes rely upon Focal AMD64 images provided by MAAS.
+   The MAAS-provisioned nodes rely upon Jammy AMD64 images provided by MAAS.

 .. _install_maas:
@@ -73,7 +73,7 @@ instructions`_ for details:
 .. code-block:: none

    sudo snap install maas-test-db
-   sudo snap install maas --channel=3.0/stable
+   sudo snap install maas --channel=3.1/stable
    sudo maas init region+rack --maas-url http://10.0.0.2:5240/MAAS --database-uri maas-test-db:///
    sudo maas createadmin --username admin --password ubuntu --email admin@example.com --ssh-import lp:<username>
    sudo maas apikey --username admin > ~ubuntu/admin-api-key
@@ -115,7 +115,7 @@ The web UI URL then becomes:
 **http://10.0.0.2:5240/MAAS**

 You will be whisked through an on-boarding process when you access the web UI
-for the first time. Recall that we require 20.04 LTS AMD64 images.
+for the first time. Recall that we require 22.04 LTS AMD64 images.

 Enable DHCP
 ~~~~~~~~~~~
@@ -13,106 +13,101 @@ installed from the instructions given on the :doc:`Install OpenStack

 .. code-block:: console

    Model      Controller       Cloud/Region      Version  SLA          Timestamp
-   openstack  maas-controller  mymaas/default    2.9.15   unsupported  22:00:48Z
+   openstack  maas-controller  maas-one/default  2.9.29   unsupported  18:51:46Z

-   App                       Version          Status  Scale  Charm                   Store       Channel  Rev  OS      Message
-   ceph-mon                  16.2.6           active      3  ceph-mon                charmstore  stable   482  ubuntu  Unit is ready and clustered
-   ceph-osd                  16.2.6           active      4  ceph-osd                charmstore  stable   502  ubuntu  Unit is ready (1 OSD)
-   ceph-radosgw              16.2.6           active      1  ceph-radosgw            charmstore  stable   412  ubuntu  Unit is ready
-   cinder                    19.0.0           active      1  cinder                  charmstore  stable   448  ubuntu  Unit is ready
-   cinder-ceph               19.0.0           active      1  cinder-ceph             charmstore  stable   360  ubuntu  Unit is ready
-   cinder-mysql-router       8.0.26           active      1  mysql-router            charmstore  stable    60  ubuntu  Unit is ready
-   dashboard-mysql-router    8.0.26           active      1  mysql-router            charmstore  stable    60  ubuntu  Unit is ready
-   glance                    23.0.0           active      1  glance                  charmstore  stable   473  ubuntu  Unit is ready
-   glance-mysql-router       8.0.26           active      1  mysql-router            charmstore  stable    60  ubuntu  Unit is ready
-   keystone                  20.0.0           active      1  keystone                charmstore  stable   565  ubuntu  Application Ready
-   keystone-mysql-router     8.0.26           active      1  mysql-router            charmstore  stable    60  ubuntu  Unit is ready
-   mysql-innodb-cluster      8.0.26           active      3  mysql-innodb-cluster    charmstore  stable    88  ubuntu  Unit is ready: Mode: R/O, Cluster is ONLINE and can tolerate up to ONE failure.
-   ncc-mysql-router          8.0.26           active      1  mysql-router            charmstore  stable    60  ubuntu  Unit is ready
-   neutron-api               19.0.0           active      1  neutron-api             charmstore  stable   485  ubuntu  Unit is ready
-   neutron-api-mysql-router  8.0.26           active      1  mysql-router            charmstore  stable    60  ubuntu  Unit is ready
-   neutron-api-plugin-ovn    19.0.0           active      1  neutron-api-plugin-ovn  charmstore  stable    46  ubuntu  Unit is ready
-   nova-cloud-controller     24.0.0           active      1  nova-cloud-controller   charmstore  stable   552  ubuntu  Unit is ready
-   nova-compute              24.0.0           active      3  nova-compute            charmstore  stable   577  ubuntu  Unit is ready
-   ntp                       3.5              active      4  ntp                     charmhub    stable    47  ubuntu  chrony: Ready
-   openstack-dashboard       20.1.0           active      1  openstack-dashboard     charmstore  stable   513  ubuntu  Unit is ready
-   ovn-central               21.09.0~git2...  active      3  ovn-central             charmstore  stable    68  ubuntu  Unit is ready
-   ovn-chassis               21.09.0~git2...  active      3  ovn-chassis             charmstore  stable    86  ubuntu  Unit is ready
-   placement                 6.0.0            active      1  placement               charmstore  stable    64  ubuntu  Unit is ready
-   placement-mysql-router    8.0.26           active      1  mysql-router            charmstore  stable    60  ubuntu  Unit is ready
-   rabbitmq-server           3.8.2            active      1  rabbitmq-server         charmstore  stable   440  ubuntu  Unit is ready
-   vault                     1.5.9            active      1  vault                   charmstore  stable   153  ubuntu  Unit is ready (active: true, mlock: disabled)
-   vault-mysql-router        8.0.26           active      1  mysql-router            charmstore  stable    60  ubuntu  Unit is ready
+   App                       Version  Status  Scale  Charm                   Channel        Rev  Exposed  Message
+   ceph-mon                  17.1.0   active      3  ceph-mon                quincy/stable  106  no       Unit is ready and clustered
+   ceph-osd                  17.1.0   active      4  ceph-osd                quincy/stable  534  no       Unit is ready (2 OSD)
+   ceph-radosgw              17.1.0   active      1  ceph-radosgw            quincy/stable  526  no       Unit is ready
+   cinder                    20.0.0   active      1  cinder                  yoga/stable    554  no       Unit is ready
+   cinder-ceph               20.0.0   active      1  cinder-ceph             yoga/stable    502  no       Unit is ready
+   cinder-mysql-router       8.0.29   active      1  mysql-router            8.0/stable      26  no       Unit is ready
+   dashboard-mysql-router    8.0.29   active      1  mysql-router            8.0/stable      26  no       Unit is ready
+   glance                    24.0.0   active      1  glance                  yoga/stable    544  no       Unit is ready
+   glance-mysql-router       8.0.29   active      1  mysql-router            8.0/stable      26  no       Unit is ready
+   keystone                  21.0.0   active      1  keystone                yoga/stable    568  no       Application Ready
+   keystone-mysql-router     8.0.29   active      1  mysql-router            8.0/stable      26  no       Unit is ready
+   mysql-innodb-cluster      8.0.29   active      3  mysql-innodb-cluster    8.0/stable      24  no       Unit is ready: Mode: R/O, Cluster is ONLINE and can tolerate up to ONE failure.
+   ncc-mysql-router          8.0.29   active      1  mysql-router            8.0/stable      26  no       Unit is ready
+   neutron-api               20.0.0   active      1  neutron-api             yoga/stable    526  no       Unit is ready
+   neutron-api-mysql-router  8.0.29   active      1  mysql-router            8.0/stable      26  no       Unit is ready
+   neutron-api-plugin-ovn    20.0.0   active      1  neutron-api-plugin-ovn  yoga/stable     29  no       Unit is ready
+   nova-cloud-controller     25.0.0   active      1  nova-cloud-controller   yoga/stable    601  no       Unit is ready
+   nova-compute              25.0.0   active      3  nova-compute            yoga/stable    588  no       Unit is ready
+   openstack-dashboard       22.1.0   active      1  openstack-dashboard     yoga/stable    536  no       Unit is ready
+   ovn-central               22.03.0  active      3  ovn-central             22.03/stable    31  no       Unit is ready
+   ovn-chassis               22.03.0  active      3  ovn-chassis             22.03/stable    46  no       Unit is ready
+   placement                 7.0.0    active      1  placement               yoga/stable     49  no       Unit is ready
+   placement-mysql-router    8.0.29   active      1  mysql-router            8.0/stable      26  no       Unit is ready
+   rabbitmq-server           3.9.13   active      1  rabbitmq-server         3.9/stable     149  no       Unit is ready
+   vault                     1.7.9    active      1  vault                   1.7/stable      68  no       Unit is ready (active: true, mlock: disabled)
+   vault-mysql-router        8.0.29   active      1  mysql-router            8.0/stable      26  no       Unit is ready

    Unit                         Workload  Agent  Machine  Public address  Ports              Message
-   ceph-mon/0*                  active    idle   0/lxd/3  10.0.0.176                         Unit is ready and clustered
-   ceph-mon/1                   active    idle   1/lxd/3  10.0.0.177                         Unit is ready and clustered
-   ceph-mon/2                   active    idle   2/lxd/4  10.0.0.178                         Unit is ready and clustered
-   ceph-osd/0                   active    idle   0        10.0.0.158                         Unit is ready (1 OSD)
-     ntp/1                      active    idle            10.0.0.158      123/udp            chrony: Ready
-   ceph-osd/1*                  active    idle   1        10.0.0.159                         Unit is ready (1 OSD)
-     ntp/2                      active    idle            10.0.0.159      123/udp            chrony: Ready
-   ceph-osd/2                   active    idle   2        10.0.0.160                         Unit is ready (1 OSD)
-     ntp/0*                     active    idle            10.0.0.160      123/udp            chrony: Ready
-   ceph-osd/3                   active    idle   3        10.0.0.161                         Unit is ready (1 OSD)
-     ntp/3                      active    idle            10.0.0.161      123/udp            chrony: Ready
-   ceph-radosgw/0*              active    idle   0/lxd/4  10.0.0.180      80/tcp             Unit is ready
-   cinder/0*                    active    idle   1/lxd/4  10.0.0.179      8776/tcp           Unit is ready
-     cinder-ceph/0*             active    idle            10.0.0.179                         Unit is ready
-     cinder-mysql-router/0*     active    idle            10.0.0.179                         Unit is ready
-   glance/0*                    active    idle   3/lxd/3  10.0.0.175      9292/tcp           Unit is ready
-     glance-mysql-router/0*     active    idle            10.0.0.175                         Unit is ready
-   keystone/0*                  active    idle   0/lxd/2  10.0.0.170      5000/tcp           Unit is ready
-     keystone-mysql-router/0*   active    idle            10.0.0.170                         Unit is ready
-   mysql-innodb-cluster/0*      active    idle   0/lxd/0  10.0.0.162                         Unit is ready: Mode: R/O, Cluster is ONLINE and can tolerate up to ONE failure.
-   mysql-innodb-cluster/1       active    idle   1/lxd/0  10.0.0.163                         Unit is ready: Mode: R/W, Cluster is ONLINE and can tolerate up to ONE failure.
+   ceph-mon/0                   active    idle   0/lxd/4  10.0.0.180                         Unit is ready and clustered
+   ceph-mon/1*                  active    idle   1/lxd/4  10.0.0.182                         Unit is ready and clustered
+   ceph-mon/2                   active    idle   2/lxd/5  10.0.0.181                         Unit is ready and clustered
+   ceph-osd/0                   active    idle   0        10.0.0.160                         Unit is ready (2 OSD)
+   ceph-osd/1*                  active    idle   1        10.0.0.159                         Unit is ready (2 OSD)
+   ceph-osd/2                   active    idle   2        10.0.0.162                         Unit is ready (2 OSD)
+   ceph-osd/3                   active    idle   3        10.0.0.161                         Unit is ready (2 OSD)
+   ceph-radosgw/0*              active    idle   0/lxd/5  10.0.0.184      80/tcp             Unit is ready
+   cinder/0*                    active    idle   1/lxd/5  10.0.0.183      8776/tcp           Unit is ready
+     cinder-ceph/0*             active    idle            10.0.0.183                         Unit is ready
+     cinder-mysql-router/0*     active    idle            10.0.0.183                         Unit is ready
+   glance/0*                    active    idle   3/lxd/3  10.0.0.179      9292/tcp           Unit is ready
+     glance-mysql-router/0*     active    idle            10.0.0.179                         Unit is ready
+   keystone/0*                  active    idle   0/lxd/3  10.0.0.174      5000/tcp           Unit is ready
+     keystone-mysql-router/0*   active    idle            10.0.0.174                         Unit is ready
+   mysql-innodb-cluster/0*      active    idle   0/lxd/0  10.0.0.163                         Unit is ready: Mode: R/O, Cluster is ONLINE and can tolerate up to ONE failure.
+   mysql-innodb-cluster/1       active    idle   1/lxd/0  10.0.0.164                         Unit is ready: Mode: R/W, Cluster is ONLINE and can tolerate up to ONE failure.
    mysql-innodb-cluster/2       active    idle   2/lxd/0  10.0.0.165                         Unit is ready: Mode: R/O, Cluster is ONLINE and can tolerate up to ONE failure.
-   neutron-api/0*               active    idle   1/lxd/2  10.0.0.169      9696/tcp           Unit is ready
-     neutron-api-mysql-router/0*  active  idle            10.0.0.169                         Unit is ready
-     neutron-api-plugin-ovn/0*  active    idle            10.0.0.169                         Unit is ready
-   nova-cloud-controller/0*     active    idle   3/lxd/1  10.0.0.172      8774/tcp,8775/tcp  Unit is ready
-     ncc-mysql-router/0*        active    idle            10.0.0.172                         Unit is ready
+   neutron-api/0*               active    idle   1/lxd/3  10.0.0.173      9696/tcp           Unit is ready
+     neutron-api-mysql-router/0*  active  idle            10.0.0.173                         Unit is ready
+     neutron-api-plugin-ovn/0*  active    idle            10.0.0.173                         Unit is ready
+   nova-cloud-controller/0*     active    idle   3/lxd/1  10.0.0.176      8774/tcp,8775/tcp  Unit is ready
+     ncc-mysql-router/0*        active    idle            10.0.0.176                         Unit is ready
    nova-compute/0*              active    idle   1        10.0.0.159                         Unit is ready
-     ovn-chassis/3              active    idle            10.0.0.159                         Unit is ready
-   nova-compute/1               active    idle   2        10.0.0.160                         Unit is ready
-     ovn-chassis/2              active    idle            10.0.0.160                         Unit is ready
+     ovn-chassis/0*             active    idle            10.0.0.159                         Unit is ready
+   nova-compute/1               active    idle   2        10.0.0.162                         Unit is ready
+     ovn-chassis/2              active    idle            10.0.0.162                         Unit is ready
    nova-compute/2               active    idle   3        10.0.0.161                         Unit is ready
-     ovn-chassis/1*             active    idle            10.0.0.161                         Unit is ready
-   openstack-dashboard/0*       active    idle   2/lxd/3  10.0.0.174      80/tcp,443/tcp     Unit is ready
-     dashboard-mysql-router/0*  active    idle            10.0.0.174                         Unit is ready
-   ovn-central/0                active    idle   0/lxd/1  10.0.0.166      6641/tcp,6642/tcp  Unit is ready
-   ovn-central/1                active    idle   1/lxd/1  10.0.0.167      6641/tcp,6642/tcp  Unit is ready
-   ovn-central/2*               active    idle   2/lxd/1  10.0.0.168      6641/tcp,6642/tcp  Unit is ready (leader: ovnnb_db, ovnsb_db northd: active)
-   placement/0*                 active    idle   3/lxd/2  10.0.0.173      8778/tcp           Unit is ready
-     placement-mysql-router/0*  active    idle            10.0.0.173                         Unit is ready
-   rabbitmq-server/0*           active    idle   2/lxd/2  10.0.0.171      5672/tcp           Unit is ready
-   vault/0*                     active    idle   3/lxd/0  10.0.0.164      8200/tcp           Unit is ready (active: true, mlock: disabled)
-     vault-mysql-router/0*      active    idle            10.0.0.164                         Unit is ready
+     ovn-chassis/1              active    idle            10.0.0.161                         Unit is ready
+   openstack-dashboard/0*       active    idle   2/lxd/4  10.0.0.178      80/tcp,443/tcp     Unit is ready
+     dashboard-mysql-router/0*  active    idle            10.0.0.178                         Unit is ready
+   ovn-central/3                active    idle   0/lxd/2  10.0.0.170      6641/tcp,6642/tcp  Unit is ready
+   ovn-central/4                active    idle   1/lxd/2  10.0.0.171      6641/tcp,6642/tcp  Unit is ready (northd: active)
+   ovn-central/5*               active    idle   2/lxd/2  10.0.0.172      6641/tcp,6642/tcp  Unit is ready (leader: ovnnb_db, ovnsb_db)
+   placement/0*                 active    idle   3/lxd/2  10.0.0.177      8778/tcp           Unit is ready
+     placement-mysql-router/0*  active    idle            10.0.0.177                         Unit is ready
+   rabbitmq-server/0*           active    idle   2/lxd/3  10.0.0.175      5672/tcp,15672/tcp Unit is ready
+   vault/0*                     active    idle   3/lxd/0  10.0.0.166      8200/tcp           Unit is ready (active: true, mlock: disabled)
+     vault-mysql-router/0*      active    idle            10.0.0.166                         Unit is ready

    Machine  State    DNS         Inst id              Series  AZ       Message
-   0        started  10.0.0.158  node1                focal   default  Deployed
-   0/lxd/0  started  10.0.0.162  juju-c6e3fb-0-lxd-0  focal   default  Container started
-   0/lxd/1  started  10.0.0.166  juju-c6e3fb-0-lxd-1  focal   default  Container started
-   0/lxd/2  started  10.0.0.170  juju-c6e3fb-0-lxd-2  focal   default  Container started
-   0/lxd/3  started  10.0.0.176  juju-c6e3fb-0-lxd-3  focal   default  Container started
-   0/lxd/4  started  10.0.0.180  juju-c6e3fb-0-lxd-4  focal   default  Container started
-   1        started  10.0.0.159  node2                focal   default  Deployed
-   1/lxd/0  started  10.0.0.163  juju-c6e3fb-1-lxd-0  focal   default  Container started
-   1/lxd/1  started  10.0.0.167  juju-c6e3fb-1-lxd-1  focal   default  Container started
-   1/lxd/2  started  10.0.0.169  juju-c6e3fb-1-lxd-2  focal   default  Container started
-   1/lxd/3  started  10.0.0.177  juju-c6e3fb-1-lxd-3  focal   default  Container started
-   1/lxd/4  started  10.0.0.179  juju-c6e3fb-1-lxd-4  focal   default  Container started
-   2        started  10.0.0.160  node3                focal   default  Deployed
-   2/lxd/0  started  10.0.0.165  juju-c6e3fb-2-lxd-0  focal   default  Container started
-   2/lxd/1  started  10.0.0.168  juju-c6e3fb-2-lxd-1  focal   default  Container started
-   2/lxd/2  started  10.0.0.171  juju-c6e3fb-2-lxd-2  focal   default  Container started
-   2/lxd/3  started  10.0.0.174  juju-c6e3fb-2-lxd-3  focal   default  Container started
-   2/lxd/4  started  10.0.0.178  juju-c6e3fb-2-lxd-4  focal   default  Container started
-   3        started  10.0.0.161  node4                focal   default  Deployed
-   3/lxd/0  started  10.0.0.164  juju-c6e3fb-3-lxd-0  focal   default  Container started
-   3/lxd/1  started  10.0.0.172  juju-c6e3fb-3-lxd-1  focal   default  Container started
-   3/lxd/2  started  10.0.0.173  juju-c6e3fb-3-lxd-2  focal   default  Container started
-   3/lxd/3  started  10.0.0.175  juju-c6e3fb-3-lxd-3  focal   default  Container started
+   0        started  10.0.0.160  node1                jammy   default  Deployed
+   0/lxd/0  started  10.0.0.163  juju-df2f3d-0-lxd-0  jammy   default  Container started
+   0/lxd/2  started  10.0.0.170  juju-df2f3d-0-lxd-2  jammy   default  Container started
+   0/lxd/3  started  10.0.0.174  juju-df2f3d-0-lxd-3  jammy   default  Container started
+   0/lxd/4  started  10.0.0.180  juju-df2f3d-0-lxd-4  jammy   default  Container started
+   0/lxd/5  started  10.0.0.184  juju-df2f3d-0-lxd-5  jammy   default  Container started
+   1        started  10.0.0.159  node2                jammy   default  Deployed
+   1/lxd/0  started  10.0.0.164  juju-df2f3d-1-lxd-0  jammy   default  Container started
+   1/lxd/2  started  10.0.0.171  juju-df2f3d-1-lxd-2  jammy   default  Container started
+   1/lxd/3  started  10.0.0.173  juju-df2f3d-1-lxd-3  jammy   default  Container started
+   1/lxd/4  started  10.0.0.182  juju-df2f3d-1-lxd-4  jammy   default  Container started
+   1/lxd/5  started  10.0.0.183  juju-df2f3d-1-lxd-5  jammy   default  Container started
+   2        started  10.0.0.162  node4                jammy   default  Deployed
+   2/lxd/0  started  10.0.0.165  juju-df2f3d-2-lxd-0  jammy   default  Container started
+   2/lxd/2  started  10.0.0.172  juju-df2f3d-2-lxd-2  jammy   default  Container started
+   2/lxd/3  started  10.0.0.175  juju-df2f3d-2-lxd-3  jammy   default  Container started
+   2/lxd/4  started  10.0.0.178  juju-df2f3d-2-lxd-4  jammy   default  Container started
+   2/lxd/5  started  10.0.0.181  juju-df2f3d-2-lxd-5  jammy   default  Container started
+   3        started  10.0.0.161  node3                jammy   default  Deployed
+   3/lxd/0  started  10.0.0.166  juju-df2f3d-3-lxd-0  jammy   default  Container started
+   3/lxd/1  started  10.0.0.176  juju-df2f3d-3-lxd-1  jammy   default  Container started
+   3/lxd/2  started  10.0.0.177  juju-df2f3d-3-lxd-2  jammy   default  Container started
+   3/lxd/3  started  10.0.0.179  juju-df2f3d-3-lxd-3  jammy   default  Container started

    Relation provider            Requirer                  Interface    Type     Message
    ceph-mon:client              cinder-ceph:ceph          ceph-client  regular
@@ -121,7 +116,6 @@ installed from the instructions given on the :doc:`Install OpenStack
    ceph-mon:mon                 ceph-mon:mon              ceph             peer
    ceph-mon:osd                 ceph-osd:mon              ceph-osd         regular
    ceph-mon:radosgw             ceph-radosgw:mon          ceph-radosgw     regular
-   ceph-osd:juju-info           ntp:juju-info             juju-info        subordinate
    ceph-radosgw:cluster         ceph-radosgw:cluster      swift-ha         peer
    cinder-ceph:ceph-access      nova-compute:ceph-access  cinder-ceph-key  regular
    cinder-ceph:storage-backend  cinder:storage-backend    cinder-backend   subordinate
@@ -160,7 +154,6 @@ installed from the instructions given on the :doc:`Install OpenStack
    nova-cloud-controller:cluster  nova-cloud-controller:cluster        nova-ha                 peer
    nova-compute:cloud-compute     nova-cloud-controller:cloud-compute  nova-compute            regular
    nova-compute:compute-peer      nova-compute:compute-peer            nova                    peer
-   ntp:ntp-peers                  ntp:ntp-peers                        ntp                     peer
    openstack-dashboard:cluster    openstack-dashboard:cluster          openstack-dashboard-ha  peer
    ovn-central:ovsdb              ovn-chassis:ovsdb                    ovsdb                   regular
    ovn-central:ovsdb-cms          neutron-api-plugin-ovn:ovsdb-cms     ovsdb-cms               regular
@@ -26,16 +26,16 @@ bundle <install-openstack-bundle>` for method #2.
#. The entire suite of charms used to manage the cloud should be upgraded to
   the latest stable charm revision before any major change is made to the
   cloud (e.g. migrating to new charms, upgrading cloud services, upgrading
   machine series). See `Charms upgrade`_ for details.
   machine series). See :doc:`Charms upgrade <upgrade-charms>` for details.

#. The Juju machines that comprise the cloud should all be running the same
   series (e.g. 'bionic' or 'focal', but not a mix of the two). See `Series
   upgrade`_ for details.
   series (e.g. 'focal' or 'jammy', but not a mix of the two). See
   :doc:`Series upgrade <upgrade-series>` for details.

Despite the length of this page, only three distinct Juju commands will be
employed: :command:`juju deploy`, :command:`juju add-unit`, and :command:`juju
add-relation`. You may want to review these pertinent sections of the Juju
documentation before continuing:
Despite the length of this page, only two distinct Juju commands will be
employed: :command:`juju deploy`, and :command:`juju add-relation`. You may
want to review these pertinent sections of the Juju documentation before
continuing:

* `Deploying applications`_
* `Deploying to specific machines`_
@@ -50,23 +50,23 @@ This page will show how to install a minimal non-HA OpenStack cloud. See
OpenStack release
-----------------

.. TEMPLATE
.. TEMPLATE (alternate between the following two paragraphs each six months)
   As the :doc:`Overview <install-overview>` of the Installation section
   states, OpenStack Ussuri will be deployed atop Ubuntu 20.04 LTS (Focal)
   cloud nodes. In order to achieve this the default package archive ("distro")
   for the cloud nodes will be used during the install of each OpenStack
   states, OpenStack Xena will be deployed atop Ubuntu 20.04 LTS (Focal) cloud
   nodes. In order to achieve this a cloud archive release of
   'cloud:focal-xena' will be used during the install of each OpenStack
   application. Note that some applications are not part of the OpenStack
   project per se and therefore do not apply (exceptionally, Ceph applications
   do use this method).
   do use this method). Not using a more recent OpenStack release in this way
   will result in an Ussuri deployment (i.e. Ussuri is in the Ubuntu package
   archive for Focal).

As the :doc:`Overview <install-overview>` of the Installation section states,
OpenStack Xena will be deployed atop Ubuntu 20.04 LTS (Focal) cloud nodes. In
order to achieve this a cloud archive release of 'cloud:focal-xena' will be
used during the install of each OpenStack application. Note that some
applications are not part of the OpenStack project per se and therefore do not
apply (exceptionally, Ceph applications do use this method). Not using a more
recent OpenStack release in this way will result in an Ussuri deployment (i.e.
Ussuri is in the Ubuntu package archive for Focal).
OpenStack Yoga will be deployed atop Ubuntu 22.04 LTS (Jammy) cloud nodes. In
order to achieve this the default package archive ("distro") for the cloud
nodes will be used during the install of each OpenStack application. Note that
some applications are not part of the OpenStack project per se and therefore do
not apply (exceptionally, Ceph applications do use this method).

See :ref:`Perform the upgrade <perform_the_upgrade>` on the :doc:`OpenStack
Upgrade <upgrade-openstack>` page for more details on cloud archive releases
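The two package-origin styles discussed in this section map onto charm configuration like the following sketch (keystone is used purely as an illustrative application; the keys match those used in the configuration files later on this page):

.. code-block:: yaml

   # Jammy/Yoga: Yoga is in the default Ubuntu package archive for Jammy
   keystone:
     openstack-origin: distro

.. code-block:: yaml

   # Focal/Xena: a release newer than the Focal archive's Ussuri requires
   # a cloud archive pocket
   keystone:
     openstack-origin: cloud:focal-xena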
@@ -75,7 +75,7 @@ and how they are used when upgrading OpenStack.
.. important::

   The chosen OpenStack release may impact the installation and configuration
   instructions. **This guide assumes that OpenStack Xena is being deployed.**
   instructions. **This guide assumes that OpenStack Yoga is being deployed.**

Installation progress
---------------------
@@ -107,8 +107,7 @@ context now:

In the following sections, the various OpenStack components will be added to
the 'openstack' model. Each application will be installed from the online
`Charm store`_ and many will have configuration options specified via a YAML
file.
`Charmhub`_ and many will have configuration options specified via a YAML file.

.. note::
@@ -123,25 +122,25 @@ The ceph-osd application is deployed to four nodes with the `ceph-osd`_ charm.
The name of the block devices backing the OSDs is dependent upon the hardware
on the nodes. All possible devices across the nodes should be given as the
value for the ``osd-devices`` option (space-separated). Here, we'll be using
the same device on each cloud node: ``/dev/sdb``. File ``ceph-osd.yaml``
contains the configuration:
the same devices on each node: ``/dev/vdb`` and ``/dev/vdc``. File
``ceph-osd.yaml`` contains the configuration:

.. code-block:: yaml

   ceph-osd:
     osd-devices: /dev/sdb
     source: cloud:focal-xena
     osd-devices: /dev/vdb /dev/vdc
     source: distro

To deploy the application we'll make use of the 'compute' tag that we placed on
each of these nodes on the :doc:`Install MAAS <install-maas>` page:

.. code-block:: none

   juju deploy -n 4 --config ceph-osd.yaml --constraints tags=compute ceph-osd
   juju deploy -n 4 --series jammy --channel quincy/stable --config ceph-osd.yaml --constraints tags=compute ceph-osd

If a message from a ceph-osd unit like "Non-pristine devices detected" appears
in the output of :command:`juju status` you will need to use actions
``zap-disk`` and ``add-disk`` that come with the 'ceph-osd' charm. The
``zap-disk`` and ``add-disk`` that come with the ceph-osd charm. The
``zap-disk`` action is destructive in nature. Only use it if you want to purge
the disk of all data and signatures for use by Ceph.
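If you do need to purge a disk, the actions can be driven from the Juju client. As a sketch (the ``devices``, ``i-really-mean-it``, and ``osd-devices`` parameter names are assumptions about the ceph-osd charm's action signatures — verify them first with :command:`juju show-action ceph-osd zap-disk`):

.. code-block:: none

   # destructive: wipes /dev/vdb on unit ceph-osd/1, then re-adds it as an OSD
   juju run-action --wait ceph-osd/1 zap-disk devices=/dev/vdb i-really-mean-it=true
   juju run-action --wait ceph-osd/1 add-disk osd-devices=/dev/vdb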
@@ -149,14 +148,13 @@ the disk of all data and signatures for use by Ceph.

   Since ceph-osd was deployed on four nodes and there are only four nodes
   available in this environment, the usage of the 'compute' tag is not
   strictly necessary.
   strictly necessary. A tag can help if there is a surplus of nodes, however.

Nova compute
Nova Compute
~~~~~~~~~~~~

The nova-compute application is deployed to one node with the `nova-compute`_
charm. We'll then scale-out the application to two other machines. File
``nova-compute.yaml`` contains the configuration:
The nova-compute application is deployed to three nodes with the
`nova-compute`_ charm. File ``nova-compute.yaml`` contains the configuration:

.. code-block:: yaml
@@ -165,15 +163,16 @@ charm. We'll then scale-out the application to two other machines. File
     enable-live-migration: true
     enable-resize: true
     migration-auth-type: ssh
     openstack-origin: cloud:focal-xena
     virt-type: qemu
     openstack-origin: distro

The initial node must be targeted by machine since there are no more free Juju
The nodes must be targeted by machine ID since there are no more free Juju
machines (MAAS nodes) available. This means we're placing multiple services on
our nodes. We've chosen machines 1, 2, and 3:
our nodes. We've chosen machines 1, 2, and 3. To deploy:

.. code-block:: none

   juju deploy -n 3 --to 1,2,3 --config nova-compute.yaml nova-compute
   juju deploy -n 3 --to 1,2,3 --series jammy --channel yoga/stable --config nova-compute.yaml nova-compute

.. note::
@@ -187,29 +186,29 @@ MySQL InnoDB Cluster
~~~~~~~~~~~~~~~~~~~~

MySQL InnoDB Cluster always requires at least three database units. They will
be containerised on machines 0, 1, and 2:
be containerised on machines 0, 1, and 2. To deploy:

.. code-block:: none

   juju deploy -n 3 --to lxd:0,lxd:1,lxd:2 mysql-innodb-cluster
   juju deploy -n 3 --to lxd:0,lxd:1,lxd:2 --series jammy --channel 8.0/stable mysql-innodb-cluster

Vault
~~~~~

Vault is necessary for managing the TLS certificates that will enable encrypted
communication between cloud applications. It will be containerised on machine
3:
3. To deploy:

.. code-block:: none

   juju deploy --to lxd:3 vault
   juju deploy --to lxd:3 --series jammy --channel 1.7/stable vault

This is the first application to be joined with the cloud database that was set
up in the previous section. The process is:

#. create an application-specific instance of mysql-router (a subordinate)
#. add a relation between that mysql-router instance and the database
#. add a relation between the application and the mysql-router instance
#. add a relation between the mysql-router instance and the database
#. add a relation between the mysql-router instance and the application

The combination of steps 2 and 3 joins the application to the cloud database.
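This three-step pattern repeats for every API application joined to the cloud database on this page. As a generic sketch, for a hypothetical application named ``myapp`` (substitute the real application name):

.. code-block:: none

   # 1. application-specific mysql-router instance (a subordinate)
   juju deploy --channel 8.0/stable mysql-router myapp-mysql-router
   # 2. relate the mysql-router instance to the database
   juju add-relation myapp-mysql-router:db-router mysql-innodb-cluster:db-router
   # 3. relate the mysql-router instance to the application
   juju add-relation myapp-mysql-router:shared-db myapp:shared-db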
@@ -217,7 +216,7 @@ Here are the corresponding commands for Vault:

.. code-block:: none

   juju deploy mysql-router vault-mysql-router
   juju deploy --channel 8.0/stable mysql-router vault-mysql-router
   juju add-relation vault-mysql-router:db-router mysql-innodb-cluster:db-router
   juju add-relation vault-mysql-router:shared-db vault:shared-db
@@ -235,18 +234,18 @@ status` should look similar to this:
.. code-block:: console

   Unit Workload Agent Machine Public address Ports Message
   ceph-osd/0 blocked idle 0 10.0.0.158 Missing relation: monitor
   ceph-osd/0 blocked idle 0 10.0.0.160 Missing relation: monitor
   ceph-osd/1* blocked idle 1 10.0.0.159 Missing relation: monitor
   ceph-osd/2 blocked idle 2 10.0.0.160 Missing relation: monitor
   ceph-osd/2 blocked idle 2 10.0.0.162 Missing relation: monitor
   ceph-osd/3 blocked idle 3 10.0.0.161 Missing relation: monitor
   mysql-innodb-cluster/0* active idle 0/lxd/0 10.0.0.162 Unit is ready: Mode: R/W, Cluster is ONLINE and can tolerate up to ONE failure.
   mysql-innodb-cluster/1 active idle 1/lxd/0 10.0.0.163 Unit is ready: Mode: R/O, Cluster is ONLINE and can tolerate up to ONE failure.
   mysql-innodb-cluster/0* active idle 0/lxd/0 10.0.0.163 Unit is ready: Mode: R/W, Cluster is ONLINE and can tolerate up to ONE failure.
   mysql-innodb-cluster/1 active idle 1/lxd/0 10.0.0.164 Unit is ready: Mode: R/O, Cluster is ONLINE and can tolerate up to ONE failure.
   mysql-innodb-cluster/2 active idle 2/lxd/0 10.0.0.165 Unit is ready: Mode: R/O, Cluster is ONLINE and can tolerate up to ONE failure.
   nova-compute/0* blocked idle 1 10.0.0.159 Missing relations: messaging, image
   nova-compute/1 blocked idle 2 10.0.0.160 Missing relations: messaging, image
   nova-compute/0* blocked idle 1 10.0.0.159 Missing relations: image, messaging
   nova-compute/1 blocked idle 2 10.0.0.162 Missing relations: messaging, image
   nova-compute/2 blocked idle 3 10.0.0.161 Missing relations: image, messaging
   vault/0* active idle 3/lxd/0 10.0.0.164 8200/tcp Unit is ready (active: true, mlock: disabled)
   vault-mysql-router/0* active idle 10.0.0.164 Unit is ready
   vault/0* active idle 3/lxd/0 10.0.0.166 8200/tcp Unit is ready (active: true, mlock: disabled)
   vault-mysql-router/0* active idle 10.0.0.166 Unit is ready

Cloud applications are TLS-enabled via the ``vault:certificates`` relation.
Below we start with the cloud database. Although the latter has a self-signed
@@ -278,10 +277,9 @@ File ``neutron.yaml`` contains the configuration necessary for three of them:
   neutron-api:
     neutron-security-groups: true
     flat-network-providers: physnet1
     worker-multiplier: 0.25
     openstack-origin: cloud:focal-xena
     openstack-origin: distro
   ovn-central:
     source: cloud:focal-xena
     source: distro

The ``bridge-interface-mappings`` setting impacts the OVN Chassis and refers to
a mapping of OVS bridge to network interface. As described in the :ref:`Create
@@ -297,24 +295,24 @@ The ``ovn-bridge-mappings`` setting maps the data-port interface to the flat
network provider.

The main OVN application is ovn-central and it requires at least three units.
They will be containerised on machines 0, 1, and 2:
They will be containerised on machines 0, 1, and 2. To deploy:

.. code-block:: none

   juju deploy -n 3 --to lxd:0,lxd:1,lxd:2 --config neutron.yaml ovn-central
   juju deploy -n 3 --to lxd:0,lxd:1,lxd:2 --series jammy --channel 22.03/stable --config neutron.yaml ovn-central

The neutron-api application will be containerised on machine 1:

.. code-block:: none

   juju deploy --to lxd:1 --config neutron.yaml neutron-api
   juju deploy --to lxd:1 --series jammy --channel yoga/stable --config neutron.yaml neutron-api

Deploy the subordinate charm applications:

.. code-block:: none

   juju deploy neutron-api-plugin-ovn
   juju deploy --config neutron.yaml ovn-chassis
   juju deploy --channel yoga/stable neutron-api-plugin-ovn
   juju deploy --channel 22.03/stable --config neutron.yaml ovn-chassis

Add the necessary relations:
|
@ -333,33 +331,25 @@ Join neutron-api to the cloud database:
|
|||
|
||||
.. code-block:: none
|
||||
|
||||
juju deploy mysql-router neutron-api-mysql-router
|
||||
juju deploy --channel 8.0/stable mysql-router neutron-api-mysql-router
|
||||
juju add-relation neutron-api-mysql-router:db-router mysql-innodb-cluster:db-router
|
||||
juju add-relation neutron-api-mysql-router:shared-db neutron-api:shared-db
|
||||
|
||||
Keystone
|
||||
~~~~~~~~
|
||||
|
||||
The keystone application will be containerised on machine 0. File
|
||||
``keystone.yaml`` contains the configuration:
|
||||
|
||||
.. code-block:: yaml
|
||||
|
||||
keystone:
|
||||
worker-multiplier: 0.25
|
||||
openstack-origin: cloud:focal-xena
|
||||
|
||||
To deploy:
|
||||
The keystone application will be containerised on machine 0 with the
|
||||
`keystone`_ charm. To deploy:
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
juju deploy --to lxd:0 --config keystone.yaml keystone
|
||||
juju deploy --to lxd:0 --series jammy --channel yoga/stable keystone
|
||||
|
||||
Join keystone to the cloud database:
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
juju deploy mysql-router keystone-mysql-router
|
||||
juju deploy --channel 8.0/stable mysql-router keystone-mysql-router
|
||||
juju add-relation keystone-mysql-router:db-router mysql-innodb-cluster:db-router
|
||||
juju add-relation keystone-mysql-router:shared-db keystone:shared-db
|
||||
|
||||
|
@@ -374,11 +364,11 @@ RabbitMQ
~~~~~~~~

The rabbitmq-server application will be containerised on machine 2 with the
`rabbitmq-server`_ charm:
`rabbitmq-server`_ charm. To deploy:

.. code-block:: none

   juju deploy --to lxd:2 rabbitmq-server
   juju deploy --to lxd:2 --series jammy --channel 3.9/stable rabbitmq-server

Two relations can be added at this time:
|
@ -392,62 +382,56 @@ look similar to this:
|
|||
|
||||
.. code-block:: console
|
||||
|
||||
|
||||
Unit Workload Agent Machine Public address Ports Message
|
||||
ceph-osd/0 blocked idle 0 10.0.0.158 Missing relation: monitor
|
||||
ceph-osd/0 blocked idle 0 10.0.0.160 Missing relation: monitor
|
||||
ceph-osd/1* blocked idle 1 10.0.0.159 Missing relation: monitor
|
||||
ceph-osd/2 blocked idle 2 10.0.0.160 Missing relation: monitor
|
||||
ceph-osd/2 blocked idle 2 10.0.0.162 Missing relation: monitor
|
||||
ceph-osd/3 blocked idle 3 10.0.0.161 Missing relation: monitor
|
||||
keystone/0* active idle 0/lxd/2 10.0.0.170 5000/tcp Unit is ready
|
||||
keystone-mysql-router/0* active idle 10.0.0.170 Unit is ready
|
||||
mysql-innodb-cluster/0* active idle 0/lxd/0 10.0.0.162 Unit is ready: Mode: R/O, Cluster is ONLINE and can tolerate up to
|
||||
ONE failure.
|
||||
mysql-innodb-cluster/1 active idle 1/lxd/0 10.0.0.163 Unit is ready: Mode: R/W, Cluster is ONLINE and can tolerate up to
|
||||
ONE failure.
|
||||
mysql-innodb-cluster/2 active idle 2/lxd/0 10.0.0.165 Unit is ready: Mode: R/O, Cluster is ONLINE and can tolerate up to
|
||||
ONE failure.
|
||||
neutron-api/0* active idle 1/lxd/2 10.0.0.169 9696/tcp Unit is ready
|
||||
neutron-api-mysql-router/0* active idle 10.0.0.169 Unit is ready
|
||||
neutron-api-plugin-ovn/0* active idle 10.0.0.169 Unit is ready
|
||||
keystone/0* active idle 0/lxd/3 10.0.0.174 5000/tcp Unit is ready
|
||||
keystone-mysql-router/0* active idle 10.0.0.174 Unit is ready
|
||||
mysql-innodb-cluster/0* active idle 0/lxd/0 10.0.0.163 Unit is ready: Mode: R/O, Cluster is ONLINE and can tolerate up to ONE failure.
|
||||
mysql-innodb-cluster/1 active idle 1/lxd/0 10.0.0.164 Unit is ready: Mode: R/W, Cluster is ONLINE and can tolerate up to ONE failure.
|
||||
mysql-innodb-cluster/2 active idle 2/lxd/0 10.0.0.165 Unit is ready: Mode: R/O, Cluster is ONLINE and can tolerate up to ONE failure.
|
||||
neutron-api/0* active idle 1/lxd/3 10.0.0.173 9696/tcp Unit is ready
|
||||
neutron-api-mysql-router/0* active idle 10.0.0.173 Unit is ready
|
||||
neutron-api-plugin-ovn/0* blocked idle 10.0.0.173 'certificates' missing
|
||||
nova-compute/0* blocked idle 1 10.0.0.159 Missing relations: image
|
||||
ovn-chassis/3 active idle 10.0.0.159 Unit is ready
|
||||
nova-compute/1 blocked idle 2 10.0.0.160 Missing relations: image
|
||||
ovn-chassis/2 active idle 10.0.0.160 Unit is ready
|
||||
ovn-chassis/0* active idle 10.0.0.159 Unit is ready
|
||||
nova-compute/1 blocked idle 2 10.0.0.162 Missing relations: image
|
||||
ovn-chassis/2 active idle 10.0.0.162 Unit is ready
|
||||
nova-compute/2 blocked idle 3 10.0.0.161 Missing relations: image
|
||||
ovn-chassis/1* active idle 10.0.0.161 Unit is ready
|
||||
ovn-central/0 active idle 0/lxd/1 10.0.0.166 6641/tcp,6642/tcp Unit is ready
|
||||
ovn-central/1 active idle 1/lxd/1 10.0.0.167 6641/tcp,6642/tcp Unit is ready
|
||||
ovn-central/2* active idle 2/lxd/1 10.0.0.168 6641/tcp,6642/tcp Unit is ready (leader: ovnnb_db, ovnsb_db northd: active)
|
||||
rabbitmq-server/0* active idle 2/lxd/2 10.0.0.171 5672/tcp Unit is ready
|
||||
vault/0* active idle 3/lxd/0 10.0.0.164 8200/tcp Unit is ready (active: true, mlock: disabled)
|
||||
vault-mysql-router/0* active idle 10.0.0.164 Unit is ready
|
||||
ovn-chassis/1 active idle 10.0.0.161 Unit is ready
|
||||
ovn-central/3 active idle 0/lxd/2 10.0.0.170 6641/tcp,6642/tcp Unit is ready
|
||||
ovn-central/4 active idle 1/lxd/2 10.0.0.171 6641/tcp,6642/tcp Unit is ready (northd: active)
|
||||
ovn-central/5* active idle 2/lxd/2 10.0.0.172 6641/tcp,6642/tcp Unit is ready (leader: ovnnb_db, ovnsb_db)
|
||||
rabbitmq-server/0* active idle 2/lxd/3 10.0.0.175 5672/tcp,15672/tcp Unit is ready
|
||||
vault/0* active idle 3/lxd/0 10.0.0.166 8200/tcp Unit is ready (active: true, mlock: disabled)
|
||||
vault-mysql-router/0* active idle 10.0.0.166 Unit is ready
|
||||
|
||||
Nova cloud controller
~~~~~~~~~~~~~~~~~~~~~

The nova-cloud-controller application, which includes nova-scheduler, nova-api,
and nova-conductor services, will be containerised on machine 3 with the
`nova-cloud-controller`_ charm. File ``nova-cloud-controller.yaml`` contains
the configuration:
`nova-cloud-controller`_ charm. File ``ncc.yaml`` contains the configuration:

.. code-block:: yaml

   nova-cloud-controller:
     network-manager: Neutron
     worker-multiplier: 0.25
     openstack-origin: cloud:focal-xena
     openstack-origin: distro

To deploy:

.. code-block:: none

   juju deploy --to lxd:3 --config nova-cloud-controller.yaml nova-cloud-controller
   juju deploy --to lxd:3 --series jammy --channel yoga/stable --config ncc.yaml nova-cloud-controller

Join nova-cloud-controller to the cloud database:

.. code-block:: none

   juju deploy mysql-router ncc-mysql-router
   juju deploy --channel 8.0/stable mysql-router ncc-mysql-router
   juju add-relation ncc-mysql-router:db-router mysql-innodb-cluster:db-router
   juju add-relation ncc-mysql-router:shared-db nova-cloud-controller:shared-db
@@ -471,25 +455,17 @@ Placement
~~~~~~~~~

The placement application will be containerised on machine 3 with the
`placement`_ charm. File ``placement.yaml`` contains the configuration:

.. code-block:: yaml

   placement:
     worker-multiplier: 0.25
     openstack-origin: cloud:focal-xena

To deploy:
`placement`_ charm. To deploy:

.. code-block:: none

   juju deploy --to lxd:3 --config placement.yaml placement
   juju deploy --to lxd:3 --series jammy --channel yoga/stable placement

Join placement to the cloud database:

.. code-block:: none

   juju deploy mysql-router placement-mysql-router
   juju deploy --channel 8.0/stable mysql-router placement-mysql-router
   juju add-relation placement-mysql-router:db-router mysql-innodb-cluster:db-router
   juju add-relation placement-mysql-router:shared-db placement:shared-db
@@ -505,17 +481,17 @@ OpenStack dashboard
~~~~~~~~~~~~~~~~~~~

The openstack-dashboard application (Horizon) will be containerised on machine
2 with the `openstack-dashboard`_ charm:
2 with the `openstack-dashboard`_ charm. To deploy:

.. code-block:: none

   juju deploy --to lxd:2 --config openstack-origin=cloud:focal-xena openstack-dashboard
   juju deploy --to lxd:2 --series jammy --channel yoga/stable openstack-dashboard

Join openstack-dashboard to the cloud database:

.. code-block:: none

   juju deploy mysql-router dashboard-mysql-router
   juju deploy --channel 8.0/stable mysql-router dashboard-mysql-router
   juju add-relation dashboard-mysql-router:db-router mysql-innodb-cluster:db-router
   juju add-relation dashboard-mysql-router:shared-db openstack-dashboard:shared-db
@@ -536,25 +512,17 @@ Glance
~~~~~~

The glance application will be containerised on machine 3 with the `glance`_
charm. File ``glance.yaml`` contains the configuration:

.. code-block:: yaml

   glance:
     worker-multiplier: 0.25
     openstack-origin: cloud:focal-xena

To deploy:
charm. To deploy:

.. code-block:: none

   juju deploy --to lxd:3 --config glance.yaml glance
   juju deploy --to lxd:3 --series jammy --channel yoga/stable glance

Join glance to the cloud database:

.. code-block:: none

   juju deploy mysql-router glance-mysql-router
   juju deploy --channel 8.0/stable mysql-router glance-mysql-router
   juju add-relation glance-mysql-router:db-router mysql-innodb-cluster:db-router
   juju add-relation glance-mysql-router:shared-db glance:shared-db
@@ -573,40 +541,38 @@ look similar to this:
.. code-block:: console

   Unit Workload Agent Machine Public address Ports Message
   ceph-osd/0 blocked idle 0 10.0.0.158 Missing relation: monitor
   ceph-osd/0 blocked idle 0 10.0.0.160 Missing relation: monitor
   ceph-osd/1* blocked idle 1 10.0.0.159 Missing relation: monitor
   ceph-osd/2 blocked idle 2 10.0.0.160 Missing relation: monitor
   ceph-osd/2 blocked idle 2 10.0.0.162 Missing relation: monitor
   ceph-osd/3 blocked idle 3 10.0.0.161 Missing relation: monitor
   glance/0* active idle 3/lxd/3 10.0.0.175 9292/tcp Unit is ready
   glance-mysql-router/0* active idle 10.0.0.175 Unit is ready
   keystone/0* active idle 0/lxd/2 10.0.0.170 5000/tcp Unit is ready
   keystone-mysql-router/0* active idle 10.0.0.170 Unit is ready
   mysql-innodb-cluster/0* active idle 0/lxd/0 10.0.0.162 Unit is ready: Mode: R/O, Cluster is ONLINE and can tolerate up to ONE failure.
   mysql-innodb-cluster/1 active idle 1/lxd/0 10.0.0.163 Unit is ready: Mode: R/W, Cluster is ONLINE and can tolerate up to ONE failure.
   glance/0* active idle 3/lxd/3 10.0.0.179 9292/tcp Unit is ready
   glance-mysql-router/0* active idle 10.0.0.179 Unit is ready
   keystone/0* active idle 0/lxd/3 10.0.0.174 5000/tcp Unit is ready
   keystone-mysql-router/0* active idle 10.0.0.174 Unit is ready
   mysql-innodb-cluster/0* active idle 0/lxd/0 10.0.0.163 Unit is ready: Mode: R/O, Cluster is ONLINE and can tolerate up to ONE failure.
   mysql-innodb-cluster/1 active idle 1/lxd/0 10.0.0.164 Unit is ready: Mode: R/W, Cluster is ONLINE and can tolerate up to ONE failure.
   mysql-innodb-cluster/2 active idle 2/lxd/0 10.0.0.165 Unit is ready: Mode: R/O, Cluster is ONLINE and can tolerate up to ONE failure.
   neutron-api/0* active idle 1/lxd/2 10.0.0.169 9696/tcp Unit is ready
   neutron-api-mysql-router/0* active idle 10.0.0.169 Unit is ready
   neutron-api-plugin-ovn/0* active idle 10.0.0.169 Unit is ready
   nova-cloud-controller/0* active idle 3/lxd/1 10.0.0.172 8774/tcp,8775/tcp Unit is ready
   ncc-mysql-router/0* active idle 10.0.0.172 Unit is ready
   neutron-api/0* active idle 1/lxd/3 10.0.0.173 9696/tcp Unit is ready
   neutron-api-mysql-router/0* active idle 10.0.0.173 Unit is ready
   neutron-api-plugin-ovn/0* active idle 10.0.0.173 Unit is ready
   nova-cloud-controller/0* active idle 3/lxd/1 10.0.0.176 8774/tcp,8775/tcp Unit is ready
   ncc-mysql-router/0* active idle 10.0.0.176 Unit is ready
   nova-compute/0* active idle 1 10.0.0.159 Unit is ready
   ovn-chassis/3 active idle 10.0.0.159 Unit is ready
   nova-compute/1 active idle 2 10.0.0.160 Unit is ready
   ovn-chassis/2 active idle 10.0.0.160 Unit is ready
   ovn-chassis/0* active idle 10.0.0.159 Unit is ready
   nova-compute/1 active idle 2 10.0.0.162 Unit is ready
   ovn-chassis/2 active idle 10.0.0.162 Unit is ready
   nova-compute/2 active idle 3 10.0.0.161 Unit is ready
   ovn-chassis/1* active idle 10.0.0.161 Unit is ready
   openstack-dashboard/0* active idle 2/lxd/3 10.0.0.174 80/tcp,443/tcp Unit is ready
   dashboard-mysql-router/0* active idle 10.0.0.174 Unit is ready
   ovn-central/0 active idle 0/lxd/1 10.0.0.166 6641/tcp,6642/tcp Unit is ready
   ovn-central/1 active idle 1/lxd/1 10.0.0.167 6641/tcp,6642/tcp Unit is ready
   ovn-central/2* active idle 2/lxd/1 10.0.0.168 6641/tcp,6642/tcp Unit is ready (leader: ovnnb_db, ovnsb_db northd: active)
   placement/0* active idle 3/lxd/2 10.0.0.173 8778/tcp Unit is ready
   placement-mysql-router/0* active idle 10.0.0.173 Unit is ready
   rabbitmq-server/0* active idle 2/lxd/2 10.0.0.171 5672/tcp Unit is ready
   vault/0* active idle 3/lxd/0 10.0.0.164 8200/tcp Unit is ready (active: true, mlock: disabled)
   vault-mysql-router/0* active idle 10.0.0.164 Unit is ready
   ovn-chassis/1 active idle 10.0.0.161 Unit is ready
   openstack-dashboard/0* active idle 2/lxd/4 10.0.0.178 80/tcp,443/tcp Unit is ready
   dashboard-mysql-router/0* active idle 10.0.0.178 Unit is ready
   ovn-central/3 active idle 0/lxd/2 10.0.0.170 6641/tcp,6642/tcp Unit is ready
   ovn-central/4 active idle 1/lxd/2 10.0.0.171 6641/tcp,6642/tcp Unit is ready (northd: active)
   ovn-central/5* active idle 2/lxd/2 10.0.0.172 6641/tcp,6642/tcp Unit is ready (leader: ovnnb_db, ovnsb_db)
   placement/0* active idle 3/lxd/2 10.0.0.177 8778/tcp Unit is ready
   placement-mysql-router/0* active idle 10.0.0.177 Unit is ready
   rabbitmq-server/0* active idle 2/lxd/3 10.0.0.175 5672/tcp,15672/tcp Unit is ready
   vault/0* active idle 3/lxd/0 10.0.0.166 8200/tcp Unit is ready (active: true, mlock: disabled)
   vault-mysql-router/0* active idle 10.0.0.166 Unit is ready

Ceph monitor
~~~~~~~~~~~~

@@ -619,11 +585,13 @@ The ceph-mon application will be containerised on machines 0, 1, and 2 with the
   ceph-mon:
     expected-osd-count: 4
     monitor-count: 3
     source: cloud:focal-xena
     source: distro

To deploy:

.. code-block:: none

   juju deploy -n 3 --to lxd:0,lxd:1,lxd:2 --config ceph-mon.yaml ceph-mon
   juju deploy -n 3 --to lxd:0,lxd:1,lxd:2 --series jammy --channel quincy/stable --config ceph-mon.yaml ceph-mon

Three relations can be added at this time:
@@ -652,20 +620,19 @@ charm. File ``cinder.yaml`` contains the configuration:
   cinder:
     block-device: None
     glance-api-version: 2
     worker-multiplier: 0.25
     openstack-origin: cloud:focal-xena
     openstack-origin: distro

To deploy:

.. code-block:: none

   juju deploy --to lxd:1 --config cinder.yaml cinder
   juju deploy --to lxd:1 --series jammy --channel yoga/stable --config cinder.yaml cinder

Join cinder to the cloud database:

.. code-block:: none

   juju deploy mysql-router cinder-mysql-router
   juju deploy --channel 8.0/stable mysql-router cinder-mysql-router
   juju add-relation cinder-mysql-router:db-router mysql-innodb-cluster:db-router
   juju add-relation cinder-mysql-router:shared-db cinder:shared-db

@@ -689,7 +656,7 @@ None`` in the configuration file). This will be implemented via the

.. code-block:: none

   juju deploy cinder-ceph
   juju deploy --channel yoga/stable cinder-ceph

Three relations need to be added:
@@ -706,11 +673,11 @@ The Ceph RADOS Gateway will be deployed to offer an S3 and Swift compatible
HTTP gateway. This is an alternative to using OpenStack Swift.

The ceph-radosgw application will be containerised on machine 0 with the
`ceph-radosgw`_ charm.
`ceph-radosgw`_ charm. To deploy:

.. code-block:: none

   juju deploy --to lxd:0 --config source=cloud:focal-xena ceph-radosgw
   juju deploy --to lxd:0 --series jammy --channel quincy/stable ceph-radosgw

A single relation is needed:
@@ -718,20 +685,22 @@ A single relation is needed:

   juju add-relation ceph-radosgw:mon ceph-mon:radosgw

.. COMMENT
   At the time of writing a jammy-aware ntp charm was not available.

NTP
~~~

The final component is an NTP client to keep the time on each cloud node
synchronised. This is done with the `ntp`_ subordinate charm:
synchronised. This is done with the `ntp`_ subordinate charm. To deploy:

.. code-block:: none

   juju deploy ntp

The below relation will add an ntp unit alongside each ceph-osd unit, and
thus on each of the four cloud nodes:

.. code-block:: none

   juju add-relation ceph-osd:juju-info ntp:juju-info
@@ -755,7 +724,7 @@ Obtain the address in this way:

    juju status --format=yaml openstack-dashboard | grep public-address | awk '{print $2}' | head -1

-In this example, the address is '10.0.0.166'.
+In this example, the address is '10.0.0.178'.
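The grep/awk pipeline above can be mirrored in a few lines of Python. The sample status document below is a hypothetical, heavily trimmed stand-in for real ``juju status --format=yaml`` output:

```python
from typing import Optional

# Hypothetical, trimmed stand-in for `juju status --format=yaml openstack-dashboard`.
SAMPLE_STATUS = """\
applications:
  openstack-dashboard:
    units:
      openstack-dashboard/0:
        public-address: 10.0.0.178
"""


def first_public_address(status_text: str) -> Optional[str]:
    """Second field of the first 'public-address' line.

    Mirrors: grep public-address | awk '{print $2}' | head -1
    """
    for line in status_text.splitlines():
        if "public-address" in line:   # grep
            return line.split()[1]     # awk '{print $2}'; first match = head -1
    return None


print(first_public_address(SAMPLE_STATUS))
# -> 10.0.0.178
```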
 The password can be queried from Keystone:
@@ -765,7 +734,7 @@ The password can be queried from Keystone:

 The dashboard URL then becomes:

-**http://10.0.0.166/horizon**
+**http://10.0.0.178/horizon**

 The final credentials needed to log in are:
@@ -800,33 +769,31 @@ networks, images, and a user environment. Go to :doc:`Configure OpenStack

 .. LINKS
 .. _OpenStack Charms: https://docs.openstack.org/charm-guide/latest/openstack-charms.html
 .. _Charms upgrade: upgrade-charms.html
 .. _Series upgrade: upgrade-series.html
-.. _Charm store: https://jaas.ai/store
-.. _Deploying applications: https://juju.is/docs/olm/deploying-applications
-.. _Deploying to specific machines: https://juju.is/docs/olm/advanced-application-deployment#heading--deploying-to-specific-machines
-.. _Managing relations: https://juju.is/docs/olm/relations
-.. _vault charm: https://jaas.ai/vault/
+.. _Charmhub: https://charmhub.io
+.. _Deploying applications: https://juju.is/docs/olm/deploy-a-charm-from-charmhub
+.. _Deploying to specific machines: https://juju.is/docs/olm/deploy-to-a-specific-machine
+.. _Managing relations: https://juju.is/docs/olm/manage-relations
+.. _vault charm: https://charmhub.io/vault/
 .. _Infrastructure high availability: https://docs.openstack.org/charm-guide/latest/admin/ha.html

 .. CHARMS
-.. _ceph-mon: https://jaas.ai/ceph-mon
-.. _ceph-osd: https://jaas.ai/ceph-osd
-.. _ceph-radosgw: https://jaas.ai/ceph-radosgw
-.. _cinder: https://jaas.ai/cinder
-.. _cinder-ceph: https://jaas.ai/cinder-ceph
-.. _glance: https://jaas.ai/glance
-.. _keystone: https://jaas.ai/keystone
-.. _neutron-gateway: https://jaas.ai/neutron-gateway
-.. _neutron-api: https://jaas.ai/neutron-api
-.. _neutron-openvswitch: https://jaas.ai/neutron-openvswitch
-.. _nova-cloud-controller: https://jaas.ai/nova-cloud-controller
-.. _nova-compute: https://jaas.ai/nova-compute
-.. _ntp: https://jaas.ai/ntp
-.. _openstack-dashboard: https://jaas.ai/openstack-dashboard
-.. _percona-cluster: https://jaas.ai/percona-cluster
-.. _placement: https://jaas.ai/placement
-.. _rabbitmq-server: https://jaas.ai/rabbitmq-server
+.. _ceph-mon: https://charmhub.io/ceph-mon
+.. _ceph-osd: https://charmhub.io/ceph-osd
+.. _ceph-radosgw: https://charmhub.io/ceph-radosgw
+.. _cinder: https://charmhub.io/cinder
+.. _cinder-ceph: https://charmhub.io/cinder-ceph
+.. _glance: https://charmhub.io/glance
+.. _keystone: https://charmhub.io/keystone
+.. _neutron-gateway: https://charmhub.io/neutron-gateway
+.. _neutron-api: https://charmhub.io/neutron-api
+.. _neutron-openvswitch: https://charmhub.io/neutron-openvswitch
+.. _nova-cloud-controller: https://charmhub.io/nova-cloud-controller
+.. _nova-compute: https://charmhub.io/nova-compute
+.. _ntp: https://charmhub.io/ntp
+.. _openstack-dashboard: https://charmhub.io/openstack-dashboard
+.. _percona-cluster: https://charmhub.io/percona-cluster
+.. _placement: https://charmhub.io/placement
+.. _rabbitmq-server: https://charmhub.io/rabbitmq-server

 .. BUGS
 .. _LP #1826888: https://bugs.launchpad.net/charm-deployment-guide/+bug/1826888
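Most of this hunk is a mechanical rewrite of jaas.ai charm link targets to their charmhub.io equivalents. A sketch of that substitution (the ``migrate_link_target`` helper is illustrative only; it covers just the charm links, while the juju.is documentation paths above were re-pointed to new URLs by hand):

```python
def migrate_link_target(line: str) -> str:
    """Rewrite a jaas.ai link target in an rst hyperlink line to charmhub.io."""
    return line.replace("https://jaas.ai/", "https://charmhub.io/")


print(migrate_link_target(".. _ceph-mon: https://jaas.ai/ceph-mon"))
# -> .. _ceph-mon: https://charmhub.io/ceph-mon
```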
@@ -11,11 +11,12 @@ MySQL, OVN, Swift, and RabbitMQ).

 The software versions used in this guide are as follows:

-* **Ubuntu 20.04 LTS (Focal)** for the MAAS server, Juju client, Juju
-  controller, and all cloud nodes (including containers)
-* **MAAS 3.0.0**
-* **Juju 2.9.15**
-* **OpenStack Xena**
+* **Ubuntu 20.04 LTS (Focal)** for the MAAS server, Juju client, and Juju
+  controller
+* **Ubuntu 22.04 LTS (Jammy)** for all cloud nodes (including containers)
+* **MAAS 3.1.0**
+* **Juju 2.9.29**
+* **OpenStack Yoga**

 Proceed to the :doc:`Install MAAS <install-maas>` page to begin your
 installation journey. Hardware requirements are also listed there.
Binary file not shown (image replaced; size before: 2.1 MiB, after: 62 KiB).