Update install pages to Victoria
Update from Ussuri to Victoria. Replace Swift with Ceph RADOS gateway.
Miscellaneous improvements.

Change-Id: I8aa466a87e45157ddae2fd6b697591ac5a57e2a9
@ -14,6 +14,9 @@ Domains, projects, users, and roles are a vital part of OpenStack operations.
For the non-admin case, we'll create a single domain with a single project and
single user.

The tasks on this page should be performed on the host where the Juju client is
installed.

Install the OpenStack clients
-----------------------------
@ -24,7 +27,6 @@ command line. Install them now:
   sudo snap install openstackclients --classic

Create the admin user environment
---------------------------------
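Setting up this environment typically amounts to sourcing a set of admin
credentials. A minimal sketch, assuming an ``openrc`` file based on the
`openstack-bundles`_ template linked at the end of this page:

.. code-block:: none

   source openrc
   env | grep OS_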
@ -109,14 +111,14 @@ Create an image and flavor
~~~~~~~~~~~~~~~~~~~~~~~~~~

Import a boot image into Glance to create server instances with. Here we import
a Bionic amd64 image and call it 'bionic x86_64':
a Focal amd64 image and call it 'focal x86_64':

.. code-block:: none

   curl http://cloud-images.ubuntu.com/bionic/current/bionic-server-cloudimg-amd64.img | \
   curl http://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64.img | \
     openstack image create --public --container-format bare --disk-format qcow2 \
     --property architecture=x86_64 --property hw_disk_bus=virtio \
     --property hw_vif_model=virtio "bionic x86_64"
     --property hw_vif_model=virtio "focal x86_64"

Create at least one flavor to define a hardware profile for new instances. Here
we create one called 'm1.micro':
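The flavor itself can be created with the clients installed earlier; the
resource sizes below are illustrative only:

.. code-block:: none

   openstack flavor create --vcpus 1 --ram 256 --disk 5 m1.micro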
@ -243,11 +245,11 @@ Perform a cloud query to ensure the user environment is functioning correctly:
.. code-block:: none

   openstack image list
   +--------------------------------------+---------------+--------+
   | ID                                   | Name          | Status |
   +--------------------------------------+---------------+--------+
   | 429f79c7-9ed9-4873-b6da-41580acd2d5f | bionic x86_64 | active |
   +--------------------------------------+---------------+--------+
   +--------------------------------------+--------------+--------+
   | ID                                   | Name         | Status |
   +--------------------------------------+--------------+--------+
   | 429f79c7-9ed9-4873-b6da-41580acd2d5f | focal x86_64 | active |
   +--------------------------------------+--------------+--------+

The image that was previously imported by the admin user should be returned.
@ -305,22 +307,15 @@ own rules. We do the latter by creating a group called 'Allow_SSH':
Create and access an instance
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. important::

   It has been observed in some newly-deployed clouds that the configuration of
   OVN remains incomplete, which prevents cloud instances from being created.
   The workaround is to restart the ``ovn-northd`` daemon on each ovn-central
   unit. See `LP #1895303`_ for details.

Determine the network ID of private network 'Network1' and then create an
instance called 'bionic-1':
instance called 'focal-1':

.. code-block:: none

   NET_ID=$(openstack network list | grep Network1 | awk '{ print $2 }')
   openstack server create --image 'bionic x86_64' --flavor m1.micro \
   openstack server create --image 'focal x86_64' --flavor m1.micro \
     --key-name User1-key --security-group Allow_SSH --nic net-id=$NET_ID \
     bionic-1
     focal-1

Request a floating IP address from the public network 'Pub_Net' and assign it
to a variable:
@ -329,11 +324,11 @@ to a variable:
   FLOATING_IP=$(openstack floating ip create -f value -c floating_ip_address Pub_Net)

Now add that floating IP address to the newly-created instance 'bionic-1':
Now add that floating IP address to the newly-created instance 'focal-1':

.. code-block:: none

   openstack server add floating ip bionic-1 $FLOATING_IP
   openstack server add floating ip focal-1 $FLOATING_IP

Ask for a listing of all instances within the context of the current project
('Project1'):
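A plain server listing is enough here:

.. code-block:: none

   openstack server list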
@ -346,11 +341,11 @@ Sample output:
.. code-block:: console

   +--------------------------------------+----------+--------+-----------------------------------+---------------+----------+
   | ID                                   | Name     | Status | Networks                          | Image         | Flavor   |
   +--------------------------------------+----------+--------+-----------------------------------+---------------+----------+
   | 9167b3e9-c653-43fc-858a-2d6f6da36daa | bionic-1 | ACTIVE | Network1=192.168.0.131, 10.0.8.10 | bionic x86_64 | m1.micro |
   +--------------------------------------+----------+--------+-----------------------------------+---------------+----------+
   +--------------------------------------+---------+--------+-----------------------------------+--------------+----------+
   | ID                                   | Name    | Status | Networks                          | Image        | Flavor   |
   +--------------------------------------+---------+--------+-----------------------------------+--------------+----------+
   | 9167b3e9-c653-43fc-858a-2d6f6da36daa | focal-1 | ACTIVE | Network1=192.168.0.131, 10.0.8.10 | focal x86_64 | m1.micro |
   +--------------------------------------+---------+--------+-----------------------------------+--------------+----------+

The first address listed is in the private network and the second one is in the
public network:
@ -359,7 +354,7 @@ You can monitor the booting of the instance with this command:
.. code-block:: none

   openstack console log show bionic-1
   openstack console log show focal-1

The instance is ready when the output contains:
@ -368,9 +363,9 @@ The instance is ready when the output contains:
   .
   .
   .
   Ubuntu 18.04.3 LTS bionic-1 ttyS0
   Ubuntu 20.04.1 LTS focal-1 ttyS0

   bionic-1 login:
   focal-1 login:

You can connect to the instance in this way:
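Connecting is normally done over SSH with the floating IP assigned above and
the private half of 'User1-key'; the key path shown here is an assumption:

.. code-block:: none

   ssh -i ~/.ssh/User1-key ubuntu@$FLOATING_IP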
@ -394,7 +389,6 @@ guidance.
.. LINKS
.. _openstack-bundles: https://github.com/openstack-charmers/openstack-bundles/blob/master/stable/shared/openrcv3_project
.. _Reserved IP range: https://maas.io/docs/concepts-and-terms#heading--ip-ranges
.. _Using OpenStack with Juju: https://jaas.ai/docs/openstack-cloud
.. _Using OpenStack with Juju: https://juju.is/docs/openstack-cloud

.. BUGS
.. _LP #1895303: https://bugs.launchpad.net/charm-ovn-central/+bug/1895303
@ -28,7 +28,7 @@ this via a cloud definition file, such as ``maas-cloud.yaml``:
   mymaas:
     type: maas
     auth-types: [oauth1]
     endpoint: http://10.0.0.3:5240/MAAS
     endpoint: http://10.0.0.2:5240/MAAS

We've called the cloud 'mymaas' and its endpoint is based on the IP address of
the MAAS system.
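With the definition file in place, the cloud can be registered with the Juju
client along these lines (file and cloud names as used above):

.. code-block:: none

   juju add-cloud --client mymaas maas-cloud.yaml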
@ -95,12 +95,12 @@ Create the model
----------------

The OpenStack deployment will be placed in its own Juju model for
organisational purposes. It will be called 'openstack'. Create the model, and
switch to it, with this one command:
organisational purposes. Create the model 'openstack' and specify our desired
series of 'focal':

.. code-block:: none

   juju add-model openstack
   juju add-model --config default-series=focal openstack

The output of the :command:`juju status` command summarises the Juju aspect of
the environment. It should now look very similar to this:
@ -108,7 +108,7 @@ the environment. It should now look very similar to this:
.. code-block:: none

   Model      Controller       Cloud/Region    Version  SLA          Timestamp
   openstack  maas-controller  mymaas/default  2.7.0    unsupported  04:28:49Z
   openstack  maas-controller  mymaas/default  2.8.6    unsupported  04:28:49Z

   Model "admin/openstack" is empty
@ -120,5 +120,5 @@ the OpenStack applications and adding relations between them. Go to
:doc:`Install OpenStack <install-openstack>` now.

.. LINKS
.. _Juju: https://jaas.ai
.. _Juju: https://juju.is
.. _MAAS: https://maas.io
@ -13,110 +13,106 @@ installed from the instructions given on the :doc:`Install OpenStack
.. code-block:: console

   Model      Controller       Cloud/Region    Version  SLA          Timestamp
   openstack  maas-controller  mymaas/default  2.8.1    unsupported  02:53:54Z
   openstack  maas-controller  mymaas/default  2.8.6    unsupported  01:12:49Z

   App  Version  Status  Scale  Charm  Store  Rev  OS  Notes
   ceph-mon 15.2.3 active 3 ceph-mon jujucharms 49 ubuntu
   ceph-osd 15.2.3 active 4 ceph-osd jujucharms 304 ubuntu
   cinder 16.1.0 active 1 cinder jujucharms 304 ubuntu
   cinder-ceph 16.1.0 active 1 cinder-ceph jujucharms 257 ubuntu
   cinder-mysql-router 8.0.21 active 1 mysql-router jujucharms 3 ubuntu
   dashboard-mysql-router 8.0.21 active 1 mysql-router jujucharms 3 ubuntu
   glance 20.0.0 active 1 glance jujucharms 299 ubuntu
   glance-mysql-router 8.0.21 active 1 mysql-router jujucharms 3 ubuntu
   keystone 17.0.0 active 1 keystone jujucharms 317 ubuntu
   keystone-mysql-router 8.0.21 active 1 mysql-router jujucharms 3 ubuntu
   mysql-innodb-cluster 8.0.21 active 3 mysql-innodb-cluster jujucharms 1 ubuntu
   ncc-mysql-router 8.0.21 active 1 mysql-router jujucharms 3 ubuntu
   neutron-api 16.0.0 active 1 neutron-api jujucharms 288 ubuntu
   neutron-api-mysql-router 8.0.21 active 1 mysql-router jujucharms 3 ubuntu
   neutron-api-plugin-ovn 16.0.0 active 1 neutron-api-plugin-ovn jujucharms 1 ubuntu
   nova-cloud-controller 21.0.0 active 1 nova-cloud-controller jujucharms 346 ubuntu
   nova-compute 21.0.0 active 3 nova-compute jujucharms 320 ubuntu
   ceph-mon 15.2.5 active 3 ceph-mon jujucharms 50 ubuntu
   ceph-osd 15.2.5 active 4 ceph-osd jujucharms 306 ubuntu
   ceph-radosgw 15.2.5 active 1 ceph-radosgw jujucharms 291 ubuntu
   cinder 17.0.0 active 1 cinder jujucharms 306 ubuntu
   cinder-ceph 17.0.0 active 1 cinder-ceph jujucharms 258 ubuntu
   cinder-mysql-router 8.0.22 active 1 mysql-router jujucharms 4 ubuntu
   dashboard-mysql-router 8.0.22 active 1 mysql-router jujucharms 4 ubuntu
   glance 21.0.0 active 1 glance jujucharms 301 ubuntu
   glance-mysql-router 8.0.22 active 1 mysql-router jujucharms 4 ubuntu
   keystone 18.0.0 active 1 keystone jujucharms 319 ubuntu
   keystone-mysql-router 8.0.22 active 1 mysql-router jujucharms 4 ubuntu
   mysql-innodb-cluster 8.0.22 active 3 mysql-innodb-cluster jujucharms 3 ubuntu
   ncc-mysql-router 8.0.22 active 1 mysql-router jujucharms 4 ubuntu
   neutron-api 17.0.0 active 1 neutron-api jujucharms 290 ubuntu
   neutron-api-mysql-router 8.0.22 active 1 mysql-router jujucharms 4 ubuntu
   neutron-api-plugin-ovn 17.0.0 active 1 neutron-api-plugin-ovn jujucharms 2 ubuntu
   nova-cloud-controller 22.0.0 active 1 nova-cloud-controller jujucharms 349 ubuntu
   nova-compute 22.0.0 active 3 nova-compute jujucharms 323 ubuntu
   ntp 3.5 active 4 ntp jujucharms 41 ubuntu
   openstack-dashboard 18.3.2 active 1 openstack-dashboard jujucharms 305 ubuntu
   ovn-central 20.03.0 active 3 ovn-central jujucharms 1 ubuntu
   ovn-chassis 20.03.0 active 3 ovn-chassis jujucharms 4 ubuntu
   placement 3.0.0 active 1 placement jujucharms 12 ubuntu
   placement-mysql-router 8.0.21 active 1 mysql-router jujucharms 3 ubuntu
   rabbitmq-server 3.8.2 active 1 rabbitmq-server jujucharms 104 ubuntu
   swift-proxy 2.25.0 active 1 swift-proxy jujucharms 94 ubuntu
   swift-storage 2.25.0 active 3 swift-storage jujucharms 271 ubuntu
   vault 1.1.1 active 1 vault jujucharms 40 ubuntu
   vault-mysql-router 8.0.21 active 1 mysql-router jujucharms 3 ubuntu
   openstack-dashboard 18.6.1 active 1 openstack-dashboard jujucharms 309 ubuntu
   ovn-central 20.03.1 active 3 ovn-central jujucharms 2 ubuntu
   ovn-chassis 20.03.1 active 3 ovn-chassis jujucharms 7 ubuntu
   placement 4.0.0 active 1 placement jujucharms 15 ubuntu
   placement-mysql-router 8.0.22 active 1 mysql-router jujucharms 4 ubuntu
   rabbitmq-server 3.8.2 active 1 rabbitmq-server jujucharms 106 ubuntu
   vault 1.5.4 active 1 vault jujucharms 41 ubuntu
   vault-mysql-router 8.0.22 active 1 mysql-router jujucharms 4 ubuntu

   Unit  Workload  Agent  Machine  Public address  Ports  Message
   ceph-mon/0* active idle 0/lxd/3 10.0.0.227 Unit is ready and clustered
   ceph-mon/1 active idle 1/lxd/4 10.0.0.226 Unit is ready and clustered
   ceph-mon/2 active idle 2/lxd/3 10.0.0.225 Unit is ready and clustered
   ceph-osd/0* active idle 0 10.0.0.206 Unit is ready (1 OSD)
   ntp/1 active idle 10.0.0.206 123/udp chrony: Ready
   ceph-osd/1 active idle 1 10.0.0.208 Unit is ready (1 OSD)
   ntp/0* active idle 10.0.0.208 123/udp chrony: Ready
   ceph-osd/2 active idle 2 10.0.0.209 Unit is ready (1 OSD)
   ntp/3 active idle 10.0.0.209 123/udp chrony: Ready
   ceph-osd/3 active idle 3 10.0.0.213 Unit is ready (1 OSD)
   ntp/2 active idle 10.0.0.213 123/udp chrony: Ready
   cinder/0* active idle 1/lxd/5 10.0.0.228 8776/tcp Unit is ready
   cinder-ceph/0* active idle 10.0.0.228 Unit is ready
   cinder-mysql-router/0* active idle 10.0.0.228 Unit is ready
   glance/0* active idle 3/lxd/3 10.0.0.224 9292/tcp Unit is ready
   glance-mysql-router/0* active idle 10.0.0.224 Unit is ready
   keystone/0* active idle 0/lxd/2 10.0.0.223 5000/tcp Unit is ready
   keystone-mysql-router/0* active idle 10.0.0.223 Unit is ready
   mysql-innodb-cluster/0* active idle 0/lxd/0 10.0.0.211 Unit is ready: Mode: R/W
   mysql-innodb-cluster/1 active idle 1/lxd/0 10.0.0.212 Unit is ready: Mode: R/O
   mysql-innodb-cluster/2 active idle 2/lxd/0 10.0.0.214 Unit is ready: Mode: R/O
   neutron-api/0* active idle 1/lxd/2 10.0.0.220 9696/tcp Unit is ready
   neutron-api-mysql-router/0* active idle 10.0.0.220 Unit is ready
   neutron-api-plugin-ovn/0* active idle 10.0.0.220 Unit is ready
   nova-cloud-controller/0* active idle 3/lxd/1 10.0.0.216 8774/tcp,8775/tcp Unit is ready
   ncc-mysql-router/0* active idle 10.0.0.216 Unit is ready
   nova-compute/0* active idle 1 10.0.0.208 Unit is ready
   ovn-chassis/1 active idle 10.0.0.208 Unit is ready
   nova-compute/1 active idle 2 10.0.0.209 Unit is ready
   ovn-chassis/0* active idle 10.0.0.209 Unit is ready
   nova-compute/2 active idle 3 10.0.0.213 Unit is ready
   ovn-chassis/2 active idle 10.0.0.213 Unit is ready
   openstack-dashboard/0* active idle 1/lxd/3 10.0.0.210 80/tcp,443/tcp Unit is ready
   dashboard-mysql-router/0* active idle 10.0.0.210 Unit is ready
   ovn-central/0* active idle 0/lxd/1 10.0.0.218 6641/tcp,6642/tcp Unit is ready (leader: ovnnb_db, ovnsb_db northd: active)
   ovn-central/1 active idle 1/lxd/1 10.0.0.221 6641/tcp,6642/tcp Unit is ready
   ovn-central/2 active idle 2/lxd/1 10.0.0.219 6641/tcp,6642/tcp Unit is ready
   placement/0* active idle 3/lxd/2 10.0.0.215 8778/tcp Unit is ready
   placement-mysql-router/0* active idle 10.0.0.215 Unit is ready
   rabbitmq-server/0* active idle 2/lxd/2 10.0.0.222 5672/tcp Unit is ready
   swift-proxy/0* active idle 3/lxd/4 10.0.0.231 8080/tcp Unit is ready
   swift-storage/0* active idle 0 10.0.0.206 Unit is ready
   swift-storage/1 active idle 2 10.0.0.209 Unit is ready
   swift-storage/2 active idle 3 10.0.0.213 Unit is ready
   vault/0* active idle 3/lxd/0 10.0.0.217 8200/tcp Unit is ready (active: true, mlock: disabled)
   vault-mysql-router/0* active idle 10.0.0.217 Unit is ready
   ceph-mon/0* active idle 0/lxd/3 10.0.0.191 Unit is ready and clustered
   ceph-mon/1 active idle 1/lxd/3 10.0.0.189 Unit is ready and clustered
   ceph-mon/2 active idle 2/lxd/4 10.0.0.190 Unit is ready and clustered
   ceph-osd/0* active idle 0 10.0.0.171 Unit is ready (1 OSD)
   ntp/1 active idle 10.0.0.171 123/udp chrony: Ready
   ceph-osd/1 active idle 1 10.0.0.172 Unit is ready (1 OSD)
   ntp/0* active idle 10.0.0.172 123/udp chrony: Ready
   ceph-osd/2 active idle 2 10.0.0.173 Unit is ready (1 OSD)
   ntp/3 active idle 10.0.0.173 123/udp chrony: Ready
   ceph-osd/3 active idle 3 10.0.0.174 Unit is ready (1 OSD)
   ntp/2 active idle 10.0.0.174 123/udp chrony: Ready
   ceph-radosgw/0* active idle 0/lxd/4 10.0.0.193 80/tcp Unit is ready
   cinder/0* active idle 1/lxd/4 10.0.0.192 8776/tcp Unit is ready
   cinder-ceph/0* active idle 10.0.0.192 Unit is ready
   cinder-mysql-router/0* active idle 10.0.0.192 Unit is ready
   glance/0* active idle 3/lxd/3 10.0.0.188 9292/tcp Unit is ready
   glance-mysql-router/0* active idle 10.0.0.188 Unit is ready
   keystone/0* active idle 0/lxd/2 10.0.0.183 5000/tcp Unit is ready
   keystone-mysql-router/0* active idle 10.0.0.183 Unit is ready
   mysql-innodb-cluster/0* active idle 0/lxd/0 10.0.0.175 Unit is ready: Mode: R/W
   mysql-innodb-cluster/1 active idle 1/lxd/0 10.0.0.176 Unit is ready: Mode: R/O
   mysql-innodb-cluster/2 active idle 2/lxd/0 10.0.0.177 Unit is ready: Mode: R/O
   neutron-api/0* active idle 1/lxd/2 10.0.0.182 9696/tcp Unit is ready
   neutron-api-mysql-router/0* active idle 10.0.0.182 Unit is ready
   neutron-api-plugin-ovn/0* active idle 10.0.0.182 Unit is ready
   nova-cloud-controller/0* active idle 3/lxd/1 10.0.0.185 8774/tcp,8775/tcp Unit is ready
   ncc-mysql-router/0* active idle 10.0.0.185 Unit is ready
   nova-compute/0* active idle 1 10.0.0.172 Unit is ready
   ovn-chassis/0* active idle 10.0.0.172 Unit is ready
   nova-compute/1 active idle 2 10.0.0.173 Unit is ready
   ovn-chassis/2 active idle 10.0.0.173 Unit is ready
   nova-compute/2 active idle 3 10.0.0.174 Unit is ready
   ovn-chassis/1 active idle 10.0.0.174 Unit is ready
   openstack-dashboard/0* active idle 2/lxd/3 10.0.0.187 80/tcp,443/tcp Unit is ready
   dashboard-mysql-router/0* active idle 10.0.0.187 Unit is ready
   ovn-central/0 active idle 0/lxd/1 10.0.0.181 6641/tcp,6642/tcp Unit is ready (leader: ovnnb_db)
   ovn-central/1 active idle 1/lxd/1 10.0.0.179 6641/tcp,6642/tcp Unit is ready
   ovn-central/2* active idle 2/lxd/1 10.0.0.180 6641/tcp,6642/tcp Unit is ready (leader: ovnsb_db northd: active)
   placement/0* active idle 3/lxd/2 10.0.0.186 8778/tcp Unit is ready
   placement-mysql-router/0* active idle 10.0.0.186 Unit is ready
   rabbitmq-server/0* active idle 2/lxd/2 10.0.0.184 5672/tcp Unit is ready
   vault/0* active idle 3/lxd/0 10.0.0.178 8200/tcp Unit is ready (active: true, mlock: disabled)
   vault-mysql-router/0* active idle 10.0.0.178 Unit is ready

   Machine  State  DNS  Inst id  Series  AZ  Message
   0 started 10.0.0.206 node1 focal default Deployed
   0/lxd/0 started 10.0.0.211 juju-6f106b-0-lxd-0 focal default Container started
   0/lxd/1 started 10.0.0.218 juju-6f106b-0-lxd-1 focal default Container started
   0/lxd/2 started 10.0.0.223 juju-6f106b-0-lxd-2 focal default Container started
   0/lxd/3 started 10.0.0.227 juju-6f106b-0-lxd-3 focal default Container started
   1 started 10.0.0.208 node2 focal default Deployed
   1/lxd/0 started 10.0.0.212 juju-6f106b-1-lxd-0 focal default Container started
   1/lxd/1 started 10.0.0.221 juju-6f106b-1-lxd-1 focal default Container started
   1/lxd/2 started 10.0.0.220 juju-6f106b-1-lxd-2 focal default Container started
   1/lxd/3 started 10.0.0.210 juju-6f106b-1-lxd-3 focal default Container started
   1/lxd/4 started 10.0.0.226 juju-6f106b-1-lxd-4 focal default Container started
   1/lxd/5 started 10.0.0.228 juju-6f106b-1-lxd-5 focal default Container started
   2 started 10.0.0.209 node3 focal default Deployed
   2/lxd/0 started 10.0.0.214 juju-6f106b-2-lxd-0 focal default Container started
   2/lxd/1 started 10.0.0.219 juju-6f106b-2-lxd-1 focal default Container started
   2/lxd/2 started 10.0.0.222 juju-6f106b-2-lxd-2 focal default Container started
   2/lxd/3 started 10.0.0.225 juju-6f106b-2-lxd-3 focal default Container started
   3 started 10.0.0.213 node4 focal default Deployed
   3/lxd/0 started 10.0.0.217 juju-6f106b-3-lxd-0 focal default Container started
   3/lxd/1 started 10.0.0.216 juju-6f106b-3-lxd-1 focal default Container started
   3/lxd/2 started 10.0.0.215 juju-6f106b-3-lxd-2 focal default Container started
   3/lxd/3 started 10.0.0.224 juju-6f106b-3-lxd-3 focal default Container started
   3/lxd/4 started 10.0.0.231 juju-6f106b-3-lxd-4 focal default Container started
   0 started 10.0.0.171 node2 focal default Deployed
   0/lxd/0 started 10.0.0.175 juju-bdbf2c-0-lxd-0 focal default Container started
   0/lxd/1 started 10.0.0.181 juju-bdbf2c-0-lxd-1 focal default Container started
   0/lxd/2 started 10.0.0.183 juju-bdbf2c-0-lxd-2 focal default Container started
   0/lxd/3 started 10.0.0.191 juju-bdbf2c-0-lxd-3 focal default Container started
   0/lxd/4 started 10.0.0.193 juju-bdbf2c-0-lxd-4 focal default Container started
   1 started 10.0.0.172 node1 focal default Deployed
   1/lxd/0 started 10.0.0.176 juju-bdbf2c-1-lxd-0 focal default Container started
   1/lxd/1 started 10.0.0.179 juju-bdbf2c-1-lxd-1 focal default Container started
   1/lxd/2 started 10.0.0.182 juju-bdbf2c-1-lxd-2 focal default Container started
   1/lxd/3 started 10.0.0.189 juju-bdbf2c-1-lxd-3 focal default Container started
   1/lxd/4 started 10.0.0.192 juju-bdbf2c-1-lxd-4 focal default Container started
   2 started 10.0.0.173 node3 focal default Deployed
   2/lxd/0 started 10.0.0.177 juju-bdbf2c-2-lxd-0 focal default Container started
   2/lxd/1 started 10.0.0.180 juju-bdbf2c-2-lxd-1 focal default Container started
   2/lxd/2 started 10.0.0.184 juju-bdbf2c-2-lxd-2 focal default Container started
   2/lxd/3 started 10.0.0.187 juju-bdbf2c-2-lxd-3 focal default Container started
   2/lxd/4 started 10.0.0.190 juju-bdbf2c-2-lxd-4 focal default Container started
   3 started 10.0.0.174 node4 focal default Deployed
   3/lxd/0 started 10.0.0.178 juju-bdbf2c-3-lxd-0 focal default Container started
   3/lxd/1 started 10.0.0.185 juju-bdbf2c-3-lxd-1 focal default Container started
   3/lxd/2 started 10.0.0.186 juju-bdbf2c-3-lxd-2 focal default Container started
   3/lxd/3 started 10.0.0.188 juju-bdbf2c-3-lxd-3 focal default Container started

   Relation provider  Requirer  Interface  Type  Message
   ceph-mon:client cinder-ceph:ceph ceph-client regular
@ -124,7 +120,9 @@ installed from the instructions given on the :doc:`Install OpenStack
   ceph-mon:client nova-compute:ceph ceph-client regular
   ceph-mon:mon ceph-mon:mon ceph peer
   ceph-mon:osd ceph-osd:mon ceph-osd regular
   ceph-mon:radosgw ceph-radosgw:mon ceph-radosgw regular
   ceph-osd:juju-info ntp:juju-info juju-info subordinate
   ceph-radosgw:cluster ceph-radosgw:cluster swift-ha peer
   cinder-ceph:ceph-access nova-compute:ceph-access cinder-ceph-key regular
   cinder-ceph:storage-backend cinder:storage-backend cinder-backend subordinate
   cinder-mysql-router:shared-db cinder:shared-db mysql-shared subordinate
@ -144,7 +142,6 @@ installed from the instructions given on the :doc:`Install OpenStack
   keystone:identity-service nova-cloud-controller:identity-service keystone regular
   keystone:identity-service openstack-dashboard:identity-service keystone regular
   keystone:identity-service placement:identity-service keystone regular
   keystone:identity-service swift-proxy:identity-service keystone regular
   mysql-innodb-cluster:cluster mysql-innodb-cluster:cluster mysql-innodb-cluster peer
   mysql-innodb-cluster:coordinator mysql-innodb-cluster:coordinator coordinator peer
   mysql-innodb-cluster:db-router cinder-mysql-router:db-router mysql-router regular
@ -177,8 +174,6 @@ installed from the instructions given on the :doc:`Install OpenStack
   rabbitmq-server:amqp nova-cloud-controller:amqp rabbitmq regular
   rabbitmq-server:amqp nova-compute:amqp rabbitmq regular
   rabbitmq-server:cluster rabbitmq-server:cluster rabbitmq-ha peer
   swift-proxy:cluster swift-proxy:cluster swift-ha peer
   swift-storage:swift-storage swift-proxy:swift-storage swift regular
   vault-mysql-router:shared-db vault:shared-db mysql-shared subordinate
   vault:certificates cinder:certificates tls-certificates regular
   vault:certificates glance:certificates tls-certificates regular
@ -51,21 +51,22 @@ OpenStack release
-----------------

.. TEMPLATE
   As the guide's :doc:`Overview <index>` section states, OpenStack Ussuri will
   be deployed atop Ubuntu 18.04 LTS (Bionic) cloud nodes. In order to achieve
   this a cloud archive release of 'cloud:bionic-train' will be used during the
   install of each OpenStack application. Note that some applications are not
   part of the OpenStack project per se and therefore do not apply
   (exceptionally, Ceph applications do use this method). Not using a more
   recent OpenStack release in this way will result in a Queens deployment
   (i.e. Queens is in the Ubuntu package archive for Bionic).
   As the :doc:`Overview <install-overview>` of the Installation section
   states, OpenStack Ussuri will be deployed atop Ubuntu 20.04 LTS (Focal)
   cloud nodes. In order to achieve this the default package archive ("distro")
   for the cloud nodes will be used during the install of each OpenStack
   application. Note that some applications are not part of the OpenStack
   project per se and therefore do not apply (exceptionally, Ceph applications
   do use this method).

As the guide's :doc:`Overview <index>` section states, OpenStack Ussuri will be
deployed atop Ubuntu 20.04 LTS (Focal) cloud nodes. In order to achieve this
the default package archive ("distro") for the cloud nodes will be used during
the install of each OpenStack application. Note that some applications are not
part of the OpenStack project per se and therefore do not apply (exceptionally,
Ceph applications do use this method).
As the :doc:`Overview <install-overview>` of the Installation section states,
OpenStack Victoria will be deployed atop Ubuntu 20.04 LTS (Focal) cloud nodes.
In order to achieve this a cloud archive release of 'cloud:focal-victoria' will
be used during the install of each OpenStack application. Note that some
applications are not part of the OpenStack project per se and therefore do not
apply (exceptionally, Ceph applications do use this method). Not using a more
recent OpenStack release in this way will result in an Ussuri deployment (i.e.
Ussuri is in the Ubuntu package archive for Focal).

See :ref:`Perform the upgrade <perform_the_upgrade>` in the :doc:`OpenStack
Upgrades <app-upgrade-openstack>` appendix for more details on cloud
@ -74,7 +75,7 @@ archive releases and how they are used when upgrading OpenStack.
.. important::

   The chosen OpenStack release may impact the installation and configuration
   instructions. **This guide assumes that OpenStack Ussuri is being
   instructions. **This guide assumes that OpenStack Victoria is being
   deployed.**

Installation progress
@ -130,10 +131,10 @@ contains the configuration.
   ceph-osd:
     osd-devices: /dev/sdb
     source: distro
     source: cloud:focal-victoria

To deploy the application we'll make use of the 'compute' tag we placed on each
of these nodes on the :doc:`Install MAAS <install-maas>` page.
To deploy the application we'll make use of the 'compute' tag that we placed on
each of these nodes on the :doc:`Install MAAS <install-maas>` page:

.. code-block:: none
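
   # Sketch only; the exact command is not shown here. Deploying ceph-osd to
   # the four 'compute'-tagged nodes would look roughly like:
   juju deploy -n 4 --config ceph-osd.yaml --constraints tags=compute ceph-osd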
@ -164,7 +165,7 @@ charm. We'll then scale-out the application to two other machines. File
     enable-live-migration: true
     enable-resize: true
     migration-auth-type: ssh
     openstack-origin: distro
     openstack-origin: cloud:focal-victoria

The initial node must be targeted by machine since there are no more free Juju
machines (MAAS nodes) available. This means we're placing multiple services on
@ -182,30 +183,6 @@ our nodes. We've chosen machines 1, 2, and 3:
format will require manual image conversion for each instance. See bug `LP
#1826888`_.

Swift storage
~~~~~~~~~~~~~

The swift-storage application is deployed to three nodes (machines 0, 2, and
3) with the `swift-storage`_ charm. File ``swift-storage.yaml`` contains the
configuration:

.. code-block:: yaml

   swift-storage:
     block-device: sdc
     overwrite: "true"
     openstack-origin: distro

This configuration points to block device ``/dev/sdc``. Adjust according to
your available hardware. In a production environment, avoid using a loopback
device.

Deploy to the three machines:

.. code-block:: none

   juju deploy -n 3 --to 0,2,3 --config swift-storage.yaml swift-storage

MySQL InnoDB Cluster
~~~~~~~~~~~~~~~~~~~~
@ -220,9 +197,8 @@ Vault
~~~~~

Vault is necessary for managing the TLS certificates that will enable encrypted
communication between cloud applications.

Deploy it in this way:
communication between cloud applications. It will be containerised on machine
3:

.. code-block:: none
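
   # Sketch only; the exact command is not shown here. Containerising Vault on
   # machine 3 would look roughly like:
   juju deploy --to lxd:3 vault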
@ -255,21 +231,18 @@ status` should look similar to this:
.. code-block:: console

   Unit  Workload  Agent  Machine  Public address  Ports  Message
   ceph-osd/0* blocked idle 0 10.0.0.206 Missing relation: monitor
   ceph-osd/1 blocked idle 1 10.0.0.208 Missing relation: monitor
   ceph-osd/2 blocked idle 2 10.0.0.209 Missing relation: monitor
   ceph-osd/3 blocked idle 3 10.0.0.213 Missing relation: monitor
   mysql-innodb-cluster/0* active idle 0/lxd/0 10.0.0.211 Unit is ready: Mode: R/W
   mysql-innodb-cluster/1 active idle 1/lxd/0 10.0.0.212 Unit is ready: Mode: R/O
   mysql-innodb-cluster/2 active idle 2/lxd/0 10.0.0.214 Unit is ready: Mode: R/O
   nova-compute/0* blocked idle 1 10.0.0.208 Missing relations: image, messaging
   nova-compute/1 blocked idle 2 10.0.0.209 Missing relations: image, messaging
   nova-compute/2 blocked idle 3 10.0.0.213 Missing relations: messaging, image
   swift-storage/0* blocked idle 0 10.0.0.206 Missing relations: proxy
   swift-storage/1 blocked idle 2 10.0.0.209 Missing relations: proxy
   swift-storage/2 blocked idle 3 10.0.0.213 Missing relations: proxy
   vault/0* active idle 3/lxd/0 10.0.0.217 8200/tcp Unit is ready (active: true, mlock: disabled)
   vault-mysql-router/0* active idle 10.0.0.217 Unit is ready
   ceph-osd/0* blocked idle 0 10.0.0.171 Missing relation: monitor
   ceph-osd/1 blocked idle 1 10.0.0.172 Missing relation: monitor
   ceph-osd/2 blocked idle 2 10.0.0.173 Missing relation: monitor
   ceph-osd/3 blocked idle 3 10.0.0.174 Missing relation: monitor
   mysql-innodb-cluster/0* active idle 0/lxd/0 10.0.0.175 Unit is ready: Mode: R/W
   mysql-innodb-cluster/1 active idle 1/lxd/0 10.0.0.176 Unit is ready: Mode: R/O
   mysql-innodb-cluster/2 active idle 2/lxd/0 10.0.0.177 Unit is ready: Mode: R/O
   nova-compute/0* blocked idle 1 10.0.0.172 Missing relations: messaging, image
   nova-compute/1 blocked idle 2 10.0.0.173 Missing relations: messaging, image
   nova-compute/2 blocked idle 3 10.0.0.174 Missing relations: messaging, image
   vault/0* active idle 3/lxd/0 10.0.0.178 8200/tcp Unit is ready (active: true, mlock: disabled)
   vault-mysql-router/0* active idle 10.0.0.178 Unit is ready

.. _neutron_networking:
@ -293,9 +266,9 @@ File ``neutron.yaml`` contains the configuration necessary for three of them:
   neutron-api:
     neutron-security-groups: true
     flat-network-providers: physnet1
     openstack-origin: distro
     openstack-origin: cloud:focal-victoria
   ovn-central:
     source: distro
     source: cloud:focal-victoria
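The file's ovn-chassis stanza is not shown in this excerpt; as a sketch, such a
mapping typically resembles the following (the bridge and interface names are
assumptions):

.. code-block:: yaml

   ovn-chassis:
     bridge-interface-mappings: br-ex:eth1
     ovn-bridge-mappings: physnet1:br-ex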
The ``bridge-interface-mappings`` setting refers to a network interface that
the OVN Chassis will bind to. In the above example it is 'eth1' and it should
@ -356,13 +329,11 @@ Join neutron-api to the cloud database:
Keystone
~~~~~~~~

The keystone application will be containerised on machine 0.

To deploy:
The keystone application will be containerised on machine 0:

.. code-block:: none

   juju deploy --to lxd:0 --config openstack-origin=distro keystone
   juju deploy --to lxd:0 --config openstack-origin=cloud:focal-victoria keystone

Join keystone to the cloud database:
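The join follows the same mysql-router pattern used for the other services on
this page; a sketch:

.. code-block:: none

   juju deploy mysql-router keystone-mysql-router
   juju add-relation keystone-mysql-router:db-router mysql-innodb-cluster:db-router
   juju add-relation keystone-mysql-router:shared-db keystone:shared-db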
@ -402,33 +373,30 @@ look similar to this:
.. code-block:: console

   Unit  Workload  Agent  Machine  Public address  Ports  Message
   ceph-osd/0* blocked idle 0 10.0.0.206 Missing relation: monitor
   ceph-osd/1 blocked idle 1 10.0.0.208 Missing relation: monitor
   ceph-osd/2 blocked idle 2 10.0.0.209 Missing relation: monitor
   ceph-osd/3 blocked idle 3 10.0.0.213 Missing relation: monitor
   keystone/0* active idle 0/lxd/2 10.0.0.223 5000/tcp Unit is ready
   keystone-mysql-router/0* active idle 10.0.0.223 Unit is ready
   mysql-innodb-cluster/0* active idle 0/lxd/0 10.0.0.211 Unit is ready: Mode: R/W
   mysql-innodb-cluster/1 active idle 1/lxd/0 10.0.0.212 Unit is ready: Mode: R/O
   mysql-innodb-cluster/2 active idle 2/lxd/0 10.0.0.214 Unit is ready: Mode: R/O
   neutron-api/0* active idle 1/lxd/2 10.0.0.220 9696/tcp Unit is ready
   neutron-api-mysql-router/0* active idle 10.0.0.220 Unit is ready
   neutron-api-plugin-ovn/0* active idle 10.0.0.220 Unit is ready
   nova-compute/0* blocked idle 1 10.0.0.208 Missing relations: image
   ovn-chassis/1 active idle 10.0.0.208 Unit is ready
   nova-compute/1 blocked idle 2 10.0.0.209 Missing relations: image
   ovn-chassis/0* active idle 10.0.0.209 Unit is ready
   nova-compute/2 blocked idle 3 10.0.0.213 Missing relations: image
   ovn-chassis/2 active idle 10.0.0.213 Unit is ready
   ovn-central/0* active idle 0/lxd/1 10.0.0.218 6641/tcp,6642/tcp Unit is ready (leader: ovnnb_db, ovnsb_db northd: active)
   ovn-central/1 active idle 1/lxd/1 10.0.0.221 6641/tcp,6642/tcp Unit is ready
   ovn-central/2 active idle 2/lxd/1 10.0.0.219 6641/tcp,6642/tcp Unit is ready
   rabbitmq-server/0* active idle 2/lxd/2 10.0.0.222 5672/tcp Unit is ready
   swift-storage/0* blocked idle 0 10.0.0.206 Missing relations: proxy
   swift-storage/1 blocked idle 2 10.0.0.209 Missing relations: proxy
   swift-storage/2 blocked idle 3 10.0.0.213 Missing relations: proxy
   vault/0* active idle 3/lxd/0 10.0.0.217 8200/tcp Unit is ready (active: true, mlock: disabled)
   vault-mysql-router/0* active idle 10.0.0.217 Unit is ready
   ceph-osd/0* blocked idle 0 10.0.0.171 Missing relation: monitor
   ceph-osd/1 blocked idle 1 10.0.0.172 Missing relation: monitor
   ceph-osd/2 blocked idle 2 10.0.0.173 Missing relation: monitor
   ceph-osd/3 blocked idle 3 10.0.0.174 Missing relation: monitor
   keystone/0* active idle 0/lxd/2 10.0.0.183 5000/tcp Unit is ready
   keystone-mysql-router/0* active idle 10.0.0.183 Unit is ready
   mysql-innodb-cluster/0* active idle 0/lxd/0 10.0.0.175 Unit is ready: Mode: R/W
   mysql-innodb-cluster/1 active idle 1/lxd/0 10.0.0.176 Unit is ready: Mode: R/O
   mysql-innodb-cluster/2 active idle 2/lxd/0 10.0.0.177 Unit is ready: Mode: R/O
   neutron-api/0* active idle 1/lxd/2 10.0.0.182 9696/tcp Unit is ready
   neutron-api-mysql-router/0* active idle 10.0.0.182 Unit is ready
   neutron-api-plugin-ovn/0* active idle 10.0.0.182 Unit is ready
   nova-compute/0* blocked idle 1 10.0.0.172 Missing relations: image
   ovn-chassis/0* active idle 10.0.0.172 Unit is ready
   nova-compute/1 blocked idle 2 10.0.0.173 Missing relations: image
   ovn-chassis/2 active idle 10.0.0.173 Unit is ready
   nova-compute/2 blocked idle 3 10.0.0.174 Missing relations: image
   ovn-chassis/1 active idle 10.0.0.174 Unit is ready
   ovn-central/0 active idle 0/lxd/1 10.0.0.181 6641/tcp,6642/tcp Unit is ready
   ovn-central/1 active idle 1/lxd/1 10.0.0.179 6641/tcp,6642/tcp Unit is ready
   ovn-central/2* active idle 2/lxd/1 10.0.0.180 6641/tcp,6642/tcp Unit is ready (leader: ovnnb_db, ovnsb_db northd: active)
   rabbitmq-server/0* active idle 2/lxd/2 10.0.0.184 5672/tcp Unit is ready
   vault/0* active idle 3/lxd/0 10.0.0.178 8200/tcp Unit is ready (active: true, mlock: disabled)
   vault-mysql-router/0* active idle 10.0.0.178 Unit is ready

Nova cloud controller
~~~~~~~~~~~~~~~~~~~~~
@ -442,7 +410,7 @@ the configuration:
   nova-cloud-controller:
     network-manager: Neutron
     openstack-origin: distro
     openstack-origin: cloud:focal-victoria

To deploy:
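Going by the unit placement shown earlier (nova-cloud-controller in a container
on machine 3), the deploy step is roughly the following; the configuration file
name is an assumption:

.. code-block:: none

   juju deploy --to lxd:3 --config ncc.yaml nova-cloud-controller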
@ -477,14 +445,12 @@ Five additional relations can be added at this time:
Placement
~~~~~~~~~

The placement application will be containerised on machine 2 with the
`placement`_ charm.

To deploy:
The placement application will be containerised on machine 3 with the
`placement`_ charm:

.. code-block:: none

   juju deploy --to lxd:3 --config openstack-origin=distro placement
   juju deploy --to lxd:3 --config openstack-origin=cloud:focal-victoria placement

Join placement to the cloud database:
@ -506,13 +472,11 @@ OpenStack dashboard
~~~~~~~~~~~~~~~~~~~

The openstack-dashboard application (Horizon) will be containerised on machine
1 with the `openstack-dashboard`_ charm.

To deploy:
2 with the `openstack-dashboard`_ charm:

.. code-block:: none

   juju deploy --to lxd:1 --config openstack-origin=distro openstack-dashboard
   juju deploy --to lxd:2 --config openstack-origin=cloud:focal-victoria openstack-dashboard

Join openstack-dashboard to the cloud database:
@ -538,14 +502,12 @@ Two additional relations are required:
Glance
~~~~~~

The glance application will be containerised on machine 2 with the `glance`_
charm.

To deploy:
The glance application will be containerised on machine 3 with the `glance`_
charm:

.. code-block:: none

   juju deploy --to lxd:3 --config openstack-origin=distro glance
   juju deploy --to lxd:3 --config openstack-origin=cloud:focal-victoria glance

Join glance to the cloud database:
@ -570,53 +532,48 @@ look similar to this:
.. code-block:: console

   Unit  Workload  Agent  Machine  Public address  Ports  Message
   ceph-osd/0* blocked idle 0 10.0.0.206 Missing relation: monitor
   ceph-osd/1 blocked idle 1 10.0.0.208 Missing relation: monitor
   ceph-osd/2 blocked idle 2 10.0.0.209 Missing relation: monitor
   ceph-osd/3 blocked idle 3 10.0.0.213 Missing relation: monitor
   glance/0* active idle 3/lxd/3 10.0.0.224 9292/tcp Unit is ready
   glance-mysql-router/0* active idle 10.0.0.224 Unit is ready
   keystone/0* active idle 0/lxd/2 10.0.0.223 5000/tcp Unit is ready
   keystone-mysql-router/0* active idle 10.0.0.223 Unit is ready
   mysql-innodb-cluster/0* active idle 0/lxd/0 10.0.0.211 Unit is ready: Mode: R/W
   mysql-innodb-cluster/1 active idle 1/lxd/0 10.0.0.212 Unit is ready: Mode: R/O
   mysql-innodb-cluster/2 active idle 2/lxd/0 10.0.0.214 Unit is ready: Mode: R/O
   neutron-api/0* active idle 1/lxd/2 10.0.0.220 9696/tcp Unit is ready
   neutron-api-mysql-router/0* active idle 10.0.0.220 Unit is ready
   neutron-api-plugin-ovn/0* active idle 10.0.0.220 Unit is ready
   nova-cloud-controller/0* active idle 3/lxd/1 10.0.0.216 8774/tcp,8775/tcp Unit is ready
   ncc-mysql-router/0* active idle 10.0.0.216 Unit is ready
   nova-compute/0* active idle 1 10.0.0.208 Unit is ready
   ovn-chassis/1 active idle 10.0.0.208 Unit is ready
   nova-compute/1 active idle 2 10.0.0.209 Unit is ready
   ovn-chassis/0* active idle 10.0.0.209 Unit is ready
   nova-compute/2 active idle 3 10.0.0.213 Unit is ready
   ovn-chassis/2 active idle 10.0.0.213 Unit is ready
   openstack-dashboard/0* active idle 1/lxd/3 10.0.0.210 80/tcp,443/tcp Unit is ready
   dashboard-mysql-router/0* active idle 10.0.0.210 Unit is ready
   ovn-central/0* active idle 0/lxd/1 10.0.0.218 6641/tcp,6642/tcp Unit is ready (leader: ovnnb_db, ovnsb_db northd: active)
   ovn-central/1 active idle 1/lxd/1 10.0.0.221 6641/tcp,6642/tcp Unit is ready
   ovn-central/2 active idle 2/lxd/1 10.0.0.219 6641/tcp,6642/tcp Unit is ready
   placement/0* active idle 3/lxd/2 10.0.0.215 8778/tcp Unit is ready
   placement-mysql-router/0* active idle 10.0.0.215 Unit is ready
   rabbitmq-server/0* active idle 2/lxd/2 10.0.0.222 5672/tcp Unit is ready
   swift-storage/0* blocked idle 0 10.0.0.206 Missing relations: proxy
   swift-storage/1 blocked idle 2 10.0.0.209 Missing relations: proxy
   swift-storage/2 blocked idle 3 10.0.0.213 Missing relations: proxy
   vault/0* active idle 3/lxd/0 10.0.0.217 8200/tcp Unit is ready (active: true, mlock: disabled)
   vault-mysql-router/0* active idle 10.0.0.217 Unit is ready
   ceph-osd/0* blocked idle 0 10.0.0.171 Missing relation: monitor
   ceph-osd/1 blocked idle 1 10.0.0.172 Missing relation: monitor
   ceph-osd/2 blocked idle 2 10.0.0.173 Missing relation: monitor
   ceph-osd/3 blocked idle 3 10.0.0.174 Missing relation: monitor
   glance/0* active idle 3/lxd/3 10.0.0.188 9292/tcp Unit is ready
   glance-mysql-router/0* active idle 10.0.0.188 Unit is ready
   keystone/0* active idle 0/lxd/2 10.0.0.183 5000/tcp Unit is ready
   keystone-mysql-router/0* active idle 10.0.0.183 Unit is ready
   mysql-innodb-cluster/0* active idle 0/lxd/0 10.0.0.175 Unit is ready: Mode: R/W
   mysql-innodb-cluster/1 active idle 1/lxd/0 10.0.0.176 Unit is ready: Mode: R/O
   mysql-innodb-cluster/2 active idle 2/lxd/0 10.0.0.177 Unit is ready: Mode: R/O
   neutron-api/0* active idle 1/lxd/2 10.0.0.182 9696/tcp Unit is ready
   neutron-api-mysql-router/0* active idle 10.0.0.182 Unit is ready
   neutron-api-plugin-ovn/0* active idle 10.0.0.182 Unit is ready
   nova-cloud-controller/0* active idle 3/lxd/1 10.0.0.185 8774/tcp,8775/tcp Unit is ready
   ncc-mysql-router/0* active idle 10.0.0.185 Unit is ready
   nova-compute/0* active idle 1 10.0.0.172 Unit is ready
   ovn-chassis/0* active idle 10.0.0.172 Unit is ready
   nova-compute/1 active idle 2 10.0.0.173 Unit is ready
   ovn-chassis/2 active idle 10.0.0.173 Unit is ready
   nova-compute/2 active idle 3 10.0.0.174 Unit is ready
   ovn-chassis/1 active idle 10.0.0.174 Unit is ready
   openstack-dashboard/0* active idle 2/lxd/3 10.0.0.187 80/tcp,443/tcp Unit is ready
   dashboard-mysql-router/0* active idle 10.0.0.187 Unit is ready
   ovn-central/0 active idle 0/lxd/1 10.0.0.181 6641/tcp,6642/tcp Unit is ready
   ovn-central/1 active idle 1/lxd/1 10.0.0.179 6641/tcp,6642/tcp Unit is ready
   ovn-central/2* active idle 2/lxd/1 10.0.0.180 6641/tcp,6642/tcp Unit is ready (leader: ovnnb_db, ovnsb_db northd: active)
   placement/0* active idle 3/lxd/2 10.0.0.186 8778/tcp Unit is ready
   placement-mysql-router/0* active idle 10.0.0.186 Unit is ready
   rabbitmq-server/0* active idle 2/lxd/2 10.0.0.184 5672/tcp Unit is ready
   vault/0* active idle 3/lxd/0 10.0.0.178 8200/tcp Unit is ready (active: true, mlock: disabled)
   vault-mysql-router/0* active idle 10.0.0.178 Unit is ready

Ceph monitor
~~~~~~~~~~~~

The ceph-mon application will be containerised on machines 0, 1, and 2 with the
`ceph-mon`_ charm.

To deploy:
`ceph-mon`_ charm:

.. code-block:: none

   juju deploy -n 3 --to lxd:0,lxd:1,lxd:2 --config source=distro ceph-mon
   juju deploy -n 3 --to lxd:0,lxd:1,lxd:2 --config source=cloud:focal-victoria ceph-mon

Three relations can be added at this time:
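Going by the relations listed at the top of this page, the three relations are
along these lines:

.. code-block:: none

   juju add-relation ceph-mon:osd ceph-osd:mon
   juju add-relation ceph-mon:client nova-compute:ceph
   juju add-relation ceph-mon:client glance:ceph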
@ -632,7 +589,7 @@ For the above relations,
  non-bootable disk images. The nova-compute charm option
  ``libvirt-image-backend`` must be set to 'rbd' for this to take effect.

* The glance:ceph relation makes Ceph the storage backend for Glance.
* The ``glance:ceph`` relation makes Ceph the storage backend for Glance.

Cinder
~~~~~~
@ -645,7 +602,7 @@ charm. File ``cinder.yaml`` contains the configuration:
   cinder:
     glance-api-version: 2
     block-device: None
     openstack-origin: distro
     openstack-origin: cloud:focal-victoria

To deploy:
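Going by the unit placement shown earlier (cinder in a container on machine 1),
the deploy step is roughly:

.. code-block:: none

   juju deploy --to lxd:1 --config cinder.yaml cinder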
@ -661,7 +618,7 @@ Join cinder to the cloud database:
   juju add-relation cinder-mysql-router:db-router mysql-innodb-cluster:db-router
   juju add-relation cinder-mysql-router:shared-db cinder:shared-db

Four additional relations can be added at this time:
Five additional relations can be added at this time:

.. code-block:: none
@ -669,8 +626,9 @@ Four additional relations can be added at this time:
   juju add-relation cinder:identity-service keystone:identity-service
   juju add-relation cinder:amqp rabbitmq-server:amqp
   juju add-relation cinder:image-service glance:image-service
   juju add-relation cinder:certificates vault:certificates

The above glance:image-service relation will enable Cinder to consume the
The above ``glance:image-service`` relation will enable Cinder to consume the
Glance API (e.g. making Cinder able to perform volume snapshots of Glance
images).
@ -682,43 +640,32 @@ None`` in the configuration file). This will be implemented via the
   juju deploy cinder-ceph

Four relations need to be added:
Three relations need to be added:

.. code-block:: none

   juju add-relation cinder-ceph:storage-backend cinder:storage-backend
   juju add-relation cinder-ceph:ceph ceph-mon:client
   juju add-relation cinder-ceph:ceph-access nova-compute:ceph-access
   juju add-relation cinder:certificates vault:certificates

Swift proxy
~~~~~~~~~~~
Ceph RADOS Gateway
~~~~~~~~~~~~~~~~~~

The swift-proxy application will be containerised on machine 3 with the
`swift-proxy`_ charm. File ``swift-proxy.yaml`` contains the configuration:
The Ceph RADOS Gateway will be deployed to offer an S3 and Swift compatible
HTTP gateway. This is an alternative to using OpenStack Swift.

.. code-block:: yaml

   swift-proxy:
     zone-assignment: auto
     swift-hash: "<uuid>"

Swift proxy needs to be supplied with a unique identifier (UUID). Generate one
with the :command:`uuid -v 4` command (you may need to first install the
``uuid`` deb package) and insert it into the file.

To deploy:
The ceph-radosgw application will be containerised on machine 0 with the
`ceph-radosgw`_ charm.

.. code-block:: none

   juju deploy --to lxd:3 --config swift-proxy.yaml swift-proxy
   juju deploy --to lxd:0 ceph-radosgw

Two relations are needed:
A single relation is needed:

.. code-block:: none

   juju add-relation swift-proxy:swift-storage swift-storage:swift-storage
   juju add-relation swift-proxy:identity-service keystone:identity-service
   juju add-relation ceph-radosgw:mon ceph-mon:radosgw

NTP
~~~
@ -763,12 +710,12 @@ The password is queried from Keystone:
   juju run --unit keystone/0 leader-get admin_passwd

In this example, the address is '10.0.0.210' and the password is
In this example, the address is '10.0.0.187' and the password is
'kohy6shoh3diWav5'.

The dashboard URL then becomes:

**http://10.0.0.210/horizon**
**http://10.0.0.187/horizon**

And the credentials are:
@ -777,6 +724,13 @@ And the credentials are:
| Password: **kohy6shoh3diWav5**

.. tip::

   To access the dashboard from your desktop you will need SSH local port
   forwarding. Example: ``sudo ssh -L 8001:10.0.0.187:80 <user>@<host>``, where
   <host> can contact 10.0.0.187 on port 80. Then go to
   http://localhost:8001/horizon.

Once logged in you should see something like this:

.. figure:: ./media/install-openstack_horizon.png
@ -799,18 +753,19 @@ networks, images, and a user environment. Go to :doc:`Configure OpenStack
.. LINKS
.. _OpenStack Charms: https://docs.openstack.org/charm-guide/latest/openstack-charms.html
.. _Charm upgrades: app-upgrade-openstack.html#charm-upgrades
.. _Charm upgrades: app-upgrade-openstack.html#charm-upgrades.html
.. _Series upgrade: app-series-upgrade.html
.. _Charm store: https://jaas.ai/store
.. _Post-commission configuration: https://maas.io/docs/commission-nodes#heading--post-commission-configuration
.. _Deploying applications: https://jaas.ai/docs/deploying-applications
.. _Deploying to specific machines: https://jaas.ai/docs/deploying-advanced-applications#heading--deploying-to-specific-machines
.. _Managing relations: https://jaas.ai/docs/relations
.. _Deploying applications: https://juju.is/docs/deploying-applications
.. _Deploying to specific machines: https://juju.is/docs/deploying-advanced-applications#heading--deploying-to-specific-machines
.. _Managing relations: https://juju.is/docs/relations
.. _Vault: app-vault.html

.. CHARMS
.. _ceph-mon: https://jaas.ai/ceph-mon
.. _ceph-osd: https://jaas.ai/ceph-osd
.. _ceph-radosgw: https://jaas.ai/ceph-radosgw
.. _cinder: https://jaas.ai/cinder
.. _cinder-ceph: https://jaas.ai/cinder-ceph
.. _glance: https://jaas.ai/glance
@ -825,8 +780,6 @@ networks, images, and a user environment. Go to :doc:`Configure OpenStack
.. _percona-cluster: https://jaas.ai/percona-cluster
.. _placement: https://jaas.ai/placement
.. _rabbitmq-server: https://jaas.ai/rabbitmq-server
.. _swift-proxy: https://jaas.ai/swift-proxy
.. _swift-storage: https://jaas.ai/swift-storage

.. BUGS
.. _LP #1826888: https://bugs.launchpad.net/charm-deployment-guide/+bug/1826888
@ -14,8 +14,8 @@ The software versions used in this guide are as follows:
* **Ubuntu 20.04 LTS (Focal)** for the MAAS server, Juju client, Juju
  controller, and all cloud nodes (including containers)
* **MAAS 2.8.2**
* **Juju 2.8.1**
* **OpenStack Ussuri**
* **Juju 2.8.6**
* **OpenStack Victoria**

Proceed to the :doc:`Install MAAS <install-maas>` page to begin your
installation journey. Hardware requirements are also listed there.