Merge "Remove migrated operations"
commit f6cbe0e747
@ -13,3 +13,5 @@ RedirectMatch 301 ^/project-deploy-guide/charm-deployment-guide/([^/]+)/app-ceph
RedirectMatch 301 ^/project-deploy-guide/charm-deployment-guide/([^/]+)/app-pci-passthrough-gpu.html$ /project-deploy-guide/charm-deployment-guide/$1/pci-passthrough.html
RedirectMatch 301 ^/project-deploy-guide/charm-deployment-guide/([^/]+)/app-erasure-coding.html$ /project-deploy-guide/charm-deployment-guide/$1/ceph-erasure-coding.html
RedirectMatch 301 ^/project-deploy-guide/charm-deployment-guide/([^/]+)/app-manila-ganesha.html$ /project-deploy-guide/charm-deployment-guide/$1/manila-ganesha.html
RedirectMatch 301 ^/project-deploy-guide/charm-deployment-guide/([^/]+)/app-managing-power-events.html$ /charm-guide/$1/howto/managing-power-events.html
RedirectMatch 301 ^/project-deploy-guide/charm-deployment-guide/([^/]+)/deferred-events.html$ /charm-guide/$1/howto/deferred-events.html
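
The two new rules route the migrated pages to the charm-guide. Once deployed,
a 301 can be spot-checked from the command line (a sketch; the hostname and
release segment shown are assumptions, adjust for the site under test):

.. code::

   curl -sI https://docs.openstack.org/project-deploy-guide/charm-deployment-guide/latest/app-managing-power-events.html | grep -i '^location'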

@ -266,10 +266,11 @@ Reissuing of certificates
~~~~~~~~~~~~~~~~~~~~~~~~~

New certificates can be reissued to all TLS-enabled clients by means of the
``reissue-certificates`` action. See cloud operation :doc:`Reissue TLS
certificates across the cloud <ops-reissue-tls-certs>` for details.
``reissue-certificates`` action. See cloud operation `Reissue TLS
certificates across the cloud`_ in the Admin Guide for details.
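
For instance, assuming the action is hosted by the vault application (a
minimal sketch in Juju 2.x syntax; unit names vary per deployment):

.. code::

   juju run-action --wait vault/leader reissue-certificates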

.. LINKS
.. _RFC5280: https://tools.ietf.org/html/rfc5280#section-3.2
.. _RFC7468: https://tools.ietf.org/html/rfc7468#section-5
.. _vault: https://opendev.org/openstack/charm-vault/src/branch/master/src/README.md
.. _Reissue TLS certificates across the cloud: https://docs.openstack.org/charm-guide/latest/admin/ops-reissue-tls-certs.html
@ -203,8 +203,7 @@ requests to the application and the application itself.
.. note::

   Highly available applications may require attention if subjected to a power
   event (see the :doc:`Managing power events <app-managing-power-events>`
   page).
   event (see `Managing power events`_ in the Admin Guide).

Cloud applications are typically made highly available through the use of
techniques applied externally to the application itself (e.g. using a

@ -832,6 +831,7 @@ Charms`_ project group.
.. _Raft algorithm: https://raft.github.io/
.. _Ceph bucket type: https://docs.ceph.com/docs/master/rados/operations/crush-map/#types-and-buckets
.. _Managing TLS certificates: app-certificate-management.html
.. _Managing power events: https://docs.openstack.org/charm-guide/latest/howto/managing-power-events.html

.. BUGS
.. _LP #1234561: https://bugs.launchpad.net/charm-ceph-osd/+bug/1234561
@ -1,603 +0,0 @@
:orphan:

.. _reference_cloud:

Reference cloud
===============

.. note::

   The information on this page is associated with the topic of :ref:`Managing
   Power Events <managing_power_events>`. See that page for background
   information.

The cloud is represented in the form of ``juju status`` output.

.. code::

   Model Controller Cloud/Region Version SLA Timestamp
   openstack foundations-maas maas_cloud 2.6.2 unsupported 16:26:29Z

   App Version Status Scale Charm Store Rev OS Notes
   aodh 7.0.0 active 3 aodh jujucharms 83 ubuntu
   bcache-tuning active 9 bcache-tuning jujucharms 10 ubuntu
   canonical-livepatch active 22 canonical-livepatch jujucharms 32 ubuntu
   ceilometer 11.0.1 blocked 3 ceilometer jujucharms 339 ubuntu
   ceilometer-agent 11.0.1 active 7 ceilometer-agent jujucharms 302 ubuntu
   ceph-mon 13.2.4+dfsg1 active 3 ceph-mon jujucharms 390 ubuntu
   ceph-osd 13.2.4+dfsg1 active 9 ceph-osd jujucharms 411 ubuntu
   ceph-radosgw 13.2.4+dfsg1 active 3 ceph-radosgw jujucharms 334 ubuntu
   cinder 13.0.3 active 3 cinder jujucharms 375 ubuntu
   cinder-ceph 13.0.3 active 3 cinder-ceph jujucharms 300 ubuntu
   designate 7.0.0 active 3 designate jujucharms 122 ubuntu
   designate-bind 9.11.3+dfsg active 2 designate-bind jujucharms 65 ubuntu
   elasticsearch active 2 elasticsearch jujucharms 37 ubuntu
   filebeat 5.6.16 active 74 filebeat jujucharms 24 ubuntu
   glance 17.0.0 active 3 glance jujucharms 372 ubuntu
   gnocchi 4.3.2 active 3 gnocchi jujucharms 60 ubuntu
   grafana active 1 grafana jujucharms 29 ubuntu
   graylog 2.5.1 active 1 graylog jujucharms 31 ubuntu
   graylog-mongodb 3.6.3 active 1 mongodb jujucharms 52 ubuntu
   hacluster-aodh active 3 hacluster jujucharms 102 ubuntu
   hacluster-ceilometer active 3 hacluster jujucharms 102 ubuntu
   hacluster-cinder active 3 hacluster jujucharms 102 ubuntu
   hacluster-designate active 3 hacluster jujucharms 102 ubuntu
   hacluster-glance active 3 hacluster jujucharms 102 ubuntu
   hacluster-gnocchi active 3 hacluster jujucharms 102 ubuntu
   hacluster-heat active 3 hacluster jujucharms 102 ubuntu
   hacluster-horizon active 3 hacluster jujucharms 102 ubuntu
   hacluster-keystone active 3 hacluster jujucharms 102 ubuntu
   hacluster-mysql active 3 hacluster jujucharms 102 ubuntu
   hacluster-neutron active 3 hacluster jujucharms 102 ubuntu
   hacluster-nova active 3 hacluster jujucharms 102 ubuntu
   hacluster-radosgw active 3 hacluster jujucharms 102 ubuntu
   heat 11.0.0 active 3 heat jujucharms 326 ubuntu
   keystone 14.0.1 active 3 keystone jujucharms 445 ubuntu
   keystone-ldap 14.0.1 active 3 keystone-ldap jujucharms 17 ubuntu
   landscape-haproxy unknown 1 haproxy jujucharms 50 ubuntu
   landscape-postgresql 10.8 maintenance 2 postgresql jujucharms 199 ubuntu
   landscape-rabbitmq-server 3.6.10 active 3 rabbitmq-server jujucharms 89 ubuntu
   landscape-server active 3 landscape-server jujucharms 32 ubuntu
   lldpd active 9 lldpd jujucharms 5 ubuntu
   memcached active 2 memcached jujucharms 23 ubuntu
   mysql 5.7.20-29.24 active 3 percona-cluster jujucharms 340 ubuntu
   nagios active 1 nagios jujucharms 32 ubuntu
   neutron-api 13.0.2 active 3 neutron-api jujucharms 401 ubuntu
   neutron-gateway 13.0.2 active 2 neutron-gateway jujucharms 371 ubuntu
   neutron-openvswitch 13.0.2 active 7 neutron-openvswitch jujucharms 358 ubuntu
   nova-cloud-controller 18.1.0 active 3 nova-cloud-controller jujucharms 424 ubuntu
   nova-compute-kvm 18.1.0 active 5 nova-compute jujucharms 448 ubuntu
   nova-compute-lxd 18.1.0 active 2 nova-compute jujucharms 448 ubuntu
   nrpe-container active 51 nrpe jujucharms 57 ubuntu
   nrpe-host active 32 nrpe jujucharms 57 ubuntu
   ntp 3.2 active 24 ntp jujucharms 32 ubuntu
   openstack-dashboard 14.0.2 active 3 openstack-dashboard jujucharms 425 ubuntu
   openstack-service-checks active 1 openstack-service-checks jujucharms 18 ubuntu
   prometheus active 1 prometheus2 jujucharms 10 ubuntu
   prometheus-ceph-exporter active 1 prometheus-ceph-exporter jujucharms 5 ubuntu
   prometheus-openstack-exporter active 1 prometheus-openstack-exporter jujucharms 7 ubuntu
   rabbitmq-server 3.6.10 active 3 rabbitmq-server jujucharms 344 ubuntu
   telegraf active 74 telegraf jujucharms 29 ubuntu
   telegraf-prometheus active 1 telegraf jujucharms 29 ubuntu
   thruk-agent unknown 1 thruk-agent jujucharms 6 ubuntu

   Unit Workload Agent Machine Public address Ports Message
   aodh/0* active idle 18/lxd/0 10.244.40.236 8042/tcp Unit is ready
     filebeat/46 active idle 10.244.40.236 Filebeat ready
     hacluster-aodh/0* active idle 10.244.40.236 Unit is ready and clustered
     nrpe-container/24 active idle 10.244.40.236 icmp,5666/tcp ready
     telegraf/46 active idle 10.244.40.236 9103/tcp Monitoring aodh/0
   aodh/1 active idle 20/lxd/0 10.244.41.74 8042/tcp Unit is ready
     filebeat/61 active idle 10.244.41.74 Filebeat ready
     hacluster-aodh/1 active idle 10.244.41.74 Unit is ready and clustered
     nrpe-container/38 active idle 10.244.41.74 icmp,5666/tcp ready
     telegraf/61 active idle 10.244.41.74 9103/tcp Monitoring aodh/1
   aodh/2 active idle 21/lxd/0 10.244.41.66 8042/tcp Unit is ready
     filebeat/65 active idle 10.244.41.66 Filebeat ready
     hacluster-aodh/2 active idle 10.244.41.66 Unit is ready and clustered
     nrpe-container/42 active idle 10.244.41.66 icmp,5666/tcp ready
     telegraf/65 active idle 10.244.41.66 9103/tcp Monitoring aodh/2
   ceilometer/0 blocked idle 18/lxd/1 10.244.40.239 Run the ceilometer-upgrade action on the leader to initialize ceilometer and gnocchi
     filebeat/51 active idle 10.244.40.239 Filebeat ready
     hacluster-ceilometer/1 active idle 10.244.40.239 Unit is ready and clustered
     nrpe-container/28 active idle 10.244.40.239 icmp,5666/tcp ready
     telegraf/51 active idle 10.244.40.239 9103/tcp Monitoring ceilometer/0
   ceilometer/1 blocked idle 20/lxd/1 10.244.41.77 Run the ceilometer-upgrade action on the leader to initialize ceilometer and gnocchi
     filebeat/70 active idle 10.244.41.77 Filebeat ready
     hacluster-ceilometer/2 active idle 10.244.41.77 Unit is ready and clustered
     nrpe-container/47 active idle 10.244.41.77 icmp,5666/tcp ready
     telegraf/70 active idle 10.244.41.77 9103/tcp Monitoring ceilometer/1
   ceilometer/2* blocked idle 21/lxd/1 10.244.40.229 Run the ceilometer-upgrade action on the leader to initialize ceilometer and gnocchi
     filebeat/22 active idle 10.244.40.229 Filebeat ready
     hacluster-ceilometer/0* active idle 10.244.40.229 Unit is ready and clustered
     nrpe-container/4 active idle 10.244.40.229 icmp,5666/tcp ready
     telegraf/22 active idle 10.244.40.229 9103/tcp Monitoring ceilometer/2
   ceph-mon/0* active idle 15/lxd/0 10.244.40.227 Unit is ready and clustered
     filebeat/17 active idle 10.244.40.227 Filebeat ready
     nrpe-container/2 active idle 10.244.40.227 icmp,5666/tcp ready
     telegraf/17 active idle 10.244.40.227 9103/tcp Monitoring ceph-mon/0
   ceph-mon/1 active idle 16/lxd/0 10.244.40.253 Unit is ready and clustered
     filebeat/47 active idle 10.244.40.253 Filebeat ready
     nrpe-container/25 active idle 10.244.40.253 icmp,5666/tcp ready
     telegraf/47 active idle 10.244.40.253 9103/tcp Monitoring ceph-mon/1
   ceph-mon/2 active idle 17/lxd/0 10.244.41.78 Unit is ready and clustered
     filebeat/71 active idle 10.244.41.78 Filebeat ready
     nrpe-container/48 active idle 10.244.41.78 icmp,5666/tcp ready
     telegraf/71 active idle 10.244.41.78 9103/tcp Monitoring ceph-mon/2
   ceph-osd/0* active idle 15 10.244.40.206 Unit is ready (1 OSD)
     bcache-tuning/1 active idle 10.244.40.206 bcache devices tuned
     nrpe-host/16 active idle 10.244.40.206 icmp,5666/tcp ready
   ceph-osd/1 active idle 16 10.244.40.213 Unit is ready (1 OSD)
     bcache-tuning/8 active idle 10.244.40.213 bcache devices tuned
     nrpe-host/30 active idle 10.244.40.213 icmp,5666/tcp ready
   ceph-osd/2 active idle 17 10.244.40.220 Unit is ready (1 OSD)
     bcache-tuning/4 active idle 10.244.40.220 bcache devices tuned
     nrpe-host/23 active idle 10.244.40.220 ready
   ceph-osd/3 active idle 18 10.244.40.225 Unit is ready (1 OSD)
     bcache-tuning/5 active idle 10.244.40.225 bcache devices tuned
     nrpe-host/25 active idle 10.244.40.225 icmp,5666/tcp ready
   ceph-osd/4 active idle 19 10.244.40.221 Unit is ready (1 OSD)
     bcache-tuning/2 active idle 10.244.40.221 bcache devices tuned
     nrpe-host/18 active idle 10.244.40.221 icmp,5666/tcp ready
   ceph-osd/5 active idle 20 10.244.40.224 Unit is ready (1 OSD)
     bcache-tuning/6 active idle 10.244.40.224 bcache devices tuned
     nrpe-host/27 active idle 10.244.40.224 icmp,5666/tcp ready
   ceph-osd/6 active idle 21 10.244.40.222 Unit is ready (1 OSD)
     bcache-tuning/7 active idle 10.244.40.222 bcache devices tuned
     nrpe-host/29 active idle 10.244.40.222 ready
   ceph-osd/7 active idle 22 10.244.40.223 Unit is ready (1 OSD)
     bcache-tuning/3 active idle 10.244.40.223 bcache devices tuned
     nrpe-host/20 active idle 10.244.40.223 icmp,5666/tcp ready
   ceph-osd/8 active idle 23 10.244.40.219 Unit is ready (1 OSD)
     bcache-tuning/0* active idle 10.244.40.219 bcache devices tuned
     nrpe-host/14 active idle 10.244.40.219 ready
   ceph-radosgw/0* active idle 15/lxd/1 10.244.40.228 80/tcp Unit is ready
     filebeat/15 active idle 10.244.40.228 Filebeat ready
     hacluster-radosgw/0* active idle 10.244.40.228 Unit is ready and clustered
     nrpe-container/1 active idle 10.244.40.228 icmp,5666/tcp ready
     telegraf/15 active idle 10.244.40.228 9103/tcp Monitoring ceph-radosgw/0
   ceph-radosgw/1 active idle 16/lxd/1 10.244.40.241 80/tcp Unit is ready
     filebeat/35 active idle 10.244.40.241 Filebeat ready
     hacluster-radosgw/2 active idle 10.244.40.241 Unit is ready and clustered
     nrpe-container/15 active idle 10.244.40.241 icmp,5666/tcp ready
     telegraf/35 active idle 10.244.40.241 9103/tcp Monitoring ceph-radosgw/1
   ceph-radosgw/2 active idle 17/lxd/1 10.244.40.233 80/tcp Unit is ready
     filebeat/21 active idle 10.244.40.233 Filebeat ready
     hacluster-radosgw/1 active idle 10.244.40.233 Unit is ready and clustered
     nrpe-container/3 active idle 10.244.40.233 icmp,5666/tcp ready
     telegraf/21 active idle 10.244.40.233 9103/tcp Monitoring ceph-radosgw/2
   cinder/0* active idle 15/lxd/2 10.244.40.249 8776/tcp Unit is ready
     cinder-ceph/0* active idle 10.244.40.249 Unit is ready
     filebeat/29 active idle 10.244.40.249 Filebeat ready
     hacluster-cinder/0* active idle 10.244.40.249 Unit is ready and clustered
     nrpe-container/9 active idle 10.244.40.249 icmp,5666/tcp ready
     telegraf/29 active idle 10.244.40.249 9103/tcp Monitoring cinder/0
   cinder/1 active idle 16/lxd/2 10.244.40.248 8776/tcp Unit is ready
     cinder-ceph/2 active idle 10.244.40.248 Unit is ready
     filebeat/59 active idle 10.244.40.248 Filebeat ready
     hacluster-cinder/2 active idle 10.244.40.248 Unit is ready and clustered
     nrpe-container/36 active idle 10.244.40.248 icmp,5666/tcp ready
     telegraf/59 active idle 10.244.40.248 9103/tcp Monitoring cinder/1
   cinder/2 active idle 17/lxd/2 10.244.41.2 8776/tcp Unit is ready
     cinder-ceph/1 active idle 10.244.41.2 Unit is ready
     filebeat/42 active idle 10.244.41.2 Filebeat ready
     hacluster-cinder/1 active idle 10.244.41.2 Unit is ready and clustered
     nrpe-container/21 active idle 10.244.41.2 icmp,5666/tcp ready
     telegraf/42 active idle 10.244.41.2 9103/tcp Monitoring cinder/2
   designate-bind/0* active idle 16/lxd/3 10.244.40.250 Unit is ready
     filebeat/45 active idle 10.244.40.250 Filebeat ready
     nrpe-container/23 active idle 10.244.40.250 icmp,5666/tcp ready
     telegraf/45 active idle 10.244.40.250 9103/tcp Monitoring designate-bind/0
   designate-bind/1 active idle 17/lxd/3 10.244.40.255 Unit is ready
     filebeat/40 active idle 10.244.40.255 Filebeat ready
     nrpe-container/20 active idle 10.244.40.255 icmp,5666/tcp ready
     telegraf/40 active idle 10.244.40.255 9103/tcp Monitoring designate-bind/1
   designate/0* active idle 18/lxd/2 10.244.41.70 9001/tcp Unit is ready
     filebeat/57 active idle 10.244.41.70 Filebeat ready
     hacluster-designate/0* active idle 10.244.41.70 Unit is ready and clustered
     nrpe-container/34 active idle 10.244.41.70 icmp,5666/tcp ready
     telegraf/57 active idle 10.244.41.70 9103/tcp Monitoring designate/0
   designate/1 active idle 20/lxd/2 10.244.41.72 9001/tcp Unit is ready
     filebeat/63 active idle 10.244.41.72 Filebeat ready
     hacluster-designate/1 active idle 10.244.41.72 Unit is ready and clustered
     nrpe-container/40 active idle 10.244.41.72 icmp,5666/tcp ready
     telegraf/63 active idle 10.244.41.72 9103/tcp Monitoring designate/1
   designate/2 active idle 21/lxd/2 10.244.41.71 9001/tcp Unit is ready
     filebeat/69 active idle 10.244.41.71 Filebeat ready
     hacluster-designate/2 active idle 10.244.41.71 Unit is ready and clustered
     nrpe-container/46 active idle 10.244.41.71 icmp,5666/tcp ready
     telegraf/69 active idle 10.244.41.71 9103/tcp Monitoring designate/2
   elasticsearch/0 active idle 5 10.244.40.217 9200/tcp Unit is ready
     canonical-livepatch/3 active idle 10.244.40.217 Running kernel 4.15.0-50.54-generic, patchState: nothing-to-apply
     filebeat/4 active idle 10.244.40.217 Filebeat ready
     nrpe-host/3 active idle 10.244.40.217 icmp,5666/tcp ready
     ntp/4 active idle 10.244.40.217 123/udp chrony: Ready
     telegraf/4 active idle 10.244.40.217 9103/tcp Monitoring elasticsearch/0
   elasticsearch/1* active idle 13 10.244.40.209 9200/tcp Unit is ready
     canonical-livepatch/2 active idle 10.244.40.209 Running kernel 4.15.0-50.54-generic, patchState: nothing-to-apply
     filebeat/3 active idle 10.244.40.209 Filebeat ready
     nrpe-host/2 active idle 10.244.40.209 icmp,5666/tcp ready
     ntp/3 active idle 10.244.40.209 123/udp chrony: Ready
     telegraf/3 active idle 10.244.40.209 9103/tcp Monitoring elasticsearch/1
   glance/0 active idle 15/lxd/3 10.244.40.237 9292/tcp Unit is ready
     filebeat/36 active idle 10.244.40.237 Filebeat ready
     hacluster-glance/0* active idle 10.244.40.237 Unit is ready and clustered
     nrpe-container/16 active idle 10.244.40.237 icmp,5666/tcp ready
     telegraf/36 active idle 10.244.40.237 9103/tcp Monitoring glance/0
   glance/1 active idle 16/lxd/4 10.244.41.5 9292/tcp Unit is ready
     filebeat/67 active idle 10.244.41.5 Filebeat ready
     hacluster-glance/2 active idle 10.244.41.5 Unit is ready and clustered
     nrpe-container/44 active idle 10.244.41.5 icmp,5666/tcp ready
     telegraf/66 active idle 10.244.41.5 9103/tcp Monitoring glance/1
   glance/2* active idle 17/lxd/4 10.244.40.234 9292/tcp Unit is ready
     filebeat/37 active idle 10.244.40.234 Filebeat ready
     hacluster-glance/1 active idle 10.244.40.234 Unit is ready and clustered
     nrpe-container/17 active idle 10.244.40.234 icmp,5666/tcp ready
     telegraf/37 active idle 10.244.40.234 9103/tcp Monitoring glance/2
   gnocchi/0 active idle 18/lxd/3 10.244.40.231 8041/tcp Unit is ready
     filebeat/24 active idle 10.244.40.231 Filebeat ready
     hacluster-gnocchi/0* active idle 10.244.40.231 Unit is ready and clustered
     nrpe-container/5 active idle 10.244.40.231 icmp,5666/tcp ready
     telegraf/24 active idle 10.244.40.231 9103/tcp Monitoring gnocchi/0
   gnocchi/1 active idle 20/lxd/3 10.244.40.244 8041/tcp Unit is ready
     filebeat/55 active idle 10.244.40.244 Filebeat ready
     hacluster-gnocchi/2 active idle 10.244.40.244 Unit is ready and clustered
     nrpe-container/32 active idle 10.244.40.244 icmp,5666/tcp ready
     telegraf/55 active idle 10.244.40.244 9103/tcp Monitoring gnocchi/1
   gnocchi/2* active idle 21/lxd/3 10.244.40.230 8041/tcp Unit is ready
     filebeat/27 active idle 10.244.40.230 Filebeat ready
     hacluster-gnocchi/1 active idle 10.244.40.230 Unit is ready and clustered
     nrpe-container/7 active idle 10.244.40.230 icmp,5666/tcp ready
     telegraf/27 active idle 10.244.40.230 9103/tcp Monitoring gnocchi/2
   grafana/0* active idle 1 10.244.40.202 3000/tcp Started snap.grafana.grafana
     canonical-livepatch/1 active idle 10.244.40.202 Running kernel 4.15.0-50.54-generic, patchState: nothing-to-apply
     filebeat/2 active idle 10.244.40.202 Filebeat ready
     nrpe-host/1 active idle 10.244.40.202 icmp,5666/tcp ready
     ntp/2 active idle 10.244.40.202 123/udp chrony: Ready
     telegraf/2 active idle 10.244.40.202 9103/tcp Monitoring grafana/0
   graylog-mongodb/0* active idle 10/lxd/0 10.244.40.226 27017/tcp,27019/tcp,27021/tcp,28017/tcp Unit is ready
     filebeat/14 active idle 10.244.40.226 Filebeat ready
     nrpe-container/0* active idle 10.244.40.226 icmp,5666/tcp ready
     telegraf/14 active idle 10.244.40.226 9103/tcp Monitoring graylog-mongodb/0
   graylog/0* active idle 10 10.244.40.218 5044/tcp Ready with: filebeat, elasticsearch, mongodb
     canonical-livepatch/12 active idle 10.244.40.218 Running kernel 4.15.0-50.54-generic, patchState: nothing-to-apply
     nrpe-host/12 active idle 10.244.40.218 icmp,5666/tcp ready
     ntp/13 active idle 10.244.40.218 123/udp chrony: Ready
     telegraf/13 active idle 10.244.40.218 9103/tcp Monitoring graylog/0
   heat/0 active idle 15/lxd/4 10.244.40.246 8000/tcp,8004/tcp Unit is ready
     filebeat/34 active idle 10.244.40.246 Filebeat ready
     hacluster-heat/0* active idle 10.244.40.246 Unit is ready and clustered
     nrpe-container/14 active idle 10.244.40.246 icmp,5666/tcp ready
     telegraf/34 active idle 10.244.40.246 9103/tcp Monitoring heat/0
   heat/1* active idle 16/lxd/5 10.244.40.238 8000/tcp,8004/tcp Unit is ready
     filebeat/56 active idle 10.244.40.238 Filebeat ready.
     hacluster-heat/2 active idle 10.244.40.238 Unit is ready and clustered
     nrpe-container/33 active idle 10.244.40.238 icmp,5666/tcp ready
     telegraf/56 active idle 10.244.40.238 9103/tcp Monitoring heat/1
   heat/2 active idle 17/lxd/5 10.244.41.0 8000/tcp,8004/tcp Unit is ready
     filebeat/43 active idle 10.244.41.0 Filebeat ready.
     hacluster-heat/1 active idle 10.244.41.0 Unit is ready and clustered
     nrpe-container/22 active idle 10.244.41.0 icmp,5666/tcp ready
     telegraf/43 active idle 10.244.41.0 9103/tcp Monitoring heat/2
   keystone/0* active idle 15/lxd/5 10.244.40.243 5000/tcp Unit is ready
     filebeat/33 active idle 10.244.40.243 Filebeat ready
     hacluster-keystone/0* active idle 10.244.40.243 Unit is ready and clustered
     keystone-ldap/0* active idle 10.244.40.243 Unit is ready
     nrpe-container/13 active idle 10.244.40.243 icmp,5666/tcp ready
     telegraf/33 active idle 10.244.40.243 9103/tcp Monitoring keystone/0
   keystone/1 active idle 16/lxd/6 10.244.40.254 5000/tcp Unit is ready
     filebeat/60 active idle 10.244.40.254 Filebeat ready
     hacluster-keystone/2 active idle 10.244.40.254 Unit is ready and clustered
     keystone-ldap/2 active idle 10.244.40.254 Unit is ready
     nrpe-container/37 active idle 10.244.40.254 icmp,5666/tcp ready
     telegraf/60 active idle 10.244.40.254 9103/tcp Monitoring keystone/1
   keystone/2 active idle 17/lxd/6 10.244.41.3 5000/tcp Unit is ready
     filebeat/48 active idle 10.244.41.3 Filebeat ready
     hacluster-keystone/1 active idle 10.244.41.3 Unit is ready and clustered
     keystone-ldap/1 active idle 10.244.41.3 Unit is ready
     nrpe-container/26 active idle 10.244.41.3 icmp,5666/tcp ready
     telegraf/48 active idle 10.244.41.3 9103/tcp Monitoring keystone/2
   landscape-haproxy/0* unknown idle 2 10.244.40.203 80/tcp,443/tcp
     filebeat/1 active idle 10.244.40.203 Filebeat ready
     nrpe-host/0* active idle 10.244.40.203 icmp,5666/tcp ready
     ntp/1 active idle 10.244.40.203 123/udp chrony: Ready
     telegraf/1 active idle 10.244.40.203 9103/tcp Monitoring landscape-haproxy/0
   landscape-postgresql/0* maintenance idle 3 10.244.40.215 5432/tcp Installing postgresql-.*-debversion,postgresql-plpython-.*
     canonical-livepatch/9 active idle 10.244.40.215 Running kernel 4.15.0-50.54-generic, patchState: nothing-to-apply
     filebeat/10 active idle 10.244.40.215 Filebeat ready
     nrpe-host/9 active idle 10.244.40.215 icmp,5666/tcp ready
     ntp/10 active idle 10.244.40.215 123/udp chrony: Ready
     telegraf/10 active idle 10.244.40.215 9103/tcp Monitoring landscape-postgresql/0
   landscape-postgresql/1 active idle 8 10.244.40.214 5432/tcp Live secondary (10.8)
     canonical-livepatch/10 active idle 10.244.40.214 Running kernel 4.15.0-50.54-generic, patchState: nothing-to-apply
     filebeat/11 active idle 10.244.40.214 Filebeat ready
     nrpe-host/10 active idle 10.244.40.214 icmp,5666/tcp ready
     ntp/11 active idle 10.244.40.214 123/udp chrony: Ready
     telegraf/11 active idle 10.244.40.214 9103/tcp Monitoring landscape-postgresql/1
   landscape-rabbitmq-server/0* active idle 4 10.244.40.211 5672/tcp Unit is ready and clustered
     canonical-livepatch/8 active idle 10.244.40.211 Running kernel 4.15.0-50.54-generic, patchState: nothing-to-apply
     filebeat/9 active idle 10.244.40.211 Filebeat ready
     nrpe-host/8 active idle 10.244.40.211 icmp,5666/tcp ready
     ntp/9 active idle 10.244.40.211 123/udp chrony: Ready
     telegraf/9 active idle 10.244.40.211 9103/tcp Monitoring landscape-rabbitmq-server/0
   landscape-rabbitmq-server/1 active idle 7 10.244.40.208 5672/tcp Unit is ready and clustered
     canonical-livepatch/11 active idle 10.244.40.208 Running kernel 4.15.0-50.54-generic, patchState: nothing-to-apply
     filebeat/12 active idle 10.244.40.208 Filebeat ready
     nrpe-host/11 active idle 10.244.40.208 icmp,5666/tcp ready
     ntp/12 active idle 10.244.40.208 123/udp chrony: Ready
     telegraf/12 active idle 10.244.40.208 9103/tcp Monitoring landscape-rabbitmq-server/1
   landscape-rabbitmq-server/2 active idle 12 10.244.40.207 5672/tcp Unit is ready and clustered
     canonical-livepatch/7 active idle 10.244.40.207 Running kernel 4.15.0-50.54-generic, patchState: nothing-to-apply
     filebeat/8 active idle 10.244.40.207 Filebeat ready
     nrpe-host/7 active idle 10.244.40.207 icmp,5666/tcp ready
     ntp/8 active idle 10.244.40.207 123/udp chrony: Ready
     telegraf/8 active idle 10.244.40.207 9103/tcp Monitoring landscape-rabbitmq-server/2
   landscape-server/0* active idle 6 10.244.40.210
     canonical-livepatch/4 active idle 10.244.40.210 Running kernel 4.15.0-50.54-generic, patchState: nothing-to-apply
     filebeat/5 active idle 10.244.40.210 Filebeat ready
     nrpe-host/4 active idle 10.244.40.210 icmp,5666/tcp ready
     ntp/5 active idle 10.244.40.210 123/udp chrony: Ready
     telegraf/5 active idle 10.244.40.210 9103/tcp Monitoring landscape-server/0
   landscape-server/1 active idle 11 10.244.40.212
     canonical-livepatch/5 active idle 10.244.40.212 Running kernel 4.15.0-50.54-generic, patchState: nothing-to-apply
     filebeat/6 active idle 10.244.40.212 Filebeat ready
     nrpe-host/5 active idle 10.244.40.212 icmp,5666/tcp ready
     ntp/6 active idle 10.244.40.212 123/udp chrony: Ready
     telegraf/6 active idle 10.244.40.212 9103/tcp Monitoring landscape-server/1
   landscape-server/2 active idle 14 10.244.40.204
     canonical-livepatch/6 active idle 10.244.40.204 Running kernel 4.15.0-50.54-generic, patchState: nothing-to-apply
     filebeat/7 active idle 10.244.40.204 Filebeat ready
     nrpe-host/6 active idle 10.244.40.204 icmp,5666/tcp ready
     ntp/7 active idle 10.244.40.204 123/udp chrony: Ready
     telegraf/7 active idle 10.244.40.204 9103/tcp Monitoring landscape-server/2
   memcached/0* active idle 16/lxd/3 10.244.40.250 11211/tcp Unit is ready and clustered
   memcached/1 active idle 17/lxd/3 10.244.40.255 11211/tcp Unit is ready and clustered
   mysql/0* active idle 15/lxd/6 10.244.40.251 3306/tcp Unit is ready
     filebeat/28 active idle 10.244.40.251 Filebeat ready
     hacluster-mysql/1 active idle 10.244.40.251 Unit is ready and clustered
     nrpe-container/8 active idle 10.244.40.251 icmp,5666/tcp ready
     telegraf/28 active idle 10.244.40.251 9103/tcp Monitoring mysql/0
   mysql/1 active idle 16/lxd/7 10.244.40.252 3306/tcp Unit is ready
     filebeat/25 active idle 10.244.40.252 Filebeat ready
     hacluster-mysql/0* active idle 10.244.40.252 Unit is ready and clustered
     nrpe-container/6 active idle 10.244.40.252 icmp,5666/tcp ready
     telegraf/25 active idle 10.244.40.252 9103/tcp Monitoring mysql/1
   mysql/2 active idle 17/lxd/7 10.244.41.68 3306/tcp Unit is ready
     filebeat/50 active idle 10.244.41.68 Filebeat ready
     hacluster-mysql/2 active idle 10.244.41.68 Unit is ready and clustered
     nrpe-container/27 active idle 10.244.41.68 icmp,5666/tcp ready
     telegraf/50 active idle 10.244.41.68 9103/tcp Monitoring mysql/2
   nagios/0* active idle 0 10.244.40.201 80/tcp ready
     canonical-livepatch/0* active idle 10.244.40.201 Running kernel 4.15.0-50.54-generic, patchState: nothing-to-apply
     filebeat/0* active idle 10.244.40.201 Filebeat ready
     ntp/0* active idle 10.244.40.201 123/udp chrony: Ready
     telegraf/0* active idle 10.244.40.201 9103/tcp Monitoring nagios/0
     thruk-agent/0* unknown idle 10.244.40.201
   neutron-api/0 active idle 18/lxd/4 10.244.41.67 9696/tcp Unit is ready
     filebeat/53 active idle 10.244.41.67 Filebeat ready
     hacluster-neutron/0* active idle 10.244.41.67 Unit is ready and clustered
     nrpe-container/30 active idle 10.244.41.67 icmp,5666/tcp ready
     telegraf/53 active idle 10.244.41.67 9103/tcp Monitoring neutron-api/0
   neutron-api/1 active idle 20/lxd/4 10.244.41.73 9696/tcp Unit is ready
     filebeat/58 active idle 10.244.41.73 Filebeat ready
     hacluster-neutron/1 active idle 10.244.41.73 Unit is ready and clustered
     nrpe-container/35 active idle 10.244.41.73 icmp,5666/tcp ready
     telegraf/58 active idle 10.244.41.73 9103/tcp Monitoring neutron-api/1
   neutron-api/2* active idle 21/lxd/4 10.244.41.6 9696/tcp Unit is ready
     filebeat/64 active idle 10.244.41.6 Filebeat ready
     hacluster-neutron/2 active idle 10.244.41.6 Unit is ready and clustered
     nrpe-container/41 active idle 10.244.41.6 icmp,5666/tcp ready
     telegraf/64 active idle 10.244.41.6 9103/tcp Monitoring neutron-api/2
   neutron-gateway/0 active idle 20 10.244.40.224 Unit is ready
     canonical-livepatch/21 active idle 10.244.40.224 Running kernel 4.15.0-50.54-generic, patchState: nothing-to-apply
     filebeat/49 active idle 10.244.40.224 Filebeat ready
     lldpd/8 active idle 10.244.40.224 LLDP daemon running
     nrpe-host/31 active idle 10.244.40.224 ready
     ntp/23 active idle 10.244.40.224 123/udp chrony: Ready
     telegraf/49 active idle 10.244.40.224 9103/tcp Monitoring neutron-gateway/0
   neutron-gateway/1* active idle 21 10.244.40.222 Unit is ready
     canonical-livepatch/20 active idle 10.244.40.222 Running kernel 4.15.0-50.54-generic, patchState: nothing-to-apply
     filebeat/44 active idle 10.244.40.222 Filebeat ready
     lldpd/7 active idle 10.244.40.222 LLDP daemon running
     nrpe-host/28 active idle 10.244.40.222 icmp,5666/tcp ready
     ntp/22 active idle 10.244.40.222 123/udp chrony: Ready
     telegraf/44 active idle 10.244.40.222 9103/tcp Monitoring neutron-gateway/1
   nova-cloud-controller/0 active idle 18/lxd/5 10.244.40.242 8774/tcp,8775/tcp,8778/tcp Unit is ready
     filebeat/54 active idle 10.244.40.242 Filebeat ready
     hacluster-nova/1 active idle 10.244.40.242 Unit is ready and clustered
     nrpe-container/31 active idle 10.244.40.242 icmp,5666/tcp ready
     telegraf/54 active idle 10.244.40.242 9103/tcp Monitoring nova-cloud-controller/0
   nova-cloud-controller/1 active idle 20/lxd/5 10.244.41.76 8774/tcp,8775/tcp,8778/tcp Unit is ready
     filebeat/68 active idle 10.244.41.76 Filebeat ready
     hacluster-nova/2 active idle 10.244.41.76 Unit is ready and clustered
     nrpe-container/45 active idle 10.244.41.76 icmp,5666/tcp ready
     telegraf/68 active idle 10.244.41.76 9103/tcp Monitoring nova-cloud-controller/1
   nova-cloud-controller/2* active idle 21/lxd/5 10.244.40.235 8774/tcp,8775/tcp,8778/tcp Unit is ready
     filebeat/52 active idle 10.244.40.235 Filebeat ready
     hacluster-nova/0* active idle 10.244.40.235 Unit is ready and clustered
     nrpe-container/29 active idle 10.244.40.235 icmp,5666/tcp ready
     telegraf/52 active idle 10.244.40.235 9103/tcp Monitoring nova-cloud-controller/2
   nova-compute-kvm/0* active idle 15 10.244.40.206 Unit is ready
     canonical-livepatch/17 active idle 10.244.40.206 Running kernel 4.15.0-50.54-generic, patchState: nothing-to-apply
     ceilometer-agent/4 active idle 10.244.40.206 Unit is ready
     filebeat/23 active idle 10.244.40.206 Filebeat ready
     lldpd/4 active idle 10.244.40.206 LLDP daemon running
     neutron-openvswitch/4 active idle 10.244.40.206 Unit is ready
     nrpe-host/22 active idle 10.244.40.206 ready
     ntp/19 active idle 10.244.40.206 123/udp chrony: Ready
     telegraf/23 active idle 10.244.40.206 9103/tcp Monitoring nova-compute-kvm/0
   nova-compute-kvm/1 active idle 16 10.244.40.213 Unit is ready
     canonical-livepatch/14 active idle 10.244.40.213 Running kernel 4.15.0-50.54-generic, patchState: nothing-to-apply
     ceilometer-agent/1 active idle 10.244.40.213 Unit is ready
     filebeat/18 active idle 10.244.40.213 Filebeat ready
     lldpd/1 active idle 10.244.40.213 LLDP daemon running
     neutron-openvswitch/1 active idle 10.244.40.213 Unit is ready
     nrpe-host/17 active idle 10.244.40.213 ready
     ntp/16 active idle 10.244.40.213 123/udp chrony: Ready
     telegraf/18 active idle 10.244.40.213 9103/tcp Monitoring nova-compute-kvm/1
   nova-compute-kvm/2 active idle 17 10.244.40.220 Unit is ready
     canonical-livepatch/18 active idle 10.244.40.220 Running kernel 4.15.0-50.54-generic, patchState: nothing-to-apply
     ceilometer-agent/5 active idle 10.244.40.220 Unit is ready
     filebeat/26 active idle 10.244.40.220 Filebeat ready
     lldpd/5 active idle 10.244.40.220 LLDP daemon running
     neutron-openvswitch/5 active idle 10.244.40.220 Unit is ready
     nrpe-host/24 active idle 10.244.40.220 icmp,5666/tcp ready
     ntp/20 active idle 10.244.40.220 123/udp chrony: Ready
     telegraf/26 active idle 10.244.40.220 9103/tcp Monitoring nova-compute-kvm/2
   nova-compute-kvm/3 active idle 18 10.244.40.225 Unit is ready
     canonical-livepatch/19 active idle 10.244.40.225 Running kernel 4.15.0-50.54-generic, patchState: nothing-to-apply
     ceilometer-agent/6 active idle 10.244.40.225 Unit is ready
     filebeat/41 active idle 10.244.40.225 Filebeat ready
     lldpd/6 active idle 10.244.40.225 LLDP daemon running
     neutron-openvswitch/6 active idle 10.244.40.225 Unit is ready
     nrpe-host/26 active idle 10.244.40.225 ready
     ntp/21 active idle 10.244.40.225 123/udp chrony: Ready
     telegraf/41 active idle 10.244.40.225 9103/tcp Monitoring nova-compute-kvm/3
   nova-compute-kvm/4 active idle 19 10.244.40.221 Unit is ready
     canonical-livepatch/15 active idle 10.244.40.221 Running kernel 4.15.0-50.54-generic, patchState: nothing-to-apply
     ceilometer-agent/2 active idle 10.244.40.221 Unit is ready
     filebeat/19 active idle 10.244.40.221 Filebeat ready
     lldpd/2 active idle 10.244.40.221 LLDP daemon running
     neutron-openvswitch/2 active idle 10.244.40.221 Unit is ready
     nrpe-host/19 active idle 10.244.40.221 ready
     ntp/17 active idle 10.244.40.221 123/udp chrony: Ready
     telegraf/19 active idle 10.244.40.221 9103/tcp Monitoring nova-compute-kvm/4
   nova-compute-lxd/0 active idle 22 10.244.40.223 Unit is ready
     canonical-livepatch/16 active idle 10.244.40.223 Running kernel 4.15.0-50.54-generic, patchState: nothing-to-apply
     ceilometer-agent/3 active idle 10.244.40.223 Unit is ready
     filebeat/20 active idle 10.244.40.223 Filebeat ready
     lldpd/3 active idle 10.244.40.223 LLDP daemon running
     neutron-openvswitch/3 active idle 10.244.40.223 Unit is ready
     nrpe-host/21 active idle 10.244.40.223 ready
     ntp/18 active idle 10.244.40.223 123/udp chrony: Ready
     telegraf/20 active idle 10.244.40.223 9103/tcp Monitoring nova-compute-lxd/0
   nova-compute-lxd/1* active idle 23 10.244.40.219 Unit is ready
     canonical-livepatch/13 active idle 10.244.40.219 Running kernel 4.15.0-50.54-generic, patchState: nothing-to-apply
     ceilometer-agent/0* active idle 10.244.40.219 Unit is ready
     filebeat/16 active idle 10.244.40.219 Filebeat ready
     lldpd/0* active idle 10.244.40.219 LLDP daemon running
     neutron-openvswitch/0* active idle 10.244.40.219 Unit is ready
     nrpe-host/15 active idle 10.244.40.219 icmp,5666/tcp ready
     ntp/15 active idle 10.244.40.219 123/udp chrony: Ready
     telegraf/16 active idle 10.244.40.219 9103/tcp Monitoring nova-compute-lxd/1
   openstack-dashboard/0* active idle 18/lxd/6 10.244.40.232 80/tcp,443/tcp Unit is ready
     filebeat/30 active idle 10.244.40.232 Filebeat ready
     hacluster-horizon/0* active idle 10.244.40.232 Unit is ready and clustered
     nrpe-container/10 active idle 10.244.40.232 icmp,5666/tcp ready
     telegraf/30 active idle 10.244.40.232 9103/tcp Monitoring openstack-dashboard/0
   openstack-dashboard/1 active idle 20/lxd/6 10.244.41.75 80/tcp,443/tcp Unit is ready
     filebeat/73 active idle 10.244.41.75 Filebeat ready
     hacluster-horizon/2 active idle 10.244.41.75 Unit is ready and clustered
     nrpe-container/50 active idle 10.244.41.75 icmp,5666/tcp ready
     telegraf/73 active idle 10.244.41.75 9103/tcp Monitoring openstack-dashboard/1
   openstack-dashboard/2 active idle 21/lxd/6 10.244.41.69 80/tcp,443/tcp Unit is ready
     filebeat/72 active idle 10.244.41.69 Filebeat ready
     hacluster-horizon/1 active idle 10.244.41.69 Unit is ready and clustered
     nrpe-container/49 active idle 10.244.41.69 icmp,5666/tcp ready
     telegraf/72 active idle 10.244.41.69 9103/tcp Monitoring openstack-dashboard/2
   openstack-service-checks/0* active idle 15/lxd/7 10.244.40.240 Unit is ready
     filebeat/31 active idle 10.244.40.240 Filebeat ready
     nrpe-container/11 active idle 10.244.40.240 icmp,5666/tcp ready
     telegraf/31 active idle 10.244.40.240 9103/tcp Monitoring openstack-service-checks/0
   prometheus-ceph-exporter/0* active idle 16/lxd/8 10.244.40.245 9128/tcp Running
     filebeat/38 active idle 10.244.40.245 Filebeat ready
     nrpe-container/18 active idle 10.244.40.245 icmp,5666/tcp ready
     telegraf/38 active idle 10.244.40.245 9103/tcp Monitoring prometheus-ceph-exporter/0
   prometheus-openstack-exporter/0* active idle 17/lxd/8 10.244.41.1 Ready
     filebeat/39 active idle 10.244.41.1 Filebeat ready
     nrpe-container/19 active idle 10.244.41.1 icmp,5666/tcp ready
     telegraf/39 active idle 10.244.41.1 9103/tcp Monitoring prometheus-openstack-exporter/0
   prometheus/0* active idle 9 10.244.40.216 9090/tcp,12321/tcp Ready
     filebeat/13 active idle 10.244.40.216 Filebeat ready
     nrpe-host/13 active idle 10.244.40.216 icmp,5666/tcp ready
     ntp/14 active idle 10.244.40.216 123/udp chrony: Ready
     telegraf-prometheus/0* active idle 10.244.40.216 9103/tcp Monitoring prometheus/0
   rabbitmq-server/0 active idle 18/lxd/7 10.244.41.65 5672/tcp Unit is ready and clustered
     filebeat/62 active idle 10.244.41.65 Filebeat ready
     nrpe-container/39 active idle 10.244.41.65 icmp,5666/tcp ready
     telegraf/62 active idle 10.244.41.65 9103/tcp Monitoring rabbitmq-server/0
   rabbitmq-server/1* active idle 20/lxd/7 10.244.40.247 5672/tcp Unit is ready and clustered
     filebeat/32 active idle 10.244.40.247 Filebeat ready
     nrpe-container/12 active idle 10.244.40.247 icmp,5666/tcp ready
     telegraf/32 active idle 10.244.40.247 9103/tcp Monitoring rabbitmq-server/1
   rabbitmq-server/2 active idle 21/lxd/7 10.244.41.4 5672/tcp Unit is ready and clustered
     filebeat/66 active idle 10.244.41.4 Filebeat ready
     nrpe-container/43 active idle 10.244.41.4 icmp,5666/tcp ready
     telegraf/67 active idle 10.244.41.4 9103/tcp Monitoring rabbitmq-server/2

   Machine State DNS Inst id Series AZ Message
   0 started 10.244.40.201 nagios-1 bionic default Deployed
   1 started 10.244.40.202 grafana-1 bionic default Deployed
   2 started 10.244.40.203 landscapeha-1 bionic default Deployed
   3 started 10.244.40.215 landscapesql-1 bionic default Deployed
   4 started 10.244.40.211 landscapeamqp-1 bionic default Deployed
   5 started 10.244.40.217 elastic-3 bionic zone3 Deployed
   6 started 10.244.40.210 landscape-2 bionic zone2 Deployed
   7 started 10.244.40.208 landscapeamqp-3 bionic zone3 Deployed
   8 started 10.244.40.214 landscapesql-2 bionic zone2 Deployed
   9 started 10.244.40.216 prometheus-3 bionic zone3 Deployed
   10 started 10.244.40.218 graylog-3 bionic zone3 Deployed
   10/lxd/0 started 10.244.40.226 juju-5aed61-10-lxd-0 bionic zone3 Container started
   11 started 10.244.40.212 landscape-3 bionic zone3 Deployed
   12 started 10.244.40.207 landscapeamqp-2 bionic zone2 Deployed
   13 started 10.244.40.209 elastic-2 bionic zone2 Deployed
   14 started 10.244.40.204 landscape-1 bionic default Deployed
   15 started 10.244.40.206 suicune bionic zone2 Deployed
   15/lxd/0 started 10.244.40.227 juju-5aed61-15-lxd-0 bionic zone2 Container started
   15/lxd/1 started 10.244.40.228 juju-5aed61-15-lxd-1 bionic zone2 Container started
   15/lxd/2 started 10.244.40.249 juju-5aed61-15-lxd-2 bionic zone2 Container started
   15/lxd/3 started 10.244.40.237 juju-5aed61-15-lxd-3 bionic zone2 Container started
   15/lxd/4 started 10.244.40.246 juju-5aed61-15-lxd-4 bionic zone2 Container started
   15/lxd/5 started 10.244.40.243 juju-5aed61-15-lxd-5 bionic zone2 Container started
   15/lxd/6 started 10.244.40.251 juju-5aed61-15-lxd-6 bionic zone2 Container started
   15/lxd/7 started 10.244.40.240 juju-5aed61-15-lxd-7 bionic zone2 Container started
   16 started 10.244.40.213 geodude bionic default Deployed
   16/lxd/0 started 10.244.40.253 juju-5aed61-16-lxd-0 bionic default Container started
   16/lxd/1 started 10.244.40.241 juju-5aed61-16-lxd-1 bionic default Container started
   16/lxd/2 started 10.244.40.248 juju-5aed61-16-lxd-2 bionic default Container started
   16/lxd/3 started 10.244.40.250 juju-5aed61-16-lxd-3 bionic default Container started
   16/lxd/4 started 10.244.41.5 juju-5aed61-16-lxd-4 bionic default Container started
   16/lxd/5 started 10.244.40.238 juju-5aed61-16-lxd-5 bionic default Container started
   16/lxd/6 started 10.244.40.254 juju-5aed61-16-lxd-6 bionic default Container started
   16/lxd/7 started 10.244.40.252 juju-5aed61-16-lxd-7 bionic default Container started
   16/lxd/8 started 10.244.40.245 juju-5aed61-16-lxd-8 bionic default Container started
   17 started 10.244.40.220 armaldo bionic default Deployed
   17/lxd/0 started 10.244.41.78 juju-5aed61-17-lxd-0 bionic default Container started
   17/lxd/1 started 10.244.40.233 juju-5aed61-17-lxd-1 bionic default Container started
   17/lxd/2 started 10.244.41.2 juju-5aed61-17-lxd-2 bionic default Container started
   17/lxd/3 started 10.244.40.255 juju-5aed61-17-lxd-3 bionic default Container started
   17/lxd/4 started 10.244.40.234 juju-5aed61-17-lxd-4 bionic default Container started
   17/lxd/5 started 10.244.41.0 juju-5aed61-17-lxd-5 bionic default Container started
   17/lxd/6 started 10.244.41.3 juju-5aed61-17-lxd-6 bionic default Container started
   17/lxd/7 started 10.244.41.68 juju-5aed61-17-lxd-7 bionic default Container started
   17/lxd/8 started 10.244.41.1 juju-5aed61-17-lxd-8 bionic default Container started
   18 started 10.244.40.225 elgyem bionic zone3 Deployed
   18/lxd/0 started 10.244.40.236 juju-5aed61-18-lxd-0 bionic zone3 Container started
   18/lxd/1 started 10.244.40.239 juju-5aed61-18-lxd-1 bionic zone3 Container started
   18/lxd/2 started 10.244.41.70 juju-5aed61-18-lxd-2 bionic zone3 Container started
   18/lxd/3 started 10.244.40.231 juju-5aed61-18-lxd-3 bionic zone3 Container started
   18/lxd/4 started 10.244.41.67 juju-5aed61-18-lxd-4 bionic zone3 Container started
   18/lxd/5 started 10.244.40.242 juju-5aed61-18-lxd-5 bionic zone3 Container started
   18/lxd/6 started 10.244.40.232 juju-5aed61-18-lxd-6 bionic zone3 Container started
   18/lxd/7 started 10.244.41.65 juju-5aed61-18-lxd-7 bionic zone3 Container started
   19 started 10.244.40.221 spearow bionic zone2 Deployed
   20 started 10.244.40.224 quilava bionic default Deployed
   20/lxd/0 started 10.244.41.74 juju-5aed61-20-lxd-0 bionic default Container started
   20/lxd/1 started 10.244.41.77 juju-5aed61-20-lxd-1 bionic default Container started
   20/lxd/2 started 10.244.41.72 juju-5aed61-20-lxd-2 bionic default Container started
   20/lxd/3 started 10.244.40.244 juju-5aed61-20-lxd-3 bionic default Container started
   20/lxd/4 started 10.244.41.73 juju-5aed61-20-lxd-4 bionic default Container started
   20/lxd/5 started 10.244.41.76 juju-5aed61-20-lxd-5 bionic default Container started
   20/lxd/6 started 10.244.41.75 juju-5aed61-20-lxd-6 bionic default Container started
   20/lxd/7 started 10.244.40.247 juju-5aed61-20-lxd-7 bionic default Container started
   21 started 10.244.40.222 rufflet bionic zone3 Deployed
   21/lxd/0 started 10.244.41.66 juju-5aed61-21-lxd-0 bionic zone3 Container started
   21/lxd/1 started 10.244.40.229 juju-5aed61-21-lxd-1 bionic zone3 Container started
   21/lxd/2 started 10.244.41.71 juju-5aed61-21-lxd-2 bionic zone3 Container started
   21/lxd/3 started 10.244.40.230 juju-5aed61-21-lxd-3 bionic zone3 Container started
   21/lxd/4 started 10.244.41.6 juju-5aed61-21-lxd-4 bionic zone3 Container started
   21/lxd/5 started 10.244.40.235 juju-5aed61-21-lxd-5 bionic zone3 Container started
   21/lxd/6 started 10.244.41.69 juju-5aed61-21-lxd-6 bionic zone3 Container started
   21/lxd/7 started 10.244.41.4 juju-5aed61-21-lxd-7 bionic zone3 Container started
   22 started 10.244.40.223 ralts bionic zone2 Deployed
   23 started 10.244.40.219 beartic bionic zone3 Deployed

@ -1,334 +0,0 @@
:orphan:

.. _cloud_topology_example:

Cloud topology example
======================

.. note::

   The information on this page is associated with the topic of :ref:`Managing
   Power Events <managing_power_events>`. See that page for background
   information.

This page contains the analysis of cloud machines. The ideal is to do this for
every machine in a cloud in order to determine the *cloud topology*. Six
machines are featured here. They represent a good cross-section of an *Ubuntu
OpenStack* cloud. See :ref:`Reference cloud <reference_cloud>` for the cloud
upon which this exercise is based.
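
The per-machine unit listings used below can be pulled out of ``juju status``
programmatically. A sketch (assuming ``jq`` is installed; the JSON layout is
that of Juju 2.x status output):

.. code::

   juju status --format=json | \
       jq -r '.applications[].units // {} | to_entries[] |
              select(.value.machine == "17") | .key'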

Generally speaking, the cloud nodes are hyperconverged and this is the case for
three of the chosen machines, numbered **17**, **18**, and **20**. Yet this
analysis also looks at a trio of nodes dedicated to the `Landscape project`_:
machines **3**, **11**, and **12**, none of which is hyperconverged.

.. note::

   Juju applications can be given custom names at deploy time (see
   `Application groups`_ in the Juju documentation). This document will call
   out these `named applications` wherever they occur.
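
   For example, the ``nova-compute-kvm`` application seen below would have
   been created with a command of this form (a hypothetical invocation; the
   actual bundle used for this cloud is not shown):

   .. code::

      juju deploy nova-compute nova-compute-kvm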
|
||||
|
||||
**machine 17**
|
||||
|
||||
This is what's on machine 17:
|
||||
|
||||
.. code::
|
||||
|
||||
Unit Workload Agent Machine
|
||||
nova-compute-kvm/2 active idle 17
|
||||
canonical-livepatch/18 active idle
|
||||
ceilometer-agent/5 active idle
|
||||
filebeat/26 active idle
|
||||
lldpd/5 active idle
|
||||
neutron-openvswitch/5 active idle
|
||||
nrpe-host/24 active idle
|
||||
ntp/20 active idle
|
||||
telegraf/26 active idle
|
||||
ceph-osd/2 active idle 17
|
||||
bcache-tuning/4 active idle
|
||||
nrpe-host/23 active idle
|
||||
ceph-mon/2 active idle 17/lxd/0
|
||||
filebeat/71 active idle
|
||||
nrpe-container/48 active idle
|
||||
telegraf/71 active idle
|
||||
ceph-radosgw/2 active idle 17/lxd/1
|
||||
filebeat/21 active idle
|
||||
hacluster-radosgw/1 active idle
|
||||
nrpe-container/3 active idle
|
||||
telegraf/21 active idle
|
||||
cinder/2 active idle 17/lxd/2
|
||||
cinder-ceph/1 active idle
|
||||
filebeat/42 active idle
|
||||
hacluster-cinder/1 active idle
|
||||
nrpe-container/21 active idle
|
||||
telegraf/42 active idle
|
||||
designate-bind/1 active idle 17/lxd/3
|
||||
filebeat/40 active idle
|
||||
nrpe-container/20 active idle
|
||||
telegraf/40 active idle
|
||||
glance/2* active idle 17/lxd/4
|
||||
filebeat/37 active idle
|
||||
hacluster-glance/1 active idle
|
||||
nrpe-container/17 active idle
|
||||
telegraf/37 active idle
|
||||
heat/2 active idle 17/lxd/5
|
||||
filebeat/43 active idle
|
||||
hacluster-heat/1 active idle
|
||||
nrpe-container/22 active idle
|
||||
telegraf/43 active idle
|
||||
keystone/2 active idle 17/lxd/6
|
||||
filebeat/48 active idle
|
||||
hacluster-keystone/1 active idle
|
||||
keystone-ldap/1 active idle
|
||||
nrpe-container/26 active idle
|
||||
telegraf/48 active idle
|
||||
mysql/2 active idle 17/lxd/7
|
||||
filebeat/50 active idle
|
||||
hacluster-mysql/2 active idle
|
||||
nrpe-container/27 active idle
|
||||
telegraf/50 active idle
|
||||
prometheus-openstack-exporter/0* active idle 17/lxd/8
|
||||
filebeat/39 active idle
|
||||
nrpe-container/19 active idle
|
||||
telegraf/39 active idle
|
||||
|
||||
.. attention::
|
||||
|
||||
In this example, ``mysql`` and ``nova-compute-kvm`` are `named
|
||||
applications`. The rest of this section will use their real names of
|
||||
``percona-cluster`` and ``nova-compute``, respectively.
|
||||
|
||||
The main applications (principle charms) for this machine are listed below
|
||||
along with their HA status and machine type:
|
||||
|
||||
- ``nova-compute`` (metal)
|
||||
- ``ceph-osd`` (natively HA; metal)
|
||||
- ``ceph-mon`` (natively HA; lxd)
|
||||
- ``ceph-radosgw`` (natively HA; lxd)
|
||||
- ``cinder`` (HA; lxd)
|
||||
- ``designate-bind`` (HA; lxd)
|
||||
- ``glance`` (HA; lxd)
|
||||
- ``heat`` (HA; lxd)
|
||||
- ``keystone`` (HA; lxd)
|
||||
- ``percona-cluster`` (HA; lxd)
|
||||
- ``prometheus-openstack-exporter`` (lxd)
|
||||
|
||||
**machine 18**
|
||||
|
||||
This is what's on machine 18:
|
||||
|
||||
.. code::
|
||||
|
||||
Unit Workload Agent Machine
|
||||
nova-compute-kvm/3 active idle 18
|
||||
canonical-livepatch/19 active idle
|
||||
ceilometer-agent/6 active idle
|
||||
filebeat/41 active idle
|
||||
lldpd/6 active idle
|
||||
neutron-openvswitch/6 active idle
|
||||
nrpe-host/26 active idle
|
||||
ntp/21 active idle
|
||||
telegraf/41 active idle
|
||||
ceph-osd/3 active idle 18
|
||||
bcache-tuning/5 active idle
|
||||
nrpe-host/25 active idle
|
||||
aodh/0* active idle 18/lxd/0
|
||||
filebeat/46 active idle
|
||||
hacluster-aodh/0* active idle
|
||||
nrpe-container/24 active idle
|
||||
telegraf/46 active idle
|
||||
ceilometer/0 blocked idle 18/lxd/1
|
||||
filebeat/51 active idle
|
||||
hacluster-ceilometer/1 active idle
|
||||
nrpe-container/28 active idle
|
||||
telegraf/51 active idle
|
||||
designate/0* active idle 18/lxd/2
|
||||
filebeat/57 active idle
|
||||
hacluster-designate/0* active idle
|
||||
nrpe-container/34 active idle
|
||||
telegraf/57 active idle
|
||||
gnocchi/0 active idle 18/lxd/3
|
||||
filebeat/24 active idle
|
||||
hacluster-gnocchi/0* active idle
|
||||
nrpe-container/5 active idle
|
||||
telegraf/24 active idle
|
||||
neutron-api/0 active idle 18/lxd/4
|
||||
filebeat/53 active idle
|
||||
hacluster-neutron/0* active idle
|
||||
nrpe-container/30 active idle
|
||||
telegraf/53 active idle
|
||||
nova-cloud-controller/0 active idle 18/lxd/5
|
||||
filebeat/54 active idle
|
||||
hacluster-nova/1 active idle
|
||||
nrpe-container/31 active idle
|
||||
telegraf/54 active idle
|
||||
openstack-dashboard/0* active idle 18/lxd/6
|
||||
filebeat/30 active idle
|
||||
hacluster-horizon/0* active idle
|
||||
nrpe-container/10 active idle
|
||||
telegraf/30 active idle
|
||||
rabbitmq-server/0 active idle 18/lxd/7
|
||||
filebeat/62 active idle
|
||||
nrpe-container/39 active idle
|
||||
telegraf/62 active idle
|
||||
|
||||
.. attention::
|
||||
|
||||
In this example, ``nova-compute-kvm`` is a `named application` The rest of
|
||||
this section will use its real name of ``nova-compute``.
|
||||
|
||||
The main applications (principle charms) for this machine are listed below
|
||||
along with their HA status and machine type:
|
||||
|
||||
- ``nova-compute`` (metal)
|
||||
- ``ceph-osd`` (natively HA; metal)
|
||||
- ``aodh`` (HA; lxd)
|
||||
- ``ceilometer`` (HA; lxd)
|
||||
- ``designate`` (HA; lxd)
|
||||
- ``gnocchi`` (HA; lxd)
|
||||
- ``neutron-api`` (HA; lxd)
|
||||
- ``nova-cloud-controller`` (HA; lxd)
|
||||
- ``openstack-dashboard`` (HA; lxd)
|
||||
- ``rabbitmq-server`` (natively HA; lxd)
|
||||
|
||||
**machine 20**
|
||||
|
||||
This is what's on machine 20:
|
||||
|
||||
.. code::
|
||||
|
||||
Unit Workload Agent Machine
|
||||
neutron-gateway/0 active idle 20
|
||||
canonical-livepatch/21 active idle
|
||||
filebeat/49 active idle
|
||||
lldpd/8 active idle
|
||||
nrpe-host/31 active idle
|
||||
ntp/23 active idle
|
||||
telegraf/49 active idle
|
||||
ceph-osd/5 active idle 20
|
||||
bcache-tuning/6 active idle
|
||||
nrpe-host/27 active idle
|
||||
aodh/1 active idle 20/lxd/0
|
||||
filebeat/61 active idle
|
||||
hacluster-aodh/1 active idle
|
||||
nrpe-container/38 active idle
|
||||
telegraf/61 active idle
|
||||
ceilometer/1 blocked idle 20/lxd/1
|
||||
filebeat/70 active idle
|
||||
hacluster-ceilometer/2 active idle
|
||||
nrpe-container/47 active idle
|
||||
telegraf/70 active idle
|
||||
designate/1 active idle 20/lxd/2
|
||||
filebeat/63 active idle
|
||||
hacluster-designate/1 active idle
|
||||
nrpe-container/40 active idle
|
||||
telegraf/63 active idle
|
||||
gnocchi/1 active idle 20/lxd/3
|
||||
filebeat/55 active idle
|
||||
hacluster-gnocchi/2 active idle
|
||||
nrpe-container/32 active idle
|
||||
telegraf/55 active idle
|
||||
neutron-api/1 active idle 20/lxd/4
|
||||
filebeat/58 active idle
|
||||
hacluster-neutron/1 active idle
|
||||
nrpe-container/35 active idle
|
||||
telegraf/58 active idle
|
||||
nova-cloud-controller/1 active idle 20/lxd/5
|
||||
filebeat/68 active idle
|
||||
hacluster-nova/2 active idle
|
||||
nrpe-container/45 active idle
|
||||
telegraf/68 active idle
|
||||
openstack-dashboard/1 active idle 20/lxd/6
|
||||
filebeat/73 active idle
|
||||
hacluster-horizon/2 active idle
|
||||
nrpe-container/50 active idle
|
||||
telegraf/73 active idle
|
||||
rabbitmq-server/1* active idle 20/lxd/7
|
||||
filebeat/32 active idle
|
||||
nrpe-container/12 active idle
|
||||
telegraf/32 active idle
|
||||
|
||||
The main applications (principle charms) for this machine are listed below
|
||||
along with their HA status and machine type:
|
||||
|
||||
- ``neutron-gateway`` (natively HA; metal)
|
||||
- ``ceph-osd`` (natively HA; metal)
|
||||
- ``aodh`` (HA; lxd)
|
||||
- ``ceilometer`` (HA; lxd)
|
||||
- ``designate`` (HA; lxd)
|
||||
- ``gnocchi`` (HA; lxd)
|
||||
- ``neutron-api`` (HA; lxd)
|
||||
- ``nova-cloud-controller`` (HA; lxd)
|
||||
- ``openstack-dashboard`` (HA; lxd)
|
||||
- ``rabbitmq-server`` (natively HA; lxd)
|
||||
|
||||
**machine 3**
|
||||
|
||||
This is what's on machine 3:
|
||||
|
||||
.. code::
|
||||
|
||||
Unit Workload Agent Machine
|
||||
landscape-postgresql/0* maintenance idle 3
|
||||
canonical-livepatch/9 active idle
|
||||
filebeat/10 active idle
|
||||
nrpe-host/9 active idle
|
||||
ntp/10 active idle
|
||||
telegraf/10 active idle
|
||||
|
||||
.. attention::
|
||||
|
||||
In this example, ``landscape-postgresql`` is a `named application` The rest
|
||||
of this section will use its real name of ``postgresql``.
|
||||
|
||||
The main application (principle charm) for this machine is listed below along
|
||||
along with their HA status and machine type:
|
||||
|
||||
- ``postgresql`` (natively HA; metal)
|
||||
|
||||
**machine 11**

This is what's on machine 11:

.. code::

   Unit                      Workload  Agent  Machine
   landscape-server/1        active    idle   11
     canonical-livepatch/5   active    idle
     filebeat/6              active    idle
     nrpe-host/5             active    idle
     ntp/6                   active    idle
     telegraf/6              active    idle

The main application (principal charm) for this machine is listed below along
with its HA status and machine type:

- ``landscape-server`` (natively HA; metal)

**machine 12**

This is what's on machine 12:

.. code::

   Unit                          Workload  Agent  Machine
   landscape-rabbitmq-server/2   active    idle   12
     canonical-livepatch/7       active    idle
     filebeat/8                  active    idle
     nrpe-host/7                 active    idle
     ntp/8                       active    idle
     telegraf/8                  active    idle

.. attention::

   In this example, ``landscape-rabbitmq-server`` is a `named application`.
   The rest of this section will use its real name of ``rabbitmq-server``.

The main application (principal charm) for this machine is listed below along
with its HA status and machine type:

- ``rabbitmq-server`` (natively HA; metal)

.. LINKS
.. _Application groups: https://discourse.charmhub.io/t/application-groups
.. _Landscape project: https://landscape.canonical.com

@ -1,20 +0,0 @@
===============
Ceph operations
===============

Ceph plays a central role in Charmed OpenStack. The below operational topics
for Ceph are covered in the `Charmed Ceph documentation`_:

* `Adding OSDs`_
* `Adding MONs`_
* `Replacing OSD disks`_
* `Encryption at Rest`_
* `Software upgrades`_

.. LINKS
.. _Charmed Ceph documentation: https://ubuntu.com/ceph/docs
.. _Adding OSDs: https://ubuntu.com/ceph/docs/adding-osds
.. _Adding MONs: https://ubuntu.com/ceph/docs/adding-mons
.. _Replacing OSD disks: https://ubuntu.com/ceph/docs/replacing-osd-disks
.. _Encryption at Rest: https://ubuntu.com/ceph/docs/encryption-at-rest
.. _Software upgrades: https://ubuntu.com/ceph/docs/software-upgrades

@ -1,176 +0,0 @@
=======================
Deferred service events
=======================

Overview
--------

Operational or maintenance procedures applied to a cloud often lead to the
restarting of various OpenStack services and/or the calling of certain charm
hooks. Although normal, such events can be undesirable due to the service
interruptions they can cause.

The deferred service events feature allows the operator to choose to prevent
these service restarts and hook calls from occurring. These deferred events
can then be resolved by the operator at a more opportune time.

Situations in which these service events are prone to take place include:

* charm upgrades
* OpenStack upgrades (charm-managed)
* package upgrades (non-charm-managed)
* charm configuration option changes

Charms
------

Deferred service events are supported on a per-charm basis.

Here is the current list of deferred event-aware charms:

* neutron-gateway
* neutron-openvswitch
* ovn-central
* ovn-chassis
* ovn-dedicated-chassis
* rabbitmq-server

.. COMMENT # Comment this out until the READMEs have been updated

   Deferred restarts are supported on a per-charm basis. This support will be
   mentioned in a charm's README along with any charm-specific deferred
   restart information.

   Here is the current list of deferred restart-aware charms:

   * `neutron-gateway`_
   * `neutron-openvswitch`_
   * `ovn-central`_
   * `ovn-chassis`_
   * `ovn-dedicated-chassis`_
   * `rabbitmq-server`_

Enabling and disabling deferred service events
----------------------------------------------

Deferred service events are disabled by default for all charms. To enable
them for a charm:

.. code-block:: none

   juju config <charm-name> enable-auto-restarts=False

.. important::

   The ``enable-auto-restarts`` option can only be set post-deployment.

To disable deferred service events for a charm:

.. code-block:: none

   juju config <charm-name> enable-auto-restarts=True

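For instance, to enable the feature for the neutron-openvswitch charm listed
above, and to confirm the resulting option value (a representative
invocation; any deferred-event-aware charm can be substituted):

.. code-block:: none

   juju config neutron-openvswitch enable-auto-restarts=False
   # verify the setting took effect
   juju config neutron-openvswitch enable-auto-restarts
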
Identifying and resolving deferred service events
-------------------------------------------------

The existence of a deferred service event is exposed in the output of the
:command:`juju status` command. The following two sections provide an example
of how to identify and resolve each type.

Service restarts
~~~~~~~~~~~~~~~~

Here the ``neutron-openvswitch/1`` unit is affected by a deferred service
restart:

.. code-block:: console

   App                  Version  Status  Scale  Charm                Store       Channel  Rev  OS      Message
   neutron-openvswitch  16.3.0   active      2  neutron-openvswitch  charmstore           433  ubuntu  Unit is ready
   nova-compute         21.1.2   active      2  nova-compute         charmstore           537  ubuntu  Unit is ready

   Unit                     Workload  Agent  Machine  Public address  Ports  Message
   nova-compute/0*          active    idle   6        172.20.0.13            Unit is ready
     neutron-openvswitch/1  active    idle            172.20.0.13            Unit is ready. Services queued for restart: openvswitch-switch
   nova-compute/1           active    idle   7        172.20.0.4             Unit is ready
     neutron-openvswitch/0* active    idle            172.20.0.4             Unit is ready

To see more detail, the ``show-deferred-events`` action is used:

.. code-block:: none

   juju run-action --wait neutron-openvswitch/1 show-deferred-events

   unit-neutron-openvswitch-1:
     UnitId: neutron-openvswitch/1
     id: "67"
     results:
       Stdout: |
         none
       output: |
         hooks: []
         restarts:
         - 1618896650 openvswitch-switch Package update
     status: completed
     timing:
       completed: 2021-04-20 05:52:39 +0000 UTC
       enqueued: 2021-04-20 05:52:32 +0000 UTC
       started: 2021-04-20 05:52:33 +0000 UTC

In this example, the message "Package update" is displayed. This signifies
that the package management software of the host is responsible for the
service restart request.

Resolving deferred service restarts
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

To resolve a deferred service restart on a unit run the ``restart-services``
action:

.. code-block:: none

   juju run-action --wait neutron-openvswitch/1 restart-services deferred-only=True

The ``deferred-only`` argument ensures that only the necessary services are
restarted (for a charm that manages multiple services).

.. note::

   Alternatively, the service can be restarted manually on the unit. The
   status message will be removed in due course by the charm (i.e. during
   the next ``update-status`` hook execution - a maximum delay of five
   minutes).

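For example, a manual restart of the queued service from the scenario above
might look as follows (a sketch; the service name is taken from the status
message):

.. code-block:: none

   # restart the queued service directly on the unit
   juju ssh neutron-openvswitch/1 sudo systemctl restart openvswitch-switch
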
Hook calls
~~~~~~~~~~

Here the ``neutron-openvswitch/1`` unit is affected by a deferred hook call:

.. code-block:: console

   App                  Version  Status  Scale  Charm                Store       Channel  Rev  OS      Message
   neutron-openvswitch  16.3.0   active      2  neutron-openvswitch  charmstore           433  ubuntu  Unit is ready. Hooks skipped due to disabled auto restarts: config-changed
   nova-compute         21.1.2   active      2  nova-compute         charmstore           537  ubuntu  Unit is ready

   Unit                     Workload  Agent  Machine  Public address  Ports  Message
   nova-compute/0*          active    idle   6        172.20.0.13            Unit is ready
     neutron-openvswitch/1  active    idle            172.20.0.13            Unit is ready. Hooks skipped due to disabled auto restarts: config-changed

Resolving deferred hook calls
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

To resolve a deferred hook call on a unit run the ``run-deferred-hooks``
action:

.. code-block:: none

   juju run-action --wait neutron-openvswitch/1 run-deferred-hooks

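The status message should clear once the skipped hooks have run; this can be
confirmed with, e.g.:

.. code-block:: none

   juju status neutron-openvswitch
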
.. LINKS

.. CHARMS
.. _neutron-gateway: https://opendev.org/openstack/charm-neutron-gateway/src/branch/master/README.md#user-content-deferred-restarts
.. _neutron-openvswitch: https://opendev.org/openstack/charm-neutron-openvswitch/src/branch/master/README.md#user-content-deferred-restarts
.. _ovn-central: https://opendev.org/x/charm-ovn-central/src/branch/master/README.md#user-content-deferred-restarts
.. _ovn-chassis: https://opendev.org/x/charm-ovn-chassis/src/branch/master/README.md#user-content-deferred-restarts
.. _ovn-dedicated-chassis: https://opendev.org/x/charm-ovn-dedicated-chassis/src/branch/master/README.md#user-content-deferred-restarts
.. _rabbitmq-server: https://opendev.org/openstack/charm-rabbitmq-server/src/branch/master/README.md#user-content-deferred-restarts

@ -74,12 +74,8 @@ OpenStack Charms usage. To help improve it you can `file an issue`_ or

.. toctree::
   :caption: Operations
   :maxdepth: 1

   app-managing-power-events
   ceph-operations
   deferred-events
   operational-tasks
   Operations have moved (charm-guide) <https://docs.openstack.org/charm-guide/latest/howto>

.. toctree::
   :caption: Storage

@ -1,21 +0,0 @@
=================
Operational tasks
=================

This page lists operational tasks that can be applied to a Charmed OpenStack
cloud. Generally speaking, the cloud should be in a healthy state prior to
having these operations applied to it.

* :doc:`Change Keystone password <ops-change-keystone-password>`
* :doc:`Scale in the nova-compute application <ops-scale-in-nova-compute>`
* :doc:`Unseal Vault <ops-unseal-vault>`
* :doc:`Configure TLS for the Vault API <ops-config-tls-vault-api>`
* :doc:`Live migrate VMs from a running compute node <ops-live-migrate-vms>`
* :doc:`Scale back an application with the hacluster charm <ops-scale-back-with-hacluster>`
* :doc:`Scale out the nova-compute application <ops-scale-out-nova-compute>`
* :doc:`Start MySQL InnoDB Cluster from a complete outage <ops-start-innodb-from-outage>`
* :doc:`Implement automatic Glance image updates <ops-auto-glance-image-updates>`
* :doc:`Use OpenStack as a backing cloud for Juju <ops-use-openstack-to-back-juju>`
* :doc:`Implement HA with a VIP <ops-implement-ha-with-vip>`
* :doc:`Set up admin access to a cloud <ops-cloud-admin-access>`
* :doc:`Reissue TLS certificates across the cloud <ops-reissue-tls-certs>`

@ -1,145 +0,0 @@
:orphan:

========================================
Implement automatic Glance image updates
========================================

Preamble
--------

An OpenStack cloud generally benefits from making available the most recent
cloud images as it minimizes the need to perform software updates on its VMs.
It is also convenient to automate the process of updating these images. This
article will show how to accomplish all this with the
`glance-simplestreams-sync`_ charm.

Requirements
------------

The glance-simplestreams-sync charm places Simplestreams metadata in Object
Storage via the cloud's ``swift`` API endpoint. The cloud will therefore
require the presence of either Swift (`swift-proxy`_ and `swift-storage`_
charms) or the Ceph RADOS Gateway (`ceph-radosgw`_ charm).

Procedure
---------

Deploy the software
~~~~~~~~~~~~~~~~~~~

Deploy the glance-simplestreams-sync application. Here it is containerised on
machine 1:

.. code-block:: none

   juju deploy --to lxd:1 glance-simplestreams-sync
   juju add-relation glance-simplestreams-sync:identity-service keystone:identity-service
   juju add-relation glance-simplestreams-sync:certificates vault:certificates

We are assuming that the cloud is TLS-enabled (hence the Vault relation).

.. note::

   The glance-simplestreams-sync charm sets up its own ``image-stream``
   endpoint. However, it is not utilised in this present scenario. It is
   leveraged when using OpenStack as a backing cloud to Juju.

Configure image downloads
~~~~~~~~~~~~~~~~~~~~~~~~~

The recommended way to configure image downloads is with a YAML file. As an
example, we'll filter on the following:

* Bionic and Focal images
* arm64 and amd64 architectures
* officially released images and daily images
* the latest of each found image (i.e. maximum of one)

To satisfy all the above place the below configuration in, say, file
``~/gss.yaml``:

.. code-block:: yaml

   glance-simplestreams-sync:
     mirror_list: |
       [{ url: 'http://cloud-images.ubuntu.com/releases/',
          name_prefix: 'ubuntu:released',
          path: 'streams/v1/index.sjson',
          max: 1,
          item_filters: ['arch~(arm64|amd64)', 'ftype~(uefi1.img|uefi.img|disk1.img)', 'release~(bionic|focal)']
        },
        { url: 'http://cloud-images.ubuntu.com/daily/',
          name_prefix: 'ubuntu:daily',
          path: 'streams/v1/index.sjson',
          max: 1,
          item_filters: ['arch~(arm64|amd64)', 'ftype~(uefi1.img|uefi.img|disk1.img)', 'release~(bionic|focal)']
        }
       ]

Now configure the charm by referencing the file:

.. code-block:: none

   juju config --file ~/gss.yaml glance-simplestreams-sync

.. note::

   If a configuration is not provided the application's default behaviour is
   to download the latest official amd64 image for each of the last four LTS
   releases.

Enable automatic image updates
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Enable automatic image updates via the ``run`` option. Here we also specify
checks to occur on a weekly basis:

.. code-block:: none

   juju config glance-simplestreams-sync frequency=weekly run=true

Valid frequencies are 'hourly', 'daily', and 'weekly'.

Perform a manual image sync (optional)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

A manual image sync can optionally be performed with the ``sync-images``
action:

.. code-block:: none

   juju run-action --wait glance-simplestreams-sync/leader sync-images

Sample output:

.. code-block:: console

   unit-glance-simplestreams-sync-0:
     UnitId: glance-simplestreams-sync/0
     id: "32"
     results:
       .
       .
       .
       created 12b3415c-8f50-491f-916c-e08ba4da71c5: auto-sync/ubuntu-bionic-18.04-amd64-server-20210720-disk1.img
       created 73ea8a47-1b1f-48cf-b216-c7eba38d96ab: auto-sync/ubuntu-bionic-18.04-arm64-server-20210720-disk1.img
       created 37d7aeff-5ccb-4a4a-9258-f7948df4caa2: auto-sync/ubuntu-focal-20.04-amd64-server-20210720-disk1.img
       created 10acb4a1-ed7d-4a43-b14c-49d646f23b87: auto-sync/ubuntu-focal-20.04-arm64-server-20210720-disk1.img
       created 90d308d3-cf23-49da-a625-c50a55286d94: auto-sync/ubuntu-bionic-daily-amd64-server-20210720-disk1.img
       created aafa3f2b-002b-4b1c-a212-d99d858bf6b7: auto-sync/ubuntu-bionic-daily-arm64-server-20210720-disk1.img
       created 350d4537-cb8d-445b-a62f-6a1ad15ce3b7: auto-sync/ubuntu-focal-daily-amd64-server-20210720-disk1.img
       created 63f75ea0-e55f-499a-92bc-d02f46126834: auto-sync/ubuntu-focal-daily-arm64-server-20210720-disk1.img
     status: completed
     timing:
       completed: 2021-07-23 22:37:28 +0000 UTC
       enqueued: 2021-07-23 22:22:49 +0000 UTC
       started: 2021-07-23 22:22:54 +0000 UTC

This output should reflect the information available via the
:command:`openstack image list` command.
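
For instance, the newly created images should appear in the image listing:

.. code-block:: none

   openstack image list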

.. LINKS
.. _glance-simplestreams-sync: https://jaas.ai/glance-simplestreams-sync
.. _ceph-radosgw: https://jaas.ai/ceph-radosgw
.. _swift-proxy: https://jaas.ai/swift-proxy
.. _swift-storage: https://jaas.ai/swift-storage

@ -1,104 +0,0 @@
:orphan:

==================================
Change the Keystone admin password
==================================

Preamble
--------

There are valid use cases for resetting the Keystone administrator password
on a running cloud. For example, the password may have been unintentionally
exposed to a third-party during a troubleshooting session (e.g. directly on
screen, remote screen-sharing, viewing of log files, etc.).

.. warning::

   This procedure will cause downtime for Keystone, the cloud's central
   authentication service. Many core services will therefore be impacted.
   Plan for a short maintenance window (~15 minutes).

   It is recommended to first test this procedure on a staging cloud.

Procedure
---------

Confirm the admin user context
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Ensure that the current user is user 'admin':

.. code-block:: none

   env | grep OS_USERNAME
   OS_USERNAME=admin

If it's not, source the appropriate cloud admin init file (e.g. ``openrc`` or
``novarc``).
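
For example, assuming a ``novarc`` file in the home directory (the location
is illustrative):

.. code-block:: none

   # file path is an assumption; use your cloud's actual init file
   source ~/novarc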

Obtain the current password
~~~~~~~~~~~~~~~~~~~~~~~~~~~

Obtain the current password with:

.. code-block:: none

   juju run --unit keystone/leader leader-get admin_passwd

Change the password
~~~~~~~~~~~~~~~~~~~

Generate a 16-character password string with the :command:`pwgen` utility:

.. code-block:: none

   PASSWD=$(pwgen -s 16 1)

Change the password with the below command. When prompted, enter the current
password and then the new password (i.e. the output to ``echo $PASSWD``).

.. caution::

   Once the next command completes successfully the cloud will no longer be
   able to authenticate requests by the OpenStack CLI clients or the cloud's
   core services (i.e. Cinder, Glance, Neutron, Compute, Nova Cloud
   Controller).

.. code-block:: none

   openstack user password set
   Current Password: ****************
   New Password: ****************
   Repeat New Password: ****************

The entered data will not echo back to the screen.

.. note::

   Command options ``--original-password`` and ``--password`` are available
   but can leak sensitive information to the system logs.

Inform the keystone charm
~~~~~~~~~~~~~~~~~~~~~~~~~

Inform the keystone charm of the new password (double quotes are needed so
that the shell expands ``$PASSWD``):

.. code-block:: none

   juju run -u keystone/leader -- leader-set "admin_passwd=$PASSWD"

Verification
~~~~~~~~~~~~

Verify the resumption of normal cloud operations by running a routine battery
of tests. The creation of a VM is a good choice.
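
A minimal sketch of such a test (the image, flavor, network, and keypair
names below are illustrative only):

.. code-block:: none

   # names are placeholders; substitute values valid for your cloud
   openstack server create --image focal-amd64 --flavor m1.micro \
      --network int_net --key-name mykey smoke-test-vm
   openstack server show smoke-test-vm -c status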

Update any user-facing tools
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Any cloud init files (e.g. ``novarc``) that are hardcoded with the old admin
password should be updated to guarantee continued administrative access to
the cloud by admin-level operators.

Refresh any browser-cached passwords or password-management plugins (e.g.
Bitwarden, LastPass) to ensure successful cloud dashboard (Horizon) logins.

@ -1,172 +0,0 @@
:orphan:

==============================
Set up admin access to a cloud
==============================

Preamble
--------

In order to configure a newly deployed OpenStack cloud for production use one
must first gain native administrative control of it. Although this refers to
OpenStack-level admin user access, this article will show how to obtain it
via queries made with the Juju client.

.. note::

   As an alternative to the instructions presented in this article, if the
   Horizon dashboard is available, access can be obtained by downloading a
   credentials file.

Procedure
---------

Install the client software
~~~~~~~~~~~~~~~~~~~~~~~~~~~

The OpenStack clients will be needed in order to manage the cloud from the
command line. Install them on the same machine that hosts the Juju client.
This example uses the snap install method:

.. code-block:: none

   sudo snap install openstackclients --classic

Set cloud-specific authentication variables
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

In terms of authentication, three cloud-specific pieces of information are
needed:

* the Keystone administrator password
* the Keystone service endpoint
* the root CA certificate (if the cloud is TLS-enabled)

Keystone administrator password
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Set environment variable ``OS_PASSWORD`` to the Keystone administrator
password:

.. code-block:: none

   export OS_PASSWORD=$(juju run --unit keystone/leader 'leader-get admin_passwd')

Keystone service endpoint
^^^^^^^^^^^^^^^^^^^^^^^^^

Determine the IP address of the keystone unit and set environment variable
``OS_AUTH_URL`` to the Keystone service endpoint:

.. code-block:: none

   IP_ADDRESS=$(juju run --unit keystone/leader -- 'network-get --bind-address public')
   export OS_AUTH_URL=https://${IP_ADDRESS}:5000/v3

.. important::

   If the Keystone endpoint is not using TLS you will need to modify the URL
   to use HTTP.
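
That is, for a non-TLS endpoint:

.. code-block:: none

   export OS_AUTH_URL=http://${IP_ADDRESS}:5000/v3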

Root CA certificate
^^^^^^^^^^^^^^^^^^^

Place the CA certificate in a file that your OpenStack client software can
access and set environment variable ``OS_CACERT`` to that file's path. A
commonly used path that works for the ``openstackclients`` snap, for user
'ubuntu', is ``/home/ubuntu/snap/openstackclients/common/root-ca.crt``:

.. code-block:: none

   export OS_CACERT=/home/ubuntu/snap/openstackclients/common/root-ca.crt
   juju run --unit vault/leader 'leader-get root-ca' > $OS_CACERT

Set other authentication variables
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Charmed OpenStack uses standard values for other authentication variables:

.. code-block:: none

   export OS_USERNAME=admin
   export OS_PROJECT_NAME=admin
   export OS_PROJECT_DOMAIN_NAME=admin_domain
   export OS_USER_DOMAIN_NAME=admin_domain

Verify administrative control
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The admin user environment should now be complete.

First inspect all the variables:

.. code-block:: none

   env | grep OS_

A good initial verification test is to query the cloud's endpoints (Keystone
service catalog):

.. code-block:: none

   openstack endpoint list

A second recommended verification is a login to the Horizon dashboard (if
present), where the following should be used:

.. code-block:: console

   OS_USERNAME (User Name)
   OS_PASSWORD (Password)
   OS_PROJECT_DOMAIN_NAME (Domain)

You should now have the permissions to configure and manage the cloud.

Consider a helper script
~~~~~~~~~~~~~~~~~~~~~~~~

Variables can be conveniently set through the use of a shell script that you
can write yourself. However, the OpenStack Charms project maintains such
files (one script calls another) and they can be found in the
`openstack-bundles`_ repository.

Simply download the repository and source the ``openrc`` file:

.. code-block:: none

   git clone https://github.com/openstack-charmers/openstack-bundles ~/openstack-bundles
   source ~/openstack-bundles/stable/openstack-base/openrc

This sets a suite of variables. Here is an example:

.. code-block:: console

   OS_REGION_NAME=RegionOne
   OS_AUTH_VERSION=3
   OS_CACERT=/home/ubuntu/snap/openstackclients/common/root-ca.crt
   OS_AUTH_URL=https://10.0.0.162:5000/v3
   OS_PROJECT_DOMAIN_NAME=admin_domain
   OS_AUTH_PROTOCOL=https
   OS_USERNAME=admin
   OS_AUTH_TYPE=password
   OS_USER_DOMAIN_NAME=admin_domain
   OS_PROJECT_NAME=admin
   OS_PASSWORD=aegoaquoo1veZae6
   OS_IDENTITY_API_VERSION=3

Some of the above variables were not covered in the manual method but can be
required in certain situations. For instance, Swift needs ``OS_AUTH_VERSION``,
Gnocchi looks for ``OS_AUTH_TYPE``, and when backing Juju with OpenStack one
needs to know the values of multiple variables (see cloud operation :doc:`Use
OpenStack as a backing cloud for Juju <ops-use-openstack-to-back-juju>`).

.. note::

   The helper files will set the Keystone endpoint variable ``OS_AUTH_URL``
   to use HTTPS if Vault is detected as containing a root CA certificate.
   This will always be the case due to the OVN requirement for TLS via Vault.
   If Keystone is not TLS-enabled (for some reason) you will need to manually
   reset the above variable to use HTTP.

.. LINKS
.. _openstack-bundles: https://github.com/openstack-charmers/openstack-bundles

@ -1,120 +0,0 @@
:orphan:

===============================
Configure TLS for the Vault API
===============================

Preamble
--------

Configuring the Vault API with TLS assures the identity of the Vault service
and encrypts all the information Vault sends over the network. For instance,
unsealing keys will not be sent in cleartext. Note that the issuing of its
own certificates to the various cloud API services (e.g. Cinder, Glance,
etc.) is done over relations.

This procedure can also be used to re-configure an already-encrypted Vault
API endpoint.

.. warning::

   This procedure will cause Vault to become sealed. Please ensure that the
   requisite number of unseal keys are available before continuing.

.. caution::

   Although this procedure will cause Vault to become inaccessible, a cloud
   service outage will not occur unless Vault is solicited. Examples of this
   include:

   * new certificates being re-issued via the ``reissue-certificates`` vault
     charm action

   * the rebooting of a Compute node (resulting in the need to decrypt its
     VMs disk locations)

TLS material
~~~~~~~~~~~~

It is assumed that the necessary TLS material exists and that it is stored in
the below locations (the current user is 'ubuntu' in this example):

* ``/home/ubuntu/tls/server.crt`` (server certificate)
* ``/home/ubuntu/tls/server.key`` (server private key)
* ``/home/ubuntu/tls/ca.crt`` (CA certificate)

.. important::

   The CA certificate must be in the current user's home directory for it to
   be available to the vault snap at the subsequent unseal step.

In this example, the Common Name (CN) provided to the server certificate is
'vault.example.com'. Ensure that the hostname is resolvable, from the local
system, to the IP address as reported by Juju (``juju status vault``).
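
For a quick test this can be achieved with a local hosts entry (a sketch; the
address shown is illustrative):

.. code-block:: none

   # IP address is a placeholder; use the address from 'juju status vault'
   echo "10.0.0.227 vault.example.com" | sudo tee -a /etc/hosts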

For Vault in HA, there would be a unique server certificate for each unit.

Add TLS material to Vault
~~~~~~~~~~~~~~~~~~~~~~~~~

Add the base64-encoded TLS material to Vault via charm configuration options:

.. code-block:: none

   juju config vault ssl-ca="$(base64 /home/ubuntu/tls/ca.crt)"
   juju config vault ssl-cert="$(base64 /home/ubuntu/tls/server.crt)"
   juju config vault ssl-key="$(base64 /home/ubuntu/tls/server.key)"

Confirm the change by inspecting a file on the unit(s). Once everything has
settled, the server certificate can be found in
``/var/snap/vault/common/vault.crt``:

.. code-block:: none

   juju run -a vault "sudo cat /var/snap/vault/common/vault.crt"

Restart the Vault service
~~~~~~~~~~~~~~~~~~~~~~~~~

Restart **each** vault unit for its new certificate to be recognised.

.. important::

   Restarting Vault will cause it to become sealed.

For a single unit (``vault/0``):

.. code-block:: none

   juju run-action --wait vault/0 restart

The output to :command:`juju status vault` should show that Vault is sealed:

.. code-block:: console

   Unit      Workload  Agent  Machine  Public address  Ports     Message
   vault/0*  blocked   idle   3/lxd/3  10.0.0.204      8200/tcp  Unit is sealed

Vault is now configured with the new certificate.

Unseal Vault
~~~~~~~~~~~~

Unseal **each** vault unit.

For a single unit requiring three keys:

.. code-block:: none

   export VAULT_CACERT="/home/ubuntu/tls/ca.crt"
   export VAULT_ADDR="https://vault.example.com:8200"

   vault operator unseal
   vault operator unseal
   vault operator unseal

For multiple vault units, repeat the procedure by using a different value
each time for ``VAULT_ADDR``.
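
For instance, for a hypothetical second unit whose certificate CN is
'vault-2.example.com':

.. code-block:: none

   # hostname is a placeholder for the second unit's certificate CN
   export VAULT_ADDR="https://vault-2.example.com:8200"
   vault operator unseal
   vault operator unseal
   vault operator unseal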

For more information on unsealing Vault see cloud operation :doc:`Unseal
Vault <ops-unseal-vault>`.

@ -1,73 +0,0 @@
:orphan:

=======================
Implement HA with a VIP
=======================

Preamble
--------

The subordinate `hacluster charm`_ provides high availability for OpenStack
applications that lack native (built-in) HA functionality. The clustering
solution is based on Corosync and Pacemaker.

.. important::

   The virtual IP method of implementing HA requires that all units of the
   clustered OpenStack application are on the same subnet.

   The chosen VIP should also be part of a reserved subnet range that MAAS
   does not use for assigning addresses to its nodes.

Procedure
---------

HA can be included during the deployment of the principal application or
added to an existing application.

.. note::

   When hacluster is deployed it is normally given an application name that
   is based on the principal charm name (i.e.
   <principal-charm-name>-hacluster).

Deploying an application with HA
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

These commands will deploy a three-node Keystone HA cluster with a VIP of
10.246.114.11. Each node will reside in a container on existing machines 0,
1, and 2:

.. code-block:: none

   juju deploy -n 3 --to lxd:0,lxd:1,lxd:2 --config vip=10.246.114.11 keystone
   juju deploy --config cluster_count=3 hacluster keystone-hacluster
   juju add-relation keystone-hacluster:ha keystone:ha

Adding HA to an existing application
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

These commands will add two units to an assumed single existing unit to
create a three-node Keystone HA cluster with a VIP of 10.246.114.11:

.. code-block:: none

   juju config keystone vip=10.246.114.11
   juju add-unit -n 2 --to lxd:1,lxd:2 keystone
   juju deploy --config cluster_count=3 hacluster keystone-hacluster
   juju add-relation keystone-hacluster:ha keystone:ha

.. warning::

   Adding HA to an existing application will cause a control plane outage
   for the given application and any applications that depend on it. New
   units will be spawned and the Keystone service catalog will be updated
   (with the new IP address). Plan for a maintenance window.

.. note::

   Adding HA to Keystone in this way is affected by a known issue (tracked
   in bug `LP #1930763`_).
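
In either scenario, cluster formation can be monitored while the units
settle, e.g.:

.. code-block:: none

   juju status keystone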

.. LINKS
.. _hacluster charm: https://jaas.ai/hacluster
.. _LP #1930763: https://bugs.launchpad.net/charm-keystone/+bug/1930763

@ -1,381 +0,0 @@
:orphan:

============================================
Live migrate VMs from a running compute node
============================================

Preamble
--------

A VM migration is the relocation of a VM from one hypervisor to another.

When a VM has a live migration performed it is not shut down during the
process. This is useful when there is an imperative to not interrupt the
applications that are running on the VM.

This article covers manual migrations (migrating an individual VM) and node
evacuations (migrating all VMs on a compute node).

.. warning::

   * Migration involves disabling compute services on the source host,
     effectively removing the hypervisor from the cloud.

   * Network usage may be significantly impacted if block migration mode is
     used.

   * VMs with intensive memory workloads may require pausing for live
     migration to succeed.

Terminology
-----------

This article makes use of the following terms:

block migration
  A migration mode where a disk is copied over the network (source host to
  destination host). It works with local storage only.

boot-from-volume
  A root disk that is based on a Cinder volume.

ephemeral disk
  A non-root disk that is managed by Nova. It is based on local or shared
  storage.

local storage
  Storage that is local to the hypervisor. Disk images are typically found
  under ``/var/lib/nova/instances``.

nova-scheduler
  The cloud's ``nova-scheduler`` can be leveraged in the context of
  migrations to select destination hosts dynamically. It is compatible with
  both manual migrations and node evacuations.

shared storage
  Storage that is accessible to multiple hypervisors simultaneously (e.g.
  Ceph RBD, NFS, iSCSI).

Procedure
---------

Ensure live migration is enabled
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

By default, live migration is disabled. Check the current configuration:

.. code-block:: none

   juju config nova-compute enable-live-migration

Enable the functionality if the command returns 'false':

.. code-block:: none

   juju config nova-compute enable-live-migration=true

See the `nova-compute charm`_ for information on all migration-related
configuration options. Also see section `SSH keys and VM migration`_ for
information on how multiple application groups can affect migrations.

Gather relevant information
~~~~~~~~~~~~~~~~~~~~~~~~~~~

Display the nova-compute units:

.. code-block:: none

   juju status nova-compute

This article will be based on the command's following partial output:

.. code-block:: console

   Unit              Workload  Agent  Machine  Public address  Ports    Message
   nova-compute/0*   active    idle   0        10.0.0.222               Unit is ready
     ntp/0*          active    idle            10.0.0.222      123/udp  chrony: Ready
     ovn-chassis/0*  active    idle            10.0.0.222               Unit is ready
   nova-compute/1    active    idle   3        10.0.0.241               Unit is ready
     ntp/1           active    idle            10.0.0.241      123/udp  chrony: Ready
     ovn-chassis/1   active    idle            10.0.0.241               Unit is ready

List the compute node hostnames:

.. code-block:: none

   openstack hypervisor list

   +----+---------------------+-----------------+------------+-------+
   | ID | Hypervisor Hostname | Hypervisor Type | Host IP    | State |
   +----+---------------------+-----------------+------------+-------+
   |  1 | node1.maas          | QEMU            | 10.0.0.222 | up    |
   |  2 | node4.maas          | QEMU            | 10.0.0.241 | up    |
   +----+---------------------+-----------------+------------+-------+

Based on the above, map units to node names. This information will be useful
later on. The source host should also be clearly identified (this document
will use 'node1.maas'):

+------------+----------------+-------------+
| Node name  | Unit           | Source host |
+============+================+=============+
| node1.maas | nova-compute/0 | ✔           |
+------------+----------------+-------------+
| node4.maas | nova-compute/1 |             |
+------------+----------------+-------------+

List the VMs hosted on the source host:

.. code-block:: none

   openstack server list --host node1.maas --all-projects

   +--------------------------------------+---------+--------+----------------------------------+-------------+----------+
   | ID                                   | Name    | Status | Networks                         | Image       | Flavor   |
   +--------------------------------------+---------+--------+----------------------------------+-------------+----------+
   | 81df1304-f755-4ae6-9b8c-2f888f6ad623 | focal-2 | ACTIVE | int_net=192.168.0.144, 10.0.0.76 | focal-amd64 | m1.micro |
   | 7e897540-a0aa-4031-9b7c-dd03ebc8ec5e | focal-3 | ACTIVE | int_net=192.168.0.73, 10.0.0.69  | focal-amd64 | m1.micro |
   +--------------------------------------+---------+--------+----------------------------------+-------------+----------+

Ensure adequate capacity on the destination host
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Oversubscribing the destination host can lead to service outages. This is an
issue when a destination host is explicitly selected by the operator.

The following commands are useful for discovering a VM's flavor, listing
flavor parameters, and viewing the available capacity of a destination host:

.. code-block:: none

   openstack server show <vm-name> -c flavor
   openstack flavor show <flavor> -c vcpus -c ram -c disk
   openstack host show <destination-host>
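
Instantiated for the example environment used in this article (names taken
from the earlier listings):

.. code-block:: none

   openstack server show focal-2 -c flavor
   openstack flavor show m1.micro -c vcpus -c ram -c disk
   openstack host show node4.maas
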
Avoid expired Keystone tokens
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

:ref:`Keystone token <keyston_tokens>` expiration times should be increased
when dealing with oversized VMs as expired tokens will prevent the cloud
database from being updated. This will lead to migration failure and a
corrupted database entry.

To set the token expiration time to three hours (from the default one hour):

.. code-block:: none

   juju config keystone token-expiration=10800

To ensure that the new expiration time is in effect, wait for the current
tokens to expire (e.g. one hour) before continuing.

Disable the source host
~~~~~~~~~~~~~~~~~~~~~~~

Prior to migration or evacuation, disable the source host by referring to its
corresponding unit:

.. code-block:: none

   juju run-action --wait nova-compute/0 disable

This will stop nova-compute services and inform nova-scheduler to no longer
assign new VMs to the host.

Live migrate VMs
~~~~~~~~~~~~~~~~

Live migrate VMs using either manual migration or node evacuation.

Manual migration
^^^^^^^^^^^^^^^^

The command to use when live migrating VMs manually is:

.. code-block:: none

   openstack server migrate --live-migration [--block-migration] [--host <dest-host>] <vm>

Examples are provided for various scenarios.

.. note::

   * Depending on your client the Nova API Microversion of '2.30' may need
     to be specified when combining live migration with a specified host
     (i.e. ``--os-compute-api-version 2.30``).

   * Specifying a destination host will override any anti-affinity rules
     that may be in place.

.. important::

   The migration of VMs using local storage will fail if the
   ``--block-migration`` option is not specified. However, the use of this
   option will also lead to a successful migration for a combination of
   local and non-local storage (e.g. local ephemeral disk and
   boot-from-volume).

1. To migrate VM 'focal-2', which is backed by local storage, using the
   scheduler:

   .. code-block:: none

      openstack server migrate --live-migration --block-migration focal-2

2. To migrate VM 'focal-3', which is backed by non-local storage, using the
   scheduler:

   .. code-block:: none

      openstack server migrate --live-migration focal-3

3. To migrate VM 'focal-2', which is backed by a combination of local and
   non-local storage:

   .. code-block:: none

      openstack server migrate --live-migration --block-migration focal-2

4. To migrate VM 'focal-2', which is backed by local storage, to host
   'node4.maas' specifically:

   .. code-block:: none

      openstack --os-compute-api-version 2.30 server migrate \
         --live-migration --block-migration --host node4.maas focal-2

5. To migrate VM 'focal-3', which is backed by non-local storage, to host
   'node4.maas' specifically:

   .. code-block:: none

      openstack --os-compute-api-version 2.30 server migrate \
         --live-migration --host node4.maas focal-3

Node evacuation
^^^^^^^^^^^^^^^

The command to use when live evacuating a compute node is:

.. code-block:: none

   nova host-evacuate-live [--block-migrate] [--target-host <dest-host>] <source-host>

Examples are provided for various scenarios.

.. note::

   * The scheduler may send VMs to multiple destination hosts.

   * Block migration will be attempted on VMs by default.

.. important::

   The migration of VMs using non-local storage will fail if the
   ``--block-migrate`` option is specified. However, omitting this option
   will lead to a successful migration for a combination of local and
   non-local storage (e.g. local ephemeral disk and boot-from-volume). This
   option therefore has no compelling use case.

1. To evacuate host 'node1.maas' of VMs, which are backed by local or
   non-local storage (or a combination thereof):

   .. code-block:: none

      nova host-evacuate-live node1.maas

2. To evacuate host 'node1.maas' of VMs, which are backed by local or
   non-local storage (or a combination thereof), to host 'node4.maas'
   specifically:

   .. code-block:: none

      nova host-evacuate-live --target-host node4.maas node1.maas

Enable the source host
~~~~~~~~~~~~~~~~~~~~~~

Providing the source host is not being retired, re-enable it by referring to
its corresponding unit:

.. code-block:: none

   juju run-action --wait nova-compute/0 enable

This will start nova-compute services and allow nova-scheduler to run new
VMs on this host.

Revert Keystone token expiration time
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If the Keystone token expiration time was modified in an earlier step, change
it back to its original value. Here it is reset to the default of one hour:

.. code-block:: none

   juju config keystone token-expiration=3600

Troubleshooting
---------------

Migration list and migration ID
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To get a record of all past migrations on a per-VM basis, which includes the
migration ID:

.. code-block:: none

   nova migration-list

In this output columns have been removed for legibility purposes, and only a
single, currently running, migration is shown:

.. code-block:: console

   +----+-------------+------------+---------+--------------------------------------+----------------+
   | Id | Source Node | Dest Node  | Status  | Instance UUID                        | Type           |
   +----+-------------+------------+---------+--------------------------------------+----------------+
   | 29 | node4.maas  | node1.maas | running | 81df1304-f755-4ae6-9b8c-2f888f6ad623 | live-migration |
   +----+-------------+------------+---------+--------------------------------------+----------------+

The above migration has an ID of '29'.

Forcing a migration
~~~~~~~~~~~~~~~~~~~

A VM with an intensive memory workload can be hard to live migrate. In such
a case the migration can be forced by pausing the VM until the copying of
memory is finished.

The :command:`openstack server show` command output contains a ``progress``
field that normally displays a value of '0' but for this scenario of a busy
VM it will start to provide percentages.
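
Progress can be polled while the migration runs, for example (the VM name is
from the earlier example):

.. code-block:: none

   # poll the progress field every five seconds
   watch -n 5 openstack server show focal-2 -c progress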

Forcing a migration should be considered once its progress nears 90%:

.. code-block:: none

   nova live-migration-force-complete <vm> <migration-id>

.. caution::

   Some applications are time sensitive and may not tolerate a forced
   migration due to the effect pausing can have on a VM's clock.

Aborting a migration
~~~~~~~~~~~~~~~~~~~~

A migration can be aborted like this:

.. code-block:: none

   nova live-migration-abort <vm> <migration-id>

Migration logs
~~~~~~~~~~~~~~

A failed migration will result in log messages being appended to the
``nova-compute.log`` file on the source host.

.. LINKS
.. _nova-compute charm: https://jaas.ai/nova-compute
.. _SSH keys and VM migration: https://opendev.org/openstack/charm-nova-compute/src/branch/master/README.md#ssh-keys-and-vm-migration

@ -1,153 +0,0 @@
:orphan:

=========================================
Reissue TLS certificates across the cloud
=========================================

Preamble
--------

New certificates can be reissued to all cloud clients that are currently
TLS-enabled. This is easily done with an action available to the vault charm.

One use case for this operation is when a cloud's existing application
certificates have expired.

.. important::

   This operation may cause momentary downtime for all API services that are
   being issued new certificates. Plan for a short maintenance window of
   approximately 15 minutes, including post-operation verification tests.

Certificate inspection
----------------------

TLS certificates can be inspected with the :command:`openssl` command with
output compared before and after the operation. In these examples, the
Glance API is listening on 10.0.0.220:9292.

Examples:

a) Expiration dates:

   .. code-block:: none

      echo | openssl s_client -showcerts -connect 10.0.0.220:9292 2>/dev/null \
         | openssl x509 -inform pem -noout -text | grep Validity -A2

   Output:

   .. code-block:: console

      Validity
          Not Before: Sep 24 20:19:38 2021 GMT
          Not After : Sep 24 19:20:08 2022 GMT

b) Certificate chain:

   .. code-block:: none

      echo | openssl s_client -showcerts -connect 10.0.0.220:9292 2>/dev/null \
         | openssl x509 -inform pem -noout -text | sed -n '/-----BEGIN/,/-----END/p'

   Output:

   .. code-block:: console

      -----BEGIN CERTIFICATE-----
      MIIEPjCCAyagAwIBAgIUOkw3afcFa47rmYSGwdqphiboh5kwDQYJKoZIhvcNAQEL
      BQAwPTE7MDkGA1UEAxMyVmF1bHQgUm9vdCBDZXJ0aWZpY2F0ZSBBdXRob3JpdHkg
      KGNoYXJtLXBraS1sb2NhbCkwHhcNMjEwOTI0MjAxOTM4WhcNMjIwOTI0MTkyMDA4
      .
      .
      .
      jcfdFmuy6hSHaqaV3XN//nZlk7yRlmMOisGXVQFvrxWg5xyfc56353hC6FQ1tXre
      gXr20uy5HKUkNulJXhcqxqC2Txevs/KJG2TXc3oKrBManFdw0BHT3qoeK91GDdVO
      tSHFWJB+kc74RajveqYOjXiC20Ei+bJaQgwrviyPL8W1qQ==
      -----END CERTIFICATE-----

Procedure
---------

To reissue new certificates to all TLS-enabled clients run the
``reissue-certificates`` action on the leader unit:

.. code-block:: none

   juju run-action --wait vault/leader reissue-certificates

The output to the :command:`juju status` command for the model will show
activity for each affected service as their corresponding endpoints get
updated via hook calls, for example:

.. code-block:: console

   Unit                        Workload  Agent      Machine  Public address  Ports              Message
   ceph-mon/0                  active    idle       0/lxd/0  10.0.0.231                         Unit is ready and clustered
   ceph-mon/1                  active    idle       1/lxd/0  10.0.0.235                         Unit is ready and clustered
   ceph-mon/2*                 active    idle       2/lxd/0  10.0.0.217                         Unit is ready and clustered
   ceph-osd/0*                 active    idle       0        10.0.0.203                         Unit is ready (1 OSD)
   ceph-osd/1                  active    idle       1        10.0.0.216                         Unit is ready (1 OSD)
   ceph-osd/2                  active    idle       2        10.0.0.219                         Unit is ready (1 OSD)
   cinder/0*                   active    executing  1/lxd/1  10.0.0.230      8776/tcp           Unit is ready
     cinder-ceph/0*            active    idle                10.0.0.230                         Unit is ready
     cinder-mysql-router/0*    active    idle                10.0.0.230                         Unit is ready
   glance/0*                   active    executing  2/lxd/1  10.0.0.220      9292/tcp           Unit is ready
     glance-mysql-router/0*    active    idle                10.0.0.220                         Unit is ready
   keystone/0*                 active    executing  0/lxd/1  10.0.0.225      5000/tcp           Unit is ready
     keystone-mysql-router/0*  active    idle                10.0.0.225                         Unit is ready
   mysql-innodb-cluster/0      active    executing  0/lxd/2  10.0.0.240                         Unit is ready: Mode: R/O, Cluster is ONLINE and can tolerate up to ONE failure.
   mysql-innodb-cluster/1      active    executing  1/lxd/2  10.0.0.208                         Unit is ready: Mode: R/O, Cluster is ONLINE and can tolerate up to ONE failure.
   mysql-innodb-cluster/2*     active    executing  2/lxd/2  10.0.0.218                         Unit is ready: Mode: R/W, Cluster is ONLINE and can tolerate up to ONE failure.
   neutron-api/0*              active    idle       1/lxd/3  10.0.0.238      9696/tcp           Unit is ready
     neutron-api-plugin-ovn/0* active    executing           10.0.0.238                         Unit is ready
     neutron-mysql-router/0*   active    idle                10.0.0.238                         Unit is ready
   nova-cloud-controller/0*    active    executing  0/lxd/3  10.0.0.236      8774/tcp,8775/tcp  Unit is ready
     nova-mysql-router/0*      active    idle                10.0.0.236                         Unit is ready
   nova-compute/0*             active    idle       0        10.0.0.203                         Unit is ready
     ntp/0*                    active    idle                10.0.0.203      123/udp            chrony: Ready
     ovn-chassis/0*            active    executing           10.0.0.203                         Unit is ready
   ovn-central/0               active    executing  0/lxd/4  10.0.0.228      6641/tcp,6642/tcp  Unit is ready (northd: active)
   ovn-central/1               active    executing  1/lxd/4  10.0.0.232      6641/tcp,6642/tcp  Unit is ready
   ovn-central/2*              active    executing  2/lxd/3  10.0.0.213      6641/tcp,6642/tcp  Unit is ready (leader: ovnnb_db, ovnsb_db)
   placement/0*                active    executing  2/lxd/4  10.0.0.210      8778/tcp           Unit is ready
     placement-mysql-router/0* active    idle                10.0.0.210                         Unit is ready
   rabbitmq-server/0*          active    idle       2/lxd/5  10.0.0.206      5672/tcp           Unit is ready
   vault/0*                    active    idle       0/lxd/5  10.0.0.227      8200/tcp           Unit is ready (active: true, mlock: disabled)
     vault-mysql-router/0*     active    idle                10.0.0.227                         Unit is ready
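
One simple way to follow this activity as it unfolds is to poll the model
status, e.g.:

.. code-block:: none

   watch -n 10 juju status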

Verification
------------

Verify that cloud service endpoints are available and are using HTTPS:

.. code-block:: none

   openstack endpoint list

Sample output:

.. code-block:: console

   +----------------------------------+-----------+--------------+--------------+---------+-----------+------------------------------+
   | ID                               | Region    | Service Name | Service Type | Enabled | Interface | URL                          |
   +----------------------------------+-----------+--------------+--------------+---------+-----------+------------------------------+
   | 181cc040c4c141d78a0f942dd584ac22 | RegionOne | keystone     | identity     | True    | public    | https://10.0.0.225:5000/v3   |
   | 235bd5e3831443afb4bf46929d1840c8 | RegionOne | placement    | placement    | True    | public    | https://10.0.0.210:8778      |
   | 2dd78e0f745b4bd49f92256d95187a30 | RegionOne | keystone     | identity     | True    | admin     | https://10.0.0.225:35357/v3  |
   | 39773c0683da4a0bb60909c12e7db69a | RegionOne | nova         | compute      | True    | public    | https://10.0.0.203:8774/v2.1 |
   | 49e72a65aa2f441db8e78e641bf6fe0c | RegionOne | placement    | placement    | True    | admin     | https://10.0.0.210:8778      |
   | 566e4d3850c64da38274e53a556eebe9 | RegionOne | neutron      | network      | True    | public    | https://10.0.0.238:9696      |
   | 7a803410e3344ce6912b7124b486ef4a | RegionOne | nova         | compute      | True    | admin     | https://10.0.0.203:8774/v2.1 |
   | 823c22a4951549169714d9e368dfe760 | RegionOne | nova         | compute      | True    | internal  | https://10.0.0.203:8774/v2.1 |
   | 9231f55f7d23442a9915a4321c3fc0e8 | RegionOne | placement    | placement    | True    | internal  | https://10.0.0.210:8778      |
   | b0e384c7368f4110b770eb56c3d720e1 | RegionOne | neutron      | network      | True    | internal  | https://10.0.0.238:9696      |
   | c658bd5a200d4111a31ae71e31503c35 | RegionOne | glance       | image        | True    | public    | https://10.0.0.220:9292      |
   | ce49bdeb066b4e3bafa97eec7cfec657 | RegionOne | glance       | image        | True    | internal  | https://10.0.0.220:9292      |
   | d320d4fc76574d2b806a8e88152b4ea1 | RegionOne | keystone     | identity     | True    | internal  | https://10.0.0.225:5000/v3   |
   | e6676dbb9e784e8880c00f6fbc8dd4b6 | RegionOne | glance       | image        | True    | admin     | https://10.0.0.220:9292      |
   | ec5d565e34124cdd8e694aaef8705611 | RegionOne | neutron      | network      | True    | admin     | https://10.0.0.238:9696      |
   +----------------------------------+-----------+--------------+--------------+---------+-----------+------------------------------+

Also check the successful resumption of cloud operations by running a routine
battery of tests. The creation of a VM is a good choice.

@ -1,154 +0,0 @@
:orphan:

==================================================
Scale back an application with the hacluster charm
==================================================

Preamble
--------

This article shows how to scale back an application that is made highly
available by means of the subordinate hacluster charm. It implies the
removal of one or more of the principal application's units. This is easily
done with generic Juju commands and actions available to the hacluster charm.

.. note::

   Since the application being scaled back is already in HA mode the removal
   of one of its cluster members should not cause any immediate interruption
   of cloud services.

   Scaling back an application will also remove its associated hacluster
   unit. It is best practice to have at least three hacluster units per
   application at all times. An odd number is also recommended.

Procedure
---------

If the unit being removed is in a 'lost' state (as seen in :command:`juju
status`) please first see the `Notes`_ section.

List the application units
~~~~~~~~~~~~~~~~~~~~~~~~~~

Display the units, in this case for the vault application:

.. code-block:: none

   juju status vault

This article will be based on the following output:

.. code-block:: console

   Unit                     Workload  Agent  Machine  Public address  Ports     Message
   vault/0*                 active    idle   0/lxd/5  10.0.0.227      8200/tcp  Unit is ready (active: true, mlock: disabled)
     vault-hacluster/0*     active    idle            10.0.0.227                Unit is ready and clustered
     vault-mysql-router/0*  active    idle            10.0.0.227                Unit is ready
   vault/1                  active    idle   1/lxd/5  10.0.0.234      8200/tcp  Unit is ready (active: true, mlock: disabled)
     vault-hacluster/1      active    idle            10.0.0.234                Unit is ready and clustered
     vault-mysql-router/1   active    idle            10.0.0.234                Unit is ready
   vault/2                  active    idle   2/lxd/6  10.0.0.233      8200/tcp  Unit is ready (active: true, mlock: disabled)
     vault-hacluster/2      active    idle            10.0.0.233                Unit is ready and clustered
     vault-mysql-router/2   active    idle            10.0.0.233                Unit is ready

In the below example, unit ``vault/1`` will be removed.

Pause the subordinate hacluster unit
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Pause the hacluster unit that corresponds to the principle application unit
|
||||
being removed. Here, unit ``vault-hacluster/1`` corresponds to unit
|
||||
``vault/1``:
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
juju run-action --wait vault-hacluster/1 pause
|
||||
|
||||
.. caution::
|
||||
|
||||
Unit numbers for a subordinate unit and its corresponding principal unit are
|
||||
not necessarily the same (e.g. it is possible to have ``vault-hacluster/2``
|
||||
correspond to ``vault/1``).
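
If in doubt about the pairing, the subordinates of a given principal unit can
be read from the structured status output (a sketch, assuming the ``jq``
utility is available on the client):

.. code-block:: none

   juju status vault --format=json | \
      jq '.applications.vault.units["vault/1"].subordinates | keys'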

Remove the principal application unit
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Remove the principal application unit:

.. code-block:: none

   juju remove-unit vault/1

This will also remove the hacluster subordinate unit (and any other
subordinate units).

Update the ``cluster_count`` value
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Inform the hacluster charm about the new number of hacluster units, two here:

.. code-block:: none

   juju config vault-hacluster cluster_count=2

In this example a count of two (less than three) removes quorum functionality
and enables a two-node cluster. This is a sub-optimal state and is shown as
an example only.

Update Corosync
~~~~~~~~~~~~~~~

Remove Corosync nodes from its ring and update ``corosync.conf`` to reflect
the new number of nodes (``min_quorum`` is recalculated):

.. code-block:: none

   juju run-action --wait vault-hacluster/leader update-ring i-really-mean-it=true

Check the status of the Corosync cluster by querying a remaining hacluster
unit:

.. code-block:: none

   juju ssh 0/lxd/5 sudo crm status

There should not be any node listed as OFFLINE.
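
Quorum can also be inspected with the Corosync tooling itself (assuming the
stock ``corosync-quorumtool`` utility is present on the unit, as it normally
is on hacluster-managed machines):

.. code-block:: none

   juju ssh 0/lxd/5 sudo corosync-quorumtool -s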

Verify cloud services
~~~~~~~~~~~~~~~~~~~~~

For this example, the final :command:`juju status vault` output is:

.. code-block:: console

   Unit                     Workload  Agent  Machine  Public address  Ports     Message
   vault/0*                 active    idle   0/lxd/5  10.0.0.227      8200/tcp  Unit is ready (active: true, mlock: disabled)
     vault-hacluster/0*     active    idle            10.0.0.227                Unit is ready and clustered
     vault-mysql-router/0*  active    idle            10.0.0.227                Unit is ready
   vault/2                  active    idle   2/lxd/6  10.0.0.233      8200/tcp  Unit is ready (active: true, mlock: disabled)
     vault-hacluster/2      active    idle            10.0.0.233                Unit is ready and clustered
     vault-mysql-router/2   active    idle            10.0.0.233                Unit is ready

Ensure that all cloud services are working as expected.

Notes
-----

Pre-removal, in the case where the principal application unit has
transitioned to a 'lost' state (e.g. dropped off the network due to a
hardware failure):

#. the first step (pause the hacluster unit) can be skipped
#. the second step (remove the principal unit) can be replaced by:

   .. code-block:: none

      juju remove-machine N --force

N is the Juju machine ID (see the :command:`juju status` command) where the
unit to be removed is running.
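
In this article's example, ``vault/1`` runs on machine ``1/lxd/5``, so the
command would be:

.. code-block:: none

   juju remove-machine 1/lxd/5 --force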

.. warning::

   Removing the machine by force will naturally remove any other units that
   may be present, including those from an entirely different application.

@@ -1,120 +0,0 @@

:orphan:

=====================================
Scale in the nova-compute application
=====================================

Preamble
--------

Scaling in the nova-compute application implies the removal of one or more
nova-compute units (i.e. compute nodes). This is easily done with generic
Juju commands and actions available to the nova-compute charm.

Procedure
---------

List the nova-compute units
~~~~~~~~~~~~~~~~~~~~~~~~~~~

Display the nova-compute units:

.. code-block:: none

   juju status nova-compute

This article will be based on the following output:

.. code-block:: console

   Unit              Workload  Agent  Machine  Public address  Ports    Message
   nova-compute/0*   active    idle   15       10.5.0.5                 Unit is ready
     ntp/0*          active    idle            10.5.0.5        123/udp  chrony: Ready
     ovn-chassis/0*  active    idle            10.5.0.5                 Unit is ready
   nova-compute/1    active    idle   16       10.5.0.24                Unit is ready
     ntp/2           active    idle            10.5.0.24       123/udp  chrony: Ready
     ovn-chassis/2   active    idle            10.5.0.24                Unit is ready
   nova-compute/2    active    idle   17       10.5.0.10                Unit is ready
     ntp/1           active    idle            10.5.0.10       123/udp  chrony: Ready
     ovn-chassis/1   active    idle            10.5.0.10                Unit is ready

.. tip::

   You can use the :command:`openstack` client to map compute nodes to
   nova-compute units by IP address: ``openstack hypervisor list``.

Disable the node
~~~~~~~~~~~~~~~~

Disable the compute node by referring to its corresponding unit, here
``nova-compute/0``:

.. code-block:: none

   juju run-action --wait nova-compute/0 disable

This will stop nova-compute services and inform nova-scheduler to no longer
assign new VMs to the unit.
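
One way to confirm the result (a sketch; admin privileges are typically
required) is to list the nova-compute services and check that the node's
status now reads 'disabled':

.. code-block:: none

   openstack compute service list --service nova-compute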

.. warning::

   Before continuing, make sure that all VMs hosted on the target compute
   node have been either deleted or migrated to another node.
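
As a sketch of that clean-up, the VMs still on the node can be listed and
then live migrated off of it (``<hypervisor-hostname>`` and ``<server-id>``
are placeholders; map the hostname from the unit's IP address with
``openstack hypervisor list``, and note that the ``--live-migration`` flag
applies to recent openstack client releases):

.. code-block:: none

   openstack server list --all-projects --host <hypervisor-hostname>
   openstack server migrate --live-migration <server-id>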

Remove the node
~~~~~~~~~~~~~~~

Now remove the compute node from the cloud:

.. code-block:: none

   juju run-action --wait nova-compute/0 remove-from-cloud

The workload status of the unit can be checked with:

.. code-block:: none

   juju status nova-compute/0

Sample output:

.. code-block:: console

   Unit              Workload  Agent  Machine  Public address  Ports    Message
   nova-compute/0*   blocked   idle   15       10.5.0.5                 Unit was removed from the cloud
     ntp/0*          active    idle            10.5.0.5        123/udp  chrony: Ready
     ovn-chassis/0*  active    idle            10.5.0.5                 Unit is ready

At this point (before the unit is actually removed from the model with the
:command:`remove-unit` command) the process can be reverted with the
``register-to-cloud`` action, followed by the ``enable`` action. This
combination will restart nova-compute services and enable nova-scheduler to
run new VMs on the unit.
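
A sketch of that reversal, using the same unit as above:

.. code-block:: none

   juju run-action --wait nova-compute/0 register-to-cloud
   juju run-action --wait nova-compute/0 enable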

Remove the unit
~~~~~~~~~~~~~~~

Now that the compute node has been logically removed at the OpenStack level,
remove its unit from the model:

.. code-block:: none

   juju remove-unit nova-compute/0

Request the status of the application once more:

.. code-block:: none

   juju status nova-compute

The unit's removal should be confirmed by its absence in the output:

.. code-block:: console

   Unit              Workload  Agent  Machine  Public address  Ports    Message
   nova-compute/1*   active    idle   16       10.5.0.24                Unit is ready
     ntp/2*          active    idle            10.5.0.24       123/udp  chrony: Ready
     ovn-chassis/2   active    idle            10.5.0.24                Unit is ready
   nova-compute/2    active    idle   17       10.5.0.10                Unit is ready
     ntp/1           active    idle            10.5.0.10       123/udp  chrony: Ready
     ovn-chassis/1*  active    idle            10.5.0.10                Unit is ready
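
If the underlying machine is no longer needed it can also be removed from the
model (machine 15 in this example), which releases the node back to MAAS:

.. code-block:: none

   juju remove-machine 15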

@@ -1,152 +0,0 @@

:orphan:

======================================
Scale out the nova-compute application
======================================

Preamble
--------

Scaling out the nova-compute application implies the addition of one or more
nova-compute units (i.e. compute nodes). It is a straightforward operation
that should not incur any cloud downtime.

Procedure
---------

Check the current state of the cloud, scale out by adding a single compute
node, and verify the new node.

Current state
~~~~~~~~~~~~~

Gather basic information about the current state of the cloud in terms of the
nova-compute application:

.. code-block:: none

   juju status nova-compute

Below is sample output from an OVN-based cloud. This example cloud has a
single nova-compute unit:

.. code-block:: console

   Model      Controller  Cloud/Region      Version  SLA          Timestamp
   openstack  maas-one    maas-one/default  2.9.0    unsupported  13:32:43Z

   App           Version  Status  Scale  Charm         Store       Channel  Rev  OS      Message
   ceph-osd               active      0  ceph-osd      charmstore  stable   310  ubuntu  Unit is ready (1 OSD)
   nova-compute  23.0.0   active      1  nova-compute  charmstore  stable   327  ubuntu  Unit is ready
   ntp           3.5      active      1  ntp           charmstore  stable    45  ubuntu  chrony: Ready
   ovn-chassis   20.12.0  active      1  ovn-chassis   charmstore  stable    14  ubuntu  Unit is ready

   Unit              Workload  Agent  Machine  Public address  Ports    Message
   nova-compute/0*   active    idle   0        10.0.0.222               Unit is ready
     ntp/0*          active    idle            10.0.0.222      123/udp  chrony: Ready
     ovn-chassis/0*  active    idle            10.0.0.222               Unit is ready

   Machine  State    DNS         Inst id  Series  AZ       Message
   0        started  10.0.0.222  node1    focal   default  Deployed

Display the name of the current compute host:

.. code-block:: none

   openstack host list

   +---------------------+-----------+----------+
   | Host Name           | Service   | Zone     |
   +---------------------+-----------+----------+
   | juju-616a7f-0-lxd-3 | conductor | internal |
   | juju-616a7f-0-lxd-3 | scheduler | internal |
   | node1.maas          | compute   | nova     |
   +---------------------+-----------+----------+

Scale out
~~~~~~~~~

Use the ``add-unit`` command to scale out the nova-compute application.
Multiple units can be added with the use of the ``-n`` option.
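
For instance, to add two new units in one step:

.. code-block:: none

   juju add-unit -n 2 nova-compute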

.. note::

   If the node has specific hardware-related requirements (e.g. storage) it
   will need to be manually attended to first (within MAAS) and then targeted
   with the ``--to`` option.

   The new unit can also be placed on an existing Juju machine (co-located
   with another application). In this case, if the ``--to`` option is used it
   will refer to the machine ID.

Here we add a single unit onto a new machine (MAAS node):

.. code-block:: none

   juju add-unit --to node4.maas nova-compute

The status output should eventually look similar to:

.. code-block:: console

   Model      Controller  Cloud/Region      Version  SLA          Timestamp
   openstack  maas-one    maas-one/default  2.9.0    unsupported  14:05:36Z

   App           Version  Status  Scale  Charm         Store       Channel  Rev  OS      Message
   ceph-osd      16.2.0   active      1  ceph-osd      charmstore  stable   310  ubuntu  Unit is ready (1 OSD)
   nova-compute  23.0.0   active      2  nova-compute  charmstore  stable   327  ubuntu  Unit is ready
   ntp           3.5      active      2  ntp           charmstore  stable    45  ubuntu  chrony: Ready
   ovn-chassis   20.12.0  active      2  ovn-chassis   charmstore  stable    14  ubuntu  Unit is ready

   Unit              Workload  Agent  Machine  Public address  Ports    Message
   ceph-osd/0        active    idle   0        10.0.0.222               Unit is ready (1 OSD)
   nova-compute/0*   active    idle   0        10.0.0.222               Unit is ready
     ntp/0*          active    idle            10.0.0.222      123/udp  chrony: Ready
     ovn-chassis/0*  active    idle            10.0.0.222               Unit is ready
   nova-compute/1    active    idle   3        10.0.0.241               Unit is ready
     ntp/1           active    idle            10.0.0.241      123/udp  chrony: Ready
     ovn-chassis/1   active    idle            10.0.0.241               Unit is ready

   Machine  State    DNS         Inst id  Series  AZ       Message
   0        started  10.0.0.222  node1    focal   default  Deployed
   3        started  10.0.0.241  node4    focal   default  Deployed

Verification
~~~~~~~~~~~~

Verify that the new compute node is functional by creating a VM on it.

First confirm that the new compute host is known to the cloud:

.. code-block:: none

   openstack host list

   +---------------------+-----------+----------+
   | Host Name           | Service   | Zone     |
   +---------------------+-----------+----------+
   | juju-616a7f-0-lxd-3 | conductor | internal |
   | juju-616a7f-0-lxd-3 | scheduler | internal |
   | node1.maas          | compute   | nova     |
   | node4.maas          | compute   | nova     |
   +---------------------+-----------+----------+

Then create a VM by targeting the new host, in this case 'node4.maas'. Note
that a minimum Nova API microversion is required (the cloud admin role is
needed to specify this):

.. code-block:: none

   openstack --os-compute-api-version 2.74 server create \
      --image focal-amd64 --flavor m1.micro --key-name admin-key \
      --network int_net --host node4.maas \
      focal-2

Confirm that the new node is being used (information only available to the
cloud admin by default):

.. code-block:: none

   openstack server show focal-2 | grep hypervisor

   | OS-EXT-SRV-ATTR:hypervisor_hostname | node4.maas

@@ -1,60 +0,0 @@

:orphan:

=================================================
Start MySQL InnoDB Cluster from a complete outage
=================================================

Preamble
--------

Regardless of how MySQL InnoDB Cluster services were shut down (gracefully,
hard shutdown, or power outage) a special startup procedure is required in
order to put the cloud database back online.

Procedure
---------

This example will assume that the state of the cloud database is as follows:

.. code-block:: none

   juju status mysql-innodb-cluster

   App                   Version  Status   Scale  Charm                 Store       Channel  Rev  OS      Message
   mysql-innodb-cluster  8.0.25   blocked      3  mysql-innodb-cluster  charmstore  stable     7  ubuntu  Cluster is inaccessible from this instance. Please check logs for details.

   Unit                     Workload  Agent  Machine  Public address  Ports  Message
   mysql-innodb-cluster/0   blocked   idle   0/lxd/2  10.0.0.240             Cluster is inaccessible from this instance. Please check logs for details.
   mysql-innodb-cluster/1   blocked   idle   1/lxd/2  10.0.0.208             Cluster is inaccessible from this instance. Please check logs for details.
   mysql-innodb-cluster/2*  blocked   idle   2/lxd/2  10.0.0.218             Cluster is inaccessible from this instance. Please check logs for details.

Initialise the cluster by running the ``reboot-cluster-from-complete-outage``
action on any mysql-innodb-cluster unit:

.. code-block:: none

   juju run-action --wait mysql-innodb-cluster/1 reboot-cluster-from-complete-outage

.. important::

   If the chosen unit is not the most up-to-date in terms of cluster activity
   the action will fail. However, the action's output messaging will include
   the correct node to use (in terms of its IP address). In such a case,
   simply re-run the action against the proper unit.

The mysql-innodb-cluster application should now be back to a clustered and
healthy state:

.. code-block:: console

   App                   Version  Status  Scale  Charm                 Store       Channel  Rev  OS      Message
   mysql-innodb-cluster  8.0.25   active      3  mysql-innodb-cluster  charmstore  stable     7  ubuntu  Unit is ready: Mode: R/O, Cluster is ONLINE and can tolerate up to ONE failure.

   Unit                     Workload  Agent  Machine  Public address  Ports  Message
   mysql-innodb-cluster/0   active    idle   0/lxd/2  10.0.0.240             Unit is ready: Mode: R/O, Cluster is ONLINE and can tolerate up to ONE failure.
   mysql-innodb-cluster/1   active    idle   1/lxd/2  10.0.0.208             Unit is ready: Mode: R/O, Cluster is ONLINE and can tolerate up to ONE failure.
   mysql-innodb-cluster/2*  active    idle   2/lxd/2  10.0.0.218             Unit is ready: Mode: R/W, Cluster is ONLINE and can tolerate up to ONE failure.

See the :ref:`mysql-innodb-cluster section <mysql_innodb_cluster_startup>` on
the :doc:`Managing power events <app-managing-power-events>` page for full
coverage.

@@ -1,81 +0,0 @@

:orphan:

============
Unseal Vault
============

Preamble
--------

The Vault service always starts in a sealed state. Unsealing is the process
of obtaining the master key necessary to read the decryption key that
decrypts the data stored within. Prior to unsealing, therefore, Vault cannot
be accessed by the cloud.

.. important::

   Unsealing involves the input of special unseal keys, the number of which
   depends on how Vault was originally initialised. Without these keys Vault
   cannot be unsealed.

Procedure
---------

.. note::

   Ensure that the ``vault`` snap is installed on your Juju client host. You
   will need it to manage the Vault that is deployed in your cloud.
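
   If it is not already present, it can be installed with:

   .. code-block:: none

      sudo snap install vault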

The output of :command:`juju status vault` should show that Vault is sealed:

.. code-block:: console

   Unit      Workload  Agent  Machine  Public address  Ports     Message
   vault/0*  blocked   idle   0/lxd/0  10.0.0.204      8200/tcp  Unit is sealed

Unseal **each** vault unit.

.. note::

   If the Vault API is encrypted see cloud operation :doc:`Configure TLS for
   the Vault API <ops-config-tls-vault-api>`.

For a single unit requiring three keys (``vault/0`` with IP address
10.0.0.204):

.. code-block:: none

   export VAULT_ADDR="http://10.0.0.204:8200"

   vault operator unseal
   vault operator unseal
   vault operator unseal

You will be prompted for the unseal keys. The information will not be echoed
back to the screen nor captured in the shell's history.

The output of :command:`juju status vault` should eventually contain:

.. code-block:: console

   Unit      Workload  Agent  Machine  Public address  Ports     Message
   vault/0*  active    idle   0/lxd/0  10.0.0.204      8200/tcp  Unit is ready (active: true, mlock: disabled)

.. note::

   It can take a few minutes for the "ready" status to appear. To expedite,
   force a status update: ``juju run -u vault/0 hooks/update-status``.

For multiple vault units, repeat the procedure by using a different value
each time for ``VAULT_ADDR``. For a three-member Vault cluster the output
should look similar to:

.. code-block:: console

   Unit                  Workload  Agent  Machine  Public address  Ports     Message
   vault/0               active    idle   0/lxd/0  10.0.0.204      8200/tcp  Unit is ready (active: true, mlock: disabled)
     vault-hacluster/1   active    idle            10.0.0.204                Unit is ready and clustered
   vault/1*              active    idle   1/lxd/0  10.0.0.205      8200/tcp  Unit is ready (active: false, mlock: disabled)
     vault-hacluster/0*  active    idle            10.0.0.205                Unit is ready and clustered
   vault/2               active    idle   2/lxd/0  10.0.0.206      8200/tcp  Unit is ready (active: false, mlock: disabled)
     vault-hacluster/2   active    idle            10.0.0.206                Unit is ready and clustered

@@ -1,184 +0,0 @@

:orphan:

=========================================
Use OpenStack as a backing cloud for Juju
=========================================

Preamble
--------

An OpenStack cloud can be used as a backing cloud to Juju. This means that
Juju-managed workloads will run in OpenStack VMs.

Requirements
------------

The glance-simplestreams-sync application will need to be deployed in the
OpenStack cloud. This will manage image downloads and place SimpleStreams
image metadata in Object Storage for Juju to consult in order to provision
its machines with those images.

Cloud operation :doc:`Implement automatic Glance image updates
<ops-auto-glance-image-updates>` has full deployment instructions.

Once the above requirement is met, the metadata should be available to Juju
via the ``image-stream`` endpoint, which points to an Object Storage URL. For
example:

.. code-block:: none

   openstack endpoint list --service image-stream

   +---------------------+-----------+--------------+-----------------+---------+-----------+-----------------------------------------------------+
   | ID                  | Region    | Service Name | Service Type    | Enabled | Interface | URL                                                 |
   +---------------------+-----------+--------------+-----------------+---------+-----------+-----------------------------------------------------+
   | 043b73545d804457... | RegionOne | image-stream | product-streams | True    | admin     | https://10.0.0.224:443/swift/simplestreams/data/    |
   | ad06281ba76e4cdf... | RegionOne | image-stream | product-streams | True    | public    | https://10.0.0.224:443/swift/v1/simplestreams/data/ |
   | e1baedce6e004da8... | RegionOne | image-stream | product-streams | True    | internal  | https://10.0.0.224:443/swift/v1/simplestreams/data/ |
   +---------------------+-----------+--------------+-----------------+---------+-----------+-----------------------------------------------------+

.. note::

   If the cloud is TLS-enabled, the initial image sync (whether manual or
   automatic) will update the ``image-stream`` endpoint to HTTPS.

Procedure
---------

The procedure will consist of adding a TLS-enabled OpenStack cloud to Juju,
adding a credential to Juju, and finally creating a Juju controller.

Add the cloud
~~~~~~~~~~~~~

Adding the cloud can be done either interactively (user prompts) or via a
YAML file.

Here we'll use file ``mystack-cloud.yaml`` to define a cloud called
'mystack':

.. code-block:: yaml

   clouds:
     mystack:
       type: openstack
       auth-types: userpass
       regions:
         RegionOne:
           endpoint: https://10.0.0.225:5000/v3
       ca-certificates:
         - |
           -----BEGIN CERTIFICATE-----
           MIIDazCCAlOgAwIBAgIUQ0ASDlfq4sWpPwrxjspBZ4DO+bgwDQYJKoZIhvcNAQEL
           BQAwPTE7MDkGA1UEAxMyVmF1bHQgUm9vdCBDZXJ0aWZpY2F0ZSBBdXRob3JpdHkg
           KGNoYXJtLXBraS1sb2NhbCkwHhcNMjEwODAyMjI1MjUyWhcNMzEwNzMxMjE1MzIx
           WjA9MTswOQYDVQQDEzJWYXVsdCBSb290IENlcnRpZmljYXRlIEF1dGhvcml0eSAo
           Y2hhcm0tcGtpLWxvY2FsKTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEB
           ALjXjvYtmVoAw15Zuub41vRabiSZe8GF64nKb0EZxN9/13dAINYhusBX+5CHxFUm
           qOSmktu8DtKUvqpaoTgAAJerugbW2Xzmj23T9rKk4y3zoVPpuMRozN8Riv8itBaw
           LKImxKeUetDWwhWEO7uX0+5K48Vg5hhiiGZJaHaVU1eSjSWnVKFGbExgv9PsS4Wt
           AnL3awWuR/3NulZTmNHqnwNfb+2DffdQVTH7UyuqlNNhTyQZQOlKY1DwtHZiAMLE
           rb1yoLpx6gR4JR8PuohTqu0MrWNqZLQnnIMc/Ty3kRrfSTgJslwFTGyzBNRIYt13
           PF3c51lDlLwszqW3NfBlpIUCAwEAAaNjMGEwDgYDVR0PAQH/BAQDAgEGMA8GA1Ud
           EwEB/wQFMAMBAf8wHQYDVR0OBBYEFA38OdAQaIVjor23NY7m5seQACduMB8GA1Ud
           IwQYMBaAFA38OdAQaIVjor23NY7m5seQACduMA0GCSqGSIb3DQEBCwUAA4IBAQCj
           D/bVi/t0t1B7HI4IuBS3oLwxvy098qEu1/UPmJ6EXgEWT7Q/2SZrvdWXr8FJHbk3
           Meu5N+Sn5mksXRWhl6E7DXWGyABkvAQUdGgF6gQxg80XbX1LW6G1mzto1QeaCHZl
           Yl04rZzt2P5ut/CMJn6PFI7GhkhwOsrWKx2+wxZaLHwNFuGiNUJp8mOl0sCPqq7i
           1CKzbDp12oW6enWvL6zzntHB0VY4wE6OBwghHiXJ2FHSwClQoEzKxR6Z+onpw8EJ
           3ZkLiYiEs0fljKcKdBtnjc/PiKIC29OAcGEDGEdy2YX4mH19fNTZoAGkIkLg6CuW
           bSOc6nke3F1sEtda0CbQ
           -----END CERTIFICATE-----

The endpoint (the cloud's public Keystone endpoint) and the region can be
obtained by running ``openstack endpoint list``.
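
For example, to show just the public Keystone endpoint needed for the
``endpoint`` value above:

.. code-block:: none

   openstack endpoint list --service keystone --interface public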

Because this example cloud is using TLS, we need to pass the related CA
certificate. This can be gathered, if using Vault, by running ``juju
run-action --wait vault/leader get-root-ca``.

To add the cloud:

.. code-block:: none

   juju add-cloud --client mystack -f mystack-cloud.yaml

See `Adding an OpenStack cloud`_ in the Juju documentation for more general
information.
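
As a quick check, the new cloud should now appear in the client's list of
known clouds:

.. code-block:: none

   juju clouds --client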

Add a credential
~~~~~~~~~~~~~~~~

A Juju credential, which represents a set of OpenStack credentials, needs to
be associated with the newly added cloud. This can be done either
interactively (user prompts) or via a YAML file.

Here we'll use file ``mystack-creds.yaml`` to define a credential called,
say, 'operator':

.. code-block:: yaml

   credentials:
     mystack:
       operator:
         auth-type: userpass
         version: "3"
         password: Boh9wiahee8xah5l
         username: admin
         tenant-name: admin
         user-domain-name: admin_domain
         project-domain-name: admin_domain

.. tip::

   Many of the values placed in the YAML file can be obtained by sourcing the
   cloud admin's init file and reviewing the values for commonly used
   OpenStack environment variables (e.g. ``source openrc && env | grep
   OS_``).

To add the credential for cloud 'mystack':

.. code-block:: none

   juju add-credential --client mystack -f mystack-creds.yaml

See `Adding an OpenStack credential`_ in the Juju documentation for further
guidance.

Create a controller
~~~~~~~~~~~~~~~~~~~

Create a Juju controller called, say, 'over-controller' on cloud 'mystack'
using credential 'operator':

.. code-block:: none

   juju bootstrap --credential operator \
      mystack over-controller

Often a cloud will provide access to its VMs only through floating IP
addresses on a public network. In such a case, add constraint
``allocate-public-ip``:

.. code-block:: none

   juju bootstrap --credential operator \
      --bootstrap-constraints allocate-public-ip=true \
      mystack over-controller

Inspect the new controller:

.. code-block:: none

   juju controllers

   Controller        Model      User   Access     Cloud/Region         Models  Nodes    HA  Version
   over-controller*  default    admin  superuser  mystack/RegionOne         2      1  none  2.9.10
   under-controller  openstack  admin  superuser  corpstack/corpstack       2      1  none  2.9.0

Controller 'under-controller' is managing the original OpenStack cloud.

See `Creating a Juju controller for OpenStack`_ in the Juju documentation for
further guidance.

.. LINKS
.. _Adding an OpenStack cloud: https://juju.is/docs/olm/openstack#heading--adding-an-openstack-cloud
.. _Adding an OpenStack credential: https://juju.is/docs/olm/openstack#heading--adding-an-openstack-credential
.. _Creating a Juju controller for OpenStack: https://juju.is/docs/olm/openstack#heading--creating-a-juju-controller-for-openstack

@@ -25,3 +25,5 @@
/project-deploy-guide/charm-deployment-guide/latest/app-pci-passthrough-gpu.html 301 /project-deploy-guide/charm-deployment-guide/latest/pci-passthrough.html
/project-deploy-guide/charm-deployment-guide/latest/app-erasure-coding.html 301 /project-deploy-guide/charm-deployment-guide/latest/ceph-erasure-coding.html
/project-deploy-guide/charm-deployment-guide/latest/app-manila-ganesha.html 301 /project-deploy-guide/charm-deployment-guide/latest/manila-ganesha.html
/project-deploy-guide/charm-deployment-guide/latest/app-managing-power-events.html 301 /charm-guide/latest/howto/managing-power-events.html
/project-deploy-guide/charm-deployment-guide/latest/deferred-events.html 301 /charm-guide/latest/howto/deferred-events.html