From eb8b8afbb3e85841581f39f464f8700dd4963e4f Mon Sep 17 00:00:00 2001 From: John Fulton Date: Fri, 13 Mar 2020 19:03:28 -0400 Subject: [PATCH] Introduce Multibackend Storage Documenation Change-Id: Ib93b1ad7ef55d5ee41a6c26e200589950c1858e6 --- .../features/distributed_compute_node.rst | 111 +- .../distributed_multibackend_storage.rst | 1066 +++++++++++++++++ deploy-guide/source/features/index.rst | 1 + 3 files changed, 1151 insertions(+), 27 deletions(-) create mode 100644 deploy-guide/source/features/distributed_multibackend_storage.rst diff --git a/deploy-guide/source/features/distributed_compute_node.rst b/deploy-guide/source/features/distributed_compute_node.rst index df17e1d9..08b5fc7c 100644 --- a/deploy-guide/source/features/distributed_compute_node.rst +++ b/deploy-guide/source/features/distributed_compute_node.rst @@ -152,9 +152,16 @@ provisioning networks though. Key benefits IPv6 may provide for DCN are: Storage recommendations ^^^^^^^^^^^^^^^^^^^^^^^ -DCN with only ephemeral storage is available for Nova Compute services. -That is up to the edge cloud applications to be designed to provide enhanced -data availability, locality awareness and/or replication mechanisms. +Prior to Ussuri, DCN was only available with ephemeral storage for +Nova Compute services. Enhanced data availability, locality awareness +and/or replication mechanisms had to be addressed only on the edge +cloud application layer. + +In Ussuri and newer, |project| is able to deploy +:doc:`distributed_multibackend_storage` which may be combined with the +example in this document to add distributed image management and +persistent storage at the edge. + Deploying DCN ------------- @@ -203,10 +210,12 @@ experience for distributed compute nodes. Configure the Swift temporary URL key _____________________________________ -Images are served by Swift and are made available to nodes using an HTTP URL, -over the ``direct`` deploy interface. To allow Swift to create temporary URLs, it -must be configured with a temporary URL key. The key value is used for -cryptographic signing and verification of the temporary URLs created by Swift. +Images used for overcloud deployment are served by Swift and are made +available to nodes using an HTTP URL, over the ``direct`` deploy +interface. To allow Swift to create temporary URLs, it must be +configured with a temporary URL key. The key value is used for +cryptographic signing and verification of the temporary URLs created +by Swift. The following commands demonstrate how to configure the setting. In this example, ``uuidgen`` is used to randomly create a key value. You should choose a @@ -281,14 +290,19 @@ purposes of this documentation, this stack is referred to as the No specific changes or deployment configuration is necessary to deploy just the control plane services. -It's recommended that the ``control-plane`` stack contain only control plane -services, and no compute or storage services. If compute and storage services -are desired at the same geographical site as the ``control-plane`` stack then -they should be deployed in a separate stack just like a edge site specific stack, -but using nodes at the same geographical location. In such a scenario, the -stack with compute and storage services could be called ``central`` and -deploying it in a separate stack allows for separation of management and -operations. +It's possible to configure the ``control-plane`` stack to contain +only control plane services, and no compute or storage services. 
If +compute and storage services are desired at the same geographical site +as the ``control-plane`` stack, then they may be deployed in a +separate stack just like a edge site specific stack, but using nodes +at the same geographical location. In such a scenario, the stack with +compute and storage services could be called ``central`` and deploying +it in a separate stack allows for separation of management and +operations. This scenario may also be implemented with an "external" +Ceph cluster for storage as described in :doc:`ceph_external`. If +however, Glance needs to be configured with multiple stores so that +images may be served to remote sites one ``control-plane`` stack may +be used as described in :doc:`distributed_multibackend_storage`. It is suggested to give each stack an explicit name. For example, the control plane stack could be called ``control-plane`` and set by passing ``--stack @@ -440,6 +454,11 @@ definition would look like: ``name_lower`` property such as ``InternalApiCompute0`` and ``internal_api_compute_0``. +If separate storage and storage management networks are used with +multiple Ceph clusters and Glance servers per site, then a routed +storage network should be shared between sites for image transfer. +The storage management network, which Ceph uses to keep OSDs balanced, +does not need to be shared between sites. DCN related roles _________________ @@ -449,8 +468,8 @@ configuration and desired services to be deployed at each distributed site. The default compute role at ``roles/Compute.yaml`` can be used if that is sufficient for the use case. -Two additional roles are also available for deploying compute nodes with -co-located persistent storage at the distributed site. +Three additional roles are also available for deploying compute nodes +with co-located persistent storage at the distributed site. The first is ``roles/DistributedCompute.yaml``. This role includes the default compute services, but also includes the cinder volume service. The cinder @@ -459,10 +478,29 @@ distributed site for persistent storage. The second is ``roles/DistributedComputeHCI.yaml``. This role includes the default computes services, the cinder volume service, and also includes the -Ceph services for deploying a Ceph cluster at the distributed site. Using this -role, both the compute services and ceph services are deployed on the same -nodes, enabling a hyperconverged infrastructure for persistent storage at the -distributed site. +Ceph Mon, Mgr, and OSD services for deploying a Ceph cluster at the +distributed site. Using this role, both the compute services and Ceph +services are deployed on the same nodes, enabling a hyperconverged +infrastructure for persistent storage at the distributed site. When +Ceph is used, there must be a minimum of three `DistributedComputeHCI` +nodes. This role also includes a Glance server, provided by the +`GlanceApiEdge` service with in the `DistributedComputeHCI` role. The +Nova compute service of each node in the `DistributedComputeHCI` role +is configured by default to use its local Glance server. + +The third is ``roles/DistributedComputeHCIScaleUp.yaml``. This role is +like the DistributedComputeHCI role but does not run the Ceph Mon and +Mgr service. It offers the Ceph OSD service however, so it may be used +to scale up storage and compute services at each DCN site after the +minimum of three DistributedComputeHCI nodes have been deployed. 
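+For example, once a site has its minimum of three
+`DistributedComputeHCI` nodes, additional capacity might be added by
+raising the role counts in an environment file passed to that site's
+stack. The following is only an illustrative sketch: the file path and
+counts are arbitrary, and the `DistributedComputeHCIScaleUpCount`
+parameter name is assumed from the usual ``<RoleName>Count``
+convention for custom roles.
+
+.. code-block:: bash
+
+   # Hypothetical scale-up: keep three DistributedComputeHCI nodes and
+   # add two DistributedComputeHCIScaleUp nodes to the same site stack.
+   cat > ~/edge-0/scale.yaml <<EOF
+   parameter_defaults:
+     DistributedComputeHCICount: 3
+     DistributedComputeHCIScaleUpCount: 2
+   EOF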
There +is no `GlanceApiEdge` service in the `DistributedComputeHCIScaleUp` +role but in its place the Nova compute service of the role is +configured by default to connect to a local `HaProxyEdge` service +which in turn proxies image requests to the Glance servers running on +the `DistributedComputeHCI` roles. + +For information on configuring the distributed Glance services see +:doc:`distributed_multibackend_storage`. Configuring Availability Zones (AZ) ___________________________________ @@ -544,8 +582,9 @@ This example shows an environment file setting the AZ for the backend in the Deploying Ceph with HCI ####################### -When deploying Ceph while using the ``DistributedComputeHCI`` roles, the -environment file to enable ceph should be used:: +When deploying Ceph while using the ``DistributedComputeHCI`` and +``DistributedComputeHCIScaleUp`` roles, the following environment file +should be used to enable Ceph:: environments/ceph-ansible/ceph-ansible.yaml @@ -638,6 +677,11 @@ templates directory at ``roles/Controller.yaml``. parameter_defaults: ControllerCount: 1 +.. warning:: + Only one `Controller` node is deployed for example purposes but + three are recommended in order to have a highly available control + plane. + ``network_data.yaml`` contains the default contents from the templates directory. @@ -646,7 +690,6 @@ directory. parameter_defaults: CinderStorageAvailabilityZone: 'central' NovaComputeAvailabilityZone: 'central' - NovaAZAttach: false When the deployment completes, a single stack is deployed:: @@ -776,13 +819,19 @@ templates directory at ``roles/DistributedComputeHCI.yaml``. parameter_defaults: DistributedComputeHCICount: 1 +.. warning:: + Only one `DistributedComputeHCI` is deployed for example + purposes but three are recommended in order to have a highly + available Ceph cluster. If more than three such nodes of that role + are necessary for additional compute and storage resources, then + use additional nodes from the `DistributedComputeHCIScaleOut` role. + ``az.yaml`` contains the same content as was used in the ``control-plane`` stack:: parameter_defaults: CinderStorageAvailabilityZone: 'central' NovaComputeAvailabilityZone: 'central' - NovaAZAttach: false The ``control-plane-export.yaml`` file was generated from the command from example_export_dcn_. @@ -911,12 +960,18 @@ same as was used in the ``central`` stack. parameter_defaults: DistributedComputeHCICount: 1 +.. warning:: + Only one `DistributedComputeHCI` is deployed for example + purposes but three are recommended in order to have a highly + available Ceph cluster. If more than three such nodes of that role + are necessary for additional compute and storage resources, then + use additional nodes from the `DistributedComputeHCIScaleOut` role. + ``az.yaml`` contains specific content for the ``edge-0`` stack:: parameter_defaults: CinderStorageAvailabilityZone: 'edge-0' NovaComputeAvailabilityZone: 'edge-0' - NovaAZAttach: false The ``CinderStorageAvailabilityZone`` and ``NovaDefaultAvailabilityZone`` parameters are set to ``edge-0`` to match the stack name. 
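+To confirm the effect of these availability zone settings once the
+``edge-0`` stack has been deployed, the zone assignments may be
+checked from the undercloud using the overcloud credentials. The
+check below is illustrative only; the ``control-planerc`` file name
+is an assumption based on the stack name used in this example.
+
+.. code-block:: bash
+
+   source ~/control-planerc
+   # nova-compute hosts from the edge-0 stack should report zone edge-0
+   openstack compute service list --service nova-compute -c Host -c Zone
+   # the cinder-volume service deployed at edge-0 should also be in edge-0
+   openstack volume service list --service cinder-volume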
@@ -931,7 +986,6 @@ different name with ``--stack edge-1`` and ``az.yaml`` contains::
   parameter_defaults:
     CinderStorageAvailabilityZone: 'edge-1'
     NovaComputeAvailabilityZone: 'edge-1'
-    NovaAZAttach: false
 
 When the deployment completes, there are now 4 stacks are deployed::
 
@@ -1013,6 +1067,9 @@ and ``edge-1`` are created and available::
    +------------------+-------------------------+---------+---------+-------+----------------------------+
    (control-plane) [centos@scale ~]$
 
+For information on extending this example with distributed image
+management for image sharing between DCN site Ceph clusters, see
+:doc:`distributed_multibackend_storage`.
 
 Updating DCN
 ------------
diff --git a/deploy-guide/source/features/distributed_multibackend_storage.rst b/deploy-guide/source/features/distributed_multibackend_storage.rst
new file mode 100644
index 00000000..ed47b836
--- /dev/null
+++ b/deploy-guide/source/features/distributed_multibackend_storage.rst
@@ -0,0 +1,1066 @@
+Distributed Multibackend Storage
+================================
+
+In Ussuri and newer, |project| is able to extend
+:doc:`distributed_compute_node` to include distributed image
+management and persistent storage with the benefits of using
+OpenStack and Ceph.
+
+Features
+--------
+
+This Distributed Multibackend Storage design extends the architecture
+described in :doc:`distributed_compute_node` to support the following
+workflow:
+
+- Upload an image to the Central site, and any additional DCN sites
+  with storage, concurrently using one command like `glance
+  image-create-via-import --stores central,dcn1,dcn3`.
+- Move a copy of the same image to additional DCN sites when needed
+  using a command like `glance image-import <IMAGE-ID> --stores
+  dcn2,dcn4 --import-method copy-image`.
+- The image's unique ID will be shared consistently across sites.
+- The image may be copy-on-write booted on any DCN site as the RBD
+  pools for Glance and Nova will use the same local Ceph cluster.
+- If the Glance server at each DCN site was configured with write
+  access to the Central Ceph cluster as an additional store, then an
+  image generated from making a snapshot of an instance running at a
+  DCN site may be copied back to the central site and then copied to
+  additional DCN sites.
+- The same Ceph cluster per site may also be used by Cinder as an RBD
+  store to offer local volumes in active/active mode.
+
+In the above workflow the only time RBD traffic crosses the WAN is
+when an image is imported or copied between sites. Otherwise all RBD
+traffic is local to each site for fast COW boots, and performant IO
+to the local Cinder and Nova Ceph pools.
+
+Architecture
+------------
+
+The architecture to support the above features has the following
+properties:
+
+- A separate Ceph cluster at each availability zone or geographic
+  location
+- Glance servers at each availability zone or geographic location
+- The containers implementing the Ceph clusters may be collocated on
+  the same hardware providing compute services, i.e. the compute nodes
+  may be hyper-converged, though it is not necessary that they be
+  hyper-converged
+- It is not necessary to deploy Glance and Ceph at each DCN site, if
+  storage services are not needed at that DCN site
+
+In this scenario the Glance service at the central site is configured
+with multiple stores such that:
+
+- The central Glance server's default store is the central Ceph
+  cluster using the RBD driver
+- The central Glance server has additional RBD stores; one per DCN
+  site running Ceph
+
+Similarly the Glance server at each DCN site is configured with
+multiple stores such that:
+
+- Each DCN Glance server's default store is the DCN Ceph
+  cluster that is in the same geographic location.
+- Each DCN Glance server is configured with one additional store which
+  is the Central RBD Ceph cluster.
+
+Though there are Glance services distributed to multiple sites, the
+Glance client for overcloud users should use the public Glance
+endpoints at the central site. These endpoints may be determined by
+querying the Keystone service, which only runs at the central site,
+with `openstack endpoint list`. Ideally all images should reside in
+the central Glance and be copied to DCN sites before instances of
+those images are booted on DCN sites. If an image is not copied to a
+DCN site before it is booted, then the image will be streamed to the
+DCN site and then the image will boot as an instance. This happens
+because Glance at the DCN site has access to the images store at the
+central Ceph cluster. Though the booting of the image will take time
+because it has not been copied in advance, this is still preferable
+to failing to boot the image.
+
+Stacks
+------
+
+In the example deployment three stacks are deployed:
+
+control-plane
+  All control plane services including Glance. Includes a Ceph
+  cluster named central which is hyperconverged with compute nodes
+  and runs Cinder in active/passive mode managed by pacemaker.
+dcn0
+  Runs Compute, Glance and Ceph services. The Cinder volume service
+  is configured in active/active mode and not managed by pacemaker.
+  The Compute and Cinder services are deployed in a separate
+  availability zone and may also be in a separate geographic
+  location.
+dcn1
+  Deploys the same services as dcn0 but in a different availability
+  zone and also in a separate geographic location.
+
+Note how the above differs from the :doc:`distributed_compute_node`
+example which splits services at the primary location into two stacks
+called `control-plane` and `central`. This example combines the two
+into one stack.
+
+During the deployment steps all templates used to deploy the
+control-plane stack will be kept on the undercloud in
+`/home/stack/control-plane`, all templates used to deploy the dcn0
+stack will be kept on the undercloud in `/home/stack/dcn0`, and dcn1
+will follow the same pattern as dcn0. The sites dcn2, dcn3 and so on
+may be created, based on need, by following the same pattern.
+
+Ceph Deployment Types
+---------------------
+
+|project| supports two types of Ceph deployments. An "internal" Ceph
+deployment is one where a Ceph cluster is deployed as part of the
+overcloud as described in :doc:`ceph_config`. An "external" Ceph
+deployment is one where a Ceph cluster already exists and an overcloud
+is configured to be a client of that Ceph cluster as described in
+:doc:`ceph_external`. Ceph external deployments have special meaning
+to |project| in the following ways:
+
+- The Ceph cluster was not deployed by |project|
+- The OpenStack Ceph client is configured by |project|
+
+The deployment example in this document uses the "external" term to
+focus on the second of the above because the client configuration is
+important.
This example differs from the first of the above because +Ceph was deployed by |project|, however relative to other stacks, it +is an external Ceph cluster because, for the stacks which configure +the Ceph clients, it doesn't matter that the Ceph server came from a +different stack. In this sense, the example in this document uses both +types of deployments as described in the following sequence: + +- The central site deploys an internal Ceph cluster called central + with an additional cephx keyring which may be used to access the + central ceph pools. +- The dcn0 site deploys an internal Ceph cluster called dcn0 with an + additional cephx keyring which may be used to access the dcn0 Ceph + pools. During the same deployment the dcn0 site is also configured + with the cephx keyring from the previous step so that it is also a + client of the external Ceph cluster, relative to dcn0, called + central from the previous step. The `GlanceMultistoreConfig` + parameter is also used during this step so that Glance will use the + dcn0 Ceph cluster as an RBD store by default but it will also be + configured to use the central Ceph cluster as an additional RBD + backend. +- The dcn1 site is deployed the same way as the dcn0 site and the + pattern may be continued for as many DCN sites as necessary. +- The central site is then updated so that in addition to having an + internal Ceph deployment for the cluster called central, it is also + configured with multiple external ceph clusters, relative to the + central site, for each DCN site. This is accomplished by passing + the cephx keys which were created during each DCN site deployment + as input to the stack update. During the stack update the + `GlanceMultistoreConfig` parameter is added so that Glance will + continue to use the central Ceph cluster as an RBD store by + default but it will also be configured to use each DCN Ceph cluster + as an additional RBD backend. + +The above sequence is possible by using the `CephExtraKeys` parameter +as described in :doc:`ceph_config` and the `CephExternalMultiConfig` +parameter described in :doc:`ceph_external`. + +Deployment Steps +---------------- + +This section shows the deployment commands and associated environment +files of an example DCN deployment with distributed image +management. It is based on the :doc:`distributed_compute_node` +example and does not cover redundant aspects of it such as networking. + +Create extra Ceph key +^^^^^^^^^^^^^^^^^^^^^ + +Create ``/home/stack/control-plane/ceph_keys.yaml`` with contents like +the following:: + + parameter_defaults: + CephExtraKeys: + - name: "client.external" + caps: + mgr: "allow *" + mon: "profile rbd" + osd: "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=images" + key: "AQD29WteAAAAABAAphgOjFD7nyjdYe8Lz0mQ5Q==" + mode: "0600" + +The key should be considered sensitive and may be randomly generated +with the following command:: + + python3 -c 'import os,struct,time,base64; key = os.urandom(16); header = struct.pack(" | + grep disk_format` after the image is uploaded. + +Set an environment variable to the ID of the newly created image: + +.. code-block:: bash + + ID=$(openstack image show cirros -c id -f value) + +Copy the image from the default store to the dcn1 store: + +.. code-block:: bash + + glance image-import $ID --stores dcn1 --import-method copy-image + +Confirm a copy of the image is in each store by looking at the image properties: + +.. 
code-block:: bash + + $ openstack image show $ID | grep properties + | properties | direct_url='rbd://d25504ce-459f-432d-b6fa-79854d786f2b/images/8083c7e7-32d8-4f7a-b1da-0ed7884f1076/snap', locations='[{u'url': u'rbd://d25504ce-459f-432d-b6fa-79854d786f2b/images/8083c7e7-32d8-4f7a-b1da-0ed7884f1076/snap', u'metadata': {u'store': u'default_backend'}}, {u'url': u'rbd://0c10d6b5-a455-4c4d-bd53-8f2b9357c3c7/images/8083c7e7-32d8-4f7a-b1da-0ed7884f1076/snap', u'metadata': {u'store': u'dcn0'}}, {u'url': u'rbd://8649d6c3-dcb3-4aae-8c19-8c2fe5a853ac/images/8083c7e7-32d8-4f7a-b1da-0ed7884f1076/snap', u'metadata': {u'store': u'dcn1'}}]', os_glance_failed_import='', os_glance_importing_to_stores='', os_hash_algo='sha512', os_hash_value='b795f047a1b10ba0b7c95b43b2a481a59289dc4cf2e49845e60b194a911819d3ada03767bbba4143b44c93fd7f66c96c5a621e28dff51d1196dae64974ce240e', os_hidden='False', stores='default_backend,dcn0,dcn1' | + +The `stores` key, which is the last item in the properties map is set +to 'default_backend,dcn0,dcn1'. + +On further inspection the `direct_url` key is set to:: + + rbd://d25504ce-459f-432d-b6fa-79854d786f2b/images/8083c7e7-32d8-4f7a-b1da-0ed7884f1076/snap + +Which contains 'd25504ce-459f-432d-b6fa-79854d786f2b', the FSID of the +central Ceph cluster, the name of the pool, 'images', followed by +'8083c7e7-32d8-4f7a-b1da-0ed7884f1076', the Glance image ID and name +of the Ceph object. + +The properties map also contains `locations` which is set to similar +RBD paths for the dcn0 and dcn1 cluster with their respective FSIDs +and pool names. Note that the Glance image ID is consistent in all RBD +paths. + +If the image were deleted with `glance image-delete`, then the image +would be removed from all three RBD stores to ensure consistency. +However, if the glanceclient is >3.1.0, then an image may be deleted +from a specific store only by using a syntax like `glance +stores-delete --store `. + +Optionally, run the following on any Controller node from the +control-plane stack: + +.. code-block:: bash + + sudo podman exec ceph-mon-$(hostname) rbd --cluster central -p images ls -l + +Run the following on any DistributedComputeHCI node from the dcn0 stack: + +.. code-block:: bash + + sudo podman exec ceph-mon-$(hostname) rbd --id external --keyring /etc/ceph/dcn0.client.external.keyring --conf /etc/ceph/dcn0.conf -p images ls -l + +Run the following on any DistributedComputeHCI node from the dcn1 stack: + +.. code-block:: bash + + sudo podman exec ceph-mon-$(hostname) rbd --id external --keyring /etc/ceph/dcn1.client.external.keyring --conf /etc/ceph/dcn1.conf -p images ls -l + +The results in all cases should produce output like the following:: + + NAME SIZE PARENT FMT PROT LOCK + 8083c7e7-32d8-4f7a-b1da-0ed7884f1076 44 MiB 2 + 8083c7e7-32d8-4f7a-b1da-0ed7884f1076@snap 44 MiB 2 yes + +When an ephemeral instance is COW booted from the image a similar +command in the vms pool should show the same parent image: + +.. code-block:: bash + + $ sudo podman exec ceph-mon-$(hostname) rbd --id external --keyring /etc/ceph/dcn1.client.external.keyring --conf /etc/ceph/dcn1.conf -p vms ls -l + NAME SIZE PARENT FMT PROT LOCK + 2b431c77-93b8-4edf-88d9-1fd518d987c2_disk 1 GiB images/8083c7e7-32d8-4f7a-b1da-0ed7884f1076@snap 2 excl + $ + + +Confirm image-based volumes may be booted as DCN instances +---------------------------------------------------------- + +An instance with a persistent root volume may be created on a DCN +site by using the active/active Cinder service at the DCN site. 
+Assuming the Glance image created in the previous step is available,
+identify the image ID and pass it to `openstack volume create` with
+the `--image` option to create a volume based on that image.
+
+.. code-block:: bash
+
+   IMG_ID=$(openstack image show cirros -c id -f value)
+   openstack volume create --size 8 --availability-zone dcn0 pet-volume-dcn0 --image $IMG_ID
+
+Once the volume is created, identify its volume ID and pass it to
+`openstack server create` with the `--volume` option. This example
+assumes a flavor, key, security group and network have already been
+created.
+
+.. code-block:: bash
+
+   VOL_ID=$(openstack volume show -f value -c id pet-volume-dcn0)
+   openstack server create --flavor tiny --key-name dcn0-key --network dcn0-network --security-group basic --availability-zone dcn0 --volume $VOL_ID pet-server-dcn0
+
+It is also possible to issue one command to have Nova ask Cinder
+to create the volume before it boots the instance by passing the
+`--image` and `--boot-from-volume` options as shown in the
+example below:
+
+.. code-block:: bash
+
+   openstack server create --flavor tiny --image $IMG_ID --key-name dcn0-key --network dcn0-network --security-group basic --availability-zone dcn0 --boot-from-volume 4 pet-server-dcn0
+
+The above will only work if the Nova `cross_az_attach` setting
+of the relevant compute node is set to `false`. This is automatically
+configured by deploying with `environments/dcn-hci.yaml`. If the
+`cross_az_attach` setting is `true` (the default), then the volume
+will be created from the image not in the dcn0 site, but on the
+default central site (as verified with the `rbd` command on the
+central Ceph cluster) and then the instance will fail to boot on the
+dcn0 site. Even if `cross_az_attach` is `true`, it's still possible to
+create an instance from a volume by using `openstack volume create`
+and then `openstack server create` as shown earlier.
+
+Optionally, after creating the volume from the image at the dcn0
+site and then creating an instance from the existing volume, verify
+that the volume is based on the image by running the `rbd` command
+within a ceph-mon container on the dcn0 site to list the volumes pool.
+
+.. code-block:: bash
+
+   $ sudo podman exec ceph-mon-$HOSTNAME rbd --cluster dcn0 -p volumes ls -l
+   NAME                                         SIZE  PARENT                                              FMT PROT LOCK
+   volume-28c6fc32-047b-4306-ad2d-de2be02716b7 8 GiB images/8083c7e7-32d8-4f7a-b1da-0ed7884f1076@snap      2      excl
+   $
+
+The following commands may be used to create a Cinder snapshot of the
+root volume of the instance.
+
+.. code-block:: bash
+
+   openstack server stop pet-server-dcn0
+   openstack volume snapshot create pet-volume-dcn0-snap --volume $VOL_ID --force
+   openstack server start pet-server-dcn0
+
+In the above example the server is stopped to quiesce data for a
+clean snapshot. The `--force` option is necessary when creating the
+snapshot because the volume status will remain "in-use" even when the
+server is shut down. When the snapshot is completed, start the
+server. Listing the contents of the volumes pool on the dcn0 Ceph
+cluster should show the snapshot which was created and how it is
+connected to the original volume and original image.
+
+.. code-block:: bash
+
+   $ sudo podman exec ceph-mon-$HOSTNAME rbd --cluster dcn0 -p volumes ls -l
+   NAME                                                                                       SIZE  PARENT                                              FMT PROT LOCK
+   volume-28c6fc32-047b-4306-ad2d-de2be02716b7                                               8 GiB images/8083c7e7-32d8-4f7a-b1da-0ed7884f1076@snap      2      excl
+   volume-28c6fc32-047b-4306-ad2d-de2be02716b7@snapshot-a1ca8602-6819-45b4-a228-b4cd3e5adf60 8 GiB images/8083c7e7-32d8-4f7a-b1da-0ed7884f1076@snap      2      yes
+   $
+
+Confirm image snapshots may be created and copied between sites
+----------------------------------------------------------------
+
+A new image called "cirros-snapshot" may be created at the dcn0 site
+from the instance created in the previous section by running the
+following commands.
+
+.. code-block:: bash
+
+   NOVA_ID=$(openstack server show pet-server-dcn0 -f value -c id)
+   openstack server stop $NOVA_ID
+   openstack server image create --name cirros-snapshot $NOVA_ID
+   openstack server start $NOVA_ID
+
+In the above example the instance is stopped to quiesce data for a
+clean snapshot image and is then restarted after the image has been
+created. The output of `openstack image show $IMAGE_ID -f value -c
+properties` should contain a JSON data structure whose `stores` key
+should only contain "dcn0" as that is the only store which has a copy
+of the new cirros-snapshot image.
+
+The new image may then be copied from the dcn0 site to the central
+site, which is the default backend for Glance.
+
+.. code-block:: bash
+
+   IMAGE_ID=$(openstack image show cirros-snapshot -f value -c id)
+   glance image-import $IMAGE_ID --stores default_backend --import-method copy-image
+
+After the above is run, the output of `openstack image show
+$IMAGE_ID -f value -c properties` should contain a JSON data structure
+whose `stores` key should look like "dcn0,default_backend" as the
+image will also exist in the "default_backend" store, which keeps its
+data on the central Ceph cluster. The same image at the Central site
+may then be copied to other DCN sites, booted in the vms or volumes
+pool, and snapshotted so that the same process may repeat.
diff --git a/deploy-guide/source/features/index.rst b/deploy-guide/source/features/index.rst
index 0b25ff24..d817c590 100644
--- a/deploy-guide/source/features/index.rst
+++ b/deploy-guide/source/features/index.rst
@@ -17,6 +17,7 @@ Documentation on additional features for |project|.
    designate
    disable_telemetry
    distributed_compute_node
+   distributed_multibackend_storage
    extra_config
    high_availability
    instance_ha