Merge "Update Standalone Containers based Deployment with Bluestore"

Zuul 2019-05-14 17:51:49 +00:00 committed by Gerrit Code Review
commit 94cf9da0cc
1 changed file with 21 additions and 11 deletions


@@ -56,12 +56,17 @@ Deploying a Standalone OpenStack node
 .. admonition:: Ceph
    :class: ceph

-   Create a block device to be used as an OSD.
+   Create a block device with logical volumes to be used as an OSD.

    .. code-block:: bash

        sudo dd if=/dev/zero of=/var/lib/ceph-osd.img bs=1 count=0 seek=7G
        sudo losetup /dev/loop3 /var/lib/ceph-osd.img
+       sudo pvcreate /dev/loop3
+       sudo vgcreate vg2 /dev/loop3
+       sudo lvcreate -n data-lv2 -l 597 vg2
+       sudo lvcreate -n db-lv2 -l 597 vg2
+       sudo lvcreate -n wal-lv2 -l 597 vg2

    Create a systemd service that restores the device on startup.
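
The three logical volumes correspond to the BlueStore data, DB and WAL devices referenced by the ``lvm_volumes`` configuration in the second hunk below. Because neither the loop device nor the volume group on it survives a reboot, the guide's next step (shown as context above) creates a systemd service to restore them. As a rough illustration only, such a oneshot unit could look like the sketch below; the unit name ``ceph-osd-losetup.service``, the explicit ``vgchange`` activation and the install target are assumptions here, not part of this change, and the service definition the guide itself ships should take precedence.

.. code-block:: bash

    # Illustrative sketch only: unit name and contents are assumptions
    cat <<EOF | sudo tee /etc/systemd/system/ceph-osd-losetup.service
    [Unit]
    Description=Restore loop device and volume group for the Ceph OSD
    After=local-fs.target

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    # Re-attach the image file to /dev/loop3, then reactivate vg2 on it
    ExecStart=/sbin/losetup /dev/loop3 /var/lib/ceph-osd.img
    ExecStart=/sbin/vgchange -ay vg2

    [Install]
    WantedBy=multi-user.target
    EOF
    sudo systemctl enable ceph-osd-losetup.service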
@@ -210,24 +215,29 @@ Deploying a Standalone OpenStack node
    :class: ceph

-   Create an additional environment file which directs ceph-ansible
-   to use the block device and fetch directory backup created
-   earlier. In the same file pass additional Ceph parameters
-   for the OSD scenario and Ceph networks. Set the placement group
-   and replica count to values which fit the number of OSDs being
-   used, e.g. 32 and 1 are used for testing with only one OSD.
+   Create an additional environment file which directs ceph-ansible
+   to use the block device with logical volumes and fetch directory
+   backup created earlier. In the same file pass additional Ceph
+   parameters for the OSD scenario and Ceph networks. Set the
+   placement group and replica count to values which fit the number
+   of OSDs being used, e.g. 32 and 1 are used for testing with only
+   one OSD.

    .. code-block:: bash

        cat <<EOF > $HOME/ceph_parameters.yaml
        parameter_defaults:
          CephAnsibleDisksConfig:
-           devices:
-             - /dev/loop3
-           journal_size: 1024
+           osd_scenario: lvm
+           osd_objectstore: bluestore
+           lvm_volumes:
+             - data: data-lv2
+               data_vg: vg2
+               db: db-lv2
+               db_vg: vg2
+               wal: wal-lv2
+               wal_vg: vg2
          LocalCephAnsibleFetchDirectoryBackup: /root/ceph_ansible_fetch
          CephAnsibleExtraConfig:
-           osd_scenario: collocated
-           osd_objectstore: filestore
            cluster_network: 192.168.24.0/24
            public_network: 192.168.24.0/24
          CephPoolDefaultPgNum: 32
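
Assembled from the right-hand side of the hunk above, the updated heredoc writes an environment file along the following lines. The excerpt ends at ``CephPoolDefaultPgNum``, so any parameters the guide sets after that point are not reproduced, and the closing ``EOF`` is added here only to make the snippet self-contained.

.. code-block:: bash

    cat <<EOF > $HOME/ceph_parameters.yaml
    parameter_defaults:
      CephAnsibleDisksConfig:
        osd_scenario: lvm
        osd_objectstore: bluestore
        lvm_volumes:
          - data: data-lv2
            data_vg: vg2
            db: db-lv2
            db_vg: vg2
            wal: wal-lv2
            wal_vg: vg2
      LocalCephAnsibleFetchDirectoryBackup: /root/ceph_ansible_fetch
      CephAnsibleExtraConfig:
        cluster_network: 192.168.24.0/24
        public_network: 192.168.24.0/24
      CephPoolDefaultPgNum: 32
    EOF

As with the other environment files in the standalone guide, this one is then passed to the deploy command with an additional ``-e $HOME/ceph_parameters.yaml`` argument.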