Support openstack with ceph-backed ROOK deployment (r10,osdsR10)

Add rook-ceph back to openstack documentation.

Changes made in https://review.opendev.org/c/starlingx/docs/+/945526 were reverted.

Change-Id: Id9a4fe0c10164963ec5d68a03ef0a0881c4b641d
Signed-off-by: Elisamara Aoki Gonçalves <elisamaraaoki.goncalves@windriver.com>
This commit is contained in:
Elisamara Aoki Gonçalves
2025-06-06 13:05:46 +00:00
parent a7e2b01fc2
commit 9cd7725d97
4 changed files with 417 additions and 412 deletions


@@ -934,20 +934,13 @@ A persistent storage backend is required if your application requires |PVCs|.
The StarlingX OpenStack application **requires** |PVCs|.
.. only:: starlingx or platform
   There are two options for persistent storage backend: the host-based Ceph
   solution and the Rook container-based Ceph solution.

   .. note::

      Host-based Ceph will be deprecated and removed in an upcoming release.
      Adoption of Rook-Ceph is recommended for new deployments.
For host-based Ceph:
@@ -971,39 +964,38 @@ For host-based Ceph:
# List OSD storage devices
~(keystone_admin)$ system host-stor-list controller-0
.. only:: starlingx or platform
   For Rook-Ceph:

   .. note::

      Each deployment model enforces a different structure for the Rook Ceph
      cluster and its integration with the platform.
   #. Add Storage-Backend with Deployment Model.

      .. code-block:: none

         ~(keystone_admin)$ system storage-backend-add ceph-rook --deployment controller
         ~(keystone_admin)$ system storage-backend-list
         +--------------------------------------+-----------------+-----------+----------------------+----------+------------------+------------------------------------------------------+
         | uuid                                 | name            | backend   | state                | task     | services         | capabilities                                         |
         +--------------------------------------+-----------------+-----------+----------------------+----------+------------------+------------------------------------------------------+
         | 45e3fedf-c386-4b8b-8405-882038dd7d13 | ceph-rook-store | ceph-rook | configuring-with-app | uploaded | block,filesystem | deployment_model: controller replication: 2          |
         |                                      |                 |           |                      |          |                  | min_replication: 1                                   |
         +--------------------------------------+-----------------+-----------+----------------------+----------+------------------+------------------------------------------------------+
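The backend initially reports ``configuring-with-app``; it moves on once the rook-ceph application is applied. As a rough sketch of how a scripted check could read this table, the following parses captured ``system storage-backend-list`` output (the ``backend_state`` helper is hypothetical, not a StarlingX utility):

```python
# Hypothetical helper: pull the 'state' column for a named backend out of
# the ASCII table printed by "system storage-backend-list".
def backend_state(table: str, name: str) -> str:
    for line in table.splitlines():
        if not line.startswith("|"):
            continue  # skip the +----+ border rows
        cells = [c.strip() for c in line.split("|")[1:-1]]
        # column order: uuid | name | backend | state | ...
        if len(cells) >= 4 and cells[1] == name:
            return cells[3]
    raise ValueError(f"backend {name!r} not found")

# Abbreviated sample of the output shown above.
SAMPLE = """\
+------+-----------------+-----------+----------------------+
| uuid | name            | backend   | state                |
+------+-----------------+-----------+----------------------+
| 45e3 | ceph-rook-store | ceph-rook | configuring-with-app |
+------+-----------------+-----------+----------------------+"""

print(backend_state(SAMPLE, "ceph-rook-store"))  # configuring-with-app
```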
   #. Set up a ``controllerfs ceph-float`` filesystem.

      .. code-block:: none

         ~(keystone_admin)$ system controllerfs-add ceph-float=20
   #. Set up a ``host-fs ceph`` filesystem on controller-0.

      .. code-block:: none

         ~(keystone_admin)$ system host-fs-add controller-0 ceph=20
-------------------
@@ -1468,15 +1460,14 @@ For host-based Ceph:
# List OSD storage devices
~(keystone_admin)$ system host-stor-list controller-1
.. only:: starlingx or platform
   For Rook-Ceph:

   #. Set up a ``host-fs ceph`` filesystem on controller-1.

      .. code-block:: none

         ~(keystone_admin)$ system host-fs-add controller-1 ceph=20
-------------------
Unlock controller-1
@@ -1492,119 +1483,118 @@ Controller-1 will reboot in order to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host
machine.
.. only:: starlingx or platform
   -------------------------------------------------------------------
   If configuring Rook Ceph Storage Backend, configure the environment
   -------------------------------------------------------------------

   #. Check if the rook-ceph app is uploaded.

      .. code-block:: none

         ~(keystone_admin)$ source /etc/platform/openrc
         ~(keystone_admin)$ system application-list
         +--------------------------+-----------+-------------------------------------------+------------------+----------+-----------+
         | application              | version   | manifest name                             | manifest file    | status   | progress  |
         +--------------------------+-----------+-------------------------------------------+------------------+----------+-----------+
         | cert-manager             | 24.09-76  | cert-manager-fluxcd-manifests             | fluxcd-manifests | applied  | completed |
         | dell-storage             | 24.09-25  | dell-storage-fluxcd-manifests             | fluxcd-manifests | uploaded | completed |
         | deployment-manager       | 24.09-13  | deployment-manager-fluxcd-manifests       | fluxcd-manifests | applied  | completed |
         | nginx-ingress-controller | 24.09-57  | nginx-ingress-controller-fluxcd-manifests | fluxcd-manifests | applied  | completed |
         | oidc-auth-apps           | 24.09-53  | oidc-auth-apps-fluxcd-manifests           | fluxcd-manifests | uploaded | completed |
         | platform-integ-apps      | 24.09-138 | platform-integ-apps-fluxcd-manifests      | fluxcd-manifests | uploaded | completed |
         | rook-ceph                | 24.09-12  | rook-ceph-fluxcd-manifests                | fluxcd-manifests | uploaded | completed |
         +--------------------------+-----------+-------------------------------------------+------------------+----------+-----------+
   #. List all the disks.

      .. code-block:: none

         ~(keystone_admin)$ system host-disk-list controller-0
         +--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+
         | uuid                                 | device_node | device_num | device_type | size_gib | available_gib | rpm          | serial_id           | device_path                                |
         +--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+
         | 7ce699f0-12dd-4416-ae43-00d3877450f7 | /dev/sda    | 2048       | HDD         | 292.968  | 0.0           | Undetermined | VB0e18230e-6a8780e1 | /dev/disk/by-path/pci-0000:00:0d.0-ata-1.0 |
         | bfb83b6f-61e2-4f9f-a87d-ecae938b7e78 | /dev/sdb    | 2064       | HDD         | 9.765    | 9.765         | Undetermined | VB144f1510-14f089fd | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0 |
         | 937cfabc-8447-4dbd-8ca3-062a46953023 | /dev/sdc    | 2080       | HDD         | 9.765    | 9.761         | Undetermined | VB95057d1c-4ee605c2 | /dev/disk/by-path/pci-0000:00:0d.0-ata-3.0 |
         +--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+

         ~(keystone_admin)$ system host-disk-list controller-1
         +--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+
         | uuid                                 | device_node | device_num | device_type | size_gib | available_gib | rpm          | serial_id           | device_path                                |
         +--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+
         | 52c8e1b5-0551-4748-a7a0-27b9c028cf9d | /dev/sda    | 2048       | HDD         | 292.968  | 0.0           | Undetermined | VB9b565509-a2edaa2e | /dev/disk/by-path/pci-0000:00:0d.0-ata-1.0 |
         | 93020ce0-249e-4db3-b8c3-6c7e8f32713b | /dev/sdb    | 2064       | HDD         | 9.765    | 9.765         | Undetermined | VBa08ccbda-90190faa | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0 |
         | dc0ec403-67f8-40bf-ada0-6fcae3ed76da | /dev/sdc    | 2080       | HDD         | 9.765    | 9.761         | Undetermined | VB16244caf-ab36d36c | /dev/disk/by-path/pci-0000:00:0d.0-ata-3.0 |
         +--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+
   #. Choose empty disks and provide hostname and uuid to finish |OSD|
      configuration:

      .. code-block:: none

         ~(keystone_admin)$ system host-stor-add controller-0 osd bfb83b6f-61e2-4f9f-a87d-ecae938b7e78
         ~(keystone_admin)$ system host-stor-add controller-1 osd 93020ce0-249e-4db3-b8c3-6c7e8f32713b
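The selection rule applied in this step (a disk qualifies when its available space still equals its size, i.e. it carries no partitions) can be sketched as below; the ``empty_disks`` helper and the sample rows are illustrative, not part of the platform:

```python
# Illustrative only: given (device_node, size_gib, available_gib) rows taken
# from "system host-disk-list", keep disks that are still fully available,
# i.e. candidates for "system host-stor-add <host> osd <uuid>".
def empty_disks(rows):
    return [node for node, size, avail in rows if avail == size and size > 0]

# Values mirror the controller-0 table shown above.
controller_0 = [
    ("/dev/sda", 292.968, 0.0),  # root disk, fully used
    ("/dev/sdb", 9.765, 9.765),  # untouched -> OSD candidate
    ("/dev/sdc", 9.765, 9.761),  # partially used
]
print(empty_disks(controller_0))  # ['/dev/sdb']
```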
   #. Wait for the |OSD| pods to be ready.

      .. code-block:: none

         $ kubectl get pods -n rook-ceph
         NAME                                                     READY   STATUS      RESTARTS      AGE
         ceph-mgr-provision-w55rh                                 0/1     Completed   0             10m
         csi-cephfsplugin-8j7xz                                   2/2     Running     1 (11m ago)   12m
         csi-cephfsplugin-lmmg2                                   2/2     Running     0             12m
         csi-cephfsplugin-provisioner-5467c6c4f-mktqg             5/5     Running     0             12m
         csi-rbdplugin-8m8kd                                      2/2     Running     1 (11m ago)   12m
         csi-rbdplugin-provisioner-fd84899c-kpv4q                 5/5     Running     0             12m
         csi-rbdplugin-z92sk                                      2/2     Running     0             12m
         mon-float-post-install-sw8qb                             0/1     Completed   0             6m5s
         mon-float-pre-install-nfj5b                              0/1     Completed   0             6m40s
         rook-ceph-crashcollector-controller-0-589f5f774-sp6zf    1/1     Running     0             7m49s
         rook-ceph-crashcollector-controller-1-68d66b9bff-zwgp9   1/1     Running     0             7m36s
         rook-ceph-exporter-controller-0-5fd477bb8-jgsdk          1/1     Running     0             7m44s
         rook-ceph-exporter-controller-1-6f5d8695b9-ndksh         1/1     Running     0             7m32s
         rook-ceph-mds-kube-cephfs-a-5f584f4bc-tbk8q              2/2     Running     0             7m49s
         rook-ceph-mgr-a-6845774cb5-lgjjd                         3/3     Running     0             9m1s
         rook-ceph-mgr-b-7fccfdf64d-4pcmc                         3/3     Running     0             9m1s
         rook-ceph-mon-a-69fd4895c7-2lfz4                         2/2     Running     0             11m
         rook-ceph-mon-b-7fd8cbb997-f84ng                         2/2     Running     0             11m
         rook-ceph-mon-float-85c4cbb7f9-k7xwj                     2/2     Running     0             6m27s
         rook-ceph-operator-69b5674578-z456r                      1/1     Running     0             13m
         rook-ceph-osd-0-5f59b5bb7b-mkwrg                         2/2     Running     0             8m17s
         rook-ceph-osd-prepare-controller-0-rhjgx                 0/1     Completed   0             8m38s
         rook-ceph-provision-5glpc                                0/1     Completed   0             6m17s
         rook-ceph-tools-7dc9678ccb-nmwwc                         1/1     Running     0             12m
         stx-ceph-manager-664f8585d8-5lt8c                        1/1     Running     0             10m
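A scripted version of this wait might parse the pod listing as sketched below; in practice ``kubectl wait --for=condition=Ready`` is the usual native alternative. The ``all_settled`` helper is illustrative only:

```python
# Rough sketch: parse "kubectl get pods -n rook-ceph" text and report
# whether every pod is either Completed (one-shot jobs) or fully Running.
def all_settled(kubectl_output: str) -> bool:
    for line in kubectl_output.strip().splitlines()[1:]:  # skip header row
        fields = line.split()
        ready, status = fields[1], fields[2]
        if status == "Completed":
            continue  # finished jobs count as settled
        done, want = ready.split("/")
        if status != "Running" or done != want:
            return False
    return True

# Abbreviated sample of the listing shown above.
SAMPLE = """\
NAME                        READY  STATUS     RESTARTS  AGE
ceph-mgr-provision-w55rh    0/1    Completed  0         10m
rook-ceph-osd-0-mkwrg       2/2    Running    0         8m17s"""

print(all_settled(SAMPLE))  # True
```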
   #. Check the Ceph cluster health.

      .. code-block:: none

         $ ceph -s
           cluster:
             id:     c18dfe3a-9b72-46e4-bb6e-6984f131598f
             health: HEALTH_OK

           services:
             mon: 2 daemons, quorum a,b (age 9m)
             mgr: a(active, since 6m), standbys: b
             mds: 1/1 daemons up, 1 hot standby
             osd: 2 osds: 2 up (since 7m), 2 in (since 7m)

           data:
             volumes: 1/1 healthy
             pools:   4 pools, 113 pgs
             objects: 25 objects, 594 KiB
             usage:   72 MiB used, 19 GiB / 20 GiB avail
             pgs:     113 active+clean

           io:
             client:  1.2 KiB/s rd, 2 op/s rd, 0 op/s wr
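For automated health checks, the plain-text status can be parsed as sketched here (``ceph -s --format json`` is the easier machine-readable route). The ``cluster_health`` helper is illustrative only:

```python
# Minimal sketch: extract the "health:" field from "ceph -s" text output.
def cluster_health(status_text: str) -> str:
    for line in status_text.splitlines():
        line = line.strip()
        if line.startswith("health:"):
            return line.split(":", 1)[1].strip()
    raise ValueError("no health line found")

# Abbreviated sample of the output shown above.
SAMPLE = """\
  cluster:
    id:     c18dfe3a-9b72-46e4-bb6e-6984f131598f
    health: HEALTH_OK"""

print(cluster_health(SAMPLE))  # HEALTH_OK
```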
.. include:: /_includes/bootstrapping-and-deploying-starlingx.rest
.. _extend-dx-with-workers:


@@ -855,20 +855,14 @@ A persistent storage backend is required if your application requires
The StarlingX OpenStack application **requires** |PVCs|.
.. only:: starlingx or platform
   There are two options for persistent storage backend: the host-based Ceph
   solution and the Rook container-based Ceph solution.

   .. note::

      Host-based Ceph will be deprecated and removed in an upcoming release.
      Adoption of Rook-Ceph is recommended for new deployments.
For host-based Ceph:
@@ -892,86 +886,85 @@ For host-based Ceph:
# List OSD storage devices
~(keystone_admin)$ system host-stor-list controller-0
.. only:: starlingx or platform
   For Rook-Ceph:

   .. note::

      Each deployment model enforces a different structure for the Rook Ceph
      cluster and its integration with the platform.
   #. Check if the rook-ceph app is uploaded.

      .. code-block:: none

         $ source /etc/platform/openrc
         $ system application-list
         +--------------------------+-----------+-------------------------------------------+------------------+----------+-----------+
         | application              | version   | manifest name                             | manifest file    | status   | progress  |
         +--------------------------+-----------+-------------------------------------------+------------------+----------+-----------+
         | cert-manager             | 24.09-76  | cert-manager-fluxcd-manifests             | fluxcd-manifests | applied  | completed |
         | dell-storage             | 24.09-25  | dell-storage-fluxcd-manifests             | fluxcd-manifests | uploaded | completed |
         | deployment-manager       | 24.09-13  | deployment-manager-fluxcd-manifests       | fluxcd-manifests | applied  | completed |
         | nginx-ingress-controller | 24.09-57  | nginx-ingress-controller-fluxcd-manifests | fluxcd-manifests | applied  | completed |
         | oidc-auth-apps           | 24.09-53  | oidc-auth-apps-fluxcd-manifests           | fluxcd-manifests | uploaded | completed |
         | platform-integ-apps      | 24.09-138 | platform-integ-apps-fluxcd-manifests      | fluxcd-manifests | uploaded | completed |
         | rook-ceph                | 24.09-12  | rook-ceph-fluxcd-manifests                | fluxcd-manifests | uploaded | completed |
         +--------------------------+-----------+-------------------------------------------+------------------+----------+-----------+
   #. Add Storage-Backend with Deployment Model.

      There are three deployment models: Controller, Dedicated, and Open.
      For the simplex and duplex environments you can use the Controller and
      Open configurations.

      Controller (default)
         |OSDs| must be added only to hosts with the controller personality
         set. The replication factor is limited to a maximum of 2.

      Dedicated
         |OSDs| must be added only to hosts with the worker personality. The
         replication factor is limited to a maximum of 3. This model aligns
         with the existing bare-metal Ceph use of dedicated storage hosts in
         groups of 2 or 3.

      Open
         |OSDs| can be added to any host without limitations. The replication
         factor has no limitations.

      Application strategy for the controller deployment model:

      Simplex
         |OSDs|: Added to controller nodes.

         Replication factor: Default 1, maximum 2.

         MON, MGR, MDS: Configured based on the number of hosts where the
         ``host-fs ceph`` is available.

      .. code-block:: none

         $ system storage-backend-add ceph-rook --deployment controller --confirmed
         $ system storage-backend-list
         +--------------------------------------+-----------------+-----------+----------------------+----------+------------+-------------------------------------------+
         | uuid                                 | name            | backend   | state                | task     | services   | capabilities                              |
         +--------------------------------------+-----------------+-----------+----------------------+----------+------------+-------------------------------------------+
         | a2452e47-4b2b-4a3a-a8f0-fb749d92d9cd | ceph-rook-store | ceph-rook | configuring-with-app | uploaded | block,     | deployment_model: controller replication: |
         |                                      |                 |           |                      |          | filesystem | 1 min_replication: 1                      |
         +--------------------------------------+-----------------+-----------+----------------------+----------+------------+-------------------------------------------+
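The per-model constraints described above can be summarized as data. A small sketch, with values mirroring the text (the ``MODEL_LIMITS`` table and ``replication_ok`` helper are illustrative, not StarlingX code):

```python
# Deployment model limits as stated in the documentation text.
# "open" has no replication cap, encoded here as None. Illustrative only.
MODEL_LIMITS = {
    "controller": {"personality": "controller", "max_replication": 2},
    "dedicated":  {"personality": "worker",     "max_replication": 3},
    "open":       {"personality": "any",        "max_replication": None},
}

def replication_ok(model: str, factor: int) -> bool:
    """Check a requested replication factor against the model's cap."""
    cap = MODEL_LIMITS[model]["max_replication"]
    return cap is None or factor <= cap

print(replication_ok("controller", 2))  # True
print(replication_ok("controller", 3))  # False
```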
   #. Set up a ``host-fs ceph`` filesystem.

      .. code-block:: none

         $ system host-fs-add controller-0 ceph=20

.. incl-config-controller-0-openstack-specific-aio-simplex-end:
-------------------
Unlock controller-0
@@ -989,78 +982,77 @@ Controller-0 will reboot in order to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host
machine.
.. only:: starlingx or platform
   For Rook-Ceph:
   #. List all the disks.

      .. code-block:: none

         $ system host-disk-list controller-0
         +--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+
         | uuid                                 | device_node | device_num | device_type | size_gib | available_gib | rpm          | serial_id           | device_path                                |
         +--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+
         | 17408af3-e211-4e2b-8cf1-d2b6687476d5 | /dev/sda    | 2048       | HDD         | 292.968  | 0.0           | Undetermined | VBba52ec56-f68a9f2d | /dev/disk/by-path/pci-0000:00:0d.0-ata-1.0 |
         | cee99187-dac4-4a7b-8e58-f2d5bd48dcaf | /dev/sdb    | 2064       | HDD         | 9.765    | 0.0           | Undetermined | VBf96fa322-597194da | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0 |
         | 0c6435af-805a-4a62-ad8e-403bf916f5cf | /dev/sdc    | 2080       | HDD         | 9.765    | 9.761         | Undetermined | VBeefed5ad-b4815f0d | /dev/disk/by-path/pci-0000:00:0d.0-ata-3.0 |
         +--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+
   #. Choose empty disks and provide hostname and uuid to finish |OSD|
      configuration:

      .. code-block:: none

         $ system host-stor-add controller-0 osd cee99187-dac4-4a7b-8e58-f2d5bd48dcaf
   #. Wait for the |OSD| pods to be ready.

      .. code-block:: none

         $ kubectl get pods -n rook-ceph
         NAME                                                    READY   STATUS      RESTARTS   AGE
         ceph-mgr-provision-78xjk                                0/1     Completed   0          4m31s
         csi-cephfsplugin-572jc                                  2/2     Running     0          5m32s
         csi-cephfsplugin-provisioner-5467c6c4f-t8x8d            5/5     Running     0          5m28s
         csi-rbdplugin-2npb6                                     2/2     Running     0          5m32s
         csi-rbdplugin-provisioner-fd84899c-k8wcw                5/5     Running     0          5m32s
         rook-ceph-crashcollector-controller-0-589f5f774-d8sjz   1/1     Running     0          3m24s
         rook-ceph-exporter-controller-0-5fd477bb8-c7nxh         1/1     Running     0          3m21s
         rook-ceph-mds-kube-cephfs-a-cc647757-6p9j5              2/2     Running     0          3m25s
         rook-ceph-mds-kube-cephfs-b-5b5845ff59-xprbb            2/2     Running     0          3m19s
         rook-ceph-mgr-a-746fc4dd54-t8bcw                        2/2     Running     0          4m40s
         rook-ceph-mon-a-b6c95db97-f5fqq                         2/2     Running     0          4m56s
         rook-ceph-operator-69b5674578-27bn4                     1/1     Running     0          6m26s
         rook-ceph-osd-0-7f5cd957b8-ppb99                        2/2     Running     0          3m52s
         rook-ceph-osd-prepare-controller-0-vzq2d                0/1     Completed   0          4m18s
         rook-ceph-provision-zcs89                               0/1     Completed   0          101s
         rook-ceph-tools-7dc9678ccb-v2gps                        1/1     Running     0          6m2s
         stx-ceph-manager-664f8585d8-wzr4v                       1/1     Running     0          4m31s
   #. Check the Ceph cluster health.

      .. code-block:: none

         $ ceph -s
           cluster:
             id:     75c8f017-e7b8-4120-a9c1-06f38e1d1aa3
             health: HEALTH_OK

           services:
             mon: 1 daemons, quorum a (age 32m)
             mgr: a(active, since 30m)
             mds: 1/1 daemons up, 1 hot standby
             osd: 1 osds: 1 up (since 30m), 1 in (since 31m)

           data:
             volumes: 1/1 healthy
             pools:   4 pools, 113 pgs
             objects: 22 objects, 595 KiB
             usage:   27 MiB used, 9.7 GiB / 9.8 GiB avail
             pgs:     113 active+clean

           io:
             client:  852 B/s rd, 1 op/s rd, 0 op/s wr
.. incl-unlock-controller-0-aio-simplex-end:


@@ -758,11 +758,6 @@ host machine.
If configuring host based Ceph Storage Backend, Add Ceph OSDs to controllers
----------------------------------------------------------------------------
.. only:: starlingx and html
.. tabs::
@@ -834,205 +829,200 @@ Complete system configuration by reviewing procedures in:
- |index-sysconf-kub-78f0e1e9ca5a|
- |index-admintasks-kub-ebc55fefc368|
*******************************************************************
If configuring Rook Ceph Storage Backend, configure the environment
*******************************************************************

.. note::

    Each deployment model enforces a different structure for the Rook Ceph
    cluster and its integration with the platform.

#. Check if the rook-ceph app is uploaded.

   .. code-block:: none

      $ source /etc/platform/openrc
      $ system application-list
      +--------------------------+-----------+-------------------------------------------+------------------+----------+-----------+
      | application              | version   | manifest name                             | manifest file    | status   | progress  |
      +--------------------------+-----------+-------------------------------------------+------------------+----------+-----------+
      | cert-manager             | 24.09-76  | cert-manager-fluxcd-manifests             | fluxcd-manifests | applied  | completed |
      | dell-storage             | 24.09-25  | dell-storage-fluxcd-manifests             | fluxcd-manifests | uploaded | completed |
      | deployment-manager       | 24.09-13  | deployment-manager-fluxcd-manifests       | fluxcd-manifests | applied  | completed |
      | nginx-ingress-controller | 24.09-57  | nginx-ingress-controller-fluxcd-manifests | fluxcd-manifests | applied  | completed |
      | oidc-auth-apps           | 24.09-53  | oidc-auth-apps-fluxcd-manifests           | fluxcd-manifests | uploaded | completed |
      | platform-integ-apps      | 24.09-138 | platform-integ-apps-fluxcd-manifests      | fluxcd-manifests | uploaded | completed |
      | rook-ceph                | 24.09-12  | rook-ceph-fluxcd-manifests                | fluxcd-manifests | uploaded | completed |
      +--------------------------+-----------+-------------------------------------------+------------------+----------+-----------+
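The uploaded state can also be checked mechanically before proceeding. A minimal sketch, assuming the `system application-list` table layout shown above (the `app_status` helper is hypothetical, not part of the platform CLI):

```shell
#!/bin/sh
# Sketch only: extract the status column of the rook-ceph row from
# `system application-list` output. Assumes the table layout shown above.
app_status() {
    # $1: captured table output; prints the status column of the rook-ceph row
    printf '%s\n' "$1" | awk -F'|' '$2 ~ /^ rook-ceph +$/ {gsub(/ /, "", $6); print $6}'
}

# Offline sample row (from the listing above); live usage would be:
#   status=$(app_status "$(system application-list)")
sample='| rook-ceph                | 24.09-12  | rook-ceph-fluxcd-manifests                | fluxcd-manifests | uploaded | completed |'
status=$(app_status "$sample")
echo "rook-ceph status: $status"
```

If the status is not ``uploaded`` (or ``applied``), upload the app before adding the storage backend.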
#. Add Storage-Backend with Deployment Model.

   There are three deployment models: Controller, Dedicated, and Open.

   For the simplex and duplex environments you can use the Controller and
   Open configuration. This model aligns with the existing Bare-metal Ceph
   assignment of OSDs to controllers.

   Controller (default)
      |OSDs| must only be added to hosts with the controller personality set.

      Replication factor is limited to a maximum of 2.

      This model aligns with the existing Bare-metal Ceph assignment of OSDs
      to controllers.

   Dedicated
      |OSDs| must be added only to hosts with the worker personality.

      The replication factor is limited to a maximum of 3.

      This model aligns with existing Bare-metal Ceph use of dedicated
      storage hosts in groups of 2 or 3.

   Open
      |OSDs| can be added to any host without limitations.

      Replication factor has no limitations.

   Application strategies for the controller deployment model:

   Duplex, Duplex+ or Standard
      |OSDs|: Added to controller nodes.

      Replication Factor: Default 1, maximum 'Any'.

   .. code-block:: none

      $ system storage-backend-add ceph-rook --deployment open --confirmed
      $ system storage-backend-list
      +--------------------------------------+-----------------+-----------+----------------------+----------+------------------+---------------------------------------------+
      | uuid                                 | name            | backend   | state                | task     | services         | capabilities                                |
      +--------------------------------------+-----------------+-----------+----------------------+----------+------------------+---------------------------------------------+
      | 0dfef1f0-a5a4-4b20-a013-ef76e92bcd42 | ceph-rook-store | ceph-rook | configuring-with-app | uploaded | block,filesystem | deployment_model: open  replication: 2      |
      |                                      |                 |           |                      |          |                  | min_replication: 1                          |
      +--------------------------------------+-----------------+-----------+----------------------+----------+------------------+---------------------------------------------+
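The backend stays in the ``configuring-with-app`` state until the app is applied later in this procedure. A script can poll for that transition; this is a sketch only, with the column positions assumed from the table above and the `backend_state` helper being hypothetical:

```shell
#!/bin/sh
# Sketch only: extract the state column for the ceph-rook backend from
# `system storage-backend-list` output, so a script can poll until
# configuration completes.
backend_state() {
    printf '%s\n' "$1" | awk -F'|' '$4 ~ /ceph-rook/ {gsub(/ /, "", $5); print $5}'
}

# Offline sample row (from the listing above)
sample='| 0dfef1f0-a5a4-4b20-a013-ef76e92bcd42 | ceph-rook-store | ceph-rook | configuring-with-app | uploaded | block,filesystem | deployment_model: open  replication: 2 |'
state=$(backend_state "$sample")
echo "ceph-rook backend state: $state"
# Live polling sketch:
#   while [ "$(backend_state "$(system storage-backend-list)")" = "configuring-with-app" ]; do sleep 10; done
```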
#. Set up a ``host-fs ceph`` filesystem.

   .. code-block:: none

      $ system host-fs-add controller-0 ceph=20
      $ system host-fs-add controller-1 ceph=20
      $ system host-fs-add compute-0 ceph=20
#. List all the disks.

   .. code-block:: none

      $ system host-disk-list controller-0
      +--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+
      | uuid                                 | device_node | device_num | device_type | size_gib | available_gib | rpm          | serial_id           | device_path                                |
      +--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+
      | 7f2b9ff5-b6ee-4eaf-a7eb-cecd3ba438fd | /dev/sda    | 2048       | HDD         | 292.968  | 0.0           | Undetermined | VB3e6c5449-c7224b07 | /dev/disk/by-path/pci-0000:00:0d.0-ata-1.0 |
      | fdaf3f71-a2df-4b40-9e70-335900f953a3 | /dev/sdb    | 2064       | HDD         | 9.765    | 0.0           | Undetermined | VB323207f8-b6b9d531 | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0 |
      | ced60373-0dbc-4bc7-9d03-657c1f92164a | /dev/sdc    | 2080       | HDD         | 9.765    | 9.761         | Undetermined | VB49833b9d-a22a2455 | /dev/disk/by-path/pci-0000:00:0d.0-ata-3.0 |
      +--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+

      $ system host-disk-list controller-1
      +--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+
      | uuid                                 | device_node | device_num | device_type | size_gib | available_gib | rpm          | serial_id           | device_path                                |
      +--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+
      | 119533a5-bc66-47e0-a448-f0561871989e | /dev/sda    | 2048       | HDD         | 292.968  | 0.0           | Undetermined | VBb1b06a09-6137c63a | /dev/disk/by-path/pci-0000:00:0d.0-ata-1.0 |
      | 03cbb10e-fdc1-4d84-a0d8-6e02c22e3251 | /dev/sdb    | 2064       | HDD         | 9.765    | 0.0           | Undetermined | VB5fcf59a9-7c8a531b | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0 |
      | 7351013f-8280-4ff3-88bd-76e88f14fa2f | /dev/sdc    | 2080       | HDD         | 9.765    | 9.761         | Undetermined | VB0d1ce946-d0a172c4 | /dev/disk/by-path/pci-0000:00:0d.0-ata-3.0 |
      +--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+

      $ system host-disk-list compute-0
      +--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+
      | uuid                                 | device_node | device_num | device_type | size_gib | available_gib | rpm          | serial_id           | device_path                                |
      +--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+
      | 14245695-46df-43e8-b54b-9fb3c22ac359 | /dev/sda    | 2048       | HDD         | 292.     | 0.0           | Undetermined | VB8ac41a93-82275093 | /dev/disk/by-path/pci-0000:00:0d.0-ata-1.0 |
      | 765d8dff-e584-4064-9c95-6ea3aa25473c | /dev/sdb    | 2064       | HDD         | 9.765    | 0.0           | Undetermined | VB569d6dab-9ae3e6af | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0 |
      | c9b4ed65-da32-4770-b901-60b56fd68c35 | /dev/sdc    | 2080       | HDD         | 9.765    | 9.761         | Undetermined | VBf88762a8-9aa3315c | /dev/disk/by-path/pci-0000:00:0d.0-ata-3.0 |
      +--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+---------------------+--------------------------------------------+
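Empty disks can be picked out of this output programmatically. A sketch under one assumption, namely that a disk whose ``available_gib`` column is non-zero is unused (the `candidate_disks` helper is hypothetical):

```shell
#!/bin/sh
# Sketch only: print "uuid device_node" for each row of a
# `system host-disk-list` table whose available_gib (column 7) is non-zero.
candidate_disks() {
    printf '%s\n' "$1" | awk -F'|' 'NF > 7 && $7 + 0 > 0 {gsub(/ /, "", $2); gsub(/ /, "", $3); print $2, $3}'
}

# Offline sample row; live usage: candidate_disks "$(system host-disk-list controller-0)"
sample='| ced60373-0dbc-4bc7-9d03-657c1f92164a | /dev/sdc    | 2080       | HDD         | 9.765    | 9.761         | Undetermined | VB49833b9d-a22a2455 | /dev/disk/by-path/pci-0000:00:0d.0-ata-3.0 |'
candidate_disks "$sample"
```

The printed UUIDs are the ones passed to ``system host-stor-add`` in the next step.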
#. Choose empty disks and provide the hostname and UUID to finish the |OSD|
   configuration:

   .. code-block:: none

      $ system host-stor-add controller-0 osd fdaf3f71-a2df-4b40-9e70-335900f953a3
      $ system host-stor-add controller-1 osd 03cbb10e-fdc1-4d84-a0d8-6e02c22e3251
      $ system host-stor-add compute-0 osd c9b4ed65-da32-4770-b901-60b56fd68c35

#. Apply the rook-ceph application.

   .. code-block:: none

      $ system application-apply rook-ceph

#. Wait for the |OSD| pods to be ready.

   .. code-block:: none
      $ kubectl get pods -n rook-ceph
      NAME                                                     READY   STATUS      RESTARTS         AGE
      ceph-mgr-provision-nh6dl                                 0/1     Completed   0                18h
      csi-cephfsplugin-2nnwf                                   2/2     Running     10 (3h9m ago)    18h
      csi-cephfsplugin-flbll                                   2/2     Running     14 (3h42m ago)   18h
      csi-cephfsplugin-provisioner-5467c6c4f-98fxk             5/5     Running     5 (4h7m ago)     18h
      csi-cephfsplugin-zzskz                                   2/2     Running     17 (168m ago)    18h
      csi-rbdplugin-42ldl                                      2/2     Running     17 (168m ago)    18h
      csi-rbdplugin-8xzxz                                      2/2     Running     14 (3h42m ago)   18h
      csi-rbdplugin-b6dvk                                      2/2     Running     10 (3h9m ago)    18h
      csi-rbdplugin-provisioner-fd84899c-6795x                 5/5     Running     5 (4h7m ago)     18h
      rook-ceph-crashcollector-compute-0-59f554f6fc-5s5cz      1/1     Running     0                4m19s
      rook-ceph-crashcollector-controller-0-589f5f774-b2297    1/1     Running     0                3h2m
      rook-ceph-crashcollector-controller-1-68d66b9bff-njrhg   1/1     Running     1 (4h7m ago)     18h
      rook-ceph-exporter-compute-0-569b65cf6c-xhfjk            1/1     Running     0                4m14s
      rook-ceph-exporter-controller-0-5fd477bb8-rzkqd          1/1     Running     0                3h2m
      rook-ceph-exporter-controller-1-6f5d8695b9-772rb         1/1     Running     1 (4h7m ago)     18h
      rook-ceph-mds-kube-cephfs-a-654c56d89d-mdklw             2/2     Running     11 (166m ago)    18h
      rook-ceph-mds-kube-cephfs-b-6c498f5db4-5hbcj             2/2     Running     2 (166m ago)     3h2m
      rook-ceph-mgr-a-5d6664f544-rgfpn                         3/3     Running     9 (3h42m ago)    18h
      rook-ceph-mgr-b-5c4cb984b9-cl4qq                         3/3     Running     0                168m
      rook-ceph-mgr-c-7d89b6cddb-j9hxp                         3/3     Running     0                3h9m
      rook-ceph-mon-a-6ffbf95cdf-cvw8r                         2/2     Running     0                3h9m
      rook-ceph-mon-b-5558b5ddc7-h7nhz                         2/2     Running     2 (4h7m ago)     18h
      rook-ceph-mon-c-6db9c888cb-mfxfh                         2/2     Running     0                167m
      rook-ceph-operator-69b5674578-k6k4j                      1/1     Running     0                8m10s
      rook-ceph-osd-0-dd94574ff-dvrrs                          2/2     Running     2 (4h7m ago)     18h
      rook-ceph-osd-1-5d7f598f8f-88t2j                         2/2     Running     0                3h9m
      rook-ceph-osd-2-6776d44476-sqnlj                         2/2     Running     0                4m20s
      rook-ceph-osd-prepare-compute-0-ls2xw                    0/1     Completed   0                5m16s
      rook-ceph-osd-prepare-controller-0-jk6bz                 0/1     Completed   0                5m27s
      rook-ceph-osd-prepare-controller-1-d845s                 0/1     Completed   0                5m21s
      rook-ceph-provision-vtvc4                                0/1     Completed   0                17h
      rook-ceph-tools-7dc9678ccb-srnd8                         1/1     Running     1 (4h7m ago)     18h
      stx-ceph-manager-664f8585d8-csl7p                        1/1     Running     1 (4h7m ago)     18h
#. Check ceph cluster health.

   .. code-block:: none
      $ ceph -s
        cluster:
          id:     5b579aca-617f-4f2a-b059-73e7071111dc
          health: HEALTH_OK

        services:
          mon: 3 daemons, quorum a,b,c (age 2h)
          mgr: a(active, since 2h), standbys: c, b
          mds: 1/1 daemons up, 1 hot standby
          osd: 3 osds: 3 up (since 82s), 3 in (since 2m)

        data:
          volumes: 1/1 healthy
          pools:   4 pools, 113 pgs
          objects: 26 objects, 648 KiB
          usage:   129 MiB used, 29 GiB / 29 GiB avail
          pgs:     110 active+clean
                   2   active+clean+scrubbing+deep
                   1   active+clean+scrubbing

        io:
          client:   1.2 KiB/s rd, 2 op/s rd, 0 op/s wr
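For automation, the health check can be reduced to a single assertion on the ``health:`` field. A sketch only, assuming the plain-text ``ceph -s`` layout above; ``ceph -s --format json`` fed to a JSON parser would be more robust:

```shell
#!/bin/sh
# Sketch only: extract the health field from plain-text `ceph -s` output.
cluster_health() {
    printf '%s\n' "$1" | awk '/health:/ {print $2; exit}'
}

# Offline sample (from the output above)
sample='  cluster:
    id:     5b579aca-617f-4f2a-b059-73e7071111dc
    health: HEALTH_OK'
echo "cluster health: $(cluster_health "$sample")"
# Live usage sketch:
#   [ "$(cluster_health "$(ceph -s)")" = "HEALTH_OK" ] || echo "cluster not healthy" >&2
```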
.. end-content


@@ -67,6 +67,39 @@ Install application manifest and helm-charts
the current state of the underlying StarlingX Kubernetes platform and the
recommended StarlingX configuration of OpenStack services.
By default, |prefix|-openstack is configured to support Rook-Ceph
deployments. For host-based Ceph deployments, the following script needs to
be executed to override the ``ceph-config-helper`` image used by
|prefix|-openstack helm charts:
.. code-block:: none
   NAMESPACE=openstack
   APP_NAME=|prefix|-openstack
   DOCKER_REGISTRY_URL=<your docker registry url>  # e.g., myprivateregistry.abc.com:9001/docker.io
   HOST_CEPH_IMAGE=${DOCKER_REGISTRY_URL}/openstackhelm/ceph-config-helper:ubuntu_bionic-20201223

   OVERRIDE_CHARTS=(
       "cinder" "cinder"
       "glance"
       "libvirt"
       "nova" "nova"
   )
   OVERRIDE_IMAGES=(
       "cinder_backup_storage_init" "cinder_storage_init"  # Cinder images
       "glance_storage_init"                               # Glance image
       "ceph_config_helper"                                # libvirt image
       "nova_service_cleaner" "nova_storage_init"          # Nova images
   )

   for ((i=0; i<${#OVERRIDE_CHARTS[@]}; i++)); do
       CHART=${OVERRIDE_CHARTS[$i]}
       IMAGE=${OVERRIDE_IMAGES[$i]}

       echo "Overriding ${IMAGE} image of ${CHART} chart"
       system helm-override-update ${APP_NAME} ${CHART} ${NAMESPACE} \
           --reuse-values --set images.tags.${IMAGE}=${HOST_CEPH_IMAGE}
   done
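The script above pairs the two lists index-by-index, one image tag per chart entry, so editing one list without the other silently skips or misassigns overrides. A POSIX sketch of a length guard (it mirrors the arrays above as word lists; the variable names are illustrative):

```shell
#!/bin/sh
# Sketch only: the chart and image lists pair up one-to-one, so their
# lengths must match before looping over them.
OVERRIDE_CHARTS="cinder cinder glance libvirt nova nova"
OVERRIDE_IMAGES="cinder_backup_storage_init cinder_storage_init glance_storage_init ceph_config_helper nova_service_cleaner nova_storage_init"

n_charts=$(echo "$OVERRIDE_CHARTS" | wc -w)
n_images=$(echo "$OVERRIDE_IMAGES" | wc -w)
if [ "$n_charts" -ne "$n_images" ]; then
    echo "chart/image lists out of sync ($n_charts vs $n_images)" >&2
    exit 1
fi
echo "$n_charts image overrides to apply"
```

Each applied override can afterwards be inspected with ``system helm-override-show``.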
#. Apply the |prefix|-openstack application to bring |prod-os| into service.
   If your environment is preconfigured with a proxy server, make sure the
   HTTPS proxy is set before applying |prefix|-openstack.