Rook Ceph Dashboard Doc procedure improvement

Improves the documentation about enabling Rook Ceph Dashboard

Story: 2011066
Task: 51394

Change-Id: Ib5d682ea3da26888b4aa0f817711d331d865d4e8
Signed-off-by: Caio Correa <caio.correa@windriver.com>

Install Rook Ceph
=================
Rook Ceph is an orchestrator providing a containerized solution for Ceph
Storage with a specialized Kubernetes Operator to automate the management of
the cluster. It is an alternative solution to the bare metal Ceph storage. See
https://rook.io/docs/rook/latest-release/Getting-Started/intro/ for more
details.
Before configuring the deployment model and services:
- Ensure that there is no ceph-store storage backend configured on the
system.
.. code-block:: none
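# A minimal check; no ceph-store entry should appear in the listing:
~(keystone_admin)$ system storage-backend-list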
- Create a storage backend for Rook Ceph, choose your deployment model
(controller, dedicated, open), and the desired services (block or ecblock,
filesystem, object).
.. code-block:: none
~(keystone_admin)$ system storage-backend-add ceph-rook --deployment controller --confirmed
- Create a ``host-fs ceph`` for each host that will run a Rook Ceph monitor
(preferably an odd number of hosts):
.. code-block:: none
~(keystone_admin)$ system host-fs-add <hostname> ceph=<size>
- On |AIO-DX| platforms, adding a floating monitor is recommended. To add a
floating monitor, the inactive controller must be locked:
.. code-block:: none
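# A sketch, assuming controller-1 is the inactive controller and a 20 GB
# floating monitor filesystem:
~(keystone_admin)$ system host-lock controller-1
~(keystone_admin)$ system controllerfs-add ceph-float=20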
- Configure |OSDs|.
- Check the |UUID| of the disks of the desired host that will provide the
|OSDs|:
.. code-block:: none
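~(keystone_admin)$ system host-disk-list <hostname>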
~(keystone_admin)$ system host-stor-add <hostname> osd <disk_uuid>
For more details on deployment models and services, see
:ref:`deployment-models-for-rook-ceph-b855bd0108cf`.
.. rubric:: |proc|
After configuring the environment based on the selected deployment model,
Rook Ceph will be installed automatically.
A few minutes after the application is applied, check the health of the
cluster using any Ceph command, for example :command:`ceph status`.
.. code-block:: none
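~(keystone_admin)$ ceph status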
usage: 3.8 GiB used, 5.7 TiB / 5.7 TiB avail
pgs: 81 active+clean
Check if the cluster contains all the required elements. All pods should be
running or completed for the cluster to be considered healthy. Use the
following command to check the Rook Ceph pods on the cluster.
.. code-block:: none
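kubectl get pod -n rook-ceph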
stx-ceph-osd-audit-28553058-ms92w 0/1 Completed 0 2m5s
Additional Enhancements
-----------------------
Add new OSDs on a running cluster
*********************************
To add new |OSDs| to the cluster, add the new |OSD| to the platform and
reapply the application.
.. code-block:: none
~(keystone_admin)$ system host-stor-add <host> <disk_uuid>
~(keystone_admin)$ system application-apply rook-ceph
Add a new monitor on a running cluster
**************************************
To add a new monitor to the cluster, add the ``host-fs`` to the desired host
and reapply the application.
.. code-block:: none
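~(keystone_admin)$ system host-fs-add <hostname> ceph=<size>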
~(keystone_admin)$ system application-apply rook-ceph
Enable the Ceph Dashboard
*************************
To enable the Ceph dashboard, a Helm override must be provided to the
application. Provide a password encoded in base64.
.. rubric:: |proc|
#. Create the override file.
.. code-block:: none
$ openssl base64 -e <<< "my_dashboard_passwd"
bXlfZGFzaGJvYXJkX3Bhc3N3ZAo=
$ cat << EOF >> dashboard-override.yaml
cephClusterSpec:
  dashboard:
    enabled: true
    password: "bXlfZGFzaGJvYXJkX3Bhc3N3ZAo="
EOF
#. Update the Helm chart with the created user-override.
.. code-block:: none
~(keystone_admin)$ system helm-override-update --values dashboard-override.yaml rook-ceph rook-ceph-cluster rook-ceph
+----------------+-------------------+
| Property | Value |
+----------------+-------------------+
| name | rook-ceph-cluster |
| namespace | rook-ceph |
| user_overrides | cephClusterSpec: |
| | dashboard: |
| | enabled: true |
| | |
+----------------+-------------------+
#. Apply/reapply the Rook Ceph application.
.. code-block:: none
~(keystone_admin)$ system application-apply rook-ceph
You can access the dashboard using the following address: ``https://<floating_ip>:30443``.
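To quickly verify that the dashboard is reachable, you can probe the port
from any host with access to the floating IP; a minimal check, assuming the
default self-signed certificate (hence ``-k``):
.. code-block:: none
$ curl -k -I https://<floating_ip>:30443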
Check Rook Ceph pods
********************
You can check the pods of the storage cluster using the following command:
.. code-block:: none
kubectl get pod -n rook-ceph
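If the toolbox pod is running, Ceph commands can also be executed inside the
cluster; a sketch, assuming the default ``rook-ceph-tools`` deployment name:
.. code-block:: none
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph status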
Installation on |AIO-SX| deployments
------------------------------------
For example, you can manually install a controller model, one monitor, and
the block and cephfs services on |AIO-SX| deployments.
In this configuration, you can add monitors and |OSDs| on the |AIO-SX| node.
#. On a system with no bare metal Ceph storage backend on it, add a ceph-rook
storage backend using block (RBD) and cephfs (the default options, so no
service arguments are needed).
.. code-block:: none
$ system storage-backend-add ceph-rook --deployment controller --confirmed
#. Add the ``host-fs ceph`` on the controller; the ``host-fs ceph`` is
configured with 20 GB.
.. code-block:: none
$ system host-fs-add controller-0 ceph=20
#. To add |OSDs|, get the |UUID| of each disk and run the
:command:`host-stor-add` command.
.. code-block:: none
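$ system host-disk-list controller-0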
| | | | | | | | | |
+--------------------------------------+-----------+---------+---------+-------+------------+--------------+--------------------+--------------------------------------------+
#. Add all the desired disks as |OSDs|.
.. code-block:: none
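# Assuming <disk_uuid> is taken from the disk listing above:
$ system host-stor-add controller-0 <disk_uuid>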
| updated_at | None |
+------------------+--------------------------------------------------+
#. Check the progress of the application. With a valid configuration of ``host-fs``
and |OSDs|, the application will be applied automatically.
.. code-block:: none
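$ system application-show rook-ceph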
#or
$ system application-list
#. After applying the application, the pod list of the ``rook-ceph`` namespace
should be as follows:
.. code-block:: none
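$ kubectl get pod -n rook-ceph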
stx-ceph-osd-audit-28553058-ms92w 0/1 Completed 0 2m5s
Installation on |AIO-DX| deployments
------------------------------------
For example, you can manually install a controller model, three monitors, and
the block and cephfs services on |AIO-DX| deployments.
In this configuration, you can add monitors and |OSDs| on the |AIO-DX| node.
#. On a system with no bare metal Ceph storage backend on it, add a ceph-rook
storage backend using block (RBD) and cephfs (the default options, so no
service arguments are needed).
.. code-block:: none
$ system storage-backend-add ceph-rook --deployment controller --confirmed
#. Add the ``controller-fs`` ``ceph-float`` configured with 20 GB.
.. code-block:: none
$ system controllerfs-add ceph-float=20
#. Add the ``host-fs ceph`` on each controller; the ``host-fs ceph`` is
configured with 20 GB.
.. code-block:: none
$ system host-fs-add controller-0 ceph=20
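# Repeat for the second controller:
$ system host-fs-add controller-1 ceph=20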
#. To add |OSDs|, get the |UUID| of each disk and run the
:command:`host-stor-add` command.
.. code-block:: none
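$ system host-disk-list controller-0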
| | | | | | | | | |
+--------------------------------------+-----------+---------+---------+-------+------------+--------------+-------------------+--------------------------------------------+
#. Add all the desired disks as |OSDs|.
.. code-block:: none
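# Assuming <disk_uuid> is taken from the disk listing above:
$ system host-stor-add controller-0 <disk_uuid>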
| updated_at | None |
+------------------+--------------------------------------------------+
$ system host-stor-add controller-1 1e36945e-e0fb-4a72-9f96-290f9bf57523
+------------------+--------------------------------------------------+
| updated_at | None |
+------------------+--------------------------------------------------+
#. Check the progress of the application. With a valid configuration of monitors and
|OSDs|, the application will be applied automatically.
.. code-block:: none
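$ system application-show rook-ceph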
#or
$ system application-list
#. After applying the application, the pod list of the ``rook-ceph`` namespace
should be as follows:
.. code-block:: none
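$ kubectl get pod -n rook-ceph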
rook-ceph-tools-84659bcd67-r8qbp 1/1 Running 0 22m
stx-ceph-manager-689997b4f4-hk6gh 1/1 Running 0 22m
Installation on Standard deployments
------------------------------------
For example, you can install a dedicated model with five monitors and the
ecblock and cephfs services on Standard deployments.
In this configuration, monitors are added on 5 hosts and, to fit this
deployment in the dedicated model, |OSDs| are added on workers only. In this
example, compute-1 and compute-2 host the cluster |OSDs|.
#. On a system with no bare metal Ceph storage backend on it, add a ceph-rook
storage backend using ecblock (instead of |RBD|) and cephfs. To fit the
dedicated model, the |OSDs| must be placed on dedicated workers only.
.. code-block:: none
$ system storage-backend-add ceph-rook --deployment dedicated --confirmed --services ecblock,filesystem
#. Add the ``host-fs`` on each node that will run ``mon``, ``mgr``, and ``mds``. In
this case, 5 hosts will have the ``host-fs ceph`` configured.
.. code-block:: none
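# The remaining three monitor hosts shown here are an assumption for this
# example; adjust to the hosts chosen for mon, mgr, and mds:
$ system host-fs-add controller-0 ceph=20
$ system host-fs-add controller-1 ceph=20
$ system host-fs-add compute-0 ceph=20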
$ system host-fs-add compute-1 ceph=20
$ system host-fs-add compute-2 ceph=20
#. To add |OSDs|, get the |UUID| of each disk and run the
:command:`host-stor-add` command.
.. code-block:: none
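$ system host-disk-list compute-1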
+--------------------------------------+-----------+---------+---------+-------+------------+--------------+-------------------+--------------------------------------------+
#. Add all the desired disks as |OSDs|. For simplicity, only one |OSD| is
added on each of compute-1 and compute-2.
.. code-block:: none
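# Assuming <disk_uuid> is taken from the disk listing above:
$ system host-stor-add compute-1 <disk_uuid>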
| updated_at | None |
+------------------+--------------------------------------------------+
#. Check the progress of the application. With a valid configuration of ``host-fs``
and |OSDs|, the application will be applied automatically.
.. code-block:: none
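$ system application-show rook-ceph
#or
$ system application-list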
#. After applying the application, the pod list of the ``rook-ceph`` namespace
should be as follows:
.. code-block:: none
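$ kubectl get pod -n rook-ceph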