Rook ceph configuration (dsR10,dsR10minor)

Improve Rook Ceph documentation

Change-Id: Ic59e102ebe443d9a04cdc5bf6a9a7d873c87cb40
Signed-off-by: Elisamara Aoki Gonçalves <elisamaraaoki.goncalves@windriver.com>
Elisamara Aoki Gonçalves
2025-07-24 03:38:22 +00:00
parent 9d855b77c3
commit feb3ac8176
5 changed files with 323 additions and 85 deletions

View File

@@ -972,6 +972,8 @@ For Rook-Ceph:
Each deployment model enforces a different structure for the Rook Ceph
cluster and its integration with the platform.
For more details see :ref:`install-rook-ceph-a7926a1f9b70`.
#. Add Storage-Backend with Deployment Model.
.. code-block:: none

View File

@@ -894,6 +894,8 @@ For Rook-Ceph:
Each deployment model enforces a different structure for the Rook Ceph
cluster and its integration with the platform.
For more details see :ref:`install-rook-ceph-a7926a1f9b70`.
#. Check if the rook-ceph app is uploaded.
.. code-block:: none

View File

@@ -839,6 +839,8 @@ If configuring Rook Ceph Storage Backend, configure the environment
Each deployment model enforces a different structure for the Rook Ceph
cluster and its integration with the platform.
For more details see :ref:`install-rook-ceph-a7926a1f9b70`.
#. Check if the rook-ceph app is uploaded.
.. code-block:: none

View File

@@ -15,6 +15,9 @@ elements.
Available Deployment Models
---------------------------
Deployment Model Rules
***********************
Each deployment model works with different deployment strategies and rules to
fit different needs. Choose one of the following models according to the
demands of your cluster:
@@ -22,18 +25,32 @@ demands of your cluster:
Controller Model (default)
- The |OSDs| must be added only on hosts with the controller personality.
- The replication factor can be configured up to size 3.
- Can be swapped to the Open Model.
Dedicated Model
- The |OSDs| must be added only on hosts with the worker personality.
- The replication factor can be configured up to size 3.
- Can be swapped to the Open Model.
Open Model
- The |OSD| placement does not have any limitations.
- The replication factor does not have any limitations.
- Can be swapped to the Controller or Dedicated Model if the placement
requirements are satisfied.
.. important::
The Open deployment model offers greater flexibility in configuration.
However, users must thoroughly understand the implications of their
settings, as they are solely responsible for ensuring proper configuration.
Change the Deployment Model
***************************
The deployment model can be changed as long as the system follows the rules
established above.
To change to another deployment model, execute the following command:
.. code-block:: none
~(keystone_admin)$ system storage-backend-modify ceph-rook-store -d <desired_deployment_model>
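For example, to switch an existing backend to the Open Model (a minimal example
of the command above; ``open`` is assumed here to be the accepted value for the
Open Model):
.. code-block:: none
~(keystone_admin)$ system storage-backend-modify ceph-rook-store -d open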
Replication Factor
------------------
@@ -56,7 +73,7 @@ Simplex Controller Model:
Simplex Open Model:
Default: 1
Max: Any
Duplex Controller Model:
Default: 2
@@ -64,7 +81,7 @@ Duplex Controller Model:
Duplex Open Model:
Default: 2
Max: Any
Duplex+ or Standard Controller Model:
Default: 2
@@ -76,7 +93,7 @@ Duplex+ or Standard Dedicated Model:
Duplex+ or Standard Open Model:
Default: 2
Max: Any
Minimum Replication Factor
**************************
@@ -99,26 +116,63 @@ with the command:
~(keystone_admin)$ system storage-backend-modify ceph-rook-store min_replication=<size>
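For example, setting a minimum replication factor of 2 might look like the
following (a minimal example of the command above; choose a value allowed by
your deployment model and replication factor):
.. code-block:: none
~(keystone_admin)$ system storage-backend-modify ceph-rook-store min_replication=2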
Monitor, Host-fs and controllerfs
----------------------------------
Ceph monitors are the central nervous system of the Ceph cluster, ensuring that
all components are aware of each other and that data is stored and accessed
reliably. To properly set the environment for Rook Ceph monitors, some
filesystems are needed: ``host-fs`` for fixed monitors and ``controllerfs`` for
the floating monitor.
.. note::
All changes in ``host-fs`` and ``controllerfs`` require a reapply on the
application to properly propagate the modifications in the Rook Ceph
cluster.
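The reapply mentioned in the note is the regular application apply used
throughout this guide:
.. code-block:: none
~(keystone_admin)$ system application-apply rook-ceph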
Functions
*********
The functions parameter contains the Ceph cluster function of a given host. A
``host-fs`` can have the monitor and osd functions; a ``controllerfs`` can only
have the monitor function.
To modify the functions of a ``host-fs``, the complete list of desired
functions must be provided.
Examples:
``host-fs``
.. code-block:: none
#(only monitor)
~(keystone_admin)$ system host-fs-modify <hostname> ceph --functions=monitor
#(only osd)
~(keystone_admin)$ system host-fs-modify <hostname> ceph --functions=osd
#(no function)
~(keystone_admin)$ system host-fs-modify <hostname> ceph --functions=
``controllerfs``
.. code-block:: none
#(only monitor)
~(keystone_admin)$ system controllerfs-modify ceph-float --functions=monitor
#(no function)
~(keystone_admin)$ system controllerfs-modify ceph-float --functions=
Monitor Count
*************
Monitors (mons) are allocated on all the hosts that have a ``host-fs ceph``
with the monitor capability on it.
When the host has no |OSD| registered on the platform, you should add ``host-fs ceph``
on every node intended to house a monitor with the command:
.. code-block:: none
~(keystone_admin)$ system host-fs-add <hostname> ceph=<size>
When there are |OSDs| registered on a host, you should add the monitor function
to the existing ``host-fs``.
.. code-block:: none
~(keystone_admin)$ system host-fs-modify <hostname> ceph --functions=osd,monitor
Possible Monitor Count on Deployment Models for Platforms
*********************************************************
@@ -136,65 +190,135 @@ Duplex+ or Standard:
Recommended: 3
Max: 5
Fixed Monitors
**************
A fixed monitor is the regular monitor associated with a given host. Each
fixed monitor requires a ``host-fs ceph`` properly set up and configured on the
host.
**Add a monitor**
To add a monitor, the ``host-fs ceph`` must be created or have the 'monitor'
function added to its capabilities.
When the host has no |OSD| registered on the platform, add ``host-fs ceph`` on
every node intended to house a monitor. Creating a host-fs this way
automatically sets the monitor function. To create a ``host-fs ceph``, run the
command:
.. code-block:: none
~(keystone_admin)$ system host-fs-add <hostname> ceph=<size>
When there are |OSDs| registered on a host, add the 'monitor' function to the
existing ``host-fs``.
.. code-block:: none
~(keystone_admin)$ system host-fs-modify <hostname> ceph --functions=osd,monitor
After adding the 'monitor' function, reapply the application.
.. code-block:: none
~(keystone_admin)$ system application-apply rook-ceph
**Remove a monitor**
To remove a monitor, the function 'monitor' must be removed from the
capabilities list of the ``host-fs ceph``.
When the host has no |OSD| registered on the platform, remove the 'monitor'
function from the ``host-fs ceph``.
.. code-block:: none
~(keystone_admin)$ system host-fs-modify <hostname> ceph --functions=
When there are |OSDs| registered on the same host, only the 'monitor' function
should be removed from the ``host-fs ceph`` capabilities list.
.. code-block:: none
~(keystone_admin)$ system host-fs-modify <hostname> ceph --functions=osd
After the removal of the 'monitor' function, reapply the application.
.. code-block:: none
~(keystone_admin)$ system application-apply rook-ceph
Floating Monitor (only in Duplex)
*********************************
A floating monitor is supported and recommended on |AIO-DX| platforms. The
monitor roams and is always allocated on the active controller, providing
redundancy and improving stability.
**Add the floating monitor**
.. note::
Lock the standby controller before adding the ``controllerfs ceph-float``
to the platform.
#. Lock the standby controller.
.. code-block:: none
# Considering controller-0 as the active controller
~(keystone_admin)$ system host-lock controller-1
#. Add the ``controllerfs`` with the standby controller locked.
.. code-block:: none
~(keystone_admin)$ system controllerfs-add ceph-float=<size>
#. Unlock the standby controller.
.. code-block:: none
# Considering controller-0 as the active controller
~(keystone_admin)$ system host-unlock controller-1
#. Reapply the Rook Ceph application, with the standby controller unlocked and
available.
.. code-block:: none
~(keystone_admin)$ system application-apply rook-ceph
**Remove the floating monitor**
To remove the floating monitor, the function 'monitor' must be removed from the
capabilities list of the ``controllerfs ceph-float``.
.. code-block:: none
~(keystone_admin)$ system controllerfs-modify ceph-float --functions=
After the removal of the 'monitor' function, reapply the Rook Ceph
application.
.. code-block:: none
~(keystone_admin)$ system application-apply rook-ceph
**Migration between AIO-Duplex and AIO-Duplex+**
Migrating to AIO-Duplex+
To migrate from AIO-Duplex to AIO-Duplex+, the floating monitor must be
removed before the migration, and a new fixed monitor should be added on a
worker after the migration is done.
Migrating to AIO-Duplex
To migrate from AIO-Duplex+ to AIO-Duplex, the fixed monitor should be
removed from the cluster before the migration, and a floating monitor
should be added after the migration is done.
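As a sketch, the monitor changes around a migration to AIO-Duplex+ reuse the
commands described above (the platform migration itself is out of scope here):
.. code-block:: none
# Before the migration: remove the floating monitor and reapply
~(keystone_admin)$ system controllerfs-modify ceph-float --functions=
~(keystone_admin)$ system application-apply rook-ceph
# After the migration: add a fixed monitor on a worker and reapply
~(keystone_admin)$ system host-fs-add <worker-hostname> ceph=<size>
~(keystone_admin)$ system application-apply rook-ceph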
Services
--------
@@ -205,7 +329,7 @@ Available Services
******************
There are four possible services compatible with Rook Ceph. You can combine
them, following the rules below:
``block`` (default)
- Cannot be deployed together with ecblock.
@@ -218,23 +342,64 @@ them following the rules:
- Will enable the ecblock service in Rook and will use cephRBD.
``filesystem`` (default)
- Will enable the Ceph filesystem and use cephFS.
``object``
- Will enable the Ceph object store (RGW).
.. important::
A service cannot be removed or replaced. Services can only be added.
Add New Services
****************
To add a new service to the storage-backend, first choose a service that is
compatible with the rules above.
#. Get the list of the current services of the storage-backend.
.. code-block:: none
~(keystone_admin)$ system storage-backend-show ceph-rook-store
#. Add the desired service to the list.
.. code-block:: none
~(keystone_admin)$ system storage-backend-modify ceph-rook-store --services=<previous_list>,<new_service>
#. Reapply the Rook Ceph application.
.. code-block:: none
~(keystone_admin)$ system application-apply rook-ceph
For example, in a storage-backend with the service list ``block,filesystem``,
only ``object`` can be added as a service:
.. code-block:: none
~(keystone_admin)$ system storage-backend-modify ceph-rook-store --services=block,filesystem,object
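After the reapply completes, the new service should appear in the service list
reported for the backend (the same show command used in the first step):
.. code-block:: none
~(keystone_admin)$ system storage-backend-show ceph-rook-store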
Services Parameterization for the Open Model
********************************************
In the 'open' deployment model, no specific configurations are enforced.
You are responsible for customizing settings based on your specific needs.
To update configurations, a Helm override is required.
When applying a helm-override update, list-type values are completely replaced,
not incrementally updated.
For example, modifying ``cephFileSystems`` (or ``cephBlockPools``,
``cephECBlockPools``, ``cephObjectStores``) via Helm override will overwrite
the entire entry.
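The override itself uses the standard Helm override commands. The sketch below
assumes the cluster settings live in a chart named ``rook-ceph-cluster`` in the
``rook-ceph`` namespace and uses an illustrative values file; verify the actual
chart and namespace names with :command:`system helm-override-list` before
applying:
.. code-block:: none
# List the charts of the rook-ceph application
~(keystone_admin)$ system helm-override-list rook-ceph
# Review the current user overrides (chart and namespace names assumed)
~(keystone_admin)$ system helm-override-show rook-ceph rook-ceph-cluster rook-ceph
# Apply a values file containing the complete, updated list entries
~(keystone_admin)$ system helm-override-update rook-ceph rook-ceph-cluster rook-ceph --values my-overrides.yaml
# Reapply the application so the overrides take effect
~(keystone_admin)$ system application-apply rook-ceph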
This is an example of how to change a parameter, using ``failureDomain``, for
**Cephfs** and **RBD**:
.. tabs::
@@ -242,7 +407,7 @@ an example, for **Cephfs** and **RBD**:
.. code-block:: none
# Get the current CRUSH rule information
ceph osd pool get kube-cephfs-data crush_rule
# Get the current default values
@@ -290,4 +455,82 @@ Disable Helm Chart
Do not disable any of the Rook Ceph Helm charts using :command:`system helm-chart-attribute-modify`
as this may result in a broken installation.
Ceph Health Status Filter
-------------------------
Some Ceph health statuses can be filtered to avoid generating alarms.
The detection of a particular health error or warning can be disabled.
.. important::
Disabling the detection of any health error or warning can prevent the
system from generating alarms, detecting issues, and generating logs. Use
this feature at your own discretion. It is recommended to use it only
temporarily, during an analysis or procedure, and then revert to the
default empty values.
There are two filters: ``health_filters_for_ignore``, which filters at any time
(always active), and ``health_filters_for_upgrade``, which applies the filter
only during an upgrade of Rook Ceph.
To apply the always-on filter (``health_filters_for_ignore``), use the
following procedure.
#. Check for the names of any Ceph health issues that you might want to filter
out.
.. code-block:: none
~(keystone_admin)$ ceph health detail
#. Consult the list of the Ceph health issues currently ignored.
.. code-block:: none
~(keystone_admin)$ kubectl get configmap ceph-manager-config -n rook-ceph -o yaml | sed -n '/ceph-manager-config.yaml:/,/^[^ ]/p' | sed -n 's/^[ ]*health_filters_for_ignore:[ ]*//p'
#. Edit the ConfigMap, adding the names of all the Ceph health issues, comma
separated and delimited by ``[]``, to the ``health_filters_for_ignore`` list.
.. code-block:: none
# Examples of useful health statuses to ignore: MON_DOWN, OSD_DOWN, BLUESTORE_SLOW_OP_ALERT
~(keystone_admin)$ health_filters='[<ceph_health_status_1>,<ceph_health_status_2>]'
~(keystone_admin)$ kubectl get configmap ceph-manager-config -n rook-ceph -o yaml | sed "s/^\(\s*health_filters_for_ignore:\s*\).*/\1$health_filters/" | kubectl apply -f -
#. Restart the stx-ceph-manager pod.
.. code-block:: none
~(keystone_admin)$ kubectl rollout restart -n rook-ceph deployment stx-ceph-manager
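For example, a sketch that ignores monitor and |OSD| down warnings using the
statuses listed above:
.. code-block:: none
~(keystone_admin)$ health_filters='[MON_DOWN,OSD_DOWN]'
~(keystone_admin)$ kubectl get configmap ceph-manager-config -n rook-ceph -o yaml | sed "s/^\(\s*health_filters_for_ignore:\s*\).*/\1$health_filters/" | kubectl apply -f -
~(keystone_admin)$ kubectl rollout restart -n rook-ceph deployment stx-ceph-manager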
To use the upgrade-only filter (``health_filters_for_upgrade``), follow the
procedure above, replacing the consult and edit commands with the following
versions:
#. Check for the names of any Ceph health issues that you might want to filter
out.
.. code-block:: none
~(keystone_admin)$ ceph health detail
#. Consult the list of the Ceph health issues currently ignored.
.. code-block:: none
~(keystone_admin)$ kubectl get configmap ceph-manager-config -n rook-ceph -o yaml | sed -n '/ceph-manager-config.yaml:/,/^[^ ]/p' | sed -n 's/^[ ]*health_filters_for_upgrade:[ ]*//p'
#. Edit the ConfigMap, adding the names of all the Ceph health issues, comma
separated and delimited by ``[]``, to the ``health_filters_for_upgrade`` list.
.. code-block:: none
~(keystone_admin)$ health_filters='[<ceph_health_status_1>,<ceph_health_status_2>]'
~(keystone_admin)$ kubectl get configmap ceph-manager-config -n rook-ceph -o yaml | sed "s/^\(\s*health_filters_for_upgrade:\s*\).*/\1$health_filters/" | kubectl apply -f -
#. Restart the stx-ceph-manager pod.
.. code-block:: none
~(keystone_admin)$ kubectl rollout restart -n rook-ceph deployment stx-ceph-manager

View File

@@ -52,12 +52,6 @@ environment configurations to prevent an automatic reinstall.
#. Remove |OSDs|.
#. Lock the host.
.. code-block:: none
~(keystone_admin)$ system host-lock <hostname>
#. List all |OSDs| to get the uuid of each |OSD|.
.. code-block:: none
@@ -70,11 +64,6 @@ environment configurations to prevent an automatic reinstall.
~(keystone_admin)$ system host-stor-delete <uuid>
#. Unlock the host.
.. code-block:: none
~(keystone_admin)$ system host-unlock <hostname>
#. (|AIO-DX| Only) Remove ``controllerfs``.
@@ -107,4 +96,4 @@ environment configurations to prevent an automatic reinstall.
.. code-block:: none
~(keystone_admin)$ system storage-backend-delete ceph-rook-store --force