Appendix M: Ceph Erasure Coding and Device Classing
===================================================

Overview
++++++++

Ceph pools supporting applications within an OpenStack deployment are by
default configured as replicated pools, which means that every object stored
is copied to multiple hosts or zones to allow the pool to survive the loss of
an OSD.

Ceph also supports Erasure Coded pools, which can be used instead to save raw
space within the storage cluster. The following charms can be configured to
use Erasure Coded pools:

* glance
* cinder-ceph
* nova-compute
* ceph-fs
* ceph-radosgw

Configuring charms for Erasure Coding
+++++++++++++++++++++++++++++++++++++

All charms that support use with Erasure Coded pools provide a consistent set
of configuration options for enabling and tuning the Erasure Coding profile
used to create the Erasure Coded pool.

Erasure Coding is enabled by setting the 'pool-type' option to
'erasure-coded'. By default the JErasure plugin is used with K=1 and M=2.
This does not actually save any raw storage compared to a replicated pool
with 3 replicas (and is designed to allow use on a three node Ceph cluster),
so most deployments using Erasure Coded pools will need to tune the K and M
values based on either the number of hosts deployed or, if the
'customize-failure-domain' option is enabled on the ceph-osd and ceph-mon
charms, the number of zones in the deployment.
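
Enabling zone-based failure domains requires setting
'customize-failure-domain' on both the ceph-osd and ceph-mon applications. A
minimal bundle sketch (assuming the applications are deployed under their
default names):

.. code-block:: yaml

   # Enable zone-based failure domains on both Ceph charms
   ceph-osd:
     options:
       customize-failure-domain: true
   ceph-mon:
     options:
       customize-failure-domain: true
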
The K value defines the number of data chunks that will be used for each
object and the M value defines the number of coding chunks generated for each
object. The M value also defines the number of OSDs that may be lost before
the pool goes into a degraded state.

K + M must always be less than or equal to the number of hosts or zones in
the deployment (depending on the configuration of
'customize-failure-domain').

In the example below, the Erasure Coded pool used by the glance application
will sustain the loss of two hosts or zones while only consuming 2TB instead
of 3TB of storage to store 1TB of data.

.. code-block:: yaml

   glance:
     options:
       pool-type: erasure-coded
       ec-profile-k: 2
       ec-profile-m: 2
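
Once the charm has created its pool, the resulting profile can be inspected
from a ceph-mon unit. A sketch of such a check (the generated profile name
will depend on the pool name and the 'ec-profile-name' option, so the
placeholder below must be substituted accordingly):

.. code::

   # List the EC profiles known to the cluster, then display one of them
   juju ssh ceph-mon/0 sudo ceph osd erasure-code-profile ls
   juju ssh ceph-mon/0 sudo ceph osd erasure-code-profile get <profile-name>
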
The full list of Erasure Coding configuration options is detailed below.

.. list-table:: Erasure Coding charm options
   :widths: 25 5 15 55
   :header-rows: 1

   * - Option
     - Type
     - Default Value
     - Description
   * - pool-type
     - string
     - replicated
     - Ceph pool type to use for storage - valid values are 'replicated' and 'erasure-coded'.
   * - ec-profile-name
     - string
     -
     - Name for the EC profile to be created for the EC pools. If not defined a profile name will be generated based on the name of the pool used by the application.
   * - ec-rbd-metadata-pool
     - string
     -
     - Name of the metadata pool to be created (for RBD use-cases). If not defined a metadata pool name will be generated based on the name of the data pool used by the application. The metadata pool is always replicated (not erasure coded).
   * - ec-profile-k
     - int
     - 1
     - Number of data chunks that will be used for EC data pool. K+M factors should never be greater than the number of available AZs for balancing.
   * - ec-profile-m
     - int
     - 2
     - Number of coding chunks that will be used for EC data pool. K+M factors should never be greater than the number of available AZs for balancing.
   * - ec-profile-locality
     - int
     -
     - (lrc plugin - l) Group the coding and data chunks into sets of size l. For instance, for k=4 and m=2, when l=3 two groups of three are created. Each set can be recovered without reading chunks from another set. Note that using the lrc plugin does incur more raw storage usage than isa or jerasure in order to reduce the cost of recovery operations.
   * - ec-profile-crush-locality
     - string
     -
     - (lrc plugin) The type of the CRUSH bucket in which each set of chunks defined by l will be stored. For instance, if it is set to rack, each group of l chunks will be placed in a different rack. It is used to create a CRUSH rule step such as 'step choose rack'. If it is not set, no such grouping is done.
   * - ec-profile-durability-estimator
     - int
     -
     - (shec plugin - c) The number of parity chunks each of which includes each data chunk in its calculation range. The number is used as a durability estimator. For instance, if c=2, 2 OSDs can be down without losing data.
   * - ec-profile-helper-chunks
     - int
     -
     - (clay plugin - d) Number of OSDs requested to send data during recovery of a single chunk. d needs to be chosen such that k+1 <= d <= k+m-1. The larger the d, the better the savings.
   * - ec-profile-scalar-mds
     - string
     -
     - (clay plugin) Specifies the plugin that is used as a building block in the layered construction. It can be one of: jerasure, isa or shec.
   * - ec-profile-plugin
     - string
     - jerasure
     - EC plugin to use for this application's pool. These plugins are available: jerasure, lrc, isa, shec, clay.
   * - ec-profile-technique
     - string
     - reed_sol_van
     - EC profile technique used for this application's pool - will be validated based on the plugin configured via ec-profile-plugin. Supported techniques are 'reed_sol_van', 'reed_sol_r6_op', 'cauchy_orig', 'cauchy_good', 'liber8tion' for jerasure, 'reed_sol_van', 'cauchy' for isa and 'single', 'multiple' for shec.
   * - ec-profile-device-class
     - string
     -
     - Device class from CRUSH map to use for placement groups for erasure profile - valid values: ssd, hdd or nvme (or leave unset to not use a device class).
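
As an illustration of how these options combine, the bundle fragment below
selects the ISA plugin with a 4+2 profile for the cinder-ceph application.
This is a sketch only; the values are chosen purely for illustration and
require at least six hosts or zones:

.. code-block:: yaml

   cinder-ceph:
     options:
       pool-type: erasure-coded
       # 4 data chunks + 2 coding chunks: tolerates the loss of 2 failure domains
       ec-profile-plugin: isa
       ec-profile-technique: reed_sol_van
       ec-profile-k: 4
       ec-profile-m: 2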

Ceph automatic device classing
++++++++++++++++++++++++++++++

Newer versions of Ceph perform automatic device classing of OSDs. Each OSD
will be placed into the 'nvme', 'ssd' or 'hdd' device class. These classes
can be used when creating Erasure Coding profiles or new CRUSH rules (see
following sections).

The classes can be inspected using:

.. code::

   sudo ceph osd crush tree

   ID CLASS WEIGHT  TYPE NAME
   -1       8.18729 root default
   -5       2.72910     host node-laveran
    2  nvme 0.90970         osd.2
    5   ssd 0.90970         osd.5
    7   ssd 0.90970         osd.7
   -7       2.72910     host node-mees
    1  nvme 0.90970         osd.1
    6   ssd 0.90970         osd.6
    8   ssd 0.90970         osd.8
   -3       2.72910     host node-pytheas
    0  nvme 0.90970         osd.0
    3   ssd 0.90970         osd.3
    4   ssd 0.90970         osd.4

The device class for an Erasure Coded pool can be configured in the consuming
charm using the 'ec-profile-device-class' configuration option listed in the
table above.
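
For example, the glance pool from the earlier example could be restricted to
SSD-backed OSDs with something like the following. This is a sketch only,
reusing the device class option described in the table above:

.. code-block:: yaml

   glance:
     options:
       pool-type: erasure-coded
       ec-profile-k: 2
       ec-profile-m: 2
       # Place the EC data pool only on OSDs in the 'ssd' device class
       ec-profile-device-class: ssd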