Update README for Ceph EC pools

This updates the README's coverage of erasure
coded Ceph pools for the case of CephFS.

The new text should be as similar as possible
across all the charms that support configuration
options for EC pools. See the review below for
the first of these charms whose README has been
updated.

https://review.opendev.org/#/c/749824/

Other minor improvements.

Change-Id: Ic6543e3241048591818358c972eadecd6ceab50c
Author: Peter Matulis
Date: 2020-10-02 16:03:11 -04:00
parent 8b2b48f0ea
commit a44894a249
1 changed file with 67 additions and 13 deletions


@@ -4,11 +4,8 @@
excellent performance, reliability, and scalability.
The ceph-fs charm deploys the metadata server daemon (MDS) for the Ceph
-distributed file system (CephFS). It is used in conjunction with the
-[ceph-mon][ceph-mon-charm] and the [ceph-osd][ceph-osd-charm] charms.
-Highly available CephFS is achieved by deploying multiple MDS servers (i.e.
-multiple ceph-fs units).
+distributed file system (CephFS). The deployment is done within the context of
+an existing Ceph cluster.
# Usage
@@ -20,6 +17,11 @@ default values. A YAML file (e.g. `ceph-fs.yaml`) is often used to store
configuration options. See the [Juju documentation][juju-docs-config-apps] for
details on configuring applications.
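For instance, a configuration file could be passed at deploy time. Here is a
minimal sketch, in which the file name and all option values are illustrative
only:

    # ceph-fs.yaml: example configuration file (values are illustrative)
    ceph-fs:
      source: cloud:bionic-train
      ceph-osd-replication-count: 3

The file would then be referenced at deploy time:

    juju deploy --config ceph-fs.yaml ceph-fs
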
#### `pool-type`
The `pool-type` option dictates the storage pool type. See section 'Ceph pool
type' for more information.
#### `source`
The `source` option states the software sources. A common value is an OpenStack
@@ -28,18 +30,70 @@ and the UCA][cloud-archive-ceph]. The underlying host's existing apt sources
will be used if this option is not specified (this behaviour can be explicitly
chosen by using the value of 'distro').
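For example, to select the Ussuri UCA on Ubuntu 18.04 LTS (the value shown is
illustrative; choose the release appropriate to your environment):

    juju config ceph-fs source=cloud:bionic-ussuri
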
## Ceph pool type
Ceph storage pools can be configured to ensure data resiliency either through
replication or by erasure coding. This charm supports both types via the
`pool-type` configuration option, which can take on the values of 'replicated'
and 'erasure-coded'. The default value is 'replicated'.
For this charm, the pool type will be associated with CephFS volumes.
> **Note**: Erasure-coded pools are supported starting with Ceph Luminous.
### Replicated pools
Replicated pools use a simple replication strategy in which each written object
is copied, in full, to multiple OSDs within the cluster.
The `ceph-osd-replication-count` option sets the replica count for any object
stored within the 'ceph-fs-data' cephfs pool. Increasing this value increases
data resilience at the cost of consuming more real storage in the Ceph cluster.
The default value is '3'.
> **Important**: The `ceph-osd-replication-count` option must be set prior to
adding the relation to the ceph-mon application. Otherwise, the pool's
configuration will need to be set by interfacing with the cluster directly.
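For example, to set a replica count of '5' at deploy time (an arbitrary value
used here purely for illustration):

    juju deploy --config ceph-osd-replication-count=5 ceph-fs
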
### Erasure coded pools
Erasure coded pools use a technique that allows for the same resiliency as
replicated pools, yet reduces the amount of space required. Written data is
split into data chunks and error correction chunks, which are both distributed
throughout the cluster.
> **Note**: Erasure coded pools require more memory and CPU cycles than
replicated pools do.
When using erasure coded pools for CephFS file systems, two pools will be
created: a replicated pool (for storing MDS metadata) and an erasure coded pool
(for storing the data written into a CephFS volume). The
`ceph-osd-replication-count` configuration option only applies to the metadata
(replicated) pool.
Erasure coded pools can be configured via options whose names begin with the
`ec-` prefix.
> **Important**: It is strongly recommended to tailor the `ec-profile-k` and
`ec-profile-m` options to the needs of the given environment. These
options have default values of '1' and '2' respectively, which result in the
same space requirements as those of a replicated pool.
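As a sketch, a profile with four data chunks and two coding chunks could be
requested via a configuration file like this (the k/m values are illustrative
and must be compatible with the number of available OSDs):

    ceph-fs:
      pool-type: erasure-coded
      ec-profile-k: 4
      ec-profile-m: 2

With such values, any two OSDs can be lost without data loss, and the raw
space consumed is 1.5 times the data stored, versus 3 times for a replicated
pool at the default replica count of '3'.
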
See [Ceph Erasure Coding][cdg-ceph-erasure-coding] in the [OpenStack Charms
Deployment Guide][cdg] for more information.
## Deployment
-We are assuming a pre-existing Ceph cluster.
-To deploy a single MDS node:
+To deploy a single MDS node within an existing Ceph cluster:
    juju deploy ceph-fs
Then add a relation to the ceph-mon application:
    juju add-relation ceph-fs:ceph-mds ceph-mon:mds
## High availability
Highly available CephFS is achieved by deploying multiple MDS servers (i.e.
multiple ceph-fs units).
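For example, to add two more MDS servers to an existing deployment:

    juju add-unit -n 2 ceph-fs
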
## Actions
This section lists Juju [actions][juju-docs-actions] supported by the charm.
@@ -60,10 +114,10 @@ For general charm questions refer to the OpenStack [Charm Guide][cg].
<!-- LINKS -->
[cg]: https://docs.openstack.org/charm-guide
[cdg]: https://docs.openstack.org/project-deploy-guide/charm-deployment-guide
[ceph-upstream]: https://ceph.io
[ceph-mon-charm]: https://jaas.ai/ceph-mon
[ceph-osd-charm]: https://jaas.ai/ceph-osd
[juju-docs-actions]: https://jaas.ai/docs/actions
[juju-docs-config-apps]: https://juju.is/docs/configuring-applications
[lp-bugs-charm-ceph-fs]: https://bugs.launchpad.net/charm-ceph-fs/+filebug
[cloud-archive-ceph]: https://wiki.ubuntu.com/OpenStack/CloudArchive#Ceph_and_the_UCA
[cdg-ceph-erasure-coding]: https://docs.openstack.org/project-deploy-guide/charm-deployment-guide/latest/app-erasure-coding.html