Merge "Update README, including EC Ceph pools"
commit c9c484c1da

README.md
@@ -1,33 +1,146 @@

# Overview

[Cinder][cinder-upstream] is the OpenStack block storage (volume) service.
[Ceph][ceph-upstream] is a unified, distributed storage system designed for
excellent performance, reliability, and scalability. Ceph-backed Cinder
therefore allows for scalability and redundancy for storage volumes. This
arrangement is intended for large-scale production deployments.

The cinder-ceph charm provides a Ceph (RBD) storage backend for Cinder and is
used in conjunction with the [cinder][cinder-charm] charm and an existing Ceph
cluster (via the [ceph-mon][ceph-mon-charm] or the
[ceph-proxy][ceph-proxy-charm] charms).

Specialised use cases:

* Through the use of multiple application names (e.g. cinder-ceph-1,
  cinder-ceph-2), multiple Ceph clusters can be associated with a single Cinder
  deployment.

* A variety of storage types can be achieved with a single Ceph cluster by
  mapping pools with multiple cinder-ceph applications. For instance, different
  pools could be used for HDD or SSD devices. See option `rbd-pool-name` below,
  and the sketch after this list.
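
As a sketch of the second use case, a second cinder-ceph application could map
to a different pool in the same cluster. The application name `cinder-ceph-ssd`
and pool name `cinder-ssd` below are hypothetical, and the pool is assumed to
already exist in the Ceph cluster:

    # Hypothetical second backend mapped to a pre-existing SSD-backed pool
    juju deploy cinder-ceph cinder-ceph-ssd
    juju config cinder-ceph-ssd rbd-pool-name=cinder-ssd
    juju add-relation cinder-ceph-ssd:storage-backend cinder:storage-backend
    juju add-relation cinder-ceph-ssd:ceph ceph-mon:client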

# Usage

## Configuration

This section covers common and/or important configuration options. See file
`config.yaml` for the full list of options, along with their descriptions and
default values. See the [Juju documentation][juju-docs-config-apps] for details
on configuring applications.

#### `pool-type`

The `pool-type` option dictates the storage pool type. See section 'Pool type'
for more information.
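
As an illustrative sketch, the pool type could be selected at deploy time;
like the other pool options, it is presumably best set before relating to
ceph-mon, since the pools are created when that relation is joined:

    # Select erasure-coded pools at deploy time (illustrative)
    juju deploy --config pool-type=erasure-coded cinder-ceph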

#### `rbd-pool-name`

The `rbd-pool-name` option sets an existing rbd pool that Cinder should map
to.
## Pool type

Ceph storage pools can be configured to ensure data resiliency either through
replication or by erasure coding. This charm supports both types via the
`pool-type` configuration option, which can take on the values of 'replicated'
and 'erasure-coded'. The default value is 'replicated'.

For this charm, the pool type will be associated with Cinder volumes.

> **Note**: Erasure-coded pools are supported starting with Ceph Luminous.
### Replicated pools

Replicated pools use a simple replication strategy in which each written object
is copied, in full, to multiple OSDs within the cluster.

The `ceph-osd-replication-count` option sets the replica count for any object
stored within the 'cinder-ceph' rbd pool. Increasing this value increases data
resilience at the cost of consuming more real storage in the Ceph cluster. The
default value is '3'.

> **Important**: The `ceph-osd-replication-count` option must be set prior to
  adding the relation to the ceph-mon (or ceph-proxy) application. Otherwise,
  the pool's configuration will need to be set by interfacing with the cluster
  directly.
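
A minimal sketch honouring that constraint is to set the option at deploy
time (the value '3' here simply restates the default):

    # Set the replica count before any relation to ceph-mon exists
    juju deploy --config ceph-osd-replication-count=3 cinder-ceph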

### Erasure coded pools

Erasure coded pools use a technique that allows for the same resiliency as
replicated pools, yet reduces the amount of space required. Written data is
split into data chunks and error correction chunks, which are both distributed
throughout the cluster.

> **Note**: Erasure coded pools require more memory and CPU cycles than
  replicated pools do.

When using erasure coded pools for Cinder volumes, two pools will be created: a
replicated pool (for storing RBD metadata) and an erasure coded pool (for
storing the data written into the RBD). The `ceph-osd-replication-count`
configuration option only applies to the metadata (replicated) pool.
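
As a quick way to confirm which pools were actually created, the pool list can
be queried on a monitor unit (a sketch, assuming Juju 2.x syntax and a unit
named ceph-mon/0):

    # List all pools known to the cluster
    juju run --unit ceph-mon/0 -- sudo ceph osd lspools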

Erasure coded pools can be configured via options whose names begin with the
`ec-` prefix.

> **Important**: It is strongly recommended to tailor the `ec-profile-k` and
  `ec-profile-m` options to the needs of the given environment. These latter
  options have default values of '1' and '2' respectively, which result in the
  same space requirements as those of a replicated pool.
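
A sketch of a deploy-time erasure coding configuration; the profile values
k=4 and m=2 are illustrative only and must be sized against the failure
domains (typically hosts) available in the actual cluster:

    # 4 data chunks + 2 coding chunks; typically needs >= 6 OSD hosts
    juju deploy --config pool-type=erasure-coded \
                --config ec-profile-k=4 \
                --config ec-profile-m=2 \
                cinder-ceph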

See [Ceph Erasure Coding][cdg-ceph-erasure-coding] in the [OpenStack Charms
Deployment Guide][cdg] for more information.

## Deployment

These instructions will show how to deploy Cinder and connect it to an
existing Juju-managed Ceph cluster.

Let file `cinder.yaml` contain the following:

    cinder:
      block-device: None

Deploy Cinder and add relations to essential OpenStack components:

    juju deploy --config cinder.yaml cinder

    juju add-relation cinder:cinder-volume-service nova-cloud-controller:cinder-volume-service
    juju add-relation cinder:shared-db mysql:shared-db
    juju add-relation cinder:identity-service keystone:identity-service
    juju add-relation cinder:amqp rabbitmq-server:amqp

Now deploy cinder-ceph and add a relation to both the cinder and ceph-mon
applications:

    juju deploy cinder-ceph
    juju add-relation cinder-ceph:storage-backend cinder:storage-backend
    juju add-relation cinder-ceph:ceph ceph-mon:client

Additionally, when both the nova-compute and cinder-ceph applications are
deployed, a relation is needed between them:

    juju add-relation cinder-ceph:ceph-access nova-compute:ceph-access
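
As an optional smoke test (assuming an OpenStack client environment is
already configured), a new volume should now be backed by Ceph:

    # Create a 1 GiB Cinder volume on the Ceph backend
    openstack volume create --size 1 test-volume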

# Bugs

Please report bugs on [Launchpad][lp-bugs-charm-cinder-ceph].

For general charm questions refer to the [OpenStack Charm Guide][cg].

<!-- LINKS -->

[cg]: https://docs.openstack.org/charm-guide
[cdg]: https://docs.openstack.org/project-deploy-guide/charm-deployment-guide
[ceph-upstream]: https://ceph.io
[cinder-upstream]: https://docs.openstack.org/cinder
[cinder-charm]: https://jaas.ai/cinder
[ceph-mon-charm]: https://jaas.ai/ceph-mon
[ceph-proxy-charm]: https://jaas.ai/ceph-proxy
[cinder-purestorage-charm]: https://jaas.ai/cinder-purestorage
[juju-docs-actions]: https://jaas.ai/docs/actions
[juju-docs-config-apps]: https://juju.is/docs/configuring-applications
[lp-bugs-charm-cinder-ceph]: https://bugs.launchpad.net/charm-cinder-ceph/+filebug
[cdg-ceph-erasure-coding]: https://docs.openstack.org/project-deploy-guide/charm-deployment-guide/latest/app-erasure-coding.html
@ -4,7 +4,7 @@ options:
|
||||
type: int
|
||||
description: |
|
||||
This value dictates the number of replicas ceph must make of any
|
||||
object it stores withing the cinder rbd pool. Of course, this only
|
||||
object it stores within the cinder rbd pool. Of course, this only
|
||||
applies if using Ceph as a backend store. Note that once the cinder
|
||||
rbd pool has been created, changing this value will not have any
|
||||
effect (although it can be changed in ceph by manually configuring
|
||||
|