Cinder is the OpenStack block storage (volume) service. Ceph is a unified, distributed storage system designed for excellent performance, reliability, and scalability. Ceph-backed Cinder therefore allows for scalability and redundancy for storage volumes. This arrangement is intended for large-scale production deployments.
Specialised use cases:

- Through the use of multiple application names (e.g. cinder-ceph-1, cinder-ceph-2), multiple Ceph clusters can be associated with a single Cinder deployment.
- A variety of storage types can be achieved with a single Ceph cluster by mapping pools with multiple cinder-ceph applications. For instance, different pools could be used for HDD or SSD devices. See option rbd-pool-name below, and the example sketched after this list.
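As a rough sketch of the second use case, two cinder-ceph applications could be deployed against different pools. The application and pool names below (cinder-ceph-ssd, cinder-ssd, cinder-ceph-hdd, cinder-hdd) are illustrative placeholders, not values defined by this charm:

    juju deploy cinder-ceph cinder-ceph-ssd --config rbd-pool-name=cinder-ssd
    juju deploy cinder-ceph cinder-ceph-hdd --config rbd-pool-name=cinder-hdd
    juju add-relation cinder-ceph-ssd:storage-backend cinder:storage-backend
    juju add-relation cinder-ceph-hdd:storage-backend cinder:storage-backend

Each application would also need its usual Ceph relation, as described in the deployment instructions further below.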
Note: There is currently no upgrade path to using the cinder-ceph charm for older deployments that have the cinder and ceph-mon applications related directly. This issue is tracked in bug LP #1727184.
This section covers common and/or important configuration options. See file
config.yaml for the full list of options, along with their descriptions and
default values. See the Juju documentation for details
on configuring applications.
The pool-type option dictates the storage pool type. See section 'Ceph pool
type' for more information.
The rbd-pool-name option sets an existing rbd pool that Cinder should map to.
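For illustration, these options are typically supplied at deploy time or with juju config. The pool name cinder-ceph-pool below is a placeholder, and rbd-pool-name refers to a pool that must already exist in the cluster:

    juju config cinder-ceph pool-type=replicated
    juju config cinder-ceph rbd-pool-name=cinder-ceph-pool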
Ceph pool type
Ceph storage pools can be configured to ensure data resiliency either through
replication or by erasure coding. This charm supports both types via the
pool-type configuration option, which can take on the values of 'replicated'
and 'erasure-coded'. The default value is 'replicated'.
For this charm, the pool type will be associated with Cinder volumes.
Note: Erasure-coded pools are supported starting with Ceph Luminous.
Replicated pools
Replicated pools use a simple replication strategy in which each written object is copied, in full, to multiple OSDs within the cluster.
The ceph-osd-replication-count option sets the replica count for any object
stored within the 'cinder-ceph' rbd pool. Increasing this value increases data
resilience at the cost of consuming more real storage in the Ceph cluster. The
default value is '3'.
The ceph-osd-replication-count option must be set prior to adding the relation to the ceph-mon (or ceph-proxy) application. Otherwise, the pool's configuration will need to be set by interfacing with the cluster directly.
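A minimal sketch of setting this option before the Ceph relation is added (the value 5 is illustrative only, not a recommendation):

    juju deploy cinder-ceph --config ceph-osd-replication-count=5
    juju add-relation cinder-ceph:ceph ceph-mon:client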
Erasure coded pools
Erasure coded pools use a technique that allows for the same resiliency as replicated pools, yet reduces the amount of space required. Written data is split into data chunks and error correction chunks, which are both distributed throughout the cluster.
Note: Erasure coded pools require more memory and CPU cycles than replicated pools do.
When using erasure coded pools for Cinder volumes, two pools will be created: a
replicated pool (for storing RBD metadata) and an erasure coded pool (for
storing the data written into the RBD). The ceph-osd-replication-count
configuration option only applies to the metadata (replicated) pool.
Erasure coded pools can be configured via options whose names begin with the
ec- prefix.
Important: It is strongly recommended to tailor the ec-profile-k and
ec-profile-m options to the needs of the given environment. These latter options have default values of '1' and '2' respectively, which result in the same space requirements as those of a replicated pool.
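A hedged sketch of deploying with an erasure coded pool, assuming an illustrative profile of k=4 data chunks and m=2 coding chunks (these are example values, not the defaults):

    juju deploy cinder-ceph \
      --config pool-type=erasure-coded \
      --config ec-profile-k=4 \
      --config ec-profile-m=2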
Ceph BlueStore compression
This charm supports BlueStore inline compression
for its associated Ceph storage pool(s). The feature is enabled by assigning a
compression mode via the
bluestore-compression-mode configuration option. The
default behaviour is to disable compression.
The efficiency of compression depends heavily on what type of data is stored in the pool and the charm provides a set of configuration options to fine tune the compression behaviour.
Note: BlueStore compression is supported starting with Ceph Mimic.
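For example, compression could be enabled in 'aggressive' mode, one of the standard BlueStore compression modes, shown here purely as an illustration:

    juju config cinder-ceph bluestore-compression-mode=aggressive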
These instructions will show how to deploy Cinder and connect it to an existing Juju-managed Ceph cluster.
Let file cinder.yaml contain the following:

    cinder:
      block-device: None
Deploy Cinder and add relations to essential OpenStack components:
    juju deploy --config cinder.yaml cinder
    juju add-relation cinder:cinder-volume-service nova-cloud-controller:cinder-volume-service
    juju add-relation cinder:shared-db mysql:shared-db
    juju add-relation cinder:identity-service keystone:identity-service
    juju add-relation cinder:amqp rabbitmq-server:amqp
Now deploy cinder-ceph and add a relation to both the cinder and ceph-mon applications:
    juju deploy cinder-ceph
    juju add-relation cinder-ceph:storage-backend cinder:storage-backend
    juju add-relation cinder-ceph:ceph ceph-mon:client
Additionally, when both the nova-compute and cinder-ceph applications are deployed, a relation is needed between them:
    juju add-relation cinder-ceph:ceph-access nova-compute:ceph-access
The OpenStack Charms project maintains two documentation guides:
- OpenStack Charm Guide: for project information, including development and support notes
- OpenStack Charms Deployment Guide: for charm usage information
Please report bugs on Launchpad.