Ceph is a unified, distributed storage system designed for excellent performance, reliability, and scalability.
The ceph-radosgw charm deploys the RADOS Gateway, an S3 and Swift compatible HTTP gateway. The deployment is done within the context of an existing Ceph cluster.
This section covers common and/or important configuration options. See file
config.yaml for the full list of options, along with their descriptions and
default values. See the Juju documentation for details
on configuring applications.
The `pool-type` option dictates the storage pool type. See section 'Ceph pool type' for more information.
The `source` option states the software sources. A common value is an OpenStack UCA release (e.g. 'cloud:xenial-queens' or 'cloud:bionic-ussuri'). See Ceph and the UCA. The underlying host's existing apt sources will be used if this option is not specified (this behaviour can be explicitly chosen by using the value of 'distro').
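For example, the source could be set at deploy time, or changed later with `juju config` (a minimal sketch; the release name is illustrative):

```
juju deploy ceph-radosgw --config source=cloud:bionic-ussuri
```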
Ceph storage pools can be configured to ensure data resiliency either through
replication or by erasure coding. This charm supports both types via the
pool-type configuration option, which can take on the values of 'replicated'
and 'erasure-coded'. The default value is 'replicated'.
For this charm, the pool type will be associated with Object storage.
Note: Erasure-coded pools are supported starting with Ceph Luminous.
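As a minimal sketch, the pool type could be selected when the application is deployed (the erasure-coded value is shown purely as an illustration):

```
juju deploy --to lxd:1 ceph-radosgw --config pool-type=erasure-coded
```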
Replicated pools use a simple replication strategy in which each written object is copied, in full, to multiple OSDs within the cluster.
The `ceph-osd-replication-count` option sets the replica count for any object stored within the 'rgw' pools. Increasing this value increases data resilience at the cost of consuming more real storage in the Ceph cluster. The default value is '3'.

Important: The `ceph-osd-replication-count` option must be set prior to adding the relation to the ceph-mon application. Otherwise, the pool's configuration will need to be set by interfacing with the cluster directly.
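A sketch of the required ordering, assuming a fresh deployment (the replica count shown is illustrative):

```
juju config ceph-radosgw ceph-osd-replication-count=3
juju add-relation ceph-radosgw:mon ceph-mon:radosgw
```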
Erasure coded pools use a technique that allows for the same resiliency as replicated pools, yet reduces the amount of space required. Written data is split into data chunks and error correction chunks, which are both distributed throughout the cluster.
Note: Erasure coded pools require more memory and CPU cycles than replicated pools do.
When using erasure coded pools for Object storage multiple pools will be created: one erasure coded pool ('rgw.buckets.data' for storing actual RGW data) and several replicated pools (for storing RGW omap metadata). The `ceph-osd-replication-count` configuration option only applies to the metadata (replicated) pools.

Erasure coded pools can be configured via options whose names begin with the `ec-` prefix.

Important: It is strongly recommended to tailor the `ec-profile-k` and `ec-profile-m` options to the needs of the given environment. These latter options have default values of '1' and '2' respectively, which result in the same space requirements as those of a replicated pool.
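For instance, a profile using four data chunks and two coding chunks might be requested at deploy time (a sketch; the values are illustrative and should be sized to the cluster):

```
juju deploy --to lxd:1 ceph-radosgw \
   --config pool-type=erasure-coded \
   --config ec-profile-k=4 \
   --config ec-profile-m=2
```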
This charm supports BlueStore inline compression for its associated Ceph storage pool(s). The feature is enabled by assigning a compression mode via the `bluestore-compression-mode` configuration option. The default behaviour is to disable compression.
The efficiency of compression depends heavily on what type of data is stored in the pool and the charm provides a set of configuration options to fine tune the compression behaviour.
Note: BlueStore compression is supported starting with Ceph Mimic.
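As a sketch, compression could be enabled on an existing deployment by assigning one of the supported modes (the 'aggressive' value is illustrative):

```
juju config ceph-radosgw bluestore-compression-mode=aggressive
```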
Ceph RADOS Gateway is often containerised. Here a single unit is deployed to a new container on machine '1' within an existing Ceph cluster:
```
juju deploy --to lxd:1 ceph-radosgw
juju add-relation ceph-radosgw:mon ceph-mon:radosgw
```
If the RADOS Gateway is being integrated into OpenStack then a relation to the keystone application is needed:
```
juju add-relation ceph-radosgw:identity-service keystone:identity-service
```
Expose the service:
```
juju expose ceph-radosgw
```
Note: The `expose` command is only required if the backing cloud blocks traffic by default. In general, MAAS is the only cloud type that does not employ firewalling.
The Gateway can be accessed over port 80 (as per `juju status ceph-radosgw` output).
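As a quick check (a sketch; substitute the unit's address reported by `juju status`), the gateway should answer a plain HTTP request on that port:

```
curl http://<radosgw-unit-address>:80
```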
By default, Ceph RADOS Gateway puts all tenant buckets into the same global namespace, disallowing multiple tenants to have buckets with the same name. Tenant namespacing can be enabled in this charm by deploying with configuration like:
```
ceph-radosgw:
  charm: cs:ceph-radosgw
  num_units: 1
  options:
    namespace-tenants: True
```
Enabling tenant namespacing will place all tenant buckets into their own namespace under their tenant id, as well as adding the tenant's ID parameter to the Keystone endpoint registration to allow seamless integration with OpenStack. Tenant namespacing cannot be toggled on in an existing installation as it will remove tenant access to existing buckets. Toggling this option on an already deployed RADOS Gateway will have no effect.
For security reasons the charm is not designed to administer the Ceph cluster. A user (e.g. 'ubuntu') for the Ceph Object Gateway service will need to be created manually:
```
juju ssh ceph-mon/0 'sudo radosgw-admin user create \
   --uid="ubuntu" --display-name="Charmed Ceph"'
```
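The credentials generated for that user (e.g. the S3 access and secret keys) can then be inspected; a possible sketch:

```
juju ssh ceph-mon/0 'sudo radosgw-admin user info --uid="ubuntu"'
```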
Ceph RGW supports Keystone authentication of Swift requests. This is enabled by adding a relation to an existing keystone application:
```
juju add-relation ceph-radosgw:identity-service keystone:identity-service
```
When more than one unit is deployed with the hacluster application the charm will bring up an HA active/active cluster.
There are two mutually exclusive high availability options: using virtual IP(s) or DNS. In both cases the hacluster subordinate charm is used to provide the Corosync and Pacemaker backend HA functionality.
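A minimal sketch of the virtual IP approach, assuming the charm's `vip` option and `ha` relation are used (the VIP address and the hacluster application name are illustrative):

```
juju config ceph-radosgw vip=10.0.0.100
juju deploy hacluster radosgw-hacluster
juju add-relation radosgw-hacluster:ha ceph-radosgw:ha
```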
This charm supports the use of Juju network spaces (Juju v.2.0). This feature optionally allows specific types of the application's network traffic to be bound to subnets that the underlying hardware is connected to.
Note: Spaces must be configured in the backing cloud prior to deployment.
API endpoints can be bound to distinct network spaces supporting the network separation of public, internal and admin endpoints.
For example, providing that spaces 'public-space', 'internal-space', and 'admin-space' exist, the deploy command above could look like this:
```
juju deploy ceph-radosgw \
   --bind "public=public-space internal=internal-space admin=admin-space"
```
Alternatively, configuration can be provided as part of a bundle:
```
ceph-radosgw:
  charm: cs:ceph-radosgw
  num_units: 1
  bindings:
    public: public-space
    internal: internal-space
    admin: admin-space
```
Note: Existing ceph-radosgw units configured with the `os-admin-hostname` and related hostname options will continue to honour them. Furthermore, these options override any space bindings, if set.
This section lists Juju actions supported by the charm.
Actions allow specific operations to be performed on a per-unit basis. To display action descriptions run `juju actions ceph-radosgw`. If the charm is not deployed then see file actions.yaml.
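As a sketch of invoking an action on a unit (the pause action and the unit name are used illustratively):

```
juju run-action --wait ceph-radosgw/0 pause
```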
The OpenStack Charms project maintains two documentation guides:

- the OpenStack Charm Guide
- the OpenStack Charms Deployment Guide
Please report bugs on Launchpad.