Ceph is a unified, distributed storage system designed for excellent performance, reliability, and scalability.
The ceph-osd charm deploys the Ceph object storage daemon (OSD) and manages its volumes. It is used in conjunction with the ceph-mon charm. Together, these charms can scale out the amount of storage available in a Ceph cluster.
The list of all possible storage devices for the cluster is defined by the `osd-devices` option (default value: `/dev/vdb`). Configuration is typically provided via a YAML file, like `ceph-osd.yaml`. See the following examples:
```yaml
ceph-osd:
  options:
    osd-devices: /dev/vdb /dev/vdc /dev/vdd
```
Each regular block device must be an absolute path to a device node.
```yaml
ceph-osd:
  storage:
    osd-devices: cinder,20G
```
See the Juju documentation for guidance on implementing Juju storage.
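As an illustrative sketch of the Juju storage approach (the `cinder` pool name and the 20G size are placeholder values, and assume the backing cloud provides such a pool), the same request can also be made at deploy time with the `--storage` flag:

```
# Deploy ceph-osd with one 20 GiB Cinder-backed volume per unit
# (placeholder pool and size; adjust for your cloud).
juju deploy ceph-osd --storage osd-devices=cinder,20G
```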
```yaml
ceph-osd:
  storage:
    osd-devices: /var/tmp/osd-1
```
Note: OSD directories can no longer be created starting with Ceph Nautilus. Existing OSD directories will continue to function after an upgrade to Nautilus.
The list defined by option `osd-devices` may affect newly added ceph-osd units as well as existing units (the option may be modified after units have been added). The charm will attempt to activate as Ceph storage any listed device that is visible to the unit's underlying machine. To prevent the activation of volumes on existing units the `blacklist-add-disk` action may be used.
The configuration option is modified in the usual way. For instance, to have it consist solely of devices `/dev/sdb` and `/dev/sdc`:

```
juju config ceph-osd osd-devices='/dev/sdb /dev/sdc'
```
The charm will go into a blocked state (visible in `juju status` output) if it detects pre-existing data on a device. In this case the operator can either instruct the charm to ignore the disk (action `blacklist-add-disk`) or to have it purge all data on the disk (action `zap-disk`).
A cloud with three MON nodes is a typical design whereas three OSD nodes are considered the minimum. For example, to deploy a Ceph cluster consisting of three OSDs and three MONs:
```
juju deploy --config ceph-osd.yaml -n 3 ceph-osd
juju deploy --to lxd:0 ceph-mon
juju add-unit --to lxd:1 ceph-mon
juju add-unit --to lxd:2 ceph-mon
juju add-relation ceph-osd ceph-mon
```
Here, a containerised MON is running alongside each OSD.
Note: Refer to the Install OpenStack page in the OpenStack Charms Deployment Guide for instructions on installing the ceph-osd application for use with OpenStack.
For each ceph-osd unit, the ceph-osd charm will scan for all the devices configured via the `osd-devices` option and attempt to assign to the unit all the ones it finds. The cluster's initial pool of available storage is the "sum" of all these assigned devices.
This charm supports the use of Juju network spaces (Juju v.2.0). This feature optionally allows specific types of the application's network traffic to be bound to subnets that the underlying hardware is connected to.
Note: Spaces must be configured in the backing cloud prior to deployment.
The ceph-osd charm exposes the following Ceph traffic types (bindings):

- 'public' (front-side)
- 'cluster' (back-side)
For example, providing that spaces 'data-space' and 'cluster-space' exist, the deploy command above could look like this:

```
juju deploy --config ceph-osd.yaml -n 3 ceph-osd \
   --bind "public=data-space cluster=cluster-space"
```
Alternatively, configuration can be provided as part of a bundle:
```yaml
ceph-osd:
  charm: cs:ceph-osd
  num_units: 1
  bindings:
    public: data-space
    cluster: cluster-space
```
Refer to the Ceph Network Reference to learn about the implications of segregating Ceph network traffic.
Note: Existing ceph-osd units configured with the `ceph-public-network` and/or `ceph-cluster-network` options will continue to honour them. Furthermore, these options override any space bindings, if set.
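For reference, a minimal configuration fragment using these network options instead of space bindings might look like this (the CIDR values are placeholders, not recommendations):

```yaml
ceph-osd:
  options:
    ceph-public-network: 10.10.0.0/24   # client-facing traffic (placeholder CIDR)
    ceph-cluster-network: 10.20.0.0/24  # OSD replication traffic (placeholder CIDR)
```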
Although AppArmor is not enabled for Ceph by default, an AppArmor profile can be generated by the charm by assigning a value of 'complain', 'enforce', or 'disable' (the default) to option `aa-profile-mode`.
Caution: Enabling an AppArmor profile is disruptive to a running Ceph cluster as all ceph-osd processes must be restarted.
The new profile has a narrow supported use case, and it should always be verified in pre-production against the specific configurations and topologies intended for production.
The profiles generated by the charm should not be used in the following scenarios:
The ceph-osd charm supports encryption for OSD volumes that are backed by block devices. To use Ceph's native key management framework, available since Ceph Jewel, set option `osd-encrypt` for the ceph-osd charm:
```yaml
ceph-osd:
  options:
    osd-encrypt: True
```
Here, dm-crypt keys are stored in the MON sub-cluster.
Alternatively, since Ceph Luminous, encryption keys can be stored in Vault, which is deployed and initialised via the vault charm. Set option `osd-encrypt-keymanager` for the ceph-osd charm:
```yaml
ceph-osd:
  options:
    osd-encrypt: True
    osd-encrypt-keymanager: vault
```
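When Vault acts as the key manager, the vault application must also be deployed and related to ceph-osd. A sketch, assuming the default application and endpoint names (`secrets-storage` on ceph-osd, `secrets` on vault):

```
juju deploy vault
juju add-relation ceph-osd:secrets-storage vault:secrets
```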
Important: Post deployment configuration will only affect block devices associated with new ceph-osd units.
This section covers Juju actions supported by the charm.
Actions allow specific operations to be performed on a per-unit basis. To display action descriptions run `juju actions ceph-osd`. If the charm is not deployed then see file `actions.yaml`.
Use the `osd-out` action to set all OSD volumes on a unit to 'out'.
Warning: This action has the potential of impacting your cluster significantly. The Ceph documentation on this topic is considered essential reading.
The `osd-out` action sets all OSDs on the unit as 'out'. Unless the cluster itself is set to 'noout' this action will cause Ceph to rebalance data by migrating PGs out of the unit's OSDs and onto OSDs available on other units.
The impact is twofold:
Note: It has been reported that setting OSDs as ‘out’ may cause some PGs to get stuck in the ‘active+remapped’ state. This is an upstream issue.
The ceph-mon charm has an action called `set-noout` that sets 'noout' for the cluster.
It may be perfectly fine to have data rebalanced. The decisive factor is whether the OSDs are being paused temporarily (e.g. the underlying machine is scheduled for maintenance) or whether they are being removed from the cluster completely (e.g. the storage hardware is reaching EOL).
```
juju run-action --wait ceph-osd/4 osd-out
```
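For a temporary maintenance window, the `osd-out` action is typically bracketed by the ceph-mon `set-noout` and `unset-noout` actions so that no rebalancing occurs. A sketch, where unit names (ceph-mon/0, ceph-osd/4) are illustrative:

```
juju run-action --wait ceph-mon/0 set-noout    # prevent rebalancing cluster-wide
juju run-action --wait ceph-osd/4 osd-out      # take the unit's OSDs out
# ... perform maintenance on the unit's underlying machine ...
juju run-action --wait ceph-osd/4 osd-in       # bring the OSDs back in
juju run-action --wait ceph-mon/0 unset-noout  # allow rebalancing again
```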
Use the `osd-in` action to set all OSD volumes on a unit to 'in'.

The `osd-in` action is reciprocal to the `osd-out` action. The OSDs are set to 'in'. It is typically used when the `osd-out` action was used in conjunction with the cluster 'noout' flag.
```
juju run-action --wait ceph-osd/4 osd-in
```
Use the `list-disks` action to list disks known to a unit.

The action lists the unit's block devices by categorising them in three ways:

- `disks`: visible (known by udev), unused (not mounted), and not designated as an OSD journal (via the `osd-journal` configuration option)
- `blacklist`: like `disks` but blacklisted (see action `blacklist-add-disk`)
- `non-pristine`: like `disks` but not eligible for use due to the presence of existing data
```
juju run-action --wait ceph-osd/4 list-disks
```
Use the `add-disk` action to add a disk to a unit.

A ceph-osd unit is automatically assigned OSD volumes based on the current value of the `osd-devices` application option. The `add-disk` action allows the operator to manually add OSD volumes (for disks that are not listed by `osd-devices`) to an existing unit.
```
juju run-action --wait ceph-osd/4 add-disk osd-devices=/dev/vde
```
Use the `blacklist-add-disk` action to add a disk to a unit's blacklist.

The action allows the operator to add disks (that are visible to the unit's underlying machine) to the unit's blacklist. A blacklisted device will not be initialised as an OSD volume when the value of the `osd-devices` option changes. This action does not prevent a device from being activated via the `add-disk` action.

Use the `list-disks` action to list the unit's blacklist entries.

Important: Neither this action nor the blacklist have any effect on current OSD volumes.
```
juju run-action --wait ceph-osd/0 \
   blacklist-add-disk osd-devices='/dev/vda /dev/vdf'
```
Use the `blacklist-remove-disk` action to remove a disk from a unit's blacklist.

Each device should have an existing entry in the unit's blacklist. Use the `list-disks` action to list the unit's blacklist entries.
```
juju run-action --wait ceph-osd/1 \
   blacklist-remove-disk osd-devices=/dev/vdb
```
Use the `zap-disk` action to purge a disk of all data.

In order to prevent unintentional data loss, the charm will not use a disk that has existing data already on it. To forcibly make a disk available, the `zap-disk` action can be used. Due to the destructive nature of this action the `i-really-mean-it` option must be passed. This action is normally followed by the `add-disk` action.
```
juju run-action --wait ceph-osd/3 zap-disk i-really-mean-it devices=/dev/vdc
```
Please report bugs on Launchpad.
For general charm questions refer to the OpenStack Charm Guide.