Validate output of list_logical_volumes
The charm was checking for the zeroth value of the return value of list_logical_volumes. However, if no logical volumes are found it returns an empty list. This change validates that the list has an entry.

Depends-On: I75a6b1dda15dd7c2cece8cfe97b28317b3d5162b
Change-Id: I2d371dae94dca328cf4782a79e85c1c6fd77f547
Closes-Bug: #1819382
Ceph is a distributed storage and network file system designed to provide excellent performance, reliability, and scalability.
This charm deploys additional Ceph OSD storage service units and should be used in conjunction with the ‘ceph-mon’ charm to scale out the amount of storage available in a Ceph cluster.
The charm also supports specification of the storage devices to use in the ceph cluster::
osd-devices: A list of devices that the charm will attempt to detect, initialise and activate as ceph storage. If the charm detects pre-existing data on a device it will go into a blocked state and the operator must resolve the situation using the `list-disks`, `zap-disk` and/or `blacklist-*` actions. This can be a superset of the actual storage devices presented to each service unit and can be changed post ceph-osd deployment using `juju set`.
    ceph-osd:
      options:
        osd-devices: /dev/vdb /dev/vdc /dev/vdd /dev/vde
Example utilizing Juju storage::
    ceph-osd:
      storage:
        osd-devices: cinder,20G
Please refer to Juju Storage Documentation for details on support for various storage providers and cloud substrates.
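The same storage request can also be made at deploy time from the command line; as a sketch, assuming the standard `--storage` flag and reusing the `cinder,20G` constraint from the bundle example above::

    juju deploy -n 3 ceph-osd --storage osd-devices=cinder,20G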
How to deploy::
    juju deploy -n 3 ceph-osd
    juju deploy ceph-mon --to lxd:0
    juju add-unit ceph-mon --to lxd:1
    juju add-unit ceph-mon --to lxd:2
    juju add-relation ceph-osd ceph-mon
Once the ‘ceph-mon’ charm has bootstrapped the cluster, it will notify the ceph-osd charm which will scan for the configured storage devices and add them to the pool of available storage.
This charm supports the use of Juju Network Spaces, allowing the charm to be bound to network space configurations managed directly by Juju. This is only supported with Juju 2.0 and above.
Network traffic can be bound to specific network spaces using the public (front-side) and cluster (back-side) bindings:
    juju deploy ceph-osd --bind "public=data-space cluster=cluster-space"
Alternatively, these can be provided as part of a Juju native bundle configuration:
    ceph-osd:
      charm: cs:xenial/ceph-osd
      num_units: 1
      bindings:
        public: data-space
        cluster: cluster-space
Please refer to the Ceph Network Reference for details on how using these options affects network traffic within a Ceph deployment.
NOTE: Spaces must be configured in the underlying provider prior to attempting to use them.
NOTE: Existing deployments using ceph-*-network configuration options will continue to function; these options are preferred over any network space binding provided if set.
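For illustration, a minimal sketch of that legacy style (the option names `ceph-public-network` and `ceph-cluster-network` and the example CIDRs are assumptions here; check the charm's config.yaml for the authoritative names)::

    ceph-osd:
      options:
        ceph-public-network: 192.168.10.0/24    # front-side (client) traffic
        ceph-cluster-network: 192.168.20.0/24   # back-side (replication) traffic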
AppArmor is not enforced for Ceph by default. An AppArmor profile can be generated by the charm. However, great care must be taken.
Changing the value of the aa-profile-mode option is disruptive to a running Ceph cluster, as all ceph-osd processes must be restarted as part of changing the AppArmor profile enforcement mode.
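As a sketch (assuming `complain` is one of the accepted modes for this option), the mode would be changed with a standard configuration update::

    juju config ceph-osd aa-profile-mode=complain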
The generated AppArmor profile currently has a narrow supported use case, and it should always be verified in pre-production against the specific configurations and topologies intended for production.
The AppArmor profile(s) which are generated by the charm should NOT yet be used in the following scenarios:
The ceph-osd charm supports encryption of the underlying block devices backing OSDs.
To use the ‘native’ key management approach (where dm-crypt keys are stored in the ceph-mon cluster), simply set the ‘osd-encrypt’ configuration option::
    ceph-osd:
      options:
        osd-encrypt: True
NOTE: This is supported for Ceph Jewel or later.
Alternatively, encryption keys can be stored in Vault; this requires deployment of the vault charm (and associated initialization of vault - see the Vault charm for details) and configuration of the ‘osd-encrypt’ and ‘osd-encrypt-keymanager’ options::
    ceph-osd:
      options:
        osd-encrypt: True
        osd-encrypt-keymanager: vault
NOTE: This option is only supported with Ceph Luminous or later.
NOTE: Changing these options post deployment will only take effect for any new block devices added to the ceph-osd application; existing OSD devices will not be encrypted.
The charm offers actions which may be used to perform operational tasks on individual units.
USE WITH CAUTION - Sets the local OSD units in the charm to ‘out’ but does not stop the OSDs. Unless the cluster is set to ‘noout’ (see below), this removes them from the ceph cluster and forces Ceph to migrate the PGs to other OSDs in the cluster.
From upstream documentation “Do not let your cluster reach its full ratio when removing an OSD. Removing OSDs could cause the cluster to reach or exceed its full ratio.”
Also note that for small clusters you may encounter the corner case where some PGs remain stuck in the active+remapped state. Refer to the above link on how to resolve this.
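A minimal invocation, assuming Juju 2.x action syntax and that this action is named `osd-out`::

    juju run-action ceph-osd/0 osd-out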
pause-health (on a ceph-mon unit) can be used before pausing a ceph-osd unit to stop the cluster rebalancing the data off this ceph-osd unit. pause-health sets ‘noout’ on the cluster so that it will not try to rebalance the data across the remaining units.
It is up to the user of the charm to determine whether pause-health should be used as it depends on whether the osd is being paused for maintenance or to remove it from the cluster completely.
The pause action does NOT stop the ceph-osd processes.
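A sketch of that sequence (assuming Juju 2.x action syntax and that the ceph-mon charm exposes a `pause-health` action)::

    juju run-action ceph-mon/0 pause-health
    juju run-action ceph-osd/1 pause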
Sets the local OSD units in the charm to ‘in’.
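For example, assuming the action is named `osd-in`::

    juju run-action ceph-osd/1 osd-in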
The ‘disks’ key is populated with block devices that are known by udev, are not mounted, and are not mentioned in the ‘osd-journal’ configuration option.
The ‘blacklist’ key is populated with osd-devices in the blacklist stored in the local kv store of this specific unit.
The ‘non-pristine’ key is populated with block devices that are known by udev, are not mounted, are not mentioned in the ‘osd-journal’ configuration option, and are currently not eligible for use because of the presence of foreign data.
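To run the action and inspect the keys described above (assuming Juju 2.x action syntax; the action id is a placeholder)::

    juju run-action ceph-osd/0 list-disks
    juju show-action-output <action-id>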
Add disk(s) to Ceph
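A hedged sketch, assuming the action is named `add-disk` and takes an `osd-devices` parameter like the blacklist actions below::

    juju run-action ceph-osd/0 add-disk osd-devices=/dev/vdb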
Add disk(s) to the blacklist. Blacklisted disks will not be initialized for use with Ceph even if listed in the application-level osd-devices configuration option.
The current blacklist can be viewed with the list-disks action.
NOTE: This action and blacklist will not have any effect on already-initialized disks.
Each element should be an absolute path to a device node or filesystem directory (the latter is supported for ceph >= 0.56.6).
Example: ‘/dev/vdb /var/tmp/test-osd’
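As a sketch, assuming the action is named `blacklist-add-disk`, the parameter is named `osd-devices`, and Juju 2.x action syntax::

    juju run-action ceph-osd/0 blacklist-add-disk osd-devices='/dev/vdb /var/tmp/test-osd'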
Remove disk(s) from blacklist.
Each element should be an existing entry in the unit's blacklist. Use the list-disks action to list current blacklist entries.
Example: ‘/dev/vdb /var/tmp/test-osd’
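The corresponding removal, under the same assumptions about the action and parameter names::

    juju run-action ceph-osd/0 blacklist-remove-disk osd-devices='/dev/vdb /var/tmp/test-osd'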
Purge disk of all data and signatures for use by Ceph
This action can be necessary in cases where a Ceph cluster is being redeployed as the charm defaults to skipping disks that look like Ceph devices in order to preserve data. In order to forcibly redeploy, the admin is required to perform this action for each disk to be re-consumed.
In addition to triggering this action, it is required to pass an additional parameter option of i-really-mean-it to ensure that the administrator is aware that this will cause data loss on the specified device(s).
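A hedged sketch (assuming the action is named `zap-disk`, that the disk list parameter is named `devices`, and Juju 2.x action syntax)::

    juju run-action ceph-osd/0 zap-disk devices=/dev/vdb i-really-mean-it=true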