Overview
Ceph is a unified, distributed storage system designed for excellent performance, reliability, and scalability.
The ceph-mon charm deploys Ceph monitor nodes, allowing one to create a monitor cluster. It is used in conjunction with the ceph-osd charm. Together, these charms can scale out the amount of storage available in a Ceph cluster.
Usage
Deployment
A cloud with three MON nodes is a typical design, whereas three OSD nodes are considered the minimum. For example, to deploy a Ceph cluster consisting of three OSDs and three MONs:
juju deploy --config ceph-osd.yaml -n 3 ceph-osd
juju deploy --to lxd:0 ceph-mon
juju add-unit --to lxd:1 ceph-mon
juju add-unit --to lxd:2 ceph-mon
juju add-relation ceph-osd ceph-mon
Here, a containerised MON is running alongside each OSD.
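The ceph-osd.yaml file referenced above supplies configuration for the ceph-osd application. A minimal sketch, assuming each machine has spare block devices at /dev/sdb and /dev/sdc (the device paths are illustrative):
ceph-osd:
  osd-devices: /dev/sdb /dev/sdc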
By default, the monitor cluster will not be complete until three ceph-mon units have been deployed. This is to ensure that a quorum is achieved prior to the addition of storage devices.
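The expected number of monitors is governed by the charm's monitor-count configuration option (default: 3). As a sketch, a five-MON cluster could be requested at deploy time:
juju deploy -n 5 --config monitor-count=5 ceph-mon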
See the Ceph documentation for notes on monitor cluster deployment strategies.
Note: Refer to the Install OpenStack page in the OpenStack Charms Deployment Guide for instructions on installing a monitor cluster for use with OpenStack.
Network spaces
This charm supports the use of Juju network spaces (Juju v.2.0). This feature optionally allows specific types of the application's network traffic to be bound to subnets that the underlying hardware is connected to.
Note: Spaces must be configured in the backing cloud prior to deployment.
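Where the provider supports it, spaces can be inspected and created with the Juju CLI; a sketch, with an illustrative space name and subnet:
juju spaces
juju add-space data-space 172.16.7.0/24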
The ceph-mon charm exposes the following Ceph traffic types (bindings):
- 'public' (front-side)
- 'cluster' (back-side)
For example, provided that spaces 'data-space' and 'cluster-space' exist, the deploy command above could look like this:
juju deploy --config ceph-mon.yaml -n 3 ceph-mon \
--bind "public=data-space cluster=cluster-space"
Alternatively, configuration can be provided as part of a bundle:
ceph-mon:
  charm: cs:ceph-mon
  num_units: 1
  bindings:
    public: data-space
    cluster: cluster-space
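A bundle containing the above can then be deployed in the usual way (the file name is illustrative):
juju deploy ./bundle.yaml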
Refer to the Ceph Network Reference to learn about the implications of segregating Ceph network traffic.
Note: Existing ceph-mon units configured with the ceph-public-network or ceph-cluster-network options will continue to honour them. Furthermore, these options override any space bindings, if set.
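As a sketch, such a pre-spaces deployment would have set these options directly (the CIDRs are illustrative):
juju config ceph-mon ceph-public-network=10.10.0.0/24 ceph-cluster-network=10.20.0.0/24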
Actions
This section lists Juju actions supported by the charm. Actions allow specific operations to be performed on a per-unit basis.
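Actions are invoked with the juju run-action command (Juju 2.x syntax). For example, to query cluster health on the first unit:
juju run-action --wait ceph-mon/0 get-health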
copy-pool
Copy contents of a pool to a new pool.
create-cache-tier
Create a new cache tier.
create-crush-rule
Create a new replicated CRUSH rule to use on a pool.
create-erasure-profile
Create a new erasure code profile to use on a pool.
create-pool
Create a pool.
crushmap-update
Apply a new CRUSH map definition.
Warning: This action can break your cluster in unexpected ways if misused.
delete-erasure-profile
Delete an erasure code profile.
delete-pool
Delete a pool.
get-erasure-profile
Display an erasure code profile.
get-health
Display cluster health.
list-erasure-profiles
List erasure code profiles.
list-pools
List pools.
pause-health
Pause the cluster's health operations.
pool-get
Get a value for a pool.
pool-set
Set a value for a pool.
pool-statistics
Display a pool's utilisation statistics.
remove-cache-tier
Remove a cache tier.
remove-pool-snapshot
Remove a pool's snapshot.
rename-pool
Rename a pool.
resume-health
Resume the cluster's health operations.
security-checklist
Validate the running configuration against the OpenStack security guides checklist.
set-noout
Set the cluster's 'noout' flag.
set-pool-max-bytes
Set a pool's quota for the maximum number of bytes.
show-disk-free
Show disk utilisation by host and OSD.
snapshot-pool
Create a pool snapshot.
unset-noout
Unset the cluster's 'noout' flag.
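As a worked example, a common maintenance pattern is to set the 'noout' flag before taking OSD nodes down and to unset it once maintenance is complete (the unit name is illustrative):
juju run-action --wait ceph-mon/0 set-noout
# ... perform maintenance on the OSD nodes ...
juju run-action --wait ceph-mon/0 unset-noout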
Bugs
Please report bugs on Launchpad.
For general charm questions refer to the OpenStack Charm Guide.