Juju Charm - Ceph MON

Overview

Ceph is a unified, distributed storage system designed for excellent performance, reliability, and scalability.

The ceph-mon charm deploys Ceph monitor nodes, allowing one to create a monitor cluster. It is used in conjunction with the ceph-osd charm. Together, these charms can scale out the amount of storage available in a Ceph cluster.

Usage

Deployment

A cloud with three MON nodes is a typical design, while three OSD nodes are considered the minimum. For example, to deploy a Ceph cluster consisting of three OSDs and three MONs:

juju deploy --config ceph-osd.yaml -n 3 ceph-osd
juju deploy --to lxd:0 ceph-mon
juju add-unit --to lxd:1 ceph-mon
juju add-unit --to lxd:2 ceph-mon
juju add-relation ceph-osd ceph-mon

Here, a containerised MON is running alongside each OSD.
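
The ceph-osd.yaml file referenced above carries configuration for the OSD units. As a minimal sketch, assuming the ceph-osd charm's osd-devices option and placeholder device paths:

    ceph-osd:
      osd-devices: /dev/sdb /dev/sdc

Adjust the device list to match the block devices actually present on the OSD machines.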

By default, the monitor cluster will not be complete until three ceph-mon units have been deployed. This is to ensure that a quorum is achieved prior to the addition of storage devices.
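
The expected cluster size is governed by the charm's monitor-count option, which defaults to three. As a sketch, a five-MON cluster could be requested at deploy time like so:

juju deploy -n 5 --config monitor-count=5 ceph-mon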

See the Ceph documentation for notes on monitor cluster deployment strategies.

Note: Refer to the Install OpenStack page in the OpenStack Charms Deployment Guide for instructions on installing a monitor cluster for use with OpenStack.

Network spaces

This charm supports the use of Juju network spaces (Juju v.2.0). This feature optionally allows specific types of the application's network traffic to be bound to subnets that the underlying hardware is connected to.

Note: Spaces must be configured in the backing cloud prior to deployment.

The ceph-mon charm exposes the following Ceph traffic types (bindings):

  • 'public' (front-side)
  • 'cluster' (back-side)

For example, providing that spaces 'data-space' and 'cluster-space' exist, the deploy command above could look like this:

juju deploy --config ceph-mon.yaml -n 3 ceph-mon \
   --bind "public=data-space cluster=cluster-space"

Alternatively, configuration can be provided as part of a bundle:

    ceph-mon:
      charm: cs:ceph-mon
      num_units: 1
      bindings:
        public: data-space
        cluster: cluster-space
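
A fragment like the one above is saved into a bundle file and deployed in a single step; for example, with a hypothetical bundle.yaml:

juju deploy ./bundle.yaml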

Refer to the Ceph Network Reference to learn about the implications of segregating Ceph network traffic.

Note: Existing ceph-mon units configured with the ceph-public-network or ceph-cluster-network options will continue to honour them. Furthermore, these options override any space bindings, if set.
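
For illustration, a legacy network configuration of this kind could be applied with juju config, assuming CIDRs appropriate to the environment:

juju config ceph-mon ceph-public-network=10.0.0.0/24 ceph-cluster-network=10.10.0.0/24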

Actions

This section lists Juju actions supported by the charm. Actions allow specific operations to be performed on a per-unit basis.
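
Under Juju 2.x, actions are invoked with juju run-action against a specific unit. A minimal example, listing the cluster's pools on the first ceph-mon unit and waiting for the result:

juju run-action --wait ceph-mon/0 list-pools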

copy-pool

Copy contents of a pool to a new pool.

create-cache-tier

Create a new cache tier.

create-crush-rule

Create a new replicated CRUSH rule to use on a pool.

create-erasure-profile

Create a new erasure code profile to use on a pool.

create-pool

Create a pool.
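
A sketch of a minimal invocation, assuming the action's name parameter and a hypothetical pool called mypool:

juju run-action --wait ceph-mon/0 create-pool name=mypool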

crushmap-update

Apply a new CRUSH map definition.

Warning: This action can break your cluster in unexpected ways if misused.

delete-erasure-profile

Delete an erasure code profile.

delete-pool

Delete a pool.

get-erasure-profile

Display an erasure code profile.

get-health

Display cluster health.

list-erasure-profiles

List erasure code profiles.

list-pools

List pools.

pause-health

Pause the cluster's health operations.

pool-get

Get a value for a pool.

pool-set

Set a value for a pool.

pool-statistics

Display a pool's utilisation statistics.

remove-cache-tier

Remove a cache tier.

remove-pool-snapshot

Remove a pool's snapshot.

rename-pool

Rename a pool.
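
For example (the pool-name and new-name parameter names here are assumptions; consult the charm's actions.yaml for the authoritative schema):

juju run-action --wait ceph-mon/0 rename-pool pool-name=mypool new-name=mypool-archive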

resume-health

Resume the cluster's health operations.

security-checklist

Validate the running configuration against the OpenStack security guides checklist.

set-noout

Set the cluster's 'noout' flag.

set-pool-max-bytes

Set a pool's quota for the maximum number of bytes.

show-disk-free

Show disk utilisation by host and OSD.

snapshot-pool

Create a pool snapshot.

unset-noout

Unset the cluster's 'noout' flag.
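
The set-noout and unset-noout actions take no parameters. A typical maintenance-window sequence might look like this:

juju run-action --wait ceph-mon/0 set-noout
# ... perform maintenance on the OSD machines ...
juju run-action --wait ceph-mon/0 unset-noout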

Bugs

Please report bugs on Launchpad.

For general charm questions refer to the OpenStack Charm Guide.