RETIRED, Juju Charm - Ceph

Overview

Ceph is a distributed storage and network file system designed to provide excellent performance, reliability, and scalability.

This charm deploys a Ceph cluster.

Usage

The ceph charm has two pieces of mandatory configuration for which no defaults are provided. You must set these configuration options before deployment or the charm will not work:

fsid:
    uuid specific to a ceph cluster used to ensure that different
    clusters don't get mixed up - use `uuid` to generate one.

monitor-secret:
    a ceph generated key used by the daemons that manage the cluster
    to control security.  You can use the ceph-authtool command to
    generate one:

        ceph-authtool /dev/stdout --name=mon. --gen-key

These two pieces of configuration must NOT be changed post bootstrap; attempting to do this will cause a reconfiguration error and new service units will not join the existing ceph cluster.
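
For example, suitable values can be generated as follows (`ceph-authtool` is typically provided by the ceph-common package; the output shown is illustrative and your values will differ):

    uuid
    # ecbb8960-0e21-11e2-b495-83a88f44db01
    ceph-authtool /dev/stdout --name=mon. --gen-key
    # [mon.]
    #     key = AQD1P2xQiKglDhAA4NGUF5j38Mhq56qwz+45wg==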

The charm also supports the specification of storage devices to be used in the ceph cluster.

osd-devices:
    A list of devices that the charm will attempt to detect, initialise and
    activate as ceph storage.

    This can be a superset of the actual storage devices presented to each
    service unit and can be changed post ceph bootstrap using `juju set`.

    The full path of each device must be provided, e.g. /dev/vdb.

    For Ceph >= 0.56.6 (Raring or the Grizzly Cloud Archive) use of
    directories instead of devices is also supported.
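
For example, the device list can be broadened on a running deployment with `juju set` (assuming the service is named ceph):

    juju set ceph "osd-devices=/dev/vdb /dev/vdc /dev/vdd /dev/vde"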

At a minimum you must provide a Juju config file during initial deployment with the fsid and monitor-secret options (contents of ceph.yaml below):

ceph:
    fsid: ecbb8960-0e21-11e2-b495-83a88f44db01
    monitor-secret: AQD1P2xQiKglDhAA4NGUF5j38Mhq56qwz+45wg==
    osd-devices: /dev/vdb /dev/vdc /dev/vdd /dev/vde

Specifying the osd-devices to use is also a good idea.

Boot things up by using:

juju deploy -n 3 --config ceph.yaml ceph

By default the ceph cluster will not bootstrap until 3 service units have been deployed and started; this is to ensure that a quorum is achieved prior to adding storage devices.
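
Once the cluster has bootstrapped, additional units can be added in the usual Juju way, for example:

    juju add-unit -n 2 ceph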

Actions

This charm supports pausing and resuming ceph's health functions on a cluster, for example when doing maintenance on a machine. To pause or resume, call:

juju action do --unit ceph/0 pause-health

or

juju action do --unit ceph/0 resume-health

Scale Out Usage

You can use the Ceph OSD and Ceph Radosgw charms to scale out storage capacity and to add an object storage gateway, as sketched below.
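
A minimal sketch of deploying and relating these charms to an existing ceph service (ceph-osd also needs its own osd-devices configuration; see each charm's README for details):

    juju deploy -n 3 ceph-osd
    juju add-relation ceph-osd ceph
    juju deploy ceph-radosgw
    juju add-relation ceph-radosgw ceph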

Network Space support

This charm supports the use of Juju Network Spaces, allowing the charm to be bound to network space configurations managed directly by Juju. This is only supported with Juju 2.0 and above.

Network traffic can be bound to specific network spaces using the public (front-side) and cluster (back-side) bindings:

juju deploy ceph --bind "public=data-space cluster=cluster-space"

Alternatively, these bindings can be provided as part of a Juju native bundle configuration:

ceph:
  charm: cs:xenial/ceph
  num_units: 1
  bindings:
    public: data-space
    cluster: cluster-space

Please refer to the Ceph Network Reference for details on how using these options affects network traffic within a Ceph deployment.

NOTE: Spaces must be configured in the underlying provider prior to attempting to use them.

NOTE: Existing deployments using ceph-*-network configuration options will continue to function; these options are preferred over any network space binding provided if set.
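
For reference, those legacy options are set directly on the service, e.g. the ceph-public-network and ceph-cluster-network options (the CIDRs below are placeholders):

    juju set ceph ceph-public-network=10.20.0.0/24 ceph-cluster-network=10.30.0.0/24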

Contact Information

Authors

Report bugs on Launchpad

Ceph

Technical Footnotes

This charm uses the new-style Ceph deployment as reverse-engineered from the Chef cookbook at https://github.com/ceph/ceph-cookbooks, although we selected a different strategy to form the monitor cluster. Since we don't know the names or addresses of the machines in advance, we use the relation-joined hook to wait for all three nodes to come up, and then write their addresses to ceph.conf in the "mon host" parameter. After we initialize the monitor cluster a quorum forms quickly, and OSD bringup proceeds.
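
Each node therefore ends up with a ceph.conf fragment along these lines (a rough illustration; the addresses are placeholders):

    [global]
    mon host = 192.168.0.10 192.168.0.11 192.168.0.12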

The OSDs use so-called "OSD hotplugging": ceph-disk prepare is used to create the filesystems with a special GPT partition type, and udev is set up to mount such filesystems and start the OSD daemons as their storage becomes visible to the system (or after udevadm trigger).
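
Done by hand, the equivalent of what the charm does for a single device looks roughly like this (a sketch, not the charm's exact invocation):

    ceph-disk prepare /dev/vdb
    udevadm trigger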

The Chef cookbook mentioned above performs some extra steps to generate an OSD bootstrapping key and propagate it to the other nodes in the cluster. Since all OSDs run on nodes that also run mon, we don't need this and did not implement it.

See the documentation for more information on Ceph monitor cluster deployment strategies and pitfalls.