Juju Charm - Ceph OSD
Latest commit: c94e0b4b0e by Chris MacNaughton (2016-03-18 09:09:06 -04:00)

    add juju availability zone to ceph osd location when present

    The approach here is to use the availability zone as an imaginary rack:
    all hosts that are in the same AZ will be in the same imaginary rack.
    From Ceph's perspective this doesn't matter, as it is just a bucket
    after all. This will give users the ability to further customize their
    Ceph deployment.

    Change-Id: Ie25ac1b001db558d6a40fe3eaca014e8f4174241
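
For illustration, the effect is the same as placing each host under a rack bucket named after its availability zone using the standard Ceph CLI; the zone "az1" and host "node-1" below are hypothetical::

    # Create a rack bucket named after the availability zone, attach it to
    # the default root, then move the host under it. To CRUSH, the "rack"
    # level is simply another bucket in the placement hierarchy.
    ceph osd crush add-bucket az1 rack
    ceph osd crush move az1 root=default
    ceph osd crush move node-1 rack=az1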
| File | Last commit | Date |
|------|-------------|------|
| files | [bradm] Removed nagios check files that were moved to nrpe-external-master charm | 2014-11-18 |
| hooks | add juju availability zone to ceph osd location when present | 2016-03-18 |
| templates | add juju availability zone to ceph osd location when present | 2016-03-18 |
| tests | Merge "support Ceph's --dmcrypt flag for OSD preparation" | 2016-03-04 |
| unit_tests | Lint. | 2016-02-02 |
| .coveragerc | Add unit tests for service status | 2015-10-06 |
| .gitignore | Resync charm-helpers | 2016-03-02 |
| .gitreview | Add gitreview prior to migration to openstack | 2016-02-24 |
| .project | Initial ceph-osd charm | 2012-10-08 |
| .pydevproject | Add unit tests for service status | 2015-10-06 |
| .testr.conf | Add tox support | 2015-10-30 |
| Makefile | Use tox in Makefile targets | 2016-03-15 |
| README.md | Resync with ceph charm, updates for raring | 2012-12-17 |
| TODO | Enable cephx support by default | 2012-10-09 |
| charm-helpers-hooks.yaml | [gnuoy,trivial] Pre-release charmhelper sync | 2015-08-03 |
| charm-helpers-tests.yaml | Move charm-helpers-sync.yaml to charm-helpers-hooks.yaml and | 2014-09-27 |
| config.yaml | support Ceph's --dmcrypt flag for OSD preparation | 2016-03-03 |
| copyright | Initial ceph-osd charm | 2012-10-08 |
| icon.svg | Add ceph icon | 2013-06-25 |
| metadata.yaml | Update maintainer | 2015-11-18 |
| requirements.txt | resync requirements with openstack upstream | 2015-10-30 |
| revision | [hopem] Added use-syslog cfg option to allow logging to syslog | 2014-03-25 |
| setup.cfg | Add unit tests for service status | 2015-10-06 |
| test-requirements.txt | resync requirements with openstack upstream | 2015-10-30 |
| tox.ini | Tidy tox targets | 2016-02-16 |

README.md

Overview

Ceph is a distributed storage and network file system designed to provide excellent performance, reliability, and scalability.

This charm deploys additional Ceph OSD storage service units and should be used in conjunction with the 'ceph' charm to scale out the amount of storage available in a Ceph cluster.

Usage

The charm supports specifying the storage devices to use in the Ceph cluster::

osd-devices:
    A list of devices that the charm will attempt to detect, initialise and
    activate as Ceph storage.

    This can be a superset of the actual storage devices presented to each
    service unit and can be changed after ceph-osd deployment using
    `juju set`, as shown below.

For example::

ceph-osd:
    osd-devices: /dev/vdb /dev/vdc /dev/vdd /dev/vde
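
As noted above, the device list can also be changed on a running deployment; for example::

    juju set ceph-osd "osd-devices=/dev/vdb /dev/vdc /dev/vdd /dev/vde"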

Boot things up by using::

juju deploy -n 3 --config ceph.yaml ceph
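
The ceph.yaml file referenced above holds the configuration for the services being deployed. A minimal sketch for the ceph service, assuming the ceph charm's fsid and monitor-secret options (the values below are placeholders, not working secrets)::

    ceph:
        fsid: 6547bd3e-1397-11e2-82e5-53567c8d32dc
        monitor-secret: AQCXrnZQwI7KGBAAiPofmKEXKxu5bUzoYLVkbQ==
        monitor-count: 3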

You can then deploy this charm by simply doing::

juju deploy -n 10 --config ceph.yaml ceph-osd
juju add-relation ceph-osd ceph

Once the ceph charm has bootstrapped the cluster, it will notify the ceph-osd charm which will scan for the configured storage devices and add them to the pool of available storage.
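
To check that the new OSDs have joined the cluster, you can inspect the CRUSH tree from any unit of the ceph service; the unit name below is illustrative::

    juju ssh ceph/0 "sudo ceph osd tree"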

Contact Information

Author: James Page <james.page@ubuntu.com>
Report bugs at: http://bugs.launchpad.net/charms/+source/ceph-osd/+filebug
Location: http://jujucharms.com/charms/ceph-osd