Juju Charm - Ceph OSD

Overview

Ceph is a distributed storage and network file system designed to provide excellent performance, reliability, and scalability.

This charm deploys additional Ceph OSD storage service units and should be used in conjunction with the 'ceph' charm to scale out the amount of storage available in a Ceph cluster.

Usage

The charm supports specification of the storage devices to be used in the Ceph cluster::

osd-devices:
    A list of devices that the charm will attempt to detect, initialise and
    activate as ceph storage.
    
    This can be a superset of the actual storage devices presented to
    each service unit and can be changed post ceph-osd deployment using
    `juju set` (an example follows the configuration snippet below).

For example::

ceph-osd:
    osd-devices: /dev/vdb /dev/vdc /dev/vdd /dev/vde
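
The device list can also be changed on a running deployment; a minimal sketch with placeholder device paths (`juju set` is the Juju 1.x syntax referenced above; under Juju 2.0 the equivalent command is `juju config`)::

juju set ceph-osd osd-devices="/dev/vdb /dev/vdc /dev/vdd /dev/vde /dev/vdf"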

Boot things up by using::

juju deploy -n 3 --config ceph.yaml ceph
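
The `ceph.yaml` file referenced here also needs a section for the ceph charm itself; a minimal sketch, assuming the ceph charm's `fsid`, `monitor-secret` and `monitor-count` options, with placeholder values that must be generated for a real deployment::

ceph:
    fsid: a1954b2e-xxxx-xxxx-xxxx-xxxxxxxxxxxx    # placeholder; generate with uuidgen
    monitor-secret: <base64 cephx key>            # placeholder; generate with ceph-authtool
    monitor-count: 3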

You can then deploy this charm by simply doing::

juju deploy -n 10 --config ceph.yaml ceph-osd
juju add-relation ceph-osd ceph

Once the ceph charm has bootstrapped the cluster, it will notify the ceph-osd charm, which will then scan for the configured storage devices and add them to the pool of available storage.
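
Once the relation settles, cluster health can be verified from one of the ceph (monitor) units; a brief sketch, assuming the monitor units hold the admin keyring and that a unit named ceph/0 exists in your deployment::

juju run --unit ceph/0 "ceph -s"          # overall cluster health
juju run --unit ceph/0 "ceph osd tree"    # confirm the new OSDs are up and in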

Network Space support

This charm supports the use of Juju Network Spaces, allowing the charm to be bound to network space configurations managed directly by Juju. This is only supported with Juju 2.0 and above.

Network traffic can be bound to specific network spaces using the public (front-side) and cluster (back-side) bindings:

juju deploy ceph-osd --bind "public=data-space cluster=cluster-space"

Alternatively, these bindings can be provided as part of a Juju native bundle configuration:

ceph-osd:
  charm: cs:xenial/ceph-osd
  num_units: 1
  bindings:
    public: data-space
    cluster: cluster-space

Please refer to the Ceph Network Reference for details on how using these options affects network traffic within a Ceph deployment.

NOTE: Spaces must be configured in the underlying provider prior to attempting to use them.
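
Before binding, you can confirm which spaces and subnets Juju already knows about; a brief sketch (the space definitions themselves come from the underlying provider, e.g. MAAS):

juju spaces     # list network spaces known to the model
juju subnets    # list subnets and the spaces they belong to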

NOTE: Existing deployments using the ceph-*-network configuration options will continue to function; if set, these options take precedence over any network space bindings provided.
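
For reference, a sketch of that older, option-based style of network configuration; it assumes the `ceph-public-network` and `ceph-cluster-network` options from the charm's config.yaml, and the CIDRs shown are placeholders:

ceph-osd:
  ceph-public-network: 10.20.0.0/24     # front-side (client) traffic
  ceph-cluster-network: 10.30.0.0/24    # back-side (replication) traffic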

Contact Information

Author: James Page <james.page@ubuntu.com>
Report bugs at: http://bugs.launchpad.net/charms/+source/ceph-osd/+filebug
Location: http://jujucharms.com/charms/ceph-osd