Updated README verbosity, added checks to harden ceph admin-daemon usage in ceph utils

This commit is contained in:
James Page 2012-10-04 14:24:12 +01:00
parent 6198863707
commit 5dca3b0b93
6 changed files with 101 additions and 28 deletions

81
README
View File

@@ -1,18 +1,75 @@
Overview
========
Ceph is a distributed storage and network file system designed to provide
excellent performance, reliability, and scalability.
This charm deploys a Ceph cluster.

Usage
=====
The ceph charm has two pieces of mandatory configuration for which no defaults
are provided:

    fsid:
        uuid specific to a ceph cluster used to ensure that different
        clusters don't get mixed up - use `uuid` to generate one.

    monitor-secret:
        a ceph generated key used by the daemons that manage the cluster
        to control security. You can use the ceph-authtool command to
        generate one:

            ceph-authtool /dev/stdout --name=mon. --gen-key

These two pieces of configuration must NOT be changed post bootstrap; attempting
to do this will cause a reconfiguration error and new service units will not join
the existing ceph cluster.
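For illustration only, a small Python sketch (not part of the charm) that generates
both values in the form used by the config file shown further below; it assumes
ceph-authtool from the ceph-common package is installed and that it prints the usual
keyring layout with a "key = " line:

import subprocess
import uuid


def generate_monitor_secret():
    # ceph-authtool prints a small keyring; the base64 value after
    # "key = " is what the monitor-secret option expects.
    out = subprocess.check_output(
        ['ceph-authtool', '/dev/stdout', '--name=mon.', '--gen-key']).decode()
    for line in out.splitlines():
        if 'key = ' in line:
            return line.split('key = ', 1)[1].strip()
    raise RuntimeError('unexpected ceph-authtool output')


if __name__ == '__main__':
    print('ceph:')
    print('    fsid: {}'.format(uuid.uuid4()))
    print('    monitor-secret: {}'.format(generate_monitor_secret()))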
The charm also supports specification of the storage devices to use in the ceph
cluster:

    osd-devices:
        A list of devices that the charm will attempt to detect, initialise and
        activate as ceph storage.

        This can be a superset of the actual storage devices presented to
        each service unit and can be changed post ceph bootstrap using `juju set`.
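As a sketch of why a superset is safe, something like the following is all a hook
needs in order to narrow the space-separated osd-devices value down to the devices
actually attached to a unit (the charm's real helper is not shown in this commit and
the function name here is made up):

import os


def present_osd_devices(osd_devices_value):
    # osd_devices_value is the raw space-separated osd-devices string,
    # e.g. "/dev/vdb /dev/vdc /dev/vdd /dev/vde"; devices listed in the
    # config but not attached to this unit are simply skipped.
    return [dev for dev in osd_devices_value.split() if os.path.exists(dev)]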
At a minimum you must provide a juju config file during initial deployment
with the fsid and monitor-secret options:

    ceph:
        fsid: ecbb8960-0e21-11e2-b495-83a88f44db01
        monitor-secret: AQD1P2xQiKglDhAA4NGUF5j38Mhq56qwz+45wg==
        osd-devices: /dev/vdb /dev/vdc /dev/vdd /dev/vde

Specifying the osd-devices to use is also a good idea.
By default the ceph cluster will not bootstrap until 3 service units have been
deployed and started; this is to ensure that a quorum is achieved prior to adding
storage devices.
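A minimal sketch of what such a gate can look like in a peer relation hook, assuming
the threshold of three units described above and counting peers with the standard
relation-list hook tool (the charm's actual bootstrap logic is not shown in this
commit):

import subprocess


def enough_units_to_bootstrap(minimum=3):
    # relation-list prints one peer unit name per line and does not
    # include the unit running the hook, hence the +1.
    peers = subprocess.check_output(['relation-list']).decode().split()
    return len(peers) + 1 >= minimum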
Bootnotes
=========
This charm uses the new-style Ceph deployment as reverse-engineered from the Chef
cookbook at https://github.com/ceph/ceph-cookbooks.
This charm is currently deliberately inflexible and potentially destructive. It
is designed to deploy on exactly three machines.
Each machine will run mon and osd.

The osds use so-called "OSD hotplugging". ceph-disk-prepare is used to create the
filesystems with a special GPT partition type. udev is set up to mount such
filesystems and start the osd daemons as their storage becomes visible to the
system (or after "udevadm trigger").
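A hedged sketch of that flow; it assumes the simple ceph-disk-prepare invocation
that takes just the data device, and it is not the charm's actual code:

import subprocess


def prepare_osd_devices(devices):
    for dev in devices:
        # ceph-disk-prepare writes the special GPT partition type; udev
        # then mounts the filesystem and starts the osd daemon when the
        # partition appears (or after "udevadm trigger").
        subprocess.check_call(['ceph-disk-prepare', dev])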
The Chef cookbook above performs some extra steps to generate an OSD bootstrapping
key and propagate it to the other nodes in the cluster. Since all our OSDs run on
nodes that also run mon, we don't need this and did not implement it.
The charm does not currently implement cephx and it's explicitly turned off in the
configuration generated for ceph.
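The generated configuration is not part of this diff; purely as an illustration, a
hypothetical sketch of a hook writing a ceph.conf with cephx switched off, using the
single `auth supported` option understood by Ceph releases of this era - the charm's
real template and option handling may differ:

CEPH_CONF = """[global]
 auth supported = none
 fsid = {fsid}
"""


def write_ceph_conf(fsid, path='/etc/ceph/ceph.conf'):
    # Hypothetical helper, not the charm's actual template rendering.
    with open(path, 'w') as conf:
        conf.write(CEPH_CONF.format(fsid=fsid))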

8
TODO
View File

@@ -2,7 +2,15 @@
* fix tunables (http://tracker.newdream.net/issues/2210)
* more than 192 PGs
* fixup data placement in crush to be host not osd driven

== Major ==

* deploy more than 3 OSD hosts

== Public Charm ==

* cephx support
* rel: remote OSD services (+bootstrap.osd keys for cephx)
* rel: remote MON clients (+client keys for cephx)
* rel: RADOS gateway (+client key for cephx)

View File

@@ -9,7 +9,7 @@ options:
  monitor-secret:
    type: string
    description: |
      This value will become the mon. key. To generate a suitable value use:
      .
      ceph-authtool /dev/stdout --name=mon. --gen-key
      .

View File

@@ -11,27 +11,33 @@ import json
import subprocess
import time
import utils
import os

QUORUM = ['leader', 'peon']


def is_quorum():
    asok = "/var/run/ceph/ceph-mon.{}.asok".format(utils.get_unit_hostname())
    cmd = [
        "ceph",
        "--admin-daemon",
        asok,
        "mon_status"
        ]
    if os.path.exists(asok):
        try:
            result = json.loads(subprocess.check_output(cmd))
        except subprocess.CalledProcessError:
            return False
        except ValueError:
            # Non JSON response from mon_status
            return False
        if result['state'] in QUORUM:
            return True
        else:
            return False
    else:
        return False


def wait_for_quorum():
@@ -40,12 +46,14 @@ def wait_for_quorum():
def add_bootstrap_hint(peer):
    asok = "/var/run/ceph/ceph-mon.{}.asok".format(utils.get_unit_hostname())
    cmd = [
        "ceph",
        "--admin-daemon",
        asok,
        "add_bootstrap_peer_hint",
        peer
        ]
    if os.path.exists(asok):
        # Ignore any errors for this call
        subprocess.call(cmd)
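The body of wait_for_quorum() falls outside the hunks shown above; for context, a
minimal sketch of what it presumably does, polling is_quorum() until the local
monitor reports leader or peon state (the three second interval is an arbitrary
choice here):

def wait_for_quorum():
    # Sketch only: the real implementation is not part of this diff.
    while not is_quorum():
        time.sleep(3)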

View File

@@ -1 +1 @@
55