remove osd stuff
README.md (28 changes)
@@ -26,30 +26,12 @@ These two pieces of configuration must NOT be changed post bootstrap; attempting
to do this will cause a reconfiguration error and new service units will not join
the existing ceph cluster.

The charm also supports the specification of storage devices to be used in the
ceph cluster.

    osd-devices:
        A list of devices that the charm will attempt to detect, initialise and
        activate as ceph storage.

        This can be a superset of the actual storage devices presented to each
        service unit and can be changed post ceph bootstrap using `juju set`.

        The full path of each device must be provided, e.g. /dev/vdb.

        For Ceph >= 0.56.6 (Raring or the Grizzly Cloud Archive) use of
        directories instead of devices is also supported.

At a minimum you must provide a juju config file during initial deployment
with the fsid and monitor-secret options (contents of ceph.yaml below):

    ceph:
      fsid: ecbb8960-0e21-11e2-b495-83a88f44db01
      monitor-secret: AQD1P2xQiKglDhAA4NGUF5j38Mhq56qwz+45wg==
      osd-devices: /dev/vdb /dev/vdc /dev/vdd /dev/vde

Specifying the osd-devices to use is also a good idea.

Boot things up by using:
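A minimal bootstrap sketch, assuming the ceph.yaml contents shown above and a three-unit monitor cluster (the unit count and file name are illustrative, not taken from this diff):

    # Deploy three units of the charm using the configuration written to ceph.yaml.
    juju deploy -n 3 --config ceph.yaml ceph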
@@ -91,14 +73,4 @@ hook to wait for all three nodes to come up, and then write their addresses
to ceph.conf in the "mon host" parameter. After we initialize the monitor
cluster a quorum forms quickly, and OSD bringup proceeds.

The osds use so-called "OSD hotplugging". **ceph-disk prepare** is used to
create the filesystems with a special GPT partition type. *udev* is set up
to mount such filesystems and start the osd daemons as their storage becomes
visible to the system (or after `udevadm trigger`).
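A minimal sketch of that hotplug flow, assuming a spare block device at /dev/vdb (the device path is illustrative):

    # Prepare a raw device as an OSD; ceph-disk tags the data partition with
    # Ceph's GPT type code so the udev rules can mount it and start ceph-osd.
    ceph-disk prepare /dev/vdb

    # Re-fire block-device udev events so already-attached devices are
    # picked up without a reboot.
    udevadm trigger --subsystem-match=block --action=add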
The Chef cookbook mentioned above performs some extra steps to generate an OSD
bootstrapping key and propagate it to the other nodes in the cluster. Since
all OSDs run on nodes that also run mon, we don't need this and did not
implement it.

See [the documentation](http://ceph.com/docs/master/dev/mon-bootstrap/) for more information on Ceph monitor cluster deployment strategies and pitfalls.
config.yaml (72 changes)
@@ -37,78 +37,6 @@ options:
      How many nodes to wait for before trying to create the monitor cluster
      this number needs to be odd, and more than three is a waste except for
      very large clusters.
  osd-devices:
    type: string
    default: /dev/vdb
    description: |
      The devices to format and set up as osd volumes.
      .
      These devices are the range of devices that will be checked for and
      used across all service units, in addition to any volumes attached
      via the --storage flag during deployment.
      .
      For ceph >= 0.56.6 these can also be directories instead of devices - the
      charm assumes anything not starting with /dev is a directory instead.
  osd-journal:
    type: string
    default:
    description: |
      The device to use as a shared journal drive for all OSDs. By default
      no journal device will be used.
      .
      Only supported with ceph >= 0.48.3.
  osd-journal-size:
    type: int
    default: 1024
    description: |
      Ceph osd journal size. The journal size should be at least twice the
      product of the expected drive speed and the filestore max sync
      interval. However, the most common practice is to partition the journal
      drive (often an SSD), and mount it such that Ceph uses the entire
      partition for the journal.
      .
      Only supported with ceph >= 0.48.3.
  osd-format:
    type: string
    default: xfs
    description: |
      Format of filesystem to use for OSD devices; supported formats include:
      .
      xfs (Default >= 0.48.3)
      ext4 (Only option < 0.48.3)
      btrfs (experimental and not recommended)
      .
      Only supported with ceph >= 0.48.3.
  osd-reformat:
    type: string
    default:
    description: |
      By default, the charm will not re-format a device that already looks
      as if it might be an OSD device. This is a safeguard to try to
      prevent data loss.
      .
      Specifying this option (any value) forces a reformat of any OSD devices
      found which are not already mounted.
  ignore-device-errors:
    type: boolean
    default: False
    description: |
      By default, the charm will raise errors if a whitelisted device is found
      but, for some reason, the charm is unable to initialize the device for
      use by Ceph.
      .
      Setting this option to 'True' will result in the charm classifying such
      problems as warnings only and will not result in a hook error.
  ephemeral-unmount:
    type: string
    default:
    description: |
      Cloud instances provide ephemeral storage which is normally mounted
      on /mnt.
      .
      Providing this option will force an unmount of the ephemeral device
      so that it can be used as an OSD storage device. This is useful for
      testing purposes (cloud deployment is not a typical use case).
  source:
    type: string
    default:
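These options are removed from this charm because OSD handling moves to the companion ceph-osd charm; a hedged sketch of configuring the equivalents there, assuming the option names carry over unchanged:

    # Illustrative only: with the split charms, OSD devices are configured on
    # ceph-osd rather than on ceph-mon.
    juju set ceph-osd osd-devices="/dev/vdb /dev/vdc" osd-format=xfs osd-journal-size=1024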
@@ -1,37 +0,0 @@
description "Ceph OSD"

start on ceph-osd
stop on runlevel [!2345]

respawn
respawn limit 5 30

pre-start script
    set -e
    test -x /usr/bin/ceph-osd || { stop; exit 0; }
    test -d "/var/lib/ceph/osd/${cluster:-ceph}-$id" || { stop; exit 0; }

    install -d -m0755 /var/run/ceph

    # update location in crush; put in some suitable defaults on the
    # command line, ceph.conf can override what it wants
    location="$(ceph-conf --cluster="${cluster:-ceph}" --name="osd.$id" --lookup osd_crush_location || :)"
    weight="$(ceph-conf --cluster="$cluster" --name="osd.$id" --lookup osd_crush_weight || :)"
    ceph \
        --cluster="${cluster:-ceph}" \
        --name="osd.$id" \
        --keyring="/var/lib/ceph/osd/${cluster:-ceph}-$id/keyring" \
        osd crush set \
        -- \
        "$id" "osd.$id" "${weight:-1}" \
        pool=default \
        host="$(hostname -s)" \
        $location \
        || :
end script

instance ${cluster:-ceph}/$id
export cluster
export id

exec /usr/bin/ceph-osd --cluster="${cluster:-ceph}" -i "$id" -f
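For reference, an instance of a job like this is started by passing the instance variables to upstart; a sketch assuming the default cluster name and OSD id 0:

    # Start one ceph-osd upstart instance; 'cluster' and 'id' feed the
    # 'instance ${cluster:-ceph}/$id' stanza above.
    sudo start ceph-osd cluster=ceph id=0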
@@ -28,9 +28,7 @@ from charmhelpers.core.hookenv import (
    service_name,
    relations_of_type,
    status_set,
    local_unit,
    storage_get,
    storage_list
    local_unit
)
from charmhelpers.core.host import (
    service_restart,
@@ -135,9 +133,6 @@ def config_changed():
    if not config('monitor-secret'):
        log('No monitor-secret supplied, cannot proceed.', level=ERROR)
        sys.exit(1)
    if config('osd-format') not in ceph.DISK_FORMATS:
        log('Invalid OSD disk format configuration specified', level=ERROR)
        sys.exit(1)

    sysctl_dict = config('sysctl')
    if sysctl_dict:
@@ -145,53 +140,16 @@ def config_changed():

    emit_cephconf()

    e_mountpoint = config('ephemeral-unmount')
    if e_mountpoint and ceph.filesystem_mounted(e_mountpoint):
        umount(e_mountpoint)

    osd_journal = get_osd_journal()
    if (osd_journal and not os.path.exists(JOURNAL_ZAPPED) and
            os.path.exists(osd_journal)):
        ceph.zap_disk(osd_journal)
        with open(JOURNAL_ZAPPED, 'w') as zapped:
            zapped.write('DONE')

    # Support use of single node ceph
    if (not ceph.is_bootstrapped() and int(config('monitor-count')) == 1):
        status_set('maintenance', 'Bootstrapping single Ceph MON')
        ceph.bootstrap_monitor_cluster(config('monitor-secret'))
        ceph.wait_for_bootstrap()

    storage_changed()

    if relations_of_type('nrpe-external-master'):
        update_nrpe_config()


@hooks.hook('osd-devices-storage-attached', 'osd-devices-storage-detaching')
def storage_changed():
    if ceph.is_bootstrapped():
        for dev in get_devices():
            ceph.osdize(dev, config('osd-format'), get_osd_journal(),
                        reformat_osd(), config('ignore-device-errors'))
        ceph.start_osds(get_devices())


def get_osd_journal():
    '''
    Returns the block device path to use for the OSD journal, if any.

    If there is an osd-journal storage instance attached, it will be
    used as the journal. Otherwise, the osd-journal configuration will
    be returned.
    '''
    storage_ids = storage_list('osd-journal')
    if storage_ids:
        # There can be at most one osd-journal storage instance.
        return storage_get('location', storage_ids[0])
    return config('osd-journal')


def get_mon_hosts():
    hosts = []
    addr = get_public_addr()
@@ -222,26 +180,6 @@ def get_peer_units():
    return units


def reformat_osd():
    if config('osd-reformat'):
        return True
    else:
        return False


def get_devices():
    if config('osd-devices'):
        devices = config('osd-devices').split(' ')
    else:
        devices = []
    # List storage instances for the 'osd-devices'
    # store declared for this charm too, and add
    # their block device paths to the list.
    storage_ids = storage_list('osd-devices')
    devices.extend((storage_get('location', s) for s in storage_ids))
    return devices


@hooks.hook('mon-relation-joined')
def mon_relation_joined():
    for relid in relation_ids('mon'):
@@ -260,10 +198,6 @@ def mon_relation():
        status_set('maintenance', 'Bootstrapping MON cluster')
        ceph.bootstrap_monitor_cluster(config('monitor-secret'))
        ceph.wait_for_bootstrap()
        for dev in get_devices():
            ceph.osdize(dev, config('osd-format'), get_osd_journal(),
                        reformat_osd(), config('ignore-device-errors'))
        ceph.start_osds(get_devices())
        notify_osds()
        notify_radosgws()
        notify_client()
@@ -409,8 +343,6 @@ def start():
        service_restart('ceph-mon')
    else:
        service_restart('ceph-mon-all')
    if ceph.is_bootstrapped():
        ceph.start_osds(get_devices())


@hooks.hook('nrpe-external-master-relation-joined')
@@ -1 +0,0 @@
ceph_hooks.py
@@ -1 +0,0 @@
ceph_hooks.py
@@ -1 +0,0 @@
ceph_hooks.py
@@ -1,4 +1,4 @@
name: ceph
name: ceph-mon
summary: Highly scalable distributed storage
maintainer: OpenStack Charmers <openstack-charmers@lists.ubuntu.com>
description: |
@@ -25,12 +25,3 @@ provides:
  nrpe-external-master:
    interface: nrpe-external-master
    scope: container
storage:
  osd-devices:
    type: block
    multiple:
      range: 0-
  osd-journal:
    type: block
    multiple:
      range: 0-1
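The storage bindings removed here are what let Juju-attached volumes feed the charm's osd-devices handling; a hedged sketch of requesting such a volume at deploy time (pool name, size, and count are illustrative, and the exact constraint syntax varies by Juju version):

    # Illustrative only: ask Juju for two 10G block devices to back the
    # osd-devices store.
    juju deploy ceph --storage osd-devices=cinder,10G,2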
@@ -35,9 +35,10 @@ class CephBasicDeployment(OpenStackAmuletDeployment):
           and the rest of the services are from lp branches that are
           compatible with the local charm (e.g. stable or next).
           """
        this_service = {'name': 'ceph', 'units': 3}
        this_service = {'name': 'ceph-mon', 'units': 3}
        other_services = [{'name': 'mysql'},
                          {'name': 'keystone'},
                          {'name': 'ceph-osd', 'units': 3},
                          {'name': 'rabbitmq-server'},
                          {'name': 'nova-compute'},
                          {'name': 'glance'},
@@ -51,17 +52,18 @@ class CephBasicDeployment(OpenStackAmuletDeployment):
            'nova-compute:shared-db': 'mysql:shared-db',
            'nova-compute:amqp': 'rabbitmq-server:amqp',
            'nova-compute:image-service': 'glance:image-service',
            'nova-compute:ceph': 'ceph:client',
            'nova-compute:ceph': 'ceph-mon:client',
            'keystone:shared-db': 'mysql:shared-db',
            'glance:shared-db': 'mysql:shared-db',
            'glance:identity-service': 'keystone:identity-service',
            'glance:amqp': 'rabbitmq-server:amqp',
            'glance:ceph': 'ceph:client',
            'glance:ceph': 'ceph-mon:client',
            'cinder:shared-db': 'mysql:shared-db',
            'cinder:identity-service': 'keystone:identity-service',
            'cinder:amqp': 'rabbitmq-server:amqp',
            'cinder:image-service': 'glance:image-service',
            'cinder:ceph': 'ceph:client'
            'cinder:ceph': 'ceph-mon:client',
            'ceph-osd:mon': 'ceph-mon:osd'
        }
        super(CephBasicDeployment, self)._add_relations(relations)
@@ -76,6 +78,9 @@ class CephBasicDeployment(OpenStackAmuletDeployment):
            'auth-supported': 'none',
            'fsid': '6547bd3e-1397-11e2-82e5-53567c8d32dc',
            'monitor-secret': 'AQCXrnZQwI7KGBAAiPofmKEXKxu5bUzoYLVkbQ==',
        }

        ceph_osd_config = {
            'osd-reformat': 'yes',
            'ephemeral-unmount': '/mnt',
            'osd-devices': '/dev/vdb /srv/ceph'
@@ -84,7 +89,8 @@ class CephBasicDeployment(OpenStackAmuletDeployment):
        configs = {'keystone': keystone_config,
                   'mysql': mysql_config,
                   'cinder': cinder_config,
                   'ceph': ceph_config}
                   'ceph-mon': ceph_config,
                   'ceph-osd': ceph_osd_config}
        super(CephBasicDeployment, self)._configure_services(configs)

    def _initialize_tests(self):
@@ -96,9 +102,9 @@ class CephBasicDeployment(OpenStackAmuletDeployment):
        self.nova_sentry = self.d.sentry.unit['nova-compute/0']
        self.glance_sentry = self.d.sentry.unit['glance/0']
        self.cinder_sentry = self.d.sentry.unit['cinder/0']
        self.ceph0_sentry = self.d.sentry.unit['ceph/0']
        self.ceph1_sentry = self.d.sentry.unit['ceph/1']
        self.ceph2_sentry = self.d.sentry.unit['ceph/2']
        self.ceph0_sentry = self.d.sentry.unit['ceph-mon/0']
        self.ceph1_sentry = self.d.sentry.unit['ceph-mon/1']
        self.ceph2_sentry = self.d.sentry.unit['ceph-mon/2']
        u.log.debug('openstack release val: {}'.format(
            self._get_openstack_release()))
        u.log.debug('openstack release str: {}'.format(
@@ -211,7 +217,7 @@ class CephBasicDeployment(OpenStackAmuletDeployment):
        """Verify the ceph to nova ceph-client relation data."""
        u.log.debug('Checking ceph:nova-compute ceph relation data...')
        unit = self.ceph0_sentry
        relation = ['client', 'nova-compute:ceph']
        relation = ['client', 'nova-compute:ceph-mon']
        expected = {
            'private-address': u.valid_ip,
            'auth': 'none',
@@ -227,7 +233,7 @@ class CephBasicDeployment(OpenStackAmuletDeployment):
        """Verify the nova to ceph client relation data."""
        u.log.debug('Checking nova-compute:ceph ceph-client relation data...')
        unit = self.nova_sentry
        relation = ['ceph', 'ceph:client']
        relation = ['ceph-mon', 'ceph-mon:client']
        expected = {
            'private-address': u.valid_ip
        }
@@ -257,7 +263,7 @@ class CephBasicDeployment(OpenStackAmuletDeployment):
        """Verify the glance to ceph client relation data."""
        u.log.debug('Checking glance:ceph client relation data...')
        unit = self.glance_sentry
        relation = ['ceph', 'ceph:client']
        relation = ['ceph-mon', 'ceph-mon:client']
        expected = {
            'private-address': u.valid_ip
        }
@@ -287,7 +293,7 @@ class CephBasicDeployment(OpenStackAmuletDeployment):
        """Verify the cinder to ceph ceph-client relation data."""
        u.log.debug('Checking cinder:ceph ceph relation data...')
        unit = self.cinder_sentry
        relation = ['ceph', 'ceph:client']
        relation = ['ceph-mon', 'ceph-mon:client']
        expected = {
            'private-address': u.valid_ip
        }