Add support for RADOS gateway multi-site replication

Add new radosgw-multisite typed master and slave relations to
support configuration of separate ceph-radosgw deployments as
a single realm and zonegroup, enabling replication of data
between distinct RADOS gateway deployments.

This mandates the use of the realm, zonegroup and zone
configuration options, of which realm and zonegroup must match
between instances of the ceph-radosgw application participating
in the master/slave relation.

The radosgw-multisite relation may be deployed as a model-local
relation or as a cross-model relation.
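
For example (mirroring the README changes below), the participating
applications share a realm and zonegroup but use distinct zones:

    rgw-us-east:  realm=replicated  zonegroup=us  zone=us-east
    rgw-us-west:  realm=replicated  zonegroup=us  zone=us-west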

Change-Id: I094f89b0f668e012482ca8aace1756c911b79d17
Closes-Bug: 1666880
James Page 2019-01-09 16:47:28 +00:00
parent 76cddc525e
commit 7722f9d620
40 changed files with 1970 additions and 30 deletions

README.md

@ -100,7 +100,7 @@ To use this feature, use the --bind option when deploying the charm:
alternatively these can also be provided as part of a juju native bundle configuration:

    ceph-radosgw:
-     charm: cs:xenial/ceph-radosgw
+     charm: cs:ceph-radosgw
      num_units: 1
      bindings:
        public: public-space
@ -109,19 +109,122 @@ alternatively these can also be provided as part of a juju native bundle configu
NOTE: Spaces must be configured in the underlying provider prior to attempting to use them.

-NOTE: Existing deployments using os-*-network configuration options will continue to function; these options are preferred over any network space binding provided if set.
+NOTE: Existing deployments using os-\*-network configuration options will continue to function; these options are preferred over any network space binding provided if set.

Multi-Site replication
======================

Overview
--------

This charm supports configuration of native replication between Ceph RADOS
gateway deployments.

This is supported both within a single model and between different models
using cross-model relations.

By default, either ceph-radosgw deployment will accept write operations.

Deployment
----------

NOTE: example bundles for the us-west and us-east models can be found
in the bundles subdirectory of the ceph-radosgw charm.

NOTE: switching from a standalone deployment to a multi-site replicated
deployment is not supported.

To deploy in this configuration, ensure that the following configuration
options are set on the ceph-radosgw charm deployments - in this example
rgw-us-east and rgw-us-west are both instances of the ceph-radosgw charm:

    rgw-us-east:
      realm: replicated
      zonegroup: us
      zone: us-east

    rgw-us-west:
      realm: replicated
      zonegroup: us
      zone: us-west

When deployed with this configuration, the ceph-radosgw applications will
remain in a blocked state until the master/slave (cross-model) relation
is added.

Typically each ceph-radosgw deployment will be associated with a separate
ceph cluster at different physical locations - in this example the deployments
are in different models ('us-east' and 'us-west').

One ceph-radosgw application acts as the initial master for the deployment -
set up the master relation endpoint as the provider of the offer for the
cross-model relation:

    juju offer -m us-east rgw-us-east:master

The cross-model relation offer can then be consumed in the other model and
related to the slave ceph-radosgw application:

    juju consume -m us-west admin/us-east.rgw-us-east
    juju add-relation -m us-west rgw-us-west:slave rgw-us-east:master

Once the relation has been added, the realm, zonegroup and zone configuration
will be created in the master deployment and then synced to the slave
deployment.

The current sync status can be validated from either model:

    juju ssh -m us-east ceph-mon/0
    sudo radosgw-admin sync status
              realm 142eb39c-67c4-42b3-9116-1f4ffca23964 (replicated)
          zonegroup 7b69f059-425b-44f5-8a21-ade63c2034bd (us)
               zone 4ee3bc39-b526-4ac9-a233-64ebeacc4574 (us-east)
      metadata sync no sync (zone is master)
          data sync source: db876cf0-62a8-4b95-88f4-d0f543136a07 (us-west)
                            syncing
                            full sync: 0/128 shards
                            incremental sync: 128/128 shards
                            data is caught up with source

Once the deployment is complete, the default zone and zonegroup can
optionally be tidied using the 'tidydefaults' action:

    juju run-action -m us-west --wait rgw-us-west/0 tidydefaults

This operation is not reversible.

Failover/Recovery
-----------------

In the event that the site hosting the zone which is the master for metadata
(in this example us-east) has an outage, the master metadata zone must be
failed over to the slave site; this operation is performed using the 'promote'
action:

    juju run-action -m us-west --wait rgw-us-west/0 promote

Once this action has completed, the slave site will be the master for metadata
updates and the deployment will accept new uploads of data.

Once the failed site has been recovered, it will resync and resume as a slave
to the promoted master site (us-west in this example).

The master metadata zone can be failed back to its original location once resync
has completed using the 'promote' action:

    juju run-action -m us-east --wait rgw-us-east/0 promote

Read/write vs Read-only
-----------------------

By default all zones within a deployment will be read/write capable, but only
the master zone can be used to create new containers.

Non-master zones can optionally be marked as read-only by using the 'readonly'
action:

    juju run-action -m us-east --wait rgw-us-east/0 readonly

A zone that is currently read-only can be switched to read/write mode either by
promoting it to be the current master or by using the 'readwrite' action:

    juju run-action -m us-east --wait rgw-us-east/0 readwrite

Contact Information
===================
Author: James Page <james.page@ubuntu.com>
Report bugs at: http://bugs.launchpad.net/charms/+source/ceph-radosgw/+filebug
Location: http://jujucharms.com/charms/ceph-radosgw

Bootnotes
=========
The Ceph RADOS Gateway makes use of a multiverse package, libapache2-mod-fastcgi.
As such it will try to automatically enable the multiverse pocket in
/etc/apt/sources.list. Note that there is nothing 'wrong' with multiverse
components - they typically have less liberal licensing policies or suchlike.
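As an aside (editor's sketch, not part of the README diff): the promote,
readonly and readwrite actions drive radosgw-admin through the new
hooks/multisite.py helpers, roughly:

    import multisite

    # Promote the local zone ('us-west' is illustrative) to master/default
    # and commit a new period, as actions/actions.py does for 'promote'.
    multisite.modify_zone('us-west', default=True, master=True)
    multisite.update_period()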

TODO

@ -1,4 +0,0 @@
RADOS Gateway Charm
-------------------
* Improved process control of radosgw daemon (too many restarts)

actions.yaml

@ -2,3 +2,11 @@ pause:
  description: Pause the ceph-radosgw unit.
resume:
  description: Resume the ceph-radosgw unit.
promote:
  description: Promote the zone associated with the local units to master/default (multi-site).
readonly:
  description: Mark the zone associated with the local units as read only (multi-site).
readwrite:
  description: Mark the zone associated with the local units as read/write (multi-site).
tidydefaults:
  description: Delete default zone and zonegroup configuration (multi-site).

actions/actions.py

@ -15,10 +15,18 @@
# limitations under the License.

import os
import subprocess
import sys

sys.path.append('hooks/')

-from charmhelpers.core.hookenv import action_fail
+import multisite
+
+from charmhelpers.core.hookenv import (
+    action_fail,
+    config,
+    action_set,
+)
from utils import (
    pause_unit_helper,
    resume_unit_helper,
@ -39,9 +47,91 @@ def resume(args):
    resume_unit_helper(register_configs())


def promote(args):
    """Promote zone associated with local RGW units to master/default"""
    zone = config('zone')
    if not zone:
        action_fail('No zone configuration set, not promoting')
        return
    try:
        multisite.modify_zone(zone,
                              default=True, master=True)
        multisite.update_period()
        action_set(
            values={'message': 'zone:{} promoted to '
                               'master/default'.format(zone)}
        )
    except subprocess.CalledProcessError as cpe:
        action_fail('Unable to promote zone:{} '
                    'to master: {}'.format(zone, cpe.output))


def readonly(args):
    """Mark zone associated with local RGW units as read only"""
    zone = config('zone')
    if not zone:
        action_fail('No zone configuration set, not marking read only')
        return
    try:
        multisite.modify_zone(zone, readonly=True)
        multisite.update_period()
        action_set(
            values={
                'message': 'zone:{} marked as read only'.format(zone)
            }
        )
    except subprocess.CalledProcessError as cpe:
        action_fail('Unable to mark zone:{} '
                    'as read only: {}'.format(zone, cpe.output))


def readwrite(args):
    """Mark zone associated with local RGW units as read write"""
    zone = config('zone')
    if not zone:
        action_fail('No zone configuration set, not marking read write')
        return
    try:
        multisite.modify_zone(zone, readonly=False)
        multisite.update_period()
        action_set(
            values={
                'message': 'zone:{} marked as read write'.format(zone)
            }
        )
    except subprocess.CalledProcessError as cpe:
        action_fail('Unable to mark zone:{} '
                    'as read write: {}'.format(zone, cpe.output))


def tidydefaults(args):
    """Delete default zone and zonegroup metadata"""
    zone = config('zone')
    if not zone:
        action_fail('No zone configuration set, not deleting defaults')
        return
    try:
        multisite.tidy_defaults()
        action_set(
            values={
                'message': 'default zone and zonegroup deleted'
            }
        )
    except subprocess.CalledProcessError as cpe:
        action_fail('Unable to delete default zone and zonegroup'
                    ': {}'.format(cpe.output))


# A dictionary of all the defined actions to callables (which take
# parsed arguments).
-ACTIONS = {"pause": pause, "resume": resume}
+ACTIONS = {
+    "pause": pause,
+    "resume": resume,
+    "promote": promote,
+    "readonly": readonly,
+    "readwrite": readwrite,
+    "tidydefaults": tidydefaults,
+}
def main(args):

actions/promote Symbolic link

@ -0,0 +1 @@
actions.py

actions/readonly Symbolic link

@ -0,0 +1 @@
actions.py

actions/readwrite Symbolic link

@ -0,0 +1 @@
actions.py

actions/tidydefaults Symbolic link

@ -0,0 +1 @@
actions.py


@ -0,0 +1,73 @@
options:
  source: &source cloud:bionic-rocky
series: bionic
applications:
  east-ceph-radosgw:
    charm: cs:~openstack-charmers-next/ceph-radosgw-multisite
    num_units: 1
    options:
      source: *source
      realm: testrealm
      zonegroup: testzonegroup
      zone: east-1
      region: east-1
  east-ceph-osd:
    charm: cs:~openstack-charmers-next/ceph-osd
    num_units: 3
    storage:
      osd-devices: 'cinder,10G'
    options:
      source: *source
  east-ceph-mon:
    charm: cs:~openstack-charmers-next/ceph-mon
    num_units: 3
    options:
      source: *source
  west-ceph-radosgw:
    charm: cs:~openstack-charmers-next/ceph-radosgw-multisite
    num_units: 1
    options:
      source: *source
      realm: testrealm
      zonegroup: testzonegroup
      zone: west-1
      region: west-1
  west-ceph-osd:
    charm: cs:~openstack-charmers-next/ceph-osd
    num_units: 3
    storage:
      osd-devices: 'cinder,10G'
    options:
      source: *source
  west-ceph-mon:
    charm: cs:~openstack-charmers-next/ceph-mon
    num_units: 3
    options:
      source: *source
  percona-cluster:
    charm: cs:~openstack-charmers-next/percona-cluster
    num_units: 1
  keystone:
    expose: True
    charm: cs:~openstack-charmers-next/keystone
    num_units: 1
    options:
      openstack-origin: *source
      region: "east-1 west-1"
relations:
- - keystone:shared-db
  - percona-cluster:shared-db
- - east-ceph-osd:mon
  - east-ceph-mon:osd
- - east-ceph-radosgw:mon
  - east-ceph-mon:radosgw
- - east-ceph-radosgw:identity-service
  - keystone:identity-service
- - west-ceph-osd:mon
  - west-ceph-mon:osd
- - west-ceph-radosgw:mon
  - west-ceph-mon:radosgw
- - west-ceph-radosgw:identity-service
  - keystone:identity-service
- - west-ceph-radosgw:master
  - east-ceph-radosgw:slave
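# NOTE (editor's annotation): unlike the cross-model flow in the README, this
# test bundle relates the master and slave endpoints directly within a single
# model (west-ceph-radosgw:master <-> east-ceph-radosgw:slave).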

bundles/us-east.yaml Normal file

@ -0,0 +1,41 @@
machines:
  '0':
    constraints:
  '1':
    constraints:
  '2':
    constraints:
series: bionic
applications:
  ceph-mon:
    charm: 'cs:ceph-mon'
    num_units: 3
    options:
      expected-osd-count: 9
    to:
    - lxd:0
    - lxd:1
    - lxd:2
  ceph-osd:
    charm: 'cs:ceph-osd'
    num_units: 3
    options:
      osd-devices: "/dev/disk/by-dname/bcache1 /dev/disk/by-dname/bcache2 /dev/disk/by-dname/bcache3"
    to:
    - 0
    - 1
    - 2
  rgw-us-east:
    charm: 'cs:ceph-radosgw'
    num_units: 1
    options:
      realm: replicated
      zone: us-east
      zonegroup: us
    to:
    - lxd:0
relations:
- - 'ceph-mon:osd'
  - 'ceph-osd:mon'
- - 'rgw-us-east:mon'
  - 'ceph-mon:radosgw'

bundles/us-west.yaml Normal file

@ -0,0 +1,41 @@
machines:
  '0':
    constraints:
  '1':
    constraints:
  '2':
    constraints:
series: bionic
applications:
  ceph-mon:
    charm: 'cs:ceph-mon'
    num_units: 3
    options:
      expected-osd-count: 9
    to:
    - lxd:0
    - lxd:1
    - lxd:2
  ceph-osd:
    charm: 'cs:ceph-osd'
    num_units: 3
    options:
      osd-devices: "/dev/disk/by-dname/bcache1 /dev/disk/by-dname/bcache2 /dev/disk/by-dname/bcache3"
    to:
    - 0
    - 1
    - 2
  rgw-us-west:
    charm: 'cs:ceph-radosgw'
    num_units: 1
    options:
      realm: replicated
      zone: us-west
      zonegroup: us
    to:
    - lxd:0
relations:
- - 'ceph-mon:osd'
  - 'ceph-osd:mon'
- - 'rgw-us-west:mon'
  - 'ceph-mon:radosgw'

config.yaml

@ -292,3 +292,22 @@ options:
    description: |
      SSL CA to use with the certificate and key provided - this is only
      required if you are providing a privately signed ssl_cert and ssl_key.
  # Multi Site Options
  realm:
    type: string
    default:
    description: |
      Name of RADOS Gateway Realm to create for multi-site replication. Setting
      this option will enable support for multi-site replication, at which
      point the zonegroup and zone options must also be provided.
  zonegroup:
    type: string
    default:
    description: |
      Name of RADOS Gateway Zone Group to create for multi-site replication.
  zone:
    type: string
    default:
    description: |
      Name of RADOS Gateway Zone to create for multi-site replication. This
      option must be specific to the local site e.g. us-west or us-east.
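  # Example (editor's illustration, mirroring the README): a replicated pair
  # of deployments would set realm=replicated and zonegroup=us on both
  # applications, with zone=us-east on one and zone=us-west on the other.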

hooks/ceph_radosgw_context.py

@ -212,6 +212,9 @@ class MonContext(context.CephContext):
        ctxt.update(user_provided)

        if self.context_complete(ctxt):
            # Multi-site Zone configuration is optional,
            # so add after assessment
            ctxt['rgw_zone'] = config('zone')
            return ctxt

        return {}

hooks/ceph_rgw.py

@ -23,7 +23,8 @@ from charmhelpers.core.hookenv import (
)
from charmhelpers.core.host import (
-    mkdir
+    mkdir,
+    symlink,
)
from charmhelpers.contrib.storage.linux.ceph import (
    CephBrokerRq,
@ -39,9 +40,12 @@ def import_radosgw_key(key, name=None):
        keyring_path = os.path.join(CEPH_RADOSGW_DIR,
                                    'ceph-{}'.format(name),
                                    'keyring')
        link_path = os.path.join(CEPH_DIR,
                                 'ceph.client.{}.keyring'.format(name))
        owner = group = 'ceph'
    else:
        keyring_path = os.path.join(CEPH_DIR, _radosgw_keyring)
        link_path = None
        owner = group = 'root'

    if not os.path.exists(keyring_path):
@ -63,6 +67,11 @@ def import_radosgw_key(key, name=None):
            keyring_path
        ]
        subprocess.check_call(cmd)
        # NOTE: add a link to the keyring in /var/lib/ceph
        # to /etc/ceph so we can use it for radosgw-admin
        # operations for multi-site configuration
        if link_path:
            symlink(keyring_path, link_path)
        return True

    return False

hooks/hooks.py

@ -18,11 +18,13 @@ import os
import subprocess
import sys
import socket
import uuid

sys.path.append('lib')

import ceph_rgw as ceph
import ceph.utils as ceph_utils
import multisite

from charmhelpers.core.hookenv import (
    relation_get,
@ -35,6 +37,9 @@ from charmhelpers.core.hookenv import (
    DEBUG,
    Hooks, UnregisteredHookError,
    status_set,
    is_leader,
    leader_set,
    leader_get,
)
from charmhelpers.fetch import (
    apt_update,
@ -86,6 +91,9 @@ from utils import (
    service_name,
    systemd_based_radosgw,
    request_per_unit_key,
    ready_for_service,
    restart_nonce_changed,
    multisite_deployment,
)
from charmhelpers.contrib.charmsupport import nrpe
from charmhelpers.contrib.hardening.harden import harden
@ -109,6 +117,8 @@ APACHE_PACKAGES = [
    'libapache2-mod-fastcgi',
]

MULTISITE_SYSTEM_USER = 'multisite-sync'


def upgrade_available():
    """Check for upgrade for ceph
@ -200,6 +210,8 @@ def config_changed():
        for r_id in relation_ids('certificates'):
            certs_joined(r_id)

    process_multisite_relations()

    CONFIGS.write_all()
    configure_https()
@ -218,8 +230,9 @@ def mon_relation(rid=None, unit=None):
        if request_per_unit_key():
            relation_set(relation_id=rid,
                         key_name=key_name)
        # NOTE: prefer zone name if in use over pool-prefix.
        rq = ceph.get_create_rgw_pools_rq(
-            prefix=config('pool-prefix'))
+            prefix=config('zone') or config('pool-prefix'))
        if is_request_complete(rq, relation='mon'):
            log('Broker request complete', level=DEBUG)
            CONFIGS.write_all()
@ -242,9 +255,20 @@ def mon_relation(rid=None, unit=None):
                if systemd_based_radosgw():
                    service_stop('radosgw')
                    service('disable', 'radosgw')
            if not is_unit_paused_set() and new_keyring:
                service('enable', service_name())
            # NOTE(jamespage):
            # Multi-site deployments need to defer restart as the
            # zone is not created until the master relation is
            # joined; restarting here will cause a restart burst
            # in systemd and stop the process restarting once
            # zone configuration is complete.
            if (not is_unit_paused_set() and
                    new_keyring and
                    not multisite_deployment()):
                service_restart(service_name())
            process_multisite_relations()
        else:
            send_request_if_needed(rq, relation='mon')
    _mon_relation()
@ -410,6 +434,174 @@ def certs_changed(relation_id=None, unit=None):
    _certs_changed()


@hooks.hook('master-relation-joined')
def master_relation_joined(relation_id=None):
    if not ready_for_service(legacy=False):
        log('unit not ready, deferring multisite configuration')
        return

    internal_url = '{}:{}'.format(
        canonical_url(CONFIGS, INTERNAL),
        config('port')
    )
    endpoints = [internal_url]
    realm = config('realm')
    zonegroup = config('zonegroup')
    zone = config('zone')
    access_key = leader_get('access_key')
    secret = leader_get('secret')

    if not all((realm, zonegroup, zone)):
        return

    relation_set(relation_id=relation_id,
                 realm=realm,
                 zonegroup=zonegroup,
                 url=endpoints[0],
                 access_key=access_key,
                 secret=secret)

    if not is_leader():
        return

    if not leader_get('restart_nonce'):
        # NOTE(jamespage):
        # This is an ugly kludge to force creation of the required data
        # items in the .rgw.root pool prior to the radosgw process being
        # started; radosgw-admin does not currently have a way of doing
        # this operation but a period update will force it to be created.
        multisite.update_period(fatal=False)

    mutation = False

    if realm not in multisite.list_realms():
        multisite.create_realm(realm, default=True)
        mutation = True

    if zonegroup not in multisite.list_zonegroups():
        multisite.create_zonegroup(zonegroup,
                                   endpoints=endpoints,
                                   default=True, master=True,
                                   realm=realm)
        mutation = True

    if zone not in multisite.list_zones():
        multisite.create_zone(zone,
                              endpoints=endpoints,
                              default=True, master=True,
                              zonegroup=zonegroup)
        mutation = True

    if MULTISITE_SYSTEM_USER not in multisite.list_users():
        access_key, secret = multisite.create_system_user(
            MULTISITE_SYSTEM_USER
        )
        multisite.modify_zone(zone,
                              access_key=access_key,
                              secret=secret)
        leader_set(access_key=access_key,
                   secret=secret)
        mutation = True

    if mutation:
        multisite.update_period()
        service_restart(service_name())
        leader_set(restart_nonce=str(uuid.uuid4()))

    relation_set(relation_id=relation_id,
                 access_key=access_key,
                 secret=secret)


@hooks.hook('slave-relation-changed')
def slave_relation_changed(relation_id=None, unit=None):
    if not is_leader():
        return
    if not ready_for_service(legacy=False):
        log('unit not ready, deferring multisite configuration')
        return

    master_data = relation_get(rid=relation_id, unit=unit)
    if not all((master_data.get('realm'),
                master_data.get('zonegroup'),
                master_data.get('access_key'),
                master_data.get('secret'),
                master_data.get('url'))):
        log("Defer processing until master RGW has provided required data")
        return

    internal_url = '{}:{}'.format(
        canonical_url(CONFIGS, INTERNAL),
        config('port')
    )
    endpoints = [internal_url]

    realm = config('realm')
    zonegroup = config('zonegroup')
    zone = config('zone')

    if (realm, zonegroup) != (master_data['realm'],
                              master_data['zonegroup']):
        log("Mismatched configuration so stop multi-site configuration now")
        return

    if not leader_get('restart_nonce'):
        # NOTE(jamespage):
        # This is an ugly kludge to force creation of the required data
        # items in the .rgw.root pool prior to the radosgw process being
        # started; radosgw-admin does not currently have a way of doing
        # this operation but a period update will force it to be created.
        multisite.update_period(fatal=False)

    mutation = False

    if realm not in multisite.list_realms():
        multisite.pull_realm(url=master_data['url'],
                             access_key=master_data['access_key'],
                             secret=master_data['secret'])
        multisite.pull_period(url=master_data['url'],
                              access_key=master_data['access_key'],
                              secret=master_data['secret'])
        multisite.set_default_realm(realm)
        mutation = True

    if zone not in multisite.list_zones():
        multisite.create_zone(zone,
                              endpoints=endpoints,
                              default=False, master=False,
                              zonegroup=zonegroup,
                              access_key=master_data['access_key'],
                              secret=master_data['secret'])
        mutation = True

    if mutation:
        multisite.update_period()
        service_restart(service_name())
        leader_set(restart_nonce=str(uuid.uuid4()))


@hooks.hook('leader-settings-changed')
def leader_settings_changed():
    # NOTE: leader unit will only ever set leader storage
    # data when multi-site realm, zonegroup, zone or user
    # data has been created/changed - trigger restarts
    # of rgw services.
    if restart_nonce_changed(leader_get('restart_nonce')):
        service_restart(service_name())
    if not is_leader():
        for r_id in relation_ids('master'):
            master_relation_joined(r_id)


def process_multisite_relations():
    """Re-trigger any pending master/slave relations"""
    for r_id in relation_ids('master'):
        master_relation_joined(r_id)
    for r_id in relation_ids('slave'):
        for unit in related_units(r_id):
            slave_relation_changed(r_id, unit)


if __name__ == '__main__':
    try:
        hooks.execute(sys.argv)

hooks/leader-settings-changed Symbolic link

@ -0,0 +1 @@
hooks.py

hooks/master-relation-broken Symbolic link

@ -0,0 +1 @@
hooks.py

hooks/master-relation-changed Symbolic link

@ -0,0 +1 @@
hooks.py

hooks/master-relation-departed Symbolic link

@ -0,0 +1 @@
hooks.py

hooks/master-relation-joined Symbolic link

@ -0,0 +1 @@
hooks.py

hooks/multisite.py Normal file

@ -0,0 +1,367 @@
#
# Copyright 2016 Canonical Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import json
import functools
import subprocess
import socket

import charmhelpers.core.hookenv as hookenv
import charmhelpers.core.decorators as decorators

RGW_ADMIN = 'radosgw-admin'


@decorators.retry_on_exception(num_retries=5, base_delay=3,
                               exc_type=subprocess.CalledProcessError)
def _check_output(cmd):
    """Logging wrapper for subprocess.check_output"""
    hookenv.log("Executing: {}".format(' '.join(cmd)), level=hookenv.DEBUG)
    return subprocess.check_output(cmd).decode('UTF-8')


@decorators.retry_on_exception(num_retries=5, base_delay=3,
                               exc_type=subprocess.CalledProcessError)
def _check_call(cmd):
    """Logging wrapper for subprocess.check_call"""
    hookenv.log("Executing: {}".format(' '.join(cmd)), level=hookenv.DEBUG)
    return subprocess.check_call(cmd)


def _call(cmd):
    """Logging wrapper for subprocess.call"""
    hookenv.log("Executing: {}".format(' '.join(cmd)), level=hookenv.DEBUG)
    return subprocess.call(cmd)


def _key_name():
    """Determine the name of the cephx key for the local unit"""
    return 'rgw.{}'.format(socket.gethostname())


def _list(key):
    """
    Internal implementation for list_* functions

    :param key: string for required entity (zone, zonegroup, realm, user)
    :type key: str
    :return: List of specified entities found
    :rtype: list
    """
    cmd = [
        RGW_ADMIN, '--id={}'.format(_key_name()),
        key, 'list'
    ]
    try:
        result = json.loads(_check_output(cmd))
        if isinstance(result, dict):
            return result['{}s'.format(key)]
        else:
            return result
    except TypeError:
        return []
list_realms = functools.partial(_list, 'realm')
list_zonegroups = functools.partial(_list, 'zonegroup')
list_zones = functools.partial(_list, 'zone')
list_users = functools.partial(_list, 'user')
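# Example (editor's sketch, not part of this module): hooks/hooks.py drives
# these helpers idempotently when configuring the master zone, e.g.
#
#     if 'replicated' not in list_realms():
#         create_realm('replicated', default=True)
#         update_period()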
def create_realm(name, default=False):
    """
    Create a new RADOS Gateway Realm.

    :param name: name of realm to create
    :type name: str
    :param default: set new realm as the default realm
    :type default: boolean
    :return: realm configuration
    :rtype: dict
    """
    cmd = [
        RGW_ADMIN, '--id={}'.format(_key_name()),
        'realm', 'create',
        '--rgw-realm={}'.format(name)
    ]
    if default:
        cmd += ['--default']
    try:
        return json.loads(_check_output(cmd))
    except TypeError:
        return None


def set_default_realm(name):
    """
    Set the default RADOS Gateway Realm

    :param name: name of realm to set as the default
    :type name: str
    """
    cmd = [
        RGW_ADMIN, '--id={}'.format(_key_name()),
        'realm', 'default',
        '--rgw-realm={}'.format(name)
    ]
    _check_call(cmd)


def create_zonegroup(name, endpoints, default=False, master=False, realm=None):
    """
    Create a new RADOS Gateway Zone Group

    :param name: name of zonegroup to create
    :type name: str
    :param endpoints: list of URLs to endpoints for zonegroup
    :type endpoints: list[str]
    :param default: set new zonegroup as the default zonegroup
    :type default: boolean
    :param master: set new zonegroup as the master zonegroup
    :type master: boolean
    :param realm: realm to use for zonegroup
    :type realm: str
    :return: zonegroup configuration
    :rtype: dict
    """
    cmd = [
        RGW_ADMIN, '--id={}'.format(_key_name()),
        'zonegroup', 'create',
        '--rgw-zonegroup={}'.format(name),
        '--endpoints={}'.format(','.join(endpoints)),
    ]
    if realm:
        cmd.append('--rgw-realm={}'.format(realm))
    if default:
        cmd.append('--default')
    if master:
        cmd.append('--master')
    try:
        return json.loads(_check_output(cmd))
    except TypeError:
        return None


def create_zone(name, endpoints, default=False, master=False, zonegroup=None,
                access_key=None, secret=None, readonly=False):
    """
    Create a new RADOS Gateway Zone

    :param name: name of zone to create
    :type name: str
    :param endpoints: list of URLs to endpoints for zone
    :type endpoints: list[str]
    :param default: set new zone as the default zone
    :type default: boolean
    :param master: set new zone as the master zone
    :type master: boolean
    :param zonegroup: zonegroup to use for zone
    :type zonegroup: str
    :param access_key: access-key to use for the zone
    :type access_key: str
    :param secret: secret to use with access-key for the zone
    :type secret: str
    :param readonly: set zone as read only
    :type readonly: boolean
    :return: dict of zone configuration
    :rtype: dict
    """
    cmd = [
        RGW_ADMIN, '--id={}'.format(_key_name()),
        'zone', 'create',
        '--rgw-zone={}'.format(name),
        '--endpoints={}'.format(','.join(endpoints)),
    ]
    if zonegroup:
        cmd.append('--rgw-zonegroup={}'.format(zonegroup))
    if default:
        cmd.append('--default')
    if master:
        cmd.append('--master')
    if access_key and secret:
        cmd.append('--access-key={}'.format(access_key))
        cmd.append('--secret={}'.format(secret))
    cmd.append('--read-only={}'.format(1 if readonly else 0))
    try:
        return json.loads(_check_output(cmd))
    except TypeError:
        return None


def modify_zone(name, endpoints=None, default=False, master=False,
                access_key=None, secret=None, readonly=False):
    """
    Modify an existing RADOS Gateway zone

    :param name: name of zone to modify
    :type name: str
    :param endpoints: list of URLs to endpoints for zone
    :type endpoints: list[str]
    :param default: set zone as the default zone
    :type default: boolean
    :param master: set zone as the master zone
    :type master: boolean
    :param access_key: access-key to use for the zone
    :type access_key: str
    :param secret: secret to use with access-key for the zone
    :type secret: str
    :param readonly: set zone as read only
    :type readonly: boolean
    :return: zone configuration
    :rtype: dict
    """
    cmd = [
        RGW_ADMIN, '--id={}'.format(_key_name()),
        'zone', 'modify',
        '--rgw-zone={}'.format(name),
    ]
    if endpoints:
        cmd.append('--endpoints={}'.format(','.join(endpoints)))
    if access_key and secret:
        cmd.append('--access-key={}'.format(access_key))
        cmd.append('--secret={}'.format(secret))
    if master:
        cmd.append('--master')
    if default:
        cmd.append('--default')
    cmd.append('--read-only={}'.format(1 if readonly else 0))
    try:
        return json.loads(_check_output(cmd))
    except TypeError:
        return None


def update_period(fatal=True):
    """
    Update RADOS Gateway configuration period

    :param fatal: whether a failed period update should raise an error
    :type fatal: boolean
    """
    cmd = [
        RGW_ADMIN, '--id={}'.format(_key_name()),
        'period', 'update', '--commit'
    ]
    if fatal:
        _check_call(cmd)
    else:
        _call(cmd)


def tidy_defaults():
    """
    Purge any default zonegroup and zone definitions
    """
    if ('default' in list_zonegroups() and
            'default' in list_zones()):
        cmd = [
            RGW_ADMIN, '--id={}'.format(_key_name()),
            'zonegroup', 'remove',
            '--rgw-zonegroup=default',
            '--rgw-zone=default'
        ]
        _call(cmd)
        update_period()

    if 'default' in list_zones():
        cmd = [
            RGW_ADMIN, '--id={}'.format(_key_name()),
            'zone', 'delete',
            '--rgw-zone=default'
        ]
        _call(cmd)
        update_period()

    if 'default' in list_zonegroups():
        cmd = [
            RGW_ADMIN, '--id={}'.format(_key_name()),
            'zonegroup', 'delete',
            '--rgw-zonegroup=default'
        ]
        _call(cmd)
        update_period()


def create_system_user(username):
    """
    Create a RADOS Gateway system user for sync usage

    :param username: username of user to create
    :type username: str
    :return: access key and secret
    :rtype: (str, str)
    """
    cmd = [
        RGW_ADMIN, '--id={}'.format(_key_name()),
        'user', 'create',
        '--uid={}'.format(username),
        '--display-name=Synchronization User',
        '--system',
    ]
    try:
        result = json.loads(_check_output(cmd))
        return (result['keys'][0]['access_key'],
                result['keys'][0]['secret_key'])
    except TypeError:
        return (None, None)


def pull_realm(url, access_key, secret):
    """
    Pull in a RADOS Gateway Realm from a master RGW instance

    :param url: url of remote rgw deployment
    :type url: str
    :param access_key: access-key for remote rgw deployment
    :type access_key: str
    :param secret: secret for remote rgw deployment
    :type secret: str
    :return: realm configuration
    :rtype: dict
    """
    cmd = [
        RGW_ADMIN, '--id={}'.format(_key_name()),
        'realm', 'pull',
        '--url={}'.format(url),
        '--access-key={}'.format(access_key),
        '--secret={}'.format(secret),
    ]
    try:
        return json.loads(_check_output(cmd))
    except TypeError:
        return None


def pull_period(url, access_key, secret):
    """
    Pull in a RADOS Gateway period from a master RGW instance

    :param url: url of remote rgw deployment
    :type url: str
    :param access_key: access-key for remote rgw deployment
    :type access_key: str
    :param secret: secret for remote rgw deployment
    :type secret: str
    :return: period configuration
    :rtype: dict
    """
    cmd = [
        RGW_ADMIN, '--id={}'.format(_key_name()),
        'period', 'pull',
        '--url={}'.format(url),
        '--access-key={}'.format(access_key),
        '--secret={}'.format(secret),
    ]
    try:
        return json.loads(_check_output(cmd))
    except TypeError:
        return None

hooks/slave-relation-broken Symbolic link

@ -0,0 +1 @@
hooks.py

hooks/slave-relation-changed Symbolic link

@ -0,0 +1 @@
hooks.py

hooks/slave-relation-departed Symbolic link

@ -0,0 +1 @@
hooks.py

hooks/slave-relation-joined Symbolic link

@ -0,0 +1 @@
hooks.py

hooks/utils.py

@ -26,6 +26,8 @@ from charmhelpers.core.hookenv import (
    relation_ids,
    related_units,
    application_version_set,
    config,
    leader_get,
)
from charmhelpers.contrib.openstack import (
    context,
@ -54,6 +56,7 @@ from charmhelpers.fetch import (
    filter_installed_packages,
    get_upstream_version,
)
from charmhelpers.core import unitdata

# The interface is said to be satisfied if any one of the interfaces in the
# list has a complete context.
@ -64,7 +67,8 @@ CEPHRG_HA_RES = 'grp_cephrg_vips'
TEMPLATES_DIR = 'templates'
TEMPLATES = 'templates/'
HAPROXY_CONF = '/etc/haproxy/haproxy.cfg'
-CEPH_CONF = '/etc/ceph/ceph.conf'
+CEPH_DIR = '/etc/ceph'
+CEPH_CONF = '{}/ceph.conf'.format(CEPH_DIR)

VERSION_PACKAGE = 'radosgw'
@ -177,6 +181,41 @@ def check_optional_relations(configs):
        return ('blocked',
                'hacluster missing configuration: '
                'vip, vip_iface, vip_cidr')
    # NOTE: misc multi-site relation and config checks
    multisite_config = (config('realm'),
                        config('zonegroup'),
                        config('zone'))
    if relation_ids('master') or relation_ids('slave'):
        if not all(multisite_config):
            return ('blocked',
                    'multi-site configuration incomplete '
                    '(realm={realm}, zonegroup={zonegroup}'
                    ', zone={zone})'.format(**config()))
    if (all(multisite_config) and not
            (relation_ids('master') or relation_ids('slave'))):
        return ('blocked',
                'multi-site configuration but master/slave '
                'relation missing')
    if (all(multisite_config) and relation_ids('slave')):
        multisite_ready = False
        for rid in relation_ids('slave'):
            for unit in related_units(rid):
                if relation_get('url', unit=unit, rid=rid):
                    multisite_ready = True
                    continue
        if not multisite_ready:
            return ('waiting',
                    'multi-site master relation incomplete')
    master_configured = (
        leader_get('access_key'),
        leader_get('secret'),
        leader_get('restart_nonce'),
    )
    if (all(multisite_config) and
            relation_ids('master') and
            not all(master_configured)):
        return ('waiting',
                'waiting for configuration of master zone')
    # return 'unknown' as the lowest priority to not clobber an existing
    # status.
    return 'unknown', ''
@ -317,8 +356,77 @@ def request_per_unit_key():
def service_name():
-    """Determine the name of the RADOS Gateway service"""
+    """Determine the name of the RADOS Gateway service
+
+    :return: service name to use
+    :rtype: str
+    """
    if systemd_based_radosgw():
        return 'ceph-radosgw@rgw.{}'.format(socket.gethostname())
    else:
        return 'radosgw'


def ready_for_service(legacy=True):
    """
    Determine whether the local unit is ready to service requests, as
    determined by the presentation of the required cephx keys on the
    mon relation and the presence of the associated keyring in /etc/ceph.

    :param legacy: whether to check for legacy key support
    :type legacy: boolean
    :return: whether unit is ready
    :rtype: boolean
    """
    name = 'rgw.{}'.format(socket.gethostname())
    for rid in relation_ids('mon'):
        for unit in related_units(rid):
            if (relation_get('{}_key'.format(name),
                             rid=rid, unit=unit) and
                    os.path.exists(
                        os.path.join(
                            CEPH_DIR,
                            'ceph.client.{}.keyring'.format(name)
                        ))):
                return True
            if (legacy and
                    relation_get('radosgw_key',
                                 rid=rid, unit=unit) and
                    os.path.exists(
                        os.path.join(
                            CEPH_DIR,
                            'keyring.rados.gateway'
                        ))):
                return True
    return False


def restart_nonce_changed(nonce):
    """
    Determine whether the restart nonce provided has changed
    since this function was last invoked.

    :param nonce: value to confirm has changed against the
                  remembered value for restart_nonce.
    :type nonce: str
    :return: whether nonce has changed value
    :rtype: boolean
    """
    db = unitdata.kv()
    nonce_key = 'restart_nonce'
    if nonce != db.get(nonce_key):
        db.set(nonce_key, nonce)
        db.flush()
        return True
    return False
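# Example (editor's sketch): hooks/hooks.py pairs this with leader storage in
# its leader-settings-changed hook so that non-leader units restart radosgw
# once the leader publishes a new nonce:
#
#     if restart_nonce_changed(leader_get('restart_nonce')):
#         service_restart(service_name())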
def multisite_deployment():
    """Determine if deployment is multi-site

    :returns: whether multi-site deployment is configured
    :rtype: boolean
    """
    return all((config('zone'),
                config('zonegroup'),
                config('realm')))

metadata.yaml

@ -31,12 +31,16 @@ requires:
    scope: container
  certificates:
    interface: tls-certificates
  slave:
    interface: radosgw-multisite
provides:
  nrpe-external-master:
    interface: nrpe-external-master
    scope: container
  gateway:
    interface: http
  master:
    interface: radosgw-multisite
peers:
  cluster:
    interface: swift-ha
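# Editor's note: the new master/slave endpoints are related either within a
# single model, e.g. 'juju add-relation rgw-east:master rgw-west:slave', or
# across models via 'juju offer'/'juju consume' as shown in the README.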

templates/ceph.conf

@ -33,6 +33,10 @@ rgw socket path = /tmp/radosgw.sock
log file = /var/log/ceph/radosgw.log
{% endif %}
{% if rgw_zone -%}
rgw_zone = {{ rgw_zone }}
{% endif %}
rgw init timeout = 1200
rgw frontends = civetweb port={{ port }}
{% if auth_type == 'keystone' %}

unit_tests/test_actions.py

@ -76,3 +76,67 @@ class MainTestCase(CharmTestCase):
        with mock.patch.dict(actions.ACTIONS, {"foo": dummy_action}):
            actions.main(["foo"])
        self.assertEqual(dummy_calls, ["uh oh"])


class MultisiteActionsTestCase(CharmTestCase):

    TO_PATCH = [
        'action_fail',
        'action_set',
        'multisite',
        'config',
    ]

    def setUp(self):
        super(MultisiteActionsTestCase, self).setUp(actions,
                                                    self.TO_PATCH)
        self.config.side_effect = self.test_config.get

    def test_promote(self):
        self.test_config.set('zone', 'testzone')
        actions.promote([])
        self.multisite.modify_zone.assert_called_once_with(
            'testzone',
            default=True,
            master=True,
        )
        self.multisite.update_period.assert_called_once_with()

    def test_promote_unconfigured(self):
        actions.promote([])
        self.action_fail.assert_called_once()

    def test_readonly(self):
        self.test_config.set('zone', 'testzone')
        actions.readonly([])
        self.multisite.modify_zone.assert_called_once_with(
            'testzone',
            readonly=True,
        )
        self.multisite.update_period.assert_called_once_with()

    def test_readonly_unconfigured(self):
        actions.readonly([])
        self.action_fail.assert_called_once()

    def test_readwrite(self):
        self.test_config.set('zone', 'testzone')
        actions.readwrite([])
        self.multisite.modify_zone.assert_called_once_with(
            'testzone',
            readonly=False,
        )
        self.multisite.update_period.assert_called_once_with()

    def test_readwrite_unconfigured(self):
        actions.readwrite([])
        self.action_fail.assert_called_once()

    def test_tidydefaults(self):
        self.test_config.set('zone', 'testzone')
        actions.tidydefaults([])
        self.multisite.tidy_defaults.assert_called_once_with()

    def test_tidydefaults_unconfigured(self):
        actions.tidydefaults([])
        self.action_fail.assert_called_once()

unit_tests/test_ceph_radosgw_context.py

@ -324,7 +324,8 @@ class MonContextTest(CharmTestCase):
            'loglevel': 1,
            'port': 70,
            'client_radosgw_gateway': {'rgw init timeout': 60},
-            'ipv6': False
+            'ipv6': False,
+            'rgw_zone': None,
        }
        self.assertEqual(expect, mon_ctxt())
        self.assertFalse(mock_ensure_rsv_v6.called)
@ -368,7 +369,8 @@ class MonContextTest(CharmTestCase):
            'loglevel': 1,
            'port': 70,
            'client_radosgw_gateway': {'rgw init timeout': 60},
-            'ipv6': False
+            'ipv6': False,
+            'rgw_zone': None,
        }
        self.assertEqual(expect, mon_ctxt())
        self.assertFalse(mock_ensure_rsv_v6.called)
@ -421,7 +423,8 @@ class MonContextTest(CharmTestCase):
            'loglevel': 1,
            'port': 70,
            'client_radosgw_gateway': {'rgw init timeout': 60},
-            'ipv6': False
+            'ipv6': False,
+            'rgw_zone': None,
        }
        self.assertEqual(expect, mon_ctxt())
@ -456,7 +459,8 @@ class MonContextTest(CharmTestCase):
            'loglevel': 1,
            'port': 70,
            'client_radosgw_gateway': {'rgw init timeout': 60},
-            'ipv6': False
+            'ipv6': False,
+            'rgw_zone': None,
        }
        self.assertEqual(expect, mon_ctxt())

unit_tests/test_ceph_radosgw_utils.py

@ -31,6 +31,8 @@ TO_PATCH = [
    'socket',
    'cmp_pkgrevno',
    'init_is_systemd',
    'unitdata',
    'config',
]
@ -39,6 +41,7 @@ class CephRadosGWUtilTests(CharmTestCase):
        super(CephRadosGWUtilTests, self).setUp(utils, TO_PATCH)
        self.get_upstream_version.return_value = '10.2.2'
        self.socket.gethostname.return_value = 'testhost'
        self.config.side_effect = self.test_config.get

    def test_assess_status(self):
        with patch.object(utils, 'assess_status_func') as asf:
@ -136,6 +139,105 @@ class CephRadosGWUtilTests(CharmTestCase):
        self._setup_relation_data(_relation_data)
        self.assertTrue(utils.systemd_based_radosgw())

    @patch.object(utils.os.path, 'exists')
    def test_ready_for_service(self, mock_exists):
        mock_exists.return_value = True
        _relation_data = {
            'mon:1': {
                'ceph-mon/0': {
                    'rgw.testhost_key': 'testkey',
                },
                'ceph-mon/1': {
                    'rgw.testhost_key': 'testkey',
                },
                'ceph-mon/2': {
                    'rgw.testhost_key': 'testkey',
                },
            }
        }
        self._setup_relation_data(_relation_data)
        self.assertTrue(utils.ready_for_service())
        mock_exists.assert_called_with(
            '/etc/ceph/ceph.client.rgw.testhost.keyring'
        )

    @patch.object(utils.os.path, 'exists')
    def test_ready_for_service_legacy(self, mock_exists):
        mock_exists.return_value = True
        _relation_data = {
            'mon:1': {
                'ceph-mon/0': {
                    'radosgw_key': 'testkey',
                },
                'ceph-mon/1': {
                    'radosgw_key': 'testkey',
                },
                'ceph-mon/2': {
                    'radosgw_key': 'testkey',
                },
            }
        }
        self._setup_relation_data(_relation_data)
        self.assertTrue(utils.ready_for_service())
        mock_exists.assert_called_with(
            '/etc/ceph/keyring.rados.gateway'
        )

    @patch.object(utils.os.path, 'exists')
    def test_ready_for_service_legacy_skip(self, mock_exists):
        mock_exists.return_value = True
        _relation_data = {
            'mon:1': {
                'ceph-mon/0': {
                    'radosgw_key': 'testkey',
                },
                'ceph-mon/1': {
                    'radosgw_key': 'testkey',
                },
                'ceph-mon/2': {
                    'radosgw_key': 'testkey',
                },
            }
        }
        self._setup_relation_data(_relation_data)
        self.assertFalse(utils.ready_for_service(legacy=False))

    def test_not_ready_for_service(self):
        _relation_data = {
            'mon:1': {
                'ceph-mon/0': {
                },
                'ceph-mon/1': {
                },
                'ceph-mon/2': {
                },
            }
        }
        self._setup_relation_data(_relation_data)
        self.assertFalse(utils.ready_for_service())

    @patch.object(utils.os.path, 'exists')
    def test_ready_for_service_no_keyring(self, mock_exists):
        mock_exists.return_value = False
        _relation_data = {
            'mon:1': {
                'ceph-mon/0': {
                    'rgw.testhost_key': 'testkey',
                },
                'ceph-mon/1': {
                    'rgw.testhost_key': 'testkey',
                },
                'ceph-mon/2': {
                    'rgw.testhost_key': 'testkey',
                },
            }
        }
        self._setup_relation_data(_relation_data)
        self.assertFalse(utils.ready_for_service())
        mock_exists.assert_called_with(
            '/etc/ceph/ceph.client.rgw.testhost.keyring'
        )

    def test_request_per_unit_key(self):
        self.init_is_systemd.return_value = False
        self.cmp_pkgrevno.return_value = -1
@ -157,3 +259,44 @@ class CephRadosGWUtilTests(CharmTestCase):
        mock_systemd_based_radosgw.return_value = False
        self.assertEqual(utils.service_name(),
                         'radosgw')

    def test_restart_nonce_changed_new(self):
        _db_data = {}
        mock_db = MagicMock()
        mock_db.get.side_effect = lambda key: _db_data.get(key)
        self.unitdata.kv.return_value = mock_db
        self.assertTrue(utils.restart_nonce_changed('foobar'))
        mock_db.set.assert_called_once_with('restart_nonce',
                                            'foobar')
        mock_db.flush.assert_called_once_with()

    def test_restart_nonce_changed_existing(self):
        _db_data = {
            'restart_nonce': 'foobar'
        }
        mock_db = MagicMock()
        mock_db.get.side_effect = lambda key: _db_data.get(key)
        self.unitdata.kv.return_value = mock_db
        self.assertFalse(utils.restart_nonce_changed('foobar'))
        mock_db.set.assert_not_called()
        mock_db.flush.assert_not_called()

    def test_restart_nonce_changed_changed(self):
        _db_data = {
            'restart_nonce': 'foobar'
        }
        mock_db = MagicMock()
        mock_db.get.side_effect = lambda key: _db_data.get(key)
        self.unitdata.kv.return_value = mock_db
        self.assertTrue(utils.restart_nonce_changed('soofar'))
        mock_db.set.assert_called_once_with('restart_nonce',
                                            'soofar')
        mock_db.flush.assert_called_once_with()

    def test_multisite_deployment(self):
        self.test_config.set('zone', 'testzone')
        self.test_config.set('zonegroup', 'testzonegroup')
        self.test_config.set('realm', 'testrealm')
        self.assertTrue(utils.multisite_deployment())
        self.test_config.set('realm', None)
        self.assertFalse(utils.multisite_deployment())

unit_tests/test_hooks.py

@ -13,7 +13,7 @@
# limitations under the License.
from mock import (
-    patch, call, MagicMock
+    patch, call, MagicMock, ANY
)
from test_utils import (
@ -64,6 +64,7 @@ TO_PATCH = [
    'filter_installed_packages',
    'filter_missing_packages',
    'ceph_utils',
    'multisite_deployment',
]
@ -81,6 +82,7 @@ class CephRadosGWTests(CharmTestCase):
        self.systemd_based_radosgw.return_value = False
        self.filter_installed_packages.side_effect = lambda pkgs: pkgs
        self.filter_missing_packages.side_effect = lambda pkgs: pkgs
        self.multisite_deployment.return_value = False

    def test_upgrade_available(self):
        _vers = {
@ -367,3 +369,305 @@ class CephRadosGWTests(CharmTestCase):
            'vault/0'
        )
        mock_configure_https.assert_called_once_with()


class MiscMultisiteTests(CharmTestCase):

    TO_PATCH = [
        'restart_nonce_changed',
        'relation_ids',
        'related_units',
        'leader_get',
        'is_leader',
        'master_relation_joined',
        'slave_relation_changed',
        'service_restart',
        'service_name',
    ]

    _relation_ids = {
        'master': ['master:1'],
        'slave': ['slave:1'],
    }

    _related_units = {
        'master:1': ['rgw/0', 'rgw/1'],
        'slave:1': ['rgw-s/0', 'rgw-s/1'],
    }

    def setUp(self):
        super(MiscMultisiteTests, self).setUp(ceph_hooks,
                                              self.TO_PATCH)
        self.relation_ids.side_effect = (
            lambda endpoint: self._relation_ids.get(endpoint) or []
        )
        self.related_units.side_effect = (
            lambda rid: self._related_units.get(rid) or []
        )
        self.service_name.return_value = 'rgw@hostname'

    def test_leader_settings_changed(self):
        self.restart_nonce_changed.return_value = True
        self.is_leader.return_value = False
        ceph_hooks.leader_settings_changed()
        self.service_restart.assert_called_once_with('rgw@hostname')
        self.master_relation_joined.assert_called_once_with('master:1')

    def test_process_multisite_relations(self):
        ceph_hooks.process_multisite_relations()
        self.master_relation_joined.assert_called_once_with('master:1')
        self.slave_relation_changed.assert_has_calls([
            call('slave:1', 'rgw-s/0'),
            call('slave:1', 'rgw-s/1'),
        ])


class CephRadosMultisiteTests(CharmTestCase):

    TO_PATCH = [
        'ready_for_service',
        'canonical_url',
        'relation_set',
        'relation_get',
        'leader_get',
        'config',
        'is_leader',
        'multisite',
        'leader_set',
        'service_restart',
        'service_name',
        'log',
        'multisite_deployment',
        'systemd_based_radosgw',
    ]

    def setUp(self):
        super(CephRadosMultisiteTests, self).setUp(ceph_hooks,
                                                   self.TO_PATCH)
        self.config.side_effect = self.test_config.get
        self.ready_for_service.return_value = True
        self.canonical_url.return_value = 'http://rgw'
        self.service_name.return_value = 'rgw@hostname'
        self.multisite_deployment.return_value = True
        self.systemd_based_radosgw.return_value = True


class MasterMultisiteTests(CephRadosMultisiteTests):

    _complete_config = {
        'realm': 'testrealm',
        'zonegroup': 'testzonegroup',
        'zone': 'testzone',
    }

    _leader_data = {
        'access_key': 'mykey',
        'secret': 'mysecret',
    }

    _leader_data_done = {
        'access_key': 'mykey',
        'secret': 'mysecret',
        'restart_nonce': 'foobar',
    }

    def test_master_relation_joined_missing_config(self):
        ceph_hooks.master_relation_joined('master:1')
        self.config.assert_has_calls([
            call('realm'),
            call('zonegroup'),
            call('zone'),
        ])
        self.relation_set.assert_not_called()

    def test_master_relation_joined_create_everything(self):
        for k, v in self._complete_config.items():
            self.test_config.set(k, v)
        self.is_leader.return_value = True
        self.leader_get.side_effect = lambda attr: self._leader_data.get(attr)
        self.multisite.list_realms.return_value = []
        self.multisite.list_zonegroups.return_value = []
        self.multisite.list_zones.return_value = []
        self.multisite.list_users.return_value = []
        self.multisite.create_system_user.return_value = (
            'mykey', 'mysecret',
        )
        ceph_hooks.master_relation_joined('master:1')
        self.config.assert_has_calls([
            call('realm'),
            call('zonegroup'),
            call('zone'),
        ])
        self.multisite.create_realm.assert_called_once_with(
            'testrealm',
            default=True,
        )
        self.multisite.create_zonegroup.assert_called_once_with(
            'testzonegroup',
            endpoints=['http://rgw:80'],
            default=True,
            master=True,
            realm='testrealm',
        )
        self.multisite.create_zone.assert_called_once_with(
            'testzone',
            endpoints=['http://rgw:80'],
            default=True,
            master=True,
            zonegroup='testzonegroup',
        )
        self.multisite.create_system_user.assert_called_once_with(
            ceph_hooks.MULTISITE_SYSTEM_USER
        )
        self.multisite.modify_zone.assert_called_once_with(
            'testzone',
            access_key='mykey',
            secret='mysecret',
        )
        self.multisite.update_period.assert_has_calls([
            call(fatal=False),
            call(),
        ])
        self.service_restart.assert_called_once_with('rgw@hostname')
        self.leader_set.assert_has_calls([
            call(access_key='mykey',
                 secret='mysecret'),
            call(restart_nonce=ANY),
        ])
        self.relation_set.assert_called_with(
            relation_id='master:1',
            access_key='mykey',
            secret='mysecret',
        )

    def test_master_relation_joined_create_nothing(self):
        for k, v in self._complete_config.items():
            self.test_config.set(k, v)
        self.is_leader.return_value = True
        self.leader_get.side_effect = (
            lambda attr: self._leader_data_done.get(attr)
        )
        self.multisite.list_realms.return_value = ['testrealm']
        self.multisite.list_zonegroups.return_value = ['testzonegroup']
        self.multisite.list_zones.return_value = ['testzone']
        self.multisite.list_users.return_value = [
            ceph_hooks.MULTISITE_SYSTEM_USER
        ]
        ceph_hooks.master_relation_joined('master:1')
        self.multisite.create_realm.assert_not_called()
        self.multisite.create_zonegroup.assert_not_called()
        self.multisite.create_zone.assert_not_called()
        self.multisite.create_system_user.assert_not_called()
        self.multisite.update_period.assert_not_called()
        self.service_restart.assert_not_called()
        self.leader_set.assert_not_called()

    def test_master_relation_joined_not_leader(self):
        for k, v in self._complete_config.items():
            self.test_config.set(k, v)
        self.is_leader.return_value = False
        self.leader_get.side_effect = lambda attr: self._leader_data.get(attr)
        ceph_hooks.master_relation_joined('master:1')
        self.relation_set.assert_called_once_with(
            relation_id='master:1',
            realm='testrealm',
            zonegroup='testzonegroup',
            url='http://rgw:80',
            access_key='mykey',
            secret='mysecret',
        )
        self.multisite.list_realms.assert_not_called()


class SlaveMultisiteTests(CephRadosMultisiteTests):

    _complete_config = {
        'realm': 'testrealm',
        'zonegroup': 'testzonegroup',
        'zone': 'testzone2',
    }

    _test_relation = {
        'realm': 'testrealm',
        'zonegroup': 'testzonegroup',
        'access_key': 'anotherkey',
        'secret': 'anothersecret',
        'url': 'http://master:80',
    }

    _test_bad_relation = {
        'realm': 'anotherrealm',
        'zonegroup': 'anotherzg',
        'access_key': 'anotherkey',
        'secret': 'anothersecret',
        'url': 'http://master:80',
    }

    def test_slave_relation_changed(self):
        for k, v in self._complete_config.items():
            self.test_config.set(k, v)
        self.is_leader.return_value = True
        self.leader_get.return_value = None
        self.relation_get.return_value = self._test_relation
        self.multisite.list_realms.return_value = []
        self.multisite.list_zones.return_value = []
        ceph_hooks.slave_relation_changed('slave:1', 'rgw/0')
        self.config.assert_has_calls([
            call('realm'),
            call('zonegroup'),
            call('zone'),
        ])
        self.multisite.pull_realm.assert_called_once_with(
            url=self._test_relation['url'],
            access_key=self._test_relation['access_key'],
            secret=self._test_relation['secret'],
        )
        self.multisite.pull_period.assert_called_once_with(
            url=self._test_relation['url'],
            access_key=self._test_relation['access_key'],
            secret=self._test_relation['secret'],
        )
        self.multisite.set_default_realm.assert_called_once_with(
            'testrealm'
        )
        self.multisite.create_zone.assert_called_once_with(
            'testzone2',
            endpoints=['http://rgw:80'],
            default=False,
            master=False,
            zonegroup='testzonegroup',
            access_key=self._test_relation['access_key'],
            secret=self._test_relation['secret'],
        )
        self.multisite.update_period.assert_has_calls([
            call(fatal=False),
            call(),
        ])
        self.service_restart.assert_called_once()
        self.leader_set.assert_called_once_with(restart_nonce=ANY)

    def test_slave_relation_changed_incomplete_relation(self):
        for k, v in self._complete_config.items():
            self.test_config.set(k, v)
        self.is_leader.return_value = True
        self.relation_get.return_value = {}
        ceph_hooks.slave_relation_changed('slave:1', 'rgw/0')
        self.config.assert_not_called()

    def test_slave_relation_changed_mismatching_config(self):
        for k, v in self._complete_config.items():
            self.test_config.set(k, v)
        self.is_leader.return_value = True
        self.relation_get.return_value = self._test_bad_relation
        ceph_hooks.slave_relation_changed('slave:1', 'rgw/0')
        self.config.assert_has_calls([
            call('realm'),
            call('zonegroup'),
            call('zone'),
        ])
        self.multisite.list_realms.assert_not_called()

    def test_slave_relation_changed_not_leader(self):
        self.is_leader.return_value = False
        ceph_hooks.slave_relation_changed('slave:1', 'rgw/0')
        self.relation_get.assert_not_called()

unit_tests/test_multisite.py Normal file

@ -0,0 +1,237 @@
# Copyright 2019 Canonical Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import inspect
import os

import mock

import multisite

from test_utils import CharmTestCase


def whoami():
    return inspect.stack()[1][3]


class TestMultisiteHelpers(CharmTestCase):

    TO_PATCH = [
        'subprocess',
        'socket',
        'hookenv',
    ]

    def setUp(self):
        super(TestMultisiteHelpers, self).setUp(multisite, self.TO_PATCH)
        self.socket.gethostname.return_value = 'testhost'

    def _testdata(self, funcname):
        return os.path.join(os.path.dirname(__file__),
                            'testdata',
                            '{}.json'.format(funcname))

    def test_create_realm(self):
        with open(self._testdata(whoami()), 'rb') as f:
            self.subprocess.check_output.return_value = f.read()
        result = multisite.create_realm('beedata', default=True)
        self.assertEqual(result['name'], 'beedata')
        self.subprocess.check_output.assert_called_with([
            'radosgw-admin', '--id=rgw.testhost',
            'realm', 'create',
            '--rgw-realm=beedata', '--default'
        ])

    def test_list_realms(self):
        with open(self._testdata(whoami()), 'rb') as f:
            self.subprocess.check_output.return_value = f.read()
        result = multisite.list_realms()
        self.assertTrue('beedata' in result)

    def test_set_default_zone(self):
        multisite.set_default_realm('newrealm')
        self.subprocess.check_call.assert_called_with([
            'radosgw-admin', '--id=rgw.testhost',
            'realm', 'default',
            '--rgw-realm=newrealm'
        ])

    def test_create_zonegroup(self):
        with open(self._testdata(whoami()), 'rb') as f:
            self.subprocess.check_output.return_value = f.read()
        result = multisite.create_zonegroup(
            'brundall',
            endpoints=['http://localhost:80'],
            master=True,
            default=True,
            realm='beedata',
        )
        self.assertEqual(result['name'], 'brundall')
        self.subprocess.check_output.assert_called_with([
            'radosgw-admin', '--id=rgw.testhost',
            'zonegroup', 'create',
            '--rgw-zonegroup=brundall',
            '--endpoints=http://localhost:80',
            '--rgw-realm=beedata',
            '--default',
            '--master'
        ])

    def test_list_zonegroups(self):
        with open(self._testdata(whoami()), 'rb') as f:
            self.subprocess.check_output.return_value = f.read()
        result = multisite.list_zonegroups()
        self.assertTrue('brundall' in result)

    def test_create_zone(self):
        with open(self._testdata(whoami()), 'rb') as f:
            self.subprocess.check_output.return_value = f.read()
        result = multisite.create_zone(
            'brundall-east',
            endpoints=['http://localhost:80'],
            master=True,
            default=True,
            zonegroup='brundall',
            access_key='mykey',
            secret='mypassword',
        )
        self.assertEqual(result['name'], 'brundall-east')
        self.subprocess.check_output.assert_called_with([
            'radosgw-admin', '--id=rgw.testhost',
            'zone', 'create',
            '--rgw-zone=brundall-east',
            '--endpoints=http://localhost:80',
            '--rgw-zonegroup=brundall',
            '--default', '--master',
            '--access-key=mykey',
            '--secret=mypassword',
            '--read-only=0',
        ])

    def test_modify_zone(self):
        multisite.modify_zone(
            'brundall-east',
            endpoints=['http://localhost:80', 'https://localhost:443'],
            access_key='mykey',
            secret='secret',
            readonly=True
        )
        self.subprocess.check_output.assert_called_with([
            'radosgw-admin', '--id=rgw.testhost',
            'zone', 'modify',
            '--rgw-zone=brundall-east',
            '--endpoints=http://localhost:80,https://localhost:443',
            '--access-key=mykey', '--secret=secret',
            '--read-only=1',
        ])

    def test_modify_zone_promote_master(self):
        multisite.modify_zone(
            'brundall-east',
            default=True,
            master=True,
        )
        self.subprocess.check_output.assert_called_with([
            'radosgw-admin', '--id=rgw.testhost',
            'zone', 'modify',
            '--rgw-zone=brundall-east',
            '--master',
            '--default',
            '--read-only=0',
        ])

    def test_modify_zone_partial_credentials(self):
        multisite.modify_zone(
            'brundall-east',
            endpoints=['http://localhost:80', 'https://localhost:443'],
            access_key='mykey',
        )
        self.subprocess.check_output.assert_called_with([
            'radosgw-admin', '--id=rgw.testhost',
            'zone', 'modify',
            '--rgw-zone=brundall-east',
            '--endpoints=http://localhost:80,https://localhost:443',
            '--read-only=0',
        ])

    def test_list_zones(self):
        with open(self._testdata(whoami()), 'rb') as f:
            self.subprocess.check_output.return_value = f.read()
        result = multisite.list_zones()
        self.assertTrue('brundall-east' in result)

    def test_update_period(self):
        multisite.update_period()
        self.subprocess.check_call.assert_called_once_with([
            'radosgw-admin', '--id=rgw.testhost',
            'period', 'update', '--commit'
        ])

    @mock.patch.object(multisite, 'list_zonegroups')
    @mock.patch.object(multisite, 'list_zones')
    @mock.patch.object(multisite, 'update_period')
    def test_tidy_defaults(self,
                           mock_update_period,
                           mock_list_zones,
                           mock_list_zonegroups):
        mock_list_zones.return_value = ['default']
        mock_list_zonegroups.return_value = ['default']
        multisite.tidy_defaults()
        self.subprocess.call.assert_has_calls([
            mock.call(['radosgw-admin', '--id=rgw.testhost',
                       'zonegroup', 'remove',
                       '--rgw-zonegroup=default', '--rgw-zone=default']),
            mock.call(['radosgw-admin', '--id=rgw.testhost',
                       'zone', 'delete',
                       '--rgw-zone=default']),
            mock.call(['radosgw-admin', '--id=rgw.testhost',
                       'zonegroup', 'delete',
                       '--rgw-zonegroup=default'])
        ])
        mock_update_period.assert_called_with()

    @mock.patch.object(multisite, 'list_zonegroups')
    @mock.patch.object(multisite, 'list_zones')
    @mock.patch.object(multisite, 'update_period')
    def test_tidy_defaults_noop(self,
                                mock_update_period,
                                mock_list_zones,
                                mock_list_zonegroups):
        mock_list_zones.return_value = ['brundall-east']
        mock_list_zonegroups.return_value = ['brundall']
        multisite.tidy_defaults()
        self.subprocess.call.assert_not_called()
        mock_update_period.assert_not_called()

    def test_pull_realm(self):
        multisite.pull_realm(url='http://master:80',
                             access_key='testkey',
                             secret='testsecret')
        self.subprocess.check_output.assert_called_once_with([
            'radosgw-admin', '--id=rgw.testhost',
            'realm', 'pull',
            '--url=http://master:80',
            '--access-key=testkey', '--secret=testsecret',
        ])

    def test_pull_period(self):
        multisite.pull_period(url='http://master:80',
                              access_key='testkey',
                              secret='testsecret')
        self.subprocess.check_output.assert_called_once_with([
            'radosgw-admin', '--id=rgw.testhost',
            'period', 'pull',
            '--url=http://master:80',
            '--access-key=testkey', '--secret=testsecret',
        ])

unit_tests/testdata/test_create_realm.json Normal file

@ -0,0 +1,7 @@
{
    "id": "793a0176-ef7d-4d97-b544-a921e19a52e7",
    "name": "beedata",
    "current_period": "1f30e5fa-2c24-471d-b17d-61135c9f9510",
    "epoch": 3
}

unit_tests/testdata/test_create_zone.json Normal file

@ -0,0 +1,36 @@
{
    "id": "a69d4cd8-1881-4040-ad7c-914ca35af3b2",
    "name": "brundall-east",
    "domain_root": "brundall-east.rgw.meta:root",
    "control_pool": "brundall-east.rgw.control",
    "gc_pool": "brundall-east.rgw.log:gc",
    "lc_pool": "brundall-east.rgw.log:lc",
    "log_pool": "brundall-east.rgw.log",
    "intent_log_pool": "brundall-east.rgw.log:intent",
    "usage_log_pool": "brundall-east.rgw.log:usage",
    "reshard_pool": "brundall-east.rgw.log:reshard",
    "user_keys_pool": "brundall-east.rgw.meta:users.keys",
    "user_email_pool": "brundall-east.rgw.meta:users.email",
    "user_swift_pool": "brundall-east.rgw.meta:users.swift",
    "user_uid_pool": "brundall-east.rgw.meta:users.uid",
    "system_key": {
        "access_key": "90FM6V8B44BSN1MVKYW6",
        "secret_key": "bFHSPN3PB4QZqHfTiNIn11ey8kA8OA6Php6kGpdH"
    },
    "placement_pools": [
        {
            "key": "default-placement",
            "val": {
                "index_pool": "brundall-east.rgw.buckets.index",
                "data_pool": "brundall-east.rgw.buckets.data",
                "data_extra_pool": "brundall-east.rgw.buckets.non-ec",
                "index_type": 0,
                "compression": ""
            }
        }
    ],
    "metadata_heap": "",
    "tier_config": [],
    "realm_id": "793a0176-ef7d-4d97-b544-a921e19a52e7"
}

unit_tests/testdata/test_create_zonegroup.json Normal file

@ -0,0 +1,51 @@
{
    "id": "3f41f138-5669-4b63-bf61-278f28fc9306",
    "name": "brundall",
    "api_name": "brundall",
    "is_master": "true",
    "endpoints": [
        "http://10.5.100.2:80"
    ],
    "hostnames": [],
    "hostnames_s3website": [],
    "master_zone": "a69d4cd8-1881-4040-ad7c-914ca35af3b2",
    "zones": [
        {
            "id": "8be215da-5316-4d12-a584-44b246285a3f",
            "name": "brundall-west",
            "endpoints": [
                "http://10.5.100.2:80"
            ],
            "log_meta": "false",
            "log_data": "true",
            "bucket_index_max_shards": 0,
            "read_only": "false",
            "tier_type": "",
            "sync_from_all": "true",
            "sync_from": []
        },
        {
            "id": "a69d4cd8-1881-4040-ad7c-914ca35af3b2",
            "name": "brundall-east",
            "endpoints": [
                "http://10.5.100.1:80"
            ],
            "log_meta": "false",
            "log_data": "true",
            "bucket_index_max_shards": 0,
            "read_only": "false",
            "tier_type": "",
            "sync_from_all": "true",
            "sync_from": []
        }
    ],
    "placement_targets": [
        {
            "name": "default-placement",
            "tags": []
        }
    ],
    "default_placement": "default-placement",
    "realm_id": "793a0176-ef7d-4d97-b544-a921e19a52e7"
}

unit_tests/testdata/test_list_realms.json Normal file

@ -0,0 +1,6 @@
{
    "default_info": "793a0176-ef7d-4d97-b544-a921e19a52e7",
    "realms": [
        "beedata"
    ]
}

unit_tests/testdata/test_list_users.json Normal file

@ -0,0 +1,5 @@
[
    "testuser",
    "multisite-sync"
]

unit_tests/testdata/test_list_zonegroups.json Normal file

@ -0,0 +1,6 @@
{
    "default_info": "3f41f138-5669-4b63-bf61-278f28fc9306",
    "zonegroups": [
        "brundall"
    ]
}

unit_tests/testdata/test_list_zones.json Normal file

@ -0,0 +1,6 @@
{
    "default_info": "a69d4cd8-1881-4040-ad7c-914ca35af3b2",
    "zones": [
        "brundall-east"
    ]
}