Merge next branch

This commit is contained in:
Corey Bryant 2015-03-11 11:45:09 +00:00
commit eafbb59cf2
20 changed files with 2368 additions and 290 deletions


@@ -1,37 +1,72 @@
Overview
========

This charm provides Keystone, the Openstack identity service. Its target
platform is (ideally) Ubuntu LTS + Openstack.

Usage
=====

The following interfaces are provided:

- nrpe-external-master: Used to generate Nagios checks.

- identity-service: Openstack API endpoints request an entry in the
  Keystone service catalog + endpoint template catalog. When a relation
  is established, Keystone receives: service name, region, public_url,
  admin_url and internal_url. It first checks that the requested service
  is listed as a supported service. This list should stay updated to
  support current Openstack core services. If the service is supported,
  an entry in the service catalog is created, an endpoint template is
  created and an admin token is generated. The other end of the relation
  receives the token as well as info on which ports Keystone is listening
  on.

- keystone-service: This is currently only used by Horizon/dashboard
  as its interaction with Keystone is different from other Openstack API
  services. That is, Horizon requests that a Keystone role and token exist.
  During a relation, Horizon requests its configured default role and
  Keystone responds with a token and the auth + admin ports on which
  Keystone is listening.

- identity-admin: Charms use this relation to obtain the credentials
  for the admin user. This is intended for charms that automatically
  provision users, tenants, etc. or that otherwise automate using the
  Openstack cluster deployment.

- identity-notifications: Used to broadcast messages to any services
  listening on the interface.

Database
--------

Keystone requires a database. By default, a local sqlite database is used.
The charm supports relations to a shared-db via the mysql-shared interface.
When a new data store is configured, the charm ensures the minimum
administrator credentials exist (as configured via charm configuration).

HA/Clustering
-------------

VIP is only required if you plan on multi-unit clustering (requires relating
with the hacluster charm). The VIP becomes a highly-available API endpoint.

SSL/HTTPS
---------

This charm also supports SSL and HTTPS endpoints. In order to ensure SSL
certificates are only created once and distributed to all units, one unit gets
elected as an ssl-cert-master. One side-effect of this is that as units are
scaled out, the currently elected leader needs to be running in order for new
nodes to sync certificates. This 'feature' works around the lack of native
leadership election in Juju itself; until that lands we have to rely on this
mechanism. Also, if a keystone unit does go down, it must be removed from
Juju, i.e.

    juju destroy-unit keystone/<unit-num>

Otherwise it will be assumed that this unit may come back at some point and
therefore must be known to be in sync with the rest before continuing.
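The interfaces above are exercised by relating keystone to other charms; a
minimal sketch (the mysql and glance service names are illustrative, not
prescribed by this document):

```shell
# Minimal deployment sketch (illustrative names):
juju deploy keystone
juju deploy mysql

# shared-db relation via the mysql-shared interface
juju add-relation keystone:shared-db mysql:shared-db

# any Openstack API charm registers its endpoints over identity-service
juju deploy glance
juju add-relation keystone:identity-service glance:identity-service
```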
Deploying from source
---------------------


@@ -177,7 +177,7 @@ options:
with the other members of the HA Cluster.
ha-mcastport:
type: int
default: 5403
default: 5434
description: |
Default multicast port number that will be used to communicate between
HA Cluster nodes.
@@ -267,4 +267,10 @@ options:
juju-myservice-0
If you're running multiple environments with the same services in them
this allows you to differentiate between them.
nagios_servicegroups:
default: ""
type: string
description: |
A comma-separated list of nagios servicegroups.
If left empty, the nagios_context will be used as the servicegroup.


@@ -0,0 +1,18 @@
# Copyright 2014-2015 Canonical Limited.
#
# This file is part of charm-helpers.
#
# charm-helpers is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License version 3 as
# published by the Free Software Foundation.
#
# charm-helpers is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
# dummy __init__.py to fool syncer into thinking this is a syncable python
# module


@@ -0,0 +1,32 @@
#!/bin/bash
#--------------------------------------------
# This file is managed by Juju
#--------------------------------------------
#
# Copyright 2009,2012 Canonical Ltd.
# Author: Tom Haddon
CRITICAL=0
NOTACTIVE=''
LOGFILE=/var/log/nagios/check_haproxy.log
AUTH=$(grep -r "stats auth" /etc/haproxy | head -1 | awk '{print $4}')
for appserver in $(grep ' server' /etc/haproxy/haproxy.cfg | awk '{print $2}');
do
output=$(/usr/lib/nagios/plugins/check_http -a ${AUTH} -I 127.0.0.1 -p 8888 --regex="class=\"(active|backup)(2|3).*${appserver}" -e ' 200 OK')
if [ $? != 0 ]; then
date >> $LOGFILE
echo $output >> $LOGFILE
/usr/lib/nagios/plugins/check_http -a ${AUTH} -I 127.0.0.1 -p 8888 -v | grep $appserver >> $LOGFILE 2>&1
CRITICAL=1
NOTACTIVE="${NOTACTIVE} $appserver"
fi
done
if [ $CRITICAL = 1 ]; then
echo "CRITICAL:${NOTACTIVE}"
exit 2
fi
echo "OK: All haproxy instances looking good"
exit 0


@@ -0,0 +1,30 @@
#!/bin/bash
#--------------------------------------------
# This file is managed by Juju
#--------------------------------------------
#
# Copyright 2009,2012 Canonical Ltd.
# Author: Tom Haddon
# These should be config options at some stage
CURRQthrsh=0
MAXQthrsh=100
AUTH=$(grep -r "stats auth" /etc/haproxy | head -1 | awk '{print $4}')
HAPROXYSTATS=$(/usr/lib/nagios/plugins/check_http -a ${AUTH} -I 127.0.0.1 -p 8888 -u '/;csv' -v)
for BACKEND in $(echo $HAPROXYSTATS| xargs -n1 | grep BACKEND | awk -F , '{print $1}')
do
CURRQ=$(echo "$HAPROXYSTATS" | grep $BACKEND | grep BACKEND | cut -d , -f 3)
MAXQ=$(echo "$HAPROXYSTATS" | grep $BACKEND | grep BACKEND | cut -d , -f 4)
if [[ $CURRQ -gt $CURRQthrsh || $MAXQ -gt $MAXQthrsh ]] ; then
echo "CRITICAL: queue depth for $BACKEND - CURRENT:$CURRQ MAX:$MAXQ"
exit 2
fi
done
echo "OK: All haproxy queue depths looking good"
exit 0
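The loop above reads haproxy's CSV statistics, where field 3 (qcur) and field 4 (qmax) of each BACKEND row are the current and max queue depths. The same check can be sketched in Python (the sample CSV below is made up for illustration):

```python
# Sketch of the queue-depth check above: flag any BACKEND whose current
# or max queue exceeds the thresholds (same defaults as the script).
CURRQ_THRESH = 0
MAXQ_THRESH = 100

SAMPLE = """pxname,svname,qcur,qmax
keystone,BACKEND,0,0
glance,BACKEND,3,150
"""

def overloaded_backends(csv_text, currq_thresh=CURRQ_THRESH,
                        maxq_thresh=MAXQ_THRESH):
    bad = []
    for line in csv_text.strip().splitlines()[1:]:  # skip header row
        pxname, svname, qcur, qmax = line.split(',')
        if svname == 'BACKEND' and (int(qcur) > currq_thresh or
                                    int(qmax) > maxq_thresh):
            bad.append(pxname)
    return bad

print(overloaded_backends(SAMPLE))  # ['glance']
```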


@@ -0,0 +1,14 @@
{% if zmq_host -%}
# ZeroMQ configuration (restart-nonce: {{ zmq_nonce }})
rpc_backend = zmq
rpc_zmq_host = {{ zmq_host }}
{% if zmq_redis_address -%}
rpc_zmq_matchmaker = oslo.messaging._drivers.matchmaker_redis.MatchMakerRedis
matchmaker_heartbeat_freq = 15
matchmaker_heartbeat_ttl = 30
[matchmaker_redis]
host = {{ zmq_redis_address }}
{% else -%}
rpc_zmq_matchmaker = oslo.messaging._drivers.matchmaker_ring.MatchMakerRing
{% endif -%}
{% endif -%}
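For illustration, with zmq_host set and no redis address configured, the template above renders roughly as follows (the host and nonce values are made-up examples):

```
# ZeroMQ configuration (restart-nonce: abc123)
rpc_backend = zmq
rpc_zmq_host = juju-keystone-0
rpc_zmq_matchmaker = oslo.messaging._drivers.matchmaker_ring.MatchMakerRing
```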


@@ -0,0 +1,42 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright 2014-2015 Canonical Limited.
#
# This file is part of charm-helpers.
#
# charm-helpers is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License version 3 as
# published by the Free Software Foundation.
#
# charm-helpers is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
import six
def bool_from_string(value):
"""Interpret string value as boolean.
Returns True if the value translates to True, False if it translates to
False; raises ValueError otherwise.
"""
if isinstance(value, six.string_types):
value = six.text_type(value)
else:
msg = "Unable to interpret non-string value '%s' as boolean" % (value)
raise ValueError(msg)
value = value.strip().lower()
if value in ['y', 'yes', 'true', 't']:
return True
elif value in ['n', 'no', 'false', 'f']:
return False
msg = "Unable to interpret string value '%s' as boolean" % (value)
raise ValueError(msg)
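The accepted tokens behave as follows (a standalone Python 3 re-statement of the helper above, using str in place of six for brevity):

```python
def bool_from_string(value):
    """Interpret a string value as boolean (standalone sketch of the
    helper above; uses str rather than six for brevity)."""
    if not isinstance(value, str):
        raise ValueError(
            "Unable to interpret non-string value '%s' as boolean" % (value,))
    value = value.strip().lower()
    if value in ('y', 'yes', 'true', 't'):
        return True
    if value in ('n', 'no', 'false', 'f'):
        return False
    raise ValueError(
        "Unable to interpret string value '%s' as boolean" % (value,))

print(bool_from_string(' Yes '))  # True
print(bool_from_string('f'))      # False
```

Note that whitespace and case are normalised before matching, so config values like " True " or "NO" are handled.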


@@ -0,0 +1,477 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# Copyright 2014-2015 Canonical Limited.
#
# This file is part of charm-helpers.
#
# charm-helpers is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License version 3 as
# published by the Free Software Foundation.
#
# charm-helpers is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
#
#
# Authors:
# Kapil Thangavelu <kapil.foss@gmail.com>
#
"""
Intro
-----
A simple way to store state in units. This provides a key value
storage with support for versioned, transactional operation,
and can calculate deltas from previous values to simplify unit logic
when processing changes.
Hook Integration
----------------
There are several extant frameworks for hook execution, including
- charmhelpers.core.hookenv.Hooks
- charmhelpers.core.services.ServiceManager
The storage classes are framework agnostic, one simple integration is
via the HookData contextmanager. It will record the current hook
execution environment (including relation data, config data, etc.),
setup a transaction and allow easy access to the changes from
previously seen values. One consequence of the integration is the
reservation of particular keys ('rels', 'unit', 'env', 'config',
'charm_revisions') for their respective values.
Here's a fully worked integration example using hookenv.Hooks::
from charmhelpers.core import hookenv, unitdata
hook_data = unitdata.HookData()
db = unitdata.kv()
hooks = hookenv.Hooks()
@hooks.hook
def config_changed():
# Print all changes to configuration from previously seen
# values.
for changed, (prev, cur) in hook_data.conf.items():
print('config changed', changed,
'previous value', prev,
'current value', cur)
# Get some unit-specific bookkeeping
if not db.get('pkg_key'):
key = urllib.urlopen('https://example.com/pkg_key').read()
db.set('pkg_key', key)
# Directly access all charm config as a mapping.
conf = db.getrange('config', True)
# Directly access all relation data as a mapping
rels = db.getrange('rels', True)
if __name__ == '__main__':
with hook_data():
hooks.execute()
A more basic integration is via the hook_scope context manager which simply
manages transaction scope (and records hook name, and timestamp)::
>>> from unitdata import kv
>>> db = kv()
>>> with db.hook_scope('install'):
... # do work, in transactional scope.
... db.set('x', 1)
>>> db.get('x')
1
Usage
-----
Values are automatically json de/serialized to preserve basic typing
and complex data struct capabilities (dicts, lists, ints, booleans, etc).
Individual values can be manipulated via get/set::
>>> kv.set('y', True)
>>> kv.get('y')
True
# We can set complex values (dicts, lists) as a single key.
>>> kv.set('config', {'a': 1, 'b': True})
# Also supports returning dictionaries as a record which
# provides attribute access.
>>> config = kv.get('config', record=True)
>>> config.b
True
Groups of keys can be manipulated with update/getrange::
>>> kv.update({'z': 1, 'y': 2}, prefix="gui.")
>>> kv.getrange('gui.', strip=True)
{'z': 1, 'y': 2}
When updating values, it's very helpful to understand which values
have actually changed and how they have changed. The storage
provides a delta method for this::
>>> data = {'debug': True, 'option': 2}
>>> delta = kv.delta(data, 'config.')
>>> delta.debug.previous
None
>>> delta.debug.current
True
>>> delta
{'debug': (None, True), 'option': (None, 2)}
Note that the delta method does not persist the actual change; it needs to
be explicitly saved via the 'update' method::
>>> kv.update(data, 'config.')
Values modified in the context of a hook scope retain historical values
associated with the hook name::
>>> with db.hook_scope('config-changed'):
... db.set('x', 42)
>>> db.gethistory('x')
[(1, u'x', 1, u'install', u'2015-01-21T16:49:30.038372'),
(2, u'x', 42, u'config-changed', u'2015-01-21T16:49:30.038786')]
"""
import collections
import contextlib
import datetime
import json
import os
import pprint
import sqlite3
import sys
__author__ = 'Kapil Thangavelu <kapil.foss@gmail.com>'
class Storage(object):
"""Simple key value database for local unit state within charms.
Modifications are automatically committed at hook exit. That's
currently regardless of exit code.
To support dicts, lists, integers, floats, and booleans, values
are automatically json encoded/decoded.
"""
def __init__(self, path=None):
self.db_path = path
if path is None:
self.db_path = os.path.join(
os.environ.get('CHARM_DIR', ''), '.unit-state.db')
self.conn = sqlite3.connect('%s' % self.db_path)
self.cursor = self.conn.cursor()
self.revision = None
self._closed = False
self._init()
def close(self):
if self._closed:
return
self.flush(False)
self.cursor.close()
self.conn.close()
self._closed = True
def _scoped_query(self, stmt, params=None):
if params is None:
params = []
return stmt, params
def get(self, key, default=None, record=False):
self.cursor.execute(
*self._scoped_query(
'select data from kv where key=?', [key]))
result = self.cursor.fetchone()
if not result:
return default
if record:
return Record(json.loads(result[0]))
return json.loads(result[0])
def getrange(self, key_prefix, strip=False):
stmt = "select key, data from kv where key like '%s%%'" % key_prefix
self.cursor.execute(*self._scoped_query(stmt))
result = self.cursor.fetchall()
if not result:
return None
if not strip:
key_prefix = ''
return dict([
(k[len(key_prefix):], json.loads(v)) for k, v in result])
def update(self, mapping, prefix=""):
for k, v in mapping.items():
self.set("%s%s" % (prefix, k), v)
def unset(self, key):
self.cursor.execute('delete from kv where key=?', [key])
if self.revision and self.cursor.rowcount:
self.cursor.execute(
'insert into kv_revisions values (?, ?, ?)',
[key, self.revision, json.dumps('DELETED')])
def set(self, key, value):
serialized = json.dumps(value)
self.cursor.execute(
'select data from kv where key=?', [key])
exists = self.cursor.fetchone()
# Skip mutations to the same value
if exists:
if exists[0] == serialized:
return value
if not exists:
self.cursor.execute(
'insert into kv (key, data) values (?, ?)',
(key, serialized))
else:
self.cursor.execute('''
update kv
set data = ?
where key = ?''', [serialized, key])
# Save
if not self.revision:
return value
self.cursor.execute(
'select 1 from kv_revisions where key=? and revision=?',
[key, self.revision])
exists = self.cursor.fetchone()
if not exists:
self.cursor.execute(
'''insert into kv_revisions (
revision, key, data) values (?, ?, ?)''',
(self.revision, key, serialized))
else:
self.cursor.execute(
'''
update kv_revisions
set data = ?
where key = ?
and revision = ?''',
[serialized, key, self.revision])
return value
def delta(self, mapping, prefix):
"""
Return a delta containing values that have changed.
"""
previous = self.getrange(prefix, strip=True)
if not previous:
pk = set()
else:
pk = set(previous.keys())
ck = set(mapping.keys())
delta = DeltaSet()
# added
for k in ck.difference(pk):
delta[k] = Delta(None, mapping[k])
# removed
for k in pk.difference(ck):
delta[k] = Delta(previous[k], None)
# changed
for k in pk.intersection(ck):
c = mapping[k]
p = previous[k]
if c != p:
delta[k] = Delta(p, c)
return delta
@contextlib.contextmanager
def hook_scope(self, name=""):
"""Scope all future interactions to the current hook execution
revision."""
assert not self.revision
self.cursor.execute(
'insert into hooks (hook, date) values (?, ?)',
(name or sys.argv[0],
datetime.datetime.utcnow().isoformat()))
self.revision = self.cursor.lastrowid
try:
yield self.revision
self.revision = None
except:
self.flush(False)
self.revision = None
raise
else:
self.flush()
def flush(self, save=True):
if save:
self.conn.commit()
elif self._closed:
return
else:
self.conn.rollback()
def _init(self):
self.cursor.execute('''
create table if not exists kv (
key text,
data text,
primary key (key)
)''')
self.cursor.execute('''
create table if not exists kv_revisions (
key text,
revision integer,
data text,
primary key (key, revision)
)''')
self.cursor.execute('''
create table if not exists hooks (
version integer primary key autoincrement,
hook text,
date text
)''')
self.conn.commit()
def gethistory(self, key, deserialize=False):
self.cursor.execute(
'''
select kv.revision, kv.key, kv.data, h.hook, h.date
from kv_revisions kv,
hooks h
where kv.key=?
and kv.revision = h.version
''', [key])
if deserialize is False:
return self.cursor.fetchall()
return map(_parse_history, self.cursor.fetchall())
def debug(self, fh=sys.stderr):
self.cursor.execute('select * from kv')
pprint.pprint(self.cursor.fetchall(), stream=fh)
self.cursor.execute('select * from kv_revisions')
pprint.pprint(self.cursor.fetchall(), stream=fh)
def _parse_history(d):
return (d[0], d[1], json.loads(d[2]), d[3],
datetime.datetime.strptime(d[-1], "%Y-%m-%dT%H:%M:%S.%f"))
class HookData(object):
"""Simple integration for existing hook exec frameworks.
Records all unit information, and stores deltas for processing
by the hook.
Sample::
from charmhelpers.core import hookenv, unitdata
changes = unitdata.HookData()
db = unitdata.kv()
hooks = hookenv.Hooks()
@hooks.hook
def config_changed():
# View all changes to configuration
for changed, (prev, cur) in changes.conf.items():
print('config changed', changed,
'previous value', prev,
'current value', cur)
# Get some unit-specific bookkeeping
if not db.get('pkg_key'):
key = urllib.urlopen('https://example.com/pkg_key').read()
db.set('pkg_key', key)
if __name__ == '__main__':
with changes():
hooks.execute()
"""
def __init__(self):
self.kv = kv()
self.conf = None
self.rels = None
@contextlib.contextmanager
def __call__(self):
from charmhelpers.core import hookenv
hook_name = hookenv.hook_name()
with self.kv.hook_scope(hook_name):
self._record_charm_version(hookenv.charm_dir())
delta_config, delta_relation = self._record_hook(hookenv)
yield self.kv, delta_config, delta_relation
def _record_charm_version(self, charm_dir):
# Record revisions. Charm revisions are meaningless
# to charm authors as they don't control the revision,
# so logic dependent on revision is not particularly
# useful; however it is useful for debugging analysis.
charm_rev = open(
os.path.join(charm_dir, 'revision')).read().strip()
charm_rev = charm_rev or '0'
revs = self.kv.get('charm_revisions', [])
if charm_rev not in revs:
revs.append(charm_rev.strip() or '0')
self.kv.set('charm_revisions', revs)
def _record_hook(self, hookenv):
data = hookenv.execution_environment()
self.conf = conf_delta = self.kv.delta(data['conf'], 'config')
self.rels = rels_delta = self.kv.delta(data['rels'], 'rels')
self.kv.set('env', data['env'])
self.kv.set('unit', data['unit'])
self.kv.set('relid', data.get('relid'))
return conf_delta, rels_delta
class Record(dict):
__slots__ = ()
def __getattr__(self, k):
if k in self:
return self[k]
raise AttributeError(k)
class DeltaSet(Record):
__slots__ = ()
Delta = collections.namedtuple('Delta', ['previous', 'current'])
_KV = None
def kv():
global _KV
if _KV is None:
_KV = Storage()
return _KV
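The core storage pattern, JSON-serialized values in a single sqlite table, can be sketched standalone (the real Storage above additionally records per-hook revision history and supports transaction scoping):

```python
import json
import sqlite3

# Standalone sketch of the kv pattern above: values are JSON-encoded on
# write and decoded on read, so basic types (dicts, lists, bools) survive.
conn = sqlite3.connect(':memory:')
conn.execute('create table kv (key text primary key, data text)')

def kv_set(key, value):
    conn.execute('insert or replace into kv (key, data) values (?, ?)',
                 (key, json.dumps(value)))

def kv_get(key, default=None):
    row = conn.execute('select data from kv where key=?', (key,)).fetchone()
    return json.loads(row[0]) if row else default

kv_set('config', {'a': 1, 'b': True})
print(kv_get('config'))               # {'a': 1, 'b': True}
print(kv_get('missing', 'fallback'))  # fallback
```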


@@ -1,18 +1,32 @@
import hashlib
import os
from charmhelpers.core.hookenv import config
from charmhelpers.core.host import mkdir, write_file
from charmhelpers.core.host import (
mkdir,
write_file,
service_restart,
)
from charmhelpers.contrib.openstack import context
from charmhelpers.contrib.hahelpers.cluster import (
determine_apache_port,
determine_api_port
determine_api_port,
)
from charmhelpers.core.hookenv import (
log,
INFO,
)
from charmhelpers.core.strutils import (
bool_from_string,
)
from charmhelpers.contrib.hahelpers.apache import install_ca_cert
import os
CA_CERT_PATH = '/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt'
@@ -24,25 +38,88 @@ class ApacheSSLContext(context.ApacheSSLContext):
def __call__(self):
# late import to work around circular dependency
from keystone_utils import determine_ports
from keystone_utils import (
determine_ports,
update_hash_from_path,
)
ssl_paths = [CA_CERT_PATH,
os.path.join('/etc/apache2/ssl/',
self.service_namespace)]
self.external_ports = determine_ports()
return super(ApacheSSLContext, self).__call__()
before = hashlib.sha256()
for path in ssl_paths:
update_hash_from_path(before, path)
ret = super(ApacheSSLContext, self).__call__()
after = hashlib.sha256()
for path in ssl_paths:
update_hash_from_path(after, path)
# Ensure that apache2 is restarted if these change
if before.hexdigest() != after.hexdigest():
service_restart('apache2')
return ret
def configure_cert(self, cn):
from keystone_utils import SSH_USER, get_ca
from keystone_utils import (
SSH_USER,
get_ca,
ensure_permissions,
is_ssl_cert_master,
is_ssl_enabled,
)
if not is_ssl_enabled():
return
# Ensure ssl dir exists whether master or not
ssl_dir = os.path.join('/etc/apache2/ssl/', self.service_namespace)
mkdir(path=ssl_dir)
perms = 0o755
mkdir(path=ssl_dir, owner=SSH_USER, group='keystone', perms=perms)
# Ensure accessible by keystone ssh user and group (for sync)
ensure_permissions(ssl_dir, user=SSH_USER, group='keystone',
perms=perms)
if not is_ssl_cert_master():
log("Not ssl-cert-master - skipping apache cert config until "
"master is elected", level=INFO)
return
log("Creating apache ssl certs in %s" % (ssl_dir), level=INFO)
ca = get_ca(user=SSH_USER)
cert, key = ca.get_cert_and_key(common_name=cn)
write_file(path=os.path.join(ssl_dir, 'cert_{}'.format(cn)),
content=cert)
content=cert, owner=SSH_USER, group='keystone', perms=0o644)
write_file(path=os.path.join(ssl_dir, 'key_{}'.format(cn)),
content=key)
content=key, owner=SSH_USER, group='keystone', perms=0o644)
def configure_ca(self):
from keystone_utils import SSH_USER, get_ca
from keystone_utils import (
SSH_USER,
get_ca,
ensure_permissions,
is_ssl_cert_master,
is_ssl_enabled,
)
if not is_ssl_enabled():
return
if not is_ssl_cert_master():
log("Not ssl-cert-master - skipping apache ca config until "
"master is elected", level=INFO)
return
ca = get_ca(user=SSH_USER)
install_ca_cert(ca.get_ca_bundle())
# Ensure accessible by keystone ssh user and group (unison)
ensure_permissions(CA_CERT_PATH, user=SSH_USER, group='keystone',
perms=0o0644)
def canonical_names(self):
addresses = self.get_network_addresses()
@@ -106,8 +183,12 @@ class KeystoneContext(context.OSContextGenerator):
singlenode_mode=True)
ctxt['public_port'] = determine_api_port(api_port('keystone-public'),
singlenode_mode=True)
ctxt['debug'] = config('debug') in ['yes', 'true', 'True']
ctxt['verbose'] = config('verbose') in ['yes', 'true', 'True']
debug = config('debug')
ctxt['debug'] = debug and bool_from_string(debug)
verbose = config('verbose')
ctxt['verbose'] = verbose and bool_from_string(verbose)
ctxt['identity_backend'] = config('identity-backend')
ctxt['assignment_backend'] = config('assignment-backend')
if config('identity-backend') == 'ldap':
@@ -121,7 +202,8 @@ class KeystoneContext(context.OSContextGenerator):
flags = context.config_flags_parser(ldap_flags)
ctxt['ldap_config_flags'] = flags
if config('enable-pki') not in ['false', 'False', 'no', 'No']:
enable_pki = config('enable-pki')
if enable_pki and bool_from_string(enable_pki):
ctxt['signing'] = True
# Base endpoint URL's which are used in keystone responses
@@ -134,3 +216,14 @@ class KeystoneContext(context.OSContextGenerator):
resolve_address(ADMIN),
api_port('keystone-admin')).rstrip('v2.0')
return ctxt
class KeystoneLoggingContext(context.OSContextGenerator):
def __call__(self):
ctxt = {}
debug = config('debug')
if debug and bool_from_string(debug):
ctxt['root_level'] = 'DEBUG'
return ctxt


@@ -1,9 +1,9 @@
#!/usr/bin/python
import hashlib
import json
import os
import stat
import sys
import time
from subprocess import check_call
@@ -16,6 +16,9 @@ from charmhelpers.core.hookenv import (
is_relation_made,
log,
local_unit,
DEBUG,
INFO,
WARNING,
ERROR,
relation_get,
relation_ids,
@@ -29,6 +32,10 @@ from charmhelpers.core.host import (
restart_on_change,
)
from charmhelpers.core.strutils import (
bool_from_string,
)
from charmhelpers.fetch import (
apt_install, apt_update,
filter_installed_packages
@@ -50,9 +57,8 @@ from keystone_utils import (
git_install,
migrate_database,
save_script_rc,
synchronize_ca,
synchronize_ca_if_changed,
register_configs,
relation_list,
restart_map,
services,
CLUSTER_RES,
@@ -60,12 +66,21 @@ from keystone_utils import (
SSH_USER,
setup_ipv6,
send_notifications,
check_peer_actions,
CA_CERT_PATH,
ensure_permissions,
get_ssl_sync_request_units,
is_ssl_cert_master,
is_db_ready,
clear_ssl_synced_units,
is_db_initialised,
filter_null,
)
from charmhelpers.contrib.hahelpers.cluster import (
eligible_leader,
is_leader,
is_elected_leader,
get_hacluster_config,
peer_units,
)
from charmhelpers.payload.execd import execd_preinstall
@@ -109,16 +124,18 @@ def install():
@hooks.hook('config-changed')
@restart_on_change(restart_map())
@synchronize_ca_if_changed()
def config_changed():
if config('prefer-ipv6'):
setup_ipv6()
sync_db_with_multi_ipv6_addresses(config('database'),
config('database-user'))
unison.ensure_user(user=SSH_USER, group='juju_keystone')
unison.ensure_user(user=SSH_USER, group='keystone')
homedir = unison.get_homedir(SSH_USER)
if not os.path.isdir(homedir):
mkdir(homedir, SSH_USER, 'keystone', 0o775)
mkdir(homedir, SSH_USER, 'juju_keystone', 0o775)
if not git_install_requested():
if openstack_upgrade_available('keystone'):
@@ -126,25 +143,32 @@ def config_changed():
check_call(['chmod', '-R', 'g+wrx', '/var/lib/keystone/'])
# Ensure unison can write to certs dir.
# FIXME: need a better way around this, e.g. move the cert to its own dir
# and give unison permissions to that.
path = os.path.dirname(CA_CERT_PATH)
perms = int(oct(stat.S_IMODE(os.stat(path).st_mode) |
(stat.S_IWGRP | stat.S_IXGRP)), base=8)
ensure_permissions(path, group='keystone', perms=perms)
save_script_rc()
configure_https()
update_nrpe_config()
CONFIGS.write_all()
if eligible_leader(CLUSTER_RES):
migrate_database()
ensure_initial_admin(config)
log('Firing identity_changed hook for all related services.')
# HTTPS may have been set - so fire all identity relations
# again
for r_id in relation_ids('identity-service'):
for unit in relation_list(r_id):
identity_changed(relation_id=r_id,
remote_unit=unit)
# Update relations since SSL may have been configured. If we have peer
# units we can rely on the sync to do this in cluster relation.
if is_elected_leader(CLUSTER_RES) and not peer_units():
update_all_identity_relation_units()
for rid in relation_ids('identity-admin'):
admin_relation_changed(rid)
for rid in relation_ids('cluster'):
cluster_joined(rid)
# Ensure sync request is sent out (needed for any/all ssl change)
send_ssl_sync_request()
for r_id in relation_ids('ha'):
ha_joined(relation_id=r_id)
#TODO(coreycb): For deploy from git support, need to implement action-set
@@ -183,54 +207,94 @@ def pgsql_db_joined():
relation_set(database=config('database'))
def update_all_identity_relation_units(check_db_ready=True):
CONFIGS.write_all()
if check_db_ready and not is_db_ready():
log('Allowed_units list provided and this unit not present',
level=INFO)
return
if not is_db_initialised():
log("Database not yet initialised - deferring identity-relation "
"updates", level=INFO)
return
if is_elected_leader(CLUSTER_RES):
ensure_initial_admin(config)
log('Firing identity_changed hook for all related services.')
for rid in relation_ids('identity-service'):
for unit in related_units(rid):
identity_changed(relation_id=rid, remote_unit=unit)
@synchronize_ca_if_changed(force=True)
def update_all_identity_relation_units_force_sync():
update_all_identity_relation_units()
@hooks.hook('shared-db-relation-changed')
@restart_on_change(restart_map())
@synchronize_ca_if_changed()
def db_changed():
if 'shared-db' not in CONFIGS.complete_contexts():
log('shared-db relation incomplete. Peer not ready?')
else:
CONFIGS.write(KEYSTONE_CONF)
if eligible_leader(CLUSTER_RES):
if is_elected_leader(CLUSTER_RES):
# Bugs 1353135 & 1187508. Dbs can appear to be ready before the
# units acl entry has been added. So, if the db supports passing
# a list of permitted units then check if we're in the list.
allowed_units = relation_get('allowed_units')
if allowed_units and local_unit() not in allowed_units.split():
log('Allowed_units list provided and this unit not present')
if not is_db_ready(use_current_context=True):
log('Allowed_units list provided and this unit not present',
level=INFO)
return
migrate_database()
ensure_initial_admin(config)
# Ensure any existing service entries are updated in the
# new database backend
for rid in relation_ids('identity-service'):
for unit in related_units(rid):
identity_changed(relation_id=rid, remote_unit=unit)
# new database backend. Also avoid duplicate db ready check.
update_all_identity_relation_units(check_db_ready=False)
@hooks.hook('pgsql-db-relation-changed')
@restart_on_change(restart_map())
@synchronize_ca_if_changed()
def pgsql_db_changed():
if 'pgsql-db' not in CONFIGS.complete_contexts():
log('pgsql-db relation incomplete. Peer not ready?')
else:
CONFIGS.write(KEYSTONE_CONF)
if eligible_leader(CLUSTER_RES):
if is_elected_leader(CLUSTER_RES):
if not is_db_ready(use_current_context=True):
log('Allowed_units list provided and this unit not present',
level=INFO)
return
migrate_database()
ensure_initial_admin(config)
# Ensure any existing service entries are updated in the
# new database backend
for rid in relation_ids('identity-service'):
for unit in related_units(rid):
identity_changed(relation_id=rid, remote_unit=unit)
# new database backend. Also avoid duplicate db ready check.
update_all_identity_relation_units(check_db_ready=False)
@hooks.hook('identity-service-relation-changed')
@restart_on_change(restart_map())
@synchronize_ca_if_changed()
def identity_changed(relation_id=None, remote_unit=None):
notifications = {}
if eligible_leader(CLUSTER_RES):
add_service_to_keystone(relation_id, remote_unit)
synchronize_ca()
CONFIGS.write_all()
notifications = {}
if is_elected_leader(CLUSTER_RES):
if not is_db_ready():
log("identity-service-relation-changed hook fired before db "
"ready - deferring until db ready", level=WARNING)
return
if not is_db_initialised():
log("Database not yet initialised - deferring identity-relation "
"updates", level=INFO)
return
add_service_to_keystone(relation_id, remote_unit)
settings = relation_get(rid=relation_id, unit=remote_unit)
service = settings.get('service', None)
if service:
@@ -249,6 +313,8 @@ def identity_changed(relation_id=None, remote_unit=None):
# with the info dies the settings die with it Bug# 1355848
for rel_id in relation_ids('identity-service'):
peerdb_settings = peer_retrieve_by_prefix(rel_id)
# Ensure the null'd settings are unset in the relation.
peerdb_settings = filter_null(peerdb_settings)
if 'service_password' in peerdb_settings:
relation_set(relation_id=rel_id, **peerdb_settings)
log('Deferring identity_changed() to service leader.')
@@ -257,50 +323,113 @@ def identity_changed(relation_id=None, remote_unit=None):
send_notifications(notifications)
def send_ssl_sync_request():
"""Set sync request on cluster relation.
Value set encodes which ssl configs are currently enabled (as a bitmask) so
that if they change, we ensure that certs are synced. This setting is
consumed by cluster-relation-changed on the ssl master. We also clear the
'synced' set to guarantee that a sync will occur.
Note that we do nothing if the setting is already applied.
"""
unit = local_unit().replace('/', '-')
count = 0
if bool_from_string(config('use-https')):
count += 1
if bool_from_string(config('https-service-endpoints')):
count += 2
key = 'ssl-sync-required-%s' % (unit)
settings = {key: count}
# If all ssl is disabled ensure this is set to 0 so that cluster hook runs
# and endpoints are updated.
if not count:
log("Setting %s=%s" % (key, count), level=DEBUG)
for rid in relation_ids('cluster'):
relation_set(relation_id=rid, relation_settings=settings)
return
prev = 0
rid = None
for rid in relation_ids('cluster'):
for unit in related_units(rid):
_prev = int(relation_get(rid=rid, unit=unit, attribute=key) or 0)
if _prev and _prev > prev:
prev = _prev
if rid and prev < count:
clear_ssl_synced_units()
log("Setting %s=%s" % (key, count), level=DEBUG)
relation_set(relation_id=rid, relation_settings=settings)
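The value set by send_ssl_sync_request() above is effectively a two-bit flag field rather than a plain count: bit 0 tracks use-https and bit 1 tracks https-service-endpoints. A minimal standalone sketch of the encoding (the function name is illustrative, not part of the charm):

```python
def ssl_sync_count(use_https, https_service_endpoints):
    # Bit 0 tracks use-https, bit 1 tracks https-service-endpoints,
    # so flipping either option yields a different value and forces
    # a cert re-sync when peers compare settings.
    count = 0
    if use_https:
        count += 1
    if https_service_endpoints:
        count += 2
    return count
```

Because each option maps to its own bit, every combination of the two settings produces a distinct value, so any change is guaranteed to differ from the previously stored one.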
@hooks.hook('cluster-relation-joined')
def cluster_joined(relation_id=None):
def cluster_joined():
unison.ssh_authorized_peers(user=SSH_USER,
group='juju_keystone',
peer_interface='cluster',
ensure_local_user=True)
settings = {}
for addr_type in ADDRESS_TYPES:
address = get_address_in_network(
config('os-{}-network'.format(addr_type))
)
if address:
relation_set(
relation_id=relation_id,
relation_settings={'{}-address'.format(addr_type): address}
)
settings['{}-address'.format(addr_type)] = address
if config('prefer-ipv6'):
private_addr = get_ipv6_addr(exc_list=[config('vip')])[0]
relation_set(relation_id=relation_id,
relation_settings={'private-address': private_addr})
settings['private-address'] = private_addr
relation_set(relation_settings=settings)
send_ssl_sync_request()
@hooks.hook('cluster-relation-changed',
'cluster-relation-departed')
@restart_on_change(restart_map(), stopstart=True)
def cluster_changed():
# NOTE(jamespage) re-echo passwords for peer storage
peer_echo(includes=['_passwd', 'identity-service:'])
unison.ssh_authorized_peers(user=SSH_USER,
group='keystone',
group='juju_keystone',
peer_interface='cluster',
ensure_local_user=True)
synchronize_ca()
CONFIGS.write_all()
for r_id in relation_ids('identity-service'):
for unit in relation_list(r_id):
identity_changed(relation_id=r_id,
remote_unit=unit)
for rid in relation_ids('identity-admin'):
admin_relation_changed(rid)
# NOTE(jamespage) re-echo passwords for peer storage
echo_whitelist = ['_passwd', 'identity-service:', 'ssl-cert-master',
'db-initialised']
log("Peer echo whitelist: %s" % (echo_whitelist), level=DEBUG)
peer_echo(includes=echo_whitelist)
check_peer_actions()
if is_elected_leader(CLUSTER_RES) or is_ssl_cert_master():
units = get_ssl_sync_request_units()
synced_units = relation_get(attribute='ssl-synced-units',
unit=local_unit())
if synced_units:
synced_units = json.loads(synced_units)
diff = set(units).symmetric_difference(set(synced_units))
if units and (not synced_units or diff):
log("New peers joined and need syncing - %s" %
(', '.join(units)), level=DEBUG)
update_all_identity_relation_units_force_sync()
else:
update_all_identity_relation_units()
for rid in relation_ids('identity-admin'):
admin_relation_changed(rid)
else:
CONFIGS.write_all()
@hooks.hook('ha-relation-joined')
def ha_joined():
def ha_joined(relation_id=None):
cluster_config = get_hacluster_config()
resources = {
'res_ks_haproxy': 'lsb:haproxy',
@@ -336,7 +465,8 @@ def ha_joined():
vip_group.append(vip_key)
if len(vip_group) >= 1:
relation_set(groups={'grp_ks_vips': ' '.join(vip_group)})
relation_set(relation_id=relation_id,
groups={CLUSTER_RES: ' '.join(vip_group)})
init_services = {
'res_ks_haproxy': 'haproxy'
@@ -344,7 +474,8 @@ def ha_joined():
clones = {
'cl_ks_haproxy': 'res_ks_haproxy'
}
relation_set(init_services=init_services,
relation_set(relation_id=relation_id,
init_services=init_services,
corosync_bindiface=cluster_config['ha-bindiface'],
corosync_mcastport=cluster_config['ha-mcastport'],
resources=resources,
@@ -354,17 +485,15 @@ def ha_joined():
@hooks.hook('ha-relation-changed')
@restart_on_change(restart_map())
@synchronize_ca_if_changed()
def ha_changed():
clustered = relation_get('clustered')
CONFIGS.write_all()
if (clustered is not None and
is_leader(CLUSTER_RES)):
ensure_initial_admin(config)
clustered = relation_get('clustered')
if clustered and is_elected_leader(CLUSTER_RES):
log('Cluster configured, notifying other services and updating '
'keystone endpoint configuration')
for rid in relation_ids('identity-service'):
for unit in related_units(rid):
identity_changed(relation_id=rid, remote_unit=unit)
update_all_identity_relation_units()
@hooks.hook('identity-admin-relation-changed')
@@ -381,6 +510,7 @@ def admin_relation_changed(relation_id=None):
relation_set(relation_id=relation_id, **relation_data)
@synchronize_ca_if_changed(fatal=True)
def configure_https():
'''
Enables SSL API Apache config if appropriate and kicks identity-service
@@ -399,25 +529,21 @@ def configure_https():
@hooks.hook('upgrade-charm')
@restart_on_change(restart_map(), stopstart=True)
@synchronize_ca_if_changed()
def upgrade_charm():
apt_install(filter_installed_packages(determine_packages()))
unison.ssh_authorized_peers(user=SSH_USER,
group='keystone',
group='juju_keystone',
peer_interface='cluster',
ensure_local_user=True)
update_nrpe_config()
synchronize_ca()
if eligible_leader(CLUSTER_RES):
log('Cluster leader - ensuring endpoint configuration'
' is up to date')
time.sleep(10)
ensure_initial_admin(config)
# Deal with interface changes for icehouse
for r_id in relation_ids('identity-service'):
for unit in relation_list(r_id):
identity_changed(relation_id=r_id,
remote_unit=unit)
CONFIGS.write_all()
update_nrpe_config()
if is_elected_leader(CLUSTER_RES):
log('Cluster leader - ensuring endpoint configuration is up to '
'date', level=DEBUG)
update_all_identity_relation_units()
@hooks.hook('nrpe-external-master-relation-joined',
@@ -428,7 +554,9 @@ def update_nrpe_config():
hostname = nrpe.get_nagios_hostname()
current_unit = nrpe.get_nagios_unit_name()
nrpe_setup = nrpe.NRPE(hostname=hostname)
nrpe.copy_nrpe_checks()
nrpe.add_init_service_checks(nrpe_setup, services(), current_unit)
nrpe.add_haproxy_checks(nrpe_setup, current_unit)
nrpe_setup.write()


@@ -5,6 +5,13 @@ import shutil
import subprocess
import tarfile
import tempfile
import time
from charmhelpers.core.hookenv import (
log,
DEBUG,
WARNING,
)
CA_EXPIRY = '365'
ORG_NAME = 'Ubuntu'
@@ -101,6 +108,9 @@ keyUsage = digitalSignature, keyEncipherment, keyAgreement
extendedKeyUsage = serverAuth, clientAuth
"""
# Instance can be appended to this list to represent a singleton
CA_SINGLETON = []
def init_ca(ca_dir, common_name, org_name=ORG_NAME, org_unit_name=ORG_UNIT):
print 'Ensuring certificate authority exists at %s.' % ca_dir
@@ -275,23 +285,42 @@ class JujuCA(object):
crt = self._sign_csr(csr, service, common_name)
cmd = ['chown', '-R', '%s.%s' % (self.user, self.group), self.ca_dir]
subprocess.check_call(cmd)
print 'Signed new CSR, crt @ %s' % crt
log('Signed new CSR, crt @ %s' % crt, level=DEBUG)
return crt, key
def get_cert_and_key(self, common_name):
print 'Getting certificate and key for %s.' % common_name
key = os.path.join(self.ca_dir, 'certs', '%s.key' % common_name)
crt = os.path.join(self.ca_dir, 'certs', '%s.crt' % common_name)
if os.path.isfile(crt):
print 'Found existing certificate for %s.' % common_name
crt = open(crt, 'r').read()
try:
key = open(key, 'r').read()
except:
print 'Could not load ssl private key for %s from %s' %\
(common_name, key)
exit(1)
return crt, key
log('Getting certificate and key for %s.' % common_name, level=DEBUG)
keypath = os.path.join(self.ca_dir, 'certs', '%s.key' % common_name)
crtpath = os.path.join(self.ca_dir, 'certs', '%s.crt' % common_name)
if os.path.isfile(crtpath):
log('Found existing certificate for %s.' % common_name,
level=DEBUG)
max_retries = 3
while True:
mtime = os.path.getmtime(crtpath)
crt = open(crtpath, 'r').read()
try:
key = open(keypath, 'r').read()
except IOError:
msg = ('Could not load ssl private key for %s from %s' %
(common_name, keypath))
raise Exception(msg)
# Ensure we are not reading a file that is being written to
if mtime != os.path.getmtime(crtpath):
max_retries -= 1
if max_retries == 0:
msg = ("crt contents changed during read - retry "
"failed")
raise Exception(msg)
log("crt contents changed during read - re-reading",
level=WARNING)
time.sleep(1)
else:
return crt, key
crt, key = self._create_certificate(common_name, common_name)
return open(crt, 'r').read(), open(key, 'r').read()
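The mtime-guarded read added above is a general pattern for detecting a file that is rewritten mid-read. A standalone sketch of the same idea (names, defaults, and the exception type are assumptions, not charm API):

```python
import os
import time


def stable_read(path, max_retries=3, delay=1):
    """Read path, re-reading if its mtime changes during the read,
    which suggests a concurrent writer. Raises if it never settles."""
    while True:
        mtime = os.path.getmtime(path)
        with open(path) as f:
            data = f.read()
        # Same mtime before and after the read: contents were stable.
        if os.path.getmtime(path) == mtime:
            return data
        max_retries -= 1
        if max_retries == 0:
            raise RuntimeError("%s kept changing during read" % path)
        time.sleep(delay)
```

Note this narrows the race window but does not eliminate it; a writer that writes to a temp file and rename()s it into place gives true atomicity.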

File diff suppressed because it is too large.


@@ -0,0 +1,39 @@
[loggers]
keys=root
[formatters]
keys=normal,normal_with_name,debug
[handlers]
keys=production,file,devel
[logger_root]
level=WARNING
handlers=file
[handler_production]
class=handlers.SysLogHandler
level=ERROR
formatter=normal_with_name
args=(('localhost', handlers.SYSLOG_UDP_PORT), handlers.SysLogHandler.LOG_USER)
[handler_file]
class=FileHandler
level=DEBUG
formatter=normal_with_name
args=('/var/log/keystone/keystone.log', 'a')
[handler_devel]
class=StreamHandler
level=NOTSET
formatter=debug
args=(sys.stdout,)
[formatter_normal]
format=%(asctime)s %(levelname)s %(message)s
[formatter_normal_with_name]
format=(%(name)s): %(asctime)s %(levelname)s %(message)s
[formatter_debug]
format=(%(name)s): %(asctime)s %(levelname)s %(module)s %(funcName)s %(message)s


@@ -0,0 +1,43 @@
[loggers]
keys=root
[formatters]
keys=normal,normal_with_name,debug
[handlers]
keys=production,file,devel
[logger_root]
{% if root_level -%}
level={{ root_level }}
{% else -%}
level=WARNING
{% endif -%}
handlers=file
[handler_production]
class=handlers.SysLogHandler
level=ERROR
formatter=normal_with_name
args=(('localhost', handlers.SYSLOG_UDP_PORT), handlers.SysLogHandler.LOG_USER)
[handler_file]
class=FileHandler
level=DEBUG
formatter=normal_with_name
args=('/var/log/keystone/keystone.log', 'a')
[handler_devel]
class=StreamHandler
level=NOTSET
formatter=debug
args=(sys.stdout,)
[formatter_normal]
format=%(asctime)s %(levelname)s %(message)s
[formatter_normal_with_name]
format=(%(name)s): %(asctime)s %(levelname)s %(message)s
[formatter_debug]
format=(%(name)s): %(asctime)s %(levelname)s %(module)s %(funcName)s %(message)s


@@ -0,0 +1,105 @@
# kilo
###############################################################################
# [ WARNING ]
# Configuration file maintained by Juju. Local changes may be overwritten.
###############################################################################
[DEFAULT]
admin_token = {{ token }}
admin_port = {{ admin_port }}
public_port = {{ public_port }}
use_syslog = {{ use_syslog }}
log_config = /etc/keystone/logging.conf
debug = {{ debug }}
verbose = {{ verbose }}
public_endpoint = {{ public_endpoint }}
admin_endpoint = {{ admin_endpoint }}
bind_host = {{ bind_host }}
public_workers = {{ workers }}
admin_workers = {{ workers }}
[database]
{% if database_host -%}
connection = {{ database_type }}://{{ database_user }}:{{ database_password }}@{{ database_host }}/{{ database }}{% if database_ssl_ca %}?ssl_ca={{ database_ssl_ca }}{% if database_ssl_cert %}&ssl_cert={{ database_ssl_cert }}&ssl_key={{ database_ssl_key }}{% endif %}{% endif %}
{% else -%}
connection = sqlite:////var/lib/keystone/keystone.db
{% endif -%}
idle_timeout = 200
[identity]
driver = keystone.identity.backends.{{ identity_backend }}.Identity
[credential]
driver = keystone.credential.backends.sql.Credential
[trust]
driver = keystone.trust.backends.sql.Trust
[os_inherit]
[catalog]
driver = keystone.catalog.backends.sql.Catalog
[endpoint_filter]
[token]
driver = keystone.token.persistence.backends.sql.Token
provider = keystone.token.providers.uuid.Provider
[cache]
[policy]
driver = keystone.policy.backends.sql.Policy
[ec2]
driver = keystone.contrib.ec2.backends.sql.Ec2
[assignment]
driver = keystone.assignment.backends.{{ assignment_backend }}.Assignment
[oauth1]
[signing]
[auth]
methods = external,password,token,oauth1
password = keystone.auth.plugins.password.Password
token = keystone.auth.plugins.token.Token
oauth1 = keystone.auth.plugins.oauth1.OAuth
[paste_deploy]
config_file = keystone-paste.ini
[extra_headers]
Distribution = Ubuntu
[ldap]
{% if identity_backend == 'ldap' -%}
url = {{ ldap_server }}
user = {{ ldap_user }}
password = {{ ldap_password }}
suffix = {{ ldap_suffix }}
{% if ldap_config_flags -%}
{% for key, value in ldap_config_flags.iteritems() -%}
{{ key }} = {{ value }}
{% endfor -%}
{% endif -%}
{% if ldap_readonly -%}
user_allow_create = False
user_allow_update = False
user_allow_delete = False
tenant_allow_create = False
tenant_allow_update = False
tenant_allow_delete = False
role_allow_create = False
role_allow_update = False
role_allow_delete = False
group_allow_create = False
group_allow_update = False
group_allow_delete = False
{% endif -%}
{% endif -%}
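The [database] connection line in the template above packs the sqlite fallback and the optional SSL query arguments into a single Jinja expression. The same logic unrolled as plain Python, for readability (a sketch; the helper name is illustrative):

```python
def db_connection(db_type, user, password, host, database,
                  ssl_ca=None, ssl_cert=None, ssl_key=None):
    # No database_host in the context -> local sqlite fallback,
    # mirroring the template's {% else %} branch.
    if not host:
        return 'sqlite:////var/lib/keystone/keystone.db'
    conn = '%s://%s:%s@%s/%s' % (db_type, user, password, host, database)
    # ssl_cert/ssl_key are only appended when ssl_ca is set, exactly
    # as in the nested {% if %} blocks of the template.
    if ssl_ca:
        conn += '?ssl_ca=%s' % ssl_ca
        if ssl_cert:
            conn += '&ssl_cert=%s&ssl_key=%s' % (ssl_cert, ssl_key)
    return conn
```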


@@ -0,0 +1,44 @@
# kilo
[loggers]
keys=root
[formatters]
keys=normal,normal_with_name,debug
[handlers]
keys=production,file,devel
[logger_root]
{% if root_level -%}
level={{ root_level }}
{% else -%}
level=WARNING
{% endif -%}
handlers=file
[handler_production]
class=handlers.SysLogHandler
level=ERROR
formatter=normal_with_name
args=(('localhost', handlers.SYSLOG_UDP_PORT), handlers.SysLogHandler.LOG_USER)
[handler_file]
class=FileHandler
level=DEBUG
formatter=normal_with_name
args=('/var/log/keystone/keystone.log', 'a')
[handler_devel]
class=StreamHandler
level=NOTSET
formatter=debug
args=(sys.stdout,)
[formatter_normal]
format=%(asctime)s %(levelname)s %(message)s
[formatter_normal_with_name]
format=(%(name)s): %(asctime)s %(levelname)s %(message)s
[formatter_debug]
format=(%(name)s): %(asctime)s %(levelname)s %(module)s %(funcName)s %(message)s


@@ -258,7 +258,6 @@ class KeystoneBasicDeployment(OpenStackAmuletDeployment):
'auth_port': '35357',
'auth_protocol': 'http',
'private-address': u.valid_ip,
'https_keystone': 'False',
'auth_host': u.valid_ip,
'service_username': 'cinder',
'service_tenant_id': u.not_null,


@@ -16,6 +16,32 @@ class TestKeystoneContexts(CharmTestCase):
def setUp(self):
super(TestKeystoneContexts, self).setUp(context, TO_PATCH)
@patch.object(context, 'mkdir')
@patch('keystone_utils.get_ca')
@patch('keystone_utils.ensure_permissions')
@patch('keystone_utils.determine_ports')
@patch('keystone_utils.is_ssl_cert_master')
@patch('keystone_utils.is_ssl_enabled')
@patch.object(context, 'log')
def test_apache_ssl_context_ssl_not_master(self,
mock_log,
mock_is_ssl_enabled,
mock_is_ssl_cert_master,
mock_determine_ports,
mock_ensure_permissions,
mock_get_ca,
mock_mkdir):
mock_is_ssl_enabled.return_value = True
mock_is_ssl_cert_master.return_value = False
context.ApacheSSLContext().configure_cert('foo')
context.ApacheSSLContext().configure_ca()
self.assertTrue(mock_mkdir.called)
self.assertTrue(mock_ensure_permissions.called)
self.assertFalse(mock_get_ca.called)
@patch('keystone_utils.is_ssl_cert_master')
@patch('keystone_utils.is_ssl_enabled')
@patch('charmhelpers.contrib.openstack.context.config')
@patch('charmhelpers.contrib.openstack.context.is_clustered')
@patch('charmhelpers.contrib.openstack.context.determine_apache_port')
@@ -27,7 +53,11 @@ class TestKeystoneContexts(CharmTestCase):
mock_determine_api_port,
mock_determine_apache_port,
mock_is_clustered,
mock_config):
mock_config,
mock_is_ssl_enabled,
mock_is_ssl_cert_master):
mock_is_ssl_enabled.return_value = True
mock_is_ssl_cert_master.return_value = True
mock_https.return_value = True
mock_unit_get.return_value = '1.2.3.4'
mock_determine_api_port.return_value = '12'
@@ -119,3 +149,13 @@ class TestKeystoneContexts(CharmTestCase):
msg = "Multiple networks configured but net_type" \
" is None (os-public-network)."
mock_log.assert_called_with(msg, level="WARNING")
@patch.object(context, 'config')
def test_keystone_logger_context(self, mock_config):
ctxt = context.KeystoneLoggingContext()
mock_config.return_value = None
self.assertEqual({}, ctxt())
mock_config.return_value = 'True'
self.assertEqual({'root_level': 'DEBUG'}, ctxt())


@@ -1,6 +1,7 @@
from mock import call, patch, MagicMock
import os
import json
import uuid
from test_utils import CharmTestCase
@@ -30,7 +31,6 @@ TO_PATCH = [
'local_unit',
'filter_installed_packages',
'relation_ids',
'relation_list',
'relation_set',
'relation_get',
'related_units',
@@ -42,9 +42,10 @@ TO_PATCH = [
'restart_on_change',
# charmhelpers.contrib.openstack.utils
'configure_installation_source',
# charmhelpers.contrib.openstack.ip
'resolve_address',
# charmhelpers.contrib.hahelpers.cluster_utils
'is_leader',
'eligible_leader',
'is_elected_leader',
'get_hacluster_config',
# keystone_utils
'restart_map',
@@ -55,14 +56,13 @@ TO_PATCH = [
'migrate_database',
'ensure_initial_admin',
'add_service_to_keystone',
'synchronize_ca',
'synchronize_ca_if_changed',
'update_nrpe_config',
# other
'check_call',
'execd_preinstall',
'mkdir',
'os',
'time',
# ip
'get_iface_for_address',
'get_netmask_for_address',
@@ -184,8 +184,13 @@ class KeystoneRelationTests(CharmTestCase):
'Attempting to associate a postgresql database when there '
'is already associated a mysql one')
@patch('keystone_utils.log')
@patch('keystone_utils.ensure_ssl_cert_master')
@patch.object(hooks, 'CONFIGS')
def test_db_changed_missing_relation_data(self, configs):
def test_db_changed_missing_relation_data(self, configs,
mock_ensure_ssl_cert_master,
mock_log):
mock_ensure_ssl_cert_master.return_value = False
configs.complete_contexts = MagicMock()
configs.complete_contexts.return_value = []
hooks.db_changed()
@@ -193,8 +198,13 @@ class KeystoneRelationTests(CharmTestCase):
'shared-db relation incomplete. Peer not ready?'
)
@patch('keystone_utils.log')
@patch('keystone_utils.ensure_ssl_cert_master')
@patch.object(hooks, 'CONFIGS')
def test_postgresql_db_changed_missing_relation_data(self, configs):
def test_postgresql_db_changed_missing_relation_data(self, configs,
mock_ensure_ssl_cert_master,
mock_log):
mock_ensure_ssl_cert_master.return_value = False
configs.complete_contexts = MagicMock()
configs.complete_contexts.return_value = []
hooks.pgsql_db_changed()
@@ -216,9 +226,19 @@ class KeystoneRelationTests(CharmTestCase):
configs.write = MagicMock()
hooks.pgsql_db_changed()
@patch.object(hooks, 'is_db_initialised')
@patch.object(hooks, 'is_db_ready')
@patch('keystone_utils.log')
@patch('keystone_utils.ensure_ssl_cert_master')
@patch.object(hooks, 'CONFIGS')
@patch.object(hooks, 'identity_changed')
def test_db_changed_allowed(self, identity_changed, configs):
def test_db_changed_allowed(self, identity_changed, configs,
mock_ensure_ssl_cert_master,
mock_log, mock_is_db_ready,
mock_is_db_initialised):
mock_is_db_initialised.return_value = True
mock_is_db_ready.return_value = True
mock_ensure_ssl_cert_master.return_value = False
self.relation_ids.return_value = ['identity-service:0']
self.related_units.return_value = ['unit/0']
@@ -231,9 +251,16 @@ class KeystoneRelationTests(CharmTestCase):
relation_id='identity-service:0',
remote_unit='unit/0')
@patch.object(hooks, 'is_db_ready')
@patch('keystone_utils.log')
@patch('keystone_utils.ensure_ssl_cert_master')
@patch.object(hooks, 'CONFIGS')
@patch.object(hooks, 'identity_changed')
def test_db_changed_not_allowed(self, identity_changed, configs):
def test_db_changed_not_allowed(self, identity_changed, configs,
mock_ensure_ssl_cert_master, mock_log,
mock_is_db_ready):
mock_is_db_ready.return_value = False
mock_ensure_ssl_cert_master.return_value = False
self.relation_ids.return_value = ['identity-service:0']
self.related_units.return_value = ['unit/0']
@@ -244,9 +271,18 @@ class KeystoneRelationTests(CharmTestCase):
self.assertFalse(self.ensure_initial_admin.called)
self.assertFalse(identity_changed.called)
@patch('keystone_utils.log')
@patch('keystone_utils.ensure_ssl_cert_master')
@patch.object(hooks, 'is_db_initialised')
@patch.object(hooks, 'is_db_ready')
@patch.object(hooks, 'CONFIGS')
@patch.object(hooks, 'identity_changed')
def test_postgresql_db_changed(self, identity_changed, configs):
def test_postgresql_db_changed(self, identity_changed, configs,
mock_is_db_ready, mock_is_db_initialised,
mock_ensure_ssl_cert_master, mock_log):
mock_is_db_initialised.return_value = True
mock_is_db_ready.return_value = True
mock_ensure_ssl_cert_master.return_value = False
self.relation_ids.return_value = ['identity-service:0']
self.related_units.return_value = ['unit/0']
@@ -260,6 +296,13 @@ class KeystoneRelationTests(CharmTestCase):
remote_unit='unit/0')
@patch.object(hooks, 'git_install_requested')
@patch('keystone_utils.log')
@patch('keystone_utils.ensure_ssl_cert_master')
@patch.object(hooks, 'send_ssl_sync_request')
@patch.object(hooks, 'is_db_initialised')
@patch.object(hooks, 'is_db_ready')
@patch.object(hooks, 'peer_units')
@patch.object(hooks, 'ensure_permissions')
@patch.object(hooks, 'admin_relation_changed')
@patch.object(hooks, 'cluster_joined')
@patch.object(unison, 'ensure_user')
@@ -270,11 +313,19 @@ class KeystoneRelationTests(CharmTestCase):
def test_config_changed_no_openstack_upgrade_leader(
self, configure_https, identity_changed,
configs, get_homedir, ensure_user, cluster_joined,
admin_relation_changed, git_requested):
admin_relation_changed, ensure_permissions, mock_peer_units,
mock_is_db_ready, mock_is_db_initialised,
mock_send_ssl_sync_request,
mock_ensure_ssl_cert_master, mock_log, git_requested):
mock_is_db_initialised.return_value = True
mock_is_db_ready.return_value = True
self.openstack_upgrade_available.return_value = False
self.eligible_leader.return_value = True
self.relation_ids.return_value = ['dummyid:0']
self.relation_list.return_value = ['unit/0']
self.is_elected_leader.return_value = True
# avoid having to mock syncer
mock_ensure_ssl_cert_master.return_value = False
mock_peer_units.return_value = []
self.relation_ids.return_value = ['identity-service:0']
self.related_units.return_value = ['unit/0']
hooks.config_changed()
ensure_user.assert_called_with(user=self.ssh_user, group='keystone')
@@ -284,16 +335,18 @@ class KeystoneRelationTests(CharmTestCase):
configure_https.assert_called_with()
self.assertTrue(configs.write_all.called)
self.migrate_database.assert_called_with()
self.assertTrue(self.ensure_initial_admin.called)
self.log.assert_called_with(
'Firing identity_changed hook for all related services.')
identity_changed.assert_called_with(
relation_id='dummyid:0',
relation_id='identity-service:0',
remote_unit='unit/0')
admin_relation_changed.assert_called_with('dummyid:0')
admin_relation_changed.assert_called_with('identity-service:0')
@patch.object(hooks, 'git_install_requested')
@patch('keystone_utils.log')
@patch('keystone_utils.ensure_ssl_cert_master')
@patch.object(hooks, 'ensure_permissions')
@patch.object(hooks, 'cluster_joined')
@patch.object(unison, 'ensure_user')
@patch.object(unison, 'get_homedir')
@@ -302,9 +355,12 @@ class KeystoneRelationTests(CharmTestCase):
@patch.object(hooks, 'configure_https')
def test_config_changed_no_openstack_upgrade_not_leader(
self, configure_https, identity_changed,
configs, get_homedir, ensure_user, cluster_joined, git_requested):
configs, get_homedir, ensure_user, cluster_joined,
ensure_permissions, mock_ensure_ssl_cert_master,
mock_log, git_requested):
self.openstack_upgrade_available.return_value = False
self.eligible_leader.return_value = False
self.is_elected_leader.return_value = False
mock_ensure_ssl_cert_master.return_value = False
hooks.config_changed()
ensure_user.assert_called_with(user=self.ssh_user, group='keystone')
@@ -319,6 +375,13 @@ class KeystoneRelationTests(CharmTestCase):
self.assertFalse(identity_changed.called)
@patch.object(hooks, 'git_install_requested')
@patch('keystone_utils.log')
@patch('keystone_utils.ensure_ssl_cert_master')
@patch.object(hooks, 'send_ssl_sync_request')
@patch.object(hooks, 'is_db_initialised')
@patch.object(hooks, 'is_db_ready')
@patch.object(hooks, 'peer_units')
@patch.object(hooks, 'ensure_permissions')
@patch.object(hooks, 'admin_relation_changed')
@patch.object(hooks, 'cluster_joined')
@patch.object(unison, 'ensure_user')
@@ -326,14 +389,27 @@ class KeystoneRelationTests(CharmTestCase):
@patch.object(hooks, 'CONFIGS')
@patch.object(hooks, 'identity_changed')
@patch.object(hooks, 'configure_https')
def test_config_changed_with_openstack_upgrade(
self, configure_https, identity_changed,
configs, get_homedir, ensure_user, cluster_joined,
admin_relation_changed, git_requested):
def test_config_changed_with_openstack_upgrade(self, configure_https,
identity_changed,
configs, get_homedir,
ensure_user, cluster_joined,
admin_relation_changed,
ensure_permissions,
mock_peer_units,
mock_is_db_ready,
mock_is_db_initialised,
mock_send_ssl_sync_request,
mock_ensure_ssl_cert_master,
mock_log, git_requested):
mock_is_db_ready.return_value = True
mock_is_db_initialised.return_value = True
self.openstack_upgrade_available.return_value = True
self.eligible_leader.return_value = True
self.relation_ids.return_value = ['dummyid:0']
self.relation_list.return_value = ['unit/0']
self.is_elected_leader.return_value = True
# avoid having to mock syncer
mock_ensure_ssl_cert_master.return_value = False
mock_peer_units.return_value = []
self.relation_ids.return_value = ['identity-service:0']
self.related_units.return_value = ['unit/0']
hooks.config_changed()
ensure_user.assert_called_with(user=self.ssh_user, group='keystone')
@@ -345,14 +421,13 @@ class KeystoneRelationTests(CharmTestCase):
configure_https.assert_called_with()
self.assertTrue(configs.write_all.called)
self.migrate_database.assert_called_with()
self.assertTrue(self.ensure_initial_admin.called)
self.log.assert_called_with(
'Firing identity_changed hook for all related services.')
identity_changed.assert_called_with(
relation_id='dummyid:0',
relation_id='identity-service:0',
remote_unit='unit/0')
admin_relation_changed.assert_called_with('dummyid:0')
admin_relation_changed.assert_called_with('identity-service:0')
@patch.object(hooks, 'git_install_requested')
@patch.object(hooks, 'cluster_joined')
@@ -388,21 +463,34 @@ class KeystoneRelationTests(CharmTestCase):
relation_id='identity-service:0',
remote_unit='unit/0')
@patch.object(hooks, 'is_db_initialised')
@patch.object(hooks, 'is_db_ready')
@patch('keystone_utils.log')
@patch('keystone_utils.ensure_ssl_cert_master')
@patch.object(hooks, 'hashlib')
@patch.object(hooks, 'send_notifications')
def test_identity_changed_leader(self, mock_send_notifications,
mock_hashlib):
self.eligible_leader.return_value = True
mock_hashlib, mock_ensure_ssl_cert_master,
mock_log, mock_is_db_ready,
mock_is_db_initialised):
mock_is_db_initialised.return_value = True
mock_is_db_ready.return_value = True
mock_ensure_ssl_cert_master.return_value = False
hooks.identity_changed(
relation_id='identity-service:0',
remote_unit='unit/0')
self.add_service_to_keystone.assert_called_with(
'identity-service:0',
'unit/0')
self.assertTrue(self.synchronize_ca.called)
def test_identity_changed_no_leader(self):
self.eligible_leader.return_value = False
@patch.object(hooks, 'local_unit')
@patch('keystone_utils.log')
@patch('keystone_utils.ensure_ssl_cert_master')
def test_identity_changed_no_leader(self, mock_ensure_ssl_cert_master,
mock_log, mock_local_unit):
mock_ensure_ssl_cert_master.return_value = False
mock_local_unit.return_value = 'unit/0'
self.is_elected_leader.return_value = False
hooks.identity_changed(
relation_id='identity-service:0',
remote_unit='unit/0')
@@ -410,23 +498,43 @@ class KeystoneRelationTests(CharmTestCase):
self.log.assert_called_with(
'Deferring identity_changed() to service leader.')
@patch.object(hooks, 'local_unit')
@patch.object(hooks, 'peer_units')
@patch.object(unison, 'ssh_authorized_peers')
def test_cluster_joined(self, ssh_authorized_peers):
def test_cluster_joined(self, ssh_authorized_peers, mock_peer_units,
mock_local_unit):
mock_local_unit.return_value = 'unit/0'
mock_peer_units.return_value = ['unit/0']
hooks.cluster_joined()
ssh_authorized_peers.assert_called_with(
user=self.ssh_user, group='juju_keystone',
peer_interface='cluster', ensure_local_user=True)
@patch.object(hooks, 'is_ssl_cert_master')
@patch.object(hooks, 'peer_units')
@patch('keystone_utils.log')
@patch('keystone_utils.ensure_ssl_cert_master')
@patch('keystone_utils.synchronize_ca')
@patch.object(hooks, 'check_peer_actions')
@patch.object(unison, 'ssh_authorized_peers')
@patch.object(hooks, 'CONFIGS')
def test_cluster_changed(self, configs, ssh_authorized_peers):
def test_cluster_changed(self, configs, ssh_authorized_peers,
check_peer_actions, mock_synchronize_ca,
mock_ensure_ssl_cert_master,
mock_log, mock_peer_units,
mock_is_ssl_cert_master):
mock_is_ssl_cert_master.return_value = False
mock_peer_units.return_value = ['unit/0']
mock_ensure_ssl_cert_master.return_value = False
self.is_elected_leader.return_value = False
hooks.cluster_changed()
self.peer_echo.assert_called_with(includes=['_passwd',
'identity-service:'])
whitelist = ['_passwd', 'identity-service:', 'ssl-cert-master',
'db-initialised']
self.peer_echo.assert_called_with(includes=whitelist)
ssh_authorized_peers.assert_called_with(
user=self.ssh_user, group='keystone',
user=self.ssh_user, group='juju_keystone',
peer_interface='cluster', ensure_local_user=True)
self.assertTrue(self.synchronize_ca.called)
self.assertFalse(mock_synchronize_ca.called)
self.assertTrue(configs.write_all.called)
def test_ha_joined(self):
@@ -439,6 +547,7 @@ class KeystoneRelationTests(CharmTestCase):
self.get_netmask_for_address.return_value = '255.255.255.0'
hooks.ha_joined()
args = {
'relation_id': None,
'corosync_bindiface': 'em0',
'corosync_mcastport': '8080',
'init_services': {'res_ks_haproxy': 'haproxy'},
@@ -464,6 +573,7 @@ class KeystoneRelationTests(CharmTestCase):
self.get_netmask_for_address.return_value = None
hooks.ha_joined()
args = {
'relation_id': None,
'corosync_bindiface': 'em0',
'corosync_mcastport': '8080',
'init_services': {'res_ks_haproxy': 'haproxy'},
@@ -488,6 +598,7 @@ class KeystoneRelationTests(CharmTestCase):
self.get_netmask_for_address.return_value = '64'
hooks.ha_joined()
args = {
'relation_id': None,
'corosync_bindiface': 'em0',
'corosync_mcastport': '8080',
'init_services': {'res_ks_haproxy': 'haproxy'},
@@ -501,34 +612,56 @@ class KeystoneRelationTests(CharmTestCase):
}
self.relation_set.assert_called_with(**args)
@patch('keystone_utils.log')
@patch('keystone_utils.ensure_ssl_cert_master')
@patch('keystone_utils.synchronize_ca')
@patch.object(hooks, 'CONFIGS')
def test_ha_relation_changed_not_clustered_not_leader(self, configs):
def test_ha_relation_changed_not_clustered_not_leader(self, configs,
mock_synchronize_ca,
mock_is_master,
mock_log):
mock_is_master.return_value = False
self.relation_get.return_value = False
self.is_leader.return_value = False
self.is_elected_leader.return_value = False
hooks.ha_changed()
self.assertTrue(configs.write_all.called)
self.assertFalse(mock_synchronize_ca.called)
@patch('keystone_utils.log')
@patch('keystone_utils.ensure_ssl_cert_master')
@patch.object(hooks, 'is_db_ready')
@patch.object(hooks, 'is_db_initialised')
@patch.object(hooks, 'identity_changed')
@patch.object(hooks, 'CONFIGS')
def test_ha_relation_changed_clustered_leader(
self, configs, identity_changed):
def test_ha_relation_changed_clustered_leader(self, configs,
identity_changed,
mock_is_db_initialised,
mock_is_db_ready,
mock_ensure_ssl_cert_master,
mock_log):
mock_is_db_initialised.return_value = True
mock_is_db_ready.return_value = True
mock_ensure_ssl_cert_master.return_value = False
self.relation_get.return_value = True
self.is_leader.return_value = True
self.is_elected_leader.return_value = True
self.relation_ids.return_value = ['identity-service:0']
self.related_units.return_value = ['unit/0']
hooks.ha_changed()
self.assertTrue(configs.write_all.called)
self.log.assert_called_with(
'Cluster configured, notifying other services and updating '
'keystone endpoint configuration')
'Firing identity_changed hook for all related services.')
identity_changed.assert_called_with(
relation_id='identity-service:0',
remote_unit='unit/0')
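The assertion pair above implies a simple fan-out in ha_changed(): walk every identity-service relation and re-fire identity_changed for each related unit. A minimal sketch of that pattern, with the lookup callables injected so it stays self-contained (the helper name and signature are illustrative, not the hook's actual body):

```python
# Sketch of the fan-out the test asserts: for each identity-service
# relation id, fire identity_changed once per related unit.
def notify_identity_relations(relation_ids, related_units, identity_changed):
    fired = []
    for rid in relation_ids('identity-service'):
        for unit in related_units(rid):
            identity_changed(relation_id=rid, remote_unit=unit)
            fired.append((rid, unit))
    return fired

# With one relation and one unit, exactly one call is made --
# matching identity_changed.assert_called_with(...) in the test.
calls = notify_identity_relations(
    lambda name: ['identity-service:0'],
    lambda rid: ['unit/0'],
    lambda **kw: None)
```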
@patch('keystone_utils.log')
@patch('keystone_utils.ensure_ssl_cert_master')
@patch.object(hooks, 'CONFIGS')
def test_configure_https_enable(self, configs):
def test_configure_https_enable(self, configs, mock_ensure_ssl_cert_master,
mock_log):
mock_ensure_ssl_cert_master.return_value = False
configs.complete_contexts = MagicMock()
configs.complete_contexts.return_value = ['https']
configs.write = MagicMock()
@@ -538,8 +671,13 @@ class KeystoneRelationTests(CharmTestCase):
cmd = ['a2ensite', 'openstack_https_frontend']
self.check_call.assert_called_with(cmd)
@patch('keystone_utils.log')
@patch('keystone_utils.ensure_ssl_cert_master')
@patch.object(hooks, 'CONFIGS')
def test_configure_https_disable(self, configs):
def test_configure_https_disable(self, configs,
mock_ensure_ssl_cert_master,
mock_log):
mock_ensure_ssl_cert_master.return_value = False
configs.complete_contexts = MagicMock()
configs.complete_contexts.return_value = ['']
configs.write = MagicMock()
@@ -550,35 +688,71 @@ class KeystoneRelationTests(CharmTestCase):
self.check_call.assert_called_with(cmd)
@patch.object(utils, 'git_install_requested')
@patch.object(hooks, 'is_db_ready')
@patch.object(hooks, 'is_db_initialised')
@patch('keystone_utils.log')
@patch('keystone_utils.relation_ids')
@patch('keystone_utils.is_elected_leader')
@patch('keystone_utils.ensure_ssl_cert_master')
@patch('keystone_utils.update_hash_from_path')
@patch('keystone_utils.synchronize_ca')
@patch.object(unison, 'ssh_authorized_peers')
def test_upgrade_charm_leader(self, ssh_authorized_peers, git_requested):
self.eligible_leader.return_value = True
def test_upgrade_charm_leader(self, ssh_authorized_peers,
mock_synchronize_ca,
mock_update_hash_from_path,
mock_ensure_ssl_cert_master,
mock_is_elected_leader,
mock_relation_ids,
mock_log,
mock_is_db_ready,
mock_is_db_initialised,
git_requested):
mock_is_db_initialised.return_value = True
mock_is_db_ready.return_value = True
mock_is_elected_leader.return_value = False
mock_relation_ids.return_value = []
mock_ensure_ssl_cert_master.return_value = True
# Ensure the hash always changes so a difference is always detected
mock_update_hash_from_path.side_effect = \
lambda hash, *args, **kwargs: hash.update(str(uuid.uuid4()))
self.is_elected_leader.return_value = True
self.filter_installed_packages.return_value = []
git_requested.return_value = False
hooks.upgrade_charm()
self.assertTrue(self.apt_install.called)
ssh_authorized_peers.assert_called_with(
user=self.ssh_user, group='keystone',
user=self.ssh_user, group='juju_keystone',
peer_interface='cluster', ensure_local_user=True)
self.assertTrue(self.synchronize_ca.called)
self.assertTrue(mock_synchronize_ca.called)
self.log.assert_called_with(
'Cluster leader - ensuring endpoint configuration'
' is up to date')
'Firing identity_changed hook for all related services.')
self.assertTrue(self.ensure_initial_admin.called)
@patch('charmhelpers.core.hookenv.config')
@patch.object(utils, 'git_install_requested')
@patch('keystone_utils.log')
@patch('keystone_utils.relation_ids')
@patch('keystone_utils.ensure_ssl_cert_master')
@patch('keystone_utils.update_hash_from_path')
@patch.object(unison, 'ssh_authorized_peers')
def test_upgrade_charm_not_leader(self, ssh_authorized_peers,
git_requested, mock_config):
self.eligible_leader.return_value = False
mock_update_hash_from_path,
mock_ensure_ssl_cert_master,
mock_relation_ids,
mock_log, git_requested):
mock_relation_ids.return_value = []
mock_ensure_ssl_cert_master.return_value = False
# Ensure the hash always changes so a difference is always detected
mock_update_hash_from_path.side_effect = \
lambda hash, *args, **kwargs: hash.update(str(uuid.uuid4()))
self.is_elected_leader.return_value = False
self.filter_installed_packages.return_value = []
git_requested.return_value = False
hooks.upgrade_charm()
self.assertTrue(self.apt_install.called)
ssh_authorized_peers.assert_called_with(
user=self.ssh_user, group='keystone',
user=self.ssh_user, group='juju_keystone',
peer_interface='cluster', ensure_local_user=True)
self.assertTrue(self.synchronize_ca.called)
self.assertFalse(self.log.called)
self.assertFalse(self.ensure_initial_admin.called)


@@ -7,7 +7,8 @@ os.environ['JUJU_UNIT_NAME'] = 'keystone'
with patch('charmhelpers.core.hookenv.config') as config:
import keystone_utils as utils
import keystone_hooks as hooks
with patch.object(utils, 'register_configs'):
import keystone_hooks as hooks
TO_PATCH = [
'api_port',
@@ -26,14 +27,18 @@ TO_PATCH = [
'get_os_codename_install_source',
'grant_role',
'configure_installation_source',
'eligible_leader',
'is_elected_leader',
'is_ssl_cert_master',
'https',
'is_clustered',
'peer_store_and_set',
'service_stop',
'service_start',
'relation_get',
'relation_set',
'relation_ids',
'relation_id',
'local_unit',
'related_units',
'https',
'is_relation_made',
'peer_store',
@@ -126,7 +131,7 @@ class TestKeystoneUtils(CharmTestCase):
self, migrate_database, determine_packages, configs):
self.test_config.set('openstack-origin', 'precise')
determine_packages.return_value = []
self.eligible_leader.return_value = True
self.is_elected_leader.return_value = True
utils.do_openstack_upgrade(configs)
@@ -213,10 +218,10 @@ class TestKeystoneUtils(CharmTestCase):
self.resolve_address.return_value = '10.0.0.3'
self.test_config.set('admin-port', 80)
self.test_config.set('service-port', 81)
self.is_clustered.return_value = False
self.https.return_value = False
self.test_config.set('https-service-endpoints', 'False')
self.get_local_endpoint.return_value = 'http://localhost:80/v2.0/'
self.relation_ids.return_value = ['cluster/0']
mock_keystone = MagicMock()
mock_keystone.resolve_tenant_id.return_value = 'tenant_id'
@@ -246,15 +251,23 @@ class TestKeystoneUtils(CharmTestCase):
'auth_port': 80, 'service_username': 'keystone',
'service_password': 'password',
'service_tenant': 'tenant',
'https_keystone': 'False',
'ssl_cert': '', 'ssl_key': '',
'ca_cert': '', 'auth_host': '10.0.0.3',
'https_keystone': '__null__',
'ssl_cert': '__null__', 'ssl_key': '__null__',
'ca_cert': '__null__', 'auth_host': '10.0.0.3',
'service_host': '10.0.0.3',
'auth_protocol': 'http', 'service_protocol': 'http',
'service_tenant_id': 'tenant_id'}
self.peer_store_and_set.assert_called_with(
relation_id=relation_id,
**relation_data)
filtered = {}
for k, v in relation_data.iteritems():
if v == '__null__':
filtered[k] = None
else:
filtered[k] = v
call1 = call(relation_id=relation_id, **filtered)
call2 = call(relation_id='cluster/0', **relation_data)
self.relation_set.assert_has_calls([call1, call2])
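The filtering loop in the test above exercises a sentinel pattern: values stored in peer storage as the string '__null__' are translated back to None before being handed to relation_set, which clears those keys on the relation. A compact sketch of that translation (the constant and helper names are assumptions for illustration):

```python
NULL_MARKER = '__null__'  # assumed sentinel name, mirroring the test data

def filter_null(settings):
    """Map the '__null__' sentinel back to None so relation_set
    unsets those keys instead of storing the literal marker."""
    return {k: (None if v == NULL_MARKER else v)
            for k, v in settings.items()}

data = {'ca_cert': '__null__', 'auth_host': '10.0.0.3'}
filtered = filter_null(data)
# 'ca_cert' becomes None (cleared); 'auth_host' passes through unchanged.
```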
@patch.object(utils, 'ensure_valid_service')
@patch.object(utils, 'add_endpoint')
@@ -359,6 +372,174 @@ class TestKeystoneUtils(CharmTestCase):
self.subprocess.check_output.return_value = 'supersecretgen'
self.assertEqual(utils.get_admin_passwd(), 'supersecretgen')
def test_is_db_ready(self):
allowed_units = None
def fake_rel_get(attribute=None, *args, **kwargs):
if attribute == 'allowed_units':
return allowed_units
self.relation_get.side_effect = fake_rel_get
self.relation_id.return_value = 'shared-db:0'
self.relation_ids.return_value = ['shared-db:0']
self.local_unit.return_value = 'unit/0'
allowed_units = 'unit/0'
self.assertTrue(utils.is_db_ready(use_current_context=True))
self.relation_ids.return_value = ['acme:0']
self.assertRaises(Exception, utils.is_db_ready, use_current_context=True)
self.related_units.return_value = ['unit/0']
self.relation_ids.return_value = ['shared-db:0', 'shared-db:1']
self.assertTrue(utils.is_db_ready())
allowed_units = 'unit/1'
self.assertFalse(utils.is_db_ready())
self.related_units.return_value = []
self.assertTrue(utils.is_db_ready())
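test_is_db_ready above drives one core rule: a unit counts as database-ready when the shared-db relation's allowed_units setting names the local unit. A minimal, assumption-laden sketch of just that check (the function name and argument shapes are illustrative, not the charm's exact API):

```python
def is_db_ready_sketch(allowed_units, local_unit):
    """True when allowed_units (a space-separated string, as Juju
    relation data stores unit lists) contains local_unit."""
    if not allowed_units:
        return False
    return local_unit in allowed_units.split()

# 'unit/0' appears in the allowed list, so the DB is ready for it;
# a list naming only 'unit/1' is not ready for 'unit/0'.
```

The real helper layers more on top (relation-name validation, the no-related-units case asserted at the end of the test), but the allowed_units membership test is the pivot the assertions flip on.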
@patch.object(utils, 'peer_units')
@patch.object(utils, 'is_ssl_enabled')
def test_ensure_ssl_cert_master_no_ssl(self, mock_is_ssl_enabled,
mock_peer_units):
mock_is_ssl_enabled.return_value = False
self.assertFalse(utils.ensure_ssl_cert_master())
self.assertFalse(self.relation_set.called)
@patch.object(utils, 'peer_units')
@patch.object(utils, 'is_ssl_enabled')
def test_ensure_ssl_cert_master_ssl_no_peers(self, mock_is_ssl_enabled,
mock_peer_units):
def mock_rel_get(unit=None, **kwargs):
return None
self.relation_get.side_effect = mock_rel_get
mock_is_ssl_enabled.return_value = True
self.relation_ids.return_value = ['cluster:0']
self.local_unit.return_value = 'unit/0'
self.related_units.return_value = []
mock_peer_units.return_value = []
# This should get ignored since we are overriding
self.is_ssl_cert_master.return_value = False
self.is_elected_leader.return_value = False
self.assertTrue(utils.ensure_ssl_cert_master())
settings = {'ssl-cert-master': 'unit/0'}
self.relation_set.assert_called_with(relation_id='cluster:0',
relation_settings=settings)
@patch.object(utils, 'peer_units')
@patch.object(utils, 'is_ssl_enabled')
def test_ensure_ssl_cert_master_ssl_master_no_peers(self,
mock_is_ssl_enabled,
mock_peer_units):
def mock_rel_get(unit=None, **kwargs):
if unit == 'unit/0':
return 'unit/0'
return None
self.relation_get.side_effect = mock_rel_get
mock_is_ssl_enabled.return_value = True
self.relation_ids.return_value = ['cluster:0']
self.local_unit.return_value = 'unit/0'
self.related_units.return_value = []
mock_peer_units.return_value = []
# This should get ignored since we are overriding
self.is_ssl_cert_master.return_value = False
self.is_elected_leader.return_value = False
self.assertTrue(utils.ensure_ssl_cert_master())
settings = {'ssl-cert-master': 'unit/0'}
self.relation_set.assert_called_with(relation_id='cluster:0',
relation_settings=settings)
@patch.object(utils, 'peer_units')
@patch.object(utils, 'is_ssl_enabled')
def test_ensure_ssl_cert_master_ssl_not_leader(self, mock_is_ssl_enabled,
mock_peer_units):
mock_is_ssl_enabled.return_value = True
self.relation_ids.return_value = ['cluster:0']
self.local_unit.return_value = 'unit/0'
mock_peer_units.return_value = ['unit/1']
self.is_ssl_cert_master.return_value = False
self.is_elected_leader.return_value = False
self.assertFalse(utils.ensure_ssl_cert_master())
self.assertFalse(self.relation_set.called)
@patch.object(utils, 'peer_units')
@patch.object(utils, 'is_ssl_enabled')
def test_ensure_ssl_cert_master_is_leader_new_peer(self,
mock_is_ssl_enabled,
mock_peer_units):
def mock_rel_get(unit=None, **kwargs):
if unit == 'unit/0':
return 'unit/0'
return 'unknown'
self.relation_get.side_effect = mock_rel_get
mock_is_ssl_enabled.return_value = True
self.relation_ids.return_value = ['cluster:0']
self.local_unit.return_value = 'unit/0'
mock_peer_units.return_value = ['unit/1']
self.related_units.return_value = ['unit/1']
self.is_ssl_cert_master.return_value = False
self.is_elected_leader.return_value = True
self.assertFalse(utils.ensure_ssl_cert_master())
settings = {'ssl-cert-master': 'unit/0'}
self.relation_set.assert_called_with(relation_id='cluster:0',
relation_settings=settings)
@patch.object(utils, 'peer_units')
@patch.object(utils, 'is_ssl_enabled')
def test_ensure_ssl_cert_master_is_leader_no_new_peer(self,
mock_is_ssl_enabled,
mock_peer_units):
def mock_rel_get(unit=None, **kwargs):
if unit == 'unit/0':
return 'unit/0'
return 'unit/0'
self.relation_get.side_effect = mock_rel_get
mock_is_ssl_enabled.return_value = True
self.relation_ids.return_value = ['cluster:0']
self.local_unit.return_value = 'unit/0'
mock_peer_units.return_value = ['unit/1']
self.related_units.return_value = ['unit/1']
self.is_ssl_cert_master.return_value = False
self.is_elected_leader.return_value = True
self.assertFalse(utils.ensure_ssl_cert_master())
self.assertFalse(self.relation_set.called)
@patch.object(utils, 'peer_units')
@patch.object(utils, 'is_ssl_enabled')
def test_ensure_ssl_cert_master_is_leader_bad_votes(self,
mock_is_ssl_enabled,
mock_peer_units):
counter = {0: 0}
def mock_rel_get(unit=None, **kwargs):
"""Returns a mix of votes."""
if unit == 'unit/0':
return 'unit/0'
ret = 'unit/%d' % (counter[0])
counter[0] += 1
return ret
self.relation_get.side_effect = mock_rel_get
mock_is_ssl_enabled.return_value = True
self.relation_ids.return_value = ['cluster:0']
self.local_unit.return_value = 'unit/0'
mock_peer_units.return_value = ['unit/1']
self.related_units.return_value = ['unit/1']
self.is_ssl_cert_master.return_value = False
self.is_elected_leader.return_value = True
self.assertFalse(utils.ensure_ssl_cert_master())
self.assertFalse(self.relation_set.called)
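The ensure_ssl_cert_master tests above revolve around peer "votes": each peer records which unit it believes holds ssl-cert-master, and the bad-votes case asserts that a mixed set of votes blocks any claim. The unanimity check at the heart of that can be reduced to a one-liner (names here are assumptions, not the charm's internals):

```python
def votes_unanimous(local_unit, peer_votes):
    """True only when there is at least one peer vote and every
    vote names local_unit."""
    return bool(peer_votes) and all(v == local_unit for v in peer_votes)

# Mirrors test_ensure_ssl_cert_master_is_leader_bad_votes: a mix of
# 'unit/0' and 'unit/1' votes is not unanimous, so no claim is made
# and relation_set is never called.
```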
@patch.object(utils, 'git_install_requested')
@patch.object(utils, 'git_clone_and_install')
@patch.object(utils, 'git_post_install')