Merge of python-redux work for havana cycle

commit 94006884d5

1  .bzrignore  Normal file
@@ -0,0 +1 @@
.coverage
6  .coveragerc  Normal file
@@ -0,0 +1,6 @@
[report]
# Regexes for lines to exclude from consideration
exclude_lines =
    if __name__ == .__main__.:
include=
    hooks/quantum_*
@@ -4,6 +4,6 @@
<pydev_property name="org.python.pydev.PYTHON_PROJECT_INTERPRETER">Default</pydev_property>
<pydev_pathproperty name="org.python.pydev.PROJECT_SOURCE_PATH">
<path>/quantum-gateway/hooks</path>
<path>/quantum-gateway/templates</path>
<path>/quantum-gateway/unit_tests</path>
</pydev_pathproperty>
</pydev_project>
14  Makefile  Normal file
@@ -0,0 +1,14 @@
#!/usr/bin/make
PYTHON := /usr/bin/env python

lint:
	@flake8 --exclude hooks/charmhelpers hooks
	@flake8 --exclude hooks/charmhelpers unit_tests
	@charm proof

test:
	@echo Starting tests...
	@$(PYTHON) /usr/bin/nosetests --nologcapture unit_tests

sync:
	@charm-helper-sync -c charm-helpers-sync.yaml
20  README.md
@@ -1,7 +1,7 @@
 Overview
 --------
 
-Quantum provides flexible software defined networking (SDN) for OpenStack.
+Neutron provides flexible software defined networking (SDN) for OpenStack.
 
 This charm is designed to be used in conjunction with the rest of the OpenStack
 related charms in the charm store to virtualize the network that Nova Compute
@@ -11,34 +11,34 @@ It's designed as a replacement for nova-network; however it does not yet
 support all of the features of nova-network (such as multihost) so may not
 be suitable for all deployments.
 
-Quantum supports a rich plugin/extension framework for propriety networking
+Neutron supports a rich plugin/extension framework for proprietary networking
 solutions and supports (in core) Nicira NVP, NEC, Cisco and others...
 
 The OpenStack charms currently only support the fully free OpenvSwitch plugin
 and implement the 'Provider Router with Private Networks' use case.
 
-See the upstream [Quantum documentation](http://docs.openstack.org/trunk/openstack-network/admin/content/use_cases_single_router.html)
+See the upstream [Neutron documentation](http://docs.openstack.org/trunk/openstack-network/admin/content/use_cases_single_router.html)
 for more details.
 
 
 Usage
 -----
 
-In order to use Quantum with Openstack, you will need to deploy the
+In order to use Neutron with OpenStack, you will need to deploy the
 nova-compute and nova-cloud-controller charms with the network-manager
-configuration set to 'Quantum':
+configuration set to 'Neutron':
 
     nova-cloud-controller:
-      network-manager: Quantum
+      network-manager: Neutron
 
 This decision must be made prior to deploying OpenStack with Juju as
-Quantum is deployed baked into these charms from install onwards:
+Neutron is baked into these charms from install onwards:
 
     juju deploy nova-compute
     juju deploy --config config.yaml nova-cloud-controller
     juju add-relation nova-compute nova-cloud-controller
 
-The Quantum Gateway can then be added to the deploying:
+The Neutron Gateway can then be added to the deployment:
 
     juju deploy quantum-gateway
     juju add-relation quantum-gateway mysql
@@ -47,12 +47,10 @@ The Quantum Gateway can then be added to the deployment:
 
 The gateway provides two key services: L3 network routing and DHCP services.
 
-These are both required in a fully functional Quantum Openstack deployment.
+These are both required in a fully functional Neutron OpenStack deployment.
 
 TODO
 ----
 
 * Provide more network configuration use cases.
 * Support VLAN in addition to GRE+OpenFlow for L2 separation.
 * High Availability.
 * Support for proprietary plugins for Quantum.
9  charm-helpers-sync.yaml  Normal file
@@ -0,0 +1,9 @@
branch: lp:charm-helpers
destination: hooks/charmhelpers
include:
    - core
    - fetch
    - contrib.openstack
    - contrib.hahelpers
    - contrib.network.ovs
    - payload.execd
12  config.yaml
@@ -7,6 +7,7 @@ options:
       Supported values include:
       .
       ovs - OpenVSwitch
+      nvp - Nicira NVP
   ext-port:
     type: string
     description: |
@@ -14,7 +15,7 @@ options:
       traffic to the external public network.
   openstack-origin:
     type: string
-    default: cloud:precise-folsom
+    default: distro
     description: |
       Optional configuration to support use of additional sources such as:
       .
@@ -22,3 +23,12 @@ options:
       - cloud:precise-folsom/proposed
       - cloud:precise-folsom
       - deb http://my.archive.com/ubuntu main|KEYID
+      .
+      Note that quantum/neutron is only supported >= Folsom.
+  rabbit-user:
+    type: string
+    default: nova
+  rabbit-vhost:
+    type: string
+    default: nova

@@ -1 +1 @@
-hooks.py
+quantum_hooks.py

@@ -1 +1 @@
-hooks.py
+quantum_hooks.py
0  hooks/charmhelpers/contrib/__init__.py  Normal file
0  hooks/charmhelpers/contrib/hahelpers/__init__.py  Normal file
58  hooks/charmhelpers/contrib/hahelpers/apache.py  Normal file
@@ -0,0 +1,58 @@
|
||||
#
|
||||
# Copyright 2012 Canonical Ltd.
|
||||
#
|
||||
# This file is sourced from lp:openstack-charm-helpers
|
||||
#
|
||||
# Authors:
|
||||
# James Page <james.page@ubuntu.com>
|
||||
# Adam Gandelman <adamg@ubuntu.com>
|
||||
#
|
||||
|
||||
import subprocess
|
||||
|
||||
from charmhelpers.core.hookenv import (
|
||||
config as config_get,
|
||||
relation_get,
|
||||
relation_ids,
|
||||
related_units as relation_list,
|
||||
log,
|
||||
INFO,
|
||||
)
|
||||
|
||||
|
||||
def get_cert():
|
||||
cert = config_get('ssl_cert')
|
||||
key = config_get('ssl_key')
|
||||
if not (cert and key):
|
||||
log("Inspecting identity-service relations for SSL certificate.",
|
||||
level=INFO)
|
||||
cert = key = None
|
||||
for r_id in relation_ids('identity-service'):
|
||||
for unit in relation_list(r_id):
|
||||
if not cert:
|
||||
cert = relation_get('ssl_cert',
|
||||
rid=r_id, unit=unit)
|
||||
if not key:
|
||||
key = relation_get('ssl_key',
|
||||
rid=r_id, unit=unit)
|
||||
return (cert, key)
|
||||
|
||||
|
||||
def get_ca_cert():
|
||||
ca_cert = None
|
||||
log("Inspecting identity-service relations for CA SSL certificate.",
|
||||
level=INFO)
|
||||
for r_id in relation_ids('identity-service'):
|
||||
for unit in relation_list(r_id):
|
||||
if not ca_cert:
|
||||
ca_cert = relation_get('ca_cert',
|
||||
rid=r_id, unit=unit)
|
||||
return ca_cert
|
||||
|
||||
|
||||
def install_ca_cert(ca_cert):
|
||||
if ca_cert:
|
||||
with open('/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt',
|
||||
'w') as crt:
|
||||
crt.write(ca_cert)
|
||||
subprocess.check_call(['update-ca-certificates', '--fresh'])
|
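For illustration only (outside the scope of this commit), a charm hook could consume the helpers above roughly as follows; the hook function name is hypothetical:

# Hypothetical hook-side usage of the apache helpers above.
from charmhelpers.contrib.hahelpers.apache import (
    get_cert,
    get_ca_cert,
    install_ca_cert,
)


def configure_https():
    # Pull SSL material from charm config or the identity-service relation.
    cert, key = get_cert()
    if not (cert and key):
        # Nothing published yet; try again on the next relation hook.
        return
    # Install any advertised CA certificate system-wide (no-op if None).
    install_ca_cert(get_ca_cert())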
@ -1,24 +1,31 @@
|
||||
#
|
||||
# Copyright 2012 Canonical Ltd.
|
||||
#
|
||||
# This file is sourced from lp:openstack-charm-helpers
|
||||
#
|
||||
# Authors:
|
||||
# James Page <james.page@ubuntu.com>
|
||||
# Adam Gandelman <adamg@ubuntu.com>
|
||||
#
|
||||
|
||||
from lib.utils import (
|
||||
juju_log,
|
||||
relation_ids,
|
||||
relation_list,
|
||||
relation_get,
|
||||
get_unit_hostname,
|
||||
config_get
|
||||
)
|
||||
import subprocess
|
||||
import os
|
||||
|
||||
from socket import gethostname as get_unit_hostname
|
||||
|
||||
from charmhelpers.core.hookenv import (
|
||||
log,
|
||||
relation_ids,
|
||||
related_units as relation_list,
|
||||
relation_get,
|
||||
config as config_get,
|
||||
INFO,
|
||||
ERROR,
|
||||
unit_get,
|
||||
)
|
||||
|
||||
|
||||
class HAIncompleteConfig(Exception):
|
||||
pass
|
||||
|
||||
|
||||
def is_clustered():
|
||||
for r_id in (relation_ids('ha') or []):
|
||||
@ -35,7 +42,7 @@ def is_leader(resource):
|
||||
cmd = [
|
||||
"crm", "resource",
|
||||
"show", resource
|
||||
]
|
||||
]
|
||||
try:
|
||||
status = subprocess.check_output(cmd)
|
||||
except subprocess.CalledProcessError:
|
||||
@ -67,12 +74,12 @@ def oldest_peer(peers):
|
||||
def eligible_leader(resource):
|
||||
if is_clustered():
|
||||
if not is_leader(resource):
|
||||
juju_log('INFO', 'Deferring action to CRM leader.')
|
||||
log('Deferring action to CRM leader.', level=INFO)
|
||||
return False
|
||||
else:
|
||||
peers = peer_units()
|
||||
if peers and not oldest_peer(peers):
|
||||
juju_log('INFO', 'Deferring action to oldest service unit.')
|
||||
log('Deferring action to oldest service unit.', level=INFO)
|
||||
return False
|
||||
return True
|
||||
|
||||
@ -90,10 +97,14 @@ def https():
|
||||
return True
|
||||
for r_id in relation_ids('identity-service'):
|
||||
for unit in relation_list(r_id):
|
||||
if (relation_get('https_keystone', rid=r_id, unit=unit) and
|
||||
relation_get('ssl_cert', rid=r_id, unit=unit) and
|
||||
relation_get('ssl_key', rid=r_id, unit=unit) and
|
||||
relation_get('ca_cert', rid=r_id, unit=unit)):
|
||||
rel_state = [
|
||||
relation_get('https_keystone', rid=r_id, unit=unit),
|
||||
relation_get('ssl_cert', rid=r_id, unit=unit),
|
||||
relation_get('ssl_key', rid=r_id, unit=unit),
|
||||
relation_get('ca_cert', rid=r_id, unit=unit),
|
||||
]
|
||||
# NOTE: works around (LP: #1203241)
|
||||
if (None not in rel_state) and ('' not in rel_state):
|
||||
return True
|
||||
return False
|
||||
|
||||
@ -128,3 +139,45 @@ def determine_haproxy_port(public_port):
|
||||
if https():
|
||||
i += 1
|
||||
return public_port - (i * 10)
|
||||
|
||||
|
||||
def get_hacluster_config():
|
||||
'''
|
||||
Obtains all relevant configuration from charm configuration required
|
||||
for initiating a relation to hacluster:
|
||||
|
||||
ha-bindiface, ha-mcastport, vip, vip_iface, vip_cidr
|
||||
|
||||
returns: dict: A dict containing settings keyed by setting name.
|
||||
raises: HAIncompleteConfig if settings are missing.
|
||||
'''
|
||||
settings = ['ha-bindiface', 'ha-mcastport', 'vip', 'vip_iface', 'vip_cidr']
|
||||
conf = {}
|
||||
for setting in settings:
|
||||
conf[setting] = config_get(setting)
|
||||
missing = []
|
||||
[missing.append(s) for s, v in conf.iteritems() if v is None]
|
||||
if missing:
|
||||
log('Insufficient config data to configure hacluster.', level=ERROR)
|
||||
raise HAIncompleteConfig
|
||||
return conf
|
||||
|
||||
|
||||
def canonical_url(configs, vip_setting='vip'):
|
||||
'''
|
||||
Returns the correct HTTP URL to this host given the state of HTTPS
|
||||
configuration and hacluster.
|
||||
|
||||
:configs : OSTemplateRenderer: A config templating object to inspect for
|
||||
a complete https context.
|
||||
:vip_setting: str: Setting in charm config that specifies
|
||||
VIP address.
|
||||
'''
|
||||
scheme = 'http'
|
||||
if 'https' in configs.complete_contexts():
|
||||
scheme = 'https'
|
||||
if is_clustered():
|
||||
addr = config_get(vip_setting)
|
||||
else:
|
||||
addr = unit_get('private-address')
|
||||
return '%s://%s' % (scheme, addr)
|
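For illustration, a sketch of how a charm might combine these cluster helpers; the resource name and API port below are assumptions, not taken from this commit:

# Hypothetical usage of the cluster helpers above.
from charmhelpers.contrib.hahelpers.cluster import (
    eligible_leader,
    canonical_url,
)


def register_endpoint(configs):
    # Only the CRM leader (or the oldest peer when not clustered) proceeds.
    if not eligible_leader('res_quantum_vip'):
        return None
    # canonical_url() picks http/https and the VIP or private-address.
    return '%s:9696/' % canonical_url(configs)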
0  hooks/charmhelpers/contrib/network/__init__.py  Normal file
72  hooks/charmhelpers/contrib/network/ovs/__init__.py  Normal file
@@ -0,0 +1,72 @@
|
||||
''' Helpers for interacting with OpenvSwitch '''
|
||||
import subprocess
|
||||
import os
|
||||
from charmhelpers.core.hookenv import (
|
||||
log, WARNING
|
||||
)
|
||||
from charmhelpers.core.host import (
|
||||
service
|
||||
)
|
||||
|
||||
|
||||
def add_bridge(name):
|
||||
''' Add the named bridge to openvswitch '''
|
||||
log('Creating bridge {}'.format(name))
|
||||
subprocess.check_call(["ovs-vsctl", "--", "--may-exist", "add-br", name])
|
||||
|
||||
|
||||
def del_bridge(name):
|
||||
''' Delete the named bridge from openvswitch '''
|
||||
log('Deleting bridge {}'.format(name))
|
||||
subprocess.check_call(["ovs-vsctl", "--", "--if-exists", "del-br", name])
|
||||
|
||||
|
||||
def add_bridge_port(name, port):
|
||||
''' Add a port to the named openvswitch bridge '''
|
||||
log('Adding port {} to bridge {}'.format(port, name))
|
||||
subprocess.check_call(["ovs-vsctl", "--", "--may-exist", "add-port",
|
||||
name, port])
|
||||
subprocess.check_call(["ip", "link", "set", port, "up"])
|
||||
|
||||
|
||||
def del_bridge_port(name, port):
|
||||
''' Delete a port from the named openvswitch bridge '''
|
||||
log('Deleting port {} from bridge {}'.format(port, name))
|
||||
subprocess.check_call(["ovs-vsctl", "--", "--if-exists", "del-port",
|
||||
name, port])
|
||||
subprocess.check_call(["ip", "link", "set", port, "down"])
|
||||
|
||||
|
||||
def set_manager(manager):
|
||||
''' Set the manager for the local openvswitch '''
|
||||
log('Setting manager for local ovs to {}'.format(manager))
|
||||
subprocess.check_call(['ovs-vsctl', 'set-manager',
|
||||
'ssl:{}'.format(manager)])
|
||||
|
||||
|
||||
CERT_PATH = '/etc/openvswitch/ovsclient-cert.pem'
|
||||
|
||||
|
||||
def get_certificate():
|
||||
''' Read openvswitch certificate from disk '''
|
||||
if os.path.exists(CERT_PATH):
|
||||
log('Reading ovs certificate from {}'.format(CERT_PATH))
|
||||
with open(CERT_PATH, 'r') as cert:
|
||||
full_cert = cert.read()
|
||||
begin_marker = "-----BEGIN CERTIFICATE-----"
|
||||
end_marker = "-----END CERTIFICATE-----"
|
||||
begin_index = full_cert.find(begin_marker)
|
||||
end_index = full_cert.rfind(end_marker)
|
||||
if end_index == -1 or begin_index == -1:
|
||||
raise RuntimeError("Certificate does not contain valid begin"
|
||||
" and end markers.")
|
||||
full_cert = full_cert[begin_index:(end_index + len(end_marker))]
|
||||
return full_cert
|
||||
else:
|
||||
log('Certificate not found', level=WARNING)
|
||||
return None
|
||||
|
||||
|
||||
def full_restart():
|
||||
''' Full restart and reload of openvswitch '''
|
||||
service('force-reload-kmod', 'openvswitch-switch')
|
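A minimal sketch of how a charm might drive these OVS helpers; the bridge and interface names are illustrative:

# Hypothetical usage of the Open vSwitch helpers above.
from charmhelpers.contrib.network.ovs import add_bridge, add_bridge_port


def configure_external_bridge(ext_port=None):
    # Safe to re-run: both helpers call ovs-vsctl with --may-exist.
    add_bridge('br-ex')
    if ext_port:
        add_bridge_port('br-ex', ext_port)


configure_external_bridge(ext_port='eth1')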
0  hooks/charmhelpers/contrib/openstack/__init__.py  Normal file
522  hooks/charmhelpers/contrib/openstack/context.py  Normal file
@@ -0,0 +1,522 @@
|
||||
import json
|
||||
import os
|
||||
|
||||
from base64 import b64decode
|
||||
|
||||
from subprocess import (
|
||||
check_call
|
||||
)
|
||||
|
||||
|
||||
from charmhelpers.fetch import (
|
||||
apt_install,
|
||||
filter_installed_packages,
|
||||
)
|
||||
|
||||
from charmhelpers.core.hookenv import (
|
||||
config,
|
||||
local_unit,
|
||||
log,
|
||||
relation_get,
|
||||
relation_ids,
|
||||
related_units,
|
||||
unit_get,
|
||||
unit_private_ip,
|
||||
ERROR,
|
||||
WARNING,
|
||||
)
|
||||
|
||||
from charmhelpers.contrib.hahelpers.cluster import (
|
||||
determine_api_port,
|
||||
determine_haproxy_port,
|
||||
https,
|
||||
is_clustered,
|
||||
peer_units,
|
||||
)
|
||||
|
||||
from charmhelpers.contrib.hahelpers.apache import (
|
||||
get_cert,
|
||||
get_ca_cert,
|
||||
)
|
||||
|
||||
from charmhelpers.contrib.openstack.neutron import (
|
||||
neutron_plugin_attribute,
|
||||
)
|
||||
|
||||
CA_CERT_PATH = '/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt'
|
||||
|
||||
|
||||
class OSContextError(Exception):
|
||||
pass
|
||||
|
||||
|
||||
def ensure_packages(packages):
|
||||
'''Install but do not upgrade required plugin packages'''
|
||||
required = filter_installed_packages(packages)
|
||||
if required:
|
||||
apt_install(required, fatal=True)
|
||||
|
||||
|
||||
def context_complete(ctxt):
|
||||
_missing = []
|
||||
for k, v in ctxt.iteritems():
|
||||
if v is None or v == '':
|
||||
_missing.append(k)
|
||||
if _missing:
|
||||
log('Missing required data: %s' % ' '.join(_missing), level='INFO')
|
||||
return False
|
||||
return True
|
||||
|
||||
|
||||
class OSContextGenerator(object):
|
||||
interfaces = []
|
||||
|
||||
def __call__(self):
|
||||
raise NotImplementedError
|
||||
|
||||
|
||||
class SharedDBContext(OSContextGenerator):
|
||||
interfaces = ['shared-db']
|
||||
|
||||
def __init__(self, database=None, user=None, relation_prefix=None):
|
||||
'''
|
||||
Allows inspecting relation for settings prefixed with relation_prefix.
|
||||
This is useful for parsing access for multiple databases returned via
|
||||
the shared-db interface (eg, nova_password, quantum_password)
|
||||
'''
|
||||
self.relation_prefix = relation_prefix
|
||||
self.database = database
|
||||
self.user = user
|
||||
|
||||
def __call__(self):
|
||||
self.database = self.database or config('database')
|
||||
self.user = self.user or config('database-user')
|
||||
if None in [self.database, self.user]:
|
||||
log('Could not generate shared_db context. '
|
||||
'Missing required charm config options. '
|
||||
'(database name and user)')
|
||||
raise OSContextError
|
||||
ctxt = {}
|
||||
|
||||
password_setting = 'password'
|
||||
if self.relation_prefix:
|
||||
password_setting = self.relation_prefix + '_password'
|
||||
|
||||
for rid in relation_ids('shared-db'):
|
||||
for unit in related_units(rid):
|
||||
passwd = relation_get(password_setting, rid=rid, unit=unit)
|
||||
ctxt = {
|
||||
'database_host': relation_get('db_host', rid=rid,
|
||||
unit=unit),
|
||||
'database': self.database,
|
||||
'database_user': self.user,
|
||||
'database_password': passwd,
|
||||
}
|
||||
if context_complete(ctxt):
|
||||
return ctxt
|
||||
return {}
|
||||
|
||||
|
||||
class IdentityServiceContext(OSContextGenerator):
|
||||
interfaces = ['identity-service']
|
||||
|
||||
def __call__(self):
|
||||
log('Generating template context for identity-service')
|
||||
ctxt = {}
|
||||
|
||||
for rid in relation_ids('identity-service'):
|
||||
for unit in related_units(rid):
|
||||
ctxt = {
|
||||
'service_port': relation_get('service_port', rid=rid,
|
||||
unit=unit),
|
||||
'service_host': relation_get('service_host', rid=rid,
|
||||
unit=unit),
|
||||
'auth_host': relation_get('auth_host', rid=rid, unit=unit),
|
||||
'auth_port': relation_get('auth_port', rid=rid, unit=unit),
|
||||
'admin_tenant_name': relation_get('service_tenant',
|
||||
rid=rid, unit=unit),
|
||||
'admin_user': relation_get('service_username', rid=rid,
|
||||
unit=unit),
|
||||
'admin_password': relation_get('service_password', rid=rid,
|
||||
unit=unit),
|
||||
# XXX: Hard-coded http.
|
||||
'service_protocol': 'http',
|
||||
'auth_protocol': 'http',
|
||||
}
|
||||
if context_complete(ctxt):
|
||||
return ctxt
|
||||
return {}
|
||||
|
||||
|
||||
class AMQPContext(OSContextGenerator):
|
||||
interfaces = ['amqp']
|
||||
|
||||
def __call__(self):
|
||||
log('Generating template context for amqp')
|
||||
conf = config()
|
||||
try:
|
||||
username = conf['rabbit-user']
|
||||
vhost = conf['rabbit-vhost']
|
||||
except KeyError as e:
|
||||
log('Could not generate shared_db context. '
|
||||
'Missing required charm config options: %s.' % e)
|
||||
raise OSContextError
|
||||
|
||||
ctxt = {}
|
||||
for rid in relation_ids('amqp'):
|
||||
for unit in related_units(rid):
|
||||
if relation_get('clustered', rid=rid, unit=unit):
|
||||
ctxt['clustered'] = True
|
||||
ctxt['rabbitmq_host'] = relation_get('vip', rid=rid,
|
||||
unit=unit)
|
||||
else:
|
||||
ctxt['rabbitmq_host'] = relation_get('private-address',
|
||||
rid=rid, unit=unit)
|
||||
ctxt.update({
|
||||
'rabbitmq_user': username,
|
||||
'rabbitmq_password': relation_get('password', rid=rid,
|
||||
unit=unit),
|
||||
'rabbitmq_virtual_host': vhost,
|
||||
})
|
||||
if context_complete(ctxt):
|
||||
# Sufficient information found = break out!
|
||||
break
|
||||
# Used for active/active rabbitmq >= grizzly
|
||||
ctxt['rabbitmq_hosts'] = []
|
||||
for unit in related_units(rid):
|
||||
ctxt['rabbitmq_hosts'].append(relation_get('private-address',
|
||||
rid=rid, unit=unit))
|
||||
if not context_complete(ctxt):
|
||||
return {}
|
||||
else:
|
||||
return ctxt
|
||||
|
||||
|
||||
class CephContext(OSContextGenerator):
|
||||
interfaces = ['ceph']
|
||||
|
||||
def __call__(self):
|
||||
'''This generates context for /etc/ceph/ceph.conf templates'''
|
||||
if not relation_ids('ceph'):
|
||||
return {}
|
||||
log('Generating template context for ceph')
|
||||
mon_hosts = []
|
||||
auth = None
|
||||
key = None
|
||||
for rid in relation_ids('ceph'):
|
||||
for unit in related_units(rid):
|
||||
mon_hosts.append(relation_get('private-address', rid=rid,
|
||||
unit=unit))
|
||||
auth = relation_get('auth', rid=rid, unit=unit)
|
||||
key = relation_get('key', rid=rid, unit=unit)
|
||||
|
||||
ctxt = {
|
||||
'mon_hosts': ' '.join(mon_hosts),
|
||||
'auth': auth,
|
||||
'key': key,
|
||||
}
|
||||
|
||||
if not os.path.isdir('/etc/ceph'):
|
||||
os.mkdir('/etc/ceph')
|
||||
|
||||
if not context_complete(ctxt):
|
||||
return {}
|
||||
|
||||
ensure_packages(['ceph-common'])
|
||||
|
||||
return ctxt
|
||||
|
||||
|
||||
class HAProxyContext(OSContextGenerator):
|
||||
interfaces = ['cluster']
|
||||
|
||||
def __call__(self):
|
||||
'''
|
||||
Builds half a context for the haproxy template, which describes
|
||||
all peers to be included in the cluster. Each charm needs to include
|
||||
its own context generator that describes the port mapping.
|
||||
'''
|
||||
if not relation_ids('cluster'):
|
||||
return {}
|
||||
|
||||
cluster_hosts = {}
|
||||
l_unit = local_unit().replace('/', '-')
|
||||
cluster_hosts[l_unit] = unit_get('private-address')
|
||||
|
||||
for rid in relation_ids('cluster'):
|
||||
for unit in related_units(rid):
|
||||
_unit = unit.replace('/', '-')
|
||||
addr = relation_get('private-address', rid=rid, unit=unit)
|
||||
cluster_hosts[_unit] = addr
|
||||
|
||||
ctxt = {
|
||||
'units': cluster_hosts,
|
||||
}
|
||||
if len(cluster_hosts.keys()) > 1:
|
||||
# Enable haproxy when we have enough peers.
|
||||
log('Ensuring haproxy enabled in /etc/default/haproxy.')
|
||||
with open('/etc/default/haproxy', 'w') as out:
|
||||
out.write('ENABLED=1\n')
|
||||
return ctxt
|
||||
log('HAProxy context is incomplete, this unit has no peers.')
|
||||
return {}
|
||||
|
||||
|
||||
class ImageServiceContext(OSContextGenerator):
|
||||
interfaces = ['image-service']
|
||||
|
||||
def __call__(self):
|
||||
'''
|
||||
Obtains the glance API server from the image-service relation. Useful
|
||||
in nova and cinder (currently).
|
||||
'''
|
||||
log('Generating template context for image-service.')
|
||||
rids = relation_ids('image-service')
|
||||
if not rids:
|
||||
return {}
|
||||
for rid in rids:
|
||||
for unit in related_units(rid):
|
||||
api_server = relation_get('glance-api-server',
|
||||
rid=rid, unit=unit)
|
||||
if api_server:
|
||||
return {'glance_api_servers': api_server}
|
||||
log('ImageService context is incomplete. '
|
||||
'Missing required relation data.')
|
||||
return {}
|
||||
|
||||
|
||||
class ApacheSSLContext(OSContextGenerator):
|
||||
"""
|
||||
Generates a context for an apache vhost configuration that configures
|
||||
HTTPS reverse proxying for one or many endpoints. Generated context
|
||||
looks something like:
|
||||
{
|
||||
'namespace': 'cinder',
|
||||
'private_address': 'iscsi.mycinderhost.com',
|
||||
'endpoints': [(8776, 8766), (8777, 8767)]
|
||||
}
|
||||
|
||||
The endpoints list consists of tuples mapping external ports
|
||||
to internal ports.
|
||||
"""
|
||||
interfaces = ['https']
|
||||
|
||||
# charms should inherit this context and set external ports
|
||||
# and service namespace accordingly.
|
||||
external_ports = []
|
||||
service_namespace = None
|
||||
|
||||
def enable_modules(self):
|
||||
cmd = ['a2enmod', 'ssl', 'proxy', 'proxy_http']
|
||||
check_call(cmd)
|
||||
|
||||
def configure_cert(self):
|
||||
if not os.path.isdir('/etc/apache2/ssl'):
|
||||
os.mkdir('/etc/apache2/ssl')
|
||||
ssl_dir = os.path.join('/etc/apache2/ssl/', self.service_namespace)
|
||||
if not os.path.isdir(ssl_dir):
|
||||
os.mkdir(ssl_dir)
|
||||
cert, key = get_cert()
|
||||
with open(os.path.join(ssl_dir, 'cert'), 'w') as cert_out:
|
||||
cert_out.write(b64decode(cert))
|
||||
with open(os.path.join(ssl_dir, 'key'), 'w') as key_out:
|
||||
key_out.write(b64decode(key))
|
||||
ca_cert = get_ca_cert()
|
||||
if ca_cert:
|
||||
with open(CA_CERT_PATH, 'w') as ca_out:
|
||||
ca_out.write(b64decode(ca_cert))
|
||||
check_call(['update-ca-certificates'])
|
||||
|
||||
def __call__(self):
|
||||
if isinstance(self.external_ports, basestring):
|
||||
self.external_ports = [self.external_ports]
|
||||
if (not self.external_ports or not https()):
|
||||
return {}
|
||||
|
||||
self.configure_cert()
|
||||
self.enable_modules()
|
||||
|
||||
ctxt = {
|
||||
'namespace': self.service_namespace,
|
||||
'private_address': unit_get('private-address'),
|
||||
'endpoints': []
|
||||
}
|
||||
for ext_port in self.external_ports:
|
||||
if peer_units() or is_clustered():
|
||||
int_port = determine_haproxy_port(ext_port)
|
||||
else:
|
||||
int_port = determine_api_port(ext_port)
|
||||
portmap = (int(ext_port), int(int_port))
|
||||
ctxt['endpoints'].append(portmap)
|
||||
return ctxt
|
||||
|
||||
|
||||
class NeutronContext(object):
|
||||
interfaces = []
|
||||
|
||||
@property
|
||||
def plugin(self):
|
||||
return None
|
||||
|
||||
@property
|
||||
def network_manager(self):
|
||||
return None
|
||||
|
||||
@property
|
||||
def packages(self):
|
||||
return neutron_plugin_attribute(
|
||||
self.plugin, 'packages', self.network_manager)
|
||||
|
||||
@property
|
||||
def neutron_security_groups(self):
|
||||
return None
|
||||
|
||||
def _ensure_packages(self):
|
||||
[ensure_packages(pkgs) for pkgs in self.packages]
|
||||
|
||||
def _save_flag_file(self):
|
||||
if self.network_manager == 'quantum':
|
||||
_file = '/etc/nova/quantum_plugin.conf'
|
||||
else:
|
||||
_file = '/etc/nova/neutron_plugin.conf'
|
||||
with open(_file, 'wb') as out:
|
||||
out.write(self.plugin + '\n')
|
||||
|
||||
def ovs_ctxt(self):
|
||||
driver = neutron_plugin_attribute(self.plugin, 'driver',
|
||||
self.network_manager)
|
||||
|
||||
ovs_ctxt = {
|
||||
'core_plugin': driver,
|
||||
'neutron_plugin': 'ovs',
|
||||
'neutron_security_groups': self.neutron_security_groups,
|
||||
'local_ip': unit_private_ip(),
|
||||
}
|
||||
|
||||
return ovs_ctxt
|
||||
|
||||
def __call__(self):
|
||||
self._ensure_packages()
|
||||
|
||||
if self.network_manager not in ['quantum', 'neutron']:
|
||||
return {}
|
||||
|
||||
if not self.plugin:
|
||||
return {}
|
||||
|
||||
ctxt = {'network_manager': self.network_manager}
|
||||
|
||||
if self.plugin == 'ovs':
|
||||
ctxt.update(self.ovs_ctxt())
|
||||
|
||||
self._save_flag_file()
|
||||
return ctxt
|
||||
|
||||
|
||||
class OSConfigFlagContext(OSContextGenerator):
|
||||
'''
|
||||
Responsible for adding user-defined config-flags from charm config to a
template context.
|
||||
'''
|
||||
def __call__(self):
|
||||
config_flags = config('config-flags')
|
||||
if not config_flags or config_flags in ['None', '']:
|
||||
return {}
|
||||
config_flags = config_flags.split(',')
|
||||
flags = {}
|
||||
for flag in config_flags:
|
||||
if '=' not in flag:
|
||||
log('Improperly formatted config-flag, expected k=v '
|
||||
'got %s' % flag, level=WARNING)
|
||||
continue
|
||||
k, v = flag.split('=')
|
||||
flags[k.strip()] = v
|
||||
ctxt = {'user_config_flags': flags}
|
||||
return ctxt
|
||||
|
||||
|
||||
class SubordinateConfigContext(OSContextGenerator):
|
||||
"""
|
||||
Responsible for inspecting relations to subordinates that
|
||||
may be exporting required config via a json blob.
|
||||
|
||||
The subordinate interface allows subordinates to export their
|
||||
configuration requirements to the principal for multiple config
|
||||
files and multiple services. I.e., a subordinate that has interfaces
|
||||
to both glance and nova may export the following yaml blob as json:
|
||||
|
||||
glance:
|
||||
/etc/glance/glance-api.conf:
|
||||
sections:
|
||||
DEFAULT:
|
||||
- [key1, value1]
|
||||
/etc/glance/glance-registry.conf:
|
||||
MYSECTION:
|
||||
- [key2, value2]
|
||||
nova:
|
||||
/etc/nova/nova.conf:
|
||||
sections:
|
||||
DEFAULT:
|
||||
- [key3, value3]
|
||||
|
||||
|
||||
It is then up to the principal charms to subscribe this context to
|
||||
the service+config file it is interested in. Configuration data will
|
||||
be available in the template context, in glance's case, as:
|
||||
ctxt = {
|
||||
... other context ...
|
||||
'subordinate_config': {
|
||||
'DEFAULT': {
|
||||
'key1': 'value1',
|
||||
},
|
||||
'MYSECTION': {
|
||||
'key2': 'value2',
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
"""
|
||||
def __init__(self, service, config_file, interface):
|
||||
"""
|
||||
:param service : Service name key to query in any subordinate
|
||||
data found
|
||||
:param config_file : Service's config file to query sections
|
||||
:param interface : Subordinate interface to inspect
|
||||
"""
|
||||
self.service = service
|
||||
self.config_file = config_file
|
||||
self.interface = interface
|
||||
|
||||
def __call__(self):
|
||||
ctxt = {}
|
||||
for rid in relation_ids(self.interface):
|
||||
for unit in related_units(rid):
|
||||
sub_config = relation_get('subordinate_configuration',
|
||||
rid=rid, unit=unit)
|
||||
if sub_config and sub_config != '':
|
||||
try:
|
||||
sub_config = json.loads(sub_config)
|
||||
except:
|
||||
log('Could not parse JSON from subordinate_config '
|
||||
'setting from %s' % rid, level=ERROR)
|
||||
continue
|
||||
|
||||
if self.service not in sub_config:
|
||||
log('Found subordinate_config on %s but it contained'
|
||||
'nothing for %s service' % (rid, self.service))
|
||||
continue
|
||||
|
||||
sub_config = sub_config[self.service]
|
||||
if self.config_file not in sub_config:
|
||||
log('Found subordinate_config on %s but it contained'
|
||||
'nothing for %s' % (rid, self.config_file))
|
||||
continue
|
||||
|
||||
sub_config = sub_config[self.config_file]
|
||||
for k, v in sub_config.iteritems():
|
||||
ctxt[k] = v
|
||||
|
||||
if not ctxt:
|
||||
ctxt['sections'] = {}
|
||||
|
||||
return ctxt
|
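As a sketch of the pattern above (assumed, not shipped in this commit), a charm can subclass OSContextGenerator and mix its own generator with the stock ones; the class name is illustrative:

# Hypothetical charm-specific context generator.
from charmhelpers.core.hookenv import config
from charmhelpers.contrib.openstack.context import (
    OSContextGenerator,
    SharedDBContext,
    AMQPContext,
)


class ExternalPortContext(OSContextGenerator):
    interfaces = []

    def __call__(self):
        # Returning {} marks the context as incomplete.
        if not config('ext-port'):
            return {}
        return {'ext_port': config('ext-port')}


# Generators are usually grouped per config file for the templating layer.
CONTEXTS = [SharedDBContext(relation_prefix='quantum'),
            AMQPContext(),
            ExternalPortContext()]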
117  hooks/charmhelpers/contrib/openstack/neutron.py  Normal file
@@ -0,0 +1,117 @@
|
||||
# Various utilities for dealing with Neutron and the renaming from Quantum.
|
||||
|
||||
from subprocess import check_output
|
||||
|
||||
from charmhelpers.core.hookenv import (
|
||||
config,
|
||||
log,
|
||||
ERROR,
|
||||
)
|
||||
|
||||
from charmhelpers.contrib.openstack.utils import os_release
|
||||
|
||||
|
||||
def headers_package():
|
||||
"""Ensures correct linux-headers for running kernel are installed,
|
||||
for building DKMS package"""
|
||||
kver = check_output(['uname', '-r']).strip()
|
||||
return 'linux-headers-%s' % kver
|
||||
|
||||
|
||||
# legacy
|
||||
def quantum_plugins():
|
||||
from charmhelpers.contrib.openstack import context
|
||||
return {
|
||||
'ovs': {
|
||||
'config': '/etc/quantum/plugins/openvswitch/'
|
||||
'ovs_quantum_plugin.ini',
|
||||
'driver': 'quantum.plugins.openvswitch.ovs_quantum_plugin.'
|
||||
'OVSQuantumPluginV2',
|
||||
'contexts': [
|
||||
context.SharedDBContext(user=config('neutron-database-user'),
|
||||
database=config('neutron-database'),
|
||||
relation_prefix='neutron')],
|
||||
'services': ['quantum-plugin-openvswitch-agent'],
|
||||
'packages': [[headers_package(), 'openvswitch-datapath-dkms'],
|
||||
['quantum-plugin-openvswitch-agent']],
|
||||
},
|
||||
'nvp': {
|
||||
'config': '/etc/quantum/plugins/nicira/nvp.ini',
|
||||
'driver': 'quantum.plugins.nicira.nicira_nvp_plugin.'
|
||||
'QuantumPlugin.NvpPluginV2',
|
||||
'services': [],
|
||||
'packages': [],
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
def neutron_plugins():
|
||||
from charmhelpers.contrib.openstack import context
|
||||
return {
|
||||
'ovs': {
|
||||
'config': '/etc/neutron/plugins/openvswitch/'
|
||||
'ovs_neutron_plugin.ini',
|
||||
'driver': 'neutron.plugins.openvswitch.ovs_neutron_plugin.'
|
||||
'OVSNeutronPluginV2',
|
||||
'contexts': [
|
||||
context.SharedDBContext(user=config('neutron-database-user'),
|
||||
database=config('neutron-database'),
|
||||
relation_prefix='neutron')],
|
||||
'services': ['neutron-plugin-openvswitch-agent'],
|
||||
'packages': [[headers_package(), 'openvswitch-datapath-dkms'],
|
||||
['quantum-plugin-openvswitch-agent']],
|
||||
},
|
||||
'nvp': {
|
||||
'config': '/etc/neutron/plugins/nicira/nvp.ini',
|
||||
'driver': 'neutron.plugins.nicira.nicira_nvp_plugin.'
|
||||
'NeutronPlugin.NvpPluginV2',
|
||||
'services': [],
|
||||
'packages': [],
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
def neutron_plugin_attribute(plugin, attr, net_manager=None):
|
||||
manager = net_manager or network_manager()
|
||||
if manager == 'quantum':
|
||||
plugins = quantum_plugins()
|
||||
elif manager == 'neutron':
|
||||
plugins = neutron_plugins()
|
||||
else:
|
||||
log('Error: Network manager does not support plugins.')
|
||||
raise Exception
|
||||
|
||||
try:
|
||||
_plugin = plugins[plugin]
|
||||
except KeyError:
|
||||
log('Unrecognised plugin for %s: %s' % (manager, plugin), level=ERROR)
|
||||
raise Exception
|
||||
|
||||
try:
|
||||
return _plugin[attr]
|
||||
except KeyError:
|
||||
return None
|
||||
|
||||
|
||||
def network_manager():
|
||||
'''
|
||||
Deals with the renaming of Quantum to Neutron in H and any situations
|
||||
that require compatibility (eg, deploying H with network-manager=quantum,
|
||||
upgrading from G).
|
||||
'''
|
||||
release = os_release('nova-common')
|
||||
manager = config('network-manager').lower()
|
||||
|
||||
if manager not in ['quantum', 'neutron']:
|
||||
return manager
|
||||
|
||||
if release in ['essex']:
|
||||
# E does not support neutron
|
||||
log('Neutron networking not supported in Essex.', level=ERROR)
|
||||
raise Exception
|
||||
elif release in ['folsom', 'grizzly']:
|
||||
# neutron is named quantum in F and G
|
||||
return 'quantum'
|
||||
else:
|
||||
# ensure accurate naming for all releases post-H
|
||||
return 'neutron'
|
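A small assumed example of how the plugin maps above are typically consumed; the flattening helper is hypothetical:

# Hypothetical consumer of the plugin maps above.
from charmhelpers.contrib.openstack.neutron import neutron_plugin_attribute


def plugin_packages(plugin='ovs', manager='neutron'):
    # 'packages' is a list of lists (groups installed together); flatten it.
    pkgs = neutron_plugin_attribute(plugin, 'packages', manager) or []
    return [p for group in pkgs for p in group]


# e.g. the OVS plugin pulls in the DKMS datapath module plus the agent.
print(plugin_packages('ovs'))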
@ -0,0 +1,2 @@
|
||||
# dummy __init__.py to fool syncer into thinking this is a syncable python
|
||||
# module
|
280  hooks/charmhelpers/contrib/openstack/templating.py  Normal file
@@ -0,0 +1,280 @@
|
||||
import os
|
||||
|
||||
from charmhelpers.fetch import apt_install
|
||||
|
||||
from charmhelpers.core.hookenv import (
|
||||
log,
|
||||
ERROR,
|
||||
INFO
|
||||
)
|
||||
|
||||
from charmhelpers.contrib.openstack.utils import OPENSTACK_CODENAMES
|
||||
|
||||
try:
|
||||
from jinja2 import FileSystemLoader, ChoiceLoader, Environment, exceptions
|
||||
except ImportError:
|
||||
# python-jinja2 may not be installed yet, or we're running unittests.
|
||||
FileSystemLoader = ChoiceLoader = Environment = exceptions = None
|
||||
|
||||
|
||||
class OSConfigException(Exception):
|
||||
pass
|
||||
|
||||
|
||||
def get_loader(templates_dir, os_release):
|
||||
"""
|
||||
Create a jinja2.ChoiceLoader containing template dirs up to
|
||||
and including os_release. If a release-specific template directory
is missing at templates_dir, it will be omitted from the loader.
|
||||
templates_dir is added to the bottom of the search list as a base
|
||||
loading dir.
|
||||
|
||||
A charm may also ship a templates dir with this module
|
||||
and it will be appended to the bottom of the search list, eg:
|
||||
hooks/charmhelpers/contrib/openstack/templates.
|
||||
|
||||
:param templates_dir: str: Base template directory containing release
|
||||
sub-directories.
|
||||
:param os_release : str: OpenStack release codename to construct template
|
||||
loader.
|
||||
|
||||
:returns : jinja2.ChoiceLoader constructed with a list of
|
||||
jinja2.FilesystemLoaders, ordered in descending
|
||||
order by OpenStack release.
|
||||
"""
|
||||
tmpl_dirs = [(rel, os.path.join(templates_dir, rel))
|
||||
for rel in OPENSTACK_CODENAMES.itervalues()]
|
||||
|
||||
if not os.path.isdir(templates_dir):
|
||||
log('Templates directory not found @ %s.' % templates_dir,
|
||||
level=ERROR)
|
||||
raise OSConfigException
|
||||
|
||||
# the bottom contains templates_dir and possibly a common templates dir
|
||||
# shipped with the helper.
|
||||
loaders = [FileSystemLoader(templates_dir)]
|
||||
helper_templates = os.path.join(os.path.dirname(__file__), 'templates')
|
||||
if os.path.isdir(helper_templates):
|
||||
loaders.append(FileSystemLoader(helper_templates))
|
||||
|
||||
for rel, tmpl_dir in tmpl_dirs:
|
||||
if os.path.isdir(tmpl_dir):
|
||||
loaders.insert(0, FileSystemLoader(tmpl_dir))
|
||||
if rel == os_release:
|
||||
break
|
||||
log('Creating choice loader with dirs: %s' %
|
||||
[l.searchpath for l in loaders], level=INFO)
|
||||
return ChoiceLoader(loaders)
|
||||
|
||||
|
||||
class OSConfigTemplate(object):
|
||||
"""
|
||||
Associates a config file template with a list of context generators.
|
||||
Responsible for constructing a template context based on those generators.
|
||||
"""
|
||||
def __init__(self, config_file, contexts):
|
||||
self.config_file = config_file
|
||||
|
||||
if hasattr(contexts, '__call__'):
|
||||
self.contexts = [contexts]
|
||||
else:
|
||||
self.contexts = contexts
|
||||
|
||||
self._complete_contexts = []
|
||||
|
||||
def context(self):
|
||||
ctxt = {}
|
||||
for context in self.contexts:
|
||||
_ctxt = context()
|
||||
if _ctxt:
|
||||
ctxt.update(_ctxt)
|
||||
# track interfaces for every complete context.
|
||||
[self._complete_contexts.append(interface)
|
||||
for interface in context.interfaces
|
||||
if interface not in self._complete_contexts]
|
||||
return ctxt
|
||||
|
||||
def complete_contexts(self):
|
||||
'''
|
||||
Return a list of interfaces that have satisfied contexts.
|
||||
'''
|
||||
if self._complete_contexts:
|
||||
return self._complete_contexts
|
||||
self.context()
|
||||
return self._complete_contexts
|
||||
|
||||
|
||||
class OSConfigRenderer(object):
|
||||
"""
|
||||
This class provides a common templating system to be used by OpenStack
|
||||
charms. It is intended to help charms share common code and templates,
|
||||
and ease the burden of managing config templates across multiple OpenStack
|
||||
releases.
|
||||
|
||||
Basic usage:
|
||||
# import some common context generators from charmhelpers
|
||||
from charmhelpers.contrib.openstack import context
|
||||
|
||||
# Create a renderer object for a specific OS release.
|
||||
configs = OSConfigRenderer(templates_dir='/tmp/templates',
|
||||
openstack_release='folsom')
|
||||
# register some config files with context generators.
|
||||
configs.register(config_file='/etc/nova/nova.conf',
|
||||
contexts=[context.SharedDBContext(),
|
||||
context.AMQPContext()])
|
||||
configs.register(config_file='/etc/nova/api-paste.ini',
|
||||
contexts=[context.IdentityServiceContext()])
|
||||
configs.register(config_file='/etc/haproxy/haproxy.conf',
|
||||
contexts=[context.HAProxyContext()])
|
||||
# write out a single config
|
||||
configs.write('/etc/nova/nova.conf')
|
||||
# write out all registered configs
|
||||
configs.write_all()
|
||||
|
||||
Details:
|
||||
|
||||
OpenStack Releases and template loading
|
||||
---------------------------------------
|
||||
When the object is instantiated, it is associated with a specific OS
|
||||
release. This dictates how the template loader will be constructed.
|
||||
|
||||
The constructed loader attempts to load the template from several places
|
||||
in the following order:
|
||||
- from the most recent OS release-specific template dir (if one exists)
|
||||
- the base templates_dir
|
||||
- a template directory shipped in the charm with this helper file.
|
||||
|
||||
|
||||
For the example above, '/tmp/templates' contains the following structure:
|
||||
/tmp/templates/nova.conf
|
||||
/tmp/templates/api-paste.ini
|
||||
/tmp/templates/grizzly/api-paste.ini
|
||||
/tmp/templates/havana/api-paste.ini
|
||||
|
||||
Since it was registered with the grizzly release, it first searches
|
||||
the grizzly directory for nova.conf, then the templates dir.
|
||||
|
||||
When writing api-paste.ini, it will find the template in the grizzly
|
||||
directory.
|
||||
|
||||
If the object were created with folsom, it would fall back to the
|
||||
base templates dir for its api-paste.ini template.
|
||||
|
||||
This system should help manage changes in config files through
|
||||
openstack releases, allowing charms to fall back to the most recently
|
||||
updated config template for a given release
|
||||
|
||||
The haproxy.conf, since it is not shipped in the templates dir, will
|
||||
be loaded from the module directory's template directory, eg
|
||||
$CHARM/hooks/charmhelpers/contrib/openstack/templates. This allows
|
||||
us to ship common templates (haproxy, apache) with the helpers.
|
||||
|
||||
Context generators
|
||||
---------------------------------------
|
||||
Context generators are used to generate template contexts during hook
|
||||
execution. Doing so may require inspecting service relations, charm
|
||||
config, etc. When registered, a config file is associated with a list
|
||||
of generators. When a template is rendered and written, all context
|
||||
generators are called in a chain to generate the context dictionary
|
||||
passed to the jinja2 template. See context.py for more info.
|
||||
"""
|
||||
def __init__(self, templates_dir, openstack_release):
|
||||
if not os.path.isdir(templates_dir):
|
||||
log('Could not locate templates dir %s' % templates_dir,
|
||||
level=ERROR)
|
||||
raise OSConfigException
|
||||
|
||||
self.templates_dir = templates_dir
|
||||
self.openstack_release = openstack_release
|
||||
self.templates = {}
|
||||
self._tmpl_env = None
|
||||
|
||||
if None in [Environment, ChoiceLoader, FileSystemLoader]:
|
||||
# if this code is running, the object is created pre-install hook.
|
||||
# jinja2 shouldn't get touched until the module is reloaded on next
|
||||
# hook execution, with proper jinja2 bits successfully imported.
|
||||
apt_install('python-jinja2')
|
||||
|
||||
def register(self, config_file, contexts):
|
||||
"""
|
||||
Register a config file with a list of context generators to be called
|
||||
during rendering.
|
||||
"""
|
||||
self.templates[config_file] = OSConfigTemplate(config_file=config_file,
|
||||
contexts=contexts)
|
||||
log('Registered config file: %s' % config_file, level=INFO)
|
||||
|
||||
def _get_tmpl_env(self):
|
||||
if not self._tmpl_env:
|
||||
loader = get_loader(self.templates_dir, self.openstack_release)
|
||||
self._tmpl_env = Environment(loader=loader)
|
||||
|
||||
def _get_template(self, template):
|
||||
self._get_tmpl_env()
|
||||
template = self._tmpl_env.get_template(template)
|
||||
log('Loaded template from %s' % template.filename, level=INFO)
|
||||
return template
|
||||
|
||||
def render(self, config_file):
|
||||
if config_file not in self.templates:
|
||||
log('Config not registered: %s' % config_file, level=ERROR)
|
||||
raise OSConfigException
|
||||
ctxt = self.templates[config_file].context()
|
||||
|
||||
_tmpl = os.path.basename(config_file)
|
||||
try:
|
||||
template = self._get_template(_tmpl)
|
||||
except exceptions.TemplateNotFound:
|
||||
# if no template is found with basename, try looking for it
|
||||
# using a munged full path, eg:
|
||||
# /etc/apache2/apache2.conf -> etc_apache2_apache2.conf
|
||||
_tmpl = '_'.join(config_file.split('/')[1:])
|
||||
try:
|
||||
template = self._get_template(_tmpl)
|
||||
except exceptions.TemplateNotFound as e:
|
||||
log('Could not load template from %s by %s or %s.' %
|
||||
(self.templates_dir, os.path.basename(config_file), _tmpl),
|
||||
level=ERROR)
|
||||
raise e
|
||||
|
||||
log('Rendering from template: %s' % _tmpl, level=INFO)
|
||||
return template.render(ctxt)
|
||||
|
||||
def write(self, config_file):
|
||||
"""
|
||||
Write a single config file, raises if config file is not registered.
|
||||
"""
|
||||
if config_file not in self.templates:
|
||||
log('Config not registered: %s' % config_file, level=ERROR)
|
||||
raise OSConfigException
|
||||
|
||||
_out = self.render(config_file)
|
||||
|
||||
with open(config_file, 'wb') as out:
|
||||
out.write(_out)
|
||||
|
||||
log('Wrote template %s.' % config_file, level=INFO)
|
||||
|
||||
def write_all(self):
|
||||
"""
|
||||
Write out all registered config files.
|
||||
"""
|
||||
[self.write(k) for k in self.templates.iterkeys()]
|
||||
|
||||
def set_release(self, openstack_release):
|
||||
"""
|
||||
Resets the template environment and generates a new template loader
|
||||
based on the new openstack release.
|
||||
"""
|
||||
self._tmpl_env = None
|
||||
self.openstack_release = openstack_release
|
||||
self._get_tmpl_env()
|
||||
|
||||
def complete_contexts(self):
|
||||
'''
|
||||
Returns a list of context interfaces that yield a complete context.
|
||||
'''
|
||||
interfaces = []
|
||||
[interfaces.extend(i.complete_contexts())
|
||||
for i in self.templates.itervalues()]
|
||||
return interfaces
|
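For illustration, a renderer wired up the way this charm might use it; the template directory and config file path are assumptions, not part of this commit:

# Hypothetical renderer setup; paths are illustrative.
from charmhelpers.contrib.openstack.templating import OSConfigRenderer
from charmhelpers.contrib.openstack import context

configs = OSConfigRenderer(templates_dir='templates/',
                           openstack_release='havana')
configs.register('/etc/quantum/quantum.conf',
                 [context.SharedDBContext(relation_prefix='quantum'),
                  context.AMQPContext()])
# Looks for templates/havana/quantum.conf first, then templates/quantum.conf.
configs.write('/etc/quantum/quantum.conf')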
365  hooks/charmhelpers/contrib/openstack/utils.py  Normal file
@@ -0,0 +1,365 @@
|
||||
#!/usr/bin/python
|
||||
|
||||
# Common python helper functions used for OpenStack charms.
|
||||
from collections import OrderedDict
|
||||
|
||||
import apt_pkg as apt
|
||||
import subprocess
|
||||
import os
|
||||
import socket
|
||||
import sys
|
||||
|
||||
from charmhelpers.core.hookenv import (
|
||||
config,
|
||||
log as juju_log,
|
||||
charm_dir,
|
||||
)
|
||||
|
||||
from charmhelpers.core.host import (
|
||||
lsb_release,
|
||||
)
|
||||
|
||||
from charmhelpers.fetch import (
|
||||
apt_install,
|
||||
)
|
||||
|
||||
CLOUD_ARCHIVE_URL = "http://ubuntu-cloud.archive.canonical.com/ubuntu"
|
||||
CLOUD_ARCHIVE_KEY_ID = '5EDB1B62EC4926EA'
|
||||
|
||||
UBUNTU_OPENSTACK_RELEASE = OrderedDict([
|
||||
('oneiric', 'diablo'),
|
||||
('precise', 'essex'),
|
||||
('quantal', 'folsom'),
|
||||
('raring', 'grizzly'),
|
||||
('saucy', 'havana'),
|
||||
])
|
||||
|
||||
|
||||
OPENSTACK_CODENAMES = OrderedDict([
|
||||
('2011.2', 'diablo'),
|
||||
('2012.1', 'essex'),
|
||||
('2012.2', 'folsom'),
|
||||
('2013.1', 'grizzly'),
|
||||
('2013.2', 'havana'),
|
||||
('2014.1', 'icehouse'),
|
||||
])
|
||||
|
||||
# The ugly duckling
|
||||
SWIFT_CODENAMES = OrderedDict([
|
||||
('1.4.3', 'diablo'),
|
||||
('1.4.8', 'essex'),
|
||||
('1.7.4', 'folsom'),
|
||||
('1.8.0', 'grizzly'),
|
||||
('1.7.7', 'grizzly'),
|
||||
('1.7.6', 'grizzly'),
|
||||
('1.10.0', 'havana'),
|
||||
('1.9.1', 'havana'),
|
||||
('1.9.0', 'havana'),
|
||||
])
|
||||
|
||||
|
||||
def error_out(msg):
|
||||
juju_log("FATAL ERROR: %s" % msg, level='ERROR')
|
||||
sys.exit(1)
|
||||
|
||||
|
||||
def get_os_codename_install_source(src):
|
||||
'''Derive OpenStack release codename from a given installation source.'''
|
||||
ubuntu_rel = lsb_release()['DISTRIB_CODENAME']
|
||||
rel = ''
|
||||
if src == 'distro':
|
||||
try:
|
||||
rel = UBUNTU_OPENSTACK_RELEASE[ubuntu_rel]
|
||||
except KeyError:
|
||||
e = 'Could not derive openstack release for '\
|
||||
'this Ubuntu release: %s' % ubuntu_rel
|
||||
error_out(e)
|
||||
return rel
|
||||
|
||||
if src.startswith('cloud:'):
|
||||
ca_rel = src.split(':')[1]
|
||||
ca_rel = ca_rel.split('%s-' % ubuntu_rel)[1].split('/')[0]
|
||||
return ca_rel
|
||||
|
||||
# Best guess match based on deb string provided
|
||||
if src.startswith('deb') or src.startswith('ppa'):
|
||||
for k, v in OPENSTACK_CODENAMES.iteritems():
|
||||
if v in src:
|
||||
return v
|
||||
|
||||
|
||||
def get_os_version_install_source(src):
|
||||
codename = get_os_codename_install_source(src)
|
||||
return get_os_version_codename(codename)
|
||||
|
||||
|
||||
def get_os_codename_version(vers):
|
||||
'''Determine OpenStack codename from version number.'''
|
||||
try:
|
||||
return OPENSTACK_CODENAMES[vers]
|
||||
except KeyError:
|
||||
e = 'Could not determine OpenStack codename for version %s' % vers
|
||||
error_out(e)
|
||||
|
||||
|
||||
def get_os_version_codename(codename):
|
||||
'''Determine OpenStack version number from codename.'''
|
||||
for k, v in OPENSTACK_CODENAMES.iteritems():
|
||||
if v == codename:
|
||||
return k
|
||||
e = 'Could not derive OpenStack version for '\
|
||||
'codename: %s' % codename
|
||||
error_out(e)
|
||||
|
||||
|
||||
def get_os_codename_package(package, fatal=True):
|
||||
'''Derive OpenStack release codename from an installed package.'''
|
||||
apt.init()
|
||||
cache = apt.Cache()
|
||||
|
||||
try:
|
||||
pkg = cache[package]
|
||||
except:
|
||||
if not fatal:
|
||||
return None
|
||||
# the package is unknown to the current apt cache.
|
||||
e = 'Could not determine version of package with no installation '\
|
||||
'candidate: %s' % package
|
||||
error_out(e)
|
||||
|
||||
if not pkg.current_ver:
|
||||
if not fatal:
|
||||
return None
|
||||
# package is known, but no version is currently installed.
|
||||
e = 'Could not determine version of uninstalled package: %s' % package
|
||||
error_out(e)
|
||||
|
||||
vers = apt.upstream_version(pkg.current_ver.ver_str)
|
||||
|
||||
try:
|
||||
if 'swift' in pkg.name:
|
||||
swift_vers = vers[:5]
|
||||
if swift_vers not in SWIFT_CODENAMES:
|
||||
# Deal with 1.10.0 upward
|
||||
swift_vers = vers[:6]
|
||||
return SWIFT_CODENAMES[swift_vers]
|
||||
else:
|
||||
vers = vers[:6]
|
||||
return OPENSTACK_CODENAMES[vers]
|
||||
except KeyError:
|
||||
e = 'Could not determine OpenStack codename for version %s' % vers
|
||||
error_out(e)
|
||||
|
||||
|
||||
def get_os_version_package(pkg, fatal=True):
|
||||
'''Derive OpenStack version number from an installed package.'''
|
||||
codename = get_os_codename_package(pkg, fatal=fatal)
|
||||
|
||||
if not codename:
|
||||
return None
|
||||
|
||||
if 'swift' in pkg:
|
||||
vers_map = SWIFT_CODENAMES
|
||||
else:
|
||||
vers_map = OPENSTACK_CODENAMES
|
||||
|
||||
for version, cname in vers_map.iteritems():
|
||||
if cname == codename:
|
||||
return version
|
||||
#e = "Could not determine OpenStack version for package: %s" % pkg
|
||||
#error_out(e)
|
||||
|
||||
|
||||
os_rel = None
|
||||
|
||||
|
||||
def os_release(package, base='essex'):
|
||||
'''
|
||||
Returns OpenStack release codename from a cached global.
|
||||
If the codename can not be determined from either an installed package or
|
||||
the installation source, the earliest release supported by the charm should
|
||||
be returned.
|
||||
'''
|
||||
global os_rel
|
||||
if os_rel:
|
||||
return os_rel
|
||||
os_rel = (get_os_codename_package(package, fatal=False) or
|
||||
get_os_codename_install_source(config('openstack-origin')) or
|
||||
base)
|
||||
return os_rel
|
||||
|
||||
|
||||
def import_key(keyid):
|
||||
cmd = "apt-key adv --keyserver keyserver.ubuntu.com " \
|
||||
"--recv-keys %s" % keyid
|
||||
try:
|
||||
subprocess.check_call(cmd.split(' '))
|
||||
except subprocess.CalledProcessError:
|
||||
error_out("Error importing repo key %s" % keyid)
|
||||
|
||||
|
||||
def configure_installation_source(rel):
|
||||
'''Configure apt installation source.'''
|
||||
if rel == 'distro':
|
||||
return
|
||||
elif rel[:4] == "ppa:":
|
||||
src = rel
|
||||
subprocess.check_call(["add-apt-repository", "-y", src])
|
||||
elif rel[:3] == "deb":
|
||||
l = len(rel.split('|'))
|
||||
if l == 2:
|
||||
src, key = rel.split('|')
|
||||
juju_log("Importing PPA key from keyserver for %s" % src)
|
||||
import_key(key)
|
||||
elif l == 1:
|
||||
src = rel
|
||||
with open('/etc/apt/sources.list.d/juju_deb.list', 'w') as f:
|
||||
f.write(src)
|
||||
elif rel[:6] == 'cloud:':
|
||||
ubuntu_rel = lsb_release()['DISTRIB_CODENAME']
|
||||
rel = rel.split(':')[1]
|
||||
u_rel = rel.split('-')[0]
|
||||
ca_rel = rel.split('-')[1]
|
||||
|
||||
if u_rel != ubuntu_rel:
|
||||
e = 'Cannot install from Cloud Archive pocket %s on this Ubuntu '\
|
||||
'version (%s)' % (ca_rel, ubuntu_rel)
|
||||
error_out(e)
|
||||
|
||||
if 'staging' in ca_rel:
|
||||
# staging is just a regular PPA.
|
||||
os_rel = ca_rel.split('/')[0]
|
||||
ppa = 'ppa:ubuntu-cloud-archive/%s-staging' % os_rel
|
||||
cmd = 'add-apt-repository -y %s' % ppa
|
||||
subprocess.check_call(cmd.split(' '))
|
||||
return
|
||||
|
||||
# map charm config options to actual archive pockets.
|
||||
pockets = {
|
||||
'folsom': 'precise-updates/folsom',
|
||||
'folsom/updates': 'precise-updates/folsom',
|
||||
'folsom/proposed': 'precise-proposed/folsom',
|
||||
'grizzly': 'precise-updates/grizzly',
|
||||
'grizzly/updates': 'precise-updates/grizzly',
|
||||
'grizzly/proposed': 'precise-proposed/grizzly',
|
||||
'havana': 'precise-updates/havana',
|
||||
'havana/updates': 'precise-updates/havana',
|
||||
'havana/proposed': 'precise-proposed/havana',
|
||||
}
|
||||
|
||||
try:
|
||||
pocket = pockets[ca_rel]
|
||||
except KeyError:
|
||||
e = 'Invalid Cloud Archive release specified: %s' % rel
|
||||
error_out(e)
|
||||
|
||||
src = "deb %s %s main" % (CLOUD_ARCHIVE_URL, pocket)
|
||||
apt_install('ubuntu-cloud-keyring', fatal=True)
|
||||
|
||||
with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as f:
|
||||
f.write(src)
|
||||
else:
|
||||
error_out("Invalid openstack-release specified: %s" % rel)
|
||||
|
||||
|
||||
def save_script_rc(script_path="scripts/scriptrc", **env_vars):
|
||||
"""
|
||||
Write an rc file in the charm-delivered directory containing
|
||||
exported environment variables provided by env_vars. Any charm scripts run
|
||||
outside the juju hook environment can source this scriptrc to obtain
|
||||
updated config information necessary to perform health checks or
|
||||
service changes.
|
||||
"""
|
||||
juju_rc_path = "%s/%s" % (charm_dir(), script_path)
|
||||
if not os.path.exists(os.path.dirname(juju_rc_path)):
|
||||
os.mkdir(os.path.dirname(juju_rc_path))
|
||||
with open(juju_rc_path, 'wb') as rc_script:
|
||||
rc_script.write(
|
||||
"#!/bin/bash\n")
|
||||
[rc_script.write('export %s=%s\n' % (u, p))
|
||||
for u, p in env_vars.iteritems() if u != "script_path"]
|
||||
|
||||
|
||||
def openstack_upgrade_available(package):
|
||||
"""
|
||||
Determines if an OpenStack upgrade is available from installation
|
||||
source, based on version of installed package.
|
||||
|
||||
:param package: str: Name of installed package.
|
||||
|
||||
:returns: bool: : Returns True if configured installation source offers
|
||||
a newer version of package.
|
||||
|
||||
"""
|
||||
|
||||
src = config('openstack-origin')
|
||||
cur_vers = get_os_version_package(package)
|
||||
available_vers = get_os_version_install_source(src)
|
||||
apt.init()
|
||||
return apt.version_compare(available_vers, cur_vers) == 1
|
||||
|
||||
|
||||
def is_ip(address):
|
||||
"""
|
||||
Returns True if address is a valid IP address.
|
||||
"""
|
||||
try:
|
||||
# Test to see if already an IPv4 address
|
||||
socket.inet_aton(address)
|
||||
return True
|
||||
except socket.error:
|
||||
return False
|
||||
|
||||
|
||||
def ns_query(address):
|
||||
try:
|
||||
import dns.resolver
|
||||
except ImportError:
|
||||
apt_install('python-dnspython')
|
||||
import dns.resolver
|
||||
|
||||
if isinstance(address, dns.name.Name):
|
||||
rtype = 'PTR'
|
||||
elif isinstance(address, basestring):
|
||||
rtype = 'A'
|
||||
|
||||
answers = dns.resolver.query(address, rtype)
|
||||
if answers:
|
||||
return str(answers[0])
|
||||
return None
|
||||
|
||||
|
||||
def get_host_ip(hostname):
|
||||
"""
|
||||
Resolves the IP for a given hostname, or returns
|
||||
the input if it is already an IP.
|
||||
"""
|
||||
if is_ip(hostname):
|
||||
return hostname
|
||||
|
||||
return ns_query(hostname)
|
||||
|
||||
|
||||
def get_hostname(address):
|
||||
"""
|
||||
Resolves hostname for given IP, or returns the input
|
||||
if it is already a hostname.
|
||||
"""
|
||||
if not is_ip(address):
|
||||
return address
|
||||
|
||||
try:
|
||||
import dns.reversename
|
||||
except ImportError:
|
||||
apt_install('python-dnspython')
|
||||
import dns.reversename
|
||||
|
||||
rev = dns.reversename.from_address(address)
|
||||
result = ns_query(rev)
|
||||
if not result:
|
||||
return None
|
||||
|
||||
# strip trailing .
|
||||
if result.endswith('.'):
|
||||
return result[:-1]
|
||||
return result
|
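A minimal usage sketch of the two helpers above (openstack_upgrade_available and save_script_rc); the hook name, the 'quantum-common' package and the exported variable are illustrative, and the module path is assumed rather than shown in this hunk.

# Illustrative sketch only -- not part of this commit. Assumes the helpers
# above are importable from charmhelpers.contrib.openstack.utils and that the
# charm tracks the 'quantum-common' package, as the existing hooks do.
from charmhelpers.contrib.openstack.utils import (
    openstack_upgrade_available,
    save_script_rc,
)


def config_changed():
    # True when the configured openstack-origin offers a newer release of
    # the installed package than the one currently on disk.
    if openstack_upgrade_available('quantum-common'):
        pass  # a real charm would run its package upgrade routine here

    # Export settings so scripts run outside the hook environment (health
    # checks, cron jobs) can source $CHARM_DIR/scripts/scriptrc.
    save_script_rc(OPENSTACK_PORT_MGMT=9696)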
0
hooks/charmhelpers/core/__init__.py
Normal file
340
hooks/charmhelpers/core/hookenv.py
Normal file
@ -0,0 +1,340 @@
|
||||
"Interactions with the Juju environment"
|
||||
# Copyright 2013 Canonical Ltd.
|
||||
#
|
||||
# Authors:
|
||||
# Charm Helpers Developers <juju@lists.ubuntu.com>
|
||||
|
||||
import os
|
||||
import json
|
||||
import yaml
|
||||
import subprocess
|
||||
import UserDict
|
||||
|
||||
CRITICAL = "CRITICAL"
|
||||
ERROR = "ERROR"
|
||||
WARNING = "WARNING"
|
||||
INFO = "INFO"
|
||||
DEBUG = "DEBUG"
|
||||
MARKER = object()
|
||||
|
||||
cache = {}
|
||||
|
||||
|
||||
def cached(func):
|
||||
''' Cache return values for multiple executions of func + args
|
||||
|
||||
For example:
|
||||
|
||||
@cached
|
||||
def unit_get(attribute):
|
||||
pass
|
||||
|
||||
unit_get('test')
|
||||
|
||||
will cache the result of unit_get + 'test' for future calls.
|
||||
'''
|
||||
def wrapper(*args, **kwargs):
|
||||
global cache
|
||||
key = str((func, args, kwargs))
|
||||
try:
|
||||
return cache[key]
|
||||
except KeyError:
|
||||
res = func(*args, **kwargs)
|
||||
cache[key] = res
|
||||
return res
|
||||
return wrapper
|
||||
|
||||
|
||||
def flush(key):
|
||||
''' Flushes any entries from function cache where the
|
||||
key is found in the function+args '''
|
||||
flush_list = []
|
||||
for item in cache:
|
||||
if key in item:
|
||||
flush_list.append(item)
|
||||
for item in flush_list:
|
||||
del cache[item]
|
||||
|
||||
|
||||
def log(message, level=None):
|
||||
"Write a message to the juju log"
|
||||
command = ['juju-log']
|
||||
if level:
|
||||
command += ['-l', level]
|
||||
command += [message]
|
||||
subprocess.call(command)
|
||||
|
||||
|
||||
class Serializable(UserDict.IterableUserDict):
|
||||
"Wrapper, an object that can be serialized to yaml or json"
|
||||
|
||||
def __init__(self, obj):
|
||||
# wrap the object
|
||||
UserDict.IterableUserDict.__init__(self)
|
||||
self.data = obj
|
||||
|
||||
def __getattr__(self, attr):
|
||||
# See if this object has attribute.
|
||||
if attr in ("json", "yaml", "data"):
|
||||
return self.__dict__[attr]
|
||||
# Check for attribute in wrapped object.
|
||||
got = getattr(self.data, attr, MARKER)
|
||||
if got is not MARKER:
|
||||
return got
|
||||
# Proxy to the wrapped object via dict interface.
|
||||
try:
|
||||
return self.data[attr]
|
||||
except KeyError:
|
||||
raise AttributeError(attr)
|
||||
|
||||
def __getstate__(self):
|
||||
# Pickle as a standard dictionary.
|
||||
return self.data
|
||||
|
||||
def __setstate__(self, state):
|
||||
# Unpickle into our wrapper.
|
||||
self.data = state
|
||||
|
||||
def json(self):
|
||||
"Serialize the object to json"
|
||||
return json.dumps(self.data)
|
||||
|
||||
def yaml(self):
|
||||
"Serialize the object to yaml"
|
||||
return yaml.dump(self.data)
|
||||
|
||||
|
||||
def execution_environment():
|
||||
"""A convenient bundling of the current execution context"""
|
||||
context = {}
|
||||
context['conf'] = config()
|
||||
if relation_id():
|
||||
context['reltype'] = relation_type()
|
||||
context['relid'] = relation_id()
|
||||
context['rel'] = relation_get()
|
||||
context['unit'] = local_unit()
|
||||
context['rels'] = relations()
|
||||
context['env'] = os.environ
|
||||
return context
|
||||
|
||||
|
||||
def in_relation_hook():
|
||||
"Determine whether we're running in a relation hook"
|
||||
return 'JUJU_RELATION' in os.environ
|
||||
|
||||
|
||||
def relation_type():
|
||||
"The scope for the current relation hook"
|
||||
return os.environ.get('JUJU_RELATION', None)
|
||||
|
||||
|
||||
def relation_id():
|
||||
"The relation ID for the current relation hook"
|
||||
return os.environ.get('JUJU_RELATION_ID', None)
|
||||
|
||||
|
||||
def local_unit():
|
||||
"Local unit ID"
|
||||
return os.environ['JUJU_UNIT_NAME']
|
||||
|
||||
|
||||
def remote_unit():
|
||||
"The remote unit for the current relation hook"
|
||||
return os.environ['JUJU_REMOTE_UNIT']
|
||||
|
||||
|
||||
def service_name():
|
||||
"The name service group this unit belongs to"
|
||||
return local_unit().split('/')[0]
|
||||
|
||||
|
||||
@cached
|
||||
def config(scope=None):
|
||||
"Juju charm configuration"
|
||||
config_cmd_line = ['config-get']
|
||||
if scope is not None:
|
||||
config_cmd_line.append(scope)
|
||||
config_cmd_line.append('--format=json')
|
||||
try:
|
||||
return json.loads(subprocess.check_output(config_cmd_line))
|
||||
except ValueError:
|
||||
return None
|
||||
|
||||
|
||||
@cached
|
||||
def relation_get(attribute=None, unit=None, rid=None):
|
||||
_args = ['relation-get', '--format=json']
|
||||
if rid:
|
||||
_args.append('-r')
|
||||
_args.append(rid)
|
||||
_args.append(attribute or '-')
|
||||
if unit:
|
||||
_args.append(unit)
|
||||
try:
|
||||
return json.loads(subprocess.check_output(_args))
|
||||
except ValueError:
|
||||
return None
|
||||
|
||||
|
||||
def relation_set(relation_id=None, relation_settings={}, **kwargs):
|
||||
relation_cmd_line = ['relation-set']
|
||||
if relation_id is not None:
|
||||
relation_cmd_line.extend(('-r', relation_id))
|
||||
for k, v in (relation_settings.items() + kwargs.items()):
|
||||
if v is None:
|
||||
relation_cmd_line.append('{}='.format(k))
|
||||
else:
|
||||
relation_cmd_line.append('{}={}'.format(k, v))
|
||||
subprocess.check_call(relation_cmd_line)
|
||||
# Flush cache of any relation-gets for local unit
|
||||
flush(local_unit())
|
||||
|
||||
|
||||
@cached
|
||||
def relation_ids(reltype=None):
|
||||
"A list of relation_ids"
|
||||
reltype = reltype or relation_type()
|
||||
relid_cmd_line = ['relation-ids', '--format=json']
|
||||
if reltype is not None:
|
||||
relid_cmd_line.append(reltype)
|
||||
return json.loads(subprocess.check_output(relid_cmd_line)) or []
|
||||
|
||||
|
||||
|
||||
@cached
|
||||
def related_units(relid=None):
|
||||
"A list of related units"
|
||||
relid = relid or relation_id()
|
||||
units_cmd_line = ['relation-list', '--format=json']
|
||||
if relid is not None:
|
||||
units_cmd_line.extend(('-r', relid))
|
||||
return json.loads(subprocess.check_output(units_cmd_line)) or []
|
||||
|
||||
|
||||
@cached
|
||||
def relation_for_unit(unit=None, rid=None):
|
||||
"Get the json represenation of a unit's relation"
|
||||
unit = unit or remote_unit()
|
||||
relation = relation_get(unit=unit, rid=rid)
|
||||
for key in relation:
|
||||
if key.endswith('-list'):
|
||||
relation[key] = relation[key].split()
|
||||
relation['__unit__'] = unit
|
||||
return relation
|
||||
|
||||
|
||||
@cached
|
||||
def relations_for_id(relid=None):
|
||||
"Get relations of a specific relation ID"
|
||||
relation_data = []
|
||||
relid = relid or relation_ids()
|
||||
for unit in related_units(relid):
|
||||
unit_data = relation_for_unit(unit, relid)
|
||||
unit_data['__relid__'] = relid
|
||||
relation_data.append(unit_data)
|
||||
return relation_data
|
||||
|
||||
|
||||
@cached
|
||||
def relations_of_type(reltype=None):
|
||||
"Get relations of a specific type"
|
||||
relation_data = []
|
||||
reltype = reltype or relation_type()
|
||||
for relid in relation_ids(reltype):
|
||||
for relation in relations_for_id(relid):
|
||||
relation['__relid__'] = relid
|
||||
relation_data.append(relation)
|
||||
return relation_data
|
||||
|
||||
|
||||
@cached
|
||||
def relation_types():
|
||||
"Get a list of relation types supported by this charm"
|
||||
charmdir = os.environ.get('CHARM_DIR', '')
|
||||
mdf = open(os.path.join(charmdir, 'metadata.yaml'))
|
||||
md = yaml.safe_load(mdf)
|
||||
rel_types = []
|
||||
for key in ('provides', 'requires', 'peers'):
|
||||
section = md.get(key)
|
||||
if section:
|
||||
rel_types.extend(section.keys())
|
||||
mdf.close()
|
||||
return rel_types
|
||||
|
||||
|
||||
@cached
|
||||
def relations():
|
||||
rels = {}
|
||||
for reltype in relation_types():
|
||||
relids = {}
|
||||
for relid in relation_ids(reltype):
|
||||
units = {local_unit(): relation_get(unit=local_unit(), rid=relid)}
|
||||
for unit in related_units(relid):
|
||||
reldata = relation_get(unit=unit, rid=relid)
|
||||
units[unit] = reldata
|
||||
relids[relid] = units
|
||||
rels[reltype] = relids
|
||||
return rels
|
||||
|
||||
|
||||
def open_port(port, protocol="TCP"):
|
||||
"Open a service network port"
|
||||
_args = ['open-port']
|
||||
_args.append('{}/{}'.format(port, protocol))
|
||||
subprocess.check_call(_args)
|
||||
|
||||
|
||||
def close_port(port, protocol="TCP"):
|
||||
"Close a service network port"
|
||||
_args = ['close-port']
|
||||
_args.append('{}/{}'.format(port, protocol))
|
||||
subprocess.check_call(_args)
|
||||
|
||||
|
||||
@cached
|
||||
def unit_get(attribute):
|
||||
_args = ['unit-get', '--format=json', attribute]
|
||||
try:
|
||||
return json.loads(subprocess.check_output(_args))
|
||||
except ValueError:
|
||||
return None
|
||||
|
||||
|
||||
def unit_private_ip():
|
||||
return unit_get('private-address')
|
||||
|
||||
|
||||
class UnregisteredHookError(Exception):
|
||||
pass
|
||||
|
||||
|
||||
class Hooks(object):
|
||||
def __init__(self):
|
||||
super(Hooks, self).__init__()
|
||||
self._hooks = {}
|
||||
|
||||
def register(self, name, function):
|
||||
self._hooks[name] = function
|
||||
|
||||
def execute(self, args):
|
||||
hook_name = os.path.basename(args[0])
|
||||
if hook_name in self._hooks:
|
||||
self._hooks[hook_name]()
|
||||
else:
|
||||
raise UnregisteredHookError(hook_name)
|
||||
|
||||
def hook(self, *hook_names):
|
||||
def wrapper(decorated):
|
||||
for hook_name in hook_names:
|
||||
self.register(hook_name, decorated)
|
||||
else:
|
||||
self.register(decorated.__name__, decorated)
|
||||
if '_' in decorated.__name__:
|
||||
self.register(
|
||||
decorated.__name__.replace('_', '-'), decorated)
|
||||
return decorated
|
||||
return wrapper
|
||||
|
||||
|
||||
def charm_dir():
|
||||
return os.environ.get('CHARM_DIR')
|
241
hooks/charmhelpers/core/host.py
Normal file
@ -0,0 +1,241 @@
|
||||
"""Tools for working with the host system"""
|
||||
# Copyright 2012 Canonical Ltd.
|
||||
#
|
||||
# Authors:
|
||||
# Nick Moffitt <nick.moffitt@canonical.com>
|
||||
# Matthew Wedgwood <matthew.wedgwood@canonical.com>
|
||||
|
||||
import os
|
||||
import pwd
|
||||
import grp
|
||||
import random
|
||||
import string
|
||||
import subprocess
|
||||
import hashlib
|
||||
|
||||
from collections import OrderedDict
|
||||
|
||||
from hookenv import log
|
||||
|
||||
|
||||
def service_start(service_name):
|
||||
return service('start', service_name)
|
||||
|
||||
|
||||
def service_stop(service_name):
|
||||
return service('stop', service_name)
|
||||
|
||||
|
||||
def service_restart(service_name):
|
||||
return service('restart', service_name)
|
||||
|
||||
|
||||
def service_reload(service_name, restart_on_failure=False):
|
||||
service_result = service('reload', service_name)
|
||||
if not service_result and restart_on_failure:
|
||||
service_result = service('restart', service_name)
|
||||
return service_result
|
||||
|
||||
|
||||
def service(action, service_name):
|
||||
cmd = ['service', service_name, action]
|
||||
return subprocess.call(cmd) == 0
|
||||
|
||||
|
||||
def service_running(service):
|
||||
try:
|
||||
output = subprocess.check_output(['service', service, 'status'])
|
||||
except subprocess.CalledProcessError:
|
||||
return False
|
||||
else:
|
||||
if ("start/running" in output or "is running" in output):
|
||||
return True
|
||||
else:
|
||||
return False
|
||||
|
||||
|
||||
def adduser(username, password=None, shell='/bin/bash', system_user=False):
|
||||
"""Add a user"""
|
||||
try:
|
||||
user_info = pwd.getpwnam(username)
|
||||
log('user {0} already exists!'.format(username))
|
||||
except KeyError:
|
||||
log('creating user {0}'.format(username))
|
||||
cmd = ['useradd']
|
||||
if system_user or password is None:
|
||||
cmd.append('--system')
|
||||
else:
|
||||
cmd.extend([
|
||||
'--create-home',
|
||||
'--shell', shell,
|
||||
'--password', password,
|
||||
])
|
||||
cmd.append(username)
|
||||
subprocess.check_call(cmd)
|
||||
user_info = pwd.getpwnam(username)
|
||||
return user_info
|
||||
|
||||
|
||||
def add_user_to_group(username, group):
|
||||
"""Add a user to a group"""
|
||||
cmd = [
|
||||
'gpasswd', '-a',
|
||||
username,
|
||||
group
|
||||
]
|
||||
log("Adding user {} to group {}".format(username, group))
|
||||
subprocess.check_call(cmd)
|
||||
|
||||
|
||||
def rsync(from_path, to_path, flags='-r', options=None):
|
||||
"""Replicate the contents of a path"""
|
||||
options = options or ['--delete', '--executability']
|
||||
cmd = ['/usr/bin/rsync', flags]
|
||||
cmd.extend(options)
|
||||
cmd.append(from_path)
|
||||
cmd.append(to_path)
|
||||
log(" ".join(cmd))
|
||||
return subprocess.check_output(cmd).strip()
|
||||
|
||||
|
||||
def symlink(source, destination):
|
||||
"""Create a symbolic link"""
|
||||
log("Symlinking {} as {}".format(source, destination))
|
||||
cmd = [
|
||||
'ln',
|
||||
'-sf',
|
||||
source,
|
||||
destination,
|
||||
]
|
||||
subprocess.check_call(cmd)
|
||||
|
||||
|
||||
def mkdir(path, owner='root', group='root', perms=0555, force=False):
|
||||
"""Create a directory"""
|
||||
log("Making dir {} {}:{} {:o}".format(path, owner, group,
|
||||
perms))
|
||||
uid = pwd.getpwnam(owner).pw_uid
|
||||
gid = grp.getgrnam(group).gr_gid
|
||||
realpath = os.path.abspath(path)
|
||||
if os.path.exists(realpath):
|
||||
if force and not os.path.isdir(realpath):
|
||||
log("Removing non-directory file {} prior to mkdir()".format(path))
|
||||
os.unlink(realpath)
|
||||
else:
|
||||
os.makedirs(realpath, perms)
|
||||
os.chown(realpath, uid, gid)
|
||||
|
||||
|
||||
def write_file(path, content, owner='root', group='root', perms=0444):
|
||||
"""Create or overwrite a file with the contents of a string"""
|
||||
log("Writing file {} {}:{} {:o}".format(path, owner, group, perms))
|
||||
uid = pwd.getpwnam(owner).pw_uid
|
||||
gid = grp.getgrnam(group).gr_gid
|
||||
with open(path, 'w') as target:
|
||||
os.fchown(target.fileno(), uid, gid)
|
||||
os.fchmod(target.fileno(), perms)
|
||||
target.write(content)
|
||||
|
||||
|
||||
def mount(device, mountpoint, options=None, persist=False):
|
||||
'''Mount a filesystem'''
|
||||
cmd_args = ['mount']
|
||||
if options is not None:
|
||||
cmd_args.extend(['-o', options])
|
||||
cmd_args.extend([device, mountpoint])
|
||||
try:
|
||||
subprocess.check_output(cmd_args)
|
||||
except subprocess.CalledProcessError, e:
|
||||
log('Error mounting {} at {}\n{}'.format(device, mountpoint, e.output))
|
||||
return False
|
||||
if persist:
|
||||
# TODO: update fstab
|
||||
pass
|
||||
return True
|
||||
|
||||
|
||||
def umount(mountpoint, persist=False):
|
||||
'''Unmount a filesystem'''
|
||||
cmd_args = ['umount', mountpoint]
|
||||
try:
|
||||
subprocess.check_output(cmd_args)
|
||||
except subprocess.CalledProcessError, e:
|
||||
log('Error unmounting {}\n{}'.format(mountpoint, e.output))
|
||||
return False
|
||||
if persist:
|
||||
# TODO: update fstab
|
||||
pass
|
||||
return True
|
||||
|
||||
|
||||
def mounts():
|
||||
'''List of all mounted volumes as [[mountpoint,device],[...]]'''
|
||||
with open('/proc/mounts') as f:
|
||||
# [['/mount/point','/dev/path'],[...]]
|
||||
system_mounts = [m[1::-1] for m in [l.strip().split()
|
||||
for l in f.readlines()]]
|
||||
return system_mounts
|
||||
|
||||
|
||||
def file_hash(path):
|
||||
''' Generate a md5 hash of the contents of 'path' or None if not found '''
|
||||
if os.path.exists(path):
|
||||
h = hashlib.md5()
|
||||
with open(path, 'r') as source:
|
||||
h.update(source.read()) # IGNORE:E1101 - it does have update
|
||||
return h.hexdigest()
|
||||
else:
|
||||
return None
|
||||
|
||||
|
||||
def restart_on_change(restart_map):
|
||||
''' Restart services based on configuration files changing
|
||||
|
||||
This function is used a decorator, for example
|
||||
|
||||
@restart_on_change({
|
||||
'/etc/ceph/ceph.conf': [ 'cinder-api', 'cinder-volume' ]
|
||||
})
|
||||
def ceph_client_changed():
|
||||
...
|
||||
|
||||
In this example, the cinder-api and cinder-volume services
|
||||
would be restarted if /etc/ceph/ceph.conf is changed by the
|
||||
ceph_client_changed function.
|
||||
'''
|
||||
def wrap(f):
|
||||
def wrapped_f(*args):
|
||||
checksums = {}
|
||||
for path in restart_map:
|
||||
checksums[path] = file_hash(path)
|
||||
f(*args)
|
||||
restarts = []
|
||||
for path in restart_map:
|
||||
if checksums[path] != file_hash(path):
|
||||
restarts += restart_map[path]
|
||||
for service_name in list(OrderedDict.fromkeys(restarts)):
|
||||
service('restart', service_name)
|
||||
return wrapped_f
|
||||
return wrap
|
||||
|
||||
|
||||
def lsb_release():
|
||||
'''Return /etc/lsb-release in a dict'''
|
||||
d = {}
|
||||
with open('/etc/lsb-release', 'r') as lsb:
|
||||
for l in lsb:
|
||||
k, v = l.split('=')
|
||||
d[k.strip()] = v.strip()
|
||||
return d
|
||||
|
||||
|
||||
def pwgen(length=None):
|
||||
'''Generate a random password.'''
|
||||
if length is None:
|
||||
length = random.choice(range(35, 45))
|
||||
alphanumeric_chars = [
|
||||
l for l in (string.letters + string.digits)
|
||||
if l not in 'l0QD1vAEIOUaeiou']
|
||||
random_chars = [
|
||||
random.choice(alphanumeric_chars) for _ in range(length)]
|
||||
return(''.join(random_chars))
|
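A brief sketch of the restart_on_change decorator from host.py above in use; the config path and service names are placeholders.

# Illustrative sketch only -- not part of this commit. The decorator takes a
# checksum of each listed file before and after the wrapped call and restarts
# the mapped services only if the content actually changed.
from charmhelpers.core.host import restart_on_change, write_file


@restart_on_change({
    '/etc/quantum/quantum.conf': ['quantum-dhcp-agent', 'quantum-l3-agent'],
})
def amqp_changed():
    # Re-render configuration; any change here triggers the restarts above.
    write_file('/etc/quantum/quantum.conf', '# rendered configuration\n')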
209
hooks/charmhelpers/fetch/__init__.py
Normal file
@ -0,0 +1,209 @@
|
||||
import importlib
|
||||
from yaml import safe_load
|
||||
from charmhelpers.core.host import (
|
||||
lsb_release
|
||||
)
|
||||
from urlparse import (
|
||||
urlparse,
|
||||
urlunparse,
|
||||
)
|
||||
import subprocess
|
||||
from charmhelpers.core.hookenv import (
|
||||
config,
|
||||
log,
|
||||
)
|
||||
import apt_pkg
|
||||
|
||||
CLOUD_ARCHIVE = """# Ubuntu Cloud Archive
|
||||
deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main
|
||||
"""
|
||||
PROPOSED_POCKET = """# Proposed
|
||||
deb http://archive.ubuntu.com/ubuntu {}-proposed main universe multiverse restricted
|
||||
"""
|
||||
|
||||
|
||||
def filter_installed_packages(packages):
|
||||
"""Returns a list of packages that require installation"""
|
||||
apt_pkg.init()
|
||||
cache = apt_pkg.Cache()
|
||||
_pkgs = []
|
||||
for package in packages:
|
||||
try:
|
||||
p = cache[package]
|
||||
p.current_ver or _pkgs.append(package)
|
||||
except KeyError:
|
||||
log('Package {} has no installation candidate.'.format(package),
|
||||
level='WARNING')
|
||||
_pkgs.append(package)
|
||||
return _pkgs
|
||||
|
||||
|
||||
def apt_install(packages, options=None, fatal=False):
|
||||
"""Install one or more packages"""
|
||||
options = options or []
|
||||
cmd = ['apt-get', '-y']
|
||||
cmd.extend(options)
|
||||
cmd.append('install')
|
||||
if isinstance(packages, basestring):
|
||||
cmd.append(packages)
|
||||
else:
|
||||
cmd.extend(packages)
|
||||
log("Installing {} with options: {}".format(packages,
|
||||
options))
|
||||
if fatal:
|
||||
subprocess.check_call(cmd)
|
||||
else:
|
||||
subprocess.call(cmd)
|
||||
|
||||
|
||||
def apt_update(fatal=False):
|
||||
"""Update local apt cache"""
|
||||
cmd = ['apt-get', 'update']
|
||||
if fatal:
|
||||
subprocess.check_call(cmd)
|
||||
else:
|
||||
subprocess.call(cmd)
|
||||
|
||||
|
||||
def apt_purge(packages, fatal=False):
|
||||
"""Purge one or more packages"""
|
||||
cmd = ['apt-get', '-y', 'purge']
|
||||
if isinstance(packages, basestring):
|
||||
cmd.append(packages)
|
||||
else:
|
||||
cmd.extend(packages)
|
||||
log("Purging {}".format(packages))
|
||||
if fatal:
|
||||
subprocess.check_call(cmd)
|
||||
else:
|
||||
subprocess.call(cmd)
|
||||
|
||||
|
||||
def add_source(source, key=None):
|
||||
if ((source.startswith('ppa:') or
|
||||
source.startswith('http:'))):
|
||||
subprocess.check_call(['add-apt-repository', '--yes', source])
|
||||
elif source.startswith('cloud:'):
|
||||
apt_install(filter_installed_packages(['ubuntu-cloud-keyring']),
|
||||
fatal=True)
|
||||
pocket = source.split(':')[-1]
|
||||
with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as apt:
|
||||
apt.write(CLOUD_ARCHIVE.format(pocket))
|
||||
elif source == 'proposed':
|
||||
release = lsb_release()['DISTRIB_CODENAME']
|
||||
with open('/etc/apt/sources.list.d/proposed.list', 'w') as apt:
|
||||
apt.write(PROPOSED_POCKET.format(release))
|
||||
if key:
|
||||
subprocess.check_call(['apt-key', 'import', key])
|
||||
|
||||
|
||||
class SourceConfigError(Exception):
|
||||
pass
|
||||
|
||||
|
||||
def configure_sources(update=False,
|
||||
sources_var='install_sources',
|
||||
keys_var='install_keys'):
|
||||
"""
|
||||
Configure multiple sources from charm configuration
|
||||
|
||||
Example config:
|
||||
install_sources:
|
||||
- "ppa:foo"
|
||||
- "http://example.com/repo precise main"
|
||||
install_keys:
|
||||
- null
|
||||
- "a1b2c3d4"
|
||||
|
||||
Note that 'null' (a.k.a. None) should not be quoted.
|
||||
"""
|
||||
sources = safe_load(config(sources_var))
|
||||
keys = safe_load(config(keys_var))
|
||||
if isinstance(sources, basestring) and isinstance(keys, basestring):
|
||||
add_source(sources, keys)
|
||||
else:
|
||||
if not len(sources) == len(keys):
|
||||
msg = 'Install sources and keys lists are different lengths'
|
||||
raise SourceConfigError(msg)
|
||||
for src_num in range(len(sources)):
|
||||
add_source(sources[src_num], keys[src_num])
|
||||
if update:
|
||||
apt_update(fatal=True)
|
||||
|
||||
# The order of this list is very important. Handlers should be listed from
|
||||
# least- to most-specific URL matching.
|
||||
FETCH_HANDLERS = (
|
||||
'charmhelpers.fetch.archiveurl.ArchiveUrlFetchHandler',
|
||||
'charmhelpers.fetch.bzrurl.BzrUrlFetchHandler',
|
||||
)
|
||||
|
||||
|
||||
class UnhandledSource(Exception):
|
||||
pass
|
||||
|
||||
|
||||
def install_remote(source):
|
||||
"""
|
||||
Install a file tree from a remote source
|
||||
|
||||
The specified source should be a url of the form:
|
||||
scheme://[host]/path[#[option=value][&...]]
|
||||
|
||||
Schemes supported are based on this module's submodules
|
||||
Options supported are submodule-specific"""
|
||||
# We ONLY check for True here because can_handle may return a string
|
||||
# explaining why it can't handle a given source.
|
||||
handlers = [h for h in plugins() if h.can_handle(source) is True]
|
||||
installed_to = None
|
||||
for handler in handlers:
|
||||
try:
|
||||
installed_to = handler.install(source)
|
||||
except UnhandledSource:
|
||||
pass
|
||||
if not installed_to:
|
||||
raise UnhandledSource("No handler found for source {}".format(source))
|
||||
return installed_to
|
||||
|
||||
|
||||
def install_from_config(config_var_name):
|
||||
charm_config = config()
|
||||
source = charm_config[config_var_name]
|
||||
return install_remote(source)
|
||||
|
||||
|
||||
class BaseFetchHandler(object):
|
||||
"""Base class for FetchHandler implementations in fetch plugins"""
|
||||
def can_handle(self, source):
|
||||
"""Returns True if the source can be handled. Otherwise returns
|
||||
a string explaining why it cannot"""
|
||||
return "Wrong source type"
|
||||
|
||||
def install(self, source):
|
||||
"""Try to download and unpack the source. Return the path to the
|
||||
unpacked files or raise UnhandledSource."""
|
||||
raise UnhandledSource("Wrong source type {}".format(source))
|
||||
|
||||
def parse_url(self, url):
|
||||
return urlparse(url)
|
||||
|
||||
def base_url(self, url):
|
||||
"""Return url without querystring or fragment"""
|
||||
parts = list(self.parse_url(url))
|
||||
parts[4:] = ['' for i in parts[4:]]
|
||||
return urlunparse(parts)
|
||||
|
||||
|
||||
def plugins(fetch_handlers=None):
|
||||
if not fetch_handlers:
|
||||
fetch_handlers = FETCH_HANDLERS
|
||||
plugin_list = []
|
||||
for handler_name in fetch_handlers:
|
||||
package, classname = handler_name.rsplit('.', 1)
|
||||
try:
|
||||
handler_class = getattr(importlib.import_module(package), classname)
|
||||
plugin_list.append(handler_class())
|
||||
except (ImportError, AttributeError):
|
||||
# Skip missing plugins so that they can be omitted from
|
||||
# installation if desired
|
||||
log("FetchHandler {} not found, skipping plugin".format(handler_name))
|
||||
return plugin_list
|
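A typical install-time flow through the fetch helpers above; the pocket matches the havana entries elsewhere in this commit, and the package names are placeholders.

# Illustrative sketch only -- not part of this commit.
from charmhelpers.fetch import (
    add_source,
    apt_update,
    apt_install,
    filter_installed_packages,
)

# Enable the Ubuntu Cloud Archive havana pocket, then refresh the apt cache.
add_source('cloud:precise-updates/havana')
apt_update(fatal=True)

# Install only those packages that are not already present on the unit.
apt_install(filter_installed_packages(['quantum-dhcp-agent',
                                        'quantum-l3-agent']),
            fatal=True)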
48
hooks/charmhelpers/fetch/archiveurl.py
Normal file
@ -0,0 +1,48 @@
|
||||
import os
|
||||
import urllib2
|
||||
from charmhelpers.fetch import (
|
||||
BaseFetchHandler,
|
||||
UnhandledSource
|
||||
)
|
||||
from charmhelpers.payload.archive import (
|
||||
get_archive_handler,
|
||||
extract,
|
||||
)
|
||||
from charmhelpers.core.host import mkdir
|
||||
|
||||
|
||||
class ArchiveUrlFetchHandler(BaseFetchHandler):
|
||||
"""Handler for archives via generic URLs"""
|
||||
def can_handle(self, source):
|
||||
url_parts = self.parse_url(source)
|
||||
if url_parts.scheme not in ('http', 'https', 'ftp', 'file'):
|
||||
return "Wrong source type"
|
||||
if get_archive_handler(self.base_url(source)):
|
||||
return True
|
||||
return False
|
||||
|
||||
def download(self, source, dest):
|
||||
# propagate all exceptions
|
||||
# URLError, OSError, etc
|
||||
response = urllib2.urlopen(source)
|
||||
try:
|
||||
with open(dest, 'w') as dest_file:
|
||||
dest_file.write(response.read())
|
||||
except Exception as e:
|
||||
if os.path.isfile(dest):
|
||||
os.unlink(dest)
|
||||
raise e
|
||||
|
||||
def install(self, source):
|
||||
url_parts = self.parse_url(source)
|
||||
dest_dir = os.path.join(os.environ.get('CHARM_DIR'), 'fetched')
|
||||
if not os.path.exists(dest_dir):
|
||||
mkdir(dest_dir, perms=0755)
|
||||
dld_file = os.path.join(dest_dir, os.path.basename(url_parts.path))
|
||||
try:
|
||||
self.download(source, dld_file)
|
||||
except urllib2.URLError as e:
|
||||
raise UnhandledSource(e.reason)
|
||||
except OSError as e:
|
||||
raise UnhandledSource(e.strerror)
|
||||
return extract(dld_file)
|
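A small sketch of fetching a payload through the handler machinery above; the URL is a placeholder.

# Illustrative sketch only -- not part of this commit. install_remote()
# (defined in charmhelpers/fetch/__init__.py above) tries each handler in
# FETCH_HANDLERS; ArchiveUrlFetchHandler accepts plain archive URLs.
from charmhelpers.fetch import install_remote

# Downloads the archive to $CHARM_DIR/fetched and returns the extracted path.
payload_dir = install_remote('http://example.com/payload.tar.gz')
print(payload_dir)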
49
hooks/charmhelpers/fetch/bzrurl.py
Normal file
@ -0,0 +1,49 @@
|
||||
import os
|
||||
from charmhelpers.fetch import (
|
||||
BaseFetchHandler,
|
||||
UnhandledSource
|
||||
)
|
||||
from charmhelpers.core.host import mkdir
|
||||
|
||||
try:
|
||||
from bzrlib.branch import Branch
|
||||
except ImportError:
|
||||
from charmhelpers.fetch import apt_install
|
||||
apt_install("python-bzrlib")
|
||||
from bzrlib.branch import Branch
|
||||
|
||||
class BzrUrlFetchHandler(BaseFetchHandler):
|
||||
"""Handler for bazaar branches via generic and lp URLs"""
|
||||
def can_handle(self, source):
|
||||
url_parts = self.parse_url(source)
|
||||
if url_parts.scheme not in ('bzr+ssh', 'lp'):
|
||||
return False
|
||||
else:
|
||||
return True
|
||||
|
||||
def branch(self, source, dest):
|
||||
url_parts = self.parse_url(source)
|
||||
# If we use lp:branchname scheme we need to load plugins
|
||||
if not self.can_handle(source):
|
||||
raise UnhandledSource("Cannot handle {}".format(source))
|
||||
if url_parts.scheme == "lp":
|
||||
from bzrlib.plugin import load_plugins
|
||||
load_plugins()
|
||||
try:
|
||||
remote_branch = Branch.open(source)
|
||||
remote_branch.bzrdir.sprout(dest).open_branch()
|
||||
except Exception as e:
|
||||
raise e
|
||||
|
||||
def install(self, source):
|
||||
url_parts = self.parse_url(source)
|
||||
branch_name = url_parts.path.strip("/").split("/")[-1]
|
||||
dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched", branch_name)
|
||||
if not os.path.exists(dest_dir):
|
||||
mkdir(dest_dir, perms=0755)
|
||||
try:
|
||||
self.branch(source, dest_dir)
|
||||
except OSError as e:
|
||||
raise UnhandledSource(e.strerror)
|
||||
return dest_dir
|
||||
|
1
hooks/charmhelpers/payload/__init__.py
Normal file
@ -0,0 +1 @@
|
||||
"Tools for working with files injected into a charm just before deployment."
|
50
hooks/charmhelpers/payload/execd.py
Normal file
@ -0,0 +1,50 @@
|
||||
#!/usr/bin/env python
|
||||
|
||||
import os
|
||||
import sys
|
||||
import subprocess
|
||||
from charmhelpers.core import hookenv
|
||||
|
||||
|
||||
def default_execd_dir():
|
||||
return os.path.join(os.environ['CHARM_DIR'], 'exec.d')
|
||||
|
||||
|
||||
def execd_module_paths(execd_dir=None):
|
||||
"""Generate a list of full paths to modules within execd_dir."""
|
||||
if not execd_dir:
|
||||
execd_dir = default_execd_dir()
|
||||
|
||||
if not os.path.exists(execd_dir):
|
||||
return
|
||||
|
||||
for subpath in os.listdir(execd_dir):
|
||||
module = os.path.join(execd_dir, subpath)
|
||||
if os.path.isdir(module):
|
||||
yield module
|
||||
|
||||
|
||||
def execd_submodule_paths(command, execd_dir=None):
|
||||
"""Generate a list of full paths to the specified command within exec_dir.
|
||||
"""
|
||||
for module_path in execd_module_paths(execd_dir):
|
||||
path = os.path.join(module_path, command)
|
||||
if os.access(path, os.X_OK) and os.path.isfile(path):
|
||||
yield path
|
||||
|
||||
|
||||
def execd_run(command, execd_dir=None, die_on_error=False, stderr=None):
|
||||
"""Run command for each module within execd_dir which defines it."""
|
||||
for submodule_path in execd_submodule_paths(command, execd_dir):
|
||||
try:
|
||||
subprocess.check_call(submodule_path, shell=True, stderr=stderr)
|
||||
except subprocess.CalledProcessError as e:
|
||||
hookenv.log("Error ({}) running {}. Output: {}".format(
|
||||
e.returncode, e.cmd, e.output))
|
||||
if die_on_error:
|
||||
sys.exit(e.returncode)
|
||||
|
||||
|
||||
def execd_preinstall(execd_dir=None):
|
||||
"""Run charm-pre-install for each module within execd_dir."""
|
||||
execd_run('charm-pre-install', execd_dir=execd_dir)
|
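A short sketch of where execd_preinstall is typically called; the hook body is illustrative.

# Illustrative sketch only -- not part of this commit. execd_preinstall()
# runs any exec.d/*/charm-pre-install payload scripts shipped with the charm;
# calling it first in the install hook is the usual pattern.
from charmhelpers.payload.execd import execd_preinstall


def install():
    # Give operator-injected payloads a chance to run before package install.
    execd_preinstall()
    # ... normal package installation would follow here ...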
@ -1 +1 @@
|
||||
hooks.py
|
||||
quantum_hooks.py
|
@ -1 +1 @@
|
||||
hooks.py
|
||||
quantum_hooks.py
|
@ -1 +1 @@
|
||||
hooks.py
|
||||
quantum_hooks.py
|
313
hooks/hooks.py
@ -1,313 +0,0 @@
|
||||
#!/usr/bin/python
|
||||
|
||||
import lib.utils as utils
|
||||
import lib.cluster_utils as cluster
|
||||
import lib.openstack_common as openstack
|
||||
import sys
|
||||
import quantum_utils as qutils
|
||||
import os
|
||||
|
||||
PLUGIN = utils.config_get('plugin')
|
||||
|
||||
|
||||
def install():
|
||||
utils.configure_source()
|
||||
if PLUGIN in qutils.GATEWAY_PKGS.keys():
|
||||
if PLUGIN == qutils.OVS:
|
||||
# Install OVS DKMS first to ensure that the ovs module
|
||||
# loaded supports GRE tunnels
|
||||
utils.install('openvswitch-datapath-dkms')
|
||||
utils.install(*qutils.GATEWAY_PKGS[PLUGIN])
|
||||
else:
|
||||
utils.juju_log('ERROR', 'Please provide a valid plugin config')
|
||||
sys.exit(1)
|
||||
|
||||
|
||||
@utils.inteli_restart(qutils.RESTART_MAP)
|
||||
def config_changed():
|
||||
src = utils.config_get('openstack-origin')
|
||||
available = openstack.get_os_codename_install_source(src)
|
||||
installed = openstack.get_os_codename_package('quantum-common')
|
||||
if (available and
|
||||
openstack.get_os_version_codename(available) > \
|
||||
openstack.get_os_version_codename(installed)):
|
||||
qutils.do_openstack_upgrade()
|
||||
|
||||
if PLUGIN in qutils.GATEWAY_PKGS.keys():
|
||||
render_quantum_conf()
|
||||
render_dhcp_agent_conf()
|
||||
render_l3_agent_conf()
|
||||
render_metadata_agent_conf()
|
||||
render_metadata_api_conf()
|
||||
render_plugin_conf()
|
||||
render_ext_port_upstart()
|
||||
render_evacuate_unit()
|
||||
if PLUGIN == qutils.OVS:
|
||||
qutils.add_bridge(qutils.INT_BRIDGE)
|
||||
qutils.add_bridge(qutils.EXT_BRIDGE)
|
||||
ext_port = utils.config_get('ext-port')
|
||||
if ext_port:
|
||||
qutils.add_bridge_port(qutils.EXT_BRIDGE, ext_port)
|
||||
else:
|
||||
utils.juju_log('ERROR',
|
||||
'Please provide a valid plugin config')
|
||||
sys.exit(1)
|
||||
|
||||
|
||||
def upgrade_charm():
|
||||
install()
|
||||
config_changed()
|
||||
|
||||
|
||||
def render_ext_port_upstart():
|
||||
if utils.config_get('ext-port'):
|
||||
with open(qutils.EXT_PORT_CONF, "w") as conf:
|
||||
conf.write(utils.render_template(
|
||||
os.path.basename(qutils.EXT_PORT_CONF),
|
||||
{"ext_port": utils.config_get('ext-port')}
|
||||
)
|
||||
)
|
||||
else:
|
||||
if os.path.exists(qutils.EXT_PORT_CONF):
|
||||
os.remove(qutils.EXT_PORT_CONF)
|
||||
|
||||
|
||||
def render_l3_agent_conf():
|
||||
context = get_keystone_conf()
|
||||
if (context and
|
||||
os.path.exists(qutils.L3_AGENT_CONF)):
|
||||
with open(qutils.L3_AGENT_CONF, "w") as conf:
|
||||
conf.write(utils.render_template(
|
||||
os.path.basename(qutils.L3_AGENT_CONF),
|
||||
context
|
||||
)
|
||||
)
|
||||
|
||||
|
||||
def render_dhcp_agent_conf():
|
||||
if (os.path.exists(qutils.DHCP_AGENT_CONF)):
|
||||
with open(qutils.DHCP_AGENT_CONF, "w") as conf:
|
||||
conf.write(utils.render_template(
|
||||
os.path.basename(qutils.DHCP_AGENT_CONF),
|
||||
{}
|
||||
)
|
||||
)
|
||||
|
||||
|
||||
def render_metadata_agent_conf():
|
||||
context = get_keystone_conf()
|
||||
if (context and
|
||||
os.path.exists(qutils.METADATA_AGENT_CONF)):
|
||||
context['local_ip'] = utils.get_host_ip()
|
||||
context['shared_secret'] = qutils.get_shared_secret()
|
||||
with open(qutils.METADATA_AGENT_CONF, "w") as conf:
|
||||
conf.write(utils.render_template(
|
||||
os.path.basename(qutils.METADATA_AGENT_CONF),
|
||||
context
|
||||
)
|
||||
)
|
||||
|
||||
|
||||
def render_quantum_conf():
|
||||
context = get_rabbit_conf()
|
||||
if (context and
|
||||
os.path.exists(qutils.QUANTUM_CONF)):
|
||||
context['core_plugin'] = \
|
||||
qutils.CORE_PLUGIN[PLUGIN]
|
||||
with open(qutils.QUANTUM_CONF, "w") as conf:
|
||||
conf.write(utils.render_template(
|
||||
os.path.basename(qutils.QUANTUM_CONF),
|
||||
context
|
||||
)
|
||||
)
|
||||
|
||||
|
||||
def render_plugin_conf():
|
||||
context = get_quantum_db_conf()
|
||||
if (context and
|
||||
os.path.exists(qutils.PLUGIN_CONF[PLUGIN])):
|
||||
context['local_ip'] = utils.get_host_ip()
|
||||
conf_file = qutils.PLUGIN_CONF[PLUGIN]
|
||||
with open(conf_file, "w") as conf:
|
||||
conf.write(utils.render_template(
|
||||
os.path.basename(conf_file),
|
||||
context
|
||||
)
|
||||
)
|
||||
|
||||
|
||||
def render_metadata_api_conf():
|
||||
context = get_nova_db_conf()
|
||||
r_context = get_rabbit_conf()
|
||||
q_context = get_keystone_conf()
|
||||
if (context and r_context and q_context and
|
||||
os.path.exists(qutils.NOVA_CONF)):
|
||||
context.update(r_context)
|
||||
context.update(q_context)
|
||||
context['shared_secret'] = qutils.get_shared_secret()
|
||||
with open(qutils.NOVA_CONF, "w") as conf:
|
||||
conf.write(utils.render_template(
|
||||
os.path.basename(qutils.NOVA_CONF),
|
||||
context
|
||||
)
|
||||
)
|
||||
|
||||
|
||||
def render_evacuate_unit():
|
||||
context = get_keystone_conf()
|
||||
if context:
|
||||
with open('/usr/local/bin/quantum-evacuate-unit', "w") as conf:
|
||||
conf.write(utils.render_template('evacuate_unit.py', context))
|
||||
os.chmod('/usr/local/bin/quantum-evacuate-unit', 0700)
|
||||
|
||||
|
||||
def get_keystone_conf():
|
||||
for relid in utils.relation_ids('quantum-network-service'):
|
||||
for unit in utils.relation_list(relid):
|
||||
conf = {
|
||||
"keystone_host": utils.relation_get('keystone_host',
|
||||
unit, relid),
|
||||
"service_port": utils.relation_get('service_port',
|
||||
unit, relid),
|
||||
"auth_port": utils.relation_get('auth_port', unit, relid),
|
||||
"service_username": utils.relation_get('service_username',
|
||||
unit, relid),
|
||||
"service_password": utils.relation_get('service_password',
|
||||
unit, relid),
|
||||
"service_tenant": utils.relation_get('service_tenant',
|
||||
unit, relid),
|
||||
"quantum_host": utils.relation_get('quantum_host',
|
||||
unit, relid),
|
||||
"quantum_port": utils.relation_get('quantum_port',
|
||||
unit, relid),
|
||||
"quantum_url": utils.relation_get('quantum_url',
|
||||
unit, relid),
|
||||
"region": utils.relation_get('region',
|
||||
unit, relid)
|
||||
}
|
||||
if None not in conf.itervalues():
|
||||
return conf
|
||||
return None
|
||||
|
||||
|
||||
def db_joined():
|
||||
utils.relation_set(quantum_username=qutils.DB_USER,
|
||||
quantum_database=qutils.QUANTUM_DB,
|
||||
quantum_hostname=utils.unit_get('private-address'),
|
||||
nova_username=qutils.NOVA_DB_USER,
|
||||
nova_database=qutils.NOVA_DB,
|
||||
nova_hostname=utils.unit_get('private-address'))
|
||||
|
||||
|
||||
@utils.inteli_restart(qutils.RESTART_MAP)
|
||||
def db_changed():
|
||||
render_plugin_conf()
|
||||
render_metadata_api_conf()
|
||||
|
||||
|
||||
def get_quantum_db_conf():
|
||||
for relid in utils.relation_ids('shared-db'):
|
||||
for unit in utils.relation_list(relid):
|
||||
conf = {
|
||||
"host": utils.relation_get('db_host',
|
||||
unit, relid),
|
||||
"user": qutils.DB_USER,
|
||||
"password": utils.relation_get('quantum_password',
|
||||
unit, relid),
|
||||
"db": qutils.QUANTUM_DB
|
||||
}
|
||||
if None not in conf.itervalues():
|
||||
return conf
|
||||
return None
|
||||
|
||||
|
||||
def get_nova_db_conf():
|
||||
for relid in utils.relation_ids('shared-db'):
|
||||
for unit in utils.relation_list(relid):
|
||||
conf = {
|
||||
"host": utils.relation_get('db_host',
|
||||
unit, relid),
|
||||
"user": qutils.NOVA_DB_USER,
|
||||
"password": utils.relation_get('nova_password',
|
||||
unit, relid),
|
||||
"db": qutils.NOVA_DB
|
||||
}
|
||||
if None not in conf.itervalues():
|
||||
return conf
|
||||
return None
|
||||
|
||||
|
||||
def amqp_joined():
|
||||
utils.relation_set(username=qutils.RABBIT_USER,
|
||||
vhost=qutils.RABBIT_VHOST)
|
||||
|
||||
|
||||
@utils.inteli_restart(qutils.RESTART_MAP)
|
||||
def amqp_changed():
|
||||
render_dhcp_agent_conf()
|
||||
render_quantum_conf()
|
||||
render_metadata_api_conf()
|
||||
|
||||
|
||||
def get_rabbit_conf():
|
||||
for relid in utils.relation_ids('amqp'):
|
||||
for unit in utils.relation_list(relid):
|
||||
conf = {
|
||||
"rabbit_host": utils.relation_get('private-address',
|
||||
unit, relid),
|
||||
"rabbit_virtual_host": qutils.RABBIT_VHOST,
|
||||
"rabbit_userid": qutils.RABBIT_USER,
|
||||
"rabbit_password": utils.relation_get('password',
|
||||
unit, relid)
|
||||
}
|
||||
clustered = utils.relation_get('clustered', unit, relid)
|
||||
if clustered:
|
||||
conf['rabbit_host'] = utils.relation_get('vip', unit, relid)
|
||||
if None not in conf.itervalues():
|
||||
return conf
|
||||
return None
|
||||
|
||||
|
||||
@utils.inteli_restart(qutils.RESTART_MAP)
|
||||
def nm_changed():
|
||||
render_dhcp_agent_conf()
|
||||
render_l3_agent_conf()
|
||||
render_metadata_agent_conf()
|
||||
render_metadata_api_conf()
|
||||
render_evacuate_unit()
|
||||
store_ca_cert()
|
||||
|
||||
|
||||
def store_ca_cert():
|
||||
ca_cert = get_ca_cert()
|
||||
if ca_cert:
|
||||
qutils.install_ca(ca_cert)
|
||||
|
||||
|
||||
def get_ca_cert():
|
||||
for relid in utils.relation_ids('quantum-network-service'):
|
||||
for unit in utils.relation_list(relid):
|
||||
ca_cert = utils.relation_get('ca_cert', unit, relid)
|
||||
if ca_cert:
|
||||
return ca_cert
|
||||
return None
|
||||
|
||||
|
||||
def cluster_departed():
|
||||
conf = get_keystone_conf()
|
||||
if conf and cluster.eligible_leader(None):
|
||||
qutils.reassign_agent_resources(conf)
|
||||
|
||||
utils.do_hooks({
|
||||
"install": install,
|
||||
"config-changed": config_changed,
|
||||
"upgrade-charm": upgrade_charm,
|
||||
"shared-db-relation-joined": db_joined,
|
||||
"shared-db-relation-changed": db_changed,
|
||||
"amqp-relation-joined": amqp_joined,
|
||||
"amqp-relation-changed": amqp_changed,
|
||||
"quantum-network-service-relation-changed": nm_changed,
|
||||
"cluster-relation-departed": cluster_departed
|
||||
})
|
||||
|
||||
sys.exit(0)
|
@ -1 +1 @@
|
||||
hooks.py
|
||||
quantum_hooks.py
|
@ -1,230 +0,0 @@
|
||||
#!/usr/bin/python
|
||||
|
||||
# Common python helper functions used for OpenStack charms.
|
||||
|
||||
import apt_pkg as apt
|
||||
import subprocess
|
||||
import os
|
||||
|
||||
CLOUD_ARCHIVE_URL = "http://ubuntu-cloud.archive.canonical.com/ubuntu"
|
||||
CLOUD_ARCHIVE_KEY_ID = '5EDB1B62EC4926EA'
|
||||
|
||||
ubuntu_openstack_release = {
|
||||
'oneiric': 'diablo',
|
||||
'precise': 'essex',
|
||||
'quantal': 'folsom',
|
||||
'raring': 'grizzly',
|
||||
}
|
||||
|
||||
|
||||
openstack_codenames = {
|
||||
'2011.2': 'diablo',
|
||||
'2012.1': 'essex',
|
||||
'2012.2': 'folsom',
|
||||
'2013.1': 'grizzly',
|
||||
'2013.2': 'havana',
|
||||
}
|
||||
|
||||
# The ugly duckling
|
||||
swift_codenames = {
|
||||
'1.4.3': 'diablo',
|
||||
'1.4.8': 'essex',
|
||||
'1.7.4': 'folsom',
|
||||
'1.7.6': 'grizzly',
|
||||
'1.7.7': 'grizzly',
|
||||
'1.8.0': 'grizzly',
|
||||
}
|
||||
|
||||
|
||||
def juju_log(msg):
|
||||
subprocess.check_call(['juju-log', msg])
|
||||
|
||||
|
||||
def error_out(msg):
|
||||
juju_log("FATAL ERROR: %s" % msg)
|
||||
exit(1)
|
||||
|
||||
|
||||
def lsb_release():
|
||||
'''Return /etc/lsb-release in a dict'''
|
||||
lsb = open('/etc/lsb-release', 'r')
|
||||
d = {}
|
||||
for l in lsb:
|
||||
k, v = l.split('=')
|
||||
d[k.strip()] = v.strip()
|
||||
return d
|
||||
|
||||
|
||||
def get_os_codename_install_source(src):
|
||||
'''Derive OpenStack release codename from a given installation source.'''
|
||||
ubuntu_rel = lsb_release()['DISTRIB_CODENAME']
|
||||
|
||||
rel = ''
|
||||
if src == 'distro':
|
||||
try:
|
||||
rel = ubuntu_openstack_release[ubuntu_rel]
|
||||
except KeyError:
|
||||
e = 'Could not derive openstack release for '\
|
||||
'this Ubuntu release: %s' % rel
|
||||
error_out(e)
|
||||
return rel
|
||||
|
||||
if src.startswith('cloud:'):
|
||||
ca_rel = src.split(':')[1]
|
||||
ca_rel = ca_rel.split('%s-' % ubuntu_rel)[1].split('/')[0]
|
||||
return ca_rel
|
||||
|
||||
# Best guess match based on deb string provided
|
||||
if src.startswith('deb') or src.startswith('ppa'):
|
||||
for k, v in openstack_codenames.iteritems():
|
||||
if v in src:
|
||||
return v
|
||||
|
||||
|
||||
def get_os_codename_version(vers):
|
||||
'''Determine OpenStack codename from version number.'''
|
||||
try:
|
||||
return openstack_codenames[vers]
|
||||
except KeyError:
|
||||
e = 'Could not determine OpenStack codename for version %s' % vers
|
||||
error_out(e)
|
||||
|
||||
|
||||
def get_os_version_codename(codename):
|
||||
'''Determine OpenStack version number from codename.'''
|
||||
for k, v in openstack_codenames.iteritems():
|
||||
if v == codename:
|
||||
return k
|
||||
e = 'Could not derive OpenStack version for '\
|
||||
'codename: %s' % codename
|
||||
error_out(e)
|
||||
|
||||
|
||||
def get_os_codename_package(pkg):
|
||||
'''Derive OpenStack release codename from an installed package.'''
|
||||
apt.init()
|
||||
cache = apt.Cache()
|
||||
try:
|
||||
pkg = cache[pkg]
|
||||
except:
|
||||
e = 'Could not determine version of installed package: %s' % pkg
|
||||
error_out(e)
|
||||
|
||||
vers = apt.UpstreamVersion(pkg.current_ver.ver_str)
|
||||
|
||||
try:
|
||||
if 'swift' in pkg.name:
|
||||
vers = vers[:5]
|
||||
return swift_codenames[vers]
|
||||
else:
|
||||
vers = vers[:6]
|
||||
return openstack_codenames[vers]
|
||||
except KeyError:
|
||||
e = 'Could not determine OpenStack codename for version %s' % vers
|
||||
error_out(e)
|
||||
|
||||
|
||||
def get_os_version_package(pkg):
|
||||
'''Derive OpenStack version number from an installed package.'''
|
||||
codename = get_os_codename_package(pkg)
|
||||
|
||||
if 'swift' in pkg:
|
||||
vers_map = swift_codenames
|
||||
else:
|
||||
vers_map = openstack_codenames
|
||||
|
||||
for version, cname in vers_map.iteritems():
|
||||
if cname == codename:
|
||||
return version
|
||||
e = "Could not determine OpenStack version for package: %s" % pkg
|
||||
error_out(e)
|
||||
|
||||
|
||||
def configure_installation_source(rel):
|
||||
'''Configure apt installation source.'''
|
||||
|
||||
def _import_key(keyid):
|
||||
cmd = "apt-key adv --keyserver keyserver.ubuntu.com " \
|
||||
"--recv-keys %s" % keyid
|
||||
try:
|
||||
subprocess.check_call(cmd.split(' '))
|
||||
except subprocess.CalledProcessError:
|
||||
error_out("Error importing repo key %s" % keyid)
|
||||
|
||||
if rel == 'distro':
|
||||
return
|
||||
elif rel[:4] == "ppa:":
|
||||
src = rel
|
||||
subprocess.check_call(["add-apt-repository", "-y", src])
|
||||
elif rel[:3] == "deb":
|
||||
l = len(rel.split('|'))
|
||||
if l == 2:
|
||||
src, key = rel.split('|')
|
||||
juju_log("Importing PPA key from keyserver for %s" % src)
|
||||
_import_key(key)
|
||||
elif l == 1:
|
||||
src = rel
|
||||
else:
|
||||
error_out("Invalid openstack-release: %s" % rel)
|
||||
|
||||
with open('/etc/apt/sources.list.d/juju_deb.list', 'w') as f:
|
||||
f.write(src)
|
||||
elif rel[:6] == 'cloud:':
|
||||
ubuntu_rel = lsb_release()['DISTRIB_CODENAME']
|
||||
rel = rel.split(':')[1]
|
||||
u_rel = rel.split('-')[0]
|
||||
ca_rel = rel.split('-')[1]
|
||||
|
||||
if u_rel != ubuntu_rel:
|
||||
e = 'Cannot install from Cloud Archive pocket %s on this Ubuntu '\
|
||||
'version (%s)' % (ca_rel, ubuntu_rel)
|
||||
error_out(e)
|
||||
|
||||
if 'staging' in ca_rel:
|
||||
# staging is just a regular PPA.
|
||||
os_rel = ca_rel.split('/')[0]
|
||||
ppa = 'ppa:ubuntu-cloud-archive/%s-staging' % os_rel
|
||||
cmd = 'add-apt-repository -y %s' % ppa
|
||||
subprocess.check_call(cmd.split(' '))
|
||||
return
|
||||
|
||||
# map charm config options to actual archive pockets.
|
||||
pockets = {
|
||||
'folsom': 'precise-updates/folsom',
|
||||
'folsom/updates': 'precise-updates/folsom',
|
||||
'folsom/proposed': 'precise-proposed/folsom',
|
||||
'grizzly': 'precise-updates/grizzly',
|
||||
'grizzly/updates': 'precise-updates/grizzly',
|
||||
'grizzly/proposed': 'precise-proposed/grizzly'
|
||||
}
|
||||
|
||||
try:
|
||||
pocket = pockets[ca_rel]
|
||||
except KeyError:
|
||||
e = 'Invalid Cloud Archive release specified: %s' % rel
|
||||
error_out(e)
|
||||
|
||||
src = "deb %s %s main" % (CLOUD_ARCHIVE_URL, pocket)
|
||||
_import_key(CLOUD_ARCHIVE_KEY_ID)
|
||||
|
||||
with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as f:
|
||||
f.write(src)
|
||||
else:
|
||||
error_out("Invalid openstack-release specified: %s" % rel)
|
||||
|
||||
|
||||
def save_script_rc(script_path="scripts/scriptrc", **env_vars):
|
||||
"""
|
||||
Write an rc file in the charm-delivered directory containing
|
||||
exported environment variables provided by env_vars. Any charm scripts run
|
||||
outside the juju hook environment can source this scriptrc to obtain
|
||||
updated config information necessary to perform health checks or
|
||||
service changes.
|
||||
"""
|
||||
charm_dir = os.getenv('CHARM_DIR')
|
||||
juju_rc_path = "%s/%s" % (charm_dir, script_path)
|
||||
with open(juju_rc_path, 'wb') as rc_script:
|
||||
rc_script.write(
|
||||
"#!/bin/bash\n")
|
||||
[rc_script.write('export %s=%s\n' % (u, p))
|
||||
for u, p in env_vars.iteritems() if u != "script_path"]
|
@ -1,359 +0,0 @@
|
||||
#
|
||||
# Copyright 2012 Canonical Ltd.
|
||||
#
|
||||
# This file is sourced from lp:openstack-charm-helpers
|
||||
#
|
||||
# Authors:
|
||||
# James Page <james.page@ubuntu.com>
|
||||
# Paul Collins <paul.collins@canonical.com>
|
||||
# Adam Gandelman <adamg@ubuntu.com>
|
||||
#
|
||||
|
||||
import json
|
||||
import os
|
||||
import subprocess
|
||||
import socket
|
||||
import sys
|
||||
import hashlib
|
||||
|
||||
|
||||
def do_hooks(hooks):
|
||||
hook = os.path.basename(sys.argv[0])
|
||||
|
||||
try:
|
||||
hook_func = hooks[hook]
|
||||
except KeyError:
|
||||
juju_log('INFO',
|
||||
"This charm doesn't know how to handle '{}'.".format(hook))
|
||||
else:
|
||||
hook_func()
|
||||
|
||||
|
||||
def install(*pkgs):
|
||||
cmd = [
|
||||
'apt-get',
|
||||
'-y',
|
||||
'install'
|
||||
]
|
||||
for pkg in pkgs:
|
||||
cmd.append(pkg)
|
||||
subprocess.check_call(cmd)
|
||||
|
||||
TEMPLATES_DIR = 'templates'
|
||||
|
||||
try:
|
||||
import jinja2
|
||||
except ImportError:
|
||||
install('python-jinja2')
|
||||
import jinja2
|
||||
|
||||
try:
|
||||
import dns.resolver
|
||||
except ImportError:
|
||||
install('python-dnspython')
|
||||
import dns.resolver
|
||||
|
||||
|
||||
def render_template(template_name, context, template_dir=TEMPLATES_DIR):
|
||||
templates = jinja2.Environment(
|
||||
loader=jinja2.FileSystemLoader(template_dir)
|
||||
)
|
||||
template = templates.get_template(template_name)
|
||||
return template.render(context)
|
||||
|
||||
CLOUD_ARCHIVE = \
|
||||
""" # Ubuntu Cloud Archive
|
||||
deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main
|
||||
"""
|
||||
|
||||
CLOUD_ARCHIVE_POCKETS = {
|
||||
'folsom': 'precise-updates/folsom',
|
||||
'folsom/updates': 'precise-updates/folsom',
|
||||
'folsom/proposed': 'precise-proposed/folsom',
|
||||
'grizzly': 'precise-updates/grizzly',
|
||||
'grizzly/updates': 'precise-updates/grizzly',
|
||||
'grizzly/proposed': 'precise-proposed/grizzly'
|
||||
}
|
||||
|
||||
|
||||
def configure_source():
|
||||
source = str(config_get('openstack-origin'))
|
||||
if not source:
|
||||
return
|
||||
if source.startswith('ppa:'):
|
||||
cmd = [
|
||||
'add-apt-repository',
|
||||
source
|
||||
]
|
||||
subprocess.check_call(cmd)
|
||||
if source.startswith('cloud:'):
|
||||
# CA values should be formatted as cloud:ubuntu-openstack/pocket, eg:
|
||||
# cloud:precise-folsom/updates or cloud:precise-folsom/proposed
|
||||
install('ubuntu-cloud-keyring')
|
||||
pocket = source.split(':')[1]
|
||||
pocket = pocket.split('-')[1]
|
||||
with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as apt:
|
||||
apt.write(CLOUD_ARCHIVE.format(CLOUD_ARCHIVE_POCKETS[pocket]))
|
||||
if source.startswith('deb'):
|
||||
l = len(source.split('|'))
|
||||
if l == 2:
|
||||
(apt_line, key) = source.split('|')
|
||||
cmd = [
|
||||
'apt-key',
|
||||
'adv', '--keyserver keyserver.ubuntu.com',
|
||||
'--recv-keys', key
|
||||
]
|
||||
subprocess.check_call(cmd)
|
||||
elif l == 1:
|
||||
apt_line = source
|
||||
|
||||
with open('/etc/apt/sources.list.d/quantum.list', 'w') as apt:
|
||||
apt.write(apt_line + "\n")
|
||||
cmd = [
|
||||
'apt-get',
|
||||
'update'
|
||||
]
|
||||
subprocess.check_call(cmd)
|
||||
|
||||
# Protocols
|
||||
TCP = 'TCP'
|
||||
UDP = 'UDP'
|
||||
|
||||
|
||||
def expose(port, protocol='TCP'):
|
||||
cmd = [
|
||||
'open-port',
|
||||
'{}/{}'.format(port, protocol)
|
||||
]
|
||||
subprocess.check_call(cmd)
|
||||
|
||||
|
||||
def juju_log(severity, message):
|
||||
cmd = [
|
||||
'juju-log',
|
||||
'--log-level', severity,
|
||||
message
|
||||
]
|
||||
subprocess.check_call(cmd)
|
||||
|
||||
|
||||
cache = {}
|
||||
|
||||
|
||||
def cached(func):
|
||||
def wrapper(*args, **kwargs):
|
||||
global cache
|
||||
key = str((func, args, kwargs))
|
||||
try:
|
||||
return cache[key]
|
||||
except KeyError:
|
||||
res = func(*args, **kwargs)
|
||||
cache[key] = res
|
||||
return res
|
||||
return wrapper
|
||||
|
||||
|
||||
@cached
|
||||
def relation_ids(relation):
|
||||
cmd = [
|
||||
'relation-ids',
|
||||
relation
|
||||
]
|
||||
result = str(subprocess.check_output(cmd)).split()
|
||||
if result == "":
|
||||
return None
|
||||
else:
|
||||
return result
|
||||
|
||||
|
||||
@cached
|
||||
def relation_list(rid):
|
||||
cmd = [
|
||||
'relation-list',
|
||||
'-r', rid,
|
||||
]
|
||||
result = str(subprocess.check_output(cmd)).split()
|
||||
if result == "":
|
||||
return None
|
||||
else:
|
||||
return result
|
||||
|
||||
|
||||
@cached
|
||||
def relation_get(attribute, unit=None, rid=None):
|
||||
cmd = [
|
||||
'relation-get',
|
||||
]
|
||||
if rid:
|
||||
cmd.append('-r')
|
||||
cmd.append(rid)
|
||||
cmd.append(attribute)
|
||||
if unit:
|
||||
cmd.append(unit)
|
||||
value = subprocess.check_output(cmd).strip() # IGNORE:E1103
|
||||
if value == "":
|
||||
return None
|
||||
else:
|
||||
return value
|
||||
|
||||
|
||||
@cached
|
||||
def relation_get_dict(relation_id=None, remote_unit=None):
|
||||
"""Obtain all relation data as dict by way of JSON"""
|
||||
cmd = [
|
||||
'relation-get', '--format=json'
|
||||
]
|
||||
if relation_id:
|
||||
cmd.append('-r')
|
||||
cmd.append(relation_id)
|
||||
if remote_unit:
|
||||
remote_unit_orig = os.getenv('JUJU_REMOTE_UNIT', None)
|
||||
os.environ['JUJU_REMOTE_UNIT'] = remote_unit
|
||||
j = subprocess.check_output(cmd)
|
||||
if remote_unit and remote_unit_orig:
|
||||
os.environ['JUJU_REMOTE_UNIT'] = remote_unit_orig
|
||||
d = json.loads(j)
|
||||
settings = {}
|
||||
# convert unicode to strings
|
||||
for k, v in d.iteritems():
|
||||
settings[str(k)] = str(v)
|
||||
return settings
|
||||
|
||||
|
||||
def relation_set(**kwargs):
|
||||
cmd = [
|
||||
'relation-set'
|
||||
]
|
||||
args = []
|
||||
for k, v in kwargs.items():
|
||||
if k == 'rid':
|
||||
if v:
|
||||
cmd.append('-r')
|
||||
cmd.append(v)
|
||||
else:
|
||||
args.append('{}={}'.format(k, v))
|
||||
cmd += args
|
||||
subprocess.check_call(cmd)
|
||||
|
||||
|
||||
@cached
|
||||
def unit_get(attribute):
|
||||
cmd = [
|
||||
'unit-get',
|
||||
attribute
|
||||
]
|
||||
value = subprocess.check_output(cmd).strip() # IGNORE:E1103
|
||||
if value == "":
|
||||
return None
|
||||
else:
|
||||
return value
|
||||
|
||||
|
||||
@cached
|
||||
def config_get(attribute):
|
||||
cmd = [
|
||||
'config-get',
|
||||
'--format',
|
||||
'json',
|
||||
]
|
||||
out = subprocess.check_output(cmd).strip() # IGNORE:E1103
|
||||
cfg = json.loads(out)
|
||||
|
||||
try:
|
||||
return cfg[attribute]
|
||||
except KeyError:
|
||||
return None
|
||||
|
||||
|
||||
@cached
|
||||
def get_unit_hostname():
|
||||
return socket.gethostname()
|
||||
|
||||
|
||||
@cached
|
||||
def get_host_ip(hostname=unit_get('private-address')):
|
||||
try:
|
||||
# Test to see if already an IPv4 address
|
||||
socket.inet_aton(hostname)
|
||||
return hostname
|
||||
except socket.error:
|
||||
answers = dns.resolver.query(hostname, 'A')
|
||||
if answers:
|
||||
return answers[0].address
|
||||
return None
|
||||
|
||||
|
||||
def _svc_control(service, action):
|
||||
subprocess.check_call(['service', service, action])
|
||||
|
||||
|
||||
def restart(*services):
|
||||
for service in services:
|
||||
_svc_control(service, 'restart')
|
||||
|
||||
|
||||
def stop(*services):
|
||||
for service in services:
|
||||
_svc_control(service, 'stop')
|
||||
|
||||
|
||||
def start(*services):
|
||||
for service in services:
|
||||
_svc_control(service, 'start')
|
||||
|
||||
|
||||
def reload(*services):
|
||||
for service in services:
|
||||
try:
|
||||
_svc_control(service, 'reload')
|
||||
except subprocess.CalledProcessError:
|
||||
# Reload failed - either service does not support reload
|
||||
# or it was not running - restart will fixup most things
|
||||
_svc_control(service, 'restart')
|
||||
|
||||
|
||||
def running(service):
|
||||
try:
|
||||
output = subprocess.check_output(['service', service, 'status'])
|
||||
except subprocess.CalledProcessError:
|
||||
return False
|
||||
else:
|
||||
if ("start/running" in output or
|
||||
"is running" in output):
|
||||
return True
|
||||
else:
|
||||
return False
|
||||
|
||||
|
||||
def file_hash(path):
|
||||
if os.path.exists(path):
|
||||
h = hashlib.md5()
|
||||
with open(path, 'r') as source:
|
||||
h.update(source.read()) # IGNORE:E1101 - it does have update
|
||||
return h.hexdigest()
|
||||
else:
|
||||
return None
|
||||
|
||||
|
||||
def inteli_restart(restart_map):
|
||||
def wrap(f):
|
||||
def wrapped_f(*args):
|
||||
checksums = {}
|
||||
for path in restart_map:
|
||||
checksums[path] = file_hash(path)
|
||||
f(*args)
|
||||
restarts = []
|
||||
for path in restart_map:
|
||||
if checksums[path] != file_hash(path):
|
||||
restarts += restart_map[path]
|
||||
restart(*list(set(restarts)))
|
||||
return wrapped_f
|
||||
return wrap
|
||||
|
||||
|
||||
def is_relation_made(relation, key='private-address'):
|
||||
for r_id in (relation_ids(relation) or []):
|
||||
for unit in (relation_list(r_id) or []):
|
||||
if relation_get(key, rid=r_id, unit=unit):
|
||||
return True
|
||||
return False
|
@ -1 +1 @@
|
||||
hooks.py
|
||||
quantum_hooks.py
|
175
hooks/quantum_contexts.py
Normal file
@ -0,0 +1,175 @@
|
||||
# vim: set ts=4:et
|
||||
import os
|
||||
import uuid
|
||||
import socket
|
||||
from charmhelpers.core.hookenv import (
|
||||
config,
|
||||
relation_ids,
|
||||
related_units,
|
||||
relation_get,
|
||||
unit_get,
|
||||
cached,
|
||||
)
|
||||
from charmhelpers.fetch import (
|
||||
apt_install,
|
||||
)
|
||||
from charmhelpers.contrib.openstack.context import (
|
||||
OSContextGenerator,
|
||||
context_complete
|
||||
)
|
||||
from charmhelpers.contrib.openstack.utils import (
|
||||
get_os_codename_install_source
|
||||
)
|
||||
|
||||
DB_USER = "quantum"
|
||||
QUANTUM_DB = "quantum"
|
||||
NOVA_DB_USER = "nova"
|
||||
NOVA_DB = "nova"
|
||||
|
||||
QUANTUM_OVS_PLUGIN = \
|
||||
"quantum.plugins.openvswitch.ovs_quantum_plugin.OVSQuantumPluginV2"
|
||||
QUANTUM_NVP_PLUGIN = \
|
||||
"quantum.plugins.nicira.nicira_nvp_plugin.QuantumPlugin.NvpPluginV2"
|
||||
NEUTRON_OVS_PLUGIN = \
|
||||
"neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2"
|
||||
NEUTRON_NVP_PLUGIN = \
|
||||
"neutron.plugins.nicira.nicira_nvp_plugin.NeutronPlugin.NvpPluginV2"
|
||||
NEUTRON = 'neutron'
|
||||
QUANTUM = 'quantum'
|
||||
|
||||
|
||||
def networking_name():
|
||||
''' Determine whether neutron or quantum naming should be used '''
|
||||
if get_os_codename_install_source(config('openstack-origin')) >= 'havana':
|
||||
return NEUTRON
|
||||
else:
|
||||
return QUANTUM
|
||||
|
||||
OVS = 'ovs'
|
||||
NVP = 'nvp'
|
||||
|
||||
CORE_PLUGIN = {
|
||||
QUANTUM: {
|
||||
OVS: QUANTUM_OVS_PLUGIN,
|
||||
NVP: QUANTUM_NVP_PLUGIN
|
||||
},
|
||||
NEUTRON: {
|
||||
OVS: NEUTRON_OVS_PLUGIN,
|
||||
NVP: NEUTRON_NVP_PLUGIN
|
||||
},
|
||||
}
|
||||
|
||||
|
||||
def core_plugin():
|
||||
return CORE_PLUGIN[networking_name()][config('plugin')]
|
||||
|
||||
|
||||
class NetworkServiceContext(OSContextGenerator):
|
||||
interfaces = ['quantum-network-service']
|
||||
|
||||
def __call__(self):
|
||||
for rid in relation_ids('quantum-network-service'):
|
||||
for unit in related_units(rid):
|
||||
ctxt = {
|
||||
'keystone_host': relation_get('keystone_host',
|
||||
rid=rid, unit=unit),
|
||||
'service_port': relation_get('service_port', rid=rid,
|
||||
unit=unit),
|
||||
'auth_port': relation_get('auth_port', rid=rid, unit=unit),
|
||||
'service_tenant': relation_get('service_tenant',
|
||||
rid=rid, unit=unit),
|
||||
'service_username': relation_get('service_username',
|
||||
rid=rid, unit=unit),
|
||||
'service_password': relation_get('service_password',
|
||||
rid=rid, unit=unit),
|
||||
'quantum_host': relation_get('quantum_host',
|
||||
rid=rid, unit=unit),
|
||||
'quantum_port': relation_get('quantum_port',
|
||||
rid=rid, unit=unit),
|
||||
'quantum_url': relation_get('quantum_url',
|
||||
rid=rid, unit=unit),
|
||||
'region': relation_get('region',
|
||||
rid=rid, unit=unit),
|
||||
# XXX: Hard-coded http.
|
||||
'service_protocol': 'http',
|
||||
'auth_protocol': 'http',
|
||||
}
|
||||
if context_complete(ctxt):
|
||||
return ctxt
|
||||
return {}
|
||||
|
||||
|
||||
class ExternalPortContext(OSContextGenerator):
|
||||
def __call__(self):
|
||||
if config('ext-port'):
|
||||
return {"ext_port": config('ext-port')}
|
||||
else:
|
||||
return None
|
||||
|
||||
|
||||
class QuantumGatewayContext(OSContextGenerator):
|
||||
def __call__(self):
|
||||
ctxt = {
|
||||
'shared_secret': get_shared_secret(),
|
||||
'local_ip': get_host_ip(), # XXX: data network impact
|
||||
'core_plugin': core_plugin(),
|
||||
'plugin': config('plugin')
|
||||
}
|
||||
return ctxt
|
||||
|
||||
|
||||
class QuantumSharedDBContext(OSContextGenerator):
|
||||
interfaces = ['shared-db']
|
||||
|
||||
def __call__(self):
|
||||
for rid in relation_ids('shared-db'):
|
||||
for unit in related_units(rid):
|
||||
ctxt = {
|
||||
'database_host': relation_get('db_host', rid=rid,
|
||||
unit=unit),
|
||||
'quantum_db': QUANTUM_DB,
|
||||
'quantum_user': DB_USER,
|
||||
'quantum_password': relation_get('quantum_password',
|
||||
rid=rid, unit=unit),
|
||||
'nova_db': NOVA_DB,
|
||||
'nova_user': NOVA_DB_USER,
|
||||
'nova_password': relation_get('nova_password', rid=rid,
|
||||
unit=unit)
|
||||
}
|
||||
if context_complete(ctxt):
|
||||
return ctxt
|
||||
return {}
|
||||
|
||||
|
||||
@cached
|
||||
def get_host_ip(hostname=None):
|
||||
try:
|
||||
import dns.resolver
|
||||
except ImportError:
|
||||
apt_install('python-dnspython', fatal=True)
|
||||
import dns.resolver
|
||||
hostname = hostname or unit_get('private-address')
|
||||
try:
|
||||
# Test to see if already an IPv4 address
|
||||
socket.inet_aton(hostname)
|
||||
return hostname
|
||||
except socket.error:
|
||||
answers = dns.resolver.query(hostname, 'A')
|
||||
if answers:
|
||||
return answers[0].address
|
||||
|
||||
|
||||
SHARED_SECRET = "/etc/{}/secret.txt"
|
||||
|
||||
|
||||
def get_shared_secret():
|
||||
secret = None
|
||||
_path = SHARED_SECRET.format(networking_name())
|
||||
if not os.path.exists(_path):
|
||||
secret = str(uuid.uuid4())
|
||||
with open(_path, 'w') as secret_file:
|
||||
secret_file.write(secret)
|
||||
else:
|
||||
with open(_path, 'r') as secret_file:
|
||||
secret = secret_file.read().strip()
|
||||
return secret
|
136
hooks/quantum_hooks.py
Executable file
@ -0,0 +1,136 @@
|
||||
#!/usr/bin/python
|
||||
|
||||
from charmhelpers.core.hookenv import (
|
||||
log, ERROR, WARNING,
|
||||
config,
|
||||
relation_get,
|
||||
relation_set,
|
||||
unit_get,
|
||||
Hooks, UnregisteredHookError
|
||||
)
|
||||
from charmhelpers.fetch import (
|
||||
apt_update,
|
||||
apt_install,
|
||||
filter_installed_packages,
|
||||
)
|
||||
from charmhelpers.core.host import (
|
||||
restart_on_change,
|
||||
lsb_release
|
||||
)
|
||||
from charmhelpers.contrib.hahelpers.cluster import(
|
||||
eligible_leader
|
||||
)
|
||||
from charmhelpers.contrib.hahelpers.apache import(
|
||||
install_ca_cert
|
||||
)
|
||||
from charmhelpers.contrib.openstack.utils import (
|
||||
configure_installation_source,
|
||||
openstack_upgrade_available
|
||||
)
|
||||
from charmhelpers.payload.execd import execd_preinstall
|
||||
|
||||
import sys
|
||||
from quantum_utils import (
|
||||
register_configs,
|
||||
restart_map,
|
||||
do_openstack_upgrade,
|
||||
get_packages,
|
||||
get_early_packages,
|
||||
get_common_package,
|
||||
valid_plugin,
|
||||
configure_ovs,
|
||||
reassign_agent_resources,
|
||||
)
|
||||
from quantum_contexts import (
|
||||
DB_USER, QUANTUM_DB,
|
||||
NOVA_DB_USER, NOVA_DB,
|
||||
)
|
||||
|
||||
hooks = Hooks()
|
||||
CONFIGS = register_configs()
|
||||
|
||||
|
||||
@hooks.hook('install')
|
||||
def install():
|
||||
execd_preinstall()
|
||||
src = config('openstack-origin')
|
||||
if (lsb_release()['DISTRIB_CODENAME'] == 'precise' and
|
||||
src == 'distro'):
|
||||
src = 'cloud:precise-folsom'
|
||||
configure_installation_source(src)
|
||||
apt_update(fatal=True)
|
||||
if valid_plugin():
|
||||
apt_install(filter_installed_packages(get_early_packages()),
|
||||
fatal=True)
|
||||
apt_install(filter_installed_packages(get_packages()),
|
||||
fatal=True)
|
||||
else:
|
||||
log('Please provide a valid plugin config', level=ERROR)
|
||||
sys.exit(1)
|
||||
|
||||
|
||||
@hooks.hook('config-changed')
|
||||
@restart_on_change(restart_map())
|
||||
def config_changed():
|
||||
if openstack_upgrade_available(get_common_package()):
|
||||
do_openstack_upgrade(CONFIGS)
|
||||
if valid_plugin():
|
||||
CONFIGS.write_all()
|
||||
configure_ovs()
|
||||
else:
|
||||
log('Please provide a valid plugin config', level=ERROR)
|
||||
sys.exit(1)
|
||||
|
||||
|
||||
@hooks.hook('upgrade-charm')
|
||||
def upgrade_charm():
|
||||
install()
|
||||
config_changed()
|
||||
|
||||
|
||||
@hooks.hook('shared-db-relation-joined')
|
||||
def db_joined():
|
||||
relation_set(quantum_username=DB_USER,
|
||||
quantum_database=QUANTUM_DB,
|
||||
quantum_hostname=unit_get('private-address'),
|
||||
nova_username=NOVA_DB_USER,
|
||||
nova_database=NOVA_DB,
|
||||
nova_hostname=unit_get('private-address'))
|
||||
|
||||
|
||||
@hooks.hook('amqp-relation-joined')
|
||||
def amqp_joined():
|
||||
relation_set(username=config('rabbit-user'),
|
||||
vhost=config('rabbit-vhost'))
|
||||
|
||||
|
||||
@hooks.hook('shared-db-relation-changed',
|
||||
'amqp-relation-changed')
|
||||
@restart_on_change(restart_map())
|
||||
def db_amqp_changed():
|
||||
CONFIGS.write_all()
|
||||
|
||||
|
||||
@hooks.hook('quantum-network-service-relation-changed')
|
||||
@restart_on_change(restart_map())
|
||||
def nm_changed():
|
||||
CONFIGS.write_all()
|
||||
if relation_get('ca_cert'):
|
||||
install_ca_cert(relation_get('ca_cert'))
|
||||
|
||||
|
||||
@hooks.hook("cluster-relation-departed")
|
||||
def cluster_departed():
|
||||
if config('plugin') == 'nvp':
|
||||
log('Unable to re-assign agent resources for failed nodes with nvp',
|
||||
level=WARNING)
|
||||
return
|
||||
if eligible_leader(None):
|
||||
reassign_agent_resources()
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
try:
|
||||
hooks.execute(sys.argv)
|
||||
except UnregisteredHookError as e:
|
||||
log('Unknown hook {} - skipping.'.format(e))
|
@ -1,183 +1,312 @@
|
||||
import subprocess
|
||||
import os
|
||||
import uuid
|
||||
import base64
|
||||
import apt_pkg as apt
|
||||
from lib.utils import (
|
||||
juju_log as log,
|
||||
configure_source,
|
||||
config_get
|
||||
)
|
||||
from charmhelpers.core.host import service_running
|
||||
from charmhelpers.core.hookenv import (
|
||||
log,
|
||||
config,
|
||||
)
|
||||
from charmhelpers.fetch import (
|
||||
apt_install,
|
||||
apt_update
|
||||
)
|
||||
from charmhelpers.contrib.network.ovs import (
|
||||
add_bridge,
|
||||
add_bridge_port,
|
||||
full_restart,
|
||||
)
|
||||
from charmhelpers.contrib.openstack.utils import (
|
||||
configure_installation_source,
|
||||
get_os_codename_install_source,
|
||||
get_os_codename_package
|
||||
)
|
||||
|
||||
import charmhelpers.contrib.openstack.context as context
|
||||
import charmhelpers.contrib.openstack.templating as templating
|
||||
from charmhelpers.contrib.openstack.neutron import headers_package
|
||||
from quantum_contexts import (
|
||||
CORE_PLUGIN, OVS, NVP,
|
||||
NEUTRON, QUANTUM,
|
||||
networking_name,
|
||||
QuantumGatewayContext,
|
||||
NetworkServiceContext,
|
||||
QuantumSharedDBContext,
|
||||
ExternalPortContext,
|
||||
)
|
||||
|
||||
|
||||
OVS = "ovs"
|
||||
def valid_plugin():
|
||||
return config('plugin') in CORE_PLUGIN[networking_name()]
|
||||
|
||||
OVS_PLUGIN = \
|
||||
"quantum.plugins.openvswitch.ovs_quantum_plugin.OVSQuantumPluginV2"
|
||||
CORE_PLUGIN = {
|
||||
OVS: OVS_PLUGIN,
|
||||
}
|
||||
|
||||
OVS_PLUGIN_CONF = \
|
||||
QUANTUM_OVS_PLUGIN_CONF = \
|
||||
"/etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini"
|
||||
PLUGIN_CONF = {
|
||||
OVS: OVS_PLUGIN_CONF,
|
||||
}
|
||||
QUANTUM_NVP_PLUGIN_CONF = \
|
||||
"/etc/quantum/plugins/nicira/nvp.ini"
|
||||
QUANTUM_PLUGIN_CONF = {
|
||||
OVS: QUANTUM_OVS_PLUGIN_CONF,
|
||||
NVP: QUANTUM_NVP_PLUGIN_CONF
|
||||
}
|
||||
|
||||
GATEWAY_PKGS = {
|
||||
NEUTRON_OVS_PLUGIN_CONF = \
|
||||
"/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini"
|
||||
NEUTRON_NVP_PLUGIN_CONF = \
|
||||
"/etc/neutron/plugins/nicira/nvp.ini"
|
||||
NEUTRON_PLUGIN_CONF = {
|
||||
OVS: NEUTRON_OVS_PLUGIN_CONF,
|
||||
NVP: NEUTRON_NVP_PLUGIN_CONF
|
||||
}
|
||||
|
||||
QUANTUM_GATEWAY_PKGS = {
|
||||
OVS: [
|
||||
"quantum-plugin-openvswitch-agent",
|
||||
"quantum-l3-agent",
|
||||
"quantum-dhcp-agent",
|
||||
'python-mysqldb',
|
||||
"nova-api-metadata"
|
||||
],
|
||||
}
|
||||
|
||||
GATEWAY_AGENTS = {
|
||||
OVS: [
|
||||
"quantum-plugin-openvswitch-agent",
|
||||
"quantum-l3-agent",
|
||||
],
|
||||
NVP: [
|
||||
"openvswitch-switch",
|
||||
"quantum-dhcp-agent",
|
||||
'python-mysqldb',
|
||||
"nova-api-metadata"
|
||||
],
|
||||
}
|
||||
]
|
||||
}
|
||||
|
||||
NEUTRON_GATEWAY_PKGS = {
|
||||
OVS: [
|
||||
"neutron-plugin-openvswitch-agent",
|
||||
"openvswitch-switch",
|
||||
"neutron-l3-agent",
|
||||
"neutron-dhcp-agent",
|
||||
'python-mysqldb',
|
||||
'python-oslo.config', # Force upgrade
|
||||
"nova-api-metadata"
|
||||
],
|
||||
NVP: [
|
||||
"openvswitch-switch",
|
||||
"neutron-dhcp-agent",
|
||||
'python-mysqldb',
|
||||
'python-oslo.config', # Force upgrade
|
||||
"nova-api-metadata"
|
||||
]
|
||||
}
|
||||
|
||||
GATEWAY_PKGS = {
|
||||
QUANTUM: QUANTUM_GATEWAY_PKGS,
|
||||
NEUTRON: NEUTRON_GATEWAY_PKGS,
|
||||
}
|
||||
|
||||
EARLY_PACKAGES = {
|
||||
OVS: ['openvswitch-datapath-dkms']
|
||||
}
|
||||
|
||||
|
||||
def get_early_packages():
|
||||
'''Return a list of packages for pre-install based on the configured plugin'''
|
||||
if config('plugin') in EARLY_PACKAGES:
|
||||
pkgs = EARLY_PACKAGES[config('plugin')]
|
||||
else:
|
||||
return []
|
||||
|
||||
# ensure kernel headers are installed to build any required dkms packages
|
||||
if [p for p in pkgs if 'dkms' in p]:
|
||||
return pkgs + [headers_package()]
|
||||
return pkgs
|
||||
|
||||
|
||||
def get_packages():
|
||||
'''Return a list of packages for install based on the configured plugin'''
|
||||
return GATEWAY_PKGS[networking_name()][config('plugin')]
|
||||
|
||||
|
||||
def get_common_package():
|
||||
if get_os_codename_package('quantum-common', fatal=False) is not None:
|
||||
return 'quantum-common'
|
||||
else:
|
||||
return 'neutron-common'
|
||||
|
||||
EXT_PORT_CONF = '/etc/init/ext-port.conf'
|
||||
|
||||
|
||||
def get_os_version(package=None):
|
||||
apt.init()
|
||||
cache = apt.Cache()
|
||||
pkg = cache[package or 'quantum-common']
|
||||
if pkg.current_ver:
|
||||
return apt.upstream_version(pkg.current_ver.ver_str)
|
||||
else:
|
||||
return None
|
||||
|
||||
|
||||
if get_os_version('quantum-common') >= "2013.1":
|
||||
for plugin in GATEWAY_AGENTS:
|
||||
GATEWAY_AGENTS[plugin].append("quantum-metadata-agent")
|
||||
|
||||
DB_USER = "quantum"
|
||||
QUANTUM_DB = "quantum"
|
||||
KEYSTONE_SERVICE = "quantum"
|
||||
NOVA_DB_USER = "nova"
|
||||
NOVA_DB = "nova"
|
||||
TEMPLATES = 'templates'
|
||||
|
||||
QUANTUM_CONF = "/etc/quantum/quantum.conf"
|
||||
L3_AGENT_CONF = "/etc/quantum/l3_agent.ini"
|
||||
DHCP_AGENT_CONF = "/etc/quantum/dhcp_agent.ini"
|
||||
METADATA_AGENT_CONF = "/etc/quantum/metadata_agent.ini"
|
||||
QUANTUM_L3_AGENT_CONF = "/etc/quantum/l3_agent.ini"
|
||||
QUANTUM_DHCP_AGENT_CONF = "/etc/quantum/dhcp_agent.ini"
|
||||
QUANTUM_METADATA_AGENT_CONF = "/etc/quantum/metadata_agent.ini"
|
||||
|
||||
NEUTRON_CONF = "/etc/neutron/neutron.conf"
|
||||
NEUTRON_L3_AGENT_CONF = "/etc/neutron/l3_agent.ini"
|
||||
NEUTRON_DHCP_AGENT_CONF = "/etc/neutron/dhcp_agent.ini"
|
||||
NEUTRON_METADATA_AGENT_CONF = "/etc/neutron/metadata_agent.ini"
|
||||
|
||||
NOVA_CONF = "/etc/nova/nova.conf"
|
||||
|
||||
RESTART_MAP = {
|
||||
QUANTUM_CONF: [
|
||||
'quantum-l3-agent',
|
||||
'quantum-dhcp-agent',
|
||||
'quantum-metadata-agent',
|
||||
'quantum-plugin-openvswitch-agent'
|
||||
],
|
||||
DHCP_AGENT_CONF: [
|
||||
'quantum-dhcp-agent'
|
||||
],
|
||||
L3_AGENT_CONF: [
|
||||
'quantum-l3-agent'
|
||||
],
|
||||
METADATA_AGENT_CONF: [
|
||||
'quantum-metadata-agent'
|
||||
],
|
||||
OVS_PLUGIN_CONF: [
|
||||
'quantum-plugin-openvswitch-agent'
|
||||
],
|
||||
NOVA_CONF: [
|
||||
'nova-api-metadata'
|
||||
]
|
||||
}
|
||||
NOVA_CONFIG_FILES = {
|
||||
NOVA_CONF: {
|
||||
'hook_contexts': [context.AMQPContext(),
|
||||
QuantumSharedDBContext(),
|
||||
NetworkServiceContext(),
|
||||
QuantumGatewayContext()],
|
||||
'services': ['nova-api-metadata']
|
||||
},
|
||||
}
|
||||
|
||||
QUANTUM_SHARED_CONFIG_FILES = {
|
||||
QUANTUM_DHCP_AGENT_CONF: {
|
||||
'hook_contexts': [QuantumGatewayContext()],
|
||||
'services': ['quantum-dhcp-agent']
|
||||
},
|
||||
QUANTUM_METADATA_AGENT_CONF: {
|
||||
'hook_contexts': [NetworkServiceContext(),
|
||||
QuantumGatewayContext()],
|
||||
'services': ['quantum-metadata-agent']
|
||||
},
|
||||
}
|
||||
QUANTUM_SHARED_CONFIG_FILES.update(NOVA_CONFIG_FILES)
|
||||
|
||||
NEUTRON_SHARED_CONFIG_FILES = {
|
||||
NEUTRON_DHCP_AGENT_CONF: {
|
||||
'hook_contexts': [QuantumGatewayContext()],
|
||||
'services': ['neutron-dhcp-agent']
|
||||
},
|
||||
NEUTRON_METADATA_AGENT_CONF: {
|
||||
'hook_contexts': [NetworkServiceContext(),
|
||||
QuantumGatewayContext()],
|
||||
'services': ['neutron-metadata-agent']
|
||||
},
|
||||
}
|
||||
NEUTRON_SHARED_CONFIG_FILES.update(NOVA_CONFIG_FILES)
|
||||
|
||||
QUANTUM_OVS_CONFIG_FILES = {
|
||||
QUANTUM_CONF: {
|
||||
'hook_contexts': [context.AMQPContext(),
|
||||
QuantumGatewayContext()],
|
||||
'services': ['quantum-l3-agent',
|
||||
'quantum-dhcp-agent',
|
||||
'quantum-metadata-agent',
|
||||
'quantum-plugin-openvswitch-agent']
|
||||
},
|
||||
QUANTUM_L3_AGENT_CONF: {
|
||||
'hook_contexts': [NetworkServiceContext()],
|
||||
'services': ['quantum-l3-agent']
|
||||
},
|
||||
# TODO: Check to see if this is actually required
|
||||
QUANTUM_OVS_PLUGIN_CONF: {
|
||||
'hook_contexts': [QuantumSharedDBContext(),
|
||||
QuantumGatewayContext()],
|
||||
'services': ['quantum-plugin-openvswitch-agent']
|
||||
},
|
||||
EXT_PORT_CONF: {
|
||||
'hook_contexts': [ExternalPortContext()],
|
||||
'services': []
|
||||
}
|
||||
}
|
||||
QUANTUM_OVS_CONFIG_FILES.update(QUANTUM_SHARED_CONFIG_FILES)
|
||||
|
||||
NEUTRON_OVS_CONFIG_FILES = {
|
||||
NEUTRON_CONF: {
|
||||
'hook_contexts': [context.AMQPContext(),
|
||||
QuantumGatewayContext()],
|
||||
'services': ['neutron-l3-agent',
|
||||
'neutron-dhcp-agent',
|
||||
'neutron-metadata-agent',
|
||||
'neutron-plugin-openvswitch-agent']
|
||||
},
|
||||
NEUTRON_L3_AGENT_CONF: {
|
||||
'hook_contexts': [NetworkServiceContext()],
|
||||
'services': ['neutron-l3-agent']
|
||||
},
|
||||
# TODO: Check to see if this is actually required
|
||||
NEUTRON_OVS_PLUGIN_CONF: {
|
||||
'hook_contexts': [QuantumSharedDBContext(),
|
||||
QuantumGatewayContext()],
|
||||
'services': ['neutron-plugin-openvswitch-agent']
|
||||
},
|
||||
EXT_PORT_CONF: {
|
||||
'hook_contexts': [ExternalPortContext()],
|
||||
'services': []
|
||||
}
|
||||
}
|
||||
NEUTRON_OVS_CONFIG_FILES.update(NEUTRON_SHARED_CONFIG_FILES)
|
||||
|
||||
QUANTUM_NVP_CONFIG_FILES = {
|
||||
QUANTUM_CONF: {
|
||||
'hook_contexts': [context.AMQPContext()],
|
||||
'services': ['quantum-dhcp-agent', 'quantum-metadata-agent']
|
||||
},
|
||||
}
|
||||
QUANTUM_NVP_CONFIG_FILES.update(QUANTUM_SHARED_CONFIG_FILES)
|
||||
|
||||
NEUTRON_NVP_CONFIG_FILES = {
|
||||
NEUTRON_CONF: {
|
||||
'hook_contexts': [context.AMQPContext()],
|
||||
'services': ['neutron-dhcp-agent', 'neutron-metadata-agent']
|
||||
},
|
||||
}
|
||||
NEUTRON_NVP_CONFIG_FILES.update(NEUTRON_SHARED_CONFIG_FILES)
|
||||
|
||||
CONFIG_FILES = {
|
||||
QUANTUM: {
|
||||
NVP: QUANTUM_NVP_CONFIG_FILES,
|
||||
OVS: QUANTUM_OVS_CONFIG_FILES,
|
||||
},
|
||||
NEUTRON: {
|
||||
NVP: NEUTRON_NVP_CONFIG_FILES,
|
||||
OVS: NEUTRON_OVS_CONFIG_FILES,
|
||||
},
|
||||
}
|
||||
|
||||
|
||||
def register_configs():
|
||||
''' Register config files with their respective contexts. '''
|
||||
release = get_os_codename_install_source(config('openstack-origin'))
|
||||
configs = templating.OSConfigRenderer(templates_dir=TEMPLATES,
|
||||
openstack_release=release)
|
||||
|
||||
plugin = config('plugin')
|
||||
name = networking_name()
|
||||
for conf in CONFIG_FILES[name][plugin]:
|
||||
configs.register(conf,
|
||||
CONFIG_FILES[name][plugin][conf]['hook_contexts'])
|
||||
|
||||
return configs
|
||||
|
||||
|
||||
def restart_map():
|
||||
'''
|
||||
Determine the correct resource map to be passed to
|
||||
charmhelpers.core.restart_on_change() based on the services configured.
|
||||
|
||||
:returns: dict: A dictionary mapping config file to lists of services
|
||||
that should be restarted when file changes.
|
||||
'''
|
||||
_map = {}
|
||||
name = networking_name()
|
||||
for f, ctxt in CONFIG_FILES[name][config('plugin')].iteritems():
|
||||
svcs = []
|
||||
for svc in ctxt['services']:
|
||||
svcs.append(svc)
|
||||
if svcs:
|
||||
_map[f] = svcs
|
||||
return _map
|
||||
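restart_map() therefore yields a plain dict of config file paths mapped to the service lists registered for the active plugin, and the hooks feed it straight to charmhelpers' restart_on_change decorator. A hedged usage sketch (it mirrors the hooks in quantum_hooks.py; CONFIGS is the renderer returned by register_configs()):

    # Hedged usage sketch: services tied to a config file are restarted only
    # when writing the templates actually changes that file on disk.
    from charmhelpers.core.host import restart_on_change

    @restart_on_change(restart_map())
    def example_write_configs():
        CONFIGS.write_all()  # CONFIGS as returned by register_configs()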
|
||||
RABBIT_USER = "nova"
|
||||
RABBIT_VHOST = "nova"
|
||||
|
||||
INT_BRIDGE = "br-int"
|
||||
EXT_BRIDGE = "br-ex"
|
||||
|
||||
|
||||
def add_bridge(name):
|
||||
status = subprocess.check_output(["ovs-vsctl", "show"])
|
||||
if "Bridge {}".format(name) not in status:
|
||||
log('INFO', 'Creating bridge {}'.format(name))
|
||||
subprocess.check_call(["ovs-vsctl", "add-br", name])
|
||||
|
||||
|
||||
def del_bridge(name):
|
||||
status = subprocess.check_output(["ovs-vsctl", "show"])
|
||||
if "Bridge {}".format(name) in status:
|
||||
log('INFO', 'Deleting bridge {}'.format(name))
|
||||
subprocess.check_call(["ovs-vsctl", "del-br", name])
|
||||
|
||||
|
||||
def add_bridge_port(name, port):
|
||||
status = subprocess.check_output(["ovs-vsctl", "show"])
|
||||
if ("Bridge {}".format(name) in status and
|
||||
"Interface \"{}\"".format(port) not in status):
|
||||
log('INFO',
|
||||
'Adding port {} to bridge {}'.format(port, name))
|
||||
subprocess.check_call(["ovs-vsctl", "add-port", name, port])
|
||||
subprocess.check_call(["ip", "link", "set", port, "up"])
|
||||
|
||||
|
||||
def del_bridge_port(name, port):
|
||||
status = subprocess.check_output(["ovs-vsctl", "show"])
|
||||
if ("Bridge {}".format(name) in status and
|
||||
"Interface \"{}\"".format(port) in status):
|
||||
log('INFO',
|
||||
'Deleting port {} from bridge {}'.format(port, name))
|
||||
subprocess.check_call(["ovs-vsctl", "del-port", name, port])
|
||||
subprocess.check_call(["ip", "link", "set", port, "down"])
|
||||
|
||||
|
||||
SHARED_SECRET = "/etc/quantum/secret.txt"
|
||||
|
||||
|
||||
def get_shared_secret():
|
||||
secret = None
|
||||
if not os.path.exists(SHARED_SECRET):
|
||||
secret = str(uuid.uuid4())
|
||||
with open(SHARED_SECRET, 'w') as secret_file:
|
||||
secret_file.write(secret)
|
||||
else:
|
||||
with open(SHARED_SECRET, 'r') as secret_file:
|
||||
secret = secret_file.read().strip()
|
||||
return secret
|
||||
|
||||
|
||||
def flush_local_configuration():
|
||||
if os.path.exists('/usr/bin/quantum-netns-cleanup'):
|
||||
cmd = [
|
||||
"quantum-netns-cleanup",
|
||||
"--config-file=/etc/quantum/quantum.conf"
|
||||
]
|
||||
for agent_conf in ['l3_agent.ini', 'dhcp_agent.ini']:
|
||||
agent_cmd = list(cmd)
|
||||
agent_cmd.append('--config-file=/etc/quantum/{}'\
|
||||
.format(agent_conf))
|
||||
subprocess.call(agent_cmd)
|
||||
|
||||
|
||||
def install_ca(ca_cert):
|
||||
with open('/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt',
|
||||
'w') as crt:
|
||||
crt.write(base64.b64decode(ca_cert))
|
||||
subprocess.check_call(['update-ca-certificates', '--fresh'])
|
||||
|
||||
DHCP_AGENT = "DHCP Agent"
|
||||
L3_AGENT = "L3 Agent"
|
||||
|
||||
|
||||
def reassign_agent_resources(env):
|
||||
# TODO: make work with neutron
|
||||
def reassign_agent_resources():
|
||||
''' Use agent scheduler API to detect down agents and re-schedule '''
|
||||
from quantumclient.v2_0 import client
|
||||
env = NetworkServiceContext()()
|
||||
if not env:
|
||||
log('Unable to re-assign resources at this time')
|
||||
return
|
||||
try:
|
||||
from quantumclient.v2_0 import client
|
||||
except ImportError:
|
||||
''' Try to import neutronclient instead for havana+ '''
|
||||
from neutronclient.v2_0 import client
|
||||
|
||||
# TODO: Fixup for https keystone
|
||||
auth_url = 'http://%(keystone_host)s:%(auth_port)s/v2.0' % env
|
||||
quantum = client.Client(username=env['service_username'],
|
||||
@ -192,9 +321,10 @@ def reassign_agent_resources(env):
|
||||
networks = {}
|
||||
for agent in agents['agents']:
|
||||
if not agent['alive']:
|
||||
log('INFO', 'DHCP Agent %s down' % agent['id'])
|
||||
log('DHCP Agent %s down' % agent['id'])
|
||||
for network in \
|
||||
quantum.list_networks_on_dhcp_agent(agent['id'])['networks']:
|
||||
quantum.list_networks_on_dhcp_agent(
|
||||
agent['id'])['networks']:
|
||||
networks[network['id']] = agent['id']
|
||||
else:
|
||||
dhcp_agents.append(agent['id'])
|
||||
@ -203,9 +333,10 @@ def reassign_agent_resources(env):
|
||||
routers = {}
|
||||
for agent in agents['agents']:
|
||||
if not agent['alive']:
|
||||
log('INFO', 'L3 Agent %s down' % agent['id'])
|
||||
log('L3 Agent %s down' % agent['id'])
|
||||
for router in \
|
||||
quantum.list_routers_on_l3_agent(agent['id'])['routers']:
|
||||
quantum.list_routers_on_l3_agent(
|
||||
agent['id'])['routers']:
|
||||
routers[router['id']] = agent['id']
|
||||
else:
|
||||
l3_agents.append(agent['id'])
|
||||
@ -213,8 +344,7 @@ def reassign_agent_resources(env):
|
||||
index = 0
|
||||
for router_id in routers:
|
||||
agent = index % len(l3_agents)
|
||||
log('INFO',
|
||||
'Moving router %s from %s to %s' % \
|
||||
log('Moving router %s from %s to %s' %
|
||||
(router_id, routers[router_id], l3_agents[agent]))
|
||||
quantum.remove_router_from_l3_agent(l3_agent=routers[router_id],
|
||||
router_id=router_id)
|
||||
@ -225,8 +355,7 @@ def reassign_agent_resources(env):
|
||||
index = 0
|
||||
for network_id in networks:
|
||||
agent = index % len(dhcp_agents)
|
||||
log('INFO',
|
||||
'Moving network %s from %s to %s' % \
|
||||
log('Moving network %s from %s to %s' %
|
||||
(network_id, networks[network_id], dhcp_agents[agent]))
|
||||
quantum.remove_network_from_dhcp_agent(dhcp_agent=networks[network_id],
|
||||
network_id=network_id)
|
||||
@ -234,16 +363,45 @@ def reassign_agent_resources(env):
|
||||
body={'network_id': network_id})
|
||||
index += 1
|
||||
|
||||
def do_openstack_upgrade():
|
||||
configure_source()
|
||||
plugin = config_get('plugin')
|
||||
pkgs = []
|
||||
if plugin in GATEWAY_PKGS.keys():
|
||||
pkgs += GATEWAY_PKGS[plugin]
|
||||
if plugin == OVS:
|
||||
pkgs.append('openvswitch-datapath-dkms')
|
||||
cmd = ['apt-get', '-y',
|
||||
'--option', 'Dpkg::Options::=--force-confold',
|
||||
'--option', 'Dpkg::Options::=--force-confdef',
|
||||
'install'] + pkgs
|
||||
subprocess.check_call(cmd)
|
||||
|
||||
def do_openstack_upgrade(configs):
|
||||
"""
|
||||
Perform an upgrade. Takes care of upgrading packages, rewriting
|
||||
configs, database migrations and potentially any other post-upgrade
|
||||
actions.
|
||||
|
||||
:param configs: The charm's main OSConfigRenderer object.
|
||||
"""
|
||||
new_src = config('openstack-origin')
|
||||
new_os_rel = get_os_codename_install_source(new_src)
|
||||
|
||||
log('Performing OpenStack upgrade to %s.' % (new_os_rel))
|
||||
|
||||
configure_installation_source(new_src)
|
||||
dpkg_opts = [
|
||||
'--option', 'Dpkg::Options::=--force-confnew',
|
||||
'--option', 'Dpkg::Options::=--force-confdef',
|
||||
]
|
||||
apt_update(fatal=True)
|
||||
apt_install(packages=get_early_packages(),
|
||||
options=dpkg_opts,
|
||||
fatal=True)
|
||||
apt_install(packages=get_packages(),
|
||||
options=dpkg_opts,
|
||||
fatal=True)
|
||||
|
||||
# set CONFIGS to load templates from new release
|
||||
configs.set_release(openstack_release=new_os_rel)
|
||||
|
||||
|
||||
def configure_ovs():
|
||||
if not service_running('openvswitch-switch'):
|
||||
full_restart()
|
||||
if config('plugin') == OVS:
|
||||
add_bridge(INT_BRIDGE)
|
||||
add_bridge(EXT_BRIDGE)
|
||||
ext_port = config('ext-port')
|
||||
if ext_port:
|
||||
add_bridge_port(EXT_BRIDGE, ext_port)
|
||||
if config('plugin') == NVP:
|
||||
add_bridge(INT_BRIDGE)
|
||||
|
@ -1 +1 @@
|
||||
hooks.py
|
||||
quantum_hooks.py
|
@ -1 +1 @@
|
||||
hooks.py
|
||||
quantum_hooks.py
|
@ -1 +1 @@
|
||||
hooks.py
|
||||
quantum_hooks.py
|
@ -1 +1 @@
|
||||
hooks.py
|
||||
quantum_hooks.py
|
@ -1 +1 @@
|
||||
hooks.py
|
||||
quantum_hooks.py
|
@ -1,18 +1,20 @@
|
||||
name: quantum-gateway
|
||||
summary: Virtual Networking for OpenStack - Quantum Gateway
|
||||
summary: Virtual Networking for OpenStack - Neutron Gateway
|
||||
maintainer: James Page <james.page@ubuntu.com>
|
||||
description: |
|
||||
Quantum is a virtual network service for Openstack, and a part of
|
||||
Neutron is a virtual network service for Openstack, and a part of
|
||||
Netstack. Just like OpenStack Nova provides an API to dynamically
|
||||
request and configure virtual servers, Quantum provides an API to
|
||||
request and configure virtual servers, Neutron provides an API to
|
||||
dynamically request and configure virtual networks. These networks
|
||||
connect "interfaces" from other OpenStack services (e.g., virtual NICs
|
||||
from Nova VMs). The Quantum API supports extensions to provide
|
||||
from Nova VMs). The Neutron API supports extensions to provide
|
||||
advanced network capabilities (e.g., QoS, ACLs, network monitoring,
|
||||
etc.)
|
||||
.
|
||||
This charm provides central Quantum networking services as part
|
||||
of a Quantum based Openstack deployment
|
||||
This charm provides central Neutron networking services as part
|
||||
of a Neutron based Openstack deployment
|
||||
categories:
|
||||
- openstack
|
||||
provides:
|
||||
quantum-network-service:
|
||||
interface: quantum
|
||||
@ -23,4 +25,4 @@ requires:
|
||||
interface: rabbitmq
|
||||
peers:
|
||||
cluster:
|
||||
interface: quantum-gateway-ha
|
||||
interface: quantum-gateway-ha
|
||||
|
5
setup.cfg
Normal file
@ -0,0 +1,5 @@
|
||||
[nosetests]
|
||||
verbosity=2
|
||||
with-coverage=1
|
||||
cover-erase=1
|
||||
cover-package=hooks
|
@ -1,70 +0,0 @@
|
||||
#!/usr/bin/python
|
||||
|
||||
import subprocess
|
||||
|
||||
|
||||
def log(priority, message):
|
||||
print "{}: {}".format(priority, message)
|
||||
|
||||
DHCP_AGENT = "DHCP Agent"
|
||||
L3_AGENT = "L3 Agent"
|
||||
|
||||
|
||||
def evacuate_unit(unit):
|
||||
''' Use agent scheduler API to detect down agents and re-schedule '''
|
||||
from quantumclient.v2_0 import client
|
||||
# TODO: Fixup for https keystone
|
||||
auth_url = 'http://{{ keystone_host }}:{{ auth_port }}/v2.0'
|
||||
quantum = client.Client(username='{{ service_username }}',
|
||||
password='{{ service_password }}',
|
||||
tenant_name='{{ service_tenant }}',
|
||||
auth_url=auth_url,
|
||||
region_name='{{ region }}')
|
||||
|
||||
agents = quantum.list_agents(agent_type=DHCP_AGENT)
|
||||
dhcp_agents = []
|
||||
l3_agents = []
|
||||
networks = {}
|
||||
for agent in agents['agents']:
|
||||
if agent['alive'] and agent['host'] != unit:
|
||||
dhcp_agents.append(agent['id'])
|
||||
elif agent['host'] == unit:
|
||||
for network in \
|
||||
quantum.list_networks_on_dhcp_agent(agent['id'])['networks']:
|
||||
networks[network['id']] = agent['id']
|
||||
|
||||
agents = quantum.list_agents(agent_type=L3_AGENT)
|
||||
routers = {}
|
||||
for agent in agents['agents']:
|
||||
if agent['alive'] and agent['host'] != unit:
|
||||
l3_agents.append(agent['id'])
|
||||
elif agent['host'] == unit:
|
||||
for router in \
|
||||
quantum.list_routers_on_l3_agent(agent['id'])['routers']:
|
||||
routers[router['id']] = agent['id']
|
||||
|
||||
index = 0
|
||||
for router_id in routers:
|
||||
agent = index % len(l3_agents)
|
||||
log('INFO',
|
||||
'Moving router %s from %s to %s' % \
|
||||
(router_id, routers[router_id], l3_agents[agent]))
|
||||
quantum.remove_router_from_l3_agent(l3_agent=routers[router_id],
|
||||
router_id=router_id)
|
||||
quantum.add_router_to_l3_agent(l3_agent=l3_agents[agent],
|
||||
body={'router_id': router_id})
|
||||
index += 1
|
||||
|
||||
index = 0
|
||||
for network_id in networks:
|
||||
agent = index % len(dhcp_agents)
|
||||
log('INFO',
|
||||
'Moving network %s from %s to %s' % \
|
||||
(network_id, networks[network_id], dhcp_agents[agent]))
|
||||
quantum.remove_network_from_dhcp_agent(dhcp_agent=networks[network_id],
|
||||
network_id=network_id)
|
||||
quantum.add_network_to_dhcp_agent(dhcp_agent=dhcp_agents[agent],
|
||||
body={'network_id': network_id})
|
||||
index += 1
|
||||
|
||||
evacuate_unit(subprocess.check_output(['hostname', '-f']).strip())
|
@ -3,3 +3,8 @@ state_path = /var/lib/quantum
|
||||
interface_driver = quantum.agent.linux.interface.OVSInterfaceDriver
|
||||
dhcp_driver = quantum.agent.linux.dhcp.Dnsmasq
|
||||
root_helper = sudo /usr/bin/quantum-rootwrap /etc/quantum/rootwrap.conf
|
||||
{% if plugin == 'nvp' %}
|
||||
ovs_use_veth = True
|
||||
enable_metadata_network = True
|
||||
enable_isolated_metadata = True
|
||||
{% endif %}
|
@ -1,6 +1,6 @@
|
||||
[DEFAULT]
|
||||
interface_driver = quantum.agent.linux.interface.OVSInterfaceDriver
|
||||
auth_url = http://{{ keystone_host }}:{{ service_port }}/v2.0
|
||||
auth_url = {{ service_protocol }}://{{ keystone_host }}:{{ service_port }}/v2.0
|
||||
auth_region = {{ region }}
|
||||
admin_tenant_name = {{ service_tenant }}
|
||||
admin_user = {{ service_username }}
|
@ -1,6 +1,6 @@
|
||||
[DEFAULT]
|
||||
debug = True
|
||||
auth_url = http://{{ keystone_host }}:{{ service_port }}/v2.0
|
||||
auth_url = {{ service_protocol }}://{{ keystone_host }}:{{ service_port }}/v2.0
|
||||
auth_region = {{ region }}
|
||||
admin_tenant_name = {{ service_tenant }}
|
||||
admin_user = {{ service_username }}
|
@ -7,14 +7,14 @@ verbose=True
|
||||
api_paste_config=/etc/nova/api-paste.ini
|
||||
enabled_apis=metadata
|
||||
multi_host=True
|
||||
sql_connection=mysql://{{ user }}:{{ password }}@{{ host }}/{{ db }}
|
||||
sql_connection=mysql://{{ nova_user }}:{{ nova_password }}@{{ database_host }}/{{ nova_db }}
|
||||
quantum_metadata_proxy_shared_secret={{ shared_secret }}
|
||||
service_quantum_metadata_proxy=True
|
||||
# Access to message bus
|
||||
rabbit_userid={{ rabbit_userid }}
|
||||
rabbit_virtual_host={{ rabbit_virtual_host }}
|
||||
rabbit_host={{ rabbit_host }}
|
||||
rabbit_password={{ rabbit_password }}
|
||||
rabbit_userid={{ rabbitmq_user }}
|
||||
rabbit_virtual_host={{ rabbitmq_virtual_host }}
|
||||
rabbit_host={{ rabbitmq_host }}
|
||||
rabbit_password={{ rabbitmq_password }}
|
||||
# Access to quantum API services
|
||||
network_api_class=nova.network.quantumv2.api.API
|
||||
quantum_auth_strategy=keystone
|
||||
@ -22,4 +22,4 @@ quantum_url={{ quantum_url }}
|
||||
quantum_admin_tenant_name={{ service_tenant }}
|
||||
quantum_admin_username={{ service_username }}
|
||||
quantum_admin_password={{ service_password }}
|
||||
quantum_admin_auth_url=http://{{ keystone_host }}:{{ service_port }}/v2.0
|
||||
quantum_admin_auth_url={{ service_protocol }}://{{ keystone_host }}:{{ service_port }}/v2.0
|
@ -1,5 +1,5 @@
|
||||
[DATABASE]
|
||||
sql_connection = mysql://{{ user }}:{{ password }}@{{ host }}/{{ db }}?charset=utf8
|
||||
sql_connection = mysql://{{ quantum_user }}:{{ quantum_password }}@{{ database_host }}/{{ quantum_db }}?charset=utf8
|
||||
reconnect_interval = 2
|
||||
[OVS]
|
||||
local_ip = {{ local_ip }}
|
@ -1,9 +1,9 @@
|
||||
[DEFAULT]
|
||||
verbose = True
|
||||
rabbit_userid = {{ rabbit_userid }}
|
||||
rabbit_virtual_host = {{ rabbit_virtual_host }}
|
||||
rabbit_host = {{ rabbit_host }}
|
||||
rabbit_password = {{ rabbit_password }}
|
||||
rabbit_userid = {{ rabbitmq_user }}
|
||||
rabbit_virtual_host = {{ rabbitmq_virtual_host }}
|
||||
rabbit_host = {{ rabbitmq_host }}
|
||||
rabbit_password = {{ rabbitmq_password }}
|
||||
debug = True
|
||||
bind_host = 0.0.0.0
|
||||
bind_port = 9696
|
10
templates/havana/dhcp_agent.ini
Normal file
@ -0,0 +1,10 @@
|
||||
[DEFAULT]
|
||||
state_path = /var/lib/neutron
|
||||
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
|
||||
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
|
||||
root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf
|
||||
ovs_use_veth = True
|
||||
{% if plugin == 'nvp' %}
|
||||
enable_metadata_network = True
|
||||
enable_isolated_metadata = True
|
||||
{% endif %}
|
9
templates/havana/l3_agent.ini
Normal file
@ -0,0 +1,9 @@
|
||||
[DEFAULT]
|
||||
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
|
||||
auth_url = {{ service_protocol }}://{{ keystone_host }}:{{ service_port }}/v2.0
|
||||
auth_region = {{ region }}
|
||||
admin_tenant_name = {{ service_tenant }}
|
||||
admin_user = {{ service_username }}
|
||||
admin_password = {{ service_password }}
|
||||
root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf
|
||||
ovs_use_veth = True
|
17
templates/havana/metadata_agent.ini
Normal file
@ -0,0 +1,17 @@
|
||||
[DEFAULT]
|
||||
debug = True
|
||||
auth_url = {{ service_protocol }}://{{ keystone_host }}:{{ service_port }}/v2.0
|
||||
auth_region = {{ region }}
|
||||
admin_tenant_name = {{ service_tenant }}
|
||||
admin_user = {{ service_username }}
|
||||
admin_password = {{ service_password }}
|
||||
root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf
|
||||
state_path = /var/lib/neutron
|
||||
# Gateway runs a metadata API server locally
|
||||
nova_metadata_ip = {{ local_ip }}
|
||||
nova_metadata_port = 8775
|
||||
# When proxying metadata requests, Neutron signs the Instance-ID header with a
|
||||
# shared secret to prevent spoofing. You may select any string for a secret,
|
||||
# but it must match here and in the configuration used by the Nova Metadata
|
||||
# Server. NOTE: Nova uses a different key: neutron_metadata_proxy_shared_secret
|
||||
metadata_proxy_shared_secret = {{ shared_secret }}
|
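The comment above describes the shared-secret scheme only in prose; conceptually the signing works along these lines. This is a hedged sketch, not the charm's or Neutron's actual implementation, and the use of HMAC-SHA256 here is an assumption for illustration:

    # Conceptual sketch only: how a shared secret can sign an Instance-ID
    # header so the receiving metadata service can reject spoofed requests.
    import hashlib
    import hmac

    def sign_instance_id(instance_id, shared_secret):
        # The proxy would attach this as a signature header; the metadata
        # server recomputes it with its own copy of the secret and compares.
        return hmac.new(shared_secret, instance_id, hashlib.sha256).hexdigest()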
20
templates/havana/neutron.conf
Normal file
@ -0,0 +1,20 @@
|
||||
[DEFAULT]
|
||||
verbose = True
|
||||
rabbit_userid = {{ rabbitmq_user }}
|
||||
rabbit_virtual_host = {{ rabbitmq_virtual_host }}
|
||||
rabbit_host = {{ rabbitmq_host }}
|
||||
rabbit_password = {{ rabbitmq_password }}
|
||||
debug = True
|
||||
bind_host = 0.0.0.0
|
||||
bind_port = 9696
|
||||
core_plugin = {{ core_plugin }}
|
||||
api_paste_config = /etc/neutron/api-paste.ini
|
||||
control_exchange = neutron
|
||||
notification_driver = neutron.openstack.common.notifier.list_notifier
|
||||
list_notifier_drivers = neutron.openstack.common.notifier.rabbit_notifier
|
||||
lock_path = /var/lock/neutron
|
||||
# Ensure that netns cleanup operations kill processes and remove ports
|
||||
# force = true
|
||||
[AGENT]
|
||||
root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf
|
||||
[QUOTAS]
|
25
templates/havana/nova.conf
Normal file
@ -0,0 +1,25 @@
|
||||
[DEFAULT]
|
||||
logdir=/var/log/nova
|
||||
state_path=/var/lib/nova
|
||||
lock_path=/var/lock/nova
|
||||
root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf
|
||||
verbose=True
|
||||
api_paste_config=/etc/nova/api-paste.ini
|
||||
enabled_apis=metadata
|
||||
multi_host=True
|
||||
sql_connection=mysql://{{ nova_user }}:{{ nova_password }}@{{ database_host }}/{{ nova_db }}
|
||||
neutron_metadata_proxy_shared_secret={{ shared_secret }}
|
||||
service_neutron_metadata_proxy=True
|
||||
# Access to message bus
|
||||
rabbit_userid={{ rabbitmq_user }}
|
||||
rabbit_virtual_host={{ rabbitmq_virtual_host }}
|
||||
rabbit_host={{ rabbitmq_host }}
|
||||
rabbit_password={{ rabbitmq_password }}
|
||||
# Access to neutron API services
|
||||
network_api_class=nova.network.neutronv2.api.API
|
||||
neutron_auth_strategy=keystone
|
||||
neutron_url={{ quantum_url }}
|
||||
neutron_admin_tenant_name={{ service_tenant }}
|
||||
neutron_admin_username={{ service_username }}
|
||||
neutron_admin_password={{ service_password }}
|
||||
neutron_admin_auth_url={{ service_protocol }}://{{ keystone_host }}:{{ service_port }}/v2.0
|
11
templates/havana/ovs_neutron_plugin.ini
Normal file
@ -0,0 +1,11 @@
|
||||
[DATABASE]
|
||||
sql_connection = mysql://{{ quantum_user }}:{{ quantum_password }}@{{ database_host }}/{{ quantum_db }}?charset=utf8
|
||||
reconnect_interval = 2
|
||||
[OVS]
|
||||
local_ip = {{ local_ip }}
|
||||
tenant_network_type = gre
|
||||
enable_tunneling = True
|
||||
tunnel_id_ranges = 1:1000
|
||||
[AGENT]
|
||||
polling_interval = 10
|
||||
root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf
|
2
unit_tests/__init__.py
Normal file
@ -0,0 +1,2 @@
|
||||
import sys
|
||||
sys.path.append('hooks')
|
239
unit_tests/test_quantum_contexts.py
Normal file
@ -0,0 +1,239 @@
|
||||
from mock import MagicMock, patch
|
||||
import quantum_contexts
|
||||
from contextlib import contextmanager
|
||||
|
||||
from test_utils import (
|
||||
CharmTestCase
|
||||
)
|
||||
|
||||
TO_PATCH = [
|
||||
'config',
|
||||
'relation_get',
|
||||
'relation_ids',
|
||||
'related_units',
|
||||
'context_complete',
|
||||
'unit_get',
|
||||
'apt_install',
|
||||
'get_os_codename_install_source',
|
||||
]
|
||||
|
||||
|
||||
@contextmanager
|
||||
def patch_open():
|
||||
'''Patch open() to allow mocking both open() itself and the file that is
|
||||
yielded.
|
||||
|
||||
Yields the mock for "open" and "file", respectively.'''
|
||||
mock_open = MagicMock(spec=open)
|
||||
mock_file = MagicMock(spec=file)
|
||||
|
||||
@contextmanager
|
||||
def stub_open(*args, **kwargs):
|
||||
mock_open(*args, **kwargs)
|
||||
yield mock_file
|
||||
|
||||
with patch('__builtin__.open', stub_open):
|
||||
yield mock_open, mock_file
|
||||
|
||||
|
||||
class _TestQuantumContext(CharmTestCase):
|
||||
def setUp(self):
|
||||
super(_TestQuantumContext, self).setUp(quantum_contexts, TO_PATCH)
|
||||
self.config.side_effect = self.test_config.get
|
||||
|
||||
def test_not_related(self):
|
||||
self.relation_ids.return_value = []
|
||||
self.assertEquals(self.context(), {})
|
||||
|
||||
def test_no_units(self):
|
||||
self.relation_ids.return_value = []
|
||||
self.relation_ids.return_value = ['foo']
|
||||
self.related_units.return_value = []
|
||||
self.assertEquals(self.context(), {})
|
||||
|
||||
def test_no_data(self):
|
||||
self.relation_ids.return_value = ['foo']
|
||||
self.related_units.return_value = ['bar']
|
||||
self.relation_get.side_effect = self.test_relation.get
|
||||
self.context_complete.return_value = False
|
||||
self.assertEquals(self.context(), {})
|
||||
|
||||
def test_data_multi_unit(self):
|
||||
self.relation_ids.return_value = ['foo']
|
||||
self.related_units.return_value = ['bar', 'baz']
|
||||
self.context_complete.return_value = True
|
||||
self.relation_get.side_effect = self.test_relation.get
|
||||
self.assertEquals(self.context(), self.data_result)
|
||||
|
||||
def test_data_single_unit(self):
|
||||
self.relation_ids.return_value = ['foo']
|
||||
self.related_units.return_value = ['bar']
|
||||
self.context_complete.return_value = True
|
||||
self.relation_get.side_effect = self.test_relation.get
|
||||
self.assertEquals(self.context(), self.data_result)
|
||||
|
||||
|
||||
class TestQuantumSharedDBContext(_TestQuantumContext):
|
||||
def setUp(self):
|
||||
super(TestQuantumSharedDBContext, self).setUp()
|
||||
self.context = quantum_contexts.QuantumSharedDBContext()
|
||||
self.test_relation.set(
|
||||
{'db_host': '10.5.0.1',
|
||||
'nova_password': 'novapass',
|
||||
'quantum_password': 'quantumpass'}
|
||||
)
|
||||
self.data_result = {
|
||||
'database_host': '10.5.0.1',
|
||||
'nova_user': 'nova',
|
||||
'nova_password': 'novapass',
|
||||
'nova_db': 'nova',
|
||||
'quantum_user': 'quantum',
|
||||
'quantum_password': 'quantumpass',
|
||||
'quantum_db': 'quantum'
|
||||
}
|
||||
|
||||
|
||||
class TestNetworkServiceContext(_TestQuantumContext):
|
||||
def setUp(self):
|
||||
super(TestNetworkServiceContext, self).setUp()
|
||||
self.context = quantum_contexts.NetworkServiceContext()
|
||||
self.test_relation.set(
|
||||
{'keystone_host': '10.5.0.1',
|
||||
'service_port': '5000',
|
||||
'auth_port': '20000',
|
||||
'service_tenant': 'tenant',
|
||||
'service_username': 'username',
|
||||
'service_password': 'password',
|
||||
'quantum_host': '10.5.0.2',
|
||||
'quantum_port': '9696',
|
||||
'quantum_url': 'http://10.5.0.2:9696/v2',
|
||||
'region': 'aregion'}
|
||||
)
|
||||
self.data_result = {
|
||||
'keystone_host': '10.5.0.1',
|
||||
'service_port': '5000',
|
||||
'auth_port': '20000',
|
||||
'service_tenant': 'tenant',
|
||||
'service_username': 'username',
|
||||
'service_password': 'password',
|
||||
'quantum_host': '10.5.0.2',
|
||||
'quantum_port': '9696',
|
||||
'quantum_url': 'http://10.5.0.2:9696/v2',
|
||||
'region': 'aregion',
|
||||
'service_protocol': 'http',
|
||||
'auth_protocol': 'http',
|
||||
}
|
||||
|
||||
|
||||
class TestExternalPortContext(CharmTestCase):
|
||||
def setUp(self):
|
||||
super(TestExternalPortContext, self).setUp(quantum_contexts,
|
||||
TO_PATCH)
|
||||
|
||||
def test_no_ext_port(self):
|
||||
self.config.return_value = None
|
||||
self.assertEquals(quantum_contexts.ExternalPortContext()(),
|
||||
None)
|
||||
|
||||
def test_ext_port(self):
|
||||
self.config.return_value = 'eth1010'
|
||||
self.assertEquals(quantum_contexts.ExternalPortContext()(),
|
||||
{'ext_port': 'eth1010'})
|
||||
|
||||
|
||||
class TestQuantumGatewayContext(CharmTestCase):
|
||||
def setUp(self):
|
||||
super(TestQuantumGatewayContext, self).setUp(quantum_contexts,
|
||||
TO_PATCH)
|
||||
|
||||
@patch.object(quantum_contexts, 'get_shared_secret')
|
||||
@patch.object(quantum_contexts, 'get_host_ip')
|
||||
def test_all(self, _host_ip, _secret):
|
||||
self.config.return_value = 'ovs'
|
||||
self.get_os_codename_install_source.return_value = 'folsom'
|
||||
_host_ip.return_value = '10.5.0.1'
|
||||
_secret.return_value = 'testsecret'
|
||||
self.assertEquals(quantum_contexts.QuantumGatewayContext()(), {
|
||||
'shared_secret': 'testsecret',
|
||||
'local_ip': '10.5.0.1',
|
||||
'core_plugin': "quantum.plugins.openvswitch.ovs_quantum_plugin."
|
||||
"OVSQuantumPluginV2",
|
||||
'plugin': 'ovs'
|
||||
})
|
||||
|
||||
|
||||
class TestSharedSecret(CharmTestCase):
|
||||
def setUp(self):
|
||||
super(TestSharedSecret, self).setUp(quantum_contexts,
|
||||
TO_PATCH)
|
||||
self.config.side_effect = self.test_config.get
|
||||
|
||||
@patch('os.path')
|
||||
@patch('uuid.uuid4')
|
||||
def test_secret_created_stored(self, _uuid4, _path):
|
||||
_path.exists.return_value = False
|
||||
_uuid4.return_value = 'secret_thing'
|
||||
with patch_open() as (_open, _file):
|
||||
self.assertEquals(quantum_contexts.get_shared_secret(),
|
||||
'secret_thing')
|
||||
_open.assert_called_with(
|
||||
quantum_contexts.SHARED_SECRET.format('quantum'), 'w')
|
||||
_file.write.assert_called_with('secret_thing')
|
||||
|
||||
@patch('os.path')
|
||||
def test_secret_retrieved(self, _path):
|
||||
_path.exists.return_value = True
|
||||
with patch_open() as (_open, _file):
|
||||
_file.read.return_value = 'secret_thing\n'
|
||||
self.assertEquals(quantum_contexts.get_shared_secret(),
|
||||
'secret_thing')
|
||||
_open.assert_called_with(
|
||||
quantum_contexts.SHARED_SECRET.format('quantum'), 'r')
|
||||
|
||||
|
||||
class TestHostIP(CharmTestCase):
|
||||
def setUp(self):
|
||||
super(TestHostIP, self).setUp(quantum_contexts,
|
||||
TO_PATCH)
|
||||
self.config.side_effect = self.test_config.get
|
||||
|
||||
def test_get_host_ip_already_ip(self):
|
||||
self.assertEquals(quantum_contexts.get_host_ip('10.5.0.1'),
|
||||
'10.5.0.1')
|
||||
|
||||
def test_get_host_ip_noarg(self):
|
||||
self.unit_get.return_value = "10.5.0.1"
|
||||
self.assertEquals(quantum_contexts.get_host_ip(),
|
||||
'10.5.0.1')
|
||||
|
||||
@patch('dns.resolver.query')
|
||||
def test_get_host_ip_hostname_unresolvable(self, _query):
|
||||
class NXDOMAIN(Exception):
|
||||
pass
|
||||
_query.side_effect = NXDOMAIN()
|
||||
self.assertRaises(NXDOMAIN, quantum_contexts.get_host_ip,
|
||||
'missing.example.com')
|
||||
|
||||
@patch('dns.resolver.query')
|
||||
def test_get_host_ip_hostname_resolvable(self, _query):
|
||||
data = MagicMock()
|
||||
data.address = '10.5.0.1'
|
||||
_query.return_value = [data]
|
||||
self.assertEquals(quantum_contexts.get_host_ip('myhost.example.com'),
|
||||
'10.5.0.1')
|
||||
_query.assert_called_with('myhost.example.com', 'A')
|
||||
|
||||
|
||||
class TestNetworkingName(CharmTestCase):
|
||||
def setUp(self):
|
||||
super(TestNetworkingName,
|
||||
self).setUp(quantum_contexts,
|
||||
TO_PATCH)
|
||||
|
||||
def test_lt_havana(self):
|
||||
self.get_os_codename_install_source.return_value = 'folsom'
|
||||
self.assertEquals(quantum_contexts.networking_name(), 'quantum')
|
||||
|
||||
def test_ge_havana(self):
|
||||
self.get_os_codename_install_source.return_value = 'havana'
|
||||
self.assertEquals(quantum_contexts.networking_name(), 'neutron')
|
154
unit_tests/test_quantum_hooks.py
Normal file
@ -0,0 +1,154 @@
|
||||
from mock import MagicMock, patch, call
|
||||
import quantum_utils as utils
|
||||
_register_configs = utils.register_configs
|
||||
_restart_map = utils.restart_map
|
||||
utils.register_configs = MagicMock()
|
||||
utils.restart_map = MagicMock()
|
||||
import quantum_hooks as hooks
|
||||
utils.register_configs = _register_configs
|
||||
utils.restart_map = _restart_map
|
||||
|
||||
from test_utils import CharmTestCase
|
||||
|
||||
|
||||
TO_PATCH = [
|
||||
'config',
|
||||
'configure_installation_source',
|
||||
'valid_plugin',
|
||||
'apt_update',
|
||||
'apt_install',
|
||||
'filter_installed_packages',
|
||||
'get_early_packages',
|
||||
'get_packages',
|
||||
'log',
|
||||
'do_openstack_upgrade',
|
||||
'openstack_upgrade_available',
|
||||
'CONFIGS',
|
||||
'configure_ovs',
|
||||
'relation_set',
|
||||
'unit_get',
|
||||
'relation_get',
|
||||
'install_ca_cert',
|
||||
'eligible_leader',
|
||||
'reassign_agent_resources',
|
||||
'get_common_package',
|
||||
'execd_preinstall',
|
||||
'lsb_release'
|
||||
]
|
||||
|
||||
|
||||
class TestQuantumHooks(CharmTestCase):
|
||||
|
||||
def setUp(self):
|
||||
super(TestQuantumHooks, self).setUp(hooks, TO_PATCH)
|
||||
self.config.side_effect = self.test_config.get
|
||||
self.test_config.set('openstack-origin', 'cloud:precise-havana')
|
||||
self.test_config.set('plugin', 'ovs')
|
||||
self.lsb_release.return_value = {'DISTRIB_CODENAME': 'precise'}
|
||||
|
||||
def _call_hook(self, hookname):
|
||||
hooks.hooks.execute([
|
||||
'hooks/{}'.format(hookname)])
|
||||
|
||||
def test_install_hook(self):
|
||||
self.valid_plugin.return_value = True
|
||||
_pkgs = ['foo', 'bar']
|
||||
self.filter_installed_packages.return_value = _pkgs
|
||||
self._call_hook('install')
|
||||
self.configure_installation_source.assert_called_with(
|
||||
'cloud:precise-havana'
|
||||
)
|
||||
self.apt_update.assert_called_with(fatal=True)
|
||||
self.apt_install.assert_has_calls([
|
||||
call(_pkgs, fatal=True),
|
||||
call(_pkgs, fatal=True),
|
||||
])
|
||||
self.get_early_packages.assert_called()
|
||||
self.get_packages.assert_called()
|
||||
self.execd_preinstall.assert_called()
|
||||
|
||||
def test_install_hook_precise_nocloudarchive(self):
|
||||
self.test_config.set('openstack-origin', 'distro')
|
||||
self._call_hook('install')
|
||||
self.configure_installation_source.assert_called_with(
|
||||
'cloud:precise-folsom'
|
||||
)
|
||||
|
||||
@patch('sys.exit')
|
||||
def test_install_hook_invalid_plugin(self, _exit):
|
||||
self.valid_plugin.return_value = False
|
||||
self._call_hook('install')
|
||||
self.log.assert_called()
|
||||
_exit.assert_called_with(1)
|
||||
|
||||
def test_config_changed_upgrade(self):
|
||||
self.openstack_upgrade_available.return_value = True
|
||||
self.valid_plugin.return_value = True
|
||||
self._call_hook('config-changed')
|
||||
self.do_openstack_upgrade.assert_called_with(self.CONFIGS)
|
||||
self.CONFIGS.write_all.assert_called()
|
||||
self.configure_ovs.assert_called()
|
||||
|
||||
@patch('sys.exit')
|
||||
def test_config_changed_invalid_plugin(self, _exit):
|
||||
self.valid_plugin.return_value = False
|
||||
self._call_hook('config-changed')
|
||||
self.log.assert_called()
|
||||
_exit.assert_called_with(1)
|
||||
|
||||
def test_upgrade_charm(self):
|
||||
_install = self.patch('install')
|
||||
_config_changed = self.patch('config_changed')
|
||||
self._call_hook('upgrade-charm')
|
||||
_install.assert_called()
|
||||
_config_changed.assert_called()
|
||||
|
||||
def test_db_joined(self):
|
||||
self.unit_get.return_value = 'myhostname'
|
||||
self._call_hook('shared-db-relation-joined')
|
||||
self.relation_set.assert_called_with(
|
||||
quantum_username='quantum',
|
||||
quantum_database='quantum',
|
||||
quantum_hostname='myhostname',
|
||||
nova_username='nova',
|
||||
nova_database='nova',
|
||||
nova_hostname='myhostname',
|
||||
)
|
||||
|
||||
def test_amqp_joined(self):
|
||||
self._call_hook('amqp-relation-joined')
|
||||
self.relation_set.assert_called_with(
|
||||
username='nova',
|
||||
vhost='nova',
|
||||
)
|
||||
|
||||
def test_amqp_changed(self):
|
||||
self._call_hook('amqp-relation-changed')
|
||||
self.CONFIGS.write_all.assert_called()
|
||||
|
||||
def test_shared_db_changed(self):
|
||||
self._call_hook('shared-db-relation-changed')
|
||||
self.CONFIGS.write_all.assert_called()
|
||||
|
||||
def test_nm_changed(self):
|
||||
self.relation_get.return_value = "cert"
|
||||
self._call_hook('quantum-network-service-relation-changed')
|
||||
self.CONFIGS.write_all.assert_called()
|
||||
self.install_ca_cert.assert_called_with('cert')
|
||||
|
||||
def test_cluster_departed_nvp(self):
|
||||
self.test_config.set('plugin', 'nvp')
|
||||
self._call_hook('cluster-relation-departed')
|
||||
self.log.assert_called()
|
||||
self.eligible_leader.assert_not_called()
|
||||
self.reassign_agent_resources.assert_not_called()
|
||||
|
||||
def test_cluster_departed_ovs_not_leader(self):
|
||||
self.eligible_leader.return_value = False
|
||||
self._call_hook('cluster-relation-departed')
|
||||
self.reassign_agent_resources.assert_not_called()
|
||||
|
||||
def test_cluster_departed_ovs_leader(self):
|
||||
self.eligible_leader.return_value = True
|
||||
self._call_hook('cluster-relation-departed')
|
||||
self.reassign_agent_resources.assert_called()
|
202
unit_tests/test_quantum_utils.py
Normal file
@ -0,0 +1,202 @@
|
||||
from mock import MagicMock, call
|
||||
|
||||
import charmhelpers.contrib.openstack.templating as templating
|
||||
|
||||
templating.OSConfigRenderer = MagicMock()
|
||||
|
||||
import quantum_utils
|
||||
|
||||
from test_utils import (
|
||||
CharmTestCase
|
||||
)
|
||||
|
||||
import charmhelpers.core.hookenv as hookenv
|
||||
|
||||
TO_PATCH = [
|
||||
'config',
|
||||
'get_os_codename_install_source',
|
||||
'get_os_codename_package',
|
||||
'apt_update',
|
||||
'apt_install',
|
||||
'configure_installation_source',
|
||||
'log',
|
||||
'add_bridge',
|
||||
'add_bridge_port',
|
||||
'networking_name',
|
||||
'headers_package',
|
||||
'full_restart',
|
||||
'service_running',
|
||||
]
|
||||
|
||||
|
||||
class TestQuantumUtils(CharmTestCase):
|
||||
def setUp(self):
|
||||
super(TestQuantumUtils, self).setUp(quantum_utils, TO_PATCH)
|
||||
self.networking_name.return_value = 'neutron'
|
||||
self.headers_package.return_value = 'linux-headers-2.6.18'
|
||||
|
||||
def tearDown(self):
|
||||
# Reset the hookenv cache between tests
|
||||
hookenv.cache = {}
|
||||
|
||||
def test_valid_plugin(self):
|
||||
self.config.return_value = 'ovs'
|
||||
self.assertTrue(quantum_utils.valid_plugin())
|
||||
self.config.return_value = 'nvp'
|
||||
self.assertTrue(quantum_utils.valid_plugin())
|
||||
|
||||
def test_invalid_plugin(self):
|
||||
self.config.return_value = 'invalid'
|
||||
self.assertFalse(quantum_utils.valid_plugin())
|
||||
|
||||
def test_get_early_packages_ovs(self):
|
||||
self.config.return_value = 'ovs'
|
||||
self.assertEquals(
|
||||
quantum_utils.get_early_packages(),
|
||||
['openvswitch-datapath-dkms', 'linux-headers-2.6.18'])
|
||||
|
||||
def test_get_early_packages_nvp(self):
|
||||
self.config.return_value = 'nvp'
|
||||
self.assertEquals(quantum_utils.get_early_packages(),
|
||||
[])
|
||||
|
||||
def test_get_packages_ovs(self):
|
||||
self.config.return_value = 'ovs'
|
||||
self.assertNotEqual(quantum_utils.get_packages(), [])
|
||||
|
||||
def test_configure_ovs_starts_service_if_required(self):
|
||||
self.service_running.return_value = False
|
||||
quantum_utils.configure_ovs()
|
||||
self.assertTrue(self.full_restart.called)
|
||||
|
||||
def test_configure_ovs_doesnt_restart_service(self):
|
||||
self.service_running.return_value = True
|
||||
quantum_utils.configure_ovs()
|
||||
self.assertFalse(self.full_restart.called)
|
||||
|
||||
def test_configure_ovs_ovs_ext_port(self):
|
||||
self.config.side_effect = self.test_config.get
|
||||
self.test_config.set('plugin', 'ovs')
|
||||
self.test_config.set('ext-port', 'eth0')
|
||||
quantum_utils.configure_ovs()
|
||||
self.add_bridge.assert_has_calls([
|
||||
call('br-int'),
|
||||
call('br-ex')
|
||||
])
|
||||
self.add_bridge_port.assert_called_with('br-ex', 'eth0')
|
||||
|
||||
def test_configure_ovs_nvp(self):
|
||||
self.config.return_value = 'nvp'
|
||||
quantum_utils.configure_ovs()
|
||||
self.add_bridge.assert_called_with('br-int')
|
||||
|
||||
def test_do_openstack_upgrade(self):
|
||||
self.config.side_effect = self.test_config.get
|
||||
self.test_config.set('openstack-origin', 'cloud:precise-havana')
|
||||
self.test_config.set('plugin', 'ovs')
|
||||
self.config.return_value = 'cloud:precise-havana'
|
||||
self.get_os_codename_install_source.return_value = 'havana'
|
||||
configs = MagicMock()
|
||||
quantum_utils.do_openstack_upgrade(configs)
|
||||
configs.set_release.assert_called_with(openstack_release='havana')
|
||||
self.log.assert_called()
|
||||
self.apt_update.assert_called_with(fatal=True)
|
||||
dpkg_opts = [
|
||||
'--option', 'Dpkg::Options::=--force-confnew',
|
||||
'--option', 'Dpkg::Options::=--force-confdef',
|
||||
]
|
||||
self.apt_install.assert_called_with(
|
||||
packages=quantum_utils.GATEWAY_PKGS['neutron']['ovs'],
|
||||
options=dpkg_opts, fatal=True
|
||||
)
|
||||
self.configure_installation_source.assert_called_with(
|
||||
'cloud:precise-havana'
|
||||
)
|
||||
|
||||
def test_register_configs_ovs(self):
|
||||
self.config.return_value = 'ovs'
|
||||
configs = quantum_utils.register_configs()
|
||||
confs = [quantum_utils.NEUTRON_DHCP_AGENT_CONF,
|
||||
quantum_utils.NEUTRON_METADATA_AGENT_CONF,
|
||||
quantum_utils.NOVA_CONF,
|
||||
quantum_utils.NEUTRON_CONF,
|
||||
quantum_utils.NEUTRON_L3_AGENT_CONF,
|
||||
quantum_utils.NEUTRON_OVS_PLUGIN_CONF,
|
||||
quantum_utils.EXT_PORT_CONF]
|
||||
print configs.register.calls()
|
||||
for conf in confs:
|
||||
configs.register.assert_any_call(
|
||||
conf,
|
||||
quantum_utils.CONFIG_FILES['neutron'][quantum_utils.OVS][conf]
|
||||
['hook_contexts']
|
||||
)
|
||||
|
||||
def test_restart_map_ovs(self):
|
||||
self.config.return_value = 'ovs'
|
||||
ex_map = {
|
||||
quantum_utils.NEUTRON_L3_AGENT_CONF: ['neutron-l3-agent'],
|
||||
quantum_utils.NEUTRON_OVS_PLUGIN_CONF:
|
||||
['neutron-plugin-openvswitch-agent'],
|
||||
quantum_utils.NOVA_CONF: ['nova-api-metadata'],
|
||||
quantum_utils.NEUTRON_METADATA_AGENT_CONF:
|
||||
['neutron-metadata-agent'],
|
||||
quantum_utils.NEUTRON_DHCP_AGENT_CONF: ['neutron-dhcp-agent'],
|
||||
quantum_utils.NEUTRON_CONF: ['neutron-l3-agent',
|
||||
'neutron-dhcp-agent',
|
||||
'neutron-metadata-agent',
|
||||
'neutron-plugin-openvswitch-agent']
|
||||
}
|
||||
self.assertEquals(quantum_utils.restart_map(), ex_map)
|
||||
|
||||
def test_register_configs_nvp(self):
|
||||
self.config.return_value = 'nvp'
|
||||
configs = quantum_utils.register_configs()
|
||||
confs = [quantum_utils.NEUTRON_DHCP_AGENT_CONF,
|
||||
quantum_utils.NEUTRON_METADATA_AGENT_CONF,
|
||||
quantum_utils.NOVA_CONF,
|
||||
quantum_utils.NEUTRON_CONF]
|
||||
for conf in confs:
|
||||
configs.register.assert_any_call(
|
||||
conf,
|
||||
quantum_utils.CONFIG_FILES['neutron'][quantum_utils.NVP][conf]
|
||||
['hook_contexts']
|
||||
)
|
||||
|
||||
def test_restart_map_nvp(self):
|
||||
self.config.return_value = 'nvp'
|
||||
ex_map = {
|
||||
quantum_utils.NEUTRON_DHCP_AGENT_CONF: ['neutron-dhcp-agent'],
|
||||
quantum_utils.NOVA_CONF: ['nova-api-metadata'],
|
||||
quantum_utils.NEUTRON_CONF: ['neutron-dhcp-agent',
|
||||
'neutron-metadata-agent'],
|
||||
quantum_utils.NEUTRON_METADATA_AGENT_CONF:
|
||||
['neutron-metadata-agent'],
|
||||
}
|
||||
self.assertEquals(quantum_utils.restart_map(), ex_map)
|
||||
|
||||
def test_register_configs_pre_install(self):
|
||||
self.config.return_value = 'ovs'
|
||||
self.networking_name.return_value = 'quantum'
|
||||
configs = quantum_utils.register_configs()
|
||||
confs = [quantum_utils.QUANTUM_DHCP_AGENT_CONF,
|
||||
quantum_utils.QUANTUM_METADATA_AGENT_CONF,
|
||||
quantum_utils.NOVA_CONF,
|
||||
quantum_utils.QUANTUM_CONF,
|
||||
quantum_utils.QUANTUM_L3_AGENT_CONF,
|
||||
quantum_utils.QUANTUM_OVS_PLUGIN_CONF,
|
||||
quantum_utils.EXT_PORT_CONF]
|
||||
print configs.register.mock_calls
|
||||
for conf in confs:
|
||||
configs.register.assert_any_call(
|
||||
conf,
|
||||
quantum_utils.CONFIG_FILES['quantum'][quantum_utils.OVS][conf]
|
||||
['hook_contexts']
|
||||
)
|
||||
|
||||
def test_get_common_package_quantum(self):
|
||||
self.get_os_codename_package.return_value = 'folsom'
|
||||
self.assertEquals(quantum_utils.get_common_package(), 'quantum-common')
|
||||
|
||||
def test_get_common_package_neutron(self):
|
||||
self.get_os_codename_package.return_value = None
|
||||
self.assertEquals(quantum_utils.get_common_package(), 'neutron-common')
|
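The tests above never touch the real charm-helpers machinery: CharmTestCase (defined in unit_tests/test_utils.py below) patches each name listed in TO_PATCH on the module under test, and `self.config.side_effect = self.test_config.get` routes config() lookups through a plain dict. The following is a minimal, self-contained sketch of that pattern; `fake_charm_module` and its functions are illustrative stand-ins, not part of this charm.

    # Sketch only (not part of the charm): patch a module-level name with
    # mock.patch.object and drive it from a dict via side_effect, mirroring
    # what the CharmTestCase harness below does for each entry in TO_PATCH.
    import types
    import unittest

    from mock import patch

    # Illustrative stand-in for a charm module such as quantum_utils.
    fake_charm_module = types.ModuleType('fake_charm_module')
    fake_charm_module.config = lambda key: None
    fake_charm_module.valid_plugin = (
        lambda: fake_charm_module.config('plugin') in ['ovs', 'nvp'])


    class ExamplePatchingTest(unittest.TestCase):
        def setUp(self):
            _m = patch.object(fake_charm_module, 'config')
            self.config = _m.start()
            self.addCleanup(_m.stop)
            # Route config() lookups through a dict, like
            # self.config.side_effect = self.test_config.get above.
            self.config.side_effect = {'plugin': 'ovs'}.get

        def test_valid_plugin(self):
            self.assertTrue(fake_charm_module.valid_plugin())


    if __name__ == '__main__':
        unittest.main()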
97
unit_tests/test_utils.py
Normal file
@ -0,0 +1,97 @@
import logging
import unittest
import os
import yaml

from mock import patch


def load_config():
    '''
    Walk backwards from __file__ looking for config.yaml, load and return
    the 'options' section.
    '''
    config = None
    f = __file__
    while config is None:
        d = os.path.dirname(f)
        if os.path.isfile(os.path.join(d, 'config.yaml')):
            config = os.path.join(d, 'config.yaml')
            break
        f = d

    if not config:
        logging.error('Could not find config.yaml in any parent directory '
                      'of %s. ' % __file__)
        raise Exception

    return yaml.safe_load(open(config).read())['options']


def get_default_config():
    '''
    Load the default charm config from config.yaml and return it as a dict.
    If no default is set in config.yaml, its value is None.
    '''
    default_config = {}
    config = load_config()
    for k, v in config.iteritems():
        if 'default' in v:
            default_config[k] = v['default']
        else:
            default_config[k] = None
    return default_config


class CharmTestCase(unittest.TestCase):
    def setUp(self, obj, patches):
        super(CharmTestCase, self).setUp()
        self.patches = patches
        self.obj = obj
        self.test_config = TestConfig()
        self.test_relation = TestRelation()
        self.patch_all()

    def patch(self, method):
        _m = patch.object(self.obj, method)
        mock = _m.start()
        self.addCleanup(_m.stop)
        return mock

    def patch_all(self):
        for method in self.patches:
            setattr(self, method, self.patch(method))


class TestConfig(object):
    def __init__(self):
        self.config = get_default_config()

    def get(self, attr):
        try:
            return self.config[attr]
        except KeyError:
            return None

    def get_all(self):
        return self.config

    def set(self, attr, value):
        if attr not in self.config:
            raise KeyError
        self.config[attr] = value


class TestRelation(object):
    def __init__(self, relation_data=None):
        # Avoid sharing a mutable default dict between instances.
        self.relation_data = relation_data if relation_data is not None else {}

    def set(self, relation_data):
        self.relation_data = relation_data

    def get(self, attr=None, unit=None, rid=None):
        if attr is None:
            return self.relation_data
        elif attr in self.relation_data:
            return self.relation_data[attr]
        return None
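For reference, get_default_config() above flattens the 'options' mapping from the charm's config.yaml into a dict of defaults, with None where no default is declared. The snippet below illustrates the expected shape using a hypothetical two-option config.yaml; the keys and defaults shown are examples only, not the charm's actual option set.

    # Illustration only: a hypothetical config.yaml 'options' section and the
    # defaults dict get_default_config() would derive from it.
    import yaml

    SAMPLE_CONFIG_YAML = """
    options:
      plugin:
        type: string
        default: ovs
        description: Example option with a declared default
      ext-port:
        type: string
        description: Example option with no default declared
    """

    options = yaml.safe_load(SAMPLE_CONFIG_YAML)['options']
    defaults = dict((k, v.get('default')) for k, v in options.items())
    print(defaults)  # {'plugin': 'ovs', 'ext-port': None} (order may vary)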