synced /next

Commit 8d5aa82328 by Cory Benfield, 2015-08-21 08:33:43 +01:00
51 changed files with 2819 additions and 444 deletions


@@ -2,6 +2,7 @@ branch: lp:charm-helpers
 destination: hooks/charmhelpers
 include:
     - core
+    - cli
     - fetch
     - contrib.openstack|inc=*
     - contrib.hahelpers


@@ -1,4 +1,17 @@
 options:
+  debug:
+    default: False
+    type: boolean
+    description: Enable debug logging.
+  verbose:
+    default: False
+    type: boolean
+    description: Enable verbose logging.
+  use-syslog:
+    type: boolean
+    default: False
+    description: |
+      Setting this to True will allow supporting services to log to syslog.
   openstack-origin:
     default: distro
     type: string
@@ -7,17 +20,27 @@ options:
       distro (default), ppa:somecustom/ppa, a deb url sources entry,
       or a supported Cloud Archive release pocket.
-      Supported Cloud Archive sources include: cloud:precise-folsom,
-      cloud:precise-folsom/updates, cloud:precise-folsom/staging,
-      cloud:precise-folsom/proposed.
-      Note that updating this setting to a source that is known to
-      provide a later version of OpenStack will trigger a software
-      upgrade.
-      Note that when openstack-origin-git is specified, openstack
-      specific packages will be installed from source rather than
-      from the openstack-origin repository.
+      Supported Cloud Archive sources include:
+        cloud:<series>-<openstack-release>
+        cloud:<series>-<openstack-release>/updates
+        cloud:<series>-<openstack-release>/staging
+        cloud:<series>-<openstack-release>/proposed
+      For series=Precise we support cloud archives for openstack-release:
+        * icehouse
+      For series=Trusty we support cloud archives for openstack-release:
+        * juno
+        * kilo
+        * ...
+      NOTE: updating this setting to a source that is known to provide
+      a later version of OpenStack will trigger a software upgrade.
+      NOTE: when openstack-origin-git is specified, openstack specific
+      packages will be installed from source rather than from the
+      openstack-origin repository.
   openstack-origin-git:
     default:
     type: string
@@ -46,11 +69,6 @@ options:
     default: neutron
     type: string
     description: Database name for Neutron (if enabled)
-  use-syslog:
-    type: boolean
-    default: False
-    description: |
-      If set to True, supporting services will log to syslog.
   region:
     default: RegionOne
     type: string
@@ -63,7 +81,9 @@ options:
   neutron-external-network:
     type: string
     default: ext_net
-    description: Name of the external network for floating IP addresses provided by Neutron.
+    description: |
+      Name of the external network for floating IP addresses provided by
+      Neutron.
   network-device-mtu:
     type: int
     default:
@@ -146,10 +166,10 @@ options:
     default: -1
     type: int
     description: |
-      Number of pool members allowed per tenant. A negative value means unlimited.
-      The default is unlimited because a member is not a real resource consumer
-      on Openstack. However, on back-end, a member is a resource consumer
-      and that is the reason why quota is possible.
+      Number of pool members allowed per tenant. A negative value means
+      unlimited. The default is unlimited because a member is not a real
+      resource consumer on Openstack. However, on back-end, a member is a
+      resource consumer and that is the reason why quota is possible.
   quota-health-monitors:
     default: -1
     type: int
@@ -157,8 +177,8 @@ options:
       Number of health monitors allowed per tenant. A negative value means
       unlimited.
       The default is unlimited because a health monitor is not a real resource
-      consumer on Openstack. However, on back-end, a member is a resource consumer
-      and that is the reason why quota is possible.
+      consumer on Openstack. However, on back-end, a member is a resource
+      consumer and that is the reason why quota is possible.
   quota-router:
     default: 10
     type: int
@@ -168,7 +188,8 @@ options:
     default: 50
     type: int
     description: |
-      Number of floating IPs allowed per tenant. A negative value means unlimited.
+      Number of floating IPs allowed per tenant. A negative value means
+      unlimited.
   # HA configuration settings
   vip:
     type: string
@@ -182,8 +203,8 @@ options:
     type: string
     default: eth0
     description: |
-      Default network interface to use for HA vip when it cannot be automatically
-      determined.
+      Default network interface to use for HA vip when it cannot be
+      automatically determined.
   vip_cidr:
     type: int
     default: 24
@@ -202,14 +223,6 @@ options:
     description: |
       Default multicast port number that will be used to communicate between
       HA Cluster nodes.
-  debug:
-    default: False
-    type: boolean
-    description: Enable debug logging
-  verbose:
-    default: False
-    type: boolean
-    description: Enable verbose logging
   # Network configuration options
   # by default all access is over 'private-address'
   os-admin-network:
@@ -236,6 +249,18 @@ options:
       192.168.0.0/24)
       .
       This network will be used for public endpoints.
+  os-public-hostname:
+    type: string
+    default:
+    description: |
+      The hostname or address of the public endpoints created for neutron-api
+      in the keystone identity provider.
+      .
+      This value will be used for public endpoints. For example, an
+      os-public-hostname set to 'neutron-api.example.com' with ssl enabled
+      will create the following endpoint for neutron-api:
+      .
+      https://neutron-api.example.com:9696/
   ssl_cert:
     type: string
     default:
@@ -344,6 +369,12 @@ options:
     description: |
       A comma-separated list of nagios servicegroups.
       If left empty, the nagios_context will be used as the servicegroup
+  manage-neutron-plugin-legacy-mode:
+    type: boolean
+    default: True
+    description: |
+      If True neutron-server will install neutron packages for the plugin
+      configured.
   # Calico plugin configuration
   calico-origin:
     default:


@@ -0,0 +1,191 @@
# Copyright 2014-2015 Canonical Limited.
#
# This file is part of charm-helpers.
#
# charm-helpers is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License version 3 as
# published by the Free Software Foundation.
#
# charm-helpers is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
import inspect
import argparse
import sys

from six.moves import zip

from charmhelpers.core import unitdata


class OutputFormatter(object):
    def __init__(self, outfile=sys.stdout):
        self.formats = (
            "raw",
            "json",
            "py",
            "yaml",
            "csv",
            "tab",
        )
        self.outfile = outfile

    def add_arguments(self, argument_parser):
        formatgroup = argument_parser.add_mutually_exclusive_group()
        choices = self.supported_formats
        formatgroup.add_argument("--format", metavar='FMT',
                                 help="Select output format for returned data, "
                                      "where FMT is one of: {}".format(choices),
                                 choices=choices, default='raw')
        for fmt in self.formats:
            fmtfunc = getattr(self, fmt)
            formatgroup.add_argument("-{}".format(fmt[0]),
                                     "--{}".format(fmt), action='store_const',
                                     const=fmt, dest='format',
                                     help=fmtfunc.__doc__)

    @property
    def supported_formats(self):
        return self.formats

    def raw(self, output):
        """Output data as raw string (default)"""
        if isinstance(output, (list, tuple)):
            output = '\n'.join(map(str, output))
        self.outfile.write(str(output))

    def py(self, output):
        """Output data as a nicely-formatted python data structure"""
        import pprint
        pprint.pprint(output, stream=self.outfile)

    def json(self, output):
        """Output data in JSON format"""
        import json
        json.dump(output, self.outfile)

    def yaml(self, output):
        """Output data in YAML format"""
        import yaml
        yaml.safe_dump(output, self.outfile)

    def csv(self, output):
        """Output data as excel-compatible CSV"""
        import csv
        csvwriter = csv.writer(self.outfile)
        csvwriter.writerows(output)

    def tab(self, output):
        """Output data in excel-compatible tab-delimited format"""
        import csv
        csvwriter = csv.writer(self.outfile, dialect=csv.excel_tab)
        csvwriter.writerows(output)

    def format_output(self, output, fmt='raw'):
        fmtfunc = getattr(self, fmt)
        fmtfunc(output)


class CommandLine(object):
    argument_parser = None
    subparsers = None
    formatter = None
    exit_code = 0

    def __init__(self):
        if not self.argument_parser:
            self.argument_parser = argparse.ArgumentParser(description='Perform common charm tasks')
        if not self.formatter:
            self.formatter = OutputFormatter()
            self.formatter.add_arguments(self.argument_parser)
        if not self.subparsers:
            self.subparsers = self.argument_parser.add_subparsers(help='Commands')

    def subcommand(self, command_name=None):
        """
        Decorate a function as a subcommand. Use its arguments as the
        command-line arguments"""
        def wrapper(decorated):
            cmd_name = command_name or decorated.__name__
            subparser = self.subparsers.add_parser(cmd_name,
                                                   description=decorated.__doc__)
            for args, kwargs in describe_arguments(decorated):
                subparser.add_argument(*args, **kwargs)
            subparser.set_defaults(func=decorated)
            return decorated
        return wrapper

    def test_command(self, decorated):
        """
        Subcommand is a boolean test function, so bool return values should be
        converted to a 0/1 exit code.
        """
        decorated._cli_test_command = True
        return decorated

    def no_output(self, decorated):
        """
        Subcommand is not expected to return a value, so don't print a spurious None.
        """
        decorated._cli_no_output = True
        return decorated

    def subcommand_builder(self, command_name, description=None):
        """
        Decorate a function that builds a subcommand. Builders should accept a
        single argument (the subparser instance) and return the function to be
        run as the command."""
        def wrapper(decorated):
            subparser = self.subparsers.add_parser(command_name)
            func = decorated(subparser)
            subparser.set_defaults(func=func)
            subparser.description = description or func.__doc__
        return wrapper

    def run(self):
        "Run cli, processing arguments and executing subcommands."
        arguments = self.argument_parser.parse_args()
        argspec = inspect.getargspec(arguments.func)
        vargs = []
        for arg in argspec.args:
            vargs.append(getattr(arguments, arg))
        if argspec.varargs:
            vargs.extend(getattr(arguments, argspec.varargs))
        output = arguments.func(*vargs)
        if getattr(arguments.func, '_cli_test_command', False):
            self.exit_code = 0 if output else 1
            output = ''
        if getattr(arguments.func, '_cli_no_output', False):
            output = ''
        self.formatter.format_output(output, arguments.format)
        if unitdata._KV:
            unitdata._KV.flush()


cmdline = CommandLine()


def describe_arguments(func):
    """
    Analyze a function's signature and return a data structure suitable for
    passing in as arguments to an argparse parser's add_argument() method."""
    argspec = inspect.getargspec(func)
    # we should probably raise an exception somewhere if func includes **kwargs
    if argspec.defaults:
        positional_args = argspec.args[:-len(argspec.defaults)]
        keyword_names = argspec.args[-len(argspec.defaults):]
        for arg, default in zip(keyword_names, argspec.defaults):
            yield ('--{}'.format(arg),), {'default': default}
    else:
        positional_args = argspec.args
    for arg in positional_args:
        yield (arg,), {}
    if argspec.varargs:
        yield (argspec.varargs,), {'nargs': '*'}
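The `describe_arguments` helper above maps a plain function signature onto argparse options: parameters with defaults become `--flags`, the rest become positionals, and `*args` becomes `nargs='*'`. A standalone sketch of the same mapping (re-implemented here with `inspect.getfullargspec`, since `getargspec` is gone in Python 3.11+; the `greet` function and its parameters are illustrative, not part of charm-helpers):

```python
import argparse
import inspect


def describe_arguments(func):
    """Yield (args, kwargs) pairs suitable for argparse's add_argument(),
    mirroring the helper added in this commit (standalone sketch)."""
    spec = inspect.getfullargspec(func)
    if spec.defaults:
        positional = spec.args[:-len(spec.defaults)]
        keywords = spec.args[-len(spec.defaults):]
        # Parameters with defaults become optional --flags.
        for arg, default in zip(keywords, spec.defaults):
            yield ('--{}'.format(arg),), {'default': default}
    else:
        positional = spec.args
    # Remaining parameters become positional arguments.
    for arg in positional:
        yield (arg,), {}
    if spec.varargs:
        yield (spec.varargs,), {'nargs': '*'}


def greet(name, shout=False):
    return name.upper() if shout else name


parser = argparse.ArgumentParser()
for args, kwargs in describe_arguments(greet):
    parser.add_argument(*args, **kwargs)

ns = parser.parse_args(['alice', '--shout', 'yes'])
```

Here `name` parses as a positional and `shout` as an optional flag, which is exactly how the decorator turns a helper function into a subcommand.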


@@ -0,0 +1,36 @@
# Copyright 2014-2015 Canonical Limited.
#
# This file is part of charm-helpers.
#
# charm-helpers is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License version 3 as
# published by the Free Software Foundation.
#
# charm-helpers is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
from . import cmdline
from charmhelpers.contrib.benchmark import Benchmark


@cmdline.subcommand(command_name='benchmark-start')
def start():
    Benchmark.start()


@cmdline.subcommand(command_name='benchmark-finish')
def finish():
    Benchmark.finish()


@cmdline.subcommand_builder('benchmark-composite', description="Set the benchmark composite score")
def service(subparser):
    subparser.add_argument("value", help="The composite score.")
    subparser.add_argument("units", help="The units the composite score represents, i.e., 'reads/sec'.")
    subparser.add_argument("direction", help="'asc' if a lower score is better, 'desc' if a higher score is better.")
    return Benchmark.set_composite_score


@@ -0,0 +1,32 @@
# Copyright 2014-2015 Canonical Limited.
#
# This file is part of charm-helpers.
#
# charm-helpers is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License version 3 as
# published by the Free Software Foundation.
#
# charm-helpers is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
"""
This module loads sub-modules into the python runtime so they can be
discovered via the inspect module. In order to prevent flake8 from (rightfully)
telling us these are unused modules, throw a ' # noqa' at the end of each import
so that the warning is suppressed.
"""
from . import CommandLine # noqa
"""
Import the sub-modules which have decorated subcommands to register with chlp.
"""
from . import host # noqa
from . import benchmark # noqa
from . import unitdata # noqa
from . import hookenv # noqa


@@ -0,0 +1,23 @@
# Copyright 2014-2015 Canonical Limited.
#
# This file is part of charm-helpers.
#
# charm-helpers is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License version 3 as
# published by the Free Software Foundation.
#
# charm-helpers is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
from . import cmdline
from charmhelpers.core import hookenv


cmdline.subcommand('relation-id')(hookenv.relation_id._wrapped)
cmdline.subcommand('service-name')(hookenv.service_name)
cmdline.subcommand('remote-service-name')(hookenv.remote_service_name._wrapped)
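Note the decorator is applied as a plain call here: `cmdline.subcommand('relation-id')(func)` registers an already-existing function without the `@` syntax. A minimal standalone sketch of that idiom (the `registry` dict and `service_name` function are invented for illustration):

```python
# Dict-backed stand-in for the CommandLine subparser registry.
registry = {}


def subcommand(command_name=None):
    """Register a function under command_name (or its own name)."""
    def wrapper(func):
        registry[command_name or func.__name__] = func
        return func
    return wrapper


def service_name():
    return 'neutron-api'


# Same shape as cmdline.subcommand('service-name')(hookenv.service_name):
subcommand('service-name')(service_name)

result = registry['service-name']()
```

Calling the decorator factory directly lets existing helpers be exposed as CLI commands without editing their definitions.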


@@ -0,0 +1,31 @@
# Copyright 2014-2015 Canonical Limited.
#
# This file is part of charm-helpers.
#
# charm-helpers is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License version 3 as
# published by the Free Software Foundation.
#
# charm-helpers is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
from . import cmdline
from charmhelpers.core import host


@cmdline.subcommand()
def mounts():
    "List mounts"
    return host.mounts()


@cmdline.subcommand_builder('service', description="Control system services")
def service(subparser):
    subparser.add_argument("action", help="The action to perform (start, stop, etc...)")
    subparser.add_argument("service_name", help="Name of the service to control")
    return host.service


@@ -0,0 +1,39 @@
# Copyright 2014-2015 Canonical Limited.
#
# This file is part of charm-helpers.
#
# charm-helpers is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License version 3 as
# published by the Free Software Foundation.
#
# charm-helpers is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
from . import cmdline
from charmhelpers.core import unitdata


@cmdline.subcommand_builder('unitdata', description="Store and retrieve data")
def unitdata_cmd(subparser):
    nested = subparser.add_subparsers()
    get_cmd = nested.add_parser('get', help='Retrieve data')
    get_cmd.add_argument('key', help='Key to retrieve the value of')
    get_cmd.set_defaults(action='get', value=None)
    set_cmd = nested.add_parser('set', help='Store data')
    set_cmd.add_argument('key', help='Key to set')
    set_cmd.add_argument('value', help='Value to store')
    set_cmd.set_defaults(action='set')

    def _unitdata_cmd(action, key, value):
        if action == 'get':
            return unitdata.kv().get(key)
        elif action == 'set':
            unitdata.kv().set(key, value)
            unitdata.kv().flush()
        return ''
    return _unitdata_cmd
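The nested-subparser pattern above gives the `unitdata` command its own `get`/`set` sub-commands, with `set_defaults` selecting the action. It can be exercised standalone; this sketch swaps the persistent key-value store for a plain dict, so `store` and `run` are invented names, not the charm-helpers API:

```python
import argparse

# Dict-backed stand-in for the persistent unitdata key-value store.
store = {}

parser = argparse.ArgumentParser(prog='unitdata')
nested = parser.add_subparsers()

get_cmd = nested.add_parser('get', help='Retrieve data')
get_cmd.add_argument('key')
get_cmd.set_defaults(action='get', value=None)

set_cmd = nested.add_parser('set', help='Store data')
set_cmd.add_argument('key')
set_cmd.add_argument('value')
set_cmd.set_defaults(action='set')


def run(argv):
    """Dispatch on the action chosen by the matched sub-parser."""
    ns = parser.parse_args(argv)
    if ns.action == 'get':
        return store.get(ns.key)
    store[ns.key] = ns.value
    return ''


run(['set', 'colour', 'blue'])
value = run(['get', 'colour'])
```

The `set_defaults(action=..., value=None)` trick is what lets one handler function take a uniform `(action, key, value)` signature even though `get` never defines a `value` argument.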


@@ -44,6 +44,7 @@ from charmhelpers.core.hookenv import (
     ERROR,
     WARNING,
     unit_get,
+    is_leader as juju_is_leader
 )
 from charmhelpers.core.decorators import (
     retry_on_exception,
@@ -63,17 +64,30 @@ class CRMResourceNotFound(Exception):
     pass


+class CRMDCNotFound(Exception):
+    pass
+
+
 def is_elected_leader(resource):
     """
     Returns True if the charm executing this is the elected cluster leader.

     It relies on two mechanisms to determine leadership:
-    1. If the charm is part of a corosync cluster, call corosync to
+    1. If juju is sufficiently new and leadership election is supported,
+       the is_leader command will be used.
+    2. If the charm is part of a corosync cluster, call corosync to
        determine leadership.
-    2. If the charm is not part of a corosync cluster, the leader is
+    3. If the charm is not part of a corosync cluster, the leader is
        determined as being "the alive unit with the lowest unit number". In
        other words, the oldest surviving unit.
     """
+    try:
+        return juju_is_leader()
+    except NotImplementedError:
+        log('Juju leadership election feature not enabled'
+            ', using fallback support',
+            level=WARNING)
+
     if is_clustered():
         if not is_crm_leader(resource):
             log('Deferring action to CRM leader.', level=INFO)
@@ -106,8 +120,9 @@ def is_crm_dc():
         status = subprocess.check_output(cmd, stderr=subprocess.STDOUT)
         if not isinstance(status, six.text_type):
             status = six.text_type(status, "utf-8")
-    except subprocess.CalledProcessError:
-        return False
+    except subprocess.CalledProcessError as ex:
+        raise CRMDCNotFound(str(ex))

     current_dc = ''
     for line in status.split('\n'):
         if line.startswith('Current DC'):
@@ -115,10 +130,14 @@ def is_crm_dc():
             current_dc = line.split(':')[1].split()[0]
     if current_dc == get_unit_hostname():
         return True
+    elif current_dc == 'NONE':
+        raise CRMDCNotFound('Current DC: NONE')
+
     return False


-@retry_on_exception(5, base_delay=2, exc_type=CRMResourceNotFound)
+@retry_on_exception(5, base_delay=2,
+                    exc_type=(CRMResourceNotFound, CRMDCNotFound))
 def is_crm_leader(resource, retry=False):
     """
     Returns True if the charm calling this is the elected corosync leader,
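The new code path in `is_elected_leader` tries Juju's native leader election first and falls back to the corosync/lowest-unit checks only when the Juju version predates the feature, signalled by `NotImplementedError`. The shape of that fallback chain, with stand-in functions rather than the real charm-helpers calls (all names here are illustrative):

```python
def juju_is_leader():
    """Stand-in for hookenv.is_leader on a Juju too old to support it."""
    raise NotImplementedError("leader election not supported on this Juju")


def corosync_is_leader():
    """Stand-in for the corosync/CRM leadership check."""
    return True


def is_elected_leader():
    try:
        # Preferred: ask Juju's native leader election.
        return juju_is_leader()
    except NotImplementedError:
        # Juju too old: fall back to cluster-level leadership checks.
        return corosync_is_leader()


result = is_elected_leader()
```

Raising `CRMDCNotFound` instead of returning False (and adding it to the `retry_on_exception` tuple) means a transiently absent DC is retried rather than silently treated as "not leader".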


@@ -44,7 +44,7 @@ class OpenStackAmuletDeployment(AmuletDeployment):
        Determine if the local branch being tested is derived from its
        stable or next (dev) branch, and based on this, use the corresponding
        stable or next branches for the other_services."""
-       base_charms = ['mysql', 'mongodb']
+       base_charms = ['mysql', 'mongodb', 'nrpe']

        if self.series in ['precise', 'trusty']:
            base_series = self.series
@@ -79,9 +79,9 @@ class OpenStackAmuletDeployment(AmuletDeployment):
             services.append(this_service)
         use_source = ['mysql', 'mongodb', 'rabbitmq-server', 'ceph',
                       'ceph-osd', 'ceph-radosgw']
-        # Openstack subordinate charms do not expose an origin option as that
-        # is controlled by the principal
+        # Most OpenStack subordinate charms do not expose an origin option
+        # as that is controlled by the principal.
-        ignore = ['neutron-openvswitch']
+        ignore = ['cinder-ceph', 'hacluster', 'neutron-openvswitch', 'nrpe']

         if self.openstack:
             for svc in services:
@@ -110,7 +110,8 @@ class OpenStackAmuletDeployment(AmuletDeployment):
         (self.precise_essex, self.precise_folsom, self.precise_grizzly,
          self.precise_havana, self.precise_icehouse,
          self.trusty_icehouse, self.trusty_juno, self.utopic_juno,
-         self.trusty_kilo, self.vivid_kilo) = range(10)
+         self.trusty_kilo, self.vivid_kilo, self.trusty_liberty,
+         self.wily_liberty) = range(12)

         releases = {
             ('precise', None): self.precise_essex,
@@ -121,8 +122,10 @@ class OpenStackAmuletDeployment(AmuletDeployment):
             ('trusty', None): self.trusty_icehouse,
             ('trusty', 'cloud:trusty-juno'): self.trusty_juno,
             ('trusty', 'cloud:trusty-kilo'): self.trusty_kilo,
+            ('trusty', 'cloud:trusty-liberty'): self.trusty_liberty,
             ('utopic', None): self.utopic_juno,
-            ('vivid', None): self.vivid_kilo}
+            ('vivid', None): self.vivid_kilo,
+            ('wily', None): self.wily_liberty}
         return releases[(self.series, self.openstack)]

     def _get_openstack_release_string(self):
@@ -138,9 +141,43 @@ class OpenStackAmuletDeployment(AmuletDeployment):
             ('trusty', 'icehouse'),
             ('utopic', 'juno'),
             ('vivid', 'kilo'),
+            ('wily', 'liberty'),
         ])
         if self.openstack:
             os_origin = self.openstack.split(':')[1]
             return os_origin.split('%s-' % self.series)[1].split('/')[0]
         else:
             return releases[self.series]

+    def get_ceph_expected_pools(self, radosgw=False):
+        """Return a list of expected ceph pools in a ceph + cinder + glance
+        test scenario, based on OpenStack release and whether ceph radosgw
+        is flagged as present or not."""
+
+        if self._get_openstack_release() >= self.trusty_kilo:
+            # Kilo or later
+            pools = [
+                'rbd',
+                'cinder',
+                'glance'
+            ]
+        else:
+            # Juno or earlier
+            pools = [
+                'data',
+                'metadata',
+                'rbd',
+                'cinder',
+                'glance'
+            ]
+
+        if radosgw:
+            pools.extend([
+                '.rgw.root',
+                '.rgw.control',
+                '.rgw',
+                '.rgw.gc',
+                '.users.uid'
+            ])
+
+        return pools
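`get_ceph_expected_pools` encodes two release-dependent facts: deployments from Kilo onward no longer carry the legacy `data`/`metadata` default pools, and a radosgw deployment adds its own `.rgw.*` control pools. A standalone sketch of the same logic with invented release constants in place of the class attributes:

```python
# Invented ordinal release constants standing in for the class attributes
# (trusty_juno < trusty_kilo), mirroring the range() assignment above.
TRUSTY_JUNO, TRUSTY_KILO = 0, 1


def expected_pools(release, radosgw=False):
    """Return the ceph pools a ceph + cinder + glance test should see."""
    if release >= TRUSTY_KILO:
        # Kilo or later: no legacy default pools.
        pools = ['rbd', 'cinder', 'glance']
    else:
        # Juno or earlier: legacy data/metadata pools are present.
        pools = ['data', 'metadata', 'rbd', 'cinder', 'glance']
    if radosgw:
        # radosgw creates its own control pools on first use.
        pools += ['.rgw.root', '.rgw.control', '.rgw', '.rgw.gc',
                  '.users.uid']
    return pools
```

Because the release constants are assigned from a single `range()`, a simple `>=` comparison is enough to gate on "Kilo or later".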


@@ -14,16 +14,20 @@
 # You should have received a copy of the GNU Lesser General Public License
 # along with charm-helpers.  If not, see <http://www.gnu.org/licenses/>.

+import amulet
+import json
 import logging
 import os
+import six
 import time
 import urllib

+import cinderclient.v1.client as cinder_client
 import glanceclient.v1.client as glance_client
+import heatclient.v1.client as heat_client
 import keystoneclient.v2_0 as keystone_client
 import novaclient.v1_1.client as nova_client
+import swiftclient
-import six

 from charmhelpers.contrib.amulet.utils import (
     AmuletUtils
@@ -37,7 +41,7 @@ class OpenStackAmuletUtils(AmuletUtils):
     """OpenStack amulet utilities.

        This class inherits from AmuletUtils and has additional support
-       that is specifically for use by OpenStack charms.
+       that is specifically for use by OpenStack charm tests.
        """

     def __init__(self, log_level=ERROR):
@@ -51,6 +55,8 @@ class OpenStackAmuletUtils(AmuletUtils):
        Validate actual endpoint data vs expected endpoint data. The ports
        are used to find the matching endpoint.
        """
+        self.log.debug('Validating endpoint data...')
+        self.log.debug('actual: {}'.format(repr(endpoints)))
         found = False
         for ep in endpoints:
             self.log.debug('endpoint: {}'.format(repr(ep)))
@@ -77,6 +83,7 @@ class OpenStackAmuletUtils(AmuletUtils):
        Validate a list of actual service catalog endpoints vs a list of
        expected service catalog endpoints.
        """
+        self.log.debug('Validating service catalog endpoint data...')
         self.log.debug('actual: {}'.format(repr(actual)))
         for k, v in six.iteritems(expected):
             if k in actual:
@@ -93,6 +100,7 @@ class OpenStackAmuletUtils(AmuletUtils):
        Validate a list of actual tenant data vs list of expected tenant
        data.
        """
+        self.log.debug('Validating tenant data...')
         self.log.debug('actual: {}'.format(repr(actual)))
         for e in expected:
             found = False
@@ -114,6 +122,7 @@ class OpenStackAmuletUtils(AmuletUtils):
        Validate a list of actual role data vs a list of expected role
        data.
        """
+        self.log.debug('Validating role data...')
         self.log.debug('actual: {}'.format(repr(actual)))
         for e in expected:
             found = False
@@ -134,6 +143,7 @@ class OpenStackAmuletUtils(AmuletUtils):
        Validate a list of actual user data vs a list of expected user
        data.
        """
+        self.log.debug('Validating user data...')
         self.log.debug('actual: {}'.format(repr(actual)))
         for e in expected:
             found = False
@@ -155,17 +165,30 @@ class OpenStackAmuletUtils(AmuletUtils):
        Validate a list of actual flavors vs a list of expected flavors.
        """
+        self.log.debug('Validating flavor data...')
         self.log.debug('actual: {}'.format(repr(actual)))
         act = [a.name for a in actual]
         return self._validate_list_data(expected, act)

     def tenant_exists(self, keystone, tenant):
         """Return True if tenant exists."""
+        self.log.debug('Checking if tenant exists ({})...'.format(tenant))
         return tenant in [t.name for t in keystone.tenants.list()]

+    def authenticate_cinder_admin(self, keystone_sentry, username,
+                                  password, tenant):
+        """Authenticates admin user with cinder."""
+        # NOTE(beisner): cinder python client doesn't accept tokens.
+        service_ip = \
+            keystone_sentry.relation('shared-db',
+                                     'mysql:shared-db')['private-address']
+        ept = "http://{}:5000/v2.0".format(service_ip.strip().decode('utf-8'))
+        return cinder_client.Client(username, password, tenant, ept)
+
     def authenticate_keystone_admin(self, keystone_sentry, user, password,
                                     tenant):
         """Authenticates admin user with the keystone admin endpoint."""
+        self.log.debug('Authenticating keystone admin...')
         unit = keystone_sentry
         service_ip = unit.relation('shared-db',
                                    'mysql:shared-db')['private-address']
@@ -175,6 +198,7 @@ class OpenStackAmuletUtils(AmuletUtils):

     def authenticate_keystone_user(self, keystone, user, password, tenant):
         """Authenticates a regular user with the keystone public endpoint."""
+        self.log.debug('Authenticating keystone user ({})...'.format(user))
         ep = keystone.service_catalog.url_for(service_type='identity',
                                               endpoint_type='publicURL')
         return keystone_client.Client(username=user, password=password,
@@ -182,19 +206,49 @@ class OpenStackAmuletUtils(AmuletUtils):

    def authenticate_glance_admin(self, keystone):
        """Authenticates admin user with glance."""
        self.log.debug('Authenticating glance admin...')
        ep = keystone.service_catalog.url_for(service_type='image',
                                              endpoint_type='adminURL')
        return glance_client.Client(ep, token=keystone.auth_token)

    def authenticate_heat_admin(self, keystone):
        """Authenticates the admin user with heat."""
        self.log.debug('Authenticating heat admin...')
        ep = keystone.service_catalog.url_for(service_type='orchestration',
                                              endpoint_type='publicURL')
        return heat_client.Client(endpoint=ep, token=keystone.auth_token)

    def authenticate_nova_user(self, keystone, user, password, tenant):
        """Authenticates a regular user with nova-api."""
        self.log.debug('Authenticating nova user ({})...'.format(user))
        ep = keystone.service_catalog.url_for(service_type='identity',
                                              endpoint_type='publicURL')
        return nova_client.Client(username=user, api_key=password,
                                  project_id=tenant, auth_url=ep)

    def authenticate_swift_user(self, keystone, user, password, tenant):
        """Authenticates a regular user with swift api."""
        self.log.debug('Authenticating swift user ({})...'.format(user))
        ep = keystone.service_catalog.url_for(service_type='identity',
                                              endpoint_type='publicURL')
        return swiftclient.Connection(authurl=ep,
                                      user=user,
                                      key=password,
                                      tenant_name=tenant,
                                      auth_version='2.0')
    def create_cirros_image(self, glance, image_name):
        """Download the latest cirros image and upload it to glance,
        validate and return a resource pointer.

        :param glance: pointer to authenticated glance connection
        :param image_name: display name for new image
        :returns: glance image pointer
        """
        self.log.debug('Creating glance cirros image '
                       '({})...'.format(image_name))

        # Download cirros image
        http_proxy = os.getenv('AMULET_HTTP_PROXY')
        self.log.debug('AMULET_HTTP_PROXY: {}'.format(http_proxy))
        if http_proxy:

@@ -203,57 +257,67 @@ class OpenStackAmuletUtils(AmuletUtils):
        else:
            opener = urllib.FancyURLopener()

        f = opener.open('http://download.cirros-cloud.net/version/released')
        version = f.read().strip()
        cirros_img = 'cirros-{}-x86_64-disk.img'.format(version)
        local_path = os.path.join('tests', cirros_img)

        if not os.path.exists(local_path):
            cirros_url = 'http://{}/{}/{}'.format('download.cirros-cloud.net',
                                                  version, cirros_img)
            opener.retrieve(cirros_url, local_path)
        f.close()

        # Create glance image
        with open(local_path) as f:
            image = glance.images.create(name=image_name, is_public=True,
                                         disk_format='qcow2',
                                         container_format='bare', data=f)

        # Wait for image to reach active status
        img_id = image.id
        ret = self.resource_reaches_status(glance.images, img_id,
                                           expected_stat='active',
                                           msg='Image status wait')
        if not ret:
            msg = 'Glance image failed to reach expected state.'
            amulet.raise_status(amulet.FAIL, msg=msg)

        # Re-validate new image
        self.log.debug('Validating image attributes...')
        val_img_name = glance.images.get(img_id).name
        val_img_stat = glance.images.get(img_id).status
        val_img_pub = glance.images.get(img_id).is_public
        val_img_cfmt = glance.images.get(img_id).container_format
        val_img_dfmt = glance.images.get(img_id).disk_format
        msg_attr = ('Image attributes - name:{} public:{} id:{} stat:{} '
                    'container fmt:{} disk fmt:{}'.format(
                        val_img_name, val_img_pub, img_id,
                        val_img_stat, val_img_cfmt, val_img_dfmt))

        if val_img_name == image_name and val_img_stat == 'active' \
                and val_img_pub is True and val_img_cfmt == 'bare' \
                and val_img_dfmt == 'qcow2':
            self.log.debug(msg_attr)
        else:
            msg = ('Image validation failed, {}'.format(msg_attr))
            amulet.raise_status(amulet.FAIL, msg=msg)

        return image
    def delete_image(self, glance, image):
        """Delete the specified image."""

        # /!\ DEPRECATION WARNING
        self.log.warn('/!\\ DEPRECATION WARNING: use '
                      'delete_resource instead of delete_image.')
        self.log.debug('Deleting glance image ({})...'.format(image))
        return self.delete_resource(glance.images, image, msg='glance image')
    def create_instance(self, nova, image_name, instance_name, flavor):
        """Create the specified instance."""
        self.log.debug('Creating instance '
                       '({}|{}|{})'.format(instance_name, image_name, flavor))
        image = nova.images.find(name=image_name)
        flavor = nova.flavors.find(name=flavor)
        instance = nova.servers.create(name=instance_name, image=image,

@@ -276,19 +340,265 @@ class OpenStackAmuletUtils(AmuletUtils):
    def delete_instance(self, nova, instance):
        """Delete the specified instance."""

        # /!\ DEPRECATION WARNING
        self.log.warn('/!\\ DEPRECATION WARNING: use '
                      'delete_resource instead of delete_instance.')
        self.log.debug('Deleting instance ({})...'.format(instance))
        return self.delete_resource(nova.servers, instance,
                                    msg='nova instance')

    def create_or_get_keypair(self, nova, keypair_name="testkey"):
        """Create a new keypair, or return pointer if it already exists."""
        try:
            _keypair = nova.keypairs.get(keypair_name)
            self.log.debug('Keypair ({}) already exists, '
                           'using it.'.format(keypair_name))
            return _keypair
        except:
            self.log.debug('Keypair ({}) does not exist, '
                           'creating it.'.format(keypair_name))

        _keypair = nova.keypairs.create(name=keypair_name)
        return _keypair
    def create_cinder_volume(self, cinder, vol_name="demo-vol", vol_size=1,
                             img_id=None, src_vol_id=None, snap_id=None):
        """Create cinder volume, optionally from a glance image, OR
        optionally as a clone of an existing volume, OR optionally
        from a snapshot.  Wait for the new volume status to reach
        the expected status, validate and return a resource pointer.

        :param vol_name: cinder volume display name
        :param vol_size: size in gigabytes
        :param img_id: optional glance image id
        :param src_vol_id: optional source volume id to clone
        :param snap_id: optional snapshot id to use
        :returns: cinder volume pointer
        """
        # Handle parameter input and avoid impossible combinations
        if img_id and not src_vol_id and not snap_id:
            # Create volume from image
            self.log.debug('Creating cinder volume from glance image...')
            bootable = 'true'
        elif src_vol_id and not img_id and not snap_id:
            # Clone an existing volume
            self.log.debug('Cloning cinder volume...')
            bootable = cinder.volumes.get(src_vol_id).bootable
        elif snap_id and not src_vol_id and not img_id:
            # Create volume from snapshot
            self.log.debug('Creating cinder volume from snapshot...')
            snap = cinder.volume_snapshots.find(id=snap_id)
            vol_size = snap.size
            snap_vol_id = cinder.volume_snapshots.get(snap_id).volume_id
            bootable = cinder.volumes.get(snap_vol_id).bootable
        elif not img_id and not src_vol_id and not snap_id:
            # Create volume
            self.log.debug('Creating cinder volume...')
            bootable = 'false'
        else:
            # Impossible combination of parameters
            msg = ('Invalid method use - name:{} size:{} img_id:{} '
                   'src_vol_id:{} snap_id:{}'.format(vol_name, vol_size,
                                                     img_id, src_vol_id,
                                                     snap_id))
            amulet.raise_status(amulet.FAIL, msg=msg)

        # Create new volume
        try:
            vol_new = cinder.volumes.create(display_name=vol_name,
                                            imageRef=img_id,
                                            size=vol_size,
                                            source_volid=src_vol_id,
                                            snapshot_id=snap_id)
            vol_id = vol_new.id
        except Exception as e:
            msg = 'Failed to create volume: {}'.format(e)
            amulet.raise_status(amulet.FAIL, msg=msg)

        # Wait for volume to reach available status
        ret = self.resource_reaches_status(cinder.volumes, vol_id,
                                           expected_stat="available",
                                           msg="Volume status wait")
        if not ret:
            msg = 'Cinder volume failed to reach expected state.'
            amulet.raise_status(amulet.FAIL, msg=msg)

        # Re-validate new volume
        self.log.debug('Validating volume attributes...')
        val_vol_name = cinder.volumes.get(vol_id).display_name
        val_vol_boot = cinder.volumes.get(vol_id).bootable
        val_vol_stat = cinder.volumes.get(vol_id).status
        val_vol_size = cinder.volumes.get(vol_id).size
        msg_attr = ('Volume attributes - name:{} id:{} stat:{} boot:'
                    '{} size:{}'.format(val_vol_name, vol_id,
                                        val_vol_stat, val_vol_boot,
                                        val_vol_size))

        if val_vol_boot == bootable and val_vol_stat == 'available' \
                and val_vol_name == vol_name and val_vol_size == vol_size:
            self.log.debug(msg_attr)
        else:
            msg = ('Volume validation failed, {}'.format(msg_attr))
            amulet.raise_status(amulet.FAIL, msg=msg)

        return vol_new
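The mutually exclusive source parameters above reduce to a simple rule: at most one of img_id, src_vol_id and snap_id may be set. A standalone sketch of that check (a hypothetical helper for illustration, not part of charm-helpers):

```python
def validate_volume_source(img_id=None, src_vol_id=None, snap_id=None):
    """Return True when at most one volume source is specified."""
    sources = [s for s in (img_id, src_vol_id, snap_id) if s]
    return len(sources) <= 1


# No source (plain volume) or exactly one source is valid; two are not.
print(validate_volume_source())                         # plain volume
print(validate_volume_source(img_id='img-123'))         # from image
print(validate_volume_source(img_id='i', snap_id='s'))  # impossible combo
```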
    def delete_resource(self, resource, resource_id,
                        msg="resource", max_wait=120):
        """Delete one openstack resource, such as one instance, keypair,
        image, volume, stack, etc., and confirm deletion within max wait time.

        :param resource: pointer to os resource type, ex:glance_client.images
        :param resource_id: unique name or id for the openstack resource
        :param msg: text to identify purpose in logging
        :param max_wait: maximum wait time in seconds
        :returns: True if successful, otherwise False
        """
        self.log.debug('Deleting OpenStack resource '
                       '{} ({})'.format(resource_id, msg))
        num_before = len(list(resource.list()))
        resource.delete(resource_id)

        tries = 0
        num_after = len(list(resource.list()))
        while num_after != (num_before - 1) and tries < (max_wait / 4):
            self.log.debug('{} delete check: '
                           '{} [{}:{}] {}'.format(msg, tries,
                                                  num_before,
                                                  num_after,
                                                  resource_id))
            time.sleep(4)
            num_after = len(list(resource.list()))
            tries += 1

        self.log.debug('{}: expected, actual count = {}, '
                       '{}'.format(msg, num_before - 1, num_after))

        if num_after == (num_before - 1):
            return True
        else:
            self.log.error('{} delete timed out'.format(msg))
            return False
    def resource_reaches_status(self, resource, resource_id,
                                expected_stat='available',
                                msg='resource', max_wait=120):
        """Wait for an openstack resource's status to reach an
        expected status within a specified time.  Useful to confirm that
        nova instances, cinder vols, snapshots, glance images, heat stacks
        and other resources eventually reach the expected status.

        :param resource: pointer to os resource type, ex: heat_client.stacks
        :param resource_id: unique id for the openstack resource
        :param expected_stat: status to expect resource to reach
        :param msg: text to identify purpose in logging
        :param max_wait: maximum wait time in seconds
        :returns: True if successful, False if status is not reached
        """
        tries = 0
        resource_stat = resource.get(resource_id).status
        while resource_stat != expected_stat and tries < (max_wait / 4):
            self.log.debug('{} status check: '
                           '{} [{}:{}] {}'.format(msg, tries,
                                                  resource_stat,
                                                  expected_stat,
                                                  resource_id))
            time.sleep(4)
            resource_stat = resource.get(resource_id).status
            tries += 1

        self.log.debug('{}: expected, actual status = {}, '
                       '{}'.format(msg, resource_stat, expected_stat))

        if resource_stat == expected_stat:
            return True
        else:
            self.log.debug('{} never reached expected status: '
                           '{}'.format(resource_id, expected_stat))
            return False
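The wait loop above polls `resource.get(id).status` every 4 seconds up to max_wait. Its control flow can be exercised without a cloud by substituting a stub resource manager (names and stub below are illustrative only, not part of charm-helpers):

```python
class StubResource(object):
    """Stand-in for an OpenStack resource manager whose reported status
    advances through a scripted sequence on each poll."""
    def __init__(self, statuses):
        self._statuses = list(statuses)

    def get(self, resource_id):
        # Pop the next scripted status; repeat the last one once exhausted.
        status = self._statuses.pop(0) if len(self._statuses) > 1 \
            else self._statuses[0]
        return type('Res', (), {'status': status})()


def reaches_status(resource, resource_id, expected_stat, max_tries=30):
    """Poll until the resource reports expected_stat (sleeps omitted)."""
    tries = 0
    stat = resource.get(resource_id).status
    while stat != expected_stat and tries < max_tries:
        stat = resource.get(resource_id).status
        tries += 1
    return stat == expected_stat


print(reaches_status(StubResource(['creating', 'creating', 'available']),
                     'vol-1', 'available'))
```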
    def get_ceph_osd_id_cmd(self, index):
        """Produce a shell command that will return a ceph-osd id."""
        return ("`initctl list | grep 'ceph-osd ' | "
                "awk 'NR=={} {{ print $2 }}' | "
                "grep -o '[0-9]*'`".format(index + 1))
    def get_ceph_pools(self, sentry_unit):
        """Return a dict of ceph pools from a single ceph unit, with
        pool name as keys, pool id as vals."""
        pools = {}
        cmd = 'sudo ceph osd lspools'
        output, code = sentry_unit.run(cmd)
        if code != 0:
            msg = ('{} `{}` returned {} '
                   '{}'.format(sentry_unit.info['unit_name'],
                               cmd, code, output))
            amulet.raise_status(amulet.FAIL, msg=msg)

        # Example output: 0 data,1 metadata,2 rbd,3 cinder,4 glance,
        for pool in str(output).split(','):
            pool_id_name = pool.split(' ')
            if len(pool_id_name) == 2:
                pool_id = pool_id_name[0]
                pool_name = pool_id_name[1]
                pools[pool_name] = int(pool_id)

        self.log.debug('Pools on {}: {}'.format(sentry_unit.info['unit_name'],
                                                pools))
        return pools
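The `ceph osd lspools` output parsed above is a comma-separated list of `<id> <name>` pairs, with a trailing comma producing one empty field that the length check skips. The parsing step in isolation (sample output echoes the comment in the method):

```python
def parse_lspools(output):
    """Parse `ceph osd lspools` text into {pool_name: pool_id}."""
    pools = {}
    for pool in str(output).split(','):
        pool_id_name = pool.split(' ')
        if len(pool_id_name) == 2:
            pool_id, pool_name = pool_id_name
            pools[pool_name] = int(pool_id)
    return pools


print(parse_lspools('0 data,1 metadata,2 rbd,3 cinder,4 glance,'))
```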
    def get_ceph_df(self, sentry_unit):
        """Return dict of ceph df json output, including ceph pool state.

        :param sentry_unit: Pointer to amulet sentry instance (juju unit)
        :returns: Dict of ceph df output
        """
        cmd = 'sudo ceph df --format=json'
        output, code = sentry_unit.run(cmd)
        if code != 0:
            msg = ('{} `{}` returned {} '
                   '{}'.format(sentry_unit.info['unit_name'],
                               cmd, code, output))
            amulet.raise_status(amulet.FAIL, msg=msg)
        return json.loads(output)
    def get_ceph_pool_sample(self, sentry_unit, pool_id=0):
        """Take a sample of attributes of a ceph pool, returning ceph
        pool name, object count and disk space used for the specified
        pool ID number.

        :param sentry_unit: Pointer to amulet sentry instance (juju unit)
        :param pool_id: Ceph pool ID
        :returns: List of pool name, object count, kb disk space used
        """
        df = self.get_ceph_df(sentry_unit)
        pool_name = df['pools'][pool_id]['name']
        obj_count = df['pools'][pool_id]['stats']['objects']
        kb_used = df['pools'][pool_id]['stats']['kb_used']
        self.log.debug('Ceph {} pool (ID {}): {} objects, '
                       '{} kb used'.format(pool_name, pool_id,
                                           obj_count, kb_used))
        return pool_name, obj_count, kb_used
    def validate_ceph_pool_samples(self, samples, sample_type="resource pool"):
        """Validate ceph pool samples taken over time, such as pool
        object counts or pool kb used, before adding, after adding, and
        after deleting items which affect those pool attributes.  The
        2nd element is expected to be greater than the 1st; 3rd is expected
        to be less than the 2nd.

        :param samples: List containing 3 data samples
        :param sample_type: String for logging and usage context
        :returns: None if successful, Failure message otherwise
        """
        original, created, deleted = range(3)
        if samples[created] <= samples[original] or \
                samples[deleted] >= samples[created]:
            return ('Ceph {} samples ({}) '
                    'unexpected.'.format(sample_type, samples))
        else:
            self.log.debug('Ceph {} samples (OK): '
                           '{}'.format(sample_type, samples))
            return None
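The three-sample check above expects the measured value to rise after resources are created and fall again after they are deleted. A standalone sketch of that comparison (logging dropped, sample values invented):

```python
def validate_samples(samples, sample_type="resource pool"):
    """Return None when samples[1] > samples[0] and samples[2] < samples[1],
    otherwise a failure message (mirrors validate_ceph_pool_samples)."""
    original, created, deleted = range(3)
    if samples[created] <= samples[original] or \
            samples[deleted] >= samples[created]:
        return ('Ceph {} samples ({}) '
                'unexpected.'.format(sample_type, samples))
    return None


print(validate_samples([10, 15, 12]))  # rose then fell: passes
print(validate_samples([10, 10, 12]))  # never rose: failure message
```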
@@ -50,6 +50,8 @@ from charmhelpers.core.sysctl import create as sysctl_create
from charmhelpers.core.strutils import bool_from_string

from charmhelpers.core.host import (
    get_bond_master,
    is_phy_iface,
    list_nics,
    get_nic_hwaddr,
    mkdir,
@@ -122,12 +124,14 @@ def config_flags_parser(config_flags):
       of specifying multiple key value pairs within the same string. For
       example, a string in the format of 'key1=value1, key2=value2' will
       return a dict of:

           {'key1': 'value1',
            'key2': 'value2'}.

    2. A string in the above format, but supporting a comma-delimited list
       of values for the same key. For example, a string in the format of
       'key1=value1, key2=value3,value4,value5' will return a dict of:

           {'key1': 'value1',
            'key2': 'value3,value4,value5'}

@@ -136,6 +140,7 @@ def config_flags_parser(config_flags):
       used to specify more complex key value pairs. For example,
       a string in the format of 'key1: subkey1=value1, subkey2=value2' will
       return a dict of:

           {'key1': 'subkey1=value1, subkey2=value2'}

    The provided config_flags string may be a list of comma-separated values
@@ -240,7 +245,7 @@ class SharedDBContext(OSContextGenerator):
        if self.relation_prefix:
            password_setting = self.relation_prefix + '_password'

        for rid in relation_ids(self.interfaces[0]):
            for unit in related_units(rid):
                rdata = relation_get(rid=rid, unit=unit)
                host = rdata.get('db_host')
@@ -891,8 +896,6 @@ class NeutronContext(OSContextGenerator):
        return ctxt

    def __call__(self):
        if self.network_manager not in ['quantum', 'neutron']:
            return {}
@@ -922,7 +925,6 @@ class NeutronContext(OSContextGenerator):

class NeutronPortContext(OSContextGenerator):

    def resolve_ports(self, ports):
        """Resolve NICs not yet bound to bridge(s)

@@ -934,7 +936,18 @@ class NeutronPortContext(OSContextGenerator):
        hwaddr_to_nic = {}
        hwaddr_to_ip = {}
        for nic in list_nics():
            # Ignore virtual interfaces (bond masters will be identified from
            # their slaves)
            if not is_phy_iface(nic):
                continue

            _nic = get_bond_master(nic)
            if _nic:
                log("Replacing iface '%s' with bond master '%s'" % (nic, _nic),
                    level=DEBUG)
                nic = _nic

            hwaddr = get_nic_hwaddr(nic)
            hwaddr_to_nic[hwaddr] = nic
            addresses = get_ipv4_addr(nic, fatal=False)

@@ -960,7 +973,8 @@ class NeutronPortContext(OSContextGenerator):
            # trust it to be the real external network).
            resolved.append(entry)

        # Ensure no duplicates
        return list(set(resolved))


class OSConfigFlagContext(OSContextGenerator):
@@ -1050,13 +1064,22 @@ class SubordinateConfigContext(OSContextGenerator):
        :param config_file : Service's config file to query sections
        :param interface : Subordinate interface to inspect
        """
        self.config_file = config_file
        if isinstance(service, list):
            self.services = service
        else:
            self.services = [service]

        if isinstance(interface, list):
            self.interfaces = interface
        else:
            self.interfaces = [interface]

    def __call__(self):
        ctxt = {'sections': {}}
        rids = []
        for interface in self.interfaces:
            rids.extend(relation_ids(interface))

        for rid in rids:
            for unit in related_units(rid):
                sub_config = relation_get('subordinate_configuration',
                                          rid=rid, unit=unit)

@@ -1068,13 +1091,14 @@ class SubordinateConfigContext(OSContextGenerator):
                        'setting from %s' % rid, level=ERROR)
                    continue

                for service in self.services:
                    if service not in sub_config:
                        log('Found subordinate_config on %s but it contained'
                            'nothing for %s service' % (rid, service),
                            level=INFO)
                        continue

                    sub_config = sub_config[service]
                    if self.config_file not in sub_config:
                        log('Found subordinate_config on %s but it contained'
                            'nothing for %s' % (rid, self.config_file),

@@ -1084,13 +1108,15 @@ class SubordinateConfigContext(OSContextGenerator):
                    sub_config = sub_config[self.config_file]
                    for k, v in six.iteritems(sub_config):
                        if k == 'sections':
                            for section, config_list in six.iteritems(v):
                                log("adding section '%s'" % (section),
                                    level=DEBUG)
                                if ctxt[k].get(section):
                                    ctxt[k][section].extend(config_list)
                                else:
                                    ctxt[k][section] = config_list
                        else:
                            ctxt[k] = v

        log("%d section(s) found" % (len(ctxt['sections'])), level=DEBUG)
        return ctxt
@@ -1267,15 +1293,19 @@ class DataPortContext(NeutronPortContext):

    def __call__(self):
        ports = config('data-port')
        if ports:
            # Map of {port/mac:bridge}
            portmap = parse_data_port_mappings(ports)
            ports = portmap.keys()
            # Resolve provided ports or mac addresses and filter out those
            # already attached to a bridge.
            resolved = self.resolve_ports(ports)
            # FIXME: is this necessary?
            normalized = {get_nic_hwaddr(port): port for port in resolved
                          if port not in ports}
            normalized.update({port: port for port in resolved
                               if port in ports})
            if resolved:
                return {bridge: normalized[port] for port, bridge in
                        six.iteritems(portmap) if port in normalized.keys()}

        return None
@@ -17,6 +17,7 @@
from charmhelpers.core.hookenv import (
    config,
    unit_get,
    service_name,
)
from charmhelpers.contrib.network.ip import (
    get_address_in_network,

@@ -26,8 +27,6 @@ from charmhelpers.contrib.network.ip import (
)
from charmhelpers.contrib.hahelpers.cluster import is_clustered

PUBLIC = 'public'
INTERNAL = 'int'
ADMIN = 'admin'

@@ -35,15 +34,18 @@ ADMIN = 'admin'
ADDRESS_MAP = {
    PUBLIC: {
        'config': 'os-public-network',
        'fallback': 'public-address',
        'override': 'os-public-hostname',
    },
    INTERNAL: {
        'config': 'os-internal-network',
        'fallback': 'private-address',
        'override': 'os-internal-hostname',
    },
    ADMIN: {
        'config': 'os-admin-network',
        'fallback': 'private-address',
        'override': 'os-admin-hostname',
    }
}
@@ -57,15 +59,50 @@ def canonical_url(configs, endpoint_type=PUBLIC):
    :param endpoint_type: str endpoint type to resolve.
    :param returns: str base URL for services on the current service unit.
    """
    scheme = _get_scheme(configs)

    address = resolve_address(endpoint_type)
    if is_ipv6(address):
        address = "[{}]".format(address)

    return '%s://%s' % (scheme, address)


def _get_scheme(configs):
    """Returns the scheme to use for the url (either http or https)
    depending upon whether https is in the configs value.

    :param configs: OSTemplateRenderer config templating object to inspect
                    for a complete https context.
    :returns: either 'http' or 'https' depending on whether https is
              configured within the configs context.
    """
    scheme = 'http'
    if configs and 'https' in configs.complete_contexts():
        scheme = 'https'

    return scheme


def _get_address_override(endpoint_type=PUBLIC):
    """Returns any address overrides that the user has defined based on the
    endpoint type.

    Note: this function allows for the service name to be inserted into the
    address if the user specifies {service_name}.somehost.org.

    :param endpoint_type: the type of endpoint to retrieve the override
                          value for.
    :returns: any endpoint address or hostname that the user has overridden
              or None if an override is not present.
    """
    override_key = ADDRESS_MAP[endpoint_type]['override']
    addr_override = config(override_key)
    if not addr_override:
        return None
    else:
        return addr_override.format(service_name=service_name())


def resolve_address(endpoint_type=PUBLIC):
    """Return unit address depending on net config.

@@ -77,7 +114,10 @@ def resolve_address(endpoint_type=PUBLIC):

    :param endpoint_type: Network endpoint type
    """
    resolved_address = _get_address_override(endpoint_type)
    if resolved_address:
        return resolved_address

    vips = config('vip')
    if vips:
        vips = vips.split()

@@ -109,38 +149,3 @@ def resolve_address(endpoint_type=PUBLIC):
                        "clustered=%s)" % (net_type, clustered))

    return resolved_address
@@ -255,17 +255,30 @@ def network_manager():
        return 'neutron'


def parse_mappings(mappings, key_rvalue=False):
    """By default mappings are lvalue keyed.

    If key_rvalue is True, the mapping will be reversed to allow multiple
    configs for the same lvalue.
    """
    parsed = {}
    if mappings:
        mappings = mappings.split()
        for m in mappings:
            p = m.partition(':')

            if key_rvalue:
                key_index = 2
                val_index = 0
                # if there is no rvalue skip to next
                if not p[1]:
                    continue
            else:
                key_index = 0
                val_index = 2

            key = p[key_index].strip()
            parsed[key] = p[val_index].strip()

    return parsed
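With key_rvalue=True a mapping 'a:b' is stored as {b: a} rather than {a: b}, which is what lets several entries (e.g. per-unit MAC addresses) share the same lvalue bridge. A behaviour sketch of the patched function, with invented example mappings:

```python
def parse_mappings(mappings, key_rvalue=False):
    """Space-delimited 'lvalue:rvalue' pairs -> dict (as in the patch)."""
    parsed = {}
    if mappings:
        for m in mappings.split():
            p = m.partition(':')
            if key_rvalue:
                # Reverse: key on the rvalue; entries with no rvalue skipped.
                if not p[1]:
                    continue
                key_index, val_index = 2, 0
            else:
                key_index, val_index = 0, 2
            parsed[p[key_index].strip()] = p[val_index].strip()
    return parsed


print(parse_mappings('br-ex:eth0 br-data:eth1'))
print(parse_mappings('br-ex:aa:bb br-ex:cc:dd', key_rvalue=True))
```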
@@ -283,25 +296,25 @@ def parse_bridge_mappings(mappings):

def parse_data_port_mappings(mappings, default_bridge='br-data'):
    """Parse data port mappings.

    Mappings must be a space-delimited list of port:bridge mappings.

    Returns dict of the form {port:bridge} where port may be a MAC address or
    interface name.
    """

    # NOTE(dosaboy): we use rvalue for key to allow multiple values to be
    # proposed for <port> since it may be a mac address which will differ
    # across units, thus allowing first-known-good to be chosen.
    _mappings = parse_mappings(mappings, key_rvalue=True)
    if not _mappings or list(_mappings.values()) == ['']:
        if not mappings:
            return {}

        # For backwards-compatibility we need to support port-only provided in
        # config.
        _mappings = {mappings.split()[0]: default_bridge}

    ports = _mappings.keys()
    if len(set(ports)) != len(ports):
        raise Exception("It is not allowed to have the same port configured "
                        "on more than one bridge")
@@ -29,8 +29,8 @@ from charmhelpers.contrib.openstack.utils import OPENSTACK_CODENAMES

try:
    from jinja2 import FileSystemLoader, ChoiceLoader, Environment, exceptions
except ImportError:
    apt_install('python-jinja2', fatal=True)
    from jinja2 import FileSystemLoader, ChoiceLoader, Environment, exceptions


class OSConfigException(Exception):
@ -24,6 +24,7 @@ import subprocess
import json import json
import os import os
import sys import sys
import re
import six import six
import yaml import yaml
@ -69,7 +70,6 @@ CLOUD_ARCHIVE_KEY_ID = '5EDB1B62EC4926EA'
DISTRO_PROPOSED = ('deb http://archive.ubuntu.com/ubuntu/ %s-proposed '
                   'restricted main multiverse universe')

UBUNTU_OPENSTACK_RELEASE = OrderedDict([
    ('oneiric', 'diablo'),
    ('precise', 'essex'),
@ -79,6 +79,7 @@ UBUNTU_OPENSTACK_RELEASE = OrderedDict([
    ('trusty', 'icehouse'),
    ('utopic', 'juno'),
    ('vivid', 'kilo'),
    ('wily', 'liberty'),
])
@ -91,6 +92,7 @@ OPENSTACK_CODENAMES = OrderedDict([
    ('2014.1', 'icehouse'),
    ('2014.2', 'juno'),
    ('2015.1', 'kilo'),
    ('2015.2', 'liberty'),
])

# The ugly duckling
@ -113,8 +115,37 @@ SWIFT_CODENAMES = OrderedDict([
    ('2.2.0', 'juno'),
    ('2.2.1', 'kilo'),
    ('2.2.2', 'kilo'),
    ('2.3.0', 'liberty'),
])
# >= Liberty version->codename mapping
PACKAGE_CODENAMES = {
'nova-common': OrderedDict([
('12.0.0', 'liberty'),
]),
'neutron-common': OrderedDict([
('7.0.0', 'liberty'),
]),
'cinder-common': OrderedDict([
('7.0.0', 'liberty'),
]),
'keystone': OrderedDict([
('8.0.0', 'liberty'),
]),
'horizon-common': OrderedDict([
('8.0.0', 'liberty'),
]),
'ceilometer-common': OrderedDict([
('5.0.0', 'liberty'),
]),
'heat-common': OrderedDict([
('5.0.0', 'liberty'),
]),
'glance-common': OrderedDict([
('11.0.0', 'liberty'),
]),
}
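The new PACKAGE_CODENAMES table keys codenames per project now that Liberty dropped the co-ordinated 2015.x numbering. A sketch of the lookup step (simplified from `get_os_codename_package`; the version-truncating regex is an assumption hedged with `\d+` to handle two-digit majors like 12.0.0):

```python
import re
from collections import OrderedDict

PACKAGE_CODENAMES = {
    'nova-common': OrderedDict([('12.0.0', 'liberty')]),
    'keystone': OrderedDict([('8.0.0', 'liberty')]),
}


def codename_for(package, version):
    # Truncate e.g. '12.0.0~b1' down to the leading 'x.y.z' digits
    # before consulting the per-package table.
    match = re.match(r'^(\d+)\.(\d+)\.(\d+)', version)
    if match:
        version = match.group(0)
    return PACKAGE_CODENAMES.get(package, {}).get(version)
```

Anything not found falls through to the older co-ordinated-version tables (returned as `None` here).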
DEFAULT_LOOPBACK_SIZE = '5G'
@ -198,7 +229,16 @@ def get_os_codename_package(package, fatal=True):
        error_out(e)

    vers = apt.upstream_version(pkg.current_ver.ver_str)
match = re.match('^(\d)\.(\d)\.(\d)', vers)
if match:
vers = match.group(0)
# >= Liberty independent project versions
if (package in PACKAGE_CODENAMES and
vers in PACKAGE_CODENAMES[package]):
return PACKAGE_CODENAMES[package][vers]
else:
# < Liberty co-ordinated project versions
        try:
            if 'swift' in pkg.name:
                swift_vers = vers[:5]
@ -321,6 +361,9 @@ def configure_installation_source(rel):
        'kilo': 'trusty-updates/kilo',
        'kilo/updates': 'trusty-updates/kilo',
        'kilo/proposed': 'trusty-proposed/kilo',
'liberty': 'trusty-updates/liberty',
'liberty/updates': 'trusty-updates/liberty',
'liberty/proposed': 'trusty-proposed/liberty',
    }

    try:
@ -516,6 +559,7 @@ def git_clone_and_install(projects_yaml, core_project, depth=1):
    Clone/install all specified OpenStack repositories.

    The expected format of projects_yaml is:
        repositories:
          - {name: keystone,
             repository: 'git://git.openstack.org/openstack/keystone.git',
@ -523,11 +567,13 @@ def git_clone_and_install(projects_yaml, core_project, depth=1):
          - {name: requirements,
             repository: 'git://git.openstack.org/openstack/requirements.git',
             branch: 'stable/icehouse'}
        directory: /mnt/openstack-git
        http_proxy: squid-proxy-url
        https_proxy: squid-proxy-url

    The directory, http_proxy, and https_proxy keys are optional.
    """
    global requirements_dir
    parent_dir = '/mnt/openstack-git'
@ -549,6 +595,12 @@ def git_clone_and_install(projects_yaml, core_project, depth=1):
    pip_create_virtualenv(os.path.join(parent_dir, 'venv'))
# Upgrade setuptools and pip from default virtualenv versions. The default
# versions in trusty break master OpenStack branch deployments.
for p in ['pip', 'setuptools']:
pip_install(p, upgrade=True, proxy=http_proxy,
venv=os.path.join(parent_dir, 'venv'))
    for p in projects['repositories']:
        repo = p['repository']
        branch = p['branch']
@ -610,24 +662,24 @@ def _git_clone_and_install_single(repo, branch, depth, parent_dir, http_proxy,
    else:
        repo_dir = dest_dir
venv = os.path.join(parent_dir, 'venv')
    if update_requirements:
        if not requirements_dir:
            error_out('requirements repo must be cloned before '
                      'updating from global requirements.')
-        _git_update_requirements(repo_dir, requirements_dir)
+        _git_update_requirements(venv, repo_dir, requirements_dir)

    juju_log('Installing git repo from dir: {}'.format(repo_dir))
    if http_proxy:
-        pip_install(repo_dir, proxy=http_proxy,
-                    venv=os.path.join(parent_dir, 'venv'))
+        pip_install(repo_dir, proxy=http_proxy, venv=venv)
    else:
-        pip_install(repo_dir,
-                    venv=os.path.join(parent_dir, 'venv'))
+        pip_install(repo_dir, venv=venv)

    return repo_dir
-def _git_update_requirements(package_dir, reqs_dir):
+def _git_update_requirements(venv, package_dir, reqs_dir):
    """
    Update from global requirements.
@ -636,12 +688,14 @@ def _git_update_requirements(package_dir, reqs_dir):
""" """
orig_dir = os.getcwd() orig_dir = os.getcwd()
os.chdir(reqs_dir) os.chdir(reqs_dir)
cmd = ['python', 'update.py', package_dir] python = os.path.join(venv, 'bin/python')
cmd = [python, 'update.py', package_dir]
try: try:
subprocess.check_call(cmd) subprocess.check_call(cmd)
except subprocess.CalledProcessError: except subprocess.CalledProcessError:
package = os.path.basename(package_dir) package = os.path.basename(package_dir)
error_out("Error updating {} from global-requirements.txt".format(package)) error_out("Error updating {} from "
"global-requirements.txt".format(package))
os.chdir(orig_dir) os.chdir(orig_dir)

View File

@ -36,6 +36,8 @@ __author__ = "Jorge Niedbalski <jorge.niedbalski@canonical.com>"
def parse_options(given, available):
    """Given a set of options, check if available"""
    for key, value in sorted(given.items()):
if not value:
continue
        if key in available:
            yield "--{0}={1}".format(key, value)

View File

@ -43,9 +43,10 @@ def zap_disk(block_device):
    :param block_device: str: Full path of block device to clean.
    '''
    # https://github.com/ceph/ceph/commit/fdd7f8d83afa25c4e09aaedd90ab93f3b64a677b
    # sometimes sgdisk exits non-zero; this is OK, dd will clean up
-    call(['sgdisk', '--zap-all', '--mbrtogpt',
-          '--clear', block_device])
+    call(['sgdisk', '--zap-all', '--', block_device])
+    call(['sgdisk', '--clear', '--mbrtogpt', '--', block_device])
    dev_end = check_output(['blockdev', '--getsz',
                            block_device]).decode('UTF-8')
    gpt_end = int(dev_end.split()[0]) - 100
@ -67,4 +68,4 @@ def is_device_mounted(device):
    out = check_output(['mount']).decode('UTF-8')
    if is_partition:
        return bool(re.search(device + r"\b", out))
-    return bool(re.search(device + r"[0-9]+\b", out))
+    return bool(re.search(device + r"[0-9]*\b", out))

View File

@ -0,0 +1,45 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright 2014-2015 Canonical Limited.
#
# This file is part of charm-helpers.
#
# charm-helpers is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License version 3 as
# published by the Free Software Foundation.
#
# charm-helpers is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
__author__ = 'Jorge Niedbalski <niedbalski@ubuntu.com>'
import os
import subprocess
def sed(filename, before, after, flags='g'):
"""
Search and replaces the given pattern on filename.
:param filename: relative or absolute file path.
:param before: expression to be replaced (see 'man sed')
:param after: expression to replace with (see 'man sed')
:param flags: sed-compatible regex flags in example, to make
the search and replace case insensitive, specify ``flags="i"``.
The ``g`` flag is always specified regardless, so you do not
need to remember to include it when overriding this parameter.
:returns: If the sed command exit code was zero then return,
otherwise raise CalledProcessError.
"""
expression = r's/{0}/{1}/{2}'.format(before,
after, flags)
return subprocess.check_call(["sed", "-i", "-r", "-e",
expression,
os.path.expanduser(filename)])
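A hedged usage sketch of the `sed()` helper above, exercised against a throwaway temp file rather than a real config (assumes GNU sed, which provides the `-r` extended-regex flag used by the helper):

```python
import os
import subprocess
import tempfile


def sed(filename, before, after, flags='g'):
    """Search and replace `before` with `after` in filename via GNU sed."""
    expression = r's/{0}/{1}/{2}'.format(before, after, flags)
    return subprocess.check_call(["sed", "-i", "-r", "-e", expression,
                                  os.path.expanduser(filename)])


# Example: flip a debug flag in a scratch config file.
with tempfile.NamedTemporaryFile('w', suffix='.conf', delete=False) as f:
    f.write("debug = false\nverbose = false\n")
    path = f.name
sed(path, r'debug = false', 'debug = true')
with open(path) as f:
    content = f.read()
os.unlink(path)
```

Because `check_call` is used, a non-zero sed exit raises `CalledProcessError` instead of failing silently.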

View File

@ -21,7 +21,10 @@
# Charm Helpers Developers <juju@lists.ubuntu.com>
from __future__ import print_function
import copy
from distutils.version import LooseVersion
from functools import wraps
import glob
import os
import json
import yaml
@ -71,6 +74,7 @@ def cached(func):
        res = func(*args, **kwargs)
        cache[key] = res
        return res
wrapper._wrapped = func
    return wrapper
@ -170,9 +174,19 @@ def relation_type():
    return os.environ.get('JUJU_RELATION', None)
-def relation_id():
-    """The relation ID for the current relation hook"""
-    return os.environ.get('JUJU_RELATION_ID', None)
+@cached
+def relation_id(relation_name=None, service_or_unit=None):
+    """The relation ID for the current or a specified relation"""
+    if not relation_name and not service_or_unit:
+        return os.environ.get('JUJU_RELATION_ID', None)
elif relation_name and service_or_unit:
service_name = service_or_unit.split('/')[0]
for relid in relation_ids(relation_name):
remote_service = remote_service_name(relid)
if remote_service == service_name:
return relid
else:
raise ValueError('Must specify neither or both of relation_name and service_or_unit')
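The extended `relation_id()` either reads the hook environment or scans the named relation for the remote service. The resolution logic, sketched with the Juju environment and hook tools replaced by injected lookups (the injected parameters are illustrative, not charm-helpers API):

```python
def resolve_relation_id(relation_name=None, service_or_unit=None,
                        env=None, relation_ids=None,
                        remote_service_name=None):
    # env stands in for os.environ; relation_ids and remote_service_name
    # stand in for the corresponding Juju hook tools.
    if not relation_name and not service_or_unit:
        return (env or {}).get('JUJU_RELATION_ID')
    elif relation_name and service_or_unit:
        service_name = service_or_unit.split('/')[0]
        for relid in relation_ids(relation_name):
            if remote_service_name(relid) == service_name:
                return relid
    else:
        raise ValueError('Must specify neither or both of '
                         'relation_name and service_or_unit')
```

Passing only one of the two arguments is rejected up front, matching the hunk above.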
def local_unit():
@ -190,9 +204,20 @@ def service_name():
    return local_unit().split('/')[0]
@cached
def remote_service_name(relid=None):
"""The remote service name for a given relation-id (or the current relation)"""
if relid is None:
unit = remote_unit()
else:
units = related_units(relid)
unit = units[0] if units else None
return unit.split('/')[0] if unit else None
def hook_name():
    """The name of the currently executing hook"""
-    return os.path.basename(sys.argv[0])
+    return os.environ.get('JUJU_HOOK_NAME', os.path.basename(sys.argv[0]))
class Config(dict):
@ -242,29 +267,7 @@ class Config(dict):
        self.path = os.path.join(charm_dir(), Config.CONFIG_FILE_NAME)
        if os.path.exists(self.path):
            self.load_previous()
atexit(self._implicit_save)
def __getitem__(self, key):
"""For regular dict lookups, check the current juju config first,
then the previous (saved) copy. This ensures that user-saved values
will be returned by a dict lookup.
"""
try:
return dict.__getitem__(self, key)
except KeyError:
return (self._prev_dict or {})[key]
def get(self, key, default=None):
try:
return self[key]
except KeyError:
return default
def keys(self):
prev_keys = []
if self._prev_dict is not None:
prev_keys = self._prev_dict.keys()
return list(set(prev_keys + list(dict.keys(self))))
    def load_previous(self, path=None):
        """Load previous copy of config from disk.
@ -283,6 +286,9 @@ class Config(dict):
        self.path = path or self.path
        with open(self.path) as f:
            self._prev_dict = json.load(f)
for k, v in copy.deepcopy(self._prev_dict).items():
if k not in self:
self[k] = v
    def changed(self, key):
        """Return True if the current value for this key is different from
@ -314,13 +320,13 @@ class Config(dict):
        instance.
        """
if self._prev_dict:
for k, v in six.iteritems(self._prev_dict):
if k not in self:
self[k] = v
        with open(self.path, 'w') as f:
            json.dump(self, f)
def _implicit_save(self):
if self.implicit_save:
self.save()
@cached
def config(scope=None):
@ -484,6 +490,63 @@ def relation_types():
    return rel_types
@cached
def relation_to_interface(relation_name):
"""
Given the name of a relation, return the interface that relation uses.
:returns: The interface name, or ``None``.
"""
return relation_to_role_and_interface(relation_name)[1]
@cached
def relation_to_role_and_interface(relation_name):
"""
Given the name of a relation, return the role and the name of the interface
that relation uses (where role is one of ``provides``, ``requires``, or ``peer``).
:returns: A tuple containing ``(role, interface)``, or ``(None, None)``.
"""
_metadata = metadata()
for role in ('provides', 'requires', 'peer'):
interface = _metadata.get(role, {}).get(relation_name, {}).get('interface')
if interface:
return role, interface
return None, None
@cached
def role_and_interface_to_relations(role, interface_name):
"""
Given a role and interface name, return a list of relation names for the
current charm that use that interface under that role (where role is one
of ``provides``, ``requires``, or ``peer``).
:returns: A list of relation names.
"""
_metadata = metadata()
results = []
for relation_name, relation in _metadata.get(role, {}).items():
if relation['interface'] == interface_name:
results.append(relation_name)
return results
@cached
def interface_to_relations(interface_name):
"""
Given an interface, return a list of relation names for the current
charm that use that interface.
:returns: A list of relation names.
"""
results = []
for role in ('provides', 'requires', 'peer'):
results.extend(role_and_interface_to_relations(role, interface_name))
return results
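These helpers just walk the charm's metadata.yaml mapping of role → relation name → interface. The core traversal, sketched over a plain dict in place of the cached `metadata()` call:

```python
def role_and_interface_to_relations(metadata, role, interface_name):
    """Relation names under `role` whose interface matches interface_name."""
    results = []
    for relation_name, relation in metadata.get(role, {}).items():
        if relation.get('interface') == interface_name:
            results.append(relation_name)
    return results


def interface_to_relations(metadata, interface_name):
    """Relation names across all roles that use interface_name."""
    results = []
    for role in ('provides', 'requires', 'peer'):
        results.extend(
            role_and_interface_to_relations(metadata, role, interface_name))
    return results
```

The same interface can legitimately appear under several relations and roles, hence lists rather than a single name.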
@cached
def charm_name():
    """Get the name of the current charm as is specified on metadata.yaml"""
@ -587,10 +650,14 @@ class Hooks(object):
        hooks.execute(sys.argv)
    """
-    def __init__(self, config_save=True):
+    def __init__(self, config_save=None):
        super(Hooks, self).__init__()
        self._hooks = {}
self._config_save = config_save
# For unknown reasons, we allow the Hooks constructor to override
# config().implicit_save.
if config_save is not None:
config().implicit_save = config_save
    def register(self, name, function):
        """Register a hook"""
@ -598,13 +665,16 @@ class Hooks(object):
    def execute(self, args):
        """Execute a registered hook based on args[0]"""
        _run_atstart()
        hook_name = os.path.basename(args[0])
        if hook_name in self._hooks:
            try:
                self._hooks[hook_name]()
-            if self._config_save:
-                cfg = config()
-                if cfg.implicit_save:
-                    cfg.save()
+            except SystemExit as x:
+                if x.code is None or x.code == 0:
+                    _run_atexit()
+                raise
+            _run_atexit()
        else:
            raise UnregisteredHookError(hook_name)
@ -653,6 +723,21 @@ def action_fail(message):
    subprocess.check_call(['action-fail', message])
def action_name():
"""Get the name of the currently executing action."""
return os.environ.get('JUJU_ACTION_NAME')
def action_uuid():
"""Get the UUID of the currently executing action."""
return os.environ.get('JUJU_ACTION_UUID')
def action_tag():
"""Get the tag for the currently executing action."""
return os.environ.get('JUJU_ACTION_TAG')
def status_set(workload_state, message):
    """Set the workload state with a message
@ -732,13 +817,80 @@ def leader_get(attribute=None):
@translate_exc(from_exc=OSError, to_exc=NotImplementedError)
def leader_set(settings=None, **kwargs):
    """Juju leader set value(s)"""
-    log("Juju leader-set '%s'" % (settings), level=DEBUG)
+    # Don't log secrets.
+    # log("Juju leader-set '%s'" % (settings), level=DEBUG)
    cmd = ['leader-set']
    settings = settings or {}
    settings.update(kwargs)
-    for k, v in settings.iteritems():
+    for k, v in settings.items():
        if v is None:
            cmd.append('{}='.format(k))
        else:
            cmd.append('{}={}'.format(k, v))
    subprocess.check_call(cmd)
@cached
def juju_version():
"""Full version string (eg. '1.23.3.1-trusty-amd64')"""
# Per https://bugs.launchpad.net/juju-core/+bug/1455368/comments/1
jujud = glob.glob('/var/lib/juju/tools/machine-*/jujud')[0]
return subprocess.check_output([jujud, 'version'],
universal_newlines=True).strip()
@cached
def has_juju_version(minimum_version):
"""Return True if the Juju version is at least the provided version"""
return LooseVersion(juju_version()) >= LooseVersion(minimum_version)
_atexit = []
_atstart = []
def atstart(callback, *args, **kwargs):
'''Schedule a callback to run before the main hook.
Callbacks are run in the order they were added.
This is useful for modules and classes to perform initialization
and inject behavior. In particular:
- Run common code before all of your hooks, such as logging
the hook name or interesting relation data.
- Defer object or module initialization that requires a hook
context until we know there actually is a hook context,
making testing easier.
- Rather than requiring charm authors to include boilerplate to
invoke your helper's behavior, have it run automatically if
your object is instantiated or module imported.
    This is not at all useful after your hook framework has been launched.
'''
global _atstart
_atstart.append((callback, args, kwargs))
def atexit(callback, *args, **kwargs):
'''Schedule a callback to run on successful hook completion.
Callbacks are run in the reverse order that they were added.'''
_atexit.append((callback, args, kwargs))
def _run_atstart():
'''Hook frameworks must invoke this before running the main hook body.'''
global _atstart
for callback, args, kwargs in _atstart:
callback(*args, **kwargs)
del _atstart[:]
def _run_atexit():
'''Hook frameworks must invoke this after the main hook body has
successfully completed. Do not invoke it if the hook fails.'''
global _atexit
for callback, args, kwargs in reversed(_atexit):
callback(*args, **kwargs)
del _atexit[:]
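A runnable sketch of the ordering guarantees these hooks provide: `atstart` callbacks fire in registration order before the hook body, and `atexit` callbacks fire in reverse registration order only after the body completes successfully.

```python
_atstart, _atexit = [], []


def atstart(cb, *args, **kwargs):
    _atstart.append((cb, args, kwargs))


def atexit(cb, *args, **kwargs):
    _atexit.append((cb, args, kwargs))


def run_hook(body):
    # Pre-hook callbacks, in registration order.
    for cb, args, kwargs in _atstart:
        cb(*args, **kwargs)
    del _atstart[:]
    body()
    # Post-hook callbacks only on success, in reverse order.
    for cb, args, kwargs in reversed(_atexit):
        cb(*args, **kwargs)
    del _atexit[:]


order = []
atstart(order.append, 'setup')
atexit(order.append, 'teardown-1')
atexit(order.append, 'teardown-2')
run_hook(lambda: order.append('hook'))
```

Reverse order on exit mirrors stack unwinding: later setup is torn down first.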

View File

@ -24,6 +24,7 @@
import os
import re
import pwd
import glob
import grp
import random
import string
@ -62,6 +63,36 @@ def service_reload(service_name, restart_on_failure=False):
    return service_result
def service_pause(service_name, init_dir=None):
"""Pause a system service.
Stop it, and prevent it from starting again at boot."""
if init_dir is None:
init_dir = "/etc/init"
stopped = service_stop(service_name)
# XXX: Support systemd too
override_path = os.path.join(
init_dir, '{}.override'.format(service_name))
with open(override_path, 'w') as fh:
fh.write("manual\n")
return stopped
def service_resume(service_name, init_dir=None):
"""Resume a system service.
Reenable starting again at boot. Start the service"""
# XXX: Support systemd too
if init_dir is None:
init_dir = "/etc/init"
override_path = os.path.join(
init_dir, '{}.override'.format(service_name))
if os.path.exists(override_path):
os.unlink(override_path)
started = service_start(service_name)
return started
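`service_pause`/`service_resume` rely on upstart reading `<name>.override` containing `manual` to keep a job from starting at boot. The file handling can be exercised in isolation against a temp directory, with the actual service start/stop stubbed out (this is a test sketch, not the charm-helpers functions themselves):

```python
import os
import tempfile


def pause_service(name, init_dir, stop):
    # Stop the job and drop an upstart override so it stays down after reboot.
    stopped = stop(name)
    with open(os.path.join(init_dir, '{}.override'.format(name)), 'w') as fh:
        fh.write("manual\n")
    return stopped


def resume_service(name, init_dir, start):
    # Remove the override (if present) and start the job again.
    override = os.path.join(init_dir, '{}.override'.format(name))
    if os.path.exists(override):
        os.unlink(override)
    return start(name)


init_dir = tempfile.mkdtemp()
pause_service('nova-compute', init_dir, stop=lambda name: True)
paused = os.path.exists(os.path.join(init_dir, 'nova-compute.override'))
resume_service('nova-compute', init_dir, start=lambda name: True)
resumed = not os.path.exists(os.path.join(init_dir, 'nova-compute.override'))
```

As the XXX comments in the hunk note, this upstart-only mechanism does not cover systemd units.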
def service(action, service_name):
    """Control a system service"""
    cmd = ['service', service_name, action]
@ -117,6 +148,16 @@ def adduser(username, password=None, shell='/bin/bash', system_user=False):
    return user_info
def user_exists(username):
"""Check if a user exists"""
try:
pwd.getpwnam(username)
user_exists = True
except KeyError:
user_exists = False
return user_exists
def add_group(group_name, system_group=False):
    """Add a group to the system"""
    try:
@ -139,11 +180,7 @@ def add_group(group_name, system_group=False):
def add_user_to_group(username, group):
    """Add a user to a group"""
-    cmd = [
-        'gpasswd', '-a',
-        username,
-        group
-    ]
+    cmd = ['gpasswd', '-a', username, group]
    log("Adding user {} to group {}".format(username, group))
    subprocess.check_call(cmd)
@ -253,6 +290,17 @@ def mounts():
    return system_mounts
def fstab_mount(mountpoint):
"""Mount filesystem using fstab"""
cmd_args = ['mount', mountpoint]
try:
subprocess.check_output(cmd_args)
except subprocess.CalledProcessError as e:
        log('Error mounting {}\n{}'.format(mountpoint, e.output))
return False
return True
def file_hash(path, hash_type='md5'):
    """
    Generate a hash checksum of the contents of 'path' or None if not found.
@ -269,6 +317,21 @@ def file_hash(path, hash_type='md5'):
    return None
def path_hash(path):
"""
Generate a hash checksum of all files matching 'path'. Standard wildcards
like '*' and '?' are supported, see documentation for the 'glob' module for
more information.
:return: dict: A { filename: hash } dictionary for all matched files.
Empty if none found.
"""
return {
filename: file_hash(filename)
for filename in glob.iglob(path)
}
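`path_hash()` generalizes `file_hash()` from a single file to a glob pattern, which is what lets `restart_on_change` watch wildcard entries like `/etc/apache/sites-enabled/*`. A self-contained sketch of the same idea over a temp directory (md5 chosen here to match the default `hash_type`):

```python
import glob
import hashlib
import os
import tempfile


def file_hash(path):
    with open(path, 'rb') as f:
        return hashlib.md5(f.read()).hexdigest()


def path_hash(pattern):
    # {filename: hash} for every file matching the glob; empty if none match.
    return {name: file_hash(name) for name in glob.iglob(pattern)}


tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, 'a.conf'), 'w') as f:
    f.write('one')
before = path_hash(os.path.join(tmp, '*.conf'))
with open(os.path.join(tmp, 'b.conf'), 'w') as f:
    f.write('two')
after = path_hash(os.path.join(tmp, '*.conf'))
```

Because the dict includes the set of matched filenames, a created or deleted file changes the hash map even when no existing file's contents changed.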
def check_hash(path, checksum, hash_type='md5'):
    """
    Validate a file using a cryptographic checksum.
@ -296,23 +359,25 @@ def restart_on_change(restart_map, stopstart=False):
        @restart_on_change({
            '/etc/ceph/ceph.conf': [ 'cinder-api', 'cinder-volume' ]
            '/etc/apache/sites-enabled/*': [ 'apache2' ]
        })
-        def ceph_client_changed():
+        def config_changed():
            pass  # your code here

    In this example, the cinder-api and cinder-volume services
    would be restarted if /etc/ceph/ceph.conf is changed by the
-    ceph_client_changed function.
+    ceph_client_changed function. The apache2 service would be
+    restarted if any file matching the pattern got changed, created
+    or removed. Standard wildcards are supported, see documentation
+    for the 'glob' module for more information.
    """
    def wrap(f):
        def wrapped_f(*args, **kwargs):
-            checksums = {}
-            for path in restart_map:
-                checksums[path] = file_hash(path)
+            checksums = {path: path_hash(path) for path in restart_map}
            f(*args, **kwargs)
            restarts = []
            for path in restart_map:
-                if checksums[path] != file_hash(path):
+                if path_hash(path) != checksums[path]:
                    restarts += restart_map[path]
            services_list = list(OrderedDict.fromkeys(restarts))
            if not stopstart:
@ -352,25 +417,80 @@ def pwgen(length=None):
    return(''.join(random_chars))
-def list_nics(nic_type):
+def is_phy_iface(interface):
"""Returns True if interface is not virtual, otherwise False."""
if interface:
sys_net = '/sys/class/net'
if os.path.isdir(sys_net):
for iface in glob.glob(os.path.join(sys_net, '*')):
if '/virtual/' in os.path.realpath(iface):
continue
if interface == os.path.basename(iface):
return True
return False
def get_bond_master(interface):
"""Returns bond master if interface is bond slave otherwise None.
NOTE: the provided interface is expected to be physical
"""
if interface:
iface_path = '/sys/class/net/%s' % (interface)
if os.path.exists(iface_path):
if '/virtual/' in os.path.realpath(iface_path):
return None
master = os.path.join(iface_path, 'master')
if os.path.exists(master):
master = os.path.realpath(master)
# make sure it is a bond master
if os.path.exists(os.path.join(master, 'bonding')):
return os.path.basename(master)
return None
def list_nics(nic_type=None):
    '''Return a list of nics of given type(s)'''
    if isinstance(nic_type, six.string_types):
        int_types = [nic_type]
    else:
        int_types = nic_type

    interfaces = []
    if nic_type:
        for int_type in int_types:
            cmd = ['ip', 'addr', 'show', 'label', int_type + '*']
-            ip_output = subprocess.check_output(cmd).decode('UTF-8').split('\n')
+            ip_output = subprocess.check_output(cmd).decode('UTF-8')
+            ip_output = ip_output.split('\n')
            ip_output = (line for line in ip_output if line)
            for line in ip_output:
                if line.split()[1].startswith(int_type):
-                    matched = re.search('.*: (' + int_type + r'[0-9]+\.[0-9]+)@.*', line)
+                    matched = re.search('.*: (' + int_type +
+                                        r'[0-9]+\.[0-9]+)@.*', line)
                    if matched:
-                        interface = matched.groups()[0]
+                        iface = matched.groups()[0]
                    else:
-                        interface = line.split()[1].replace(":", "")
-                    interfaces.append(interface)
+                        iface = line.split()[1].replace(":", "")
+                    if iface not in interfaces:
+                        interfaces.append(iface)
else:
cmd = ['ip', 'a']
ip_output = subprocess.check_output(cmd).decode('UTF-8').split('\n')
ip_output = (line.strip() for line in ip_output if line)
key = re.compile('^[0-9]+:\s+(.+):')
for line in ip_output:
matched = re.search(key, line)
if matched:
iface = matched.group(1)
iface = iface.partition("@")[0]
if iface not in interfaces:
interfaces.append(iface)
    return interfaces
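When called without a `nic_type`, `list_nics` now parses `ip a` output with a line-anchored regex and strips any `@parent` VLAN suffix. The parsing step in isolation, fed canned output instead of shelling out:

```python
import re


def parse_ip_a(output):
    """Extract de-duplicated interface names from `ip a`-style output."""
    interfaces = []
    key = re.compile(r'^[0-9]+:\s+(.+):')
    for line in (l.strip() for l in output.split('\n') if l):
        matched = re.search(key, line)
        if matched:
            # 'eth0.100@eth0' -> 'eth0.100': keep the VLAN device name,
            # drop the parent after '@'.
            iface = matched.group(1).partition('@')[0]
            if iface not in interfaces:
                interfaces.append(iface)
    return interfaces


sample = (
    "1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536\n"
    "    inet 127.0.0.1/8 scope host lo\n"
    "2: eth0: <BROADCAST,MULTICAST,UP> mtu 1500\n"
    "3: eth0.100@eth0: <BROADCAST,MULTICAST,UP> mtu 1500\n"
)
```

Indented address lines never match the `^[0-9]+:` anchor, so only the interface header lines contribute names.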

View File

@ -0,0 +1,62 @@
# -*- coding: utf-8 -*-
# Copyright 2014-2015 Canonical Limited.
#
# This file is part of charm-helpers.
#
# charm-helpers is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License version 3 as
# published by the Free Software Foundation.
#
# charm-helpers is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
import yaml
from charmhelpers.core import fstab
from charmhelpers.core import sysctl
from charmhelpers.core.host import (
add_group,
add_user_to_group,
fstab_mount,
mkdir,
)
def hugepage_support(user, group='hugetlb', nr_hugepages=256,
max_map_count=65536, mnt_point='/run/hugepages/kvm',
pagesize='2MB', mount=True):
"""Enable hugepages on system.
Args:
user (str) -- Username to allow access to hugepages to
group (str) -- Group name to own hugepages
nr_hugepages (int) -- Number of pages to reserve
max_map_count (int) -- Number of Virtual Memory Areas a process can own
mnt_point (str) -- Directory to mount hugepages on
pagesize (str) -- Size of hugepages
    mount (bool) -- Whether to mount hugepages
"""
group_info = add_group(group)
gid = group_info.gr_gid
add_user_to_group(user, group)
sysctl_settings = {
'vm.nr_hugepages': nr_hugepages,
'vm.max_map_count': max_map_count,
'vm.hugetlb_shm_group': gid,
}
sysctl.create(yaml.dump(sysctl_settings), '/etc/sysctl.d/10-hugepage.conf')
mkdir(mnt_point, owner='root', group='root', perms=0o755, force=False)
lfstab = fstab.Fstab()
fstab_entry = lfstab.get_entry_by_attr('mountpoint', mnt_point)
if fstab_entry:
lfstab.remove_entry(fstab_entry)
entry = lfstab.Entry('nodev', mnt_point, 'hugetlbfs',
'mode=1770,gid={},pagesize={}'.format(gid, pagesize), 0, 0)
lfstab.add_entry(entry)
if mount:
fstab_mount(mnt_point)

View File

@ -128,15 +128,18 @@ class ServiceManager(object):
""" """
Handle the current hook by doing The Right Thing with the registered services. Handle the current hook by doing The Right Thing with the registered services.
""" """
hookenv._run_atstart()
try:
            hook_name = hookenv.hook_name()
            if hook_name == 'stop':
                self.stop_services()
            else:
                self.reconfigure_services()
                self.provide_data()
-            cfg = hookenv.config()
-            if cfg.implicit_save:
-                cfg.save()
+        except SystemExit as x:
+            if x.code is None or x.code == 0:
+                hookenv._run_atexit()
+            raise
+        hookenv._run_atexit()
    def provide_data(self):
        """

View File

@ -16,7 +16,9 @@
import os
import yaml

from charmhelpers.core import hookenv
from charmhelpers.core import host
from charmhelpers.core import templating

from charmhelpers.core.services.base import ManagerCallback
@ -245,22 +247,36 @@ class TemplateCallback(ManagerCallback):
    :param str owner: The owner of the rendered file
    :param str group: The group of the rendered file
    :param int perms: The permissions of the rendered file
:param partial on_change_action: functools partial to be executed when
rendered file changes
""" """
def __init__(self, source, target, def __init__(self, source, target,
owner='root', group='root', perms=0o444): owner='root', group='root', perms=0o444,
on_change_action=None):
        self.source = source
        self.target = target
        self.owner = owner
        self.group = group
        self.perms = perms
self.on_change_action = on_change_action
    def __call__(self, manager, service_name, event_name):
pre_checksum = ''
if self.on_change_action and os.path.isfile(self.target):
pre_checksum = host.file_hash(self.target)
        service = manager.get_service(service_name)
        context = {}
        for ctx in service.get('required_data', []):
            context.update(ctx)
        templating.render(self.source, self.target, context,
                          self.owner, self.group, self.perms)
if self.on_change_action:
if pre_checksum == host.file_hash(self.target):
hookenv.log(
'No change detected: {}'.format(self.target),
hookenv.DEBUG)
else:
self.on_change_action()
# Convenience aliases for templates # Convenience aliases for templates

View File

@@ -152,6 +152,7 @@ associated to the hookname.
 import collections
 import contextlib
 import datetime
+import itertools
 import json
 import os
 import pprint
@@ -164,8 +165,7 @@ __author__ = 'Kapil Thangavelu <kapil.foss@gmail.com>'
 class Storage(object):
     """Simple key value database for local unit state within charms.

-    Modifications are automatically committed at hook exit. That's
-    currently regardless of exit code.
+    Modifications are not persisted unless :meth:`flush` is called.

     To support dicts, lists, integer, floats, and booleans values
     are automatically json encoded/decoded.
@@ -173,6 +173,9 @@ class Storage(object):
     def __init__(self, path=None):
         self.db_path = path
         if path is None:
+            if 'UNIT_STATE_DB' in os.environ:
+                self.db_path = os.environ['UNIT_STATE_DB']
+            else:
                 self.db_path = os.path.join(
                     os.environ.get('CHARM_DIR', ''), '.unit-state.db')
         self.conn = sqlite3.connect('%s' % self.db_path)
@@ -189,15 +192,8 @@ class Storage(object):
         self.conn.close()
         self._closed = True

-    def _scoped_query(self, stmt, params=None):
-        if params is None:
-            params = []
-        return stmt, params
-
     def get(self, key, default=None, record=False):
-        self.cursor.execute(
-            *self._scoped_query(
-                'select data from kv where key=?', [key]))
+        self.cursor.execute('select data from kv where key=?', [key])
         result = self.cursor.fetchone()
         if not result:
             return default
@@ -206,33 +202,81 @@ class Storage(object):
         return json.loads(result[0])

     def getrange(self, key_prefix, strip=False):
-        stmt = "select key, data from kv where key like '%s%%'" % key_prefix
-        self.cursor.execute(*self._scoped_query(stmt))
+        """
+        Get a range of keys starting with a common prefix as a mapping of
+        keys to values.
+
+        :param str key_prefix: Common prefix among all keys
+        :param bool strip: Optionally strip the common prefix from the key
+            names in the returned dict
+        :return dict: A (possibly empty) dict of key-value mappings
+        """
+        self.cursor.execute("select key, data from kv where key like ?",
+                            ['%s%%' % key_prefix])
         result = self.cursor.fetchall()
         if not result:
-            return None
+            return {}
         if not strip:
             key_prefix = ''
         return dict([
             (k[len(key_prefix):], json.loads(v)) for k, v in result])

     def update(self, mapping, prefix=""):
+        """
+        Set the values of multiple keys at once.
+
+        :param dict mapping: Mapping of keys to values
+        :param str prefix: Optional prefix to apply to all keys in `mapping`
+            before setting
+        """
         for k, v in mapping.items():
             self.set("%s%s" % (prefix, k), v)

     def unset(self, key):
+        """
+        Remove a key from the database entirely.
+        """
         self.cursor.execute('delete from kv where key=?', [key])
         if self.revision and self.cursor.rowcount:
             self.cursor.execute(
                 'insert into kv_revisions values (?, ?, ?)',
                 [key, self.revision, json.dumps('DELETED')])

+    def unsetrange(self, keys=None, prefix=""):
+        """
+        Remove a range of keys starting with a common prefix, from the
+        database entirely.
+
+        :param list keys: List of keys to remove.
+        :param str prefix: Optional prefix to apply to all keys in ``keys``
+            before removing.
+        """
+        if keys is not None:
+            keys = ['%s%s' % (prefix, key) for key in keys]
+            self.cursor.execute('delete from kv where key in (%s)' % ','.join(['?'] * len(keys)), keys)
+            if self.revision and self.cursor.rowcount:
+                self.cursor.execute(
+                    'insert into kv_revisions values %s' % ','.join(['(?, ?, ?)'] * len(keys)),
+                    list(itertools.chain.from_iterable((key, self.revision, json.dumps('DELETED')) for key in keys)))
+        else:
+            self.cursor.execute('delete from kv where key like ?',
+                                ['%s%%' % prefix])
+            if self.revision and self.cursor.rowcount:
+                self.cursor.execute(
+                    'insert into kv_revisions values (?, ?, ?)',
+                    ['%s%%' % prefix, self.revision, json.dumps('DELETED')])
+
     def set(self, key, value):
+        """
+        Set a value in the database.
+
+        :param str key: Key to set the value for
+        :param value: Any JSON-serializable value to be set
+        """
         serialized = json.dumps(value)
-        self.cursor.execute(
-            'select data from kv where key=?', [key])
+        self.cursor.execute('select data from kv where key=?', [key])
         exists = self.cursor.fetchone()

         # Skip mutations to the same value
View File

@@ -90,6 +90,14 @@ CLOUD_ARCHIVE_POCKETS = {
     'kilo/proposed': 'trusty-proposed/kilo',
     'trusty-kilo/proposed': 'trusty-proposed/kilo',
     'trusty-proposed/kilo': 'trusty-proposed/kilo',
+    # Liberty
+    'liberty': 'trusty-updates/liberty',
+    'trusty-liberty': 'trusty-updates/liberty',
+    'trusty-liberty/updates': 'trusty-updates/liberty',
+    'trusty-updates/liberty': 'trusty-updates/liberty',
+    'liberty/proposed': 'trusty-proposed/liberty',
+    'trusty-liberty/proposed': 'trusty-proposed/liberty',
+    'trusty-proposed/liberty': 'trusty-proposed/liberty',
 }

 # The order of this list is very important. Handlers should be listed in from
@@ -215,9 +223,9 @@ def apt_purge(packages, fatal=False):
     _run_apt_command(cmd, fatal)


-def apt_hold(packages, fatal=False):
-    """Hold one or more packages"""
-    cmd = ['apt-mark', 'hold']
+def apt_mark(packages, mark, fatal=False):
+    """Flag one or more packages using apt-mark"""
+    cmd = ['apt-mark', mark]
     if isinstance(packages, six.string_types):
         cmd.append(packages)
     else:
@@ -225,9 +233,17 @@ def apt_hold(packages, fatal=False):
     log("Holding {}".format(packages))

     if fatal:
-        subprocess.check_call(cmd)
+        subprocess.check_call(cmd, universal_newlines=True)
     else:
-        subprocess.call(cmd)
+        subprocess.call(cmd, universal_newlines=True)
+
+
+def apt_hold(packages, fatal=False):
+    return apt_mark(packages, 'hold', fatal=fatal)
+
+
+def apt_unhold(packages, fatal=False):
+    return apt_mark(packages, 'unhold', fatal=fatal)
 def add_source(source, key=None):
@@ -370,8 +386,9 @@ def install_remote(source, *args, **kwargs):
     for handler in handlers:
         try:
             installed_to = handler.install(source, *args, **kwargs)
-        except UnhandledSource:
-            pass
+        except UnhandledSource as e:
+            log('Install source attempt unsuccessful: {}'.format(e),
+                level='WARNING')
     if not installed_to:
         raise UnhandledSource("No handler found for source {}".format(source))
     return installed_to

View File

@@ -77,6 +77,8 @@ class ArchiveUrlFetchHandler(BaseFetchHandler):
     def can_handle(self, source):
         url_parts = self.parse_url(source)
         if url_parts.scheme not in ('http', 'https', 'ftp', 'file'):
+            # XXX: Why is this returning a boolean and a string? It's
+            # doomed to fail since "bool(can_handle('foo://'))" will be True.
             return "Wrong source type"
         if get_archive_handler(self.base_url(source)):
             return True
@@ -155,7 +157,11 @@ class ArchiveUrlFetchHandler(BaseFetchHandler):
         else:
             algorithms = hashlib.algorithms_available
             if key in algorithms:
-                check_hash(dld_file, value, key)
+                if len(value) != 1:
+                    raise TypeError(
+                        "Expected 1 hash value, not %d" % len(value))
+                expected = value[0]
+                check_hash(dld_file, expected, key)
         if checksum:
             check_hash(dld_file, checksum, hash_type)
     return extract(dld_file, dest)

View File

@@ -67,7 +67,7 @@ class GitUrlFetchHandler(BaseFetchHandler):
         try:
             self.clone(source, dest_dir, branch, depth)
         except GitCommandError as e:
-            raise UnhandledSource(e.message)
+            raise UnhandledSource(e)
         except OSError as e:
             raise UnhandledSource(e.strerror)
         return dest_dir

View File

@@ -0,0 +1 @@
+neutron_api_hooks.py

View File

@@ -0,0 +1 @@
+neutron_api_hooks.py

View File

@@ -0,0 +1 @@
+neutron_api_hooks.py

View File

@@ -256,3 +256,63 @@ class EtcdContext(context.OSContextGenerator):
         ctxt['cluster'] = cluster_string

         return ctxt
+
+
+class NeutronApiSDNContext(context.SubordinateConfigContext):
+    interfaces = 'neutron-plugin-api-subordinate'
+
+    def __init__(self):
+        super(NeutronApiSDNContext, self).__init__(
+            interface='neutron-plugin-api-subordinate',
+            service='neutron-api',
+            config_file='/etc/neutron/neutron.conf')
+
+    def __call__(self):
+        ctxt = super(NeutronApiSDNContext, self).__call__()
+        defaults = {
+            'core-plugin': {
+                'templ_key': 'core_plugin',
+                'value': 'neutron.plugins.ml2.plugin.Ml2Plugin',
+            },
+            'neutron-plugin-config': {
+                'templ_key': 'neutron_plugin_config',
+                'value': '/etc/neutron/plugins/ml2/ml2_conf.ini',
+            },
+            'service-plugins': {
+                'templ_key': 'service_plugins',
+                'value': 'router,firewall,lbaas,vpnaas,metering',
+            },
+            'restart-trigger': {
+                'templ_key': 'restart_trigger',
+                'value': '',
+            },
+        }
+        for rid in relation_ids('neutron-plugin-api-subordinate'):
+            for unit in related_units(rid):
+                rdata = relation_get(rid=rid, unit=unit)
+                plugin = rdata.get('neutron-plugin')
+                if not plugin:
+                    continue
+                ctxt['neutron_plugin'] = plugin
+                for key in defaults.keys():
+                    remote_value = rdata.get(key)
+                    ctxt_key = defaults[key]['templ_key']
+                    if remote_value:
+                        ctxt[ctxt_key] = remote_value
+                    else:
+                        ctxt[ctxt_key] = defaults[key]['value']
+                return ctxt
+        return ctxt
+
+
+class NeutronApiSDNConfigFileContext(context.OSContextGenerator):
+    interfaces = ['neutron-plugin-api-subordinate']
+
+    def __call__(self):
+        for rid in relation_ids('neutron-plugin-api-subordinate'):
+            for unit in related_units(rid):
+                rdata = relation_get(rid=rid, unit=unit)
+                neutron_server_plugin_conf = rdata.get('neutron-plugin-config')
+                if neutron_server_plugin_conf:
+                    return {'config': neutron_server_plugin_conf}
+        return {'config': '/etc/neutron/plugins/ml2/ml2_conf.ini'}
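The defaults table in `NeutronApiSDNContext.__call__` is a prefer-relation-data-else-default merge: for each config key, use the value the subordinate published on the relation, otherwise fall back to the packaged default. That core step in isolation (a hypothetical `merge_with_defaults` helper; the real context also walks relations and units):

```python
def merge_with_defaults(rdata, defaults):
    # For each key, prefer the value the subordinate sent on the
    # relation; otherwise fall back to the charm's default.
    ctxt = {}
    for key, spec in defaults.items():
        ctxt[spec['templ_key']] = rdata.get(key) or spec['value']
    return ctxt
```

With empty relation data the template sees the defaults; any key the subordinate sets overrides just that value.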

View File

@@ -490,7 +490,8 @@ def zeromq_configuration_relation_joined(relid=None):
                                           users="neutron")


-@hooks.hook('zeromq-configuration-relation-changed')
+@hooks.hook('zeromq-configuration-relation-changed',
+            'neutron-plugin-api-subordinate-relation-changed')
 @restart_on_change(restart_map(), stopstart=True)
 def zeromq_configuration_relation_changed():
     CONFIGS.write_all()

View File

@@ -17,9 +17,15 @@ from charmhelpers.contrib.openstack.utils import (
     git_install_requested,
     git_clone_and_install,
     git_src_dir,
+    git_pip_venv_dir,
+    git_yaml_value,
     configure_installation_source,
 )

+from charmhelpers.contrib.python.packages import (
+    pip_install,
+)
+
 from charmhelpers.core.hookenv import (
     config,
     log,
@@ -45,6 +51,7 @@ from charmhelpers.core.host import (
 )

 from charmhelpers.core.templating import render
+from charmhelpers.contrib.hahelpers.cluster import is_elected_leader

 import neutron_api_context
@@ -70,9 +77,14 @@ KILO_PACKAGES = [
 ]

 BASE_GIT_PACKAGES = [
+    'libffi-dev',
+    'libmysqlclient-dev',
+    'libssl-dev',
     'libxml2-dev',
     'libxslt1-dev',
+    'libyaml-dev',
     'python-dev',
+    'python-neutronclient',  # required for get_neutron_client() import
     'python-pip',
     'python-setuptools',
     'zlib1g-dev',
@@ -182,12 +194,17 @@ def force_etcd_restart():
     service_start('etcd')


+def manage_plugin():
+    return config('manage-neutron-plugin-legacy-mode')
+
+
 def determine_packages(source=None):
     # currently all packages match service names
     packages = [] + BASE_PACKAGES
     for v in resource_map().values():
         packages.extend(v['services'])
+    if manage_plugin():
         pkgs = neutron_plugin_attribute(config('neutron-plugin'),
                                         'server_packages',
                                         'neutron')
@@ -233,8 +250,9 @@ def resource_map():
     else:
         resource_map.pop(APACHE_24_CONF)

-    # add neutron plugin requirements. nova-c-c only needs the neutron-server
-    # associated with configs, not the plugin agent.
+    if manage_plugin():
+        # add neutron plugin requirements. nova-c-c only needs the
+        # neutron-server associated with configs, not the plugin agent.
         plugin = config('neutron-plugin')
         conf = neutron_plugin_attribute(plugin, 'config', 'neutron')
         ctxts = (neutron_plugin_attribute(plugin, 'contexts', 'neutron')
@@ -251,6 +269,12 @@ def resource_map():
             resource_map[conf]['contexts'].append(
                 context.PostgresqlDBContext(database=config('database')))
+    else:
+        resource_map[NEUTRON_CONF]['contexts'].append(
+            neutron_api_context.NeutronApiSDNContext()
+        )
+        resource_map[NEUTRON_DEFAULT]['contexts'] = \
+            [neutron_api_context.NeutronApiSDNConfigFileContext()]

     return resource_map
@@ -316,7 +340,7 @@ def do_openstack_upgrade(configs):
     # set CONFIGS to load templates from new release
     configs.set_release(openstack_release=new_os_rel)
     # Before kilo it's nova-cloud-controllers job
-    if new_os_rel >= 'kilo':
+    if is_elected_leader(CLUSTER_RES) and new_os_rel >= 'kilo':
         stamp_neutron_database(cur_os_rel)
         migrate_neutron_database()
@@ -454,6 +478,14 @@ def git_pre_install():

 def git_post_install(projects_yaml):
     """Perform post-install setup."""
+    http_proxy = git_yaml_value(projects_yaml, 'http_proxy')
+    if http_proxy:
+        pip_install('mysql-python', proxy=http_proxy,
+                    venv=git_pip_venv_dir(projects_yaml))
+    else:
+        pip_install('mysql-python',
+                    venv=git_pip_venv_dir(projects_yaml))
+
     src_etc = os.path.join(git_src_dir(projects_yaml, 'neutron'), 'etc')
     configs = [
         {'src': src_etc,
@@ -469,13 +501,30 @@ def git_post_install(projects_yaml):
             shutil.rmtree(c['dest'])
         shutil.copytree(c['src'], c['dest'])

+    # NOTE(coreycb): Need to find better solution than bin symlinks.
+    symlinks = [
+        {'src': os.path.join(git_pip_venv_dir(projects_yaml),
+                             'bin/neutron-rootwrap'),
+         'link': '/usr/local/bin/neutron-rootwrap'},
+        {'src': os.path.join(git_pip_venv_dir(projects_yaml),
+                             'bin/neutron-db-manage'),
+         'link': '/usr/local/bin/neutron-db-manage'},
+    ]
+
+    for s in symlinks:
+        if os.path.lexists(s['link']):
+            os.remove(s['link'])
+        os.symlink(s['src'], s['link'])
+
     render('git/neutron_sudoers', '/etc/sudoers.d/neutron_sudoers', {},
            perms=0o440)

+    bin_dir = os.path.join(git_pip_venv_dir(projects_yaml), 'bin')
     neutron_api_context = {
         'service_description': 'Neutron API server',
         'charm_name': 'neutron-api',
         'process_name': 'neutron-server',
+        'executable_name': os.path.join(bin_dir, 'neutron-server'),
     }

     # NOTE(coreycb): Needs systemd support

View File

@@ -37,6 +37,9 @@ requires:
   zeromq-configuration:
     interface: zeromq-configuration
     scope: container
+  neutron-plugin-api-subordinate:
+    interface: neutron-plugin-api-subordinate
+    scope: container
   etcd-proxy:
     interface: etcd-proxy
 peers:

View File

@@ -16,7 +16,7 @@ end script
 script
     [ -r /etc/default/{{ process_name }} ] && . /etc/default/{{ process_name }}
     [ -r "$NEUTRON_PLUGIN_CONFIG" ] && CONF_ARG="--config-file $NEUTRON_PLUGIN_CONFIG"
-    exec start-stop-daemon --start --chuid neutron --exec /usr/local/bin/neutron-server -- \
+    exec start-stop-daemon --start --chuid neutron --exec {{ executable_name }} -- \
         --config-file /etc/neutron/neutron.conf \
         --log-file /var/log/neutron/server.log $CONF_ARG
 end script

View File

@@ -27,10 +27,14 @@ bind_port = 9696
 {% if core_plugin -%}
 core_plugin = {{ core_plugin }}
+{% if service_plugins -%}
+service_plugins = {{ service_plugins }}
+{% else -%}
 {% if neutron_plugin in ['ovs', 'ml2', 'Calico'] -%}
 service_plugins = neutron.services.l3_router.l3_router_plugin.L3RouterPlugin,neutron.services.firewall.fwaas_plugin.FirewallPlugin,neutron.services.loadbalancer.plugin.LoadBalancerPlugin,neutron.services.vpn.plugin.VPNDriverPlugin,neutron.services.metering.metering_plugin.MeteringPlugin
 {% endif -%}
+{% endif -%}
 {% endif -%}

 {% if neutron_security_groups -%}
 allow_overlapping_ips = True
@@ -58,6 +62,12 @@ nova_admin_password = {{ admin_password }}
 nova_admin_auth_url = {{ auth_protocol }}://{{ auth_host }}:{{ auth_port }}/v2.0
 {% endif -%}

+{% if sections and 'DEFAULT' in sections -%}
+{% for key, value in sections['DEFAULT'] -%}
+{{ key }} = {{ value }}
+{% endfor -%}
+{% endif %}
+
 [quotas]
 quota_driver = neutron.db.quota_db.DbQuotaDriver
 {% if neutron_security_groups -%}

View File

@@ -62,6 +62,12 @@ nova_admin_password = {{ admin_password }}
 nova_admin_auth_url = {{ auth_protocol }}://{{ auth_host }}:{{ auth_port }}/v2.0
 {% endif -%}

+{% if sections and 'DEFAULT' in sections -%}
+{% for key, value in sections['DEFAULT'] -%}
+{{ key }} = {{ value }}
+{% endfor -%}
+{% endif %}
+
 [quotas]
 quota_driver = neutron.db.quota_db.DbQuotaDriver
 {% if neutron_security_groups -%}

View File

@@ -31,10 +31,14 @@ bind_port = 9696
 {% if core_plugin -%}
 core_plugin = {{ core_plugin }}
+{% if service_plugins -%}
+service_plugins = {{ service_plugins }}
+{% else -%}
 {% if neutron_plugin in ['ovs', 'ml2', 'Calico'] -%}
 service_plugins = router,firewall,lbaas,vpnaas,metering
 {% endif -%}
+{% endif -%}
 {% endif -%}

 {% if neutron_security_groups -%}
 allow_overlapping_ips = True
@@ -60,6 +64,12 @@ nova_admin_password = {{ admin_password }}
 nova_admin_auth_url = {{ auth_protocol }}://{{ auth_host }}:{{ auth_port }}/v2.0
 {% endif -%}

+{% if sections and 'DEFAULT' in sections -%}
+{% for key, value in sections['DEFAULT'] -%}
+{{ key }} = {{ value }}
+{% endfor -%}
+{% endif %}
+
 {% include "section-zeromq" %}

 [quotas]
View File

@@ -5,6 +5,7 @@
 sudo add-apt-repository --yes ppa:juju/stable
 sudo apt-get update --yes
 sudo apt-get install --yes python-amulet \
+                           python-distro-info \
                            python-neutronclient \
                            python-keystoneclient \
                            python-novaclient \

View File

@@ -1,9 +0,0 @@
-#!/usr/bin/python
-
-"""Amulet tests on a basic neutron-api deployment on utopic-juno."""
-
-from basic_deployment import NeutronAPIBasicDeployment
-
-if __name__ == '__main__':
-    deployment = NeutronAPIBasicDeployment(series='utopic')
-    deployment.run_tests()
tests/052-basic-trusty-kilo-git Executable file
View File

@@ -0,0 +1,12 @@
+#!/usr/bin/python
+
+"""Amulet tests on a basic neutron-api git deployment on trusty-kilo."""
+
+from basic_deployment import NeutronAPIBasicDeployment
+
+if __name__ == '__main__':
+    deployment = NeutronAPIBasicDeployment(series='trusty',
+                                           openstack='cloud:trusty-kilo',
+                                           source='cloud:trusty-updates/kilo',
+                                           git=True)
+    deployment.run_tests()

View File

@@ -81,7 +81,7 @@ class NeutronAPIBasicDeployment(OpenStackAmuletDeployment):
             {'name': 'rabbitmq-server'}, {'name': 'keystone'},
             {'name': 'neutron-openvswitch'},
             {'name': 'nova-cloud-controller'},
-            {'name': 'quantum-gateway'},
+            {'name': 'neutron-gateway'},
             {'name': 'nova-compute'}]
         super(NeutronAPIBasicDeployment, self)._add_services(this_service,
                                                              other_services)
@@ -92,7 +92,7 @@ class NeutronAPIBasicDeployment(OpenStackAmuletDeployment):
             'neutron-api:shared-db': 'mysql:shared-db',
             'neutron-api:amqp': 'rabbitmq-server:amqp',
             'neutron-api:neutron-api': 'nova-cloud-controller:neutron-api',
-            'neutron-api:neutron-plugin-api': 'quantum-gateway:'
+            'neutron-api:neutron-plugin-api': 'neutron-gateway:'
                                               'neutron-plugin-api',
             'neutron-api:neutron-plugin-api': 'neutron-openvswitch:'
                                               'neutron-plugin-api',
@@ -107,13 +107,25 @@ class NeutronAPIBasicDeployment(OpenStackAmuletDeployment):
         """Configure all of the services."""
         neutron_api_config = {}
         if self.git:
-            branch = 'stable/' + self._get_openstack_release_string()
             amulet_http_proxy = os.environ.get('AMULET_HTTP_PROXY')
+            branch = 'stable/' + self._get_openstack_release_string()
+
+            if self._get_openstack_release() >= self.trusty_kilo:
                 openstack_origin_git = {
                     'repositories': [
                         {'name': 'requirements',
                          'repository': 'git://github.com/openstack/requirements',
                          'branch': branch},
+                        {'name': 'neutron-fwaas',
+                         'repository': 'git://github.com/openstack/neutron-fwaas',
+                         'branch': branch},
+                        {'name': 'neutron-lbaas',
+                         'repository': 'git://github.com/openstack/neutron-lbaas',
+                         'branch': branch},
+                        {'name': 'neutron-vpnaas',
+                         'repository': 'git://github.com/openstack/neutron-vpnaas',
+                         'branch': branch},
                         {'name': 'neutron',
                          'repository': 'git://github.com/openstack/neutron',
                          'branch': branch},
@@ -122,6 +134,26 @@ class NeutronAPIBasicDeployment(OpenStackAmuletDeployment):
                     'http_proxy': amulet_http_proxy,
                     'https_proxy': amulet_http_proxy,
                 }
+            else:
+                reqs_repo = 'git://github.com/openstack/requirements'
+                neutron_repo = 'git://github.com/openstack/neutron'
+                if self._get_openstack_release() == self.trusty_icehouse:
+                    reqs_repo = 'git://github.com/coreycb/requirements'
+                    neutron_repo = 'git://github.com/coreycb/neutron'
+
+                openstack_origin_git = {
+                    'repositories': [
+                        {'name': 'requirements',
+                         'repository': reqs_repo,
+                         'branch': branch},
+                        {'name': 'neutron',
+                         'repository': neutron_repo,
+                         'branch': branch},
+                    ],
+                    'directory': '/mnt/openstack-git',
+                    'http_proxy': amulet_http_proxy,
+                    'https_proxy': amulet_http_proxy,
+                }
+
             neutron_api_config['openstack-origin-git'] = yaml.dump(openstack_origin_git)
         keystone_config = {'admin-password': 'openstack',
                            'admin-token': 'ubuntutesting'}
@@ -139,7 +171,7 @@ class NeutronAPIBasicDeployment(OpenStackAmuletDeployment):
         self.keystone_sentry = self.d.sentry.unit['keystone/0']
         self.rabbitmq_sentry = self.d.sentry.unit['rabbitmq-server/0']
         self.nova_cc_sentry = self.d.sentry.unit['nova-cloud-controller/0']
-        self.quantum_gateway_sentry = self.d.sentry.unit['quantum-gateway/0']
+        self.neutron_gateway_sentry = self.d.sentry.unit['neutron-gateway/0']
         self.neutron_api_sentry = self.d.sentry.unit['neutron-api/0']
         self.nova_compute_sentry = self.d.sentry.unit['nova-compute/0']

         u.log.debug('openstack release val: {}'.format(
@@ -180,7 +212,7 @@ class NeutronAPIBasicDeployment(OpenStackAmuletDeployment):
             self.mysql_sentry: ['status mysql'],
             self.keystone_sentry: ['status keystone'],
             self.nova_cc_sentry: nova_cc_services,
-            self.quantum_gateway_sentry: neutron_services,
+            self.neutron_gateway_sentry: neutron_services,
             self.neutron_api_sentry: neutron_api_services,
         }

View File

@@ -14,14 +14,23 @@
 # You should have received a copy of the GNU Lesser General Public License
 # along with charm-helpers.  If not, see <http://www.gnu.org/licenses/>.

-import ConfigParser
 import io
+import json
 import logging
+import os
 import re
+import subprocess
 import sys
 import time

+import amulet
+import distro_info
 import six
+from six.moves import configparser
+if six.PY3:
+    from urllib import parse as urlparse
+else:
+    import urlparse
class AmuletUtils(object): class AmuletUtils(object):
@@ -33,6 +42,7 @@ class AmuletUtils(object):
     def __init__(self, log_level=logging.ERROR):
         self.log = self.get_logger(level=log_level)
+        self.ubuntu_releases = self.get_ubuntu_releases()

     def get_logger(self, name="amulet-logger", level=logging.DEBUG):
         """Get a logger object that will log to stdout."""
@@ -70,12 +80,44 @@ class AmuletUtils(object):
         else:
             return False

-    def validate_services(self, commands):
-        """Validate services.
+    def get_ubuntu_release_from_sentry(self, sentry_unit):
+        """Get Ubuntu release codename from sentry unit.

-           Verify the specified services are running on the corresponding
-           service units.
+        :param sentry_unit: amulet sentry/service unit pointer
+        :returns: list of strings - release codename, failure message
         """
+        msg = None
+        cmd = 'lsb_release -cs'
+        release, code = sentry_unit.run(cmd)
+        if code == 0:
+            self.log.debug('{} lsb_release: {}'.format(
+                sentry_unit.info['unit_name'], release))
+        else:
+            msg = ('{} `{}` returned {} '
+                   '{}'.format(sentry_unit.info['unit_name'],
+                               cmd, release, code))
+        if release not in self.ubuntu_releases:
+            msg = ("Release ({}) not found in Ubuntu releases "
+                   "({})".format(release, self.ubuntu_releases))
+        return release, msg
+
+    def validate_services(self, commands):
+        """Validate that lists of commands succeed on service units.  Can be
+           used to verify system services are running on the corresponding
+           service units.
+
+        :param commands: dict with sentry keys and arbitrary command list vals
+        :returns: None if successful, Failure string message otherwise
+        """
+        self.log.debug('Checking status of system services...')
+
+        # /!\ DEPRECATION WARNING (beisner):
+        # New and existing tests should be rewritten to use
+        # validate_services_by_name() as it is aware of init systems.
+        self.log.warn('/!\\ DEPRECATION WARNING: use '
+                      'validate_services_by_name instead of validate_services '
+                      'due to init system differences.')
+
         for k, v in six.iteritems(commands):
             for cmd in v:
                 output, code = k.run(cmd)
@ -86,6 +128,45 @@ class AmuletUtils(object):
return "command `{}` returned {}".format(cmd, str(code)) return "command `{}` returned {}".format(cmd, str(code))
return None return None
def validate_services_by_name(self, sentry_services):
"""Validate system service status by service name, automatically
detecting init system based on Ubuntu release codename.
:param sentry_services: dict with sentry keys and svc list values
:returns: None if successful, Failure string message otherwise
"""
self.log.debug('Checking status of system services...')
# Point at which systemd became a thing
systemd_switch = self.ubuntu_releases.index('vivid')
for sentry_unit, services_list in six.iteritems(sentry_services):
# Get lsb_release codename from unit
release, ret = self.get_ubuntu_release_from_sentry(sentry_unit)
if ret:
return ret
for service_name in services_list:
if (self.ubuntu_releases.index(release) >= systemd_switch or
service_name in ['rabbitmq-server', 'apache2']):
# init is systemd (or regular sysv)
cmd = 'sudo service {} status'.format(service_name)
output, code = sentry_unit.run(cmd)
service_running = code == 0
elif self.ubuntu_releases.index(release) < systemd_switch:
# init is upstart
cmd = 'sudo status {}'.format(service_name)
output, code = sentry_unit.run(cmd)
service_running = code == 0 and "start/running" in output
self.log.debug('{} `{}` returned '
'{}'.format(sentry_unit.info['unit_name'],
cmd, code))
if not service_running:
return u"command `{}` returned {} {}".format(
cmd, output, str(code))
return None
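The init-system selection in validate_services_by_name can be sketched in isolation. This is a hypothetical, self-contained mirror of that logic (static release list instead of distro_info; not part of the synced code):

```python
# Standalone sketch of the init-system selection above, using a
# hypothetical static release list rather than distro_info.
UBUNTU_RELEASES = ['precise', 'quantal', 'raring', 'saucy', 'trusty',
                   'utopic', 'vivid', 'wily']
SYSTEMD_SWITCH = UBUNTU_RELEASES.index('vivid')  # point at which systemd became a thing

def status_cmd(release, service_name):
    """Pick the status command appropriate for the unit's init system."""
    if (UBUNTU_RELEASES.index(release) >= SYSTEMD_SWITCH or
            service_name in ('rabbitmq-server', 'apache2')):
        # init is systemd (or regular sysv)
        return 'sudo service {} status'.format(service_name)
    # init is upstart
    return 'sudo status {}'.format(service_name)

print(status_cmd('trusty', 'nova-compute'))  # upstart-style command
print(status_cmd('wily', 'nova-compute'))    # systemd-style command
```

On trusty a generic service gets the upstart `status` form, while rabbitmq-server and apache2 always use the sysv/systemd `service ... status` form.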
def _get_config(self, unit, filename):
"""Get a ConfigParser object for parsing a unit's config file."""
file_contents = unit.file_contents(filename)
@@ -93,7 +174,7 @@ class AmuletUtils(object):
# NOTE(beisner): by default, ConfigParser does not handle options
# with no value, such as the flags used in the mysql my.cnf file.
# https://bugs.python.org/issue7005
config = configparser.ConfigParser(allow_no_value=True)
config.readfp(io.StringIO(file_contents))
return config
@@ -103,7 +184,15 @@ class AmuletUtils(object):
Verify that the specified section of the config file contains
the expected option key:value pairs.
Compare expected dictionary data vs actual dictionary data.
The values in the 'expected' dictionary can be strings, bools, ints,
longs, or can be a function that evaluates a variable and returns a
bool.
"""
self.log.debug('Validating config file data ({} in {} on {})'
'...'.format(section, config_file,
sentry_unit.info['unit_name']))
config = self._get_config(sentry_unit, config_file)
if section != 'DEFAULT' and not config.has_section(section):
@@ -112,9 +201,20 @@ class AmuletUtils(object):
for k in expected.keys():
if not config.has_option(section, k):
return "section [{}] is missing option {}".format(section, k)
if config.get(section, k) != expected[k]:
actual = config.get(section, k)
v = expected[k]
if (isinstance(v, six.string_types) or
isinstance(v, bool) or
isinstance(v, six.integer_types)):
# handle explicit values
if actual != v:
return "section [{}] {}:{} != expected {}:{}".format(
section, k, actual, k, expected[k])
# handle function pointers, such as not_null or valid_ip
elif not v(actual):
return "section [{}] {}:{} != expected {}:{}".format(
section, k, actual, k, expected[k])
return None
def _validate_dict_data(self, expected, actual):
@@ -122,7 +222,7 @@ class AmuletUtils(object):
Compare expected dictionary data vs actual dictionary data.
The values in the 'expected' dictionary can be strings, bools, ints,
longs, or can be a function that evaluates a variable and returns a
bool.
"""
self.log.debug('actual: {}'.format(repr(actual)))
@@ -133,8 +233,10 @@ class AmuletUtils(object):
if (isinstance(v, six.string_types) or
isinstance(v, bool) or
isinstance(v, six.integer_types)):
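The expected-value handling above (in both _validate_config_data and _validate_dict_data) treats literals and callables differently. A minimal sketch of that dispatch, with a hypothetical not_null predicate:

```python
# Sketch of the expected-value handling: explicit values compare
# directly, callables act as predicates (e.g. not_null or valid_ip).
def option_matches(actual, expected):
    if isinstance(expected, (str, bool, int)):
        return actual == expected    # handle explicit values
    return bool(expected(actual))    # handle function pointers

def not_null(value):
    """Hypothetical predicate: passes for any non-None value."""
    return value is not None

assert option_matches('mysql', 'mysql')
assert option_matches('10.5.0.1', not_null)
assert not option_matches(None, not_null)
```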
# handle explicit values
if v != actual[k]:
return "{}:{}".format(k, actual[k])
# handle function pointers, such as not_null or valid_ip
elif not v(actual[k]):
return "{}:{}".format(k, actual[k])
else:
@@ -321,3 +423,174 @@ class AmuletUtils(object):
def endpoint_error(self, name, data):
return 'unexpected endpoint data in {} - {}'.format(name, data)
def get_ubuntu_releases(self):
"""Return a list of all Ubuntu releases in order of release."""
_d = distro_info.UbuntuDistroInfo()
_release_list = _d.all
self.log.debug('Ubuntu release list: {}'.format(_release_list))
return _release_list
def file_to_url(self, file_rel_path):
"""Convert a relative file path to a file URL."""
_abs_path = os.path.abspath(file_rel_path)
return urlparse.urlparse(_abs_path, scheme='file').geturl()
def check_commands_on_units(self, commands, sentry_units):
"""Check that all commands in a list exit zero on all
sentry units in a list.
:param commands: list of bash commands
:param sentry_units: list of sentry unit pointers
:returns: None if successful; Failure message otherwise
"""
self.log.debug('Checking exit codes for {} commands on {} '
'sentry units...'.format(len(commands),
len(sentry_units)))
for sentry_unit in sentry_units:
for cmd in commands:
output, code = sentry_unit.run(cmd)
if code == 0:
self.log.debug('{} `{}` returned {} '
'(OK)'.format(sentry_unit.info['unit_name'],
cmd, code))
else:
return ('{} `{}` returned {} '
'{}'.format(sentry_unit.info['unit_name'],
cmd, code, output))
return None
def get_process_id_list(self, sentry_unit, process_name,
expect_success=True):
"""Get a list of process ID(s) from a single sentry juju unit
for a single process name.
:param sentry_unit: Amulet sentry instance (juju unit)
:param process_name: Process name
:param expect_success: If False, expect the PID to be missing,
raise if it is present.
:returns: List of process IDs
"""
cmd = 'pidof -x {}'.format(process_name)
if not expect_success:
cmd += " || exit 0 && exit 1"
output, code = sentry_unit.run(cmd)
if code != 0:
msg = ('{} `{}` returned {} '
'{}'.format(sentry_unit.info['unit_name'],
cmd, code, output))
amulet.raise_status(amulet.FAIL, msg=msg)
return str(output).split()
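The `|| exit 0 && exit 1` suffix appended when expect_success is False inverts the exit status of pidof, so a missing process reads as success. A sketch demonstrating the inversion with `true`/`false` standing in for pidof:

```python
import subprocess

def run(cmd):
    """Run a shell command, return (stdout, exit code)."""
    proc = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE)
    out, _ = proc.communicate()
    return out.decode(), proc.returncode

# 'true' succeeds, like pidof finding the process -> inverted to failure
_, found = run('true || exit 0 && exit 1')
# 'false' fails, like pidof finding nothing -> inverted to success
_, absent = run('false || exit 0 && exit 1')
```

With the suffix in place, a present process now raises (nonzero exit), and an absent one passes.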
def get_unit_process_ids(self, unit_processes, expect_success=True):
"""Construct a dict containing unit sentries, process names, and
process IDs.
:param unit_processes: A dictionary of Amulet sentry instance
to list of process names.
:param expect_success: if False expect the processes to not be
running, raise if they are.
:returns: Dictionary of Amulet sentry instance to dictionary
of process names to PIDs.
"""
pid_dict = {}
for sentry_unit, process_list in six.iteritems(unit_processes):
pid_dict[sentry_unit] = {}
for process in process_list:
pids = self.get_process_id_list(
sentry_unit, process, expect_success=expect_success)
pid_dict[sentry_unit].update({process: pids})
return pid_dict
def validate_unit_process_ids(self, expected, actual):
"""Validate process id quantities for services on units."""
self.log.debug('Checking units for running processes...')
self.log.debug('Expected PIDs: {}'.format(expected))
self.log.debug('Actual PIDs: {}'.format(actual))
if len(actual) != len(expected):
return ('Unit count mismatch. expected, actual: {}, '
'{} '.format(len(expected), len(actual)))
for (e_sentry, e_proc_names) in six.iteritems(expected):
e_sentry_name = e_sentry.info['unit_name']
if e_sentry in actual.keys():
a_proc_names = actual[e_sentry]
else:
return ('Expected sentry ({}) not found in actual dict data.'
'{}'.format(e_sentry_name, e_sentry))
if len(e_proc_names.keys()) != len(a_proc_names.keys()):
return ('Process name count mismatch. expected, actual: {}, '
'{}'.format(len(expected), len(actual)))
for (e_proc_name, e_pids_length), (a_proc_name, a_pids) in \
zip(e_proc_names.items(), a_proc_names.items()):
if e_proc_name != a_proc_name:
return ('Process name mismatch. expected, actual: {}, '
'{}'.format(e_proc_name, a_proc_name))
a_pids_length = len(a_pids)
fail_msg = ('PID count mismatch. {} ({}) expected, actual: '
'{}, {} ({})'.format(e_sentry_name, e_proc_name,
e_pids_length, a_pids_length,
a_pids))
# If expected is not bool, ensure PID quantities match
if not isinstance(e_pids_length, bool) and \
a_pids_length != e_pids_length:
return fail_msg
# If expected is bool True, ensure 1 or more PIDs exist
elif isinstance(e_pids_length, bool) and \
e_pids_length is True and a_pids_length < 1:
return fail_msg
# If expected is bool False, ensure 0 PIDs exist
elif isinstance(e_pids_length, bool) and \
e_pids_length is False and a_pids_length != 0:
return fail_msg
else:
self.log.debug('PID check OK: {} {} {}: '
'{}'.format(e_sentry_name, e_proc_name,
e_pids_length, a_pids))
return None
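The expected PID value handled above can be an int (exact count), True (one or more), or False (zero). That three-way check can be mirrored in a standalone helper (hypothetical, for illustration only):

```python
# Mirror of the expected-PID semantics in validate_unit_process_ids:
# int = exact count, True = at least one, False = none.
def pid_count_ok(expected, pids):
    n = len(pids)
    if not isinstance(expected, bool):
        return n == expected   # exact quantity required
    if expected:
        return n >= 1          # True: 1 or more PIDs exist
    return n == 0              # False: 0 PIDs exist

assert pid_count_ok(2, ['1100', '1101'])
assert pid_count_ok(True, ['1100'])
assert pid_count_ok(False, [])
assert not pid_count_ok(3, ['1100'])
```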
def validate_list_of_identical_dicts(self, list_of_dicts):
"""Check that all dicts within a list are identical."""
hashes = []
for _dict in list_of_dicts:
hashes.append(hash(frozenset(_dict.items())))
self.log.debug('Hashes: {}'.format(hashes))
if len(set(hashes)) == 1:
self.log.debug('Dicts within list are identical')
else:
return 'Dicts within list are not identical'
return None
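The identical-dicts check reduces each dict to a hash of its item set; one distinct hash means all dicts are equal regardless of key order. A minimal sketch:

```python
# Same reduction as validate_list_of_identical_dicts: hash the frozen
# item set of each dict and require a single distinct hash.
def dicts_identical(list_of_dicts):
    hashes = [hash(frozenset(d.items())) for d in list_of_dicts]
    return len(set(hashes)) == 1

assert dicts_identical([{'a': 1, 'b': 2}, {'b': 2, 'a': 1}])
assert not dicts_identical([{'a': 1}, {'a': 2}])
```

Note this requires hashable values; dicts containing lists or nested dicts would raise TypeError.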
def run_action(self, unit_sentry, action,
_check_output=subprocess.check_output):
"""Run the named action on a given unit sentry.
_check_output parameter is used for dependency injection.
@return action_id.
"""
unit_id = unit_sentry.info["unit_name"]
command = ["juju", "action", "do", "--format=json", unit_id, action]
self.log.info("Running command: %s\n" % " ".join(command))
output = _check_output(command, universal_newlines=True)
data = json.loads(output)
action_id = data[u'Action queued with id']
return action_id
def wait_on_action(self, action_id, _check_output=subprocess.check_output):
"""Wait for a given action, returning if it completed or not.
_check_output parameter is used for dependency injection.
"""
command = ["juju", "action", "fetch", "--format=json", "--wait=0",
action_id]
output = _check_output(command, universal_newlines=True)
data = json.loads(output)
return data.get(u"status") == "completed"
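The _check_output parameter in run_action and wait_on_action is a dependency-injection seam: tests can substitute a fake for subprocess.check_output. A sketch with a hypothetical stub standing in for the juju CLI JSON output:

```python
import json

# Hypothetical stub for subprocess.check_output, exercising the same
# JSON shapes run_action and wait_on_action parse.
def fake_check_output(command, universal_newlines=True):
    if 'do' in command:   # `juju action do ...`
        return json.dumps({'Action queued with id': 'a1b2-c3d4'})
    # `juju action fetch ...`
    return json.dumps({'status': 'completed'})

queued = json.loads(fake_check_output(['juju', 'action', 'do']))
action_id = queued[u'Action queued with id']
fetched = json.loads(fake_check_output(['juju', 'action', 'fetch', action_id]))
done = fetched.get(u'status') == 'completed'
```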


@@ -44,7 +44,7 @@ class OpenStackAmuletDeployment(AmuletDeployment):
Determine if the local branch being tested is derived from its
stable or next (dev) branch, and based on this, use the corresponding
stable or next branches for the other_services."""
base_charms = ['mysql', 'mongodb', 'nrpe']
if self.series in ['precise', 'trusty']:
base_series = self.series
@@ -79,9 +79,9 @@ class OpenStackAmuletDeployment(AmuletDeployment):
services.append(this_service)
use_source = ['mysql', 'mongodb', 'rabbitmq-server', 'ceph',
'ceph-osd', 'ceph-radosgw']
# Most OpenStack subordinate charms do not expose an origin option
# as that is controlled by the principal charm.
ignore = ['cinder-ceph', 'hacluster', 'neutron-openvswitch', 'nrpe']
if self.openstack:
for svc in services:
@@ -110,7 +110,8 @@ class OpenStackAmuletDeployment(AmuletDeployment):
(self.precise_essex, self.precise_folsom, self.precise_grizzly,
self.precise_havana, self.precise_icehouse,
self.trusty_icehouse, self.trusty_juno, self.utopic_juno,
self.trusty_kilo, self.vivid_kilo, self.trusty_liberty,
self.wily_liberty) = range(12)
releases = {
('precise', None): self.precise_essex,
@@ -121,8 +122,10 @@ class OpenStackAmuletDeployment(AmuletDeployment):
('trusty', None): self.trusty_icehouse,
('trusty', 'cloud:trusty-juno'): self.trusty_juno,
('trusty', 'cloud:trusty-kilo'): self.trusty_kilo,
('trusty', 'cloud:trusty-liberty'): self.trusty_liberty,
('utopic', None): self.utopic_juno,
('vivid', None): self.vivid_kilo,
('wily', None): self.wily_liberty}
return releases[(self.series, self.openstack)]
def _get_openstack_release_string(self):
@@ -138,9 +141,43 @@ class OpenStackAmuletDeployment(AmuletDeployment):
('trusty', 'icehouse'),
('utopic', 'juno'),
('vivid', 'kilo'),
('wily', 'liberty'),
])
if self.openstack:
os_origin = self.openstack.split(':')[1]
return os_origin.split('%s-' % self.series)[1].split('/')[0]
else:
return releases[self.series]
def get_ceph_expected_pools(self, radosgw=False):
"""Return a list of expected ceph pools in a ceph + cinder + glance
test scenario, based on OpenStack release and whether ceph radosgw
is flagged as present or not."""
if self._get_openstack_release() >= self.trusty_kilo:
# Kilo or later
pools = [
'rbd',
'cinder',
'glance'
]
else:
# Juno or earlier
pools = [
'data',
'metadata',
'rbd',
'cinder',
'glance'
]
if radosgw:
pools.extend([
'.rgw.root',
'.rgw.control',
'.rgw',
'.rgw.gc',
'.users.uid'
])
return pools


@@ -14,16 +14,20 @@
# You should have received a copy of the GNU Lesser General Public License
# along with charm-helpers. If not, see <http://www.gnu.org/licenses/>.
import amulet
import json
import logging
import os
import six
import time
import urllib
import cinderclient.v1.client as cinder_client
import glanceclient.v1.client as glance_client
import heatclient.v1.client as heat_client
import keystoneclient.v2_0 as keystone_client
import novaclient.v1_1.client as nova_client
import swiftclient
from charmhelpers.contrib.amulet.utils import (
AmuletUtils
@@ -37,7 +41,7 @@ class OpenStackAmuletUtils(AmuletUtils):
"""OpenStack amulet utilities.
This class inherits from AmuletUtils and has additional support
that is specifically for use by OpenStack charm tests.
"""
def __init__(self, log_level=ERROR):
@@ -51,6 +55,8 @@ class OpenStackAmuletUtils(AmuletUtils):
Validate actual endpoint data vs expected endpoint data. The ports
are used to find the matching endpoint.
"""
""" """
self.log.debug('Validating endpoint data...')
self.log.debug('actual: {}'.format(repr(endpoints)))
found = False
for ep in endpoints:
self.log.debug('endpoint: {}'.format(repr(ep)))
@@ -77,6 +83,7 @@ class OpenStackAmuletUtils(AmuletUtils):
Validate a list of actual service catalog endpoints vs a list of
expected service catalog endpoints.
"""
self.log.debug('Validating service catalog endpoint data...')
self.log.debug('actual: {}'.format(repr(actual)))
for k, v in six.iteritems(expected):
if k in actual:
@@ -93,6 +100,7 @@ class OpenStackAmuletUtils(AmuletUtils):
Validate a list of actual tenant data vs list of expected tenant
data.
"""
""" """
self.log.debug('Validating tenant data...')
self.log.debug('actual: {}'.format(repr(actual)))
for e in expected:
found = False
@@ -114,6 +122,7 @@ class OpenStackAmuletUtils(AmuletUtils):
Validate a list of actual role data vs a list of expected role
data.
"""
""" """
self.log.debug('Validating role data...')
self.log.debug('actual: {}'.format(repr(actual)))
for e in expected:
found = False
@@ -134,6 +143,7 @@ class OpenStackAmuletUtils(AmuletUtils):
Validate a list of actual user data vs a list of expected user
data.
"""
""" """
self.log.debug('Validating user data...')
self.log.debug('actual: {}'.format(repr(actual)))
for e in expected:
found = False
@@ -155,17 +165,30 @@ class OpenStackAmuletUtils(AmuletUtils):
Validate a list of actual flavors vs a list of expected flavors.
"""
self.log.debug('Validating flavor data...')
self.log.debug('actual: {}'.format(repr(actual)))
act = [a.name for a in actual]
return self._validate_list_data(expected, act)
def tenant_exists(self, keystone, tenant):
"""Return True if tenant exists."""
self.log.debug('Checking if tenant exists ({})...'.format(tenant))
return tenant in [t.name for t in keystone.tenants.list()]
def authenticate_cinder_admin(self, keystone_sentry, username,
password, tenant):
"""Authenticates admin user with cinder."""
# NOTE(beisner): cinder python client doesn't accept tokens.
service_ip = \
keystone_sentry.relation('shared-db',
'mysql:shared-db')['private-address']
ept = "http://{}:5000/v2.0".format(service_ip.strip().decode('utf-8'))
return cinder_client.Client(username, password, tenant, ept)
def authenticate_keystone_admin(self, keystone_sentry, user, password,
tenant):
"""Authenticates admin user with the keystone admin endpoint."""
self.log.debug('Authenticating keystone admin...')
unit = keystone_sentry
service_ip = unit.relation('shared-db',
'mysql:shared-db')['private-address']
@@ -175,6 +198,7 @@ class OpenStackAmuletUtils(AmuletUtils):
def authenticate_keystone_user(self, keystone, user, password, tenant):
"""Authenticates a regular user with the keystone public endpoint."""
self.log.debug('Authenticating keystone user ({})...'.format(user))
ep = keystone.service_catalog.url_for(service_type='identity',
endpoint_type='publicURL')
return keystone_client.Client(username=user, password=password,
@@ -182,19 +206,49 @@ class OpenStackAmuletUtils(AmuletUtils):
def authenticate_glance_admin(self, keystone):
"""Authenticates admin user with glance."""
self.log.debug('Authenticating glance admin...')
ep = keystone.service_catalog.url_for(service_type='image',
endpoint_type='adminURL')
return glance_client.Client(ep, token=keystone.auth_token)
def authenticate_heat_admin(self, keystone):
"""Authenticates the admin user with heat."""
self.log.debug('Authenticating heat admin...')
ep = keystone.service_catalog.url_for(service_type='orchestration',
endpoint_type='publicURL')
return heat_client.Client(endpoint=ep, token=keystone.auth_token)
def authenticate_nova_user(self, keystone, user, password, tenant):
"""Authenticates a regular user with nova-api."""
self.log.debug('Authenticating nova user ({})...'.format(user))
ep = keystone.service_catalog.url_for(service_type='identity',
endpoint_type='publicURL')
return nova_client.Client(username=user, api_key=password,
project_id=tenant, auth_url=ep)
def authenticate_swift_user(self, keystone, user, password, tenant):
"""Authenticates a regular user with swift api."""
self.log.debug('Authenticating swift user ({})...'.format(user))
ep = keystone.service_catalog.url_for(service_type='identity',
endpoint_type='publicURL')
return swiftclient.Connection(authurl=ep,
user=user,
key=password,
tenant_name=tenant,
auth_version='2.0')
def create_cirros_image(self, glance, image_name):
"""Download the latest cirros image and upload it to glance,
validate and return a resource pointer.
validate and return a resource pointer.
:param glance: pointer to authenticated glance connection
:param image_name: display name for new image
:returns: glance image pointer
"""
self.log.debug('Creating glance cirros image '
'({})...'.format(image_name))
# Download cirros image
http_proxy = os.getenv('AMULET_HTTP_PROXY')
self.log.debug('AMULET_HTTP_PROXY: {}'.format(http_proxy))
if http_proxy:
@@ -203,57 +257,67 @@ class OpenStackAmuletUtils(AmuletUtils):
else:
opener = urllib.FancyURLopener()
f = opener.open('http://download.cirros-cloud.net/version/released')
version = f.read().strip()
cirros_img = 'cirros-{}-x86_64-disk.img'.format(version)
local_path = os.path.join('tests', cirros_img)
if not os.path.exists(local_path):
cirros_url = 'http://{}/{}/{}'.format('download.cirros-cloud.net',
version, cirros_img)
opener.retrieve(cirros_url, local_path)
f.close()
# Create glance image
with open(local_path) as f:
image = glance.images.create(name=image_name, is_public=True,
disk_format='qcow2',
container_format='bare', data=f)
# Wait for image to reach active status
img_id = image.id
ret = self.resource_reaches_status(glance.images, img_id,
expected_stat='active',
msg='Image status wait')
if not ret:
msg = 'Glance image failed to reach expected state.'
amulet.raise_status(amulet.FAIL, msg=msg)
# Re-validate new image
self.log.debug('Validating image attributes...')
val_img_name = glance.images.get(img_id).name
val_img_stat = glance.images.get(img_id).status
val_img_pub = glance.images.get(img_id).is_public
val_img_cfmt = glance.images.get(img_id).container_format
val_img_dfmt = glance.images.get(img_id).disk_format
msg_attr = ('Image attributes - name:{} public:{} id:{} stat:{} '
'container fmt:{} disk fmt:{}'.format(
val_img_name, val_img_pub, img_id,
val_img_stat, val_img_cfmt, val_img_dfmt))
if val_img_name == image_name and val_img_stat == 'active' \
and val_img_pub is True and val_img_cfmt == 'bare' \
and val_img_dfmt == 'qcow2':
self.log.debug(msg_attr)
else:
msg = ('Volume validation failed, {}'.format(msg_attr))
amulet.raise_status(amulet.FAIL, msg=msg)
return image
def delete_image(self, glance, image):
"""Delete the specified image."""
# /!\ DEPRECATION WARNING
self.log.warn('/!\\ DEPRECATION WARNING: use '
'delete_resource instead of delete_image.')
self.log.debug('Deleting glance image ({})...'.format(image))
return self.delete_resource(glance.images, image, msg='glance image')
def create_instance(self, nova, image_name, instance_name, flavor):
"""Create the specified instance."""
self.log.debug('Creating instance '
'({}|{}|{})'.format(instance_name, image_name, flavor))
image = nova.images.find(name=image_name)
flavor = nova.flavors.find(name=flavor)
instance = nova.servers.create(name=instance_name, image=image,
@@ -276,19 +340,265 @@ class OpenStackAmuletUtils(AmuletUtils):
def delete_instance(self, nova, instance):
"""Delete the specified instance."""
# /!\ DEPRECATION WARNING
self.log.warn('/!\\ DEPRECATION WARNING: use '
'delete_resource instead of delete_instance.')
self.log.debug('Deleting instance ({})...'.format(instance))
return self.delete_resource(nova.servers, instance,
msg='nova instance')
def create_or_get_keypair(self, nova, keypair_name="testkey"):
"""Create a new keypair, or return pointer if it already exists."""
try:
_keypair = nova.keypairs.get(keypair_name)
self.log.debug('Keypair ({}) already exists, '
'using it.'.format(keypair_name))
return _keypair
except:
self.log.debug('Keypair ({}) does not exist, '
'creating it.'.format(keypair_name))
_keypair = nova.keypairs.create(name=keypair_name)
return _keypair
def create_cinder_volume(self, cinder, vol_name="demo-vol", vol_size=1,
img_id=None, src_vol_id=None, snap_id=None):
"""Create cinder volume, optionally from a glance image, OR
optionally as a clone of an existing volume, OR optionally
from a snapshot. Wait for the new volume status to reach
the expected status, validate and return a resource pointer.
:param vol_name: cinder volume display name
:param vol_size: size in gigabytes
:param img_id: optional glance image id
:param src_vol_id: optional source volume id to clone
:param snap_id: optional snapshot id to use
:returns: cinder volume pointer
"""
# Handle parameter input and avoid impossible combinations
if img_id and not src_vol_id and not snap_id:
# Create volume from image
self.log.debug('Creating cinder volume from glance image...')
bootable = 'true'
elif src_vol_id and not img_id and not snap_id:
# Clone an existing volume
self.log.debug('Cloning cinder volume...')
bootable = cinder.volumes.get(src_vol_id).bootable
elif snap_id and not src_vol_id and not img_id:
# Create volume from snapshot
self.log.debug('Creating cinder volume from snapshot...')
snap = cinder.volume_snapshots.find(id=snap_id)
vol_size = snap.size
snap_vol_id = cinder.volume_snapshots.get(snap_id).volume_id
bootable = cinder.volumes.get(snap_vol_id).bootable
elif not img_id and not src_vol_id and not snap_id:
# Create volume
self.log.debug('Creating cinder volume...')
bootable = 'false'
else:
# Impossible combination of parameters
msg = ('Invalid method use - name:{} size:{} img_id:{} '
'src_vol_id:{} snap_id:{}'.format(vol_name, vol_size,
img_id, src_vol_id,
snap_id))
amulet.raise_status(amulet.FAIL, msg=msg)
# Create new volume
try:
vol_new = cinder.volumes.create(display_name=vol_name,
imageRef=img_id,
size=vol_size,
source_volid=src_vol_id,
snapshot_id=snap_id)
vol_id = vol_new.id
except Exception as e:
msg = 'Failed to create volume: {}'.format(e)
amulet.raise_status(amulet.FAIL, msg=msg)
# Wait for volume to reach available status
ret = self.resource_reaches_status(cinder.volumes, vol_id,
expected_stat="available",
msg="Volume status wait")
if not ret:
msg = 'Cinder volume failed to reach expected state.'
amulet.raise_status(amulet.FAIL, msg=msg)
# Re-validate new volume
self.log.debug('Validating volume attributes...')
val_vol_name = cinder.volumes.get(vol_id).display_name
val_vol_boot = cinder.volumes.get(vol_id).bootable
val_vol_stat = cinder.volumes.get(vol_id).status
val_vol_size = cinder.volumes.get(vol_id).size
msg_attr = ('Volume attributes - name:{} id:{} stat:{} boot:'
'{} size:{}'.format(val_vol_name, vol_id,
val_vol_stat, val_vol_boot,
val_vol_size))
if val_vol_boot == bootable and val_vol_stat == 'available' \
and val_vol_name == vol_name and val_vol_size == vol_size:
self.log.debug(msg_attr)
else:
msg = ('Volume validation failed, {}'.format(msg_attr))
amulet.raise_status(amulet.FAIL, msg=msg)
return vol_new
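The volume-source parameters above (img_id, src_vol_id, snap_id) are mutually exclusive. A standalone sketch of that parameter validation (hypothetical helper, illustration only):

```python
# Mirror of the mutually-exclusive source handling in
# create_cinder_volume.
def volume_source(img_id=None, src_vol_id=None, snap_id=None):
    sources = [s for s in (img_id, src_vol_id, snap_id) if s]
    if len(sources) > 1:
        # Impossible combination of parameters
        raise ValueError('img_id, src_vol_id and snap_id are exclusive')
    if img_id:
        return 'image'      # create volume from glance image (bootable)
    if src_vol_id:
        return 'clone'      # clone an existing volume
    if snap_id:
        return 'snapshot'   # create volume from snapshot
    return 'blank'          # plain empty volume

assert volume_source(img_id='img-1') == 'image'
assert volume_source() == 'blank'
```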
def delete_resource(self, resource, resource_id,
msg="resource", max_wait=120):
"""Delete one openstack resource, such as one instance, keypair,
image, volume, stack, etc., and confirm deletion within max wait time.
:param resource: pointer to os resource type, ex:glance_client.images
:param resource_id: unique name or id for the openstack resource
:param msg: text to identify purpose in logging
:param max_wait: maximum wait time in seconds
:returns: True if successful, otherwise False
"""
self.log.debug('Deleting OpenStack resource '
'{} ({})'.format(resource_id, msg))
num_before = len(list(resource.list()))
resource.delete(resource_id)
tries = 0
num_after = len(list(resource.list()))
while num_after != (num_before - 1) and tries < (max_wait / 4):
self.log.debug('{} delete check: '
'{} [{}:{}] {}'.format(msg, tries,
num_before,
num_after,
resource_id))
time.sleep(4)
num_after = len(list(resource.list()))
tries += 1
self.log.debug('{}: expected, actual count = {}, '
'{}'.format(msg, num_before - 1, num_after))
if num_after == (num_before - 1):
return True
else:
self.log.error('{} delete timed out'.format(msg))
return False
def resource_reaches_status(self, resource, resource_id,
expected_stat='available',
msg='resource', max_wait=120):
"""Wait for an openstack resources status to reach an
expected status within a specified time. Useful to confirm that
nova instances, cinder vols, snapshots, glance images, heat stacks
and other resources eventually reach the expected status.
:param resource: pointer to os resource type, ex: heat_client.stacks
:param resource_id: unique id for the openstack resource
:param expected_stat: status to expect resource to reach
:param msg: text to identify purpose in logging
:param max_wait: maximum wait time in seconds
:returns: True if successful, False if status is not reached
"""
tries = 0
resource_stat = resource.get(resource_id).status
while resource_stat != expected_stat and tries < (max_wait / 4):
self.log.debug('{} status check: '
'{} [{}:{}] {}'.format(msg, tries,
resource_stat,
expected_stat,
resource_id))
time.sleep(4)
resource_stat = resource.get(resource_id).status
tries += 1
self.log.debug('{}: expected, actual status = {}, '
'{}'.format(msg, expected_stat, resource_stat))
if resource_stat == expected_stat:
return True
else:
self.log.debug('{} never reached expected status: '
'{}'.format(resource_id, expected_stat))
return False
def get_ceph_osd_id_cmd(self, index):
"""Produce a shell command that will return a ceph-osd id."""
return ("`initctl list | grep 'ceph-osd ' | "
"awk 'NR=={} {{ print $2 }}' | "
"grep -o '[0-9]*'`".format(index + 1))
def get_ceph_pools(self, sentry_unit):
"""Return a dict of ceph pools from a single ceph unit, with
pool name as keys, pool id as vals."""
pools = {}
cmd = 'sudo ceph osd lspools'
output, code = sentry_unit.run(cmd)
if code != 0:
msg = ('{} `{}` returned {} '
'{}'.format(sentry_unit.info['unit_name'],
cmd, code, output))
amulet.raise_status(amulet.FAIL, msg=msg)
# Example output: 0 data,1 metadata,2 rbd,3 cinder,4 glance,
for pool in str(output).split(','):
pool_id_name = pool.split(' ')
if len(pool_id_name) == 2:
pool_id = pool_id_name[0]
pool_name = pool_id_name[1]
pools[pool_name] = int(pool_id)
self.log.debug('Pools on {}: {}'.format(sentry_unit.info['unit_name'],
pools))
return pools
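The pool parsing above can be exercised against the example output given in the inline comment; a minimal standalone sketch (the function name is illustrative):

```python
# Parse `ceph osd lspools` output of the form
# "0 data,1 metadata,2 rbd,3 cinder,4 glance," into {name: id},
# mirroring the loop in get_ceph_pools above. The trailing comma
# yields an empty field, which the length check skips.
def parse_lspools(output):
    pools = {}
    for pool in str(output).split(','):
        pool_id_name = pool.split(' ')
        if len(pool_id_name) == 2:
            pool_id, pool_name = pool_id_name
            pools[pool_name] = int(pool_id)
    return pools

assert parse_lspools('0 data,1 metadata,2 rbd,') == \
    {'data': 0, 'metadata': 1, 'rbd': 2}
```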
def get_ceph_df(self, sentry_unit):
"""Return dict of ceph df json output, including ceph pool state.
:param sentry_unit: Pointer to amulet sentry instance (juju unit)
:returns: Dict of ceph df output
"""
cmd = 'sudo ceph df --format=json'
output, code = sentry_unit.run(cmd)
if code != 0:
msg = ('{} `{}` returned {} '
'{}'.format(sentry_unit.info['unit_name'],
cmd, code, output))
amulet.raise_status(amulet.FAIL, msg=msg)
return json.loads(output)
def get_ceph_pool_sample(self, sentry_unit, pool_id=0):
"""Take a sample of attributes of a ceph pool, returning ceph
pool name, object count and disk space used for the specified
pool ID number.
:param sentry_unit: Pointer to amulet sentry instance (juju unit)
:param pool_id: Ceph pool ID
:returns: List of pool name, object count, kb disk space used
"""
df = self.get_ceph_df(sentry_unit)
pool_name = df['pools'][pool_id]['name']
obj_count = df['pools'][pool_id]['stats']['objects']
kb_used = df['pools'][pool_id]['stats']['kb_used']
self.log.debug('Ceph {} pool (ID {}): {} objects, '
'{} kb used'.format(pool_name, pool_id,
obj_count, kb_used))
return pool_name, obj_count, kb_used
def validate_ceph_pool_samples(self, samples, sample_type="resource pool"):
"""Validate ceph pool samples taken over time, such as pool
object counts or pool kb used, before adding, after adding, and
after deleting items which affect those pool attributes. The
2nd element is expected to be greater than the 1st; 3rd is expected
to be less than the 2nd.
:param samples: List containing 3 data samples
:param sample_type: String for logging and usage context
:returns: None if successful, Failure message otherwise
"""
original, created, deleted = range(3)
if samples[created] <= samples[original] or \
samples[deleted] >= samples[created]:
return ('Ceph {} samples ({}) '
'unexpected.'.format(sample_type, samples))
else:
self.log.debug('Ceph {} samples (OK): '
'{}'.format(sample_type, samples))
return None
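The three-sample rule in `validate_ceph_pool_samples` (counts must rise after creation and fall after deletion) can be checked standalone; a minimal sketch with hypothetical sample values:

```python
# Mirrors the check in validate_ceph_pool_samples above:
# samples = [before-create, after-create, after-delete].
def check_samples(samples):
    original, created, deleted = range(3)
    if samples[created] <= samples[original] or \
            samples[deleted] >= samples[created]:
        return 'Ceph samples ({}) unexpected.'.format(samples)
    return None

assert check_samples([10, 15, 12]) is None      # grew, then shrank: OK
assert check_samples([10, 10, 12]) is not None  # never grew: flagged
```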

View File

@ -1,3 +1,4 @@
import json
from test_utils import CharmTestCase
from mock import patch
import neutron_api_context as context
@ -479,3 +480,135 @@ class EtcdContextTest(CharmTestCase):
expect = {'cluster': ''}
self.assertEquals(expect, ctxt)
class NeutronApiSDNContextTest(CharmTestCase):
def setUp(self):
super(NeutronApiSDNContextTest, self).setUp(context, TO_PATCH)
self.relation_get.side_effect = self.test_relation.get
def tearDown(self):
super(NeutronApiSDNContextTest, self).tearDown()
def test_init(self):
napisdn_ctxt = context.NeutronApiSDNContext()
self.assertEquals(
napisdn_ctxt.interfaces,
['neutron-plugin-api-subordinate']
)
self.assertEquals(napisdn_ctxt.services, ['neutron-api'])
self.assertEquals(
napisdn_ctxt.config_file,
'/etc/neutron/neutron.conf'
)
@patch.object(charmhelpers.contrib.openstack.context, 'log')
@patch.object(charmhelpers.contrib.openstack.context, 'relation_get')
@patch.object(charmhelpers.contrib.openstack.context, 'related_units')
@patch.object(charmhelpers.contrib.openstack.context, 'relation_ids')
def ctxt_check(self, rel_settings, expect, _rids, _runits, _rget, _log):
self.test_relation.set(rel_settings)
_runits.return_value = ['unit1']
_rids.return_value = ['rid2']
_rget.side_effect = self.test_relation.get
self.relation_ids.return_value = ['rid2']
self.related_units.return_value = ['unit1']
napisdn_ctxt = context.NeutronApiSDNContext()()
self.assertEquals(napisdn_ctxt, expect)
def test_defaults(self):
self.ctxt_check(
{'neutron-plugin': 'ovs'},
{
'core_plugin': 'neutron.plugins.ml2.plugin.Ml2Plugin',
'neutron_plugin_config': ('/etc/neutron/plugins/ml2/'
'ml2_conf.ini'),
'service_plugins': 'router,firewall,lbaas,vpnaas,metering',
'restart_trigger': '',
'neutron_plugin': 'ovs',
'sections': {},
}
)
def test_overrides(self):
self.ctxt_check(
{
'neutron-plugin': 'ovs',
'core-plugin': 'neutron.plugins.ml2.plugin.MidoPlumODL',
'neutron-plugin-config': '/etc/neutron/plugins/fl/flump.ini',
'service-plugins': 'router,unicorn,rainbows',
'restart-trigger': 'restartnow',
},
{
'core_plugin': 'neutron.plugins.ml2.plugin.MidoPlumODL',
'neutron_plugin_config': '/etc/neutron/plugins/fl/flump.ini',
'service_plugins': 'router,unicorn,rainbows',
'restart_trigger': 'restartnow',
'neutron_plugin': 'ovs',
'sections': {},
}
)
def test_subordinateconfig(self):
principle_config = {
"neutron-api": {
"/etc/neutron/neutron.conf": {
"sections": {
'DEFAULT': [
('neutronboost', True)
],
}
}
}
}
self.ctxt_check(
{
'neutron-plugin': 'ovs',
'subordinate_configuration': json.dumps(principle_config),
},
{
'core_plugin': 'neutron.plugins.ml2.plugin.Ml2Plugin',
'neutron_plugin_config': ('/etc/neutron/plugins/ml2/'
'ml2_conf.ini'),
'service_plugins': 'router,firewall,lbaas,vpnaas,metering',
'restart_trigger': '',
'neutron_plugin': 'ovs',
'sections': {u'DEFAULT': [[u'neutronboost', True]]},
}
)
def test_empty(self):
self.ctxt_check(
{},
{'sections': {}},
)
class NeutronApiSDNConfigFileContextTest(CharmTestCase):
def setUp(self):
super(NeutronApiSDNConfigFileContextTest, self).setUp(
context, TO_PATCH)
self.relation_get.side_effect = self.test_relation.get
def tearDown(self):
super(NeutronApiSDNConfigFileContextTest, self).tearDown()
def test_configset(self):
self.test_relation.set({
'neutron-plugin-config': '/etc/neutron/superplugin.ini'
})
self.relation_ids.return_value = ['rid2']
self.related_units.return_value = ['unit1']
napisdn_ctxt = context.NeutronApiSDNConfigFileContext()()
self.assertEquals(napisdn_ctxt, {
'config': '/etc/neutron/superplugin.ini'
})
def test_default(self):
self.relation_ids.return_value = []
napisdn_ctxt = context.NeutronApiSDNConfigFileContext()()
self.assertEquals(napisdn_ctxt, {
'config': '/etc/neutron/plugins/ml2/ml2_conf.ini'
})

View File

@ -25,7 +25,6 @@ TO_PATCH = [
'api_port',
'apt_update',
'apt_install',
'canonical_url',
'config',
'CONFIGS',
'check_call',
@ -320,8 +319,9 @@ class NeutronAPIHooksTests(CharmTestCase):
self._call_hook('amqp-relation-broken')
self.assertTrue(self.CONFIGS.write_all.called)
@patch.object(hooks, 'canonical_url')
def test_identity_joined(self, _canonical_url):
_canonical_url.return_value = 'http://127.0.0.1'
self.api_port.return_value = '9696'
self.test_config.set('region', 'region1')
_neutron_url = 'http://127.0.0.1:9696'
@ -338,6 +338,34 @@ class NeutronAPIHooksTests(CharmTestCase):
relation_settings=_endpoints
)
@patch('charmhelpers.contrib.openstack.ip.service_name',
lambda *args: 'neutron-api')
@patch('charmhelpers.contrib.openstack.ip.unit_get')
@patch('charmhelpers.contrib.openstack.ip.is_clustered')
@patch('charmhelpers.contrib.openstack.ip.config')
def test_identity_changed_public_name(self, _config, _is_clustered,
_unit_get):
_unit_get.return_value = '127.0.0.1'
_is_clustered.return_value = False
_config.side_effect = self.test_config.get
self.api_port.return_value = '9696'
self.test_config.set('region', 'region1')
self.test_config.set('os-public-hostname',
'neutron-api.example.com')
self._call_hook('identity-service-relation-joined')
_neutron_url = 'http://127.0.0.1:9696'
_endpoints = {
'quantum_service': 'quantum',
'quantum_region': 'region1',
'quantum_public_url': 'http://neutron-api.example.com:9696',
'quantum_admin_url': _neutron_url,
'quantum_internal_url': _neutron_url,
}
self.relation_set.assert_called_with(
relation_id=None,
relation_settings=_endpoints
)
def test_identity_changed_partial_ctxt(self):
self.CONFIGS.complete_contexts.return_value = []
_api_rel_joined = self.patch('neutron_api_relation_joined')
@ -354,12 +382,13 @@ class NeutronAPIHooksTests(CharmTestCase):
self.assertTrue(self.CONFIGS.write.called_with(NEUTRON_CONF))
self.assertTrue(_api_rel_joined.called)
@patch.object(hooks, 'canonical_url')
def test_neutron_api_relation_no_id_joined(self, _canonical_url):
host = 'http://127.0.0.1'
port = 1234
_id_rel_joined = self.patch('identity_joined')
self.relation_ids.side_effect = self._fake_relids
_canonical_url.return_value = host
self.api_port.return_value = port
self.is_relation_made = False
neutron_url = '%s:%s' % (host, port)
@ -382,10 +411,11 @@ class NeutronAPIHooksTests(CharmTestCase):
**_relation_data
)
@patch.object(hooks, 'canonical_url')
def test_neutron_api_relation_joined(self, _canonical_url):
host = 'http://127.0.0.1'
port = 1234
_canonical_url.return_value = host
self.api_port.return_value = port
self.is_relation_made = True
neutron_url = '%s:%s' % (host, port)

View File

@ -32,7 +32,9 @@ TO_PATCH = [
'log',
'neutron_plugin_attribute',
'os_release',
'pip_install',
'subprocess',
'is_elected_leader',
'service_stop',
'service_start',
'glob',
@ -106,28 +108,57 @@ class TestNeutronAPIUtils(CharmTestCase):
expect.extend(nutils.KILO_PACKAGES)
self.assertItemsEqual(pkg_list, expect)
@patch.object(nutils, 'git_install_requested')
def test_determine_packages_noplugin(self, git_requested):
git_requested.return_value = False
self.test_config.set('manage-neutron-plugin-legacy-mode', False)
pkg_list = nutils.determine_packages()
expect = deepcopy(nutils.BASE_PACKAGES)
expect.extend(['neutron-server'])
self.assertItemsEqual(pkg_list, expect)
def test_determine_ports(self):
port_list = nutils.determine_ports()
self.assertItemsEqual(port_list, [9696])
@patch.object(nutils, 'manage_plugin')
@patch('os.path.exists')
def test_resource_map(self, _path_exists, _manage_plugin):
_path_exists.return_value = False
_manage_plugin.return_value = True
_map = nutils.resource_map()
confs = [nutils.NEUTRON_CONF, nutils.NEUTRON_DEFAULT,
nutils.APACHE_CONF]
[self.assertIn(q_conf, _map.keys()) for q_conf in confs]
self.assertTrue(nutils.APACHE_24_CONF not in _map.keys())
@patch.object(nutils, 'manage_plugin')
@patch('os.path.exists')
def test_resource_map_apache24(self, _path_exists, _manage_plugin):
_path_exists.return_value = True
_manage_plugin.return_value = True
_map = nutils.resource_map()
confs = [nutils.NEUTRON_CONF, nutils.NEUTRON_DEFAULT,
nutils.APACHE_24_CONF]
[self.assertIn(q_conf, _map.keys()) for q_conf in confs]
self.assertTrue(nutils.APACHE_CONF not in _map.keys())
@patch.object(nutils, 'manage_plugin')
@patch('os.path.exists')
def test_resource_map_noplugin(self, _path_exists, _manage_plugin):
_path_exists.return_value = True
_manage_plugin.return_value = False
_map = nutils.resource_map()
found_sdn_ctxt = False
found_sdnconfig_ctxt = False
for ctxt in _map[nutils.NEUTRON_CONF]['contexts']:
if isinstance(ctxt, ncontext.NeutronApiSDNContext):
found_sdn_ctxt = True
for ctxt in _map[nutils.NEUTRON_DEFAULT]['contexts']:
if isinstance(ctxt, ncontext.NeutronApiSDNConfigFileContext):
found_sdnconfig_ctxt = True
self.assertTrue(found_sdn_ctxt and found_sdnconfig_ctxt)
@patch('os.path.exists')
def test_restart_map(self, mock_path_exists):
mock_path_exists.return_value = False
@ -193,6 +224,7 @@ class TestNeutronAPIUtils(CharmTestCase):
def test_do_openstack_upgrade_juno(self, git_requested,
stamp_neutron_db, migrate_neutron_db):
git_requested.return_value = False
self.is_elected_leader.return_value = True
self.config.side_effect = self.test_config.get
self.test_config.set('openstack-origin', 'cloud:trusty-juno')
self.os_release.return_value = 'icehouse'
@ -230,6 +262,7 @@ class TestNeutronAPIUtils(CharmTestCase):
stamp_neutron_db, migrate_neutron_db,
gsrc):
git_requested.return_value = False
self.is_elected_leader.return_value = True
self.os_release.return_value = 'juno'
self.config.side_effect = self.test_config.get
self.test_config.set('openstack-origin', 'cloud:trusty-kilo')
@ -259,6 +292,46 @@ class TestNeutronAPIUtils(CharmTestCase):
stamp_neutron_db.assert_called_with('juno')
migrate_neutron_db.assert_called_with()
@patch.object(charmhelpers.contrib.openstack.utils,
'get_os_codename_install_source')
@patch.object(nutils, 'migrate_neutron_database')
@patch.object(nutils, 'stamp_neutron_database')
@patch.object(nutils, 'git_install_requested')
def test_do_openstack_upgrade_kilo_notleader(self, git_requested,
stamp_neutron_db,
migrate_neutron_db,
gsrc):
git_requested.return_value = False
self.is_elected_leader.return_value = False
self.os_release.return_value = 'juno'
self.config.side_effect = self.test_config.get
self.test_config.set('openstack-origin', 'cloud:trusty-kilo')
gsrc.return_value = 'kilo'
self.get_os_codename_install_source.return_value = 'kilo'
configs = MagicMock()
nutils.do_openstack_upgrade(configs)
self.os_release.assert_called_with('neutron-server')
self.log.assert_called()
self.configure_installation_source.assert_called_with(
'cloud:trusty-kilo'
)
self.apt_update.assert_called_with(fatal=True)
dpkg_opts = [
'--option', 'Dpkg::Options::=--force-confnew',
'--option', 'Dpkg::Options::=--force-confdef',
]
self.apt_upgrade.assert_called_with(options=dpkg_opts,
fatal=True,
dist=True)
pkgs = nutils.determine_packages()
pkgs.sort()
self.apt_install.assert_called_with(packages=pkgs,
options=dpkg_opts,
fatal=True)
configs.set_release.assert_called_with(openstack_release='kilo')
self.assertFalse(stamp_neutron_db.called)
self.assertFalse(migrate_neutron_db.called)
@patch.object(ncontext, 'IdentityServiceContext')
@patch('neutronclient.v2_0.client.Client')
def test_get_neutron_client(self, nclient, IdentityServiceContext):
@ -419,14 +492,19 @@ class TestNeutronAPIUtils(CharmTestCase):
@patch.object(nutils, 'git_src_dir')
@patch.object(nutils, 'service_restart')
@patch.object(nutils, 'render')
@patch.object(nutils, 'git_pip_venv_dir')
@patch('os.path.join')
@patch('os.path.exists')
@patch('os.symlink')
@patch('shutil.copytree')
@patch('shutil.rmtree')
@patch('subprocess.check_call')
def test_git_post_install(self, check_call, rmtree, copytree, symlink,
exists, join, venv, render, service_restart,
git_src_dir):
projects_yaml = openstack_origin_git
join.return_value = 'joined-string'
venv.return_value = '/mnt/openstack-git/venv'
nutils.git_post_install(projects_yaml)
expected = [
call('joined-string', '/etc/neutron'),
@ -434,10 +512,16 @@ class TestNeutronAPIUtils(CharmTestCase):
call('joined-string', '/etc/neutron/rootwrap.d'),
]
copytree.assert_has_calls(expected)
expected = [
call('joined-string', '/usr/local/bin/neutron-rootwrap'),
call('joined-string', '/usr/local/bin/neutron-db-manage'),
]
symlink.assert_has_calls(expected, any_order=True)
neutron_api_context = {
'service_description': 'Neutron API server',
'charm_name': 'neutron-api',
'process_name': 'neutron-server',
'executable_name': 'joined-string',
}
expected = [
call('git/neutron_sudoers', '/etc/sudoers.d/neutron_sudoers', {},
@ -470,6 +554,16 @@ class TestNeutronAPIUtils(CharmTestCase):
'head']
self.subprocess.check_output.assert_called_with(cmd)
def test_manage_plugin_true(self):
self.test_config.set('manage-neutron-plugin-legacy-mode', True)
manage = nutils.manage_plugin()
self.assertTrue(manage)
def test_manage_plugin_false(self):
self.test_config.set('manage-neutron-plugin-legacy-mode', False)
manage = nutils.manage_plugin()
self.assertFalse(manage)
def test_additional_install_locations_calico(self):
self.get_os_codename_install_source.return_value = 'icehouse'
nutils.additional_install_locations('Calico', '')