Add xena bundles
- add non-voting focal-xena bundle
- add non-voting impish-xena bundle
- rebuild to pick up charm-helpers changes
- update tox/pip.sh to ensure setuptools<50.0.0

Change-Id: Idd5275cb2440ee712dae62b1ef4ba5a6d846135d
Commit: 77c75f62f9 (parent: e05668674c)
@ -1,4 +1,4 @@
|
|||||||
# Copyright 2014-2015 Canonical Limited.
|
# Copyright 2012-2021 Canonical Limited.
|
||||||
#
|
#
|
||||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||||
# you may not use this file except in compliance with the License.
|
# you may not use this file except in compliance with the License.
|
||||||
@ -13,7 +13,6 @@
|
|||||||
# limitations under the License.
|
# limitations under the License.
|
||||||
|
|
||||||
"""Compatibility with the nrpe-external-master charm"""
|
"""Compatibility with the nrpe-external-master charm"""
|
||||||
# Copyright 2012 Canonical Ltd.
|
|
||||||
#
|
#
|
||||||
# Authors:
|
# Authors:
|
||||||
# Matthew Wedgwood <matthew.wedgwood@canonical.com>
|
# Matthew Wedgwood <matthew.wedgwood@canonical.com>
|
||||||
@ -511,7 +510,7 @@ def add_haproxy_checks(nrpe, unit_name):
|
|||||||
|
|
||||||
def remove_deprecated_check(nrpe, deprecated_services):
|
def remove_deprecated_check(nrpe, deprecated_services):
|
||||||
"""
|
"""
|
||||||
Remove checks fro deprecated services in list
|
Remove checks for deprecated services in list
|
||||||
|
|
||||||
:param nrpe: NRPE object to remove check from
|
:param nrpe: NRPE object to remove check from
|
||||||
:type nrpe: NRPE
|
:type nrpe: NRPE
|
||||||
|
@ -1,4 +1,4 @@
|
|||||||
# Copyright 2014-2015 Canonical Limited.
|
# Copyright 2014-2021 Canonical Limited.
|
||||||
#
|
#
|
||||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||||
# you may not use this file except in compliance with the License.
|
# you may not use this file except in compliance with the License.
|
||||||
@ -22,7 +22,7 @@ Configuration stanzas::
|
|||||||
type: boolean
|
type: boolean
|
||||||
default: true
|
default: true
|
||||||
description: >
|
description: >
|
||||||
If false, a volume is mounted as sepecified in "volume-map"
|
If false, a volume is mounted as specified in "volume-map"
|
||||||
If true, ephemeral storage will be used, meaning that log data
|
If true, ephemeral storage will be used, meaning that log data
|
||||||
will only exist as long as the machine. YOU HAVE BEEN WARNED.
|
will only exist as long as the machine. YOU HAVE BEEN WARNED.
|
||||||
volume-map:
|
volume-map:
|
||||||
|
@ -1,4 +1,4 @@
|
|||||||
# Copyright 2014-2015 Canonical Limited.
|
# Copyright 2014-2021 Canonical Limited.
|
||||||
#
|
#
|
||||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||||
# you may not use this file except in compliance with the License.
|
# you may not use this file except in compliance with the License.
|
||||||
@ -86,7 +86,7 @@ def is_elected_leader(resource):
|
|||||||
2. If the charm is part of a corosync cluster, call corosync to
|
2. If the charm is part of a corosync cluster, call corosync to
|
||||||
determine leadership.
|
determine leadership.
|
||||||
3. If the charm is not part of a corosync cluster, the leader is
|
3. If the charm is not part of a corosync cluster, the leader is
|
||||||
determined as being "the alive unit with the lowest unit numer". In
|
determined as being "the alive unit with the lowest unit number". In
|
||||||
other words, the oldest surviving unit.
|
other words, the oldest surviving unit.
|
||||||
"""
|
"""
|
||||||
try:
|
try:
|
||||||
@ -418,7 +418,7 @@ def get_managed_services_and_ports(services, external_ports,
|
|||||||
|
|
||||||
Return only the services and corresponding ports that are managed by this
|
Return only the services and corresponding ports that are managed by this
|
||||||
charm. This excludes haproxy when there is a relation with hacluster. This
|
charm. This excludes haproxy when there is a relation with hacluster. This
|
||||||
is because this charm passes responsability for stopping and starting
|
is because this charm passes responsibility for stopping and starting
|
||||||
haproxy to hacluster.
|
haproxy to hacluster.
|
||||||
|
|
||||||
Similarly, if a relation with hacluster exists then the ports returned by
|
Similarly, if a relation with hacluster exists then the ports returned by
|
||||||
|
@ -187,7 +187,7 @@ SYS_GID_MAX {{ sys_gid_max }}
|
|||||||
|
|
||||||
#
|
#
|
||||||
# Max number of login retries if password is bad. This will most likely be
|
# Max number of login retries if password is bad. This will most likely be
|
||||||
# overriden by PAM, since the default pam_unix module has it's own built
|
# overridden by PAM, since the default pam_unix module has it's own built
|
||||||
# in of 3 retries. However, this is a safe fallback in case you are using
|
# in of 3 retries. However, this is a safe fallback in case you are using
|
||||||
# an authentication module that does not enforce PAM_MAXTRIES.
|
# an authentication module that does not enforce PAM_MAXTRIES.
|
||||||
#
|
#
|
||||||
@ -235,7 +235,7 @@ USERGROUPS_ENAB yes
|
|||||||
#
|
#
|
||||||
# Instead of the real user shell, the program specified by this parameter
|
# Instead of the real user shell, the program specified by this parameter
|
||||||
# will be launched, although its visible name (argv[0]) will be the shell's.
|
# will be launched, although its visible name (argv[0]) will be the shell's.
|
||||||
# The program may do whatever it wants (logging, additional authentification,
|
# The program may do whatever it wants (logging, additional authentication,
|
||||||
# banner, ...) before running the actual shell.
|
# banner, ...) before running the actual shell.
|
||||||
#
|
#
|
||||||
# FAKE_SHELL /bin/fakeshell
|
# FAKE_SHELL /bin/fakeshell
|
||||||
|
@ -1,4 +1,4 @@
|
|||||||
# Copyright 2016 Canonical Limited.
|
# Copyright 2016-2021 Canonical Limited.
|
||||||
#
|
#
|
||||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||||
# you may not use this file except in compliance with the License.
|
# you may not use this file except in compliance with the License.
|
||||||
@ -85,7 +85,7 @@ def _get_user_provided_overrides(modules):
|
|||||||
|
|
||||||
|
|
||||||
def _apply_overrides(settings, overrides, schema):
|
def _apply_overrides(settings, overrides, schema):
|
||||||
"""Get overrides config overlayed onto modules defaults.
|
"""Get overrides config overlaid onto modules defaults.
|
||||||
|
|
||||||
:param modules: require stack modules config.
|
:param modules: require stack modules config.
|
||||||
:returns: dictionary of modules config with user overrides applied.
|
:returns: dictionary of modules config with user overrides applied.
|
||||||
|
@ -1,4 +1,4 @@
|
|||||||
# Copyright 2014-2015 Canonical Limited.
|
# Copyright 2014-2021 Canonical Limited.
|
||||||
#
|
#
|
||||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||||
# you may not use this file except in compliance with the License.
|
# you may not use this file except in compliance with the License.
|
||||||
@ -578,7 +578,7 @@ def get_relation_ip(interface, cidr_network=None):
|
|||||||
@returns IPv6 or IPv4 address
|
@returns IPv6 or IPv4 address
|
||||||
"""
|
"""
|
||||||
# Select the interface address first
|
# Select the interface address first
|
||||||
# For possible use as a fallback bellow with get_address_in_network
|
# For possible use as a fallback below with get_address_in_network
|
||||||
try:
|
try:
|
||||||
# Get the interface specific IP
|
# Get the interface specific IP
|
||||||
address = network_get_primary_address(interface)
|
address = network_get_primary_address(interface)
|
||||||
|
@ -244,7 +244,7 @@ def get_deferred_restarts():
|
|||||||
|
|
||||||
|
|
||||||
def clear_deferred_restarts(services):
|
def clear_deferred_restarts(services):
|
||||||
"""Clear deferred restart events targetted at `services`.
|
"""Clear deferred restart events targeted at `services`.
|
||||||
|
|
||||||
:param services: Services with deferred actions to clear.
|
:param services: Services with deferred actions to clear.
|
||||||
:type services: List[str]
|
:type services: List[str]
|
||||||
@ -253,7 +253,7 @@ def clear_deferred_restarts(services):
|
|||||||
|
|
||||||
|
|
||||||
def process_svc_restart(service):
|
def process_svc_restart(service):
|
||||||
"""Respond to a service restart having occured.
|
"""Respond to a service restart having occurred.
|
||||||
|
|
||||||
:param service: Services that the action was performed against.
|
:param service: Services that the action was performed against.
|
||||||
:type service: str
|
:type service: str
|
||||||
|
@ -1,6 +1,6 @@
|
|||||||
#!/usr/bin/env python3
|
#!/usr/bin/env python3
|
||||||
|
|
||||||
"""This script is an implemenation of policy-rc.d
|
"""This script is an implementation of policy-rc.d
|
||||||
|
|
||||||
For further information on policy-rc.d see *1
|
For further information on policy-rc.d see *1
|
||||||
|
|
||||||
|
@ -1,4 +1,4 @@
|
|||||||
# Copyright 2019 Canonical Ltd
|
# Copyright 2019-2021 Canonical Ltd
|
||||||
#
|
#
|
||||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||||
# you may not use this file except in compliance with the License.
|
# you may not use this file except in compliance with the License.
|
||||||
@ -59,7 +59,7 @@ provided:
|
|||||||
The functions should be called from the install and upgrade hooks in the charm.
|
The functions should be called from the install and upgrade hooks in the charm.
|
||||||
The `maybe_do_policyd_overrides_on_config_changed` function is designed to be
|
The `maybe_do_policyd_overrides_on_config_changed` function is designed to be
|
||||||
called on the config-changed hook, in that it does an additional check to
|
called on the config-changed hook, in that it does an additional check to
|
||||||
ensure that an already overriden policy.d in an upgrade or install hooks isn't
|
ensure that an already overridden policy.d in an upgrade or install hooks isn't
|
||||||
repeated.
|
repeated.
|
||||||
|
|
||||||
In order the *enable* this functionality, the charm's install, config_changed,
|
In order the *enable* this functionality, the charm's install, config_changed,
|
||||||
@ -334,7 +334,7 @@ def maybe_do_policyd_overrides(openstack_release,
|
|||||||
restart_handler()
|
restart_handler()
|
||||||
|
|
||||||
|
|
||||||
@charmhelpers.deprecate("Use maybe_do_poliyd_overrrides instead")
|
@charmhelpers.deprecate("Use maybe_do_policyd_overrides instead")
|
||||||
def maybe_do_policyd_overrides_on_config_changed(*args, **kwargs):
|
def maybe_do_policyd_overrides_on_config_changed(*args, **kwargs):
|
||||||
"""This function is designed to be called from the config changed hook.
|
"""This function is designed to be called from the config changed hook.
|
||||||
|
|
||||||
|
@ -1,4 +1,4 @@
|
|||||||
# Copyright 2014-2015 Canonical Limited.
|
# Copyright 2014-2021 Canonical Limited.
|
||||||
#
|
#
|
||||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||||
# you may not use this file except in compliance with the License.
|
# you may not use this file except in compliance with the License.
|
||||||
@ -106,6 +106,8 @@ from charmhelpers.fetch import (
|
|||||||
filter_installed_packages,
|
filter_installed_packages,
|
||||||
filter_missing_packages,
|
filter_missing_packages,
|
||||||
ubuntu_apt_pkg as apt,
|
ubuntu_apt_pkg as apt,
|
||||||
|
OPENSTACK_RELEASES,
|
||||||
|
UBUNTU_OPENSTACK_RELEASE,
|
||||||
)
|
)
|
||||||
|
|
||||||
from charmhelpers.fetch.snap import (
|
from charmhelpers.fetch.snap import (
|
||||||
@ -132,54 +134,9 @@ CLOUD_ARCHIVE_KEY_ID = '5EDB1B62EC4926EA'
|
|||||||
DISTRO_PROPOSED = ('deb http://archive.ubuntu.com/ubuntu/ %s-proposed '
|
DISTRO_PROPOSED = ('deb http://archive.ubuntu.com/ubuntu/ %s-proposed '
|
||||||
'restricted main multiverse universe')
|
'restricted main multiverse universe')
|
||||||
|
|
||||||
OPENSTACK_RELEASES = (
|
|
||||||
'diablo',
|
|
||||||
'essex',
|
|
||||||
'folsom',
|
|
||||||
'grizzly',
|
|
||||||
'havana',
|
|
||||||
'icehouse',
|
|
||||||
'juno',
|
|
||||||
'kilo',
|
|
||||||
'liberty',
|
|
||||||
'mitaka',
|
|
||||||
'newton',
|
|
||||||
'ocata',
|
|
||||||
'pike',
|
|
||||||
'queens',
|
|
||||||
'rocky',
|
|
||||||
'stein',
|
|
||||||
'train',
|
|
||||||
'ussuri',
|
|
||||||
'victoria',
|
|
||||||
'wallaby',
|
|
||||||
)
|
|
||||||
|
|
||||||
UBUNTU_OPENSTACK_RELEASE = OrderedDict([
|
|
||||||
('oneiric', 'diablo'),
|
|
||||||
('precise', 'essex'),
|
|
||||||
('quantal', 'folsom'),
|
|
||||||
('raring', 'grizzly'),
|
|
||||||
('saucy', 'havana'),
|
|
||||||
('trusty', 'icehouse'),
|
|
||||||
('utopic', 'juno'),
|
|
||||||
('vivid', 'kilo'),
|
|
||||||
('wily', 'liberty'),
|
|
||||||
('xenial', 'mitaka'),
|
|
||||||
('yakkety', 'newton'),
|
|
||||||
('zesty', 'ocata'),
|
|
||||||
('artful', 'pike'),
|
|
||||||
('bionic', 'queens'),
|
|
||||||
('cosmic', 'rocky'),
|
|
||||||
('disco', 'stein'),
|
|
||||||
('eoan', 'train'),
|
|
||||||
('focal', 'ussuri'),
|
|
||||||
('groovy', 'victoria'),
|
|
||||||
('hirsute', 'wallaby'),
|
|
||||||
])
|
|
||||||
|
|
||||||
|
|
||||||
OPENSTACK_CODENAMES = OrderedDict([
|
OPENSTACK_CODENAMES = OrderedDict([
|
||||||
|
# NOTE(lourot): 'yyyy.i' isn't actually mapping with any real version
|
||||||
|
# number. This just means the i-th version of the year yyyy.
|
||||||
('2011.2', 'diablo'),
|
('2011.2', 'diablo'),
|
||||||
('2012.1', 'essex'),
|
('2012.1', 'essex'),
|
||||||
('2012.2', 'folsom'),
|
('2012.2', 'folsom'),
|
||||||
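Note: the two hunks above move `OPENSTACK_RELEASES` and `UBUNTU_OPENSTACK_RELEASE` out of `charmhelpers.contrib.openstack.utils`; the module now imports them from `charmhelpers.fetch`, where the tables themselves are defined by later hunks in this sync. A minimal sketch of what that means for consumers, assuming an Ubuntu platform where `charmhelpers.fetch` binds these names; the assertions are illustrative, not part of the commit:

    # Sketch: assumes charm-helpers from this sync on an Ubuntu platform.
    from charmhelpers import fetch
    from charmhelpers.contrib.openstack import utils as os_utils

    # Existing callers of the contrib.openstack.utils names keep working;
    # both paths now resolve to the tables defined in the fetch layer.
    assert os_utils.OPENSTACK_RELEASES == fetch.OPENSTACK_RELEASES
    assert 'xena' in fetch.OPENSTACK_RELEASES
    assert fetch.UBUNTU_OPENSTACK_RELEASE['impish'] == 'xena'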
@ -200,6 +157,8 @@ OPENSTACK_CODENAMES = OrderedDict([
|
|||||||
('2020.1', 'ussuri'),
|
('2020.1', 'ussuri'),
|
||||||
('2020.2', 'victoria'),
|
('2020.2', 'victoria'),
|
||||||
('2021.1', 'wallaby'),
|
('2021.1', 'wallaby'),
|
||||||
|
('2021.2', 'xena'),
|
||||||
|
('2022.1', 'yoga'),
|
||||||
])
|
])
|
||||||
|
|
||||||
# The ugly duckling - must list releases oldest to newest
|
# The ugly duckling - must list releases oldest to newest
|
||||||
@ -701,7 +660,7 @@ def import_key(keyid):
|
|||||||
def get_source_and_pgp_key(source_and_key):
|
def get_source_and_pgp_key(source_and_key):
|
||||||
"""Look for a pgp key ID or ascii-armor key in the given input.
|
"""Look for a pgp key ID or ascii-armor key in the given input.
|
||||||
|
|
||||||
:param source_and_key: Sting, "source_spec|keyid" where '|keyid' is
|
:param source_and_key: String, "source_spec|keyid" where '|keyid' is
|
||||||
optional.
|
optional.
|
||||||
:returns (source_spec, key_id OR None) as a tuple. Returns None for key_id
|
:returns (source_spec, key_id OR None) as a tuple. Returns None for key_id
|
||||||
if there was no '|' in the source_and_key string.
|
if there was no '|' in the source_and_key string.
|
||||||
@ -721,7 +680,7 @@ def configure_installation_source(source_plus_key):
|
|||||||
The functionality is provided by charmhelpers.fetch.add_source()
|
The functionality is provided by charmhelpers.fetch.add_source()
|
||||||
The difference between the two functions is that add_source() signature
|
The difference between the two functions is that add_source() signature
|
||||||
requires the key to be passed directly, whereas this function passes an
|
requires the key to be passed directly, whereas this function passes an
|
||||||
optional key by appending '|<key>' to the end of the source specificiation
|
optional key by appending '|<key>' to the end of the source specification
|
||||||
'source'.
|
'source'.
|
||||||
|
|
||||||
Another difference from add_source() is that the function calls sys.exit(1)
|
Another difference from add_source() is that the function calls sys.exit(1)
|
||||||
@ -808,7 +767,7 @@ def get_endpoint_notifications(service_names, rel_name='identity-service'):
|
|||||||
|
|
||||||
|
|
||||||
def endpoint_changed(service_name, rel_name='identity-service'):
|
def endpoint_changed(service_name, rel_name='identity-service'):
|
||||||
"""Whether a new notification has been recieved for an endpoint.
|
"""Whether a new notification has been received for an endpoint.
|
||||||
|
|
||||||
:param service_name: Service name eg nova, neutron, placement etc
|
:param service_name: Service name eg nova, neutron, placement etc
|
||||||
:type service_name: str
|
:type service_name: str
|
||||||
@ -834,7 +793,7 @@ def endpoint_changed(service_name, rel_name='identity-service'):
|
|||||||
|
|
||||||
|
|
||||||
def save_endpoint_changed_triggers(service_names, rel_name='identity-service'):
|
def save_endpoint_changed_triggers(service_names, rel_name='identity-service'):
|
||||||
"""Save the enpoint triggers in db so it can be tracked if they changed.
|
"""Save the endpoint triggers in db so it can be tracked if they changed.
|
||||||
|
|
||||||
:param service_names: List of service name.
|
:param service_names: List of service name.
|
||||||
:type service_name: List
|
:type service_name: List
|
||||||
@ -1502,9 +1461,9 @@ def remote_restart(rel_name, remote_service=None):
|
|||||||
if remote_service:
|
if remote_service:
|
||||||
trigger['remote-service'] = remote_service
|
trigger['remote-service'] = remote_service
|
||||||
for rid in relation_ids(rel_name):
|
for rid in relation_ids(rel_name):
|
||||||
# This subordinate can be related to two seperate services using
|
# This subordinate can be related to two separate services using
|
||||||
# different subordinate relations so only issue the restart if
|
# different subordinate relations so only issue the restart if
|
||||||
# the principle is conencted down the relation we think it is
|
# the principle is connected down the relation we think it is
|
||||||
if related_units(relid=rid):
|
if related_units(relid=rid):
|
||||||
relation_set(relation_id=rid,
|
relation_set(relation_id=rid,
|
||||||
relation_settings=trigger,
|
relation_settings=trigger,
|
||||||
@ -1621,7 +1580,7 @@ def manage_payload_services(action, services=None, charm_func=None):
|
|||||||
"""Run an action against all services.
|
"""Run an action against all services.
|
||||||
|
|
||||||
An optional charm_func() can be called. It should raise an Exception to
|
An optional charm_func() can be called. It should raise an Exception to
|
||||||
indicate that the function failed. If it was succesfull it should return
|
indicate that the function failed. If it was successful it should return
|
||||||
None or an optional message.
|
None or an optional message.
|
||||||
|
|
||||||
The signature for charm_func is:
|
The signature for charm_func is:
|
||||||
@ -1880,7 +1839,7 @@ def pausable_restart_on_change(restart_map, stopstart=False,
|
|||||||
:param post_svc_restart_f: A function run after a service has
|
:param post_svc_restart_f: A function run after a service has
|
||||||
restarted.
|
restarted.
|
||||||
:type post_svc_restart_f: Callable[[str], None]
|
:type post_svc_restart_f: Callable[[str], None]
|
||||||
:param pre_restarts_wait_f: A function callled before any restarts.
|
:param pre_restarts_wait_f: A function called before any restarts.
|
||||||
:type pre_restarts_wait_f: Callable[None, None]
|
:type pre_restarts_wait_f: Callable[None, None]
|
||||||
:returns: decorator to use a restart_on_change with pausability
|
:returns: decorator to use a restart_on_change with pausability
|
||||||
:rtype: decorator
|
:rtype: decorator
|
||||||
|
@ -1,4 +1,4 @@
|
|||||||
# Copyright 2014-2015 Canonical Limited.
|
# Copyright 2014-2021 Canonical Limited.
|
||||||
#
|
#
|
||||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||||
# you may not use this file except in compliance with the License.
|
# you may not use this file except in compliance with the License.
|
||||||
@ -12,9 +12,6 @@
|
|||||||
# See the License for the specific language governing permissions and
|
# See the License for the specific language governing permissions and
|
||||||
# limitations under the License.
|
# limitations under the License.
|
||||||
|
|
||||||
#
|
|
||||||
# Copyright 2012 Canonical Ltd.
|
|
||||||
#
|
|
||||||
# This file is sourced from lp:openstack-charm-helpers
|
# This file is sourced from lp:openstack-charm-helpers
|
||||||
#
|
#
|
||||||
# Authors:
|
# Authors:
|
||||||
@ -605,7 +602,7 @@ class BasePool(object):
|
|||||||
|
|
||||||
|
|
||||||
class Pool(BasePool):
|
class Pool(BasePool):
|
||||||
"""Compability shim for any descendents external to this library."""
|
"""Compatibility shim for any descendents external to this library."""
|
||||||
|
|
||||||
@deprecate(
|
@deprecate(
|
||||||
'The ``Pool`` baseclass has been replaced by ``BasePool`` class.')
|
'The ``Pool`` baseclass has been replaced by ``BasePool`` class.')
|
||||||
@ -1535,7 +1532,7 @@ def map_block_storage(service, pool, image):
|
|||||||
|
|
||||||
|
|
||||||
def filesystem_mounted(fs):
|
def filesystem_mounted(fs):
|
||||||
"""Determine whether a filesytems is already mounted."""
|
"""Determine whether a filesystem is already mounted."""
|
||||||
return fs in [f for f, m in mounts()]
|
return fs in [f for f, m in mounts()]
|
||||||
|
|
||||||
|
|
||||||
@ -1904,7 +1901,7 @@ class CephBrokerRq(object):
|
|||||||
set the ceph-mon unit handling the broker
|
set the ceph-mon unit handling the broker
|
||||||
request will set its default value.
|
request will set its default value.
|
||||||
:type erasure_profile: str
|
:type erasure_profile: str
|
||||||
:param allow_ec_overwrites: allow EC pools to be overriden
|
:param allow_ec_overwrites: allow EC pools to be overridden
|
||||||
:type allow_ec_overwrites: bool
|
:type allow_ec_overwrites: bool
|
||||||
:raises: AssertionError if provided data is of invalid type/range
|
:raises: AssertionError if provided data is of invalid type/range
|
||||||
"""
|
"""
|
||||||
@ -1949,7 +1946,7 @@ class CephBrokerRq(object):
|
|||||||
:param lrc_locality: Group the coding and data chunks into sets of size locality
|
:param lrc_locality: Group the coding and data chunks into sets of size locality
|
||||||
(lrc plugin)
|
(lrc plugin)
|
||||||
:type lrc_locality: int
|
:type lrc_locality: int
|
||||||
:param durability_estimator: The number of parity chuncks each of which includes
|
:param durability_estimator: The number of parity chunks each of which includes
|
||||||
a data chunk in its calculation range (shec plugin)
|
a data chunk in its calculation range (shec plugin)
|
||||||
:type durability_estimator: int
|
:type durability_estimator: int
|
||||||
:param helper_chunks: The number of helper chunks to use for recovery operations
|
:param helper_chunks: The number of helper chunks to use for recovery operations
|
||||||
@ -2327,7 +2324,7 @@ class CephOSDConfContext(CephConfContext):
|
|||||||
settings are in conf['osd_from_client'] and finally settings which do
|
settings are in conf['osd_from_client'] and finally settings which do
|
||||||
clash are in conf['osd_from_client_conflict']. Rather than silently drop
|
clash are in conf['osd_from_client_conflict']. Rather than silently drop
|
||||||
the conflicting settings they are provided in the context so they can be
|
the conflicting settings they are provided in the context so they can be
|
||||||
rendered commented out to give some visability to the admin.
|
rendered commented out to give some visibility to the admin.
|
||||||
"""
|
"""
|
||||||
|
|
||||||
def __init__(self, permitted_sections=None):
|
def __init__(self, permitted_sections=None):
|
||||||
|
@ -1,4 +1,4 @@
|
|||||||
# Copyright 2014-2015 Canonical Limited.
|
# Copyright 2014-2021 Canonical Limited.
|
||||||
#
|
#
|
||||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||||
# you may not use this file except in compliance with the License.
|
# you may not use this file except in compliance with the License.
|
||||||
@ -27,7 +27,7 @@ from subprocess import (
|
|||||||
##################################################
|
##################################################
|
||||||
def deactivate_lvm_volume_group(block_device):
|
def deactivate_lvm_volume_group(block_device):
|
||||||
'''
|
'''
|
||||||
Deactivate any volume gruop associated with an LVM physical volume.
|
Deactivate any volume group associated with an LVM physical volume.
|
||||||
|
|
||||||
:param block_device: str: Full path to LVM physical volume
|
:param block_device: str: Full path to LVM physical volume
|
||||||
'''
|
'''
|
||||||
|
@ -1,4 +1,4 @@
|
|||||||
# Copyright 2014-2015 Canonical Limited.
|
# Copyright 2013-2021 Canonical Limited.
|
||||||
#
|
#
|
||||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||||
# you may not use this file except in compliance with the License.
|
# you may not use this file except in compliance with the License.
|
||||||
@ -13,7 +13,6 @@
|
|||||||
# limitations under the License.
|
# limitations under the License.
|
||||||
|
|
||||||
"Interactions with the Juju environment"
|
"Interactions with the Juju environment"
|
||||||
# Copyright 2013 Canonical Ltd.
|
|
||||||
#
|
#
|
||||||
# Authors:
|
# Authors:
|
||||||
# Charm Helpers Developers <juju@lists.ubuntu.com>
|
# Charm Helpers Developers <juju@lists.ubuntu.com>
|
||||||
@ -610,7 +609,7 @@ def expected_related_units(reltype=None):
|
|||||||
relation_type()))
|
relation_type()))
|
||||||
|
|
||||||
:param reltype: Relation type to list data for, default is to list data for
|
:param reltype: Relation type to list data for, default is to list data for
|
||||||
the realtion type we are currently executing a hook for.
|
the relation type we are currently executing a hook for.
|
||||||
:type reltype: str
|
:type reltype: str
|
||||||
:returns: iterator
|
:returns: iterator
|
||||||
:rtype: types.GeneratorType
|
:rtype: types.GeneratorType
|
||||||
@ -627,7 +626,7 @@ def expected_related_units(reltype=None):
|
|||||||
|
|
||||||
@cached
|
@cached
|
||||||
def relation_for_unit(unit=None, rid=None):
|
def relation_for_unit(unit=None, rid=None):
|
||||||
"""Get the json represenation of a unit's relation"""
|
"""Get the json representation of a unit's relation"""
|
||||||
unit = unit or remote_unit()
|
unit = unit or remote_unit()
|
||||||
relation = relation_get(unit=unit, rid=rid)
|
relation = relation_get(unit=unit, rid=rid)
|
||||||
for key in relation:
|
for key in relation:
|
||||||
@ -1614,11 +1613,11 @@ def env_proxy_settings(selected_settings=None):
|
|||||||
def _contains_range(addresses):
|
def _contains_range(addresses):
|
||||||
"""Check for cidr or wildcard domain in a string.
|
"""Check for cidr or wildcard domain in a string.
|
||||||
|
|
||||||
Given a string comprising a comma seperated list of ip addresses
|
Given a string comprising a comma separated list of ip addresses
|
||||||
and domain names, determine whether the string contains IP ranges
|
and domain names, determine whether the string contains IP ranges
|
||||||
or wildcard domains.
|
or wildcard domains.
|
||||||
|
|
||||||
:param addresses: comma seperated list of domains and ip addresses.
|
:param addresses: comma separated list of domains and ip addresses.
|
||||||
:type addresses: str
|
:type addresses: str
|
||||||
"""
|
"""
|
||||||
return (
|
return (
|
||||||
|
@ -1,4 +1,4 @@
|
|||||||
# Copyright 2014-2015 Canonical Limited.
|
# Copyright 2014-2021 Canonical Limited.
|
||||||
#
|
#
|
||||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||||
# you may not use this file except in compliance with the License.
|
# you may not use this file except in compliance with the License.
|
||||||
@ -217,7 +217,7 @@ def service_resume(service_name, init_dir="/etc/init",
|
|||||||
initd_dir="/etc/init.d", **kwargs):
|
initd_dir="/etc/init.d", **kwargs):
|
||||||
"""Resume a system service.
|
"""Resume a system service.
|
||||||
|
|
||||||
Reenable starting again at boot. Start the service.
|
Re-enable starting again at boot. Start the service.
|
||||||
|
|
||||||
:param service_name: the name of the service to resume
|
:param service_name: the name of the service to resume
|
||||||
:param init_dir: the path to the init dir
|
:param init_dir: the path to the init dir
|
||||||
@ -727,7 +727,7 @@ class restart_on_change(object):
|
|||||||
:param post_svc_restart_f: A function run after a service has
|
:param post_svc_restart_f: A function run after a service has
|
||||||
restarted.
|
restarted.
|
||||||
:type post_svc_restart_f: Callable[[str], None]
|
:type post_svc_restart_f: Callable[[str], None]
|
||||||
:param pre_restarts_wait_f: A function callled before any restarts.
|
:param pre_restarts_wait_f: A function called before any restarts.
|
||||||
:type pre_restarts_wait_f: Callable[None, None]
|
:type pre_restarts_wait_f: Callable[None, None]
|
||||||
"""
|
"""
|
||||||
self.restart_map = restart_map
|
self.restart_map = restart_map
|
||||||
@ -828,7 +828,7 @@ def restart_on_change_helper(lambda_f, restart_map, stopstart=False,
|
|||||||
:param post_svc_restart_f: A function run after a service has
|
:param post_svc_restart_f: A function run after a service has
|
||||||
restarted.
|
restarted.
|
||||||
:type post_svc_restart_f: Callable[[str], None]
|
:type post_svc_restart_f: Callable[[str], None]
|
||||||
:param pre_restarts_wait_f: A function callled before any restarts.
|
:param pre_restarts_wait_f: A function called before any restarts.
|
||||||
:type pre_restarts_wait_f: Callable[None, None]
|
:type pre_restarts_wait_f: Callable[None, None]
|
||||||
:returns: result of lambda_f()
|
:returns: result of lambda_f()
|
||||||
:rtype: ANY
|
:rtype: ANY
|
||||||
@ -880,7 +880,7 @@ def _post_restart_on_change_helper(checksums,
|
|||||||
:param post_svc_restart_f: A function run after a service has
|
:param post_svc_restart_f: A function run after a service has
|
||||||
restarted.
|
restarted.
|
||||||
:type post_svc_restart_f: Callable[[str], None]
|
:type post_svc_restart_f: Callable[[str], None]
|
||||||
:param pre_restarts_wait_f: A function callled before any restarts.
|
:param pre_restarts_wait_f: A function called before any restarts.
|
||||||
:type pre_restarts_wait_f: Callable[None, None]
|
:type pre_restarts_wait_f: Callable[None, None]
|
||||||
"""
|
"""
|
||||||
if restart_functions is None:
|
if restart_functions is None:
|
||||||
@ -914,7 +914,7 @@ def _post_restart_on_change_helper(checksums,
|
|||||||
|
|
||||||
|
|
||||||
def pwgen(length=None):
|
def pwgen(length=None):
|
||||||
"""Generate a random pasword."""
|
"""Generate a random password."""
|
||||||
if length is None:
|
if length is None:
|
||||||
# A random length is ok to use a weak PRNG
|
# A random length is ok to use a weak PRNG
|
||||||
length = random.choice(range(35, 45))
|
length = random.choice(range(35, 45))
|
||||||
|
@ -28,6 +28,7 @@ UBUNTU_RELEASES = (
|
|||||||
'focal',
|
'focal',
|
||||||
'groovy',
|
'groovy',
|
||||||
'hirsute',
|
'hirsute',
|
||||||
|
'impish',
|
||||||
)
|
)
|
||||||
|
|
||||||
|
|
||||||
|
@ -18,8 +18,11 @@
|
|||||||
import six
|
import six
|
||||||
import re
|
import re
|
||||||
|
|
||||||
|
TRUTHY_STRINGS = {'y', 'yes', 'true', 't', 'on'}
|
||||||
|
FALSEY_STRINGS = {'n', 'no', 'false', 'f', 'off'}
|
||||||
|
|
||||||
def bool_from_string(value):
|
|
||||||
|
def bool_from_string(value, truthy_strings=TRUTHY_STRINGS, falsey_strings=FALSEY_STRINGS, assume_false=False):
|
||||||
"""Interpret string value as boolean.
|
"""Interpret string value as boolean.
|
||||||
|
|
||||||
Returns True if value translates to True otherwise False.
|
Returns True if value translates to True otherwise False.
|
||||||
@ -32,9 +35,9 @@ def bool_from_string(value):
|
|||||||
|
|
||||||
value = value.strip().lower()
|
value = value.strip().lower()
|
||||||
|
|
||||||
if value in ['y', 'yes', 'true', 't', 'on']:
|
if value in truthy_strings:
|
||||||
return True
|
return True
|
||||||
elif value in ['n', 'no', 'false', 'f', 'off']:
|
elif value in falsey_strings or assume_false:
|
||||||
return False
|
return False
|
||||||
|
|
||||||
msg = "Unable to interpret string value '%s' as boolean" % (value)
|
msg = "Unable to interpret string value '%s' as boolean" % (value)
|
||||||
|
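Note: the two hunks above (apparently charmhelpers/core/strutils.py, given the `six`/`re` imports) let callers of `bool_from_string` supply their own truthy/falsey vocabularies and opt into treating unrecognised values as False instead of an error. A short, hedged usage sketch; the module path and the custom vocabularies are assumptions for illustration:

    # Sketch: module path assumed to be charmhelpers.core.strutils (the diff
    # does not show file names); the custom string sets are illustrative only.
    from charmhelpers.core.strutils import bool_from_string

    bool_from_string('Yes')    # True, default vocabulary
    bool_from_string('off')    # False, default vocabulary

    # New keyword arguments added by this sync:
    bool_from_string('enabled',
                     truthy_strings={'enabled'},
                     falsey_strings={'disabled'})          # True
    bool_from_string('anything-else', assume_false=True)   # False, no exception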
@ -1,7 +1,7 @@
|
|||||||
#!/usr/bin/env python
|
#!/usr/bin/env python
|
||||||
# -*- coding: utf-8 -*-
|
# -*- coding: utf-8 -*-
|
||||||
#
|
#
|
||||||
# Copyright 2014-2015 Canonical Limited.
|
# Copyright 2014-2021 Canonical Limited.
|
||||||
#
|
#
|
||||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||||
# you may not use this file except in compliance with the License.
|
# you may not use this file except in compliance with the License.
|
||||||
@ -61,7 +61,7 @@ Here's a fully worked integration example using hookenv.Hooks::
|
|||||||
'previous value', prev,
|
'previous value', prev,
|
||||||
'current value', cur)
|
'current value', cur)
|
||||||
|
|
||||||
# Get some unit specific bookeeping
|
# Get some unit specific bookkeeping
|
||||||
if not db.get('pkg_key'):
|
if not db.get('pkg_key'):
|
||||||
key = urllib.urlopen('https://example.com/pkg_key').read()
|
key = urllib.urlopen('https://example.com/pkg_key').read()
|
||||||
db.set('pkg_key', key)
|
db.set('pkg_key', key)
|
||||||
@ -449,7 +449,7 @@ class HookData(object):
|
|||||||
'previous value', prev,
|
'previous value', prev,
|
||||||
'current value', cur)
|
'current value', cur)
|
||||||
|
|
||||||
# Get some unit specific bookeeping
|
# Get some unit specific bookkeeping
|
||||||
if not db.get('pkg_key'):
|
if not db.get('pkg_key'):
|
||||||
key = urllib.urlopen('https://example.com/pkg_key').read()
|
key = urllib.urlopen('https://example.com/pkg_key').read()
|
||||||
db.set('pkg_key', key)
|
db.set('pkg_key', key)
|
||||||
|
@ -1,4 +1,4 @@
|
|||||||
# Copyright 2014-2015 Canonical Limited.
|
# Copyright 2014-2021 Canonical Limited.
|
||||||
#
|
#
|
||||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||||
# you may not use this file except in compliance with the License.
|
# you may not use this file except in compliance with the License.
|
||||||
@ -106,6 +106,8 @@ if __platform__ == "ubuntu":
|
|||||||
apt_pkg = fetch.ubuntu_apt_pkg
|
apt_pkg = fetch.ubuntu_apt_pkg
|
||||||
get_apt_dpkg_env = fetch.get_apt_dpkg_env
|
get_apt_dpkg_env = fetch.get_apt_dpkg_env
|
||||||
get_installed_version = fetch.get_installed_version
|
get_installed_version = fetch.get_installed_version
|
||||||
|
OPENSTACK_RELEASES = fetch.OPENSTACK_RELEASES
|
||||||
|
UBUNTU_OPENSTACK_RELEASE = fetch.UBUNTU_OPENSTACK_RELEASE
|
||||||
elif __platform__ == "centos":
|
elif __platform__ == "centos":
|
||||||
yum_search = fetch.yum_search
|
yum_search = fetch.yum_search
|
||||||
|
|
||||||
@ -203,7 +205,7 @@ def plugins(fetch_handlers=None):
|
|||||||
classname)
|
classname)
|
||||||
plugin_list.append(handler_class())
|
plugin_list.append(handler_class())
|
||||||
except NotImplementedError:
|
except NotImplementedError:
|
||||||
# Skip missing plugins so that they can be ommitted from
|
# Skip missing plugins so that they can be omitted from
|
||||||
# installation if desired
|
# installation if desired
|
||||||
log("FetchHandler {} not found, skipping plugin".format(
|
log("FetchHandler {} not found, skipping plugin".format(
|
||||||
handler_name))
|
handler_name))
|
||||||
|
@ -1,7 +1,7 @@
|
|||||||
#!/usr/bin/env python
|
#!/usr/bin/env python
|
||||||
# coding: utf-8
|
# coding: utf-8
|
||||||
|
|
||||||
# Copyright 2014-2015 Canonical Limited.
|
# Copyright 2014-2021 Canonical Limited.
|
||||||
#
|
#
|
||||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||||
# you may not use this file except in compliance with the License.
|
# you may not use this file except in compliance with the License.
|
||||||
@ -27,7 +27,7 @@ __author__ = "Jorge Niedbalski <jorge.niedbalski@canonical.com>"
|
|||||||
|
|
||||||
|
|
||||||
def pip_execute(*args, **kwargs):
|
def pip_execute(*args, **kwargs):
|
||||||
"""Overriden pip_execute() to stop sys.path being changed.
|
"""Overridden pip_execute() to stop sys.path being changed.
|
||||||
|
|
||||||
The act of importing main from the pip module seems to cause add wheels
|
The act of importing main from the pip module seems to cause add wheels
|
||||||
from the /usr/share/python-wheels which are installed by various tools.
|
from the /usr/share/python-wheels which are installed by various tools.
|
||||||
@ -142,8 +142,10 @@ def pip_create_virtualenv(path=None):
|
|||||||
"""Create an isolated Python environment."""
|
"""Create an isolated Python environment."""
|
||||||
if six.PY2:
|
if six.PY2:
|
||||||
apt_install('python-virtualenv')
|
apt_install('python-virtualenv')
|
||||||
|
extra_flags = []
|
||||||
else:
|
else:
|
||||||
apt_install('python3-virtualenv')
|
apt_install(['python3-virtualenv', 'virtualenv'])
|
||||||
|
extra_flags = ['--python=python3']
|
||||||
|
|
||||||
if path:
|
if path:
|
||||||
venv_path = path
|
venv_path = path
|
||||||
@ -151,4 +153,4 @@ def pip_create_virtualenv(path=None):
|
|||||||
venv_path = os.path.join(charm_dir(), 'venv')
|
venv_path = os.path.join(charm_dir(), 'venv')
|
||||||
|
|
||||||
if not os.path.exists(venv_path):
|
if not os.path.exists(venv_path):
|
||||||
subprocess.check_call(['virtualenv', venv_path])
|
subprocess.check_call(['virtualenv', venv_path] + extra_flags)
|
||||||
|
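Note: under Python 3, `pip_create_virtualenv` now installs the `virtualenv` package alongside `python3-virtualenv` and passes `--python=python3` when creating the environment, so the venv matches the interpreter the charm runs under. A hedged sketch of a caller, assuming a charm hook context (apt and `charm_dir()` are available); the import path and target path are assumptions:

    # Sketch: import path assumed (the diff does not show file names); must run
    # inside a charm hook so that apt_install() and charm_dir() work.
    from charmhelpers.fetch.python.packages import pip_create_virtualenv

    # Under Python 3 this now effectively runs:
    #   virtualenv --python=python3 /srv/example/venv
    pip_create_virtualenv(path='/srv/example/venv')   # path is illustrative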
@ -1,4 +1,4 @@
|
|||||||
# Copyright 2014-2017 Canonical Limited.
|
# Copyright 2014-2021 Canonical Limited.
|
||||||
#
|
#
|
||||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||||
# you may not use this file except in compliance with the License.
|
# you may not use this file except in compliance with the License.
|
||||||
@ -65,7 +65,7 @@ def _snap_exec(commands):
|
|||||||
retry_count += + 1
|
retry_count += + 1
|
||||||
if retry_count > SNAP_NO_LOCK_RETRY_COUNT:
|
if retry_count > SNAP_NO_LOCK_RETRY_COUNT:
|
||||||
raise CouldNotAcquireLockException(
|
raise CouldNotAcquireLockException(
|
||||||
'Could not aquire lock after {} attempts'
|
'Could not acquire lock after {} attempts'
|
||||||
.format(SNAP_NO_LOCK_RETRY_COUNT))
|
.format(SNAP_NO_LOCK_RETRY_COUNT))
|
||||||
return_code = e.returncode
|
return_code = e.returncode
|
||||||
log('Snap failed to acquire lock, trying again in {} seconds.'
|
log('Snap failed to acquire lock, trying again in {} seconds.'
|
||||||
|
@ -1,4 +1,4 @@
|
|||||||
# Copyright 2014-2015 Canonical Limited.
|
# Copyright 2014-2021 Canonical Limited.
|
||||||
#
|
#
|
||||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||||
# you may not use this file except in compliance with the License.
|
# you may not use this file except in compliance with the License.
|
||||||
@ -208,12 +208,79 @@ CLOUD_ARCHIVE_POCKETS = {
|
|||||||
'wallaby/proposed': 'focal-proposed/wallaby',
|
'wallaby/proposed': 'focal-proposed/wallaby',
|
||||||
'focal-wallaby/proposed': 'focal-proposed/wallaby',
|
'focal-wallaby/proposed': 'focal-proposed/wallaby',
|
||||||
'focal-proposed/wallaby': 'focal-proposed/wallaby',
|
'focal-proposed/wallaby': 'focal-proposed/wallaby',
|
||||||
|
# Xena
|
||||||
|
'xena': 'focal-updates/xena',
|
||||||
|
'focal-xena': 'focal-updates/xena',
|
||||||
|
'focal-xena/updates': 'focal-updates/xena',
|
||||||
|
'focal-updates/xena': 'focal-updates/xena',
|
||||||
|
'xena/proposed': 'focal-proposed/xena',
|
||||||
|
'focal-xena/proposed': 'focal-proposed/xena',
|
||||||
|
'focal-proposed/xena': 'focal-proposed/xena',
|
||||||
|
# Yoga
|
||||||
|
'yoga': 'focal-updates/yoga',
|
||||||
|
'focal-yoga': 'focal-updates/yoga',
|
||||||
|
'focal-yoga/updates': 'focal-updates/yoga',
|
||||||
|
'focal-updates/yoga': 'focal-updates/yoga',
|
||||||
|
'yoga/proposed': 'focal-proposed/yoga',
|
||||||
|
'focal-yoga/proposed': 'focal-proposed/yoga',
|
||||||
|
'focal-proposed/yoga': 'focal-proposed/yoga',
|
||||||
}
|
}
|
||||||
|
|
||||||
|
|
||||||
|
OPENSTACK_RELEASES = (
|
||||||
|
'diablo',
|
||||||
|
'essex',
|
||||||
|
'folsom',
|
||||||
|
'grizzly',
|
||||||
|
'havana',
|
||||||
|
'icehouse',
|
||||||
|
'juno',
|
||||||
|
'kilo',
|
||||||
|
'liberty',
|
||||||
|
'mitaka',
|
||||||
|
'newton',
|
||||||
|
'ocata',
|
||||||
|
'pike',
|
||||||
|
'queens',
|
||||||
|
'rocky',
|
||||||
|
'stein',
|
||||||
|
'train',
|
||||||
|
'ussuri',
|
||||||
|
'victoria',
|
||||||
|
'wallaby',
|
||||||
|
'xena',
|
||||||
|
'yoga',
|
||||||
|
)
|
||||||
|
|
||||||
|
|
||||||
|
UBUNTU_OPENSTACK_RELEASE = OrderedDict([
|
||||||
|
('oneiric', 'diablo'),
|
||||||
|
('precise', 'essex'),
|
||||||
|
('quantal', 'folsom'),
|
||||||
|
('raring', 'grizzly'),
|
||||||
|
('saucy', 'havana'),
|
||||||
|
('trusty', 'icehouse'),
|
||||||
|
('utopic', 'juno'),
|
||||||
|
('vivid', 'kilo'),
|
||||||
|
('wily', 'liberty'),
|
||||||
|
('xenial', 'mitaka'),
|
||||||
|
('yakkety', 'newton'),
|
||||||
|
('zesty', 'ocata'),
|
||||||
|
('artful', 'pike'),
|
||||||
|
('bionic', 'queens'),
|
||||||
|
('cosmic', 'rocky'),
|
||||||
|
('disco', 'stein'),
|
||||||
|
('eoan', 'train'),
|
||||||
|
('focal', 'ussuri'),
|
||||||
|
('groovy', 'victoria'),
|
||||||
|
('hirsute', 'wallaby'),
|
||||||
|
('impish', 'xena'),
|
||||||
|
])
|
||||||
|
|
||||||
|
|
||||||
APT_NO_LOCK = 100 # The return code for "couldn't acquire lock" in APT.
|
APT_NO_LOCK = 100 # The return code for "couldn't acquire lock" in APT.
|
||||||
CMD_RETRY_DELAY = 10 # Wait 10 seconds between command retries.
|
CMD_RETRY_DELAY = 10 # Wait 10 seconds between command retries.
|
||||||
CMD_RETRY_COUNT = 3 # Retry a failing fatal command X times.
|
CMD_RETRY_COUNT = 10 # Retry a failing fatal command X times.
|
||||||
|
|
||||||
|
|
||||||
def filter_installed_packages(packages):
|
def filter_installed_packages(packages):
|
||||||
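Note: this hunk adds the Xena and Yoga pocket aliases, relocates `OPENSTACK_RELEASES` and `UBUNTU_OPENSTACK_RELEASE` into this module (presumably charmhelpers/fetch/ubuntu.py), and raises `CMD_RETRY_COUNT` from 3 to 10 retries for fatal apt commands. A small sketch of how the new aliases resolve; the module path is an assumption:

    # Sketch: module path assumed to be charmhelpers.fetch.ubuntu.
    from charmhelpers.fetch import ubuntu

    # Every user-facing spelling of the Xena UCA maps onto the same pocket:
    for alias in ('xena', 'focal-xena', 'focal-xena/updates'):
        assert ubuntu.CLOUD_ARCHIVE_POCKETS[alias] == 'focal-updates/xena'
    assert ubuntu.CLOUD_ARCHIVE_POCKETS['xena/proposed'] == 'focal-proposed/xena'

    # The Ubuntu-series-to-OpenStack table gained impish -> xena:
    assert ubuntu.UBUNTU_OPENSTACK_RELEASE['impish'] == 'xena'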
@ -246,9 +313,9 @@ def filter_missing_packages(packages):
|
|||||||
def apt_cache(*_, **__):
|
def apt_cache(*_, **__):
|
||||||
"""Shim returning an object simulating the apt_pkg Cache.
|
"""Shim returning an object simulating the apt_pkg Cache.
|
||||||
|
|
||||||
:param _: Accept arguments for compability, not used.
|
:param _: Accept arguments for compatibility, not used.
|
||||||
:type _: any
|
:type _: any
|
||||||
:param __: Accept keyword arguments for compability, not used.
|
:param __: Accept keyword arguments for compatibility, not used.
|
||||||
:type __: any
|
:type __: any
|
||||||
:returns:Object used to interrogate the system apt and dpkg databases.
|
:returns:Object used to interrogate the system apt and dpkg databases.
|
||||||
:rtype:ubuntu_apt_pkg.Cache
|
:rtype:ubuntu_apt_pkg.Cache
|
||||||
@ -283,7 +350,7 @@ def apt_install(packages, options=None, fatal=False, quiet=False):
|
|||||||
:param fatal: Whether the command's output should be checked and
|
:param fatal: Whether the command's output should be checked and
|
||||||
retried.
|
retried.
|
||||||
:type fatal: bool
|
:type fatal: bool
|
||||||
:param quiet: if True (default), supress log message to stdout/stderr
|
:param quiet: if True (default), suppress log message to stdout/stderr
|
||||||
:type quiet: bool
|
:type quiet: bool
|
||||||
:raises: subprocess.CalledProcessError
|
:raises: subprocess.CalledProcessError
|
||||||
"""
|
"""
|
||||||
@ -397,7 +464,7 @@ def import_key(key):
|
|||||||
A Radix64 format keyid is also supported for backwards
|
A Radix64 format keyid is also supported for backwards
|
||||||
compatibility. In this case Ubuntu keyserver will be
|
compatibility. In this case Ubuntu keyserver will be
|
||||||
queried for a key via HTTPS by its keyid. This method
|
queried for a key via HTTPS by its keyid. This method
|
||||||
is less preferrable because https proxy servers may
|
is less preferable because https proxy servers may
|
||||||
require traffic decryption which is equivalent to a
|
require traffic decryption which is equivalent to a
|
||||||
man-in-the-middle attack (a proxy server impersonates
|
man-in-the-middle attack (a proxy server impersonates
|
||||||
keyserver TLS certificates and has to be explicitly
|
keyserver TLS certificates and has to be explicitly
|
||||||
@ -574,6 +641,10 @@ def add_source(source, key=None, fail_invalid=False):
|
|||||||
with be used. If staging is NOT used then the cloud archive [3] will be
|
with be used. If staging is NOT used then the cloud archive [3] will be
|
||||||
added, and the 'ubuntu-cloud-keyring' package will be added for the
|
added, and the 'ubuntu-cloud-keyring' package will be added for the
|
||||||
current distro.
|
current distro.
|
||||||
|
'<openstack-version>': translate to cloud:<release> based on the current
|
||||||
|
distro version (i.e. for 'ussuri' this will either be 'bionic-ussuri' or
|
||||||
|
'distro'.
|
||||||
|
'<openstack-version>/proposed': as above, but for proposed.
|
||||||
|
|
||||||
Otherwise the source is not recognised and this is logged to the juju log.
|
Otherwise the source is not recognised and this is logged to the juju log.
|
||||||
However, no error is raised, unless sys_error_on_exit is True.
|
However, no error is raised, unless sys_error_on_exit is True.
|
||||||
@ -592,7 +663,7 @@ def add_source(source, key=None, fail_invalid=False):
|
|||||||
id may also be used, but be aware that only insecure protocols are
|
id may also be used, but be aware that only insecure protocols are
|
||||||
available to retrieve the actual public key from a public keyserver
|
available to retrieve the actual public key from a public keyserver
|
||||||
placing your Juju environment at risk. ppa and cloud archive keys
|
placing your Juju environment at risk. ppa and cloud archive keys
|
||||||
are securely added automtically, so sould not be provided.
|
are securely added automatically, so should not be provided.
|
||||||
|
|
||||||
@param fail_invalid: (boolean) if True, then the function raises a
|
@param fail_invalid: (boolean) if True, then the function raises a
|
||||||
SourceConfigError is there is no matching installation source.
|
SourceConfigError is there is no matching installation source.
|
||||||
@ -600,6 +671,12 @@ def add_source(source, key=None, fail_invalid=False):
|
|||||||
@raises SourceConfigError() if for cloud:<pocket>, the <pocket> is not a
|
@raises SourceConfigError() if for cloud:<pocket>, the <pocket> is not a
|
||||||
valid pocket in CLOUD_ARCHIVE_POCKETS
|
valid pocket in CLOUD_ARCHIVE_POCKETS
|
||||||
"""
|
"""
|
||||||
|
# extract the OpenStack versions from the CLOUD_ARCHIVE_POCKETS; can't use
|
||||||
|
# the list in contrib.openstack.utils as it might not be included in
|
||||||
|
# classic charms and would break everything. Having OpenStack specific
|
||||||
|
# code in this file is a bit of an antipattern, anyway.
|
||||||
|
os_versions_regex = "({})".format("|".join(OPENSTACK_RELEASES))
|
||||||
|
|
||||||
_mapping = OrderedDict([
|
_mapping = OrderedDict([
|
||||||
(r"^distro$", lambda: None), # This is a NOP
|
(r"^distro$", lambda: None), # This is a NOP
|
||||||
(r"^(?:proposed|distro-proposed)$", _add_proposed),
|
(r"^(?:proposed|distro-proposed)$", _add_proposed),
|
||||||
@ -609,6 +686,9 @@ def add_source(source, key=None, fail_invalid=False):
|
|||||||
(r"^cloud:(.*)-(.*)$", _add_cloud_distro_check),
|
(r"^cloud:(.*)-(.*)$", _add_cloud_distro_check),
|
||||||
(r"^cloud:(.*)$", _add_cloud_pocket),
|
(r"^cloud:(.*)$", _add_cloud_pocket),
|
||||||
(r"^snap:.*-(.*)-(.*)$", _add_cloud_distro_check),
|
(r"^snap:.*-(.*)-(.*)$", _add_cloud_distro_check),
|
||||||
|
(r"^{}\/proposed$".format(os_versions_regex),
|
||||||
|
_add_bare_openstack_proposed),
|
||||||
|
(r"^{}$".format(os_versions_regex), _add_bare_openstack),
|
||||||
])
|
])
|
||||||
if source is None:
|
if source is None:
|
||||||
source = ''
|
source = ''
|
||||||
@ -640,7 +720,7 @@ def _add_proposed():
|
|||||||
Uses get_distrib_codename to determine the correct stanza for
|
Uses get_distrib_codename to determine the correct stanza for
|
||||||
the deb line.
|
the deb line.
|
||||||
|
|
||||||
For intel architecutres PROPOSED_POCKET is used for the release, but for
|
For Intel architectures PROPOSED_POCKET is used for the release, but for
|
||||||
other architectures PROPOSED_PORTS_POCKET is used for the release.
|
other architectures PROPOSED_PORTS_POCKET is used for the release.
|
||||||
"""
|
"""
|
||||||
release = get_distrib_codename()
|
release = get_distrib_codename()
|
||||||
@ -662,7 +742,8 @@ def _add_apt_repository(spec):
|
|||||||
series = get_distrib_codename()
|
series = get_distrib_codename()
|
||||||
spec = spec.replace('{series}', series)
|
spec = spec.replace('{series}', series)
|
||||||
_run_with_retries(['add-apt-repository', '--yes', spec],
|
_run_with_retries(['add-apt-repository', '--yes', spec],
|
||||||
cmd_env=env_proxy_settings(['https', 'http']))
|
cmd_env=env_proxy_settings(['https', 'http', 'no_proxy'])
|
||||||
|
)
|
||||||
|
|
||||||
|
|
||||||
def _add_cloud_pocket(pocket):
|
def _add_cloud_pocket(pocket):
|
||||||
@ -738,6 +819,73 @@ def _verify_is_ubuntu_rel(release, os_release):
|
|||||||
'version ({})'.format(release, os_release, ubuntu_rel))
|
'version ({})'.format(release, os_release, ubuntu_rel))
|
||||||
|
|
||||||
|
|
||||||
|
def _add_bare_openstack(openstack_release):
|
||||||
|
"""Add cloud or distro based on the release given.
|
||||||
|
|
||||||
|
The spec given is, say, 'ussuri', but this could apply cloud:bionic-ussuri
|
||||||
|
or 'distro' depending on whether the ubuntu release is bionic or focal.
|
||||||
|
|
||||||
|
:param openstack_release: the OpenStack codename to determine the release
|
||||||
|
for.
|
||||||
|
:type openstack_release: str
|
||||||
|
:raises: SourceConfigError
|
||||||
|
"""
|
||||||
|
# TODO(ajkavanagh) - surely this means we should be removing cloud archives
|
||||||
|
# if they exist?
|
||||||
|
__add_bare_helper(openstack_release, "{}-{}", lambda: None)
|
||||||
|
|
||||||
|
|
||||||
|
def _add_bare_openstack_proposed(openstack_release):
|
||||||
|
"""Add cloud of distro but with proposed.
|
||||||
|
|
||||||
|
The spec given is, say, 'ussuri' but this could apply
|
||||||
|
cloud:bionic-ussuri/proposed or 'distro/proposed' depending on whether the
|
||||||
|
ubuntu release is bionic or focal.
|
||||||
|
|
||||||
|
:param openstack_release: the OpenStack codename to determine the release
|
||||||
|
for.
|
||||||
|
:type openstack_release: str
|
||||||
|
:raises: SourceConfigError
|
||||||
|
"""
|
||||||
|
__add_bare_helper(openstack_release, "{}-{}/proposed", _add_proposed)
|
||||||
|
|
||||||
|
|
||||||
|
def __add_bare_helper(openstack_release, pocket_format, final_function):
|
||||||
|
"""Helper for _add_bare_openstack[_proposed]
|
||||||
|
|
||||||
|
The bulk of the work between the two functions is exactly the same except
|
||||||
|
for the pocket format and the function that is run if it's the distro
|
||||||
|
version.
|
||||||
|
|
||||||
|
:param openstack_release: the OpenStack codename. e.g. ussuri
|
||||||
|
:type openstack_release: str
|
||||||
|
:param pocket_format: the pocket formatter string to construct a pocket str
|
||||||
|
from the openstack_release and the current ubuntu version.
|
||||||
|
:type pocket_format: str
|
||||||
|
:param final_function: the function to call if it is the distro version.
|
||||||
|
:type final_function: Callable
|
||||||
|
:raises SourceConfigError on error
|
||||||
|
"""
|
||||||
|
ubuntu_version = get_distrib_codename()
|
||||||
|
possible_pocket = pocket_format.format(ubuntu_version, openstack_release)
|
||||||
|
if possible_pocket in CLOUD_ARCHIVE_POCKETS:
|
||||||
|
_add_cloud_pocket(possible_pocket)
|
||||||
|
return
|
||||||
|
# Otherwise it's almost certainly the distro version; verify that it
|
||||||
|
# exists.
|
||||||
|
try:
|
||||||
|
assert UBUNTU_OPENSTACK_RELEASE[ubuntu_version] == openstack_release
|
||||||
|
except KeyError:
|
||||||
|
raise SourceConfigError(
|
||||||
|
"Invalid ubuntu version {} isn't known to this library"
|
||||||
|
.format(ubuntu_version))
|
||||||
|
except AssertionError:
|
||||||
|
raise SourceConfigError(
|
||||||
|
'Invalid OpenStack release specified: {} for Ubuntu version {}'
|
||||||
|
.format(openstack_release, ubuntu_version))
|
||||||
|
final_function()
|
||||||
|
|
||||||
|
|
||||||
def _run_with_retries(cmd, max_retries=CMD_RETRY_COUNT, retry_exitcodes=(1,),
|
def _run_with_retries(cmd, max_retries=CMD_RETRY_COUNT, retry_exitcodes=(1,),
|
||||||
retry_message="", cmd_env=None, quiet=False):
|
retry_message="", cmd_env=None, quiet=False):
|
||||||
"""Run a command and retry until success or max_retries is reached.
|
"""Run a command and retry until success or max_retries is reached.
|
||||||
|
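Note: the new `_add_bare_openstack` / `_add_bare_openstack_proposed` helpers (wired into `_mapping` above) let `add_source()` accept a bare OpenStack codename: if `<series>-<release>` is a known cloud-archive pocket it is configured, otherwise the codename must match the series' own default release (in which case nothing needs adding) or a `SourceConfigError` is raised. A hedged sketch on a hypothetical focal unit inside a charm hook:

    # Sketch: assumes a charm hook environment on an Ubuntu 20.04 (focal) unit.
    from charmhelpers.fetch import add_source

    add_source('xena')            # focal-xena is a UCA pocket -> cloud:focal-xena
    add_source('xena/proposed')   # -> cloud-archive pocket focal-proposed/xena
    add_source('ussuri')          # focal ships ussuri natively -> no-op ('distro')
    # add_source('victoria') on an impish unit would raise SourceConfigError,
    # since impish-victoria is neither a pocket nor impish's default release.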
@ -1,4 +1,4 @@
|
|||||||
# Copyright 2019 Canonical Ltd
|
# Copyright 2019-2021 Canonical Ltd
|
||||||
#
|
#
|
||||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||||
# you may not use this file except in compliance with the License.
|
# you may not use this file except in compliance with the License.
|
||||||
@ -209,7 +209,7 @@ sys.modules[__name__].config = Config()
|
|||||||
|
|
||||||
|
|
||||||
def init():
|
def init():
|
||||||
"""Compability shim that does nothing."""
|
"""Compatibility shim that does nothing."""
|
||||||
pass
|
pass
|
||||||
|
|
||||||
|
|
||||||
@ -264,7 +264,7 @@ def version_compare(a, b):
|
|||||||
else:
|
else:
|
||||||
raise RuntimeError('Unable to compare "{}" and "{}", according to '
|
raise RuntimeError('Unable to compare "{}" and "{}", according to '
|
||||||
'our logic they are neither greater, equal nor '
|
'our logic they are neither greater, equal nor '
|
||||||
'less than each other.')
|
'less than each other.'.format(a, b))
|
||||||
|
|
||||||
|
|
||||||
class PkgVersion():
|
class PkgVersion():
|
||||||
|
@ -28,6 +28,9 @@ def get_platform():
|
|||||||
elif "elementary" in current_platform:
|
elif "elementary" in current_platform:
|
||||||
# ElementaryOS fails to run tests locally without this.
|
# ElementaryOS fails to run tests locally without this.
|
||||||
return "ubuntu"
|
return "ubuntu"
|
||||||
|
elif "Pop!_OS" in current_platform:
|
||||||
|
# Pop!_OS also fails to run tests locally without this.
|
||||||
|
return "ubuntu"
|
||||||
else:
|
else:
|
||||||
raise RuntimeError("This module is not supported on {}."
|
raise RuntimeError("This module is not supported on {}."
|
||||||
.format(current_platform))
|
.format(current_platform))
|
||||||
|
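Note: `get_platform()` (in charmhelpers/osplatform.py, by the look of it) now treats Pop!_OS like elementary OS, i.e. as "ubuntu", so tests and tooling run on a Pop!_OS workstation take the Ubuntu code paths. Minimal sketch:

    # Sketch: on Ubuntu, elementary OS and (after this change) Pop!_OS this
    # returns "ubuntu"; on CentOS it returns "centos".
    from charmhelpers.osplatform import get_platform

    print(get_platform())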
@ -79,9 +79,9 @@ class Crushmap(object):
|
|||||||
stdin=crush.stdout)
|
stdin=crush.stdout)
|
||||||
.decode('UTF-8'))
|
.decode('UTF-8'))
|
||||||
except CalledProcessError as e:
|
except CalledProcessError as e:
|
||||||
log("Error occured while loading and decompiling CRUSH map:"
|
log("Error occurred while loading and decompiling CRUSH map:"
|
||||||
"{}".format(e), ERROR)
|
"{}".format(e), ERROR)
|
||||||
raise "Failed to read CRUSH map"
|
raise
|
||||||
|
|
||||||
def ensure_bucket_is_present(self, bucket_name):
|
def ensure_bucket_is_present(self, bucket_name):
|
||||||
if bucket_name not in [bucket.name for bucket in self.buckets()]:
|
if bucket_name not in [bucket.name for bucket in self.buckets()]:
|
||||||
@ -111,7 +111,7 @@ class Crushmap(object):
|
|||||||
return ceph_output
|
return ceph_output
|
||||||
except CalledProcessError as e:
|
except CalledProcessError as e:
|
||||||
log("save error: {}".format(e))
|
log("save error: {}".format(e))
|
||||||
raise "Failed to save CRUSH map."
|
raise
|
||||||
|
|
||||||
def build_crushmap(self):
|
def build_crushmap(self):
|
||||||
"""Modifies the current CRUSH map to include the new buckets"""
|
"""Modifies the current CRUSH map to include the new buckets"""
|
||||||
|
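Note: both Crushmap hunks above replace `raise "<message>"` with a bare `raise`. Raising a string is invalid in Python 3, so the old code turned a failed CRUSH map command (presumably the ceph/crushtool subprocess) into an unrelated `TypeError` and hid the original `CalledProcessError`; the bare `raise` re-raises it intact. A standalone illustration of the difference, where the failing command is just a stand-in:

    import subprocess

    try:
        try:
            subprocess.check_call(['false'])         # stand-in for the ceph call
        except subprocess.CalledProcessError:
            raise "Failed to read CRUSH map"         # old behaviour
    except TypeError as err:
        # Python 3: "exceptions must derive from BaseException" -- the real
        # CalledProcessError only survives as err.__context__.
        print(err, '| original:', repr(err.__context__))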
@ -14,6 +14,7 @@
|
|||||||
|
|
||||||
import collections
|
import collections
|
||||||
import glob
|
import glob
|
||||||
|
import itertools
|
||||||
import json
|
import json
|
||||||
import os
|
import os
|
||||||
import pyudev
|
import pyudev
|
||||||
@ -24,6 +25,7 @@ import subprocess
|
|||||||
import sys
|
import sys
|
||||||
import time
|
import time
|
||||||
import uuid
|
import uuid
|
||||||
|
import functools
|
||||||
|
|
||||||
from contextlib import contextmanager
|
from contextlib import contextmanager
|
||||||
from datetime import datetime
|
from datetime import datetime
|
||||||
@ -501,30 +503,33 @@ def ceph_user():
|
|||||||
|
|
||||||
|
|
||||||
class CrushLocation(object):
|
class CrushLocation(object):
|
||||||
def __init__(self,
|
def __init__(self, identifier, name, osd="", host="", chassis="",
|
||||||
name,
|
rack="", row="", pdu="", pod="", room="",
|
||||||
identifier,
|
datacenter="", zone="", region="", root=""):
|
||||||
host,
|
|
||||||
rack,
|
|
||||||
row,
|
|
||||||
datacenter,
|
|
||||||
chassis,
|
|
||||||
root):
|
|
||||||
self.name = name
|
|
||||||
self.identifier = identifier
|
self.identifier = identifier
|
||||||
|
self.name = name
|
||||||
|
self.osd = osd
|
||||||
self.host = host
|
self.host = host
|
||||||
|
self.chassis = chassis
|
||||||
self.rack = rack
|
self.rack = rack
|
||||||
self.row = row
|
self.row = row
|
||||||
|
self.pdu = pdu
|
||||||
|
self.pod = pod
|
||||||
|
self.room = room
|
||||||
self.datacenter = datacenter
|
self.datacenter = datacenter
|
||||||
self.chassis = chassis
|
self.zone = zone
|
||||||
|
self.region = region
|
||||||
self.root = root
|
self.root = root
|
||||||
|
|
||||||
def __str__(self):
|
def __str__(self):
|
||||||
return "name: {} id: {} host: {} rack: {} row: {} datacenter: {} " \
|
return "name: {} id: {} osd: {} host: {} chassis: {} rack: {} " \
|
||||||
"chassis :{} root: {}".format(self.name, self.identifier,
|
"row: {} pdu: {} pod: {} room: {} datacenter: {} zone: {} " \
|
||||||
self.host, self.rack, self.row,
|
"region: {} root: {}".format(self.name, self.identifier,
|
||||||
self.datacenter, self.chassis,
|
self.osd, self.host, self.chassis,
|
||||||
self.root)
|
self.rack, self.row, self.pdu,
|
||||||
|
self.pod, self.room,
|
||||||
|
self.datacenter, self.zone,
|
||||||
|
self.region, self.root)
|
||||||
|
|
||||||
def __eq__(self, other):
|
def __eq__(self, other):
|
||||||
return not self.name < other.name and not other.name < self.name
|
return not self.name < other.name and not other.name < self.name
|
||||||
@ -571,10 +576,53 @@ def get_osd_weight(osd_id):
|
|||||||
raise
|
raise
|
||||||
|
|
||||||
|
|
||||||
|
def _filter_nodes_and_set_attributes(node, node_lookup_map, lookup_type):
|
||||||
|
"""Get all nodes of the desired type, with all their attributes.
|
||||||
|
|
||||||
|
These attributes can be direct or inherited from ancestors.
|
||||||
|
"""
|
||||||
|
attribute_dict = {node['type']: node['name']}
|
||||||
|
if node['type'] == lookup_type:
|
||||||
|
attribute_dict['name'] = node['name']
|
||||||
|
attribute_dict['identifier'] = node['id']
|
||||||
|
return [attribute_dict]
|
||||||
|
elif not node.get('children'):
|
||||||
|
return [attribute_dict]
|
||||||
|
else:
|
||||||
|
descendant_attribute_dicts = [
|
||||||
|
_filter_nodes_and_set_attributes(node_lookup_map[node_id],
|
||||||
|
node_lookup_map, lookup_type)
|
||||||
|
for node_id in node.get('children', [])
|
||||||
|
]
|
||||||
|
return [dict(attribute_dict, **descendant_attribute_dict)
|
||||||
|
for descendant_attribute_dict
|
||||||
|
in itertools.chain.from_iterable(descendant_attribute_dicts)]
|
||||||
|
|
||||||
|
|
||||||
|
def _flatten_roots(nodes, lookup_type='host'):
|
||||||
|
"""Get a flattened list of nodes of the desired type.
|
||||||
|
|
||||||
|
:param nodes: list of nodes defined as a dictionary of attributes and
|
||||||
|
children
|
||||||
|
:type nodes: List[Dict[int, Any]]
|
||||||
|
:param lookup_type: type of searched node
|
||||||
|
:type lookup_type: str
|
||||||
|
:returns: flattened list of nodes
|
||||||
|
:rtype: List[Dict[str, Any]]
|
||||||
|
"""
|
||||||
|
lookup_map = {node['id']: node for node in nodes}
|
||||||
|
root_attributes_dicts = [_filter_nodes_and_set_attributes(node, lookup_map,
|
||||||
|
lookup_type)
|
||||||
|
for node in nodes if node['type'] == 'root']
|
||||||
|
# get a flattened list of roots.
|
||||||
|
return list(itertools.chain.from_iterable(root_attributes_dicts))
|
||||||
|
|
||||||
|
|
||||||
def get_osd_tree(service):
|
def get_osd_tree(service):
|
||||||
"""Returns the current osd map in JSON.
|
"""Returns the current osd map in JSON.
|
||||||
|
|
||||||
:returns: List.
|
:returns: List.
|
||||||
|
:rtype: List[CrushLocation]
|
||||||
:raises: ValueError if the monmap fails to parse.
|
:raises: ValueError if the monmap fails to parse.
|
||||||
Also raises CalledProcessError if our ceph command fails
|
Also raises CalledProcessError if our ceph command fails
|
||||||
"""
|
"""
|
||||||
@ -585,35 +633,14 @@ def get_osd_tree(service):
|
|||||||
.decode('UTF-8'))
|
.decode('UTF-8'))
|
||||||
try:
|
try:
|
||||||
json_tree = json.loads(tree)
|
json_tree = json.loads(tree)
|
||||||
crush_list = []
|
roots = _flatten_roots(json_tree["nodes"])
|
||||||
# Make sure children are present in the json
|
return [CrushLocation(**host) for host in roots]
|
||||||
if not json_tree['nodes']:
|
|
||||||
return None
|
|
||||||
host_nodes = [
|
|
||||||
node for node in json_tree['nodes']
|
|
||||||
if node['type'] == 'host'
|
|
||||||
]
|
|
||||||
for host in host_nodes:
|
|
||||||
crush_list.append(
|
|
||||||
CrushLocation(
|
|
||||||
name=host.get('name'),
|
|
||||||
identifier=host['id'],
|
|
||||||
host=host.get('host'),
|
|
||||||
rack=host.get('rack'),
|
|
||||||
row=host.get('row'),
|
|
||||||
datacenter=host.get('datacenter'),
|
|
||||||
chassis=host.get('chassis'),
|
|
||||||
root=host.get('root')
|
|
||||||
)
|
|
||||||
)
|
|
||||||
return crush_list
|
|
||||||
except ValueError as v:
|
except ValueError as v:
|
||||||
log("Unable to parse ceph tree json: {}. Error: {}".format(
|
log("Unable to parse ceph tree json: {}. Error: {}".format(
|
||||||
tree, v))
|
tree, v))
|
||||||
raise
|
raise
|
||||||
except subprocess.CalledProcessError as e:
|
except subprocess.CalledProcessError as e:
|
||||||
log("ceph osd tree command failed with message: {}".format(
|
log("ceph osd tree command failed with message: {}".format(e))
|
||||||
e))
|
|
||||||
raise
|
raise
|
||||||
|
|
||||||
|
|
||||||
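The rewritten get_osd_tree() above no longer picks out only the host entries; _flatten_roots() descends every root bucket of the `ceph osd tree` JSON and merges each ancestor bucket name (root, datacenter, rack, and so on) into the host's attribute dict before it is expanded into CrushLocation(**host). A hedged illustration with an invented, trimmed node list (real `ceph osd tree -f json` output carries more per-node fields):

# Invented, minimal input for illustration only.
nodes = [
    {'id': -1, 'name': 'default', 'type': 'root', 'children': [-3]},
    {'id': -3, 'name': 'rack-a', 'type': 'rack', 'children': [-2]},
    {'id': -2, 'name': 'node-1', 'type': 'host', 'children': [0]},
    {'id': 0, 'name': 'osd.0', 'type': 'osd', 'children': []},
]

# _flatten_roots(nodes) would return:
#   [{'root': 'default', 'rack': 'rack-a', 'host': 'node-1',
#     'name': 'node-1', 'identifier': -2}]
# get_osd_tree() then builds CrushLocation(**attrs) from each entry, so every
# CrushLocation carries the bucket names inherited from its ancestors; buckets
# the map does not define (pdu, pod, zone, ...) fall back to the new keyword
# defaults of "".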
@ -669,7 +696,9 @@ def get_local_osd_ids():
|
|||||||
dirs = os.listdir(osd_path)
|
dirs = os.listdir(osd_path)
|
||||||
for osd_dir in dirs:
|
for osd_dir in dirs:
|
||||||
osd_id = osd_dir.split('-')[1]
|
osd_id = osd_dir.split('-')[1]
|
||||||
if _is_int(osd_id):
|
if (_is_int(osd_id) and
|
||||||
|
filesystem_mounted(os.path.join(
|
||||||
|
os.sep, osd_path, osd_dir))):
|
||||||
osd_ids.append(osd_id)
|
osd_ids.append(osd_id)
|
||||||
except OSError:
|
except OSError:
|
||||||
raise
|
raise
|
||||||
@ -3271,13 +3300,14 @@ def determine_packages():
|
|||||||
def determine_packages_to_remove():
|
def determine_packages_to_remove():
|
||||||
"""Determines packages for removal
|
"""Determines packages for removal
|
||||||
|
|
||||||
|
Note: if in a container, then the CHRONY_PACKAGE is removed.
|
||||||
|
|
||||||
:returns: list of packages to be removed
|
:returns: list of packages to be removed
|
||||||
|
:rtype: List[str]
|
||||||
"""
|
"""
|
||||||
rm_packages = REMOVE_PACKAGES.copy()
|
rm_packages = REMOVE_PACKAGES.copy()
|
||||||
if is_container():
|
if is_container():
|
||||||
install_list = filter_missing_packages(CHRONY_PACKAGE)
|
rm_packages.extend(filter_missing_packages([CHRONY_PACKAGE]))
|
||||||
if not install_list:
|
|
||||||
rm_packages.append(CHRONY_PACKAGE)
|
|
||||||
return rm_packages
|
return rm_packages
|
||||||
|
|
||||||
|
|
||||||
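The old branch passed a bare string to filter_missing_packages() and appended chrony only when that call returned an empty list, inverting the intent; the replacement extends the removal list with filter_missing_packages([CHRONY_PACKAGE]) so chrony is queued for removal only if it is actually installed. A short sketch of the intent, with filter_missing_packages stubbed out on the assumption (from charm-helpers) that it returns the subset of the given packages that are installed:

CHRONY_PACKAGE = 'chrony'
REMOVE_PACKAGES = []


def filter_missing_packages(packages, _installed=('chrony',)):
    # Stand-in for charmhelpers.fetch.filter_missing_packages(): keep only the
    # packages that are currently installed, so a later purge never touches an
    # absent package.
    return [pkg for pkg in packages if pkg in _installed]


rm_packages = REMOVE_PACKAGES.copy()
rm_packages.extend(filter_missing_packages([CHRONY_PACKAGE]))
assert rm_packages == ['chrony']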
@ -3376,3 +3406,132 @@ def apply_osd_settings(settings):
|
|||||||
level=ERROR)
|
level=ERROR)
|
||||||
raise OSDConfigSetError
|
raise OSDConfigSetError
|
||||||
return True
|
return True
|
||||||
|
|
||||||
|
|
||||||
|
def enabled_manager_modules():
|
||||||
|
"""Return a list of enabled manager modules.
|
||||||
|
|
||||||
|
:rtype: List[str]
|
||||||
|
"""
|
||||||
|
cmd = ['ceph', 'mgr', 'module', 'ls']
|
||||||
|
try:
|
||||||
|
modules = subprocess.check_output(cmd).decode('UTF-8')
|
||||||
|
except subprocess.CalledProcessError as e:
|
||||||
|
log("Failed to list ceph modules: {}".format(e), WARNING)
|
||||||
|
return []
|
||||||
|
modules = json.loads(modules)
|
||||||
|
return modules['enabled_modules']
|
||||||
|
|
||||||
|
|
||||||
|
def is_mgr_module_enabled(module):
|
||||||
|
"""Is a given manager module enabled.
|
||||||
|
|
||||||
|
:param module:
|
||||||
|
:type module: str
|
||||||
|
:returns: Whether the named module is enabled
|
||||||
|
:rtype: bool
|
||||||
|
"""
|
||||||
|
return module in enabled_manager_modules()
|
||||||
|
|
||||||
|
|
||||||
|
is_dashboard_enabled = functools.partial(is_mgr_module_enabled, 'dashboard')
|
||||||
|
|
||||||
|
|
||||||
|
def mgr_enable_module(module):
|
||||||
|
"""Enable a Ceph Manager Module.
|
||||||
|
|
||||||
|
:param module: The module name to enable
|
||||||
|
:type module: str
|
||||||
|
|
||||||
|
:raises: subprocess.CalledProcessError
|
||||||
|
"""
|
||||||
|
if not is_mgr_module_enabled(module):
|
||||||
|
subprocess.check_call(['ceph', 'mgr', 'module', 'enable', module])
|
||||||
|
return True
|
||||||
|
return False
|
||||||
|
|
||||||
|
|
||||||
|
mgr_enable_dashboard = functools.partial(mgr_enable_module, 'dashboard')
|
||||||
|
|
||||||
|
|
||||||
|
def mgr_disable_module(module):
|
||||||
|
"""Disable a Ceph Manager Module.
|
||||||
|
|
||||||
|
:param module: The module name to disable
|
||||||
|
:type module: str
|
||||||
|
|
||||||
|
:raises: subprocess.CalledProcessError
|
||||||
|
"""
|
||||||
|
if is_mgr_module_enabled(module):
|
||||||
|
subprocess.check_call(['ceph', 'mgr', 'module', 'disable', module])
|
||||||
|
return True
|
||||||
|
return False
|
||||||
|
|
||||||
|
|
||||||
|
mgr_disable_dashboard = functools.partial(mgr_disable_module, 'dashboard')
|
||||||
|
|
||||||
|
|
||||||
|
def ceph_config_set(name, value, who):
|
||||||
|
"""Set a ceph config option
|
||||||
|
|
||||||
|
:param name: key to set
|
||||||
|
:type name: str
|
||||||
|
:param value: value corresponding to key
|
||||||
|
:type value: str
|
||||||
|
:param who: Config area the key is associated with (e.g. 'dashboard')
|
||||||
|
:type who: str
|
||||||
|
|
||||||
|
:raises: subprocess.CalledProcessError
|
||||||
|
"""
|
||||||
|
subprocess.check_call(['ceph', 'config', 'set', who, name, value])
|
||||||
|
|
||||||
|
|
||||||
|
mgr_config_set = functools.partial(ceph_config_set, who='mgr')
|
||||||
|
|
||||||
|
|
||||||
|
def ceph_config_get(name, who):
|
||||||
|
"""Retrieve the value of a ceph config option
|
||||||
|
|
||||||
|
:param name: key to lookup
|
||||||
|
:type name: str
|
||||||
|
:param who: Config area the key is associated with (e.g. 'dashboard')
|
||||||
|
:type who: str
|
||||||
|
:returns: Value associated with key
|
||||||
|
:rtype: str
|
||||||
|
:raises: subprocess.CalledProcessError
|
||||||
|
"""
|
||||||
|
return subprocess.check_output(
|
||||||
|
['ceph', 'config', 'get', who, name]).decode('UTF-8')
|
||||||
|
|
||||||
|
|
||||||
|
mgr_config_get = functools.partial(ceph_config_get, who='mgr')
|
||||||
|
|
||||||
|
|
||||||
|
def _dashboard_set_ssl_artifact(path, artifact_name, hostname=None):
|
||||||
|
"""Set SSL dashboard config option.
|
||||||
|
|
||||||
|
:param path: Path to file
|
||||||
|
:type path: str
|
||||||
|
:param artifact_name: Option name for setting the artifact
|
||||||
|
:type artifact_name: str
|
||||||
|
:param hostname: If hostname is set, the artifact will only be associated with
|
||||||
|
the dashboard on that host.
|
||||||
|
:type hostname: str
|
||||||
|
:raises: subprocess.CalledProcessError
|
||||||
|
"""
|
||||||
|
cmd = ['ceph', 'dashboard', artifact_name]
|
||||||
|
if hostname:
|
||||||
|
cmd.append(hostname)
|
||||||
|
cmd.extend(['-i', path])
|
||||||
|
log(cmd, level=DEBUG)
|
||||||
|
subprocess.check_call(cmd)
|
||||||
|
|
||||||
|
|
||||||
|
dashboard_set_ssl_certificate = functools.partial(
|
||||||
|
_dashboard_set_ssl_artifact,
|
||||||
|
artifact_name='set-ssl-certificate')
|
||||||
|
|
||||||
|
|
||||||
|
dashboard_set_ssl_certificate_key = functools.partial(
|
||||||
|
_dashboard_set_ssl_artifact,
|
||||||
|
artifact_name='set-ssl-certificate-key')
|
||||||
|
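The block of helpers above is largely functools.partial specialisations of three primitives (mgr_enable_module/mgr_disable_module, ceph_config_set/ceph_config_get and _dashboard_set_ssl_artifact), each of which shells out to the ceph CLI. A hedged usage sketch: the import path and certificate paths are assumptions, and every call needs a working ceph.conf and admin keyring on the unit.

# Assumed module path for the synced library; adjust to where the charm keeps it.
from charms_ceph.utils import (
    dashboard_set_ssl_certificate,
    dashboard_set_ssl_certificate_key,
    is_dashboard_enabled,
    mgr_config_set,
    mgr_enable_dashboard,
)

if not is_dashboard_enabled():
    mgr_enable_dashboard()          # runs: ceph mgr module enable dashboard

# runs: ceph config set mgr mgr/dashboard/ssl true
mgr_config_set('mgr/dashboard/ssl', 'true')

# Placeholder paths; pass hostname=... to scope the artifact to one host.
dashboard_set_ssl_certificate('/etc/ceph/dashboard.crt')
dashboard_set_ssl_certificate_key('/etc/ceph/dashboard.key')
dashboard_set_ssl_certificate('/etc/ceph/dashboard.crt', hostname='ceph-mon-0')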
15 osci.yaml
@ -13,10 +13,19 @@
|
|||||||
- focal-victoria-ec
|
- focal-victoria-ec
|
||||||
- focal-wallaby
|
- focal-wallaby
|
||||||
- focal-wallaby-ec
|
- focal-wallaby-ec
|
||||||
|
- focal-xena:
|
||||||
|
voting: false
|
||||||
|
- focal-wallaby-ec:
|
||||||
|
voting: false
|
||||||
- groovy-victoria
|
- groovy-victoria
|
||||||
- groovy-victoria-ec
|
- groovy-victoria-ec
|
||||||
- hirsute-wallaby
|
- hirsute-wallaby
|
||||||
- hirsute-wallaby-ec
|
- hirsute-wallaby-ec
|
||||||
|
- impish-xena:
|
||||||
|
voting: false
|
||||||
|
- impish-xena-ec:
|
||||||
|
voting: false
|
||||||
|
- hirsute-wallaby-ec
|
||||||
- job:
|
- job:
|
||||||
name: focal-ussuri-ec
|
name: focal-ussuri-ec
|
||||||
parent: func-target
|
parent: func-target
|
||||||
@ -48,3 +57,9 @@
|
|||||||
dependencies: *smoke-jobs
|
dependencies: *smoke-jobs
|
||||||
vars:
|
vars:
|
||||||
tox_extra_args: erasure-coded:hirsute-wallaby-ec
|
tox_extra_args: erasure-coded:hirsute-wallaby-ec
|
||||||
|
- job:
|
||||||
|
name: impish-xena-ec
|
||||||
|
parent: func-target
|
||||||
|
dependencies: *smoke-jobs
|
||||||
|
vars:
|
||||||
|
tox_extra_args: erasure-coded:impish-xena-ec
|
||||||
|
18 pip.sh Executable file
@ -0,0 +1,18 @@
|
|||||||
|
#!/usr/bin/env bash
|
||||||
|
#
|
||||||
|
# This file is managed centrally by release-tools and should not be modified
|
||||||
|
# within individual charm repos. See the 'global' dir contents for available
|
||||||
|
# choices of tox.ini for OpenStack Charms:
|
||||||
|
# https://github.com/openstack-charmers/release-tools
|
||||||
|
#
|
||||||
|
# setuptools 58.0 dropped the support for use_2to3=true which is needed to
|
||||||
|
# install blessings (an indirect dependency of charm-tools).
|
||||||
|
#
|
||||||
|
# More details on the behavior of tox and virtualenv creation can be found at
|
||||||
|
# https://github.com/tox-dev/tox/issues/448
|
||||||
|
#
|
||||||
|
# This script is a wrapper to force the use of the pinned versions early in the
|
||||||
|
# process when the virtualenv was created and upgraded before installing the
|
||||||
|
# dependencies declared in the target.
|
||||||
|
pip install 'pip<20.3' 'setuptools<50.0.0'
|
||||||
|
pip "$@"
|
215 tests/bundles/focal-xena-ec.yaml Normal file
@ -0,0 +1,215 @@
|
|||||||
|
variables:
|
||||||
|
openstack-origin: &openstack-origin cloud:focal-xena
|
||||||
|
|
||||||
|
series: focal
|
||||||
|
|
||||||
|
comment:
|
||||||
|
- 'machines section to decide order of deployment. database sooner = faster'
|
||||||
|
machines:
|
||||||
|
'0':
|
||||||
|
constraints: mem=3072M
|
||||||
|
'1':
|
||||||
|
constraints: mem=3072M
|
||||||
|
'2':
|
||||||
|
constraints: mem=3072M
|
||||||
|
'3':
|
||||||
|
'4':
|
||||||
|
'5':
|
||||||
|
'6':
|
||||||
|
'7':
|
||||||
|
'8':
|
||||||
|
'9':
|
||||||
|
'10':
|
||||||
|
'11':
|
||||||
|
'12':
|
||||||
|
'13':
|
||||||
|
'14':
|
||||||
|
'15':
|
||||||
|
'16':
|
||||||
|
'17':
|
||||||
|
'18':
|
||||||
|
|
||||||
|
applications:
|
||||||
|
|
||||||
|
cinder-mysql-router:
|
||||||
|
charm: cs:~openstack-charmers-next/mysql-router
|
||||||
|
glance-mysql-router:
|
||||||
|
charm: cs:~openstack-charmers-next/mysql-router
|
||||||
|
keystone-mysql-router:
|
||||||
|
charm: cs:~openstack-charmers-next/mysql-router
|
||||||
|
|
||||||
|
mysql-innodb-cluster:
|
||||||
|
charm: cs:~openstack-charmers-next/mysql-innodb-cluster
|
||||||
|
num_units: 3
|
||||||
|
options:
|
||||||
|
source: *openstack-origin
|
||||||
|
to:
|
||||||
|
- '0'
|
||||||
|
- '1'
|
||||||
|
- '2'
|
||||||
|
|
||||||
|
ceph-mon:
|
||||||
|
charm: cs:~openstack-charmers-next/ceph-mon
|
||||||
|
num_units: 3
|
||||||
|
options:
|
||||||
|
expected-osd-count: 3
|
||||||
|
source: *openstack-origin
|
||||||
|
to:
|
||||||
|
- '3'
|
||||||
|
- '4'
|
||||||
|
- '5'
|
||||||
|
|
||||||
|
ceph-osd:
|
||||||
|
charm: cs:~openstack-charmers-next/ceph-osd
|
||||||
|
num_units: 6
|
||||||
|
storage:
|
||||||
|
osd-devices: 10G
|
||||||
|
options:
|
||||||
|
source: *openstack-origin
|
||||||
|
to:
|
||||||
|
- '6'
|
||||||
|
- '7'
|
||||||
|
- '8'
|
||||||
|
- '16'
|
||||||
|
- '17'
|
||||||
|
- '18'
|
||||||
|
|
||||||
|
ceph-proxy:
|
||||||
|
charm: ceph-proxy
|
||||||
|
num_units: 1
|
||||||
|
options:
|
||||||
|
source: *openstack-origin
|
||||||
|
to:
|
||||||
|
- '9'
|
||||||
|
|
||||||
|
ceph-radosgw:
|
||||||
|
charm: cs:~openstack-charmers-next/ceph-radosgw
|
||||||
|
num_units: 1
|
||||||
|
options:
|
||||||
|
source: *openstack-origin
|
||||||
|
pool-type: erasure-coded
|
||||||
|
ec-profile-k: 4
|
||||||
|
ec-profile-m: 2
|
||||||
|
to:
|
||||||
|
- '10'
|
||||||
|
|
||||||
|
cinder:
|
||||||
|
charm: cs:~openstack-charmers-next/cinder
|
||||||
|
num_units: 1
|
||||||
|
options:
|
||||||
|
openstack-origin: *openstack-origin
|
||||||
|
block-device: ""
|
||||||
|
ephemeral-unmount: ""
|
||||||
|
glance-api-version: 2
|
||||||
|
overwrite: "false"
|
||||||
|
constraints: mem=2048
|
||||||
|
to:
|
||||||
|
- '11'
|
||||||
|
|
||||||
|
cinder-ceph:
|
||||||
|
charm: cs:~openstack-charmers-next/cinder-ceph
|
||||||
|
options:
|
||||||
|
restrict-ceph-pools: True
|
||||||
|
pool-type: erasure-coded
|
||||||
|
ec-profile-k: 4
|
||||||
|
ec-profile-m: 2
|
||||||
|
ec-profile-plugin: lrc
|
||||||
|
ec-profile-locality: 3
|
||||||
|
|
||||||
|
keystone:
|
||||||
|
charm: cs:~openstack-charmers-next/keystone
|
||||||
|
num_units: 1
|
||||||
|
options:
|
||||||
|
openstack-origin: *openstack-origin
|
||||||
|
admin-password: openstack
|
||||||
|
constraints: mem=1024
|
||||||
|
to:
|
||||||
|
- '12'
|
||||||
|
|
||||||
|
rabbitmq-server:
|
||||||
|
charm: cs:~openstack-charmers-next/rabbitmq-server
|
||||||
|
num_units: 1
|
||||||
|
constraints: mem=1024
|
||||||
|
options:
|
||||||
|
source: *openstack-origin
|
||||||
|
to:
|
||||||
|
- '13'
|
||||||
|
|
||||||
|
glance:
|
||||||
|
charm: cs:~openstack-charmers-next/glance
|
||||||
|
num_units: 1
|
||||||
|
options:
|
||||||
|
openstack-origin: *openstack-origin
|
||||||
|
pool-type: erasure-coded
|
||||||
|
ec-profile-k: 4
|
||||||
|
ec-profile-m: 2
|
||||||
|
ec-profile-plugin: jerasure
|
||||||
|
to:
|
||||||
|
- '14'
|
||||||
|
|
||||||
|
nova-compute:
|
||||||
|
charm: cs:~openstack-charmers-next/nova-compute
|
||||||
|
num_units: 1
|
||||||
|
options:
|
||||||
|
openstack-origin: *openstack-origin
|
||||||
|
pool-type: erasure-coded
|
||||||
|
ec-profile-k: 4
|
||||||
|
ec-profile-m: 2
|
||||||
|
ec-profile-plugin: isa
|
||||||
|
libvirt-image-backend: rbd
|
||||||
|
to:
|
||||||
|
- '15'
|
||||||
|
|
||||||
|
|
||||||
|
relations:
|
||||||
|
|
||||||
|
- - 'ceph-osd:mon'
|
||||||
|
- 'ceph-mon:osd'
|
||||||
|
|
||||||
|
- - 'ceph-proxy:radosgw'
|
||||||
|
- 'ceph-radosgw:mon'
|
||||||
|
|
||||||
|
- - 'cinder:amqp'
|
||||||
|
- 'rabbitmq-server:amqp'
|
||||||
|
|
||||||
|
- - 'cinder:shared-db'
|
||||||
|
- 'cinder-mysql-router:shared-db'
|
||||||
|
- - 'cinder-mysql-router:db-router'
|
||||||
|
- 'mysql-innodb-cluster:db-router'
|
||||||
|
|
||||||
|
- - 'keystone:shared-db'
|
||||||
|
- 'keystone-mysql-router:shared-db'
|
||||||
|
- - 'keystone-mysql-router:db-router'
|
||||||
|
- 'mysql-innodb-cluster:db-router'
|
||||||
|
|
||||||
|
- - 'cinder:identity-service'
|
||||||
|
- 'keystone:identity-service'
|
||||||
|
|
||||||
|
- - 'cinder-ceph:storage-backend'
|
||||||
|
- 'cinder:storage-backend'
|
||||||
|
|
||||||
|
- - 'cinder-ceph:ceph'
|
||||||
|
- 'ceph-proxy:client'
|
||||||
|
|
||||||
|
- - 'glance:image-service'
|
||||||
|
- 'nova-compute:image-service'
|
||||||
|
|
||||||
|
- - 'glance:identity-service'
|
||||||
|
- 'keystone:identity-service'
|
||||||
|
|
||||||
|
- - 'glance:shared-db'
|
||||||
|
- 'glance-mysql-router:shared-db'
|
||||||
|
- - 'glance-mysql-router:db-router'
|
||||||
|
- 'mysql-innodb-cluster:db-router'
|
||||||
|
|
||||||
|
- - 'glance:ceph'
|
||||||
|
- 'ceph-proxy:client'
|
||||||
|
|
||||||
|
- - 'nova-compute:ceph-access'
|
||||||
|
- 'cinder-ceph:ceph-access'
|
||||||
|
|
||||||
|
- - 'nova-compute:amqp'
|
||||||
|
- 'rabbitmq-server:amqp'
|
||||||
|
|
||||||
|
- - 'nova-compute:ceph'
|
||||||
|
- 'ceph-proxy:client'
|
186 tests/bundles/focal-xena.yaml Normal file
@ -0,0 +1,186 @@
|
|||||||
|
variables:
|
||||||
|
openstack-origin: &openstack-origin cloud:focal-xena
|
||||||
|
|
||||||
|
series: focal
|
||||||
|
|
||||||
|
comment:
|
||||||
|
- 'machines section to decide order of deployment. database sooner = faster'
|
||||||
|
machines:
|
||||||
|
'0':
|
||||||
|
constraints: mem=3072M
|
||||||
|
'1':
|
||||||
|
constraints: mem=3072M
|
||||||
|
'2':
|
||||||
|
constraints: mem=3072M
|
||||||
|
'3':
|
||||||
|
'4':
|
||||||
|
'5':
|
||||||
|
'6':
|
||||||
|
'7':
|
||||||
|
'8':
|
||||||
|
'9':
|
||||||
|
'10':
|
||||||
|
'11':
|
||||||
|
'12':
|
||||||
|
'13':
|
||||||
|
'14':
|
||||||
|
'15':
|
||||||
|
|
||||||
|
applications:
|
||||||
|
|
||||||
|
cinder-mysql-router:
|
||||||
|
charm: cs:~openstack-charmers-next/mysql-router
|
||||||
|
glance-mysql-router:
|
||||||
|
charm: cs:~openstack-charmers-next/mysql-router
|
||||||
|
keystone-mysql-router:
|
||||||
|
charm: cs:~openstack-charmers-next/mysql-router
|
||||||
|
|
||||||
|
mysql-innodb-cluster:
|
||||||
|
charm: cs:~openstack-charmers-next/mysql-innodb-cluster
|
||||||
|
num_units: 3
|
||||||
|
options:
|
||||||
|
source: *openstack-origin
|
||||||
|
to:
|
||||||
|
- '0'
|
||||||
|
- '1'
|
||||||
|
- '2'
|
||||||
|
|
||||||
|
ceph-mon:
|
||||||
|
charm: cs:~openstack-charmers-next/ceph-mon
|
||||||
|
num_units: 3
|
||||||
|
options:
|
||||||
|
expected-osd-count: 3
|
||||||
|
source: *openstack-origin
|
||||||
|
to:
|
||||||
|
- '3'
|
||||||
|
- '4'
|
||||||
|
- '5'
|
||||||
|
|
||||||
|
ceph-osd:
|
||||||
|
charm: cs:~openstack-charmers-next/ceph-osd
|
||||||
|
num_units: 3
|
||||||
|
storage:
|
||||||
|
osd-devices: 10G
|
||||||
|
options:
|
||||||
|
source: *openstack-origin
|
||||||
|
to:
|
||||||
|
- '6'
|
||||||
|
- '7'
|
||||||
|
- '8'
|
||||||
|
|
||||||
|
ceph-proxy:
|
||||||
|
charm: ceph-proxy
|
||||||
|
num_units: 1
|
||||||
|
options:
|
||||||
|
source: *openstack-origin
|
||||||
|
to:
|
||||||
|
- '9'
|
||||||
|
|
||||||
|
ceph-radosgw:
|
||||||
|
charm: cs:~openstack-charmers-next/ceph-radosgw
|
||||||
|
num_units: 1
|
||||||
|
options:
|
||||||
|
source: *openstack-origin
|
||||||
|
to:
|
||||||
|
- '10'
|
||||||
|
|
||||||
|
cinder:
|
||||||
|
charm: cs:~openstack-charmers-next/cinder
|
||||||
|
num_units: 1
|
||||||
|
options:
|
||||||
|
openstack-origin: *openstack-origin
|
||||||
|
block-device: ""
|
||||||
|
ephemeral-unmount: ""
|
||||||
|
glance-api-version: 2
|
||||||
|
overwrite: "false"
|
||||||
|
constraints: mem=2048
|
||||||
|
to:
|
||||||
|
- '11'
|
||||||
|
|
||||||
|
cinder-ceph:
|
||||||
|
charm: cs:~openstack-charmers-next/cinder-ceph
|
||||||
|
options:
|
||||||
|
restrict-ceph-pools: True
|
||||||
|
|
||||||
|
keystone:
|
||||||
|
charm: cs:~openstack-charmers-next/keystone
|
||||||
|
num_units: 1
|
||||||
|
options:
|
||||||
|
openstack-origin: *openstack-origin
|
||||||
|
admin-password: openstack
|
||||||
|
constraints: mem=1024
|
||||||
|
to:
|
||||||
|
- '12'
|
||||||
|
|
||||||
|
rabbitmq-server:
|
||||||
|
charm: cs:~openstack-charmers-next/rabbitmq-server
|
||||||
|
num_units: 1
|
||||||
|
constraints: mem=1024
|
||||||
|
options:
|
||||||
|
source: *openstack-origin
|
||||||
|
to:
|
||||||
|
- '13'
|
||||||
|
|
||||||
|
glance:
|
||||||
|
charm: cs:~openstack-charmers-next/glance
|
||||||
|
num_units: 1
|
||||||
|
options:
|
||||||
|
openstack-origin: *openstack-origin
|
||||||
|
to:
|
||||||
|
- '14'
|
||||||
|
|
||||||
|
nova-compute:
|
||||||
|
charm: cs:~openstack-charmers-next/nova-compute
|
||||||
|
num_units: 1
|
||||||
|
options:
|
||||||
|
openstack-origin: *openstack-origin
|
||||||
|
to:
|
||||||
|
- '15'
|
||||||
|
|
||||||
|
|
||||||
|
relations:
|
||||||
|
|
||||||
|
- - 'ceph-osd:mon'
|
||||||
|
- 'ceph-mon:osd'
|
||||||
|
|
||||||
|
- - 'ceph-proxy:radosgw'
|
||||||
|
- 'ceph-radosgw:mon'
|
||||||
|
|
||||||
|
- - 'cinder:amqp'
|
||||||
|
- 'rabbitmq-server:amqp'
|
||||||
|
|
||||||
|
- - 'cinder:shared-db'
|
||||||
|
- 'cinder-mysql-router:shared-db'
|
||||||
|
- - 'cinder-mysql-router:db-router'
|
||||||
|
- 'mysql-innodb-cluster:db-router'
|
||||||
|
|
||||||
|
- - 'keystone:shared-db'
|
||||||
|
- 'keystone-mysql-router:shared-db'
|
||||||
|
- - 'keystone-mysql-router:db-router'
|
||||||
|
- 'mysql-innodb-cluster:db-router'
|
||||||
|
|
||||||
|
- - 'cinder:identity-service'
|
||||||
|
- 'keystone:identity-service'
|
||||||
|
|
||||||
|
- - 'cinder-ceph:storage-backend'
|
||||||
|
- 'cinder:storage-backend'
|
||||||
|
|
||||||
|
- - 'cinder-ceph:ceph'
|
||||||
|
- 'ceph-proxy:client'
|
||||||
|
|
||||||
|
- - 'glance:image-service'
|
||||||
|
- 'nova-compute:image-service'
|
||||||
|
|
||||||
|
- - 'glance:identity-service'
|
||||||
|
- 'keystone:identity-service'
|
||||||
|
|
||||||
|
- - 'glance:shared-db'
|
||||||
|
- 'glance-mysql-router:shared-db'
|
||||||
|
- - 'glance-mysql-router:db-router'
|
||||||
|
- 'mysql-innodb-cluster:db-router'
|
||||||
|
|
||||||
|
- - 'nova-compute:ceph-access'
|
||||||
|
- 'cinder-ceph:ceph-access'
|
||||||
|
|
||||||
|
- - 'nova-compute:amqp'
|
||||||
|
- 'rabbitmq-server:amqp'
|
215 tests/bundles/impish-xena-ec.yaml Normal file
@ -0,0 +1,215 @@
|
|||||||
|
variables:
|
||||||
|
openstack-origin: &openstack-origin distro
|
||||||
|
|
||||||
|
series: impish
|
||||||
|
|
||||||
|
comment:
|
||||||
|
- 'machines section to decide order of deployment. database sooner = faster'
|
||||||
|
machines:
|
||||||
|
'0':
|
||||||
|
constraints: mem=3072M
|
||||||
|
'1':
|
||||||
|
constraints: mem=3072M
|
||||||
|
'2':
|
||||||
|
constraints: mem=3072M
|
||||||
|
'3':
|
||||||
|
'4':
|
||||||
|
'5':
|
||||||
|
'6':
|
||||||
|
'7':
|
||||||
|
'8':
|
||||||
|
'9':
|
||||||
|
'10':
|
||||||
|
'11':
|
||||||
|
'12':
|
||||||
|
'13':
|
||||||
|
'14':
|
||||||
|
'15':
|
||||||
|
'16':
|
||||||
|
'17':
|
||||||
|
'18':
|
||||||
|
|
||||||
|
applications:
|
||||||
|
|
||||||
|
cinder-mysql-router:
|
||||||
|
charm: cs:~openstack-charmers-next/mysql-router
|
||||||
|
glance-mysql-router:
|
||||||
|
charm: cs:~openstack-charmers-next/mysql-router
|
||||||
|
keystone-mysql-router:
|
||||||
|
charm: cs:~openstack-charmers-next/mysql-router
|
||||||
|
|
||||||
|
mysql-innodb-cluster:
|
||||||
|
charm: cs:~openstack-charmers-next/mysql-innodb-cluster
|
||||||
|
num_units: 3
|
||||||
|
options:
|
||||||
|
source: *openstack-origin
|
||||||
|
to:
|
||||||
|
- '0'
|
||||||
|
- '1'
|
||||||
|
- '2'
|
||||||
|
|
||||||
|
ceph-mon:
|
||||||
|
charm: cs:~openstack-charmers-next/ceph-mon
|
||||||
|
num_units: 3
|
||||||
|
options:
|
||||||
|
expected-osd-count: 3
|
||||||
|
source: *openstack-origin
|
||||||
|
to:
|
||||||
|
- '3'
|
||||||
|
- '4'
|
||||||
|
- '5'
|
||||||
|
|
||||||
|
ceph-osd:
|
||||||
|
charm: cs:~openstack-charmers-next/ceph-osd
|
||||||
|
num_units: 6
|
||||||
|
storage:
|
||||||
|
osd-devices: 10G
|
||||||
|
options:
|
||||||
|
source: *openstack-origin
|
||||||
|
to:
|
||||||
|
- '6'
|
||||||
|
- '7'
|
||||||
|
- '8'
|
||||||
|
- '16'
|
||||||
|
- '17'
|
||||||
|
- '18'
|
||||||
|
|
||||||
|
ceph-proxy:
|
||||||
|
charm: ceph-proxy
|
||||||
|
num_units: 1
|
||||||
|
options:
|
||||||
|
source: *openstack-origin
|
||||||
|
to:
|
||||||
|
- '9'
|
||||||
|
|
||||||
|
ceph-radosgw:
|
||||||
|
charm: cs:~openstack-charmers-next/ceph-radosgw
|
||||||
|
num_units: 1
|
||||||
|
options:
|
||||||
|
source: *openstack-origin
|
||||||
|
pool-type: erasure-coded
|
||||||
|
ec-profile-k: 4
|
||||||
|
ec-profile-m: 2
|
||||||
|
to:
|
||||||
|
- '10'
|
||||||
|
|
||||||
|
cinder:
|
||||||
|
charm: cs:~openstack-charmers-next/cinder
|
||||||
|
num_units: 1
|
||||||
|
options:
|
||||||
|
openstack-origin: *openstack-origin
|
||||||
|
block-device: ""
|
||||||
|
ephemeral-unmount: ""
|
||||||
|
glance-api-version: 2
|
||||||
|
overwrite: "false"
|
||||||
|
constraints: mem=2048
|
||||||
|
to:
|
||||||
|
- '11'
|
||||||
|
|
||||||
|
cinder-ceph:
|
||||||
|
charm: cs:~openstack-charmers-next/cinder-ceph
|
||||||
|
options:
|
||||||
|
restrict-ceph-pools: True
|
||||||
|
pool-type: erasure-coded
|
||||||
|
ec-profile-k: 4
|
||||||
|
ec-profile-m: 2
|
||||||
|
ec-profile-plugin: lrc
|
||||||
|
ec-profile-locality: 3
|
||||||
|
|
||||||
|
keystone:
|
||||||
|
charm: cs:~openstack-charmers-next/keystone
|
||||||
|
num_units: 1
|
||||||
|
options:
|
||||||
|
openstack-origin: *openstack-origin
|
||||||
|
admin-password: openstack
|
||||||
|
constraints: mem=1024
|
||||||
|
to:
|
||||||
|
- '12'
|
||||||
|
|
||||||
|
rabbitmq-server:
|
||||||
|
charm: cs:~openstack-charmers-next/rabbitmq-server
|
||||||
|
num_units: 1
|
||||||
|
constraints: mem=1024
|
||||||
|
options:
|
||||||
|
source: *openstack-origin
|
||||||
|
to:
|
||||||
|
- '13'
|
||||||
|
|
||||||
|
glance:
|
||||||
|
charm: cs:~openstack-charmers-next/glance
|
||||||
|
num_units: 1
|
||||||
|
options:
|
||||||
|
openstack-origin: *openstack-origin
|
||||||
|
pool-type: erasure-coded
|
||||||
|
ec-profile-k: 4
|
||||||
|
ec-profile-m: 2
|
||||||
|
ec-profile-plugin: jerasure
|
||||||
|
to:
|
||||||
|
- '14'
|
||||||
|
|
||||||
|
nova-compute:
|
||||||
|
charm: cs:~openstack-charmers-next/nova-compute
|
||||||
|
num_units: 1
|
||||||
|
options:
|
||||||
|
openstack-origin: *openstack-origin
|
||||||
|
pool-type: erasure-coded
|
||||||
|
ec-profile-k: 4
|
||||||
|
ec-profile-m: 2
|
||||||
|
ec-profile-plugin: isa
|
||||||
|
libvirt-image-backend: rbd
|
||||||
|
to:
|
||||||
|
- '15'
|
||||||
|
|
||||||
|
|
||||||
|
relations:
|
||||||
|
|
||||||
|
- - 'ceph-osd:mon'
|
||||||
|
- 'ceph-mon:osd'
|
||||||
|
|
||||||
|
- - 'ceph-proxy:radosgw'
|
||||||
|
- 'ceph-radosgw:mon'
|
||||||
|
|
||||||
|
- - 'cinder:amqp'
|
||||||
|
- 'rabbitmq-server:amqp'
|
||||||
|
|
||||||
|
- - 'cinder:shared-db'
|
||||||
|
- 'cinder-mysql-router:shared-db'
|
||||||
|
- - 'cinder-mysql-router:db-router'
|
||||||
|
- 'mysql-innodb-cluster:db-router'
|
||||||
|
|
||||||
|
- - 'keystone:shared-db'
|
||||||
|
- 'keystone-mysql-router:shared-db'
|
||||||
|
- - 'keystone-mysql-router:db-router'
|
||||||
|
- 'mysql-innodb-cluster:db-router'
|
||||||
|
|
||||||
|
- - 'cinder:identity-service'
|
||||||
|
- 'keystone:identity-service'
|
||||||
|
|
||||||
|
- - 'cinder-ceph:storage-backend'
|
||||||
|
- 'cinder:storage-backend'
|
||||||
|
|
||||||
|
- - 'cinder-ceph:ceph'
|
||||||
|
- 'ceph-proxy:client'
|
||||||
|
|
||||||
|
- - 'glance:image-service'
|
||||||
|
- 'nova-compute:image-service'
|
||||||
|
|
||||||
|
- - 'glance:identity-service'
|
||||||
|
- 'keystone:identity-service'
|
||||||
|
|
||||||
|
- - 'glance:shared-db'
|
||||||
|
- 'glance-mysql-router:shared-db'
|
||||||
|
- - 'glance-mysql-router:db-router'
|
||||||
|
- 'mysql-innodb-cluster:db-router'
|
||||||
|
|
||||||
|
- - 'glance:ceph'
|
||||||
|
- 'ceph-proxy:client'
|
||||||
|
|
||||||
|
- - 'nova-compute:ceph-access'
|
||||||
|
- 'cinder-ceph:ceph-access'
|
||||||
|
|
||||||
|
- - 'nova-compute:amqp'
|
||||||
|
- 'rabbitmq-server:amqp'
|
||||||
|
|
||||||
|
- - 'nova-compute:ceph'
|
||||||
|
- 'ceph-proxy:client'
|
186 tests/bundles/impish-xena.yaml Normal file
@ -0,0 +1,186 @@
|
|||||||
|
variables:
|
||||||
|
openstack-origin: &openstack-origin distro
|
||||||
|
|
||||||
|
series: impish
|
||||||
|
|
||||||
|
comment:
|
||||||
|
- 'machines section to decide order of deployment. database sooner = faster'
|
||||||
|
machines:
|
||||||
|
'0':
|
||||||
|
constraints: mem=3072M
|
||||||
|
'1':
|
||||||
|
constraints: mem=3072M
|
||||||
|
'2':
|
||||||
|
constraints: mem=3072M
|
||||||
|
'3':
|
||||||
|
'4':
|
||||||
|
'5':
|
||||||
|
'6':
|
||||||
|
'7':
|
||||||
|
'8':
|
||||||
|
'9':
|
||||||
|
'10':
|
||||||
|
'11':
|
||||||
|
'12':
|
||||||
|
'13':
|
||||||
|
'14':
|
||||||
|
'15':
|
||||||
|
|
||||||
|
applications:
|
||||||
|
|
||||||
|
cinder-mysql-router:
|
||||||
|
charm: cs:~openstack-charmers-next/mysql-router
|
||||||
|
glance-mysql-router:
|
||||||
|
charm: cs:~openstack-charmers-next/mysql-router
|
||||||
|
keystone-mysql-router:
|
||||||
|
charm: cs:~openstack-charmers-next/mysql-router
|
||||||
|
|
||||||
|
mysql-innodb-cluster:
|
||||||
|
charm: cs:~openstack-charmers-next/mysql-innodb-cluster
|
||||||
|
num_units: 3
|
||||||
|
options:
|
||||||
|
source: *openstack-origin
|
||||||
|
to:
|
||||||
|
- '0'
|
||||||
|
- '1'
|
||||||
|
- '2'
|
||||||
|
|
||||||
|
ceph-mon:
|
||||||
|
charm: cs:~openstack-charmers-next/ceph-mon
|
||||||
|
num_units: 3
|
||||||
|
options:
|
||||||
|
expected-osd-count: 3
|
||||||
|
source: *openstack-origin
|
||||||
|
to:
|
||||||
|
- '3'
|
||||||
|
- '4'
|
||||||
|
- '5'
|
||||||
|
|
||||||
|
ceph-osd:
|
||||||
|
charm: cs:~openstack-charmers-next/ceph-osd
|
||||||
|
num_units: 3
|
||||||
|
storage:
|
||||||
|
osd-devices: 10G
|
||||||
|
options:
|
||||||
|
source: *openstack-origin
|
||||||
|
to:
|
||||||
|
- '6'
|
||||||
|
- '7'
|
||||||
|
- '8'
|
||||||
|
|
||||||
|
ceph-proxy:
|
||||||
|
charm: ceph-proxy
|
||||||
|
num_units: 1
|
||||||
|
options:
|
||||||
|
source: *openstack-origin
|
||||||
|
to:
|
||||||
|
- '9'
|
||||||
|
|
||||||
|
ceph-radosgw:
|
||||||
|
charm: cs:~openstack-charmers-next/ceph-radosgw
|
||||||
|
num_units: 1
|
||||||
|
options:
|
||||||
|
source: *openstack-origin
|
||||||
|
to:
|
||||||
|
- '10'
|
||||||
|
|
||||||
|
cinder:
|
||||||
|
charm: cs:~openstack-charmers-next/cinder
|
||||||
|
num_units: 1
|
||||||
|
options:
|
||||||
|
openstack-origin: *openstack-origin
|
||||||
|
block-device: ""
|
||||||
|
ephemeral-unmount: ""
|
||||||
|
glance-api-version: 2
|
||||||
|
overwrite: "false"
|
||||||
|
constraints: mem=2048
|
||||||
|
to:
|
||||||
|
- '11'
|
||||||
|
|
||||||
|
cinder-ceph:
|
||||||
|
charm: cs:~openstack-charmers-next/cinder-ceph
|
||||||
|
options:
|
||||||
|
restrict-ceph-pools: True
|
||||||
|
|
||||||
|
keystone:
|
||||||
|
charm: cs:~openstack-charmers-next/keystone
|
||||||
|
num_units: 1
|
||||||
|
options:
|
||||||
|
openstack-origin: *openstack-origin
|
||||||
|
admin-password: openstack
|
||||||
|
constraints: mem=1024
|
||||||
|
to:
|
||||||
|
- '12'
|
||||||
|
|
||||||
|
rabbitmq-server:
|
||||||
|
charm: cs:~openstack-charmers-next/rabbitmq-server
|
||||||
|
num_units: 1
|
||||||
|
constraints: mem=1024
|
||||||
|
options:
|
||||||
|
source: *openstack-origin
|
||||||
|
to:
|
||||||
|
- '13'
|
||||||
|
|
||||||
|
glance:
|
||||||
|
charm: cs:~openstack-charmers-next/glance
|
||||||
|
num_units: 1
|
||||||
|
options:
|
||||||
|
openstack-origin: *openstack-origin
|
||||||
|
to:
|
||||||
|
- '14'
|
||||||
|
|
||||||
|
nova-compute:
|
||||||
|
charm: cs:~openstack-charmers-next/nova-compute
|
||||||
|
num_units: 1
|
||||||
|
options:
|
||||||
|
openstack-origin: *openstack-origin
|
||||||
|
to:
|
||||||
|
- '15'
|
||||||
|
|
||||||
|
|
||||||
|
relations:
|
||||||
|
|
||||||
|
- - 'ceph-osd:mon'
|
||||||
|
- 'ceph-mon:osd'
|
||||||
|
|
||||||
|
- - 'ceph-proxy:radosgw'
|
||||||
|
- 'ceph-radosgw:mon'
|
||||||
|
|
||||||
|
- - 'cinder:amqp'
|
||||||
|
- 'rabbitmq-server:amqp'
|
||||||
|
|
||||||
|
- - 'cinder:shared-db'
|
||||||
|
- 'cinder-mysql-router:shared-db'
|
||||||
|
- - 'cinder-mysql-router:db-router'
|
||||||
|
- 'mysql-innodb-cluster:db-router'
|
||||||
|
|
||||||
|
- - 'keystone:shared-db'
|
||||||
|
- 'keystone-mysql-router:shared-db'
|
||||||
|
- - 'keystone-mysql-router:db-router'
|
||||||
|
- 'mysql-innodb-cluster:db-router'
|
||||||
|
|
||||||
|
- - 'cinder:identity-service'
|
||||||
|
- 'keystone:identity-service'
|
||||||
|
|
||||||
|
- - 'cinder-ceph:storage-backend'
|
||||||
|
- 'cinder:storage-backend'
|
||||||
|
|
||||||
|
- - 'cinder-ceph:ceph'
|
||||||
|
- 'ceph-proxy:client'
|
||||||
|
|
||||||
|
- - 'glance:image-service'
|
||||||
|
- 'nova-compute:image-service'
|
||||||
|
|
||||||
|
- - 'glance:identity-service'
|
||||||
|
- 'keystone:identity-service'
|
||||||
|
|
||||||
|
- - 'glance:shared-db'
|
||||||
|
- 'glance-mysql-router:shared-db'
|
||||||
|
- - 'glance-mysql-router:db-router'
|
||||||
|
- 'mysql-innodb-cluster:db-router'
|
||||||
|
|
||||||
|
- - 'nova-compute:ceph-access'
|
||||||
|
- 'cinder-ceph:ceph-access'
|
||||||
|
|
||||||
|
- - 'nova-compute:amqp'
|
||||||
|
- 'rabbitmq-server:amqp'
|
@ -23,6 +23,8 @@ gate_bundles:
|
|||||||
- erasure-coded: focal-victoria-ec
|
- erasure-coded: focal-victoria-ec
|
||||||
- focal-wallaby
|
- focal-wallaby
|
||||||
- erasure-coded: focal-wallaby-ec
|
- erasure-coded: focal-wallaby-ec
|
||||||
|
- focal-xena
|
||||||
|
- erasure-coded: focal-xena-ec
|
||||||
- groovy-victoria
|
- groovy-victoria
|
||||||
- erasure-coded: groovy-victoria-ec
|
- erasure-coded: groovy-victoria-ec
|
||||||
|
|
||||||
@ -38,6 +40,8 @@ dev_bundles:
|
|||||||
- bionic-rocky # mimic
|
- bionic-rocky # mimic
|
||||||
- hirsute-wallaby
|
- hirsute-wallaby
|
||||||
- erasure-coded: hirsute-wallaby-ec
|
- erasure-coded: hirsute-wallaby-ec
|
||||||
|
- impish-xena
|
||||||
|
- erasure-coded: impish-xena-ec
|
||||||
|
|
||||||
smoke_bundles:
|
smoke_bundles:
|
||||||
- focal-ussuri
|
- focal-ussuri
|
||||||
@ -69,3 +73,5 @@ tests_options:
|
|||||||
force_deploy:
|
force_deploy:
|
||||||
- hirsute-wallaby
|
- hirsute-wallaby
|
||||||
- hirsute-wallaby-ec
|
- hirsute-wallaby-ec
|
||||||
|
- impish-xena
|
||||||
|
- impish-xena-ec
|
||||||
|
13 tox.ini
@ -22,19 +22,22 @@ skip_missing_interpreters = False
|
|||||||
# * It is also necessary to pin virtualenv as a newer virtualenv would still
|
# * It is also necessary to pin virtualenv as a newer virtualenv would still
|
||||||
# lead to fetching the latest pip in the func* tox targets, see
|
# lead to fetching the latest pip in the func* tox targets, see
|
||||||
# https://stackoverflow.com/a/38133283
|
# https://stackoverflow.com/a/38133283
|
||||||
requires = pip < 20.3
|
requires =
|
||||||
virtualenv < 20.0
|
pip < 20.3
|
||||||
|
virtualenv < 20.0
|
||||||
|
setuptools < 50.0.0
|
||||||
|
|
||||||
# NOTE: https://wiki.canonical.com/engineering/OpenStack/InstallLatestToxOnOsci
|
# NOTE: https://wiki.canonical.com/engineering/OpenStack/InstallLatestToxOnOsci
|
||||||
minversion = 3.2.0
|
minversion = 3.18.0
|
||||||
|
|
||||||
[testenv]
|
[testenv]
|
||||||
setenv = VIRTUAL_ENV={envdir}
|
setenv = VIRTUAL_ENV={envdir}
|
||||||
PYTHONHASHSEED=0
|
PYTHONHASHSEED=0
|
||||||
CHARM_DIR={envdir}
|
CHARM_DIR={envdir}
|
||||||
install_command =
|
install_command =
|
||||||
pip install {opts} {packages}
|
{toxinidir}/pip.sh install {opts} {packages}
|
||||||
commands = stestr run --slowest {posargs}
|
commands = stestr run --slowest {posargs}
|
||||||
whitelist_externals = juju
|
allowlist_externals = juju
|
||||||
passenv = HOME TERM CS_* OS_* TEST_*
|
passenv = HOME TERM CS_* OS_* TEST_*
|
||||||
deps = -r{toxinidir}/test-requirements.txt
|
deps = -r{toxinidir}/test-requirements.txt
|
||||||
|
|
||||||
|