Merge pull request #11 from w-miller/schedule
Improve scheduling, persist node details and add some tests
.gitignore (vendored): 5 changed lines
@@ -2,8 +2,8 @@
 .*
 # Ansible retry files
 *.retry
-# Tenks allocations file
-allocations.yml
+# Tenks state file
+state.yml
 # Tenks Galaxy roles
 ansible/roles/stackhpc.*
 
@@ -45,6 +45,7 @@ pip-log.txt
 pip-delete-this-directory.txt
 
 # Unit test / coverage reports
+cover/
 htmlcov/
 .tox/
 .coverage
.stestr.conf: new file, 3 lines
@@ -0,0 +1,3 @@
+[DEFAULT]
+test_path=${TESTS_DIR:-./tests/}
+top_dir=./
.travis.yml: 22 changed lines
@@ -1,6 +1,5 @@
 ---
 language: python
-python: "2.7"
 
 # Run jobs in VMs - sudo is required by ansible tests.
 sudo: required
@@ -15,12 +14,23 @@ addons:
     - realpath
 
 # Create a build matrix for the different test jobs.
-env:
-  matrix:
-    # Run python style checks.
-    - TOX_ENV=pep8
-    # Run Ansible linting.
-    - TOX_ENV=alint
+matrix:
+  include:
+    # Run Python style checks.
+    - python: 3.5
+      env: TOX_ENV=pep8
+    # Run Ansible linting.
+    - python: 3.5
+      env: TOX_ENV=alint
+    # Run Python 3.5 tests.
+    - python: 3.5
+      env: TOX_ENV=py35
+    # Run Python 2.7 tests.
+    - python: 2.7
+      env: TOX_ENV=py27
+    # Run coverage checks.
+    - python: 3.5
+      env: TOX_ENV=cover
 
 install:
   # Install tox in a virtualenv to ensure we have an up to date version.
README.md: 33 changed lines
@@ -13,6 +13,22 @@ installed inside it, Tenks' role dependencies can be installed by
 `ansible-galaxy install --role-file=requirements.yml
 --roles-path=ansible/roles/`.
 
+### Hosts
+
+Tenks uses Ansible inventory to manage hosts. A multi-host setup is therefore
+supported, although the default hosts configuration will deploy an all-in-one
+setup on the host where the `ansible-playbook` command is executed
+(`localhost`).
+
+* Configuration management of the Tenks cluster is always performed on
+  `localhost`.
+* The `hypervisors` group should not directly contain any hosts. Its
+  sub-groups must contain one or more systems. Systems in its sub-groups will
+  host a subset of the nodes deployed by Tenks.
+
+  * The `libvirt` group is a sub-group of `hypervisors`. Systems in this
+    group will act as hypervisors using the Libvirt provider.
+
 ### Configuration
 
 An override file should be created to configure Tenks. Any variables specified
@@ -69,6 +85,23 @@ want to run. The current playbooks can be seen in the Ansible structure diagram
 in the *Development* section. Bear in mind that you will have to set `cmd` in
 your override file if you are running any of the sub-playbooks individually.
 
+Once a cluster has been deployed, it can be reconfigured by modifying the
+Tenks configuration and rerunning `deploy.yml`. Node specs can be changed
+(including increasing/decreasing the number of nodes); node types can also be
+reconfigured. Existing nodes will be preserved where possible.
+
+## Limitations
+
+The following is a non-exhaustive list of current known limitations of Tenks:
+
+* When using the Libvirt provider (currently the only provider), Tenks
+  hypervisors cannot co-exist with a containerised Libvirt daemon (for
+  example, as deployed by Kolla in the nova-libvirt container). Tenks will
+  configure an uncontainerised Libvirt daemon instance on the hypervisor, and
+  this may conflict with an existing containerised daemon. A workaround is to
+  disable the Nova virtualised compute service on each Tenks hypervisor if it
+  is present (for example, `docker stop nova_libvirt`) before running Tenks.
+
 ## Development
 
 A diagram representing the Ansible structure of Tenks can be seen below. Blue
ansible/action_plugins/__init__.py: new empty file

(deleted action plugin, 111 lines; file name not captured in this extract)
@@ -1,111 +0,0 @@
# Copyright (c) 2018 StackHPC Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

# Avoid shadowing of system copy module by copy action plugin.
from __future__ import absolute_import
from copy import deepcopy

from ansible.errors import AnsibleActionFail
from ansible.module_utils._text import to_text
from ansible.plugins.action import ActionBase
import six


class ActionModule(ActionBase):

    def run(self, tmp=None, task_vars=None):
        """
        Schedule specifications of nodes by type onto hypervisors.

        The following task vars are accepted:
            :hypervisor_vars: A dict of hostvars for each hypervisor, keyed
                              by hypervisor hostname. Required.
            :specs: A list of node specifications to be instantiated.
                    Required.
            :node_types: A dict mapping node type names to a dict of
                         properties of that type.
            :node_name_prefix: A string with which to prefix all sequential
                               node names.
            :vol_name_prefix: A string with which to prefix all sequential
                              volume names.
        :returns: A dict containing lists of node details, keyed by the
                  hostname of the hypervisor to which they are scheduled.
        """
        result = super(ActionModule, self).run(tmp, task_vars)
        # Initialise our return dict.
        result['result'] = {}
        del tmp  # tmp no longer has any effect
        self._validate_vars(task_vars)

        idx = 0
        hypervisor_names = task_vars['hypervisor_vars'].keys()
        for spec in task_vars['specs']:
            try:
                typ = spec['type']
                cnt = spec['count']
            except KeyError:
                e = ("All specs must contain a `type` and a `count`. "
                     "Offending spec: %s" % spec)
                raise AnsibleActionFail(to_text(e))
            for _ in six.moves.range(cnt):
                node = deepcopy(task_vars['node_types'][typ])
                # All nodes need an Ironic driver.
                node.setdefault('ironic_driver',
                                task_vars['hostvars']['localhost'][
                                    'default_ironic_driver'])
                # Set the type, for future reference.
                node['type'] = typ
                # Sequentially number the node and volume names.
                node['name'] = "%s%d" % (task_vars['node_name_prefix'], idx)
                for vol_idx, vol in enumerate(node['volumes']):
                    vol['name'] = "%s%d" % (task_vars['vol_name_prefix'],
                                            vol_idx)
                try:
                    node['ironic_config'] = spec['ironic_config']
                except KeyError:
                    # Ironic config is not mandatory.
                    pass
                # Perform round-robin scheduling with node index modulo
                # number of hypervisors.
                hyp_name = hypervisor_names[idx % len(hypervisor_names)]
                try:
                    result['result'][hyp_name].append(node)
                except KeyError:
                    # This hypervisor doesn't yet have any scheduled nodes.
                    result['result'][hyp_name] = [node]
                idx += 1
        return result

    def _validate_vars(self, task_vars):
        if task_vars is None:
            task_vars = {}

        REQUIRED_TASK_VARS = {'hypervisor_vars', 'specs', 'node_types'}
        # Var names and their defaults.
        OPTIONAL_TASK_VARS = {
            ('node_name_prefix', 'tk'),
            ('vol_name_prefix', 'vol'),
        }
        for var in REQUIRED_TASK_VARS:
            if var not in task_vars:
                e = "The parameter '%s' must be specified." % var
                raise AnsibleActionFail(to_text(e))

        for var in OPTIONAL_TASK_VARS:
            if var[0] not in task_vars:
                task_vars[var[0]] = var[1]

        if not task_vars['hypervisor_vars']:
            e = ("There are no hosts in the 'hypervisors' group to which we "
                 "can schedule.")
            raise AnsibleActionFail(to_text(e))
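The scheduling loop in the deleted plugin is plain index-modulo round-robin. A minimal standalone sketch of that behaviour (hypothetical spec and hostname values; the sketch takes a list of hostnames rather than Ansible task vars):

```python
def schedule(specs, hypervisor_names, node_name_prefix="tk"):
    """Round-robin node scheduling: node index modulo hypervisor count."""
    result = {}
    idx = 0
    for spec in specs:
        for _ in range(spec["count"]):
            node = {"type": spec["type"],
                    "name": "%s%d" % (node_name_prefix, idx)}
            # The node index modulo the number of hypervisors picks the host.
            hyp = hypervisor_names[idx % len(hypervisor_names)]
            result.setdefault(hyp, []).append(node)
            idx += 1
    return result
```

Note that this simple scheme ignores per-host capacity and connectivity, which is part of what the replacement plugin below addresses.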
ansible/action_plugins/tenks_update_state.py: new file, 328 lines
@@ -0,0 +1,328 @@
# Copyright (c) 2018 StackHPC Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

# Avoid shadowing of system copy module by copy action plugin.
from __future__ import absolute_import
import abc
from copy import deepcopy
import itertools
import re

from ansible.errors import AnsibleActionFail
from ansible.module_utils._text import to_text
from ansible.plugins.action import ActionBase
import six


class ActionModule(ActionBase):
    def run(self, tmp=None, task_vars=None):
        """
        Produce a dict of Tenks state.

        Actions include:
            * Generating indices for physical networks for each hypervisor.
            * Scheduling specifications of nodes by type onto hypervisors.

        The following task vars are accepted:
            :hypervisor_vars: A dict of hostvars for each hypervisor, keyed
                              by hypervisor hostname. Required.
            :specs: A list of node specifications to be instantiated.
                    Required.
            :node_types: A dict mapping node type names to a dict of
                         properties of that type.
            :node_name_prefix: A string with which to prefix all sequential
                               node names.
            :vol_name_prefix: A string with which to prefix all sequential
                              volume names.
            :state: A dict of existing Tenks state (as produced by a
                    previous run of this module), to be taken into account
                    in this run. Optional.
            :prune_only: A boolean which, if set, will instruct the plugin
                         to only remove any nodes with state='absent' from
                         `state`.
        :returns: A dict of Tenks state for each hypervisor, keyed by the
                  hostname of the hypervisor to which the state refers.
        """
        result = super(ActionModule, self).run(tmp, task_vars)
        # Initialise our return dict.
        result['result'] = {}
        del tmp  # tmp no longer has any effect

        self.args = self._task.args
        self.localhost_vars = task_vars['hostvars']['localhost']
        self._validate_args()

        if self.args['prune_only']:
            self._prune_absent_nodes()
        else:
            # Modify the state as necessary.
            self._set_physnet_idxs()
            self._process_specs()

        # Return the modified state.
        result['result'] = self.args['state']
        return result

    def _prune_absent_nodes(self):
        """
        Remove any nodes with state='absent' from the state dict.
        """
        for hyp in six.itervalues(self.args['state']):
            hyp['nodes'] = [n for n in hyp['nodes']
                            if n.get('state') != 'absent']

    def _set_physnet_idxs(self):
        """
        Set the index of each physnet for each host.

        Use the specified physnet mappings and any existing physnet indices
        to ensure the generated indices are consistent.
        """
        state = self.args['state']
        for hostname, hostvars in six.iteritems(
                self.args['hypervisor_vars']):
            # The desired mappings given in the Tenks configuration. These
            # do not include IDXs, which are an implementation detail of
            # Tenks.
            specified_mappings = hostvars['physnet_mappings']
            try:
                # The physnet indices currently in the state file.
                old_idxs = state[hostname]['physnet_indices']
            except KeyError:
                # The hypervisor is new since the last run.
                state[hostname] = {}
                old_idxs = {}
            new_idxs = {}
            next_idx = 0
            used_idxs = list(six.itervalues(old_idxs))
            for name, dev in six.iteritems(specified_mappings):
                try:
                    # We need to re-use the IDXs of any existing physnets.
                    idx = old_idxs[name]
                except KeyError:
                    # A new physnet requires a new IDX.
                    while next_idx in used_idxs:
                        next_idx += 1
                    used_idxs.append(next_idx)
                    idx = next_idx
                new_idxs[name] = idx
            state[hostname]['physnet_indices'] = new_idxs

    def _process_specs(self):
        """
        Ensure the correct nodes are present in `state`.

        Remove unnecessary nodes by marking as 'absent' and schedule new
        nodes to hypervisors such that the nodes in `state` match what's
        specified in `specs`.
        """
        # Iterate through existing nodes, marking for deletion where
        # necessary.
        for hyp in six.itervalues(self.args['state']):
            # Absent nodes cannot fulfil a spec.
            for node in [n for n in hyp.get('nodes', [])
                         if n.get('state') != 'absent']:
                if ((self.localhost_vars['cmd'] == 'teardown' or
                     not self._tick_off_node(self.args['specs'], node))):
                    # We need to delete this node, since it exists but does
                    # not fulfil any spec.
                    node['state'] = 'absent'

        if self.localhost_vars['cmd'] != 'teardown':
            # Ensure all hosts exist in state.
            for hostname in self.args['hypervisor_vars']:
                self.args['state'].setdefault(hostname, {})
                self.args['state'][hostname].setdefault('nodes', [])
            # Now create all the required new nodes.
            scheduler = RoundRobinScheduler(self.args['hypervisor_vars'],
                                            self.args['state'])
            self._create_nodes(scheduler)

    def _tick_off_node(self, specs, node):
        """
        Tick off an existing node as fulfilling a node specification.

        If `node` is required in `specs`, decrement that spec's count and
        return True. Otherwise, return False.
        """
        # Attributes that a spec and a node have to have in common for the
        # node to count as an 'instance' of the spec.
        MATCHING_ATTRS = {'type', 'ironic_config'}
        for spec in specs:
            if ((all(spec[attr] == node[attr]
                     for attr in MATCHING_ATTRS) and
                 spec['count'] > 0)):
                spec['count'] -= 1
                return True
        return False

    def _create_nodes(self, scheduler):
        """
        Create new nodes to fulfil the specs.
        """
        # Anything left in specs needs to be created.
        for spec in self.args['specs']:
            for _ in six.moves.range(spec['count']):
                node = self._gen_node(spec['type'],
                                      spec.get('ironic_config'))
                hostname, idx = scheduler.choose_host(node)
                # Set the node name based on its index.
                node['name'] = "%s%d" % (self.args['node_name_prefix'], idx)
                # Set the IPMI port using its index as an offset from the
                # lowest port.
                node['ipmi_port'] = (
                    self.args['hypervisor_vars'][hostname][
                        'ipmi_port_range_start'] + idx)
                self.args['state'][hostname]['nodes'].append(node)

    def _gen_node(self, type_name, ironic_config=None):
        """
        Generate a node description.

        A name will not be assigned at this point because we don't know
        which hypervisor the node will be scheduled to.
        """
        node_type = self.args['node_types'][type_name]
        node = deepcopy(node_type)
        # All nodes need an Ironic driver.
        node.setdefault(
            'ironic_driver',
            self.localhost_vars['default_ironic_driver']
        )
        # Set the type name, for future reference.
        node['type'] = type_name
        # Sequentially number the volume names.
        for vol_idx, vol in enumerate(node['volumes']):
            vol['name'] = (
                "%s%d" % (self.args['vol_name_prefix'], vol_idx))
        # Ironic config is not mandatory.
        if ironic_config:
            node['ironic_config'] = ironic_config
        return node

    def _validate_args(self):
        if self.args is None:
            self.args = {}

        REQUIRED_ARGS = {'hypervisor_vars', 'specs', 'node_types'}
        # Arg names and their defaults.
        OPTIONAL_ARGS = [
            ('node_name_prefix', 'tk'),
            # state is optional, since if this is the first run there won't
            # be any yet.
            ('state', {}),
            ('vol_name_prefix', 'vol'),
            ('prune_only', False),
        ]
        for arg in OPTIONAL_ARGS:
            if arg[0] not in self.args:
                self.args[arg[0]] = arg[1]

        # No arguments are required in prune_only mode.
        if not self.args['prune_only']:
            for arg in REQUIRED_ARGS:
                if arg not in self.args:
                    e = "The parameter '%s' must be specified." % arg
                    raise AnsibleActionFail(to_text(e))

            if not self.args['hypervisor_vars']:
                e = ("There are no hosts in the 'hypervisors' group to "
                     "which we can schedule.")
                raise AnsibleActionFail(to_text(e))

            for spec in self.args['specs']:
                if 'type' not in spec or 'count' not in spec:
                    e = ("All specs must contain a `type` and a `count`. "
                         "Offending spec: %s" % spec)
                    raise AnsibleActionFail(to_text(e))


@six.add_metaclass(abc.ABCMeta)
class Scheduler():
    """
    Abstract class representing a 'method' of scheduling nodes to hosts.
    """
    def __init__(self, hostvars, state):
        self.hostvars = hostvars
        self.state = state

        self._host_free_idxs = {}

    @abc.abstractmethod
    def choose_host(self, node):
        """Abstract method to choose a host to which we can schedule `node`.

        Returns a tuple of the hostname of the chosen host and the index of
        this node on the host.
        """
        raise NotImplementedError()

    def host_next_idx(self, hostname):
        """
        Return the next available index for a node on this host.

        If the free indices are not cached for this host, they will be
        calculated.

        :param hostname: The name of the host in question
        :returns: The next available index, or None if none is available
        """
        if hostname not in self._host_free_idxs:
            self._calculate_free_idxs(hostname)
        try:
            return self._host_free_idxs[hostname].pop(0)
        except IndexError:
            return None

    def host_passes(self, node, hostname):
        """
        Perform checks to ascertain whether this host can support this node.
        """
        # Check that the host is connected to all physical networks that the
        # node requires.
        return all(pn in self.hostvars[hostname]['physnet_mappings'].keys()
                   for pn in node['physical_networks'])

    def _calculate_free_idxs(self, hostname):
        # The maximum number of nodes this host can have is the number of
        # IPMI ports it has available.
        all_idxs = six.moves.range(
            self.hostvars[hostname]['ipmi_port_range_end'] -
            self.hostvars[hostname]['ipmi_port_range_start'] + 1)
        get_idx = (
            lambda n: int(re.match(r'[A-Za-z]*([0-9]+)$', n).group(1)))
        used_idxs = {get_idx(n['name'])
                     for n in self.state[hostname]['nodes']
                     if n.get('state') != 'absent'}
        self._host_free_idxs[hostname] = sorted(
            [i for i in all_idxs if i not in used_idxs])


class RoundRobinScheduler(Scheduler):
    """
    Schedule nodes in a round-robin fashion to hosts.
    """
    def __init__(self, hostvars, state):
        super(RoundRobinScheduler, self).__init__(hostvars, state)
        self.hostvars = hostvars
        self._host_cycle = itertools.cycle(hostvars.keys())

    def choose_host(self, node):
        idx = None
        count = 0
        while idx is None:
            # Ensure we don't get into an infinite loop if no hosts are
            # available.
            if count >= len(self.hostvars):
                e = ("No hypervisors are left that can support the node "
                     "%s." % node)
                raise AnsibleActionFail(to_text(e))
            count += 1
            hostname = next(self._host_cycle)
            if self.host_passes(node, hostname):
                idx = self.host_next_idx(hostname)
        return hostname, idx
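The `_set_physnet_idxs` method keeps physnet indices stable across runs: existing physnets keep their index, and new physnets get the lowest index not already in use. Stripped of the Ansible plumbing, the assignment reduces to the following sketch (names are illustrative):

```python
def assign_physnet_indices(physnet_names, old_idxs):
    """Re-use existing physnet indices; give new physnets the lowest
    index not already in use."""
    new_idxs = {}
    used = list(old_idxs.values())
    next_idx = 0
    for name in physnet_names:
        if name in old_idxs:
            # Existing physnets keep their index for consistency.
            idx = old_idxs[name]
        else:
            # New physnet: find the lowest unused index.
            while next_idx in used:
                next_idx += 1
            used.append(next_idx)
            idx = next_idx
        new_idxs[name] = idx
    return new_idxs
```

This stability matters because bridge and veth names embed the index, so reshuffling indices between runs would orphan existing interfaces.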
ansible/cleanup_state.yml: new file, 20 lines
@@ -0,0 +1,20 @@
+---
+- hosts: localhost
+  tasks:
+    - name: Load state from file
+      include_vars:
+        file: "{{ state_file_path }}"
+        name: tenks_state
+
+    - name: Prune absent nodes from state
+      tenks_update_state:
+        prune_only: true
+        state: "{{ tenks_state }}"
+      register: new_state
+
+    - name: Write new state to file
+      copy:
+        # The tenks_update_state action plugin outputs a dict. Pretty-print
+        # this to persist it in a YAML file.
+        content: "{{ new_state.result | to_nice_yaml }}"
+        dest: "{{ state_file_path }}"
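The `prune_only` pass this playbook invokes amounts to a list filter per hypervisor, mirroring `_prune_absent_nodes` in the plugin above. A minimal sketch:

```python
def prune_absent_nodes(state):
    """Drop nodes marked state='absent' from each hypervisor's node list."""
    for hyp in state.values():
        hyp["nodes"] = [n for n in hyp["nodes"]
                        if n.get("state") != "absent"]
    return state
```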
(hunk from another playbook; file name not captured in this extract)
@@ -25,3 +25,6 @@
 
 - name: Register flavors in Nova
   import_playbook: flavor_registration.yml
+
+- name: Clean up Tenks state
+  import_playbook: cleanup_state.yml
(filter plugin; file name not captured in this extract)
@@ -17,6 +17,7 @@ import re
 from ansible.errors import AnsibleFilterError
 from ansible.module_utils._text import to_text
 from jinja2 import contextfilter
+import six
 
 
 class FilterModule(object):
@@ -35,6 +36,8 @@ class FilterModule(object):
             # Network name filters.
             'bridge_name': bridge_name,
             'ovs_link_name': ovs_link_name,
+            'physnet_index_to_name': physnet_index_to_name,
+            'physnet_name_to_index': physnet_name_to_index,
             'source_link_name': source_link_name,
             'source_to_ovs_link_name': source_to_ovs_link_name,
             'source_link_to_physnet_name': source_link_to_physnet_name,
@@ -97,47 +100,58 @@ def set_libvirt_start_params(node):
 
 
 @contextfilter
-def bridge_name(context, physnet):
+def bridge_name(context, physnet, inventory_hostname=None):
     """Get the Tenks OVS bridge name from a physical network name.
     """
-    return (_get_hostvar(context, 'bridge_prefix') +
-            str(_physnet_name_to_index(context, physnet)))
+    return (_get_hostvar(context, 'bridge_prefix',
+                         inventory_hostname=inventory_hostname) +
+            str(physnet_name_to_index(context, physnet,
+                                      inventory_hostname=inventory_hostname)))
 
 
 @contextfilter
-def source_link_name(context, node, physnet):
+def source_link_name(context, node, physnet, inventory_hostname=None):
     """Get the source veth link name for a node/physnet combination.
     """
-    return (_link_name(context, node, physnet) +
-            _get_hostvar(context, 'veth_node_source_suffix'))
+    return (_link_name(context, node, physnet,
+                       inventory_hostname=inventory_hostname) +
+            _get_hostvar(context, 'veth_node_source_suffix',
+                         inventory_hostname=inventory_hostname))
 
 
 @contextfilter
-def ovs_link_name(context, node, physnet):
+def ovs_link_name(context, node, physnet, inventory_hostname=None):
     """Get the OVS veth link name for a node/physnet combination.
     """
-    return (_link_name(context, node, physnet) +
-            _get_hostvar(context, 'veth_node_ovs_suffix'))
+    return (_link_name(context, node, physnet,
+                       inventory_hostname=inventory_hostname) +
+            _get_hostvar(context, 'veth_node_ovs_suffix',
+                         inventory_hostname=inventory_hostname))
 
 
 @contextfilter
-def source_to_ovs_link_name(context, source):
+def source_to_ovs_link_name(context, source, inventory_hostname=None):
     """Get the corresponding OVS link name for a source link name.
     """
-    base = source[:-len(_get_hostvar(context, 'veth_node_source_suffix'))]
-    return base + _get_hostvar(context, 'veth_node_ovs_suffix')
+    base = source[:-len(_get_hostvar(context, 'veth_node_source_suffix',
+                                     inventory_hostname=inventory_hostname))]
+    return base + _get_hostvar(context, 'veth_node_ovs_suffix',
+                               inventory_hostname=inventory_hostname)
 
 
 @contextfilter
-def source_link_to_physnet_name(context, source):
+def source_link_to_physnet_name(context, source, inventory_hostname=None):
     """Get the physical network name that a source veth link is connected to.
     """
-    prefix = _get_hostvar(context, 'veth_prefix')
-    suffix = _get_hostvar(context, 'veth_node_source_suffix')
+    prefix = _get_hostvar(context, 'veth_prefix',
+                          inventory_hostname=inventory_hostname)
+    suffix = _get_hostvar(context, 'veth_node_source_suffix',
+                          inventory_hostname=inventory_hostname)
     match = re.compile(r"%s.*-(\d+)%s"
                        % (re.escape(prefix), re.escape(suffix))).match(source)
     idx = match.group(1)
-    return _physnet_index_to_name(context, int(idx))
+    return physnet_index_to_name(context, int(idx),
+                                 inventory_hostname=inventory_hostname)
 
 
 def size_string_to_gb(size):
@@ -181,21 +195,36 @@ def _parse_size_string(size):
     return int(number) * (base ** POWERS[power])
 
 
-def _link_name(context, node, physnet):
-    prefix = _get_hostvar(context, 'veth_prefix')
-    return prefix + node['name'] + '-' + str(_physnet_name_to_index(context,
-                                                                    physnet))
+def _link_name(context, node, physnet, inventory_hostname=None):
+    prefix = _get_hostvar(context, 'veth_prefix',
+                          inventory_hostname=inventory_hostname)
+    return (prefix + node['name'] + '-' +
+            str(physnet_name_to_index(context, physnet,
+                                      inventory_hostname=inventory_hostname)))
 
 
-def _physnet_name_to_index(context, physnet):
+@contextfilter
+def physnet_name_to_index(context, physnet, inventory_hostname=None):
     """Get the ID of this physical network on the hypervisor.
     """
-    physnet_mappings = _get_hostvar(context, 'physnet_mappings')
-    return sorted(physnet_mappings).index(physnet)
+    if not inventory_hostname:
+        inventory_hostname = _get_hostvar(context, 'inventory_hostname')
+    # localhost stores the state.
+    state = _get_hostvar(context, 'tenks_state',
+                         inventory_hostname='localhost')
+    return state[inventory_hostname]['physnet_indices'][physnet]
 
 
-def _physnet_index_to_name(context, idx):
+@contextfilter
+def physnet_index_to_name(context, idx, inventory_hostname=None):
     """Get the name of this physical network on the hypervisor.
     """
-    physnet_mappings = _get_hostvar(context, 'physnet_mappings')
-    return sorted(physnet_mappings)[idx]
+    if not inventory_hostname:
+        inventory_hostname = _get_hostvar(context, 'inventory_hostname')
+    # localhost stores the state.
+    state = _get_hostvar(context, 'tenks_state',
+                         inventory_hostname='localhost')
+    # We should have exactly one physnet with this index.
+    for k, v in six.iteritems(state[inventory_hostname]['physnet_indices']):
+        if v == idx:
+            return k
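The rewritten filters resolve physnet indices through the persisted state dict rather than by sorting `physnet_mappings`. Stripped of the Ansible context handling, the two lookups reduce to the following sketch (the state shape shown is illustrative):

```python
def physnet_name_to_index(state, hostname, physnet):
    """Look up a physnet's per-hypervisor index in the Tenks state."""
    return state[hostname]["physnet_indices"][physnet]


def physnet_index_to_name(state, hostname, idx):
    """Inverse lookup; exactly one physnet has a given index on a host."""
    for name, i in state[hostname]["physnet_indices"].items():
        if i == idx:
            return name
```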
(hypervisor setup playbook; file name not captured in this extract)
@@ -1,5 +1,15 @@
 ---
+- hosts: localhost
+  tasks:
+    - name: Load state from file
+      include_vars:
+        file: "{{ state_file_path }}"
+        name: tenks_state
+
 - hosts: hypervisors
+  vars:
+    physnet_indices: >-
+      {{ hostvars.localhost.tenks_state[inventory_hostname].physnet_indices }}
   tasks:
     - include_tasks: hypervisor_setup.yml
(group variables; file name not captured in this extract)
@@ -96,10 +96,11 @@ deploy_kernel_uuid:
 # The Glance UUID of the image to use for the deployment ramdisk.
 deploy_ramdisk_uuid:
 
-# The path to the file which contains the current allocations of nodes to
-# hypervisors.
-allocations_file_path: >-
-  {{ '/'.join([(playbook_dir | dirname), 'allocations.yml']) }}
+# The path to the file which contains the state of the current Tenks cluster
+# that is deployed. This includes details such as allocations of nodes to
+# hypervisors, and unique indices of physical networks.
+state_file_path: >-
+  {{ '/'.join([(playbook_dir | dirname), 'state.yml']) }}
 
 # The default Ironic driver of a node. Can be overridden per-node.
 default_ironic_driver: ipmi
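For illustration, a `state.yml` written to this path might look like the following. The hostnames and values here are hypothetical, and the shape is inferred from the `tenks_update_state` plugin above (per-hypervisor `physnet_indices` and `nodes`, each node carrying a name, type, driver and IPMI port):

```yaml
# Hypothetical example of the state file's shape, keyed by hypervisor.
alice:
  physnet_indices:
    physnet1: 0
  nodes:
    - name: tk0
      type: type0
      ironic_driver: ipmi
      ipmi_port: 6230
      state: present
      volumes:
        - name: vol0
```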
(hypervisor setup tasks; file name not captured in this extract)
@@ -41,11 +41,10 @@
 - name: Configure physical networks
   include_tasks: physical_network.yml
   vars:
-    network_name: "{{ item.0 }}"
-    tenks_bridge: "{{ bridge_prefix ~ idx }}"
-    source_interface: "{{ item.1 }}"
+    network_name: "{{ pn.key }}"
+    tenks_bridge: "{{ bridge_prefix ~ (pn.key | physnet_name_to_index) }}"
+    source_interface: "{{ pn.value }}"
     state: "{{ 'absent' if cmd == 'teardown' else 'present' }}"
-  # Sort to ensure we always enumerate in the same order.
-  loop: "{{ physnet_mappings | dictsort }}"
+  loop: "{{ query('dict', physnet_mappings) }}"
   loop_control:
-    index_var: idx
+    loop_var: pn
(Virtual BMC playbook; file name not captured in this extract)
@@ -1,16 +1,16 @@
 ---
 - hosts: localhost
   tasks:
-    - name: Load allocations from file
+    - name: Load state from file
       include_vars:
-        file: "{{ allocations_file_path }}"
-        name: allocations
+        file: "{{ state_file_path }}"
+        name: tenks_state
 
 - name: Perform Virtual BMC configuration
   hosts: libvirt
   vars:
     vbmc_nodes: >-
-      {{ hostvars.localhost.allocations[inventory_hostname]
+      {{ hostvars.localhost.tenks_state[inventory_hostname].nodes
          | default([]) | selectattr('ironic_driver', 'eq',
                                     bmc_emulators.virtualbmc) | list }}
   tasks:
(Ironic node registration playbook; file name not captured in this extract)
@@ -1,9 +1,9 @@
 - hosts: localhost
   tasks:
-    - name: Load allocations from file
+    - name: Load state from file
       include_vars:
-        file: "{{ allocations_file_path }}"
-        name: allocations
+        file: "{{ state_file_path }}"
+        name: tenks_state
 
 - name: Check that OpenStack credentials exist in the environment
   fail:
@@ -22,7 +22,7 @@
       # basis to account for existing allocations, rather than for all nodes
       # here.
       ironic_nodes: >-
-        {{ alloc.value
+        {{ alloc.value.nodes
            | map('combine', {'state': ('absent' if cmd == 'teardown'
                                        else 'present')})
           | list }}
@@ -30,6 +30,6 @@
       ironic_virtualenv_path: "{{ virtualenv_path }}"
       ironic_python_upper_constraints_url: >-
         {{ python_upper_constraints_url }}
-    loop: "{{ query('dict', allocations) }}"
+    loop: "{{ query('dict', tenks_state) }}"
     loop_control:
      loop_var: alloc
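The `map('combine', {'state': …})` expression above just tags each node dict with a state. In plain Python, the equivalent transformation is (a sketch of the data flow, not the Jinja2 internals):

```python
def with_state(nodes, cmd):
    """Tag every node 'absent' on teardown, 'present' otherwise."""
    state = "absent" if cmd == "teardown" else "present"
    # dict(node, state=state) builds a new dict, leaving the input untouched,
    # matching combine's copy semantics.
    return [dict(node, state=state) for node in nodes]
```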
(Libvirt VM configuration playbook; file name not captured in this extract)
@@ -1,15 +1,15 @@
 ---
 - hosts: localhost
   tasks:
-    - name: Load allocations from file
+    - name: Load state from file
       include_vars:
-        file: "{{ allocations_file_path }}"
-        name: allocations
+        file: "{{ state_file_path }}"
+        name: tenks_state
 
 - hosts: libvirt
   vars:
     nodes: >-
-      {{ hostvars.localhost.allocations[inventory_hostname]
+      {{ hostvars.localhost.tenks_state[inventory_hostname].nodes
          | default([]) }}
   tasks:
     - name: Configure VMs
@@ -19,7 +19,7 @@
       libvirt_vm_default_console_log_dir: "{{ log_directory }}"
       # Configure VM definitions for the Libvirt provider.
       # FIXME(w-miller): Set absent/present in tenks_schedule on a per-node
-      # basis to account for existing allocations, rather than for all nodes
+      # basis to account for existing state, rather than for all nodes
       # here.
       libvirt_vms: >-
         {{ nodes | map('combine',
@@ -1,14 +1,14 @@
 - hosts: localhost
   tasks:
-    - name: Load allocations from file
+    - name: Load state from file
       include_vars:
-        file: "{{ allocations_file_path }}"
-        name: allocations
+        file: "{{ state_file_path }}"
+        name: tenks_state

 - hosts: hypervisors
   vars:
     nodes: >-
-      {{ hostvars.localhost.allocations[inventory_hostname]
+      {{ hostvars.localhost.tenks_state[inventory_hostname].nodes
         | default([]) }}
   tasks:
     - name: Configure veth pairs for each node
@@ -40,8 +40,7 @@
     - name: Configure node in Ironic
       os_ironic:
         auth_type: password
-        driver: >-
-          {{ node.ironic_config.ironic_driver | default(default_ironic_driver) }}
+        driver: "{{ node.ironic_driver }}"
         driver_info:
           power:
             ipmi_address: "{{ hostvars[ironic_hypervisor].ipmi_address }}"
@@ -10,7 +10,8 @@
     - name: Get physical network name
       set_fact:
-        physnet: "{{ source_interface | source_link_to_physnet_name }}"
+        physnet: "{{ source_interface | source_link_to_physnet_name(
+                        inventory_hostname=ironic_hypervisor) }}"

     - name: Get bridge name
       set_fact:
@@ -26,4 +27,4 @@
                   ].macaddress }}'
         --local-link-connection switch_info='{{ bridge }}'
         --local-link-connection port_id='{{ source_interface
-                                            | source_to_ovs_link_name }}'
+                                            | source_to_ovs_link_name(inventory_hostname=ironic_hypervisor) }}'
@@ -21,16 +21,28 @@
         {{ hypervisor_vars | default({}) | combine({item: hostvars[item]}) }}
       loop: "{{ groups['hypervisors'] }}"

-    - name: Schedule nodes to hypervisors
-      tenks_schedule:
+    - name: Check if an existing state file exists
+      stat:
+        path: "{{ state_file_path }}"
+      register: stat_result
+
+    - name: Read existing state from file
+      include_vars:
+        file: "{{ state_file_path }}"
+        name: current_state
+      when: stat_result.stat.exists
+
+    - name: Get updated state
+      tenks_update_state:
         hypervisor_vars: "{{ hypervisor_vars }}"
         node_types: "{{ node_types }}"
         specs: "{{ specs }}"
-      register: scheduling
+        state: "{{ current_state | default(omit) }}"
+      register: new_state

-    - name: Write node allocations to file
+    - name: Write new state to file
       copy:
         # tenks_schedule lookup plugin outputs a dict. Pretty-print this to
         # persist it in a YAML file.
-        content: "{{ scheduling.result | to_nice_yaml }}"
-        dest: "{{ allocations_file_path }}"
+        content: "{{ new_state.result | to_nice_yaml }}"
+        dest: "{{ state_file_path }}"
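The stat / include_vars / copy tasks above implement a read-modify-write cycle on the state file: load any previous state, feed it to the state-computing step, persist the result. A minimal Python sketch of the same cycle (using JSON for brevity where the playbook actually writes YAML, and a caller-supplied function standing in for `tenks_update_state`):

```python
import json
import os
import tempfile


def update_state_file(path, compute_new_state):
    # Read the previous state if the file exists (the stat + include_vars
    # tasks); a missing file means an empty starting state.
    state = {}
    if os.path.exists(path):
        with open(path) as f:
            state = json.load(f)
    # Compute the new state from the old (the tenks_update_state task).
    state = compute_new_state(state)
    # Persist the result, pretty-printed (the copy task with to_nice_yaml).
    with open(path, "w") as f:
        json.dump(state, f, indent=2, sort_keys=True)
    return state


path = os.path.join(tempfile.mkdtemp(), "state.json")
bump = lambda s: dict(s, run_count=s.get("run_count", 0) + 1)
update_state_file(path, bump)          # first run: file created
final = update_state_file(path, bump)  # second run: previous state reloaded
```

Because the previous state is always reloaded before recomputation, repeated runs converge instead of clobbering earlier allocations.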
@@ -25,3 +25,6 @@

 - name: Perform deployment host deconfiguration
   import_playbook: host_setup.yml
+
+- name: Clean up Tenks state
+  import_playbook: cleanup_state.yml
@@ -3,4 +3,8 @@
 # process, which may cause wedges in the gate later.

 ansible-lint>=3.0.0 # MIT
+coverage>=4.5.1 # Apache-2.0
 flake8>=3.5.0 # MIT
+# Required for Python 2
+mock>=2.0.0 # BSD
+stestr>=1.0.0 # Apache-2.0
0	tests/__init__.py	Normal file
255	tests/test_tenks_update_state.py	Normal file
@@ -0,0 +1,255 @@
# Copyright (c) 2018 StackHPC Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from __future__ import absolute_import

import copy
import imp
import os

from ansible.errors import AnsibleActionFail
import six
import unittest


# Python 2/3 compatibility.
try:
    from unittest.mock import MagicMock
except ImportError:
    from mock import MagicMock  # noqa


# Import method lifted from kolla_ansible's test_merge_config.py
PROJECT_DIR = os.path.abspath(os.path.join(os.path.dirname(__file__), '../'))
PLUGIN_FILE = os.path.join(PROJECT_DIR,
                           'ansible/action_plugins/tenks_update_state.py')

tus = imp.load_source('tenks_update_state', PLUGIN_FILE)
class TestTenksUpdateState(unittest.TestCase):
    def setUp(self):
        # Pass dummy arguments to allow instantiation of action plugin.
        self.mod = tus.ActionModule(None, None, None, None, None, None)
        self.mod.localhost_vars = {
            'cmd': 'deploy',
            'default_ironic_driver': 'def_ir_driv',
        }

        # Minimal inputs required.
        self.node_types = {
            'type0': {
                'memory_mb': 1024,
                'vcpus': 2,
                'volumes': [
                    {
                        'capacity': '10GB',
                    },
                    {
                        'capacity': '20GB',
                    },
                ],
                'physical_networks': [
                    'physnet0',
                ],
            },
        }
        self.specs = [
            {
                'type': 'type0',
                'count': 2,
                'ironic_config': {
                    'resource_class': 'testrc',
                },
            },
        ]
        self.hypervisor_vars = {
            'foo': {
                'physnet_mappings': {
                    'physnet0': 'dev0',
                },
                'ipmi_port_range_start': 100,
                'ipmi_port_range_end': 102,
            },
        }
        self.mod.args = {
            'hypervisor_vars': self.hypervisor_vars,
            'node_types': self.node_types,
            'node_name_prefix': 'test_node_pfx',
            'specs': self.specs,
            'state': {},
            'vol_name_prefix': 'test_vol_pfx',
        }
        # Alias for brevity.
        self.args = self.mod.args
    def test__set_physnet_idxs_no_state(self):
        self.mod._set_physnet_idxs()
        expected_indices = {
            'physnet0': 0,
        }
        self.assertEqual(self.args['state']['foo']['physnet_indices'],
                         expected_indices)

    def test__set_physnet_idxs_no_state_two_hosts(self):
        self.hypervisor_vars['bar'] = self.hypervisor_vars['foo']
        self.mod._set_physnet_idxs()
        expected_indices = {
            'physnet0': 0,
        }
        for hyp in {'foo', 'bar'}:
            self.assertEqual(self.args['state'][hyp]['physnet_indices'],
                             expected_indices)
    def test_set_physnet_idxs__no_state_two_hosts_different_nets(self):
        self.hypervisor_vars['bar'] = self.hypervisor_vars['foo']
        self.hypervisor_vars['foo']['physnet_mappings'].update({
            'physnet1': 'dev1',
            'physnet2': 'dev2',
        })
        self.hypervisor_vars['bar']['physnet_mappings'].update({
            'physnet2': 'dev2',
        })
        self.mod._set_physnet_idxs()
        for host in {'foo', 'bar'}:
            idxs = list(six.itervalues(
                self.args['state'][host]['physnet_indices']))
            # Check all physnets have different IDs on the same host.
            six.assertCountEqual(self, idxs, set(idxs))
    def test_set_physnet_idxs__idx_maintained_after_removal(self):
        self.hypervisor_vars['foo']['physnet_mappings'].update({
            'physnet1': 'dev1',
        })
        self.mod._set_physnet_idxs()
        physnet1_idx = self.args['state']['foo']['physnet_indices']['physnet1']
        del self.hypervisor_vars['foo']['physnet_mappings']['physnet0']
        self.mod._set_physnet_idxs()
        self.assertEqual(
            physnet1_idx,
            self.args['state']['foo']['physnet_indices']['physnet1']
        )
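The physnet-index tests pin down the contract of `_set_physnet_idxs`: indices are unique per host and stay stable across runs, even when another physnet is removed. One way to satisfy that contract can be sketched as follows (a hypothetical helper, not the plugin's actual implementation):

```python
def assign_physnet_idxs(physnets, existing=None):
    # Reuse any index already recorded for a physnet so indices survive
    # the removal of other physnets; give new physnets the lowest index
    # not already in use.
    idxs = {pn: i for pn, i in (existing or {}).items() if pn in physnets}
    used = set(idxs.values())
    for pn in sorted(physnets):
        if pn in idxs:
            continue
        i = 0
        while i in used:
            i += 1
        idxs[pn] = i
        used.add(i)
    return idxs


first = assign_physnet_idxs({"physnet0", "physnet1"})
# Remove physnet0: physnet1 must keep the index it was given before.
second = assign_physnet_idxs({"physnet1"}, existing=first)
```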
    def _test__process_specs_no_state_create_nodes(self):
        self.mod._process_specs()
        self.assertEqual(len(self.args['state']['foo']['nodes']), 2)
        return self.args['state']['foo']['nodes']

    def test__process_specs_no_state_attrs(self):
        nodes = self._test__process_specs_no_state_create_nodes()
        for node in nodes:
            self.assertTrue(node['name'].startswith('test_node_pfx'))
            self.assertEqual(node['memory_mb'], 1024)
            self.assertEqual(node['vcpus'], 2)
            self.assertEqual(node['physical_networks'], ['physnet0'])
    def test__process_specs_no_state_ipmi_ports(self):
        nodes = self._test__process_specs_no_state_create_nodes()
        used_ipmi_ports = set()
        for node in nodes:
            self.assertGreaterEqual(
                node['ipmi_port'],
                self.hypervisor_vars['foo']['ipmi_port_range_start']
            )
            self.assertLessEqual(
                node['ipmi_port'],
                self.hypervisor_vars['foo']['ipmi_port_range_end']
            )
            self.assertNotIn(node['ipmi_port'], used_ipmi_ports)
            used_ipmi_ports.add(node['ipmi_port'])
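The IPMI port assertions above imply an allocator that hands each node a unique port from the hypervisor's configured range and fails once the range is exhausted. A hedged sketch of such an allocator (illustrative only; the plugin's own logic may differ):

```python
def allocate_ipmi_ports(num_nodes, range_start, range_end):
    # One unique port per node, all within [range_start, range_end]
    # inclusive; raise if there are more nodes than available ports.
    available = list(range(range_start, range_end + 1))
    if num_nodes > len(available):
        raise ValueError(
            "not enough IPMI ports for %d nodes" % num_nodes)
    return available[:num_nodes]


ports = allocate_ipmi_ports(2, 100, 102)
```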
    def test__process_specs_no_state_volumes(self):
        nodes = self._test__process_specs_no_state_create_nodes()
        for node in nodes:
            self.assertEqual(len(node['volumes']), 2)
            for n in {'0', '1'}:
                self.assertIn('test_vol_pfx' + n,
                              [vol['name'] for vol in node['volumes']])
            for c in {'10GB', '20GB'}:
                self.assertIn(c, [vol['capacity'] for vol in node['volumes']])

    def test__process_specs_apply_twice(self):
        self.mod._process_specs()
        created_state = copy.deepcopy(self.args['state'])
        self.mod._process_specs()
        self.assertEqual(created_state, self.args['state'])
    def test__process_specs_unnecessary_node(self):
        # Create some node definitions.
        self.mod._process_specs()

        # Add another node to the state that isn't required.
        self.args['state']['foo']['nodes'].append(copy.deepcopy(
            self.args['state']['foo']['nodes'][0]))
        self.args['state']['foo']['nodes'][-1]['vcpus'] = 42
        new_node = copy.deepcopy(self.args['state']['foo']['nodes'][-1])

        self.mod._process_specs()
        # Check that node has been marked for deletion.
        self.assertNotIn(new_node, self.args['state']['foo']['nodes'])
        new_node['state'] = 'absent'
        self.assertIn(new_node, self.args['state']['foo']['nodes'])
    def test__process_specs_teardown(self):
        # Create some node definitions.
        self.mod._process_specs()

        # After teardown, we expect all created definitions to now have an
        # 'absent' state.
        expected_state = copy.deepcopy(self.args['state'])
        for node in expected_state['foo']['nodes']:
            node['state'] = 'absent'
        self.mod.localhost_vars['cmd'] = 'teardown'

        # After one or more runs, the 'absent' state nodes should still exist,
        # since they're only removed after completion of deployment in a
        # playbook.
        for _ in six.moves.range(3):
            self.mod._process_specs()
            self.assertEqual(expected_state, self.args['state'])
    def test__process_specs_no_hypervisors(self):
        self.args['hypervisor_vars'] = {}
        self.assertRaises(AnsibleActionFail, self.mod._process_specs)

    def test__process_specs_no_hypervisors_on_physnet(self):
        self.node_types['type0']['physical_networks'].append('another_pn')
        self.assertRaises(AnsibleActionFail, self.mod._process_specs)

    def test__process_specs_one_hypervisor_on_physnet(self):
        self.node_types['type0']['physical_networks'].append('another_pn')
        self.hypervisor_vars['bar'] = copy.deepcopy(
            self.hypervisor_vars['foo'])
        self.hypervisor_vars['bar']['physnet_mappings']['another_pn'] = 'dev1'
        self.mod._process_specs()

        # Check all nodes were scheduled to the hypervisor connected to the
        # new physnet.
        self.assertEqual(len(self.args['state']['foo']['nodes']), 0)
        self.assertEqual(len(self.args['state']['bar']['nodes']), 2)

    def test__process_specs_not_enough_ports(self):
        # Give 'foo' only a single IPMI port to allocate.
        self.hypervisor_vars['foo']['ipmi_port_range_start'] = 123
        self.hypervisor_vars['foo']['ipmi_port_range_end'] = 123
        self.assertRaises(AnsibleActionFail, self.mod._process_specs)
    def test__prune_absent_nodes(self):
        # Create some node definitions.
        self.mod._process_specs()
        # Set them to be 'absent'.
        for node in self.args['state']['foo']['nodes']:
            node['state'] = 'absent'
        self.mod._prune_absent_nodes()
        # Ensure they were removed.
        self.assertEqual(self.args['state']['foo']['nodes'], [])
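`test__prune_absent_nodes` covers the last step of a teardown: once the playbooks have acted on nodes marked `'absent'`, those entries are dropped from the state. The pruning step can be sketched as (state shape assumed from the tests above):

```python
def prune_absent_nodes(state):
    # Remove node entries whose state is 'absent' from every hypervisor's
    # node list; all other entries are left untouched.
    for hyp_state in state.values():
        hyp_state["nodes"] = [n for n in hyp_state["nodes"]
                              if n.get("state") != "absent"]
    return state


state = {"foo": {"nodes": [{"name": "n0", "state": "absent"},
                           {"name": "n1", "state": "present"}]}}
prune_absent_nodes(state)
```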
17	tox.ini
@@ -1,6 +1,6 @@
 [tox]
 minversion = 2.0
-envlist = py35,py27,pep8,alint
+envlist = py35,py27,pep8,alint,cover
 skipsdist = True

 [testenv]
@@ -21,12 +21,27 @@ deps =
     -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt?h=stable/rocky}
     -r{toxinidir}/requirements.txt
     -r{toxinidir}/test-requirements.txt
+commands =
+    stestr run {posargs}

 [testenv:pep8]
 basepython = python2.7
 commands =
     flake8 {posargs}

+[testenv:cover]
+basepython = python3
+setenv =
+    VIRTUAL_ENV={envdir}
+    PYTHON=coverage run --source tenks,ansible/action_plugins --parallel-mode
+commands =
+    coverage erase
+    stestr run {posargs}
+    coverage combine
+    coverage report
+    coverage html -d cover
+    coverage xml -o cover/coverage.xml
+
 [testenv:alint]
 basepython = python2.7
 # ansible-lint doesn't support custom modules, so add ours to the Ansible path.