Merge "Add is_nest property for container_skel"
Commit fd50cbbb9d
@@ -150,6 +150,131 @@ following steps:
.. _affinity:

Adding virtual nest groups
~~~~~~~~~~~~~~~~~~~~~~~~~~

If you want to create a custom group for arbitrary grouping of hosts and
containers within these hosts, but want to skip the generation of any new
containers, use the ``is_nest`` property under ``container_skel`` and do not
define a ``belongs_to`` structure. The ``is_nest`` property adds each host's
``<host>-host_containers`` group as a child of such a group.
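
Judging from the implementation, the prefix of the group name before the
first underscore must match an existing ``<prefix>_hosts`` group, since that
is how the relevant hosts are discovered. As an illustrative sketch, an entry
``az1_containers`` with ``is_nest: True``, together with a host group
``az1_hosts`` that contains a host ``infra1``, yields an inventory fragment
roughly like:

.. code-block:: yaml

   # No new containers are generated; the existing per-host containers
   # group simply becomes a child of the nest group.
   az1_containers:
     children:
       - infra1-host_containers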
Example: Defining Availability Zones
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

A good example of how the ``is_nest`` property can be used is describing
Availability Zones. When operating multiple AZs, it is handy to define
AZ-specific variables, such as the AZ name, for all hosts in an AZ.
Leveraging group_vars is the best way to ensure that all hosts that belong
to the same AZ have the same configuration applied.

Let's assume you have 3 controllers, each of them placed in a different
Availability Zone, plus a compute node in each Availability Zone. We want
each host or container that is physically placed in a specific AZ to be
part of its own group (i.e. ``azN_all``).

In order to achieve that, we need to:

#. Define host groups in ``conf.d`` or ``openstack_user_config.yml`` to
   assign hosts to their respective Availability Zones:

   .. code-block:: yaml

      az1-infra_hosts: &infra_az1
        az1-infra1:
          ip: 172.39.123.11

      az2-infra_hosts: &infra_az2
        az2-infra2:
          ip: 172.39.123.12

      az3-infra_hosts: &infra_az3
        az3-infra3:
          ip: 172.39.123.13

      shared-infra_hosts: &controllers
        <<: *infra_az1
        <<: *infra_az2
        <<: *infra_az3

      az1-compute_hosts: &computes_az1
        az1-compute01:
          ip: 172.39.123.100

      az2-compute_hosts: &computes_az2
        az2-compute01:
          ip: 172.39.123.150

      az3-compute_hosts: &computes_az3
        az3-compute01:
          ip: 172.39.123.200

      compute_hosts:
        <<: *computes_az1
        <<: *computes_az2
        <<: *computes_az3

      az1_hosts:
        <<: *computes_az1
        <<: *infra_az1

      az2_hosts:
        <<: *computes_az2
        <<: *infra_az2

      az3_hosts:
        <<: *computes_az3
        <<: *infra_az3
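
   The ``&``/``*``/``<<`` syntax above uses plain YAML anchors, aliases, and
   merge keys. Repeated ``<<`` entries in one mapping are accepted by PyYAML,
   which the dynamic inventory uses to load this file, although stricter YAML
   parsers may reject them as duplicate keys. As a sketch, the merges make
   ``az1_hosts`` above resolve to:

   .. code-block:: yaml

      az1_hosts:
        az1-compute01:
          ip: 172.39.123.100
        az1-infra1:
          ip: 172.39.123.11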

#. Create an ``env.d/az.yml`` file that leverages the ``is_nest`` property
   and allows all infra containers to be part of the AZ group as well:

   .. code-block:: yaml

      component_skel:
        az1_containers:
          belongs_to:
            - az1_all
        az1_hosts:
          belongs_to:
            - az1_all

        az2_containers:
          belongs_to:
            - az2_all
        az2_hosts:
          belongs_to:
            - az2_all

        az3_containers:
          belongs_to:
            - az3_all
        az3_hosts:
          belongs_to:
            - az3_all

      container_skel:
        az1_containers:
          properties:
            is_nest: True
        az2_containers:
          properties:
            is_nest: True
        az3_containers:
          properties:
            is_nest: True

#. Now you can leverage a group_vars file to apply a variable to all
   containers and bare metal hosts in the AZ.
   For example, ``/etc/openstack_deploy/group_vars/az1_all.yml``:

   .. code-block:: yaml

      ---
      az_name: az1
      cinder_storage_availability_zone: "{{ az_name }}"
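
As a quick check that the variable is applied (a minimal sketch, assuming
the inventory has been generated), a throwaway play like the following
should report ``az1`` for every member of ``az1_all``:

.. code-block:: yaml

   # Hypothetical verification play, not part of the deployment itself.
   - hosts: az1_all
     gather_facts: false
     tasks:
       - name: Show the AZ name applied through group_vars
         debug:
           var: az_name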

Deploying 0 (or more than one) of a component type per host
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

@@ -143,6 +143,22 @@ infrastructure hosts. To achieve this, implement

.. literalinclude:: ../../../../etc/openstack_deploy/env.d/cinder-volume.yml.container.example

You can also declare a custom group for each pod that will also include all
containers from hosts that belong to this pod. This might be handy if you
want to define some variable for all hosts in the pod using group_vars.

For that, create ``/etc/openstack_deploy/env.d/pods.yml`` with the following
content:

.. literalinclude:: ../../../../etc/openstack_deploy/env.d/pods.yml.example

The above example will create the following groups:

* ``podN_hosts``, which contain only the bare metal nodes of each pod
* ``podN_containers``, which contain all containers spawned on the bare
  metal nodes that are part of the pod
* ``podN_all``, which contain the members of both ``podN_hosts`` and
  ``podN_containers``
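
Schematically, and using the names from the example files, the resulting
group tree for ``pod1`` is sketched below:

.. code-block:: yaml

   # Illustrative only: assumes pod1_hosts contains infra1 and log1,
   # as in the example user configuration.
   pod1_all:
     children:
       - pod1_hosts        # bare metal nodes: infra1, log1
       - pod1_containers   # children: infra1-host_containers, log1-host_containers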

User variables
--------------

etc/openstack_deploy/env.d/pods.yml.example (new file, 44 lines)

@@ -0,0 +1,44 @@
---

component_skel:
  pod1_containers:
    belongs_to:
      - pod1_all
  pod1_hosts:
    belongs_to:
      - pod1_all

  pod2_containers:
    belongs_to:
      - pod2_all
  pod2_hosts:
    belongs_to:
      - pod2_all

  pod3_containers:
    belongs_to:
      - pod3_all
  pod3_hosts:
    belongs_to:
      - pod3_all

  pod4_containers:
    belongs_to:
      - pod4_all
  pod4_hosts:
    belongs_to:
      - pod4_all

container_skel:
  pod1_containers:
    properties:
      is_nest: True
  pod2_containers:
    properties:
      is_nest: True
  pod3_containers:
    properties:
      is_nest: True
  pod4_containers:
    properties:
      is_nest: True
@@ -270,76 +270,47 @@ global_overrides:
 ###
 ### Infrastructure
 ###

-pod1_hosts:
+pod1_hosts: &pod1
   infra1:
     ip: 172.29.236.10
   log1:
     ip: 172.29.236.11

-pod2_hosts:
+pod2_hosts: &pod2
   infra2:
     ip: 172.29.239.10

-pod3_hosts:
+pod3_hosts: &pod3
   infra3:
     ip: 172.29.242.10

-pod4_hosts:
+pod4_hosts: &pod4
   compute1:
     ip: 172.29.245.10
   compute2:
     ip: 172.29.245.11

 # galera, memcache, rabbitmq, utility
-shared-infra_hosts:
-  infra1:
-    ip: 172.29.236.10
-  infra2:
-    ip: 172.29.239.10
-  infra3:
-    ip: 172.29.242.10
+shared-infra_hosts: &controllers
+  <<: *pod1
+  <<: *pod2
+  <<: *pod3

 # repository (apt cache, python packages, etc)
-repo-infra_hosts:
-  infra1:
-    ip: 172.29.236.10
-  infra2:
-    ip: 172.29.239.10
-  infra3:
-    ip: 172.29.242.10
+repo-infra_hosts: *controllers

 # load balancer
 # Ideally the load balancer should not use the Infrastructure hosts.
 # Dedicated hardware is best for improved performance and security.
-haproxy_hosts:
-  infra1:
-    ip: 172.29.236.10
-  infra2:
-    ip: 172.29.239.10
-  infra3:
-    ip: 172.29.242.10
+haproxy_hosts: *controllers

 ###
 ### OpenStack
 ###

 # keystone
-identity_hosts:
-  infra1:
-    ip: 172.29.236.10
-  infra2:
-    ip: 172.29.239.10
-  infra3:
-    ip: 172.29.242.10
+identity_hosts: *controllers

 # cinder api services
-storage-infra_hosts:
-  infra1:
-    ip: 172.29.236.10
-  infra2:
-    ip: 172.29.239.10
-  infra3:
-    ip: 172.29.242.10
+storage-infra_hosts: *controllers

 # glance
 # The settings here are repeated for each infra host.

@@ -376,81 +347,30 @@ image_hosts:
           options: "_netdev,auto"

 # nova api, conductor, etc services
-compute-infra_hosts:
-  infra1:
-    ip: 172.29.236.10
-  infra2:
-    ip: 172.29.239.10
-  infra3:
-    ip: 172.29.242.10
+compute-infra_hosts: *controllers

 # heat
-orchestration_hosts:
-  infra1:
-    ip: 172.29.236.10
-  infra2:
-    ip: 172.29.239.10
-  infra3:
-    ip: 172.29.242.10
+orchestration_hosts: *controllers

 # horizon
-dashboard_hosts:
-  infra1:
-    ip: 172.29.236.10
-  infra2:
-    ip: 172.29.239.10
-  infra3:
-    ip: 172.29.242.10
+dashboard_hosts: *controllers

 # neutron server, agents (L3, etc)
-network_hosts:
-  infra1:
-    ip: 172.29.236.10
-  infra2:
-    ip: 172.29.239.10
-  infra3:
-    ip: 172.29.242.10
+network_hosts: *controllers

 # ceilometer (telemetry data collection)
-metering-infra_hosts:
-  infra1:
-    ip: 172.29.236.10
-  infra2:
-    ip: 172.29.239.10
-  infra3:
-    ip: 172.29.242.10
+metering-infra_hosts: *controllers

 # aodh (telemetry alarm service)
-metering-alarm_hosts:
-  infra1:
-    ip: 172.29.236.10
-  infra2:
-    ip: 172.29.239.10
-  infra3:
-    ip: 172.29.242.10
+metering-alarm_hosts: *controllers

 # gnocchi (telemetry metrics storage)
-metrics_hosts:
-  infra1:
-    ip: 172.29.236.10
-  infra2:
-    ip: 172.29.239.10
-  infra3:
-    ip: 172.29.242.10
+metrics_hosts: *controllers

 # nova hypervisors
-compute_hosts:
-  compute1:
-    ip: 172.29.245.10
-  compute2:
-    ip: 172.29.245.11
+compute_hosts: *pod4

 # ceilometer compute agent (telemetry data collection)
-metering-compute_hosts:
-  compute1:
-    ip: 172.29.245.10
-  compute2:
-    ip: 172.29.245.11
+metering-compute_hosts: *pod4

 # cinder volume hosts (NFS-backed)
 # The settings here are repeated for each infra host.
@@ -712,8 +712,9 @@ def container_skel_load(container_skel, inventory, config):
     logger.debug("Loading container skeleton")

     for key, value in container_skel.items():
-        contains_in = value.get('contains', False)
-        belongs_to_in = value.get('belongs_to', False)
+        contains_in = value.get('contains', list())
+        belongs_to_in = value.get('belongs_to', list())
+        properties = value.get('properties', {})

         if belongs_to_in:
             _parse_belongs_to(
@@ -721,18 +722,25 @@ def container_skel_load(container_skel, inventory, config):
                 belongs_to=value['belongs_to'],
                 inventory=inventory
             )
+        if properties.get('is_nest', False):
+            physical_host_type = '{}_hosts'.format(key.split('_')[0])
+            for host_type in inventory[physical_host_type]['hosts']:
+                container_mapping = inventory[key]['children']
+                host_type_containers = '{}-host_containers'.format(host_type)
+                if host_type_containers in inventory:
+                    du.append_if(array=container_mapping,
+                                 item=host_type_containers)

-        if contains_in or belongs_to_in:
-            for assignment in value['contains']:
-                for container_type in value['belongs_to']:
-                    _add_container_hosts(
-                        assignment,
-                        config,
-                        key,
-                        container_type,
-                        inventory,
-                        value.get('properties', {})
-                    )
+        for assignment in contains_in:
+            for container_type in belongs_to_in:
+                _add_container_hosts(
+                    assignment,
+                    config,
+                    key,
+                    container_type,
+                    inventory,
+                    properties
+                )

     cidr_networks = config.get('cidr_networks')
     provider_queues = {}
@@ -1559,5 +1559,62 @@ class TestL3ProviderNetworkConfig(TestConfigCheckBase):
         self.assertNotIn('management_address', aio1_container_networks)


+class TestNestsGroups(TestConfigCheckBase):
+    def setUp(self):
+        super(TestNestsGroups, self).setUp()
+        self.nest_env_path = path.join(TARGET_DIR, 'env.d/az.yml')
+        self._create_nest_env()
+        self.add_config_key('nest_hosts', {})
+        self.add_host('nest_hosts', 'aio1', '172.29.236.100')
+        self.add_host('nest_hosts', 'aio2', '172.29.236.101')
+        self.add_host('compute_hosts', 'aio2', '172.29.236.101')
+        self.write_config()
+        self.inventory = get_inventory()
+
+    def tearDown(self):
+        os.remove(self.nest_env_path)
+        os.rmdir(os.path.dirname(self.nest_env_path))
+
+    def _create_nest_env(self):
+        data = """
+            component_skel:
+              nest_containers:
+                belongs_to:
+                  - nest_all
+              nest_hosts:
+                belongs_to:
+                  - nest_all
+
+            container_skel:
+              nest_containers:
+                properties:
+                  is_nest: True
+        """
+        env = yaml.safe_load(data)
+        os.mkdir(os.path.dirname(self.nest_env_path))
+        with open(self.nest_env_path, 'w') as f:
+            f.write(yaml.safe_dump(env))
+
+    def test_nest_all_childrens(self):
+        nest_expected_children = set(['nest_containers', 'nest_hosts'])
+        nest_children = set(self.inventory['nest_all']['children'])
+        self.assertEqual(nest_expected_children, nest_children)
+
+    def test_nest_hosts(self):
+        nest_hosts_expected = set(['aio1', 'aio2'])
+        nest_hosts = set(self.inventory['nest_hosts']['hosts'])
+        self.assertEqual(nest_hosts_expected, nest_hosts)
+
+    def test_nest_containers(self):
+        host_containers_group = 'aio1-host_containers'
+        nest_containers_expected = set([host_containers_group])
+        nest_containers = set(self.inventory['nest_containers']['children'])
+        # Ensure the host-containers group is the only child
+        self.assertEqual(nest_containers_expected, nest_containers)
+        # Ensure that the host-containers group was generated
+        self.assertIn(host_containers_group, self.inventory)
+        # Ensure that the host-containers group is not empty
+        self.assertTrue(len(self.inventory[host_containers_group]['hosts']) > 0)
+
+
 if __name__ == '__main__':
     unittest.main(catchbreak=True)