2728dec51a
This uses ironic's conductor group feature to limit the subset of nodes
which a nova-compute service will manage. This allows for partitioning
nova-compute services to a particular location (building, aisle, rack,
etc.), and provides a way for operators to manage the failure domain of a
given nova-compute service.

This adds two config options to the [ironic] config group:

* partition_key: the key to match with the conductor_group property on
  ironic nodes to limit the subset of nodes which can be managed.
* peer_list: a list of other compute service hostnames which manage the
  same partition_key, used when building the hash ring.

Change-Id: I1b184ff37948dc403fe38874613cd4d870c644fd
Implements: blueprint ironic-conductor-groups
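The two options land in the ``[ironic]`` section of ``nova.conf``. A minimal sketch of such a deployment, assuming a conductor group named ``rack1`` and hostnames ``compute-rack1-a``/``compute-rack1-b`` (both illustrative, not from the source):

```ini
[ironic]
# Only manage ironic nodes whose conductor_group property matches this key.
partition_key = rack1
# Hostnames of the other compute services managing the same partition_key;
# together with this host they are used to build the hash ring.
peer_list = compute-rack1-a,compute-rack1-b
```

Each compute service in the peer list would carry the same ``partition_key`` and a ``peer_list`` naming the full set of peers, so all of them build an identical hash ring over the ``rack1`` nodes.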
---
features:
  - |
    In deployments with Ironic, adds the ability for compute services to manage
    a subset of Ironic nodes. If the ``[ironic]/partition_key`` configuration
    option is set, the compute service will only consider nodes with a matching
    ``conductor_group`` attribute for management. Setting the
    ``[ironic]/peer_list`` configuration option allows this subset of nodes to
    be distributed among the compute services specified to further reduce
    failure domain. This feature is useful to co-locate nova-compute services
    with ironic-conductor services managing the same nodes, or to better
    control failure domain of a given compute service.