08d7be1726
When people transition from three ironic nova-compute processes down to one process, we need a way to move the ironic nodes, and any associated instances, between nova-compute processes.

For safety, a nova-compute process must first be forced_down via the API, similar to when using evacuate, before moving the associated ironic nodes to another nova-compute process. The destination nova-compute process should ideally not be running yet, but it should not be forced down either.

blueprint ironic-shards

Change-Id: I33034ec77b033752797bd679c6e61cef5af0a18f
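As a rough sketch of that workflow (the host names and node placeholder below are illustrative, and the exact nova-manage argument names are an assumption; verify them with nova-manage db ironic_compute_node_move --help):

    # Force down the source service via the compute API, as with evacuate.
    openstack compute service set --down ironic-compute-old nova-compute

    # Move an ironic node (and any associated instance) to the destination
    # nova-compute service; argument names are assumed, check --help.
    nova-manage db ironic_compute_node_move \
        --ironic-node-uuid <ironic-node-uuid> \
        --destination-host ironic-compute-new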
---
features:
  - |
    Ironic nova-compute services can now target a specific shard of ironic
    nodes by setting the config ``[ironic]shard``.
    This is particularly useful when using active-passive methods to choose
    on which physical host your ironic nova-compute process is running,
    while ensuring ``[DEFAULT]host`` stays the same for each shard.
    You can use this alongside ``[ironic]conductor_group`` to further limit
    which ironic nodes are managed by each nova-compute service.
    Note that when you use ``[ironic]shard`` the ``[ironic]peer_list``
    is hard coded to a single nova-compute service.

    There is a new nova-manage command ``db ironic_compute_node_move`` that
    can be used to move ironic nodes, and the associated instances, between
    nova-compute services. This is useful when migrating from the legacy
    hash ring based HA towards the new sharding approach.
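For illustration, a nova.conf fragment for one such sharded service might look like the following; the host, shard, and conductor group values are placeholders, not values taken from this change:

    [DEFAULT]
    # Keep the service name stable for this shard, regardless of which
    # physical host currently runs the nova-compute process.
    host = ironic-shard-a

    [ironic]
    # Only manage ironic nodes assigned to this shard...
    shard = shard-a
    # ...and optionally narrow further to a single conductor group.
    conductor_group = rack-1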