cbf400df1d
As part of the move to using Ironic shards, we document that the best practice for scaling Ironic and Nova deployments is to shard Ironic nodes between nova-compute processes, rather than attempting to use the peer_list.

Currently, we only allow users to do this using conductor groups. This works well for those wanting a conductor group per L2 network domain. But in general, a conductor group per nova-compute process is a very poor trade-off in terms of Ironic deployment complexity. Further patches will look to enable the use of Ironic shards, alongside conductor groups, to more easily shard your Ironic nodes between nova-compute processes.

To avoid confusion, we rename the partition_key configuration value to conductor_group.

blueprint ironic-shards

Change-Id: Ia2e23a59dbd2f13c6f74ca975c249751bebf54b2
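As a minimal sketch of the renamed option, a nova-compute process using the ironic driver would now set ``[ironic]conductor_group`` where it previously set ``[ironic]partition_key``; the group value "rack1" below is a hypothetical example, not something defined by this change:

    # nova.conf for an ironic nova-compute process
    # (this option was previously named [ironic]partition_key)
    [ironic]
    # Only manage the Ironic nodes in this conductor group (example value).
    conductor_group = rack1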
---
deprecations:
  - |
    We have renamed ``[ironic]partition_key`` to ``[ironic]conductor_group``.
    The config option is still used to specify which Ironic conductor group
    the ironic driver in the nova compute process should target.
  - |
    We have deprecated the configuration ``[ironic]peer_list``, along with
    our support for a group of ironic nova-compute processes targeting
    a shared set of Ironic nodes.
    There are so many bugs in this support that we now prefer statically
    sharding the nodes between multiple nova-compute processes.
    Note that the ironic nova-compute process is stateless, and the
    identity of the service is defined by the config option ``[DEFAULT]host``.
    As such, you can use an active-passive HA solution to ensure at most
    one nova-compute process is running for each Ironic node shard.
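To illustrate the static sharding described in the note above, each nova-compute process gets a stable service identity via ``[DEFAULT]host`` and targets its own conductor group; the host and group names below are hypothetical examples, and in practice each process would sit behind an active-passive HA mechanism of your choice so at most one instance runs per shard:

    # nova.conf for the first ironic nova-compute process
    [DEFAULT]
    # Stable service identity; keep it fixed so a passive standby can take over.
    host = ironic-compute-shard-0
    [ironic]
    conductor_group = shard0

    # nova.conf for the second ironic nova-compute process
    [DEFAULT]
    host = ironic-compute-shard-1
    [ironic]
    conductor_group = shard1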