nova/releasenotes/notes/ironic-driver-hash-ring-7d763d87b9236e5d.yaml
Jim Rollenhagen 6047d790a3 Ironic: allow multiple compute services
This lifts some hash ring code from ironic (to be put into oslo
soon), which is used to consistently hash ironic nodes among
multiple nova-compute services. The hash ring is used within the
driver itself, and is refreshed at each resource tracker run.
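
As a rough illustration only (this is not the lifted ironic code;
the class, helper names, and replica count are made up for this
sketch), a consistent hash ring in Python looks something like:

    import bisect
    import hashlib

    def _hash(key):
        # Map an arbitrary string onto the ring's keyspace.
        return int(hashlib.md5(key.encode('utf-8')).hexdigest(), 16)

    class HashRing(object):
        """Illustrative only; the real ring lives in the ironic driver."""

        def __init__(self, hosts, replicas=32):
            # Place several points per host so nodes spread evenly.
            self._ring = {}
            for host in hosts:
                for i in range(replicas):
                    self._ring[_hash('%s-%d' % (host, i))] = host
            self._sorted_keys = sorted(self._ring)

        def get_host(self, node_uuid):
            # Walk clockwise from the node's hash to the next host point;
            # a host joining or leaving only moves the nodes nearest its
            # own points, which is what makes the hashing "consistent".
            position = bisect.bisect(self._sorted_keys, _hash(node_uuid))
            position %= len(self._sorted_keys)
            return self._ring[self._sorted_keys[position]]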

get_available_nodes() will now return a subset of nodes,
determined by the following rules (see the sketch after the list):

* any node with an instance managed by the compute service
* any node that is mapped to the compute service on the hash ring
* no nodes with instances managed by another compute service
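
Roughly, those rules reduce to a per-node check like the following
(the helper and its arguments are hypothetical, reusing the HashRing
sketch above, not the driver's actual code):

    def node_is_mine(node_uuid, my_host, ring, instance_owner_by_node):
        # instance_owner_by_node maps node UUID -> host of the compute
        # service that manages an instance on that node (if any).
        owner = instance_owner_by_node.get(node_uuid)
        if owner is not None:
            # Nodes with instances stay with the service that owns the
            # instance, no matter what the hash ring says.
            return owner == my_host
        # Unoccupied nodes are assigned purely by the hash ring.
        return ring.get_host(node_uuid) == my_host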

The virt driver finds all compute services that are running the
ironic driver by joining the services table and the compute_nodes
table. Since there won't be any records in the compute_nodes table
for a service that is starting for the first time, the virt driver
also adds its own compute service to this list. The hostnames of
these services are then used to instantiate the hash ring.
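
In simplified form (the real driver gets the host list from that
services/compute_nodes join; the function below and its arguments
are illustrative), assembling the ring amounts to:

    def build_ring_hosts(driver_service_hosts, my_host):
        # driver_service_hosts stands in for the hostnames found via
        # the services/compute_nodes join described above.
        hosts = set(driver_service_hosts)
        # A service starting for the first time has no compute_nodes
        # records yet, so always include our own host.
        hosts.add(my_host)
        return sorted(hosts)

    ring = HashRing(build_ring_hosts(['compute-1', 'compute-2'], 'compute-3'))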

As nova-compute services are brought up or down, the ring will
re-balance. It's important to note that this re-balance does not
occur at the same time on all compute services, so for some amount
of time, an ironic node may be managed by more than one compute
service. In other words, there may be two compute_nodes records
for a single ironic node, with a different host value. For
scheduling purposes, this is okay, because either compute service
is capable of actually spawning an instance on the node (because the
ironic service doesn't know about this hashing). This will cause
capacity reporting (e.g. nova hypervisor-stats) to over-report
capacity during this window. Once all compute services in the cluster
have done a resource tracker run and re-balanced the hash ring,
this will be back to normal.

It's also important to note that, due to the way nodes with instances
are handled, if an instance is deleted while the compute service is
down, that node will be removed from the compute_nodes table when
the service comes back up (as each service will see an instance on
the node object, and assume another compute service manages that
instance). The ironic node will remain active and orphaned. Once
the periodic task to reap deleted instances runs, the ironic node
will be torn down and the node will again be reported in the
compute_nodes table.

It's all very eventually consistent, with a potentially long time
to eventual.

There's no configuration to enable this mode; it's always running.
The code is exercised (though only its simple path) when running with
one compute service; spinning up more invokes the hard bits. As such,
the release note for this change clarifies that this feature
is new and untested for running with multiple compute services.

Implements: blueprint ironic-multiple-compute-hosts
Change-Id: I852f62b29f1faedf7ff19b42bbfb966f61d95c6e
2016-08-04 23:51:13 +00:00

---
features:
  - |
    Adds a new feature to the ironic virt driver, which allows
    multiple nova-compute services to be run simultaneously. This uses
    consistent hashing to divide the ironic nodes between the nova-compute
    services, with the hash ring being refreshed each time the resource
    tracker runs.

    Note that instances will still be owned by the same nova-compute
    service for the entire life of the instance, and so the ironic node
    that instance is on will also be managed by the same nova-compute
    service until the node is deleted. This also means that removing a
    nova-compute service will leave instances managed by that service
    orphaned, and as such most instance actions will not work until a
    nova-compute service with the same hostname is brought (back) online.

    When nova-compute services are brought up or down, the ring will
    eventually re-balance (when the resource tracker runs on each
    compute). This may result in duplicate compute_node entries for
    ironic nodes while the nova-compute service pool is re-balancing.
    However, because any nova-compute service running the ironic virt
    driver can manage any ironic node, a build request will still
    succeed even if it goes to a compute service that is not currently
    managing the node it is building on.

    There is no configuration needed to enable this feature; it is
    always enabled. There are no major changes when only one compute
    service is running; the bigger changes come into play once more
    compute services are brought online.

    Note that this is tested when running with only one nova-compute
    service, but not with more than one. As such, it should be used with
    caution for multiple compute hosts until it is properly tested in CI.