Gorka Eguileor efa8e210c9 Allow triggering cleanup from API
Now that we support having multiple c-vol services using the same
storage backend under one cluster, those services no longer clean up
all of the backend's resources that have an ongoing status in the DB;
each one only cleans the resources from its own host, because those
are the failed operations that were left "in the air" when the service
was stopped.  So we need a way to trigger the cleanup of resources
that were being processed by another c-vol service in the same cluster
that failed.

This patch adds a new API endpoint (/workers/cleanup), exposed as of
microversion 3.19, that triggers cleanup for c-vol services.
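
A minimal sketch of calling the new endpoint with a plain HTTP client
(the endpoint URL, project id, and token are placeholders, not part of
this patch):

    import requests

    # Assumed deployment values; replace with a real endpoint and token.
    CINDER_ENDPOINT = "http://controller:8776/v3/<project_id>"
    TOKEN = "<keystone-token>"

    resp = requests.post(
        CINDER_ENDPOINT + "/workers/cleanup",
        headers={
            "X-Auth-Token": TOKEN,
            # Request the microversion that exposes this endpoint.
            "OpenStack-API-Version": "volume 3.19",
        },
        json={},  # empty body: clean everything that is down
    )
    print(resp.status_code, resp.json())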

The cleanup will be performed by other services that share the same
cluster, so at least one of them must be up for the cleanup to take
place.

Cleanup cannot be triggered during a cloud upgrade, but a restarted
service will still clean up its own resources during an upgrade.

If no arguments are provided, cleanup will try to issue a cleanup
message for all nodes that are down, but we can restrict which nodes
we want to be cleaned using the `service_id`, `cluster_name`, `host`,
`binary`, and `disabled` parameters.

Cleaning specific resources is also possible using `resource_type` and
`resource_id` parameters.
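
As an illustration, request bodies could look like these sketches
(the cluster name and resource UUID are placeholders):

    # Clean everything that is down in a given cluster.
    body_cluster = {"cluster_name": "mycluster@lvmdriver"}

    # Clean a single resource; resource_type would be e.g. "Volume" or
    # "Snapshot" and resource_id the UUID of that resource.
    body_resource = {
        "resource_type": "Volume",
        "resource_id": "<volume-uuid>",
    }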

We can even force cleanup on nodes that are up with `is_up`, but that
is not recommended and should only be used if you know what you are
doing.  For example, use it if you know a specific cinder-volume
service is down even though it is not yet being reported as down when
listing the services, and you know the cluster has at least one other
service to do the cleanup.

The API will return a dictionary with two lists: one with the services
that have been issued a cleanup request (the `cleaning` key) and
another with the services that cannot be cleaned right now because
there is no alternative service in their cluster to do the cleanup
(the `unavailable` key).

The data returned for each service element in these two lists consists
of the `id`, `host`, `binary`, and `cluster_name`.  These are not the
services that will be performing the cleanup, but the services that
will be cleaned up or couldn't be cleaned up.
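
As a purely illustrative sketch, a response body could look like this
(all values made up):

    {
        "cleaning": [
            {"id": 1, "host": "node1@lvmdriver",
             "binary": "cinder-volume",
             "cluster_name": "mycluster@lvmdriver"}
        ],
        "unavailable": [
            {"id": 2, "host": "node2@lvmdriver",
             "binary": "cinder-volume",
             "cluster_name": "lonely@lvmdriver"}
        ]
    }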

Specs: https://specs.openstack.org/openstack/cinder-specs/specs/newton/ha-aa-cleanup.html

APIImpact: New /workers/cleanup entry
Implements: blueprint cinder-volume-active-active-support
Change-Id: If336b6569b171846954ed6eb73f5a4314c6c7e2e
2017-01-13 14:34:45 +01:00
rootwrap.d Fix secondary lvm cmds rootwrap filters 2016-12-02 04:11:53 +00:00
README-cinder.conf.sample Remove the cinder.conf.sample file 2014-12-07 23:09:36 +08:00
api-httpd.conf Add Cinder API wsgi application 2015-08-25 13:48:03 +03:00
api-paste.ini Use oslo_middleware sizelimit 2016-06-09 07:29:51 -04:00
logging_sample.conf Move logging sample to use oslo_log 2015-04-30 15:04:48 +00:00
policy.json Allow triggering cleanup from API 2017-01-13 14:34:45 +01:00
rootwrap.conf Add iSCSI SCST Target support to cinder 2015-02-13 00:52:11 +05:30