efa8e210c9
Now that we support having multiple c-vol services using the same storage backend under one cluster, they no longer clean all resources from the backend with ongoing statuses in the DB, only those from their own host, because those are failed operations that were left "in the air" when the service was stopped. So we need a way to trigger the cleanup of resources that were being processed by another c-vol service that failed in the same cluster.

This patch adds a new API endpoint (/workers/cleanup) that will trigger cleanup for c-vol services, as microversion 3.19. The cleanup will be performed by other services that share the same cluster, so at least one of them must be up to be able to do the cleanup. Cleanup cannot be triggered during a cloud upgrade, but a restarted service will still clean up its own resources during an upgrade.

If no arguments are provided, cleanup will try to issue a clean message for all nodes that are down, but we can restrict which nodes we want to be cleaned using the parameters `service_id`, `cluster_name`, `host`, `binary`, and `disabled`. Cleaning specific resources is also possible using the `resource_type` and `resource_id` parameters. We can even force cleanup on nodes that are up with `is_up`, but that is not recommended and should only be used if you know what you are doing; for example, if you know a specific cinder-volume is down even though it is not yet being reported as down when listing the services, and you know the cluster has at least one other service to do the cleanup.

The API will return a dictionary with 2 lists: one with services that have been issued a cleanup request (`cleaning` key) and another with services that cannot be cleaned right now because there is no alternative service to do the cleanup in that cluster (`unavailable` key). The data returned for each service element in these two lists consists of the `id`, `host`, `binary`, and `cluster_name`. Note that these are not the services that will be performing the cleanup, but the services that will be cleaned up or couldn't be cleaned up.

Specs: https://specs.openstack.org/openstack/cinder-specs/specs/newton/ha-aa-cleanup.html

APIImpact: New /workers/cleanup entry
Implements: blueprint cinder-volume-active-active-support
Change-Id: If336b6569b171846954ed6eb73f5a4314c6c7e2e
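As a sketch of the request and response shapes described above (all concrete values are illustrative, not from a live cloud; the parameter and key names come from the commit message):

```python
# Hypothetical /workers/cleanup request body: optional filters restricting
# which services get cleaned. An empty body targets every node that is down.
request_body = {
    'cluster_name': 'mycluster@lvm',
    'binary': 'cinder-volume',
    'is_up': False,            # forcing cleanup of "up" services is risky
    'resource_type': 'Volume',
}

# The response carries two lists: services for which a cleanup request was
# issued, and services with no alternative service in their cluster to do it.
response = {
    'cleaning': [{'id': 1, 'host': 'node1@lvm', 'binary': 'cinder-volume',
                  'cluster_name': 'mycluster@lvm'}],
    'unavailable': [{'id': 2, 'host': 'node2@ceph', 'binary': 'cinder-volume',
                     'cluster_name': 'othercluster@ceph'}],
}
```

Remember that the elements listed under both keys are the services being cleaned (or not cleanable), not the services performing the cleanup.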
26 lines
934 B
Python
# Copyright (c) 2016 Red Hat Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.


class ViewBuilder(object):
    """Map Cluster into dicts for API responses."""

    _collection_name = 'workers'

    @classmethod
    def service_list(cls, services):
        return [{'id': s.id, 'host': s.host, 'binary': s.binary,
                 'cluster_name': s.cluster_name} for s in services]
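A minimal standalone sketch of how `service_list` is used (the `ViewBuilder` class is repeated here so the example runs on its own; `SimpleNamespace` stands in for Cinder's Service objects, which is an assumption for illustration):

```python
from types import SimpleNamespace


class ViewBuilder(object):
    """Same view builder as above, repeated so this sketch is standalone."""

    _collection_name = 'workers'

    @classmethod
    def service_list(cls, services):
        # Project each service down to the four fields the API exposes.
        return [{'id': s.id, 'host': s.host, 'binary': s.binary,
                 'cluster_name': s.cluster_name} for s in services]


# Only the four attributes the view reads need to exist on each element.
services = [
    SimpleNamespace(id=1, host='node1@lvm', binary='cinder-volume',
                    cluster_name='mycluster@lvm'),
]

views = ViewBuilder.service_list(services)
print(views[0]['host'])  # node1@lvm
```

This is the shape of each element in the `cleaning` and `unavailable` lists of the API response: a plain dict with exactly the `id`, `host`, `binary`, and `cluster_name` keys.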