Move OpenStack cleanup utils out of context

These will be used to do out-of-band cleanup in the near future, so they
are no longer specific to the cleanup context.

Change-Id: I5b5c7c89b04a080c487936cc3df0df24929ced31
Implements: blueprint cleanup-refactoring

parent: cc5794c395
commit: 8a9dc0558c
@@ -319,7 +319,7 @@ Consequently, the algorithm of initiating the contexts can be roughly seen as fo

 where the order of contexts in which they are set up depends on the value of their *order* attribute. Contexts with lower *order* have higher priority: *1xx* contexts are reserved for users-related stuff (e.g. users/tenants creation, roles assignment etc.), *2xx* - for quotas etc.

-The *hidden* attribute defines whether the context should be a *hidden* one. **Hidden contexts** cannot be configured by end-users through the task configuration file as shown above, but should be specified by a benchmark scenario developer through a special *@base.scenario(context={...})* decorator. Hidden contexts are typically needed to satisfy some specific benchmark scenario-specific needs, which don't require the end-user's attention. For example, the hidden **"cleanup" context** (:mod:`rally.plugins.openstack.context.cleanup.context`) is used to make generic cleanup after running benchmark. So user can't change
+The *hidden* attribute defines whether the context should be a *hidden* one. **Hidden contexts** cannot be configured by end-users through the task configuration file as shown above, but should be specified by a benchmark scenario developer through a special *@base.scenario(context={...})* decorator. Hidden contexts are typically needed to satisfy some specific benchmark scenario-specific needs, which don't require the end-user's attention. For example, the hidden **"cleanup" context** (:mod:`rally.plugins.openstack.cleanup`) is used to make generic cleanup after running benchmark. So user can't change
 it configuration via task and break his cloud.

 If you want to dive deeper, also see the context manager (:mod:`rally.task.context`) class that actually implements the algorithm described above.
@@ -94,27 +94,27 @@ cleanup process, but it demonstrates the basic idea:

 A fair bit of functionality will need to be added to support this:

-* ``rally.plugins.openstack.context.cleanup.manager.cleanup()`` will
+* ``rally.plugins.openstack.cleanup.manager.cleanup()`` will
   need to accept a keyword argument specifying the type of
   cleanup. This should be a superclass that will be used to discover
   the subclasses to delete resources for. It will be passed to
-  ``rally.plugins.openstack.context.cleanup.manager.SeekAndDestroy``,
+  ``rally.plugins.openstack.cleanup.manager.SeekAndDestroy``,
   which will also need to accept the argument and generate the list of
   classes.
-* ``rally.plugins.openstack.context.cleanup.base``,
-  ``rally.plugins.openstack.context.cleanup.manager`` and
-  ``rally.plugins.openstack.context.cleanup.resources`` need to be
+* ``rally.plugins.openstack.cleanup.base``,
+  ``rally.plugins.openstack.cleanup.manager`` and
+  ``rally.plugins.openstack.cleanup.resources`` need to be
   moved out of the context space, since they will be used not only by
   the cleanup context to do scenario cleanup, but also to do
   out-of-band cleanup of all resources.
 * A new function, ``name()``, will need to be added to
-  ``rally.plugins.openstack.context.cleanup.base.ResourceManager``
+  ``rally.plugins.openstack.cleanup.base.ResourceManager``
   so that we can determine the name of a resource in order to match it.
 * A ``task_id`` keyword argument will be added to
   ``name_matches_object`` and ``name_matches_pattern`` in order to
   ensure that we only match names from the currently-running
   task. This will need to be passed along starting with
-  ``rally.plugins.openstack.context.cleanup.manager.cleanup()``, and
+  ``rally.plugins.openstack.cleanup.manager.cleanup()``, and
   added as a keyword argument to every intermediate function.
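The superclass-keyed discovery the first bullet describes can be sketched in a few lines. This is a self-contained illustration rather than Rally's implementation: `itersubclasses()` mimics the behaviour of `rally.common.plugin.discover.itersubclasses`, and the manager classes are hypothetical stand-ins.

```python
# Sketch of superclass-based cleanup filtering: recursively discover
# subclasses, then act only on those under the requested superclass.
# All class names here are illustrative, not Rally's real managers.

def itersubclasses(cls):
    """Recursively yield all direct and indirect subclasses of cls."""
    for sub in cls.__subclasses__():
        yield sub
        for nested in itersubclasses(sub):
            yield nested

class ResourceManager(object):
    pass

class NovaServer(ResourceManager):
    pass

class CinderVolume(ResourceManager):
    pass

class CinderVolumeBackup(CinderVolume):
    pass

def managers_under(superclass=ResourceManager):
    """Managers a type-restricted cleanup() call would operate on."""
    return list(itersubclasses(superclass))

print([cls.__name__ for cls in managers_under(CinderVolume)])
```

Passing a narrower superclass thus restricts the cleanup to one family of resources without any explicit registry of names.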
 Additionally, a new top-level command will be added::
@@ -170,7 +170,7 @@ Work Items

 #. Modify ``name_matches_{object,pattern}`` to accept a task ID.
 #. Add ``name()`` functions to all ``ResourceManager`` subclasses.
 #. Move
-   ``rally.plugins.openstack.context.cleanup.manager.{base,manager,resources}``
+   ``rally.plugins.openstack.cleanup.manager.{base,manager,resources}``
    to ``rally.plugins.openstack.cleanup``.
 #. Modify ``rally.plugins.openstack.cleanup.manager.cleanup()`` to
    accept a task ID and a superclass, pass them along to
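The first work item, task-aware name matching, could look roughly like the following. The `rally-<task>-<suffix>` format used here is purely illustrative, not Rally's actual naming scheme, and the function is a hedged sketch, not the real `name_matches_pattern`.

```python
# Sketch of task-aware name matching: a resource name matches only when
# the task identifier embedded in it equals the currently-running task's
# ID. The naming pattern below is an assumption for illustration.
import re

def name_matches_pattern(name, pattern, task_id=None):
    match = re.match(pattern, name)
    if not match:
        return False
    if task_id is not None:
        # Only names stamped with the current task's ID count as matches.
        return match.group("task_id") == task_id
    return True

PATTERN = r"^rally-(?P<task_id>[0-9a-f]{8})-[0-9a-f]{4}$"

print(name_matches_pattern("rally-deadbeef-cafe", PATTERN, task_id="deadbeef"))
print(name_matches_pattern("rally-deadbeef-cafe", PATTERN, task_id="00000000"))
```

With `task_id=None` the old behaviour is preserved, which keeps the change backward compatible for callers that do not care about task scoping.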
@@ -16,7 +16,7 @@ import itertools

 from rally.common import logging
 from rally import osclients
-from rally.plugins.openstack.context.cleanup import base as cleanup_base
+from rally.plugins.openstack.cleanup import base as cleanup_base
 from rally.plugins.openstack.context.keystone import users
 from rally.plugins.openstack.scenarios.cinder import utils as cinder_utils
 from rally.plugins.openstack.scenarios.ec2 import utils as ec2_utils
rally/plugins/openstack/cleanup/base.py (new file, 121 lines)
@@ -0,0 +1,121 @@
# Copyright 2014: Mirantis Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.


from oslo_config import cfg

from rally.task import utils


CONF = cfg.CONF

CLEANUP_OPTS = [
    cfg.IntOpt("resource_deletion_timeout", default=600,
               help="A timeout in seconds for deleting resources")
]
cleanup_group = cfg.OptGroup(name="cleanup", title="Cleanup Options")
CONF.register_group(cleanup_group)
CONF.register_opts(CLEANUP_OPTS, cleanup_group)


def resource(service, resource, order=0, admin_required=False,
             perform_for_admin_only=False, tenant_resource=False,
             max_attempts=3, timeout=CONF.cleanup.resource_deletion_timeout,
             interval=1, threads=20):
    """Decorator that overrides resource specification.

    Just put it on top of your resource class and specify arguments that you
    need.

    :param service: It is equal to client name for corresponding service.
                    E.g. "nova", "cinder" or "zaqar"
    :param resource: Client manager name for resource. E.g. in case of
                     nova.servers you should write here "servers"
    :param order: Used to adjust priority of cleanup for different resource
                  types
    :param admin_required: Admin user is required
    :param perform_for_admin_only: Perform cleanup for admin user only
    :param tenant_resource: Perform deletion only 1 time per tenant
    :param max_attempts: Max amount of attempts to delete single resource
    :param timeout: Max duration of deletion in seconds
    :param interval: Resource status pooling interval
    :param threads: Amount of threads (workers) that are deleting resources
                    simultaneously
    """

    def inner(cls):
        # TODO(boris-42): This can be written better I believe =)
        cls._service = service
        cls._resource = resource
        cls._order = order
        cls._admin_required = admin_required
        cls._perform_for_admin_only = perform_for_admin_only
        cls._max_attempts = max_attempts
        cls._timeout = timeout
        cls._interval = interval
        cls._threads = threads
        cls._tenant_resource = tenant_resource

        return cls

    return inner


@resource(service=None, resource=None)
class ResourceManager(object):
    """Base class for cleanup plugins for specific resources.

    You should use @resource decorator to specify major configuration of
    resource manager. Usually you should specify: service, resource and order.

    If project python client is very specific, you can override delete(),
    list() and is_deleted() methods to make them fit to your case.
    """

    def __init__(self, resource=None, admin=None, user=None, tenant_uuid=None):
        self.admin = admin
        self.user = user
        self.raw_resource = resource
        self.tenant_uuid = tenant_uuid

    def _manager(self):
        client = self._admin_required and self.admin or self.user
        return getattr(getattr(client, self._service)(), self._resource)

    def id(self):
        """Returns id of resource."""
        return self.raw_resource.id

    def is_deleted(self):
        """Checks if the resource is deleted.

        Fetch resource by id from service and check it status.
        In case of NotFound or status is DELETED or DELETE_COMPLETE returns
        True, otherwise False.
        """
        try:
            resource = self._manager().get(self.id())
        except Exception as e:
            return getattr(e, "code", getattr(e, "http_status", 400)) == 404

        return utils.get_status(resource) in ("DELETED", "DELETE_COMPLETE")

    def delete(self):
        """Delete resource that corresponds to instance of this class."""
        self._manager().delete(self.id())

    def list(self):
        """List all resources specific for admin or user."""
        return self._manager().list()
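The `@resource` decorator above is plain attribute injection: it stores the declared configuration on the class for the cleanup machinery to read later. A trimmed, standalone sketch of the same pattern (parameters are a subset of the real ones, and the class is a stand-in rather than a real `ResourceManager` subclass):

```python
# Minimal illustration of the @resource decorator pattern from base.py:
# the decorator only records configuration as class attributes.

def resource(service, resource, order=0, admin_required=False):
    def inner(cls):
        cls._service = service
        cls._resource = resource
        cls._order = order
        cls._admin_required = admin_required
        return cls
    return inner

@resource("nova", "servers", order=200)
class NovaServer(object):
    """Stand-in manager; the real one subclasses base.ResourceManager."""

print(NovaServer._service, NovaServer._resource, NovaServer._order)
```

Because the decorator returns the class unchanged apart from the attributes, subclasses inherit the configuration and can still be re-decorated to override it.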
rally/plugins/openstack/cleanup/manager.py (new file, 282 lines)
@@ -0,0 +1,282 @@
# Copyright 2014: Mirantis Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import time

from rally.common import broker
from rally.common.i18n import _
from rally.common import logging
from rally.common.plugin import discover
from rally.common import utils as rutils
from rally import osclients
from rally.plugins.openstack.cleanup import base


LOG = logging.getLogger(__name__)


class SeekAndDestroy(object):

    def __init__(self, manager_cls, admin, users):
        """Resource deletion class.

        This class contains method exterminate() that finds and deletes
        all resources created by Rally.

        :param manager_cls: subclass of base.ResourceManager
        :param admin: admin credential like in context["admin"]
        :param users: users credentials like in context["users"]
        """
        self.manager_cls = manager_cls
        self.admin = admin
        self.users = users or []

    @staticmethod
    def _get_cached_client(user, cache=None):
        """Simplifies initialization and caching OpenStack clients."""

        if not user:
            return None

        if not isinstance(cache, dict):
            return osclients.Clients(user["credential"])

        key = user["credential"]
        if key not in cache:
            cache[key] = osclients.Clients(key)

        return cache[key]

    def _delete_single_resource(self, resource):
        """Safe resource deletion with retries and timeouts.

        Send request to delete resource, in case of failures repeat it few
        times. After that pull status of resource until it's deleted.

        Writes in LOG warning with UUID of resource that wasn't deleted

        :param resource: instance of resource manager initiated with resource
                         that should be deleted.
        """

        msg_kw = {
            "uuid": resource.id(),
            "service": resource._service,
            "resource": resource._resource
        }

        LOG.debug("Deleting %(service)s %(resource)s object %(uuid)s" %
                  msg_kw)

        try:
            rutils.retry(resource._max_attempts, resource.delete)
        except Exception as e:
            msg_kw["reason"] = e
            LOG.warning(
                _("Resource deletion failed, max retries exceeded for "
                  "%(service)s.%(resource)s: %(uuid)s. Reason: %(reason)s")
                % msg_kw)
            if logging.is_debug():
                LOG.exception(e)
        else:
            started = time.time()
            failures_count = 0
            while time.time() - started < resource._timeout:
                try:
                    if resource.is_deleted():
                        return
                except Exception as e:
                    LOG.warning(
                        _("Seems like %s.%s.is_deleted(self) method is broken "
                          "It shouldn't raise any exceptions.")
                        % (resource.__module__, type(resource).__name__))
                    LOG.exception(e)

                    # NOTE(boris-42): Avoid LOG spamming in case of bad
                    #                 is_deleted() method
                    failures_count += 1
                    if failures_count > resource._max_attempts:
                        break

                finally:
                    time.sleep(resource._interval)

            LOG.warning(_("Resource deletion failed, timeout occurred for "
                          "%(service)s.%(resource)s: %(uuid)s.")
                        % msg_kw)

    def _gen_publisher(self):
        """Returns publisher for deletion jobs.

        This method iterates over all users, lists all resources
        (using manager_cls) and puts jobs for deletion.

        Every deletion job contains tuple with two values: user and resource
        uuid that should be deleted.

        In case of tenant based resource, uuids are fetched only from one user
        per tenant.
        """

        def publisher(queue):

            def _publish(admin, user, manager):
                try:
                    for raw_resource in rutils.retry(3, manager.list):
                        queue.append((admin, user, raw_resource))
                except Exception as e:
                    LOG.warning(
                        _("Seems like %s.%s.list(self) method is broken. "
                          "It shouldn't raise any exceptions.")
                        % (manager.__module__, type(manager).__name__))
                    LOG.exception(e)

            if self.admin and (not self.users
                               or self.manager_cls._perform_for_admin_only):
                manager = self.manager_cls(
                    admin=self._get_cached_client(self.admin))
                _publish(self.admin, None, manager)

            else:
                visited_tenants = set()
                admin_client = self._get_cached_client(self.admin)
                for user in self.users:
                    if (self.manager_cls._tenant_resource
                            and user["tenant_id"] in visited_tenants):
                        continue

                    visited_tenants.add(user["tenant_id"])
                    manager = self.manager_cls(
                        admin=admin_client,
                        user=self._get_cached_client(user),
                        tenant_uuid=user["tenant_id"])

                    _publish(self.admin, user, manager)

        return publisher

    def _gen_consumer(self):
        """Generate method that consumes single deletion job."""

        def consumer(cache, args):
            """Execute deletion job."""
            admin, user, raw_resource = args

            manager = self.manager_cls(
                resource=raw_resource,
                admin=self._get_cached_client(admin, cache=cache),
                user=self._get_cached_client(user, cache=cache),
                tenant_uuid=user and user["tenant_id"])

            self._delete_single_resource(manager)

        return consumer

    def exterminate(self):
        """Delete all resources for passed users, admin and resource_mgr."""

        broker.run(self._gen_publisher(), self._gen_consumer(),
                   consumers_count=self.manager_cls._threads)


def list_resource_names(admin_required=None):
    """List all resource managers names.

    Returns all service names and all combination of service.resource names.

    :param admin_required: None -> returns all ResourceManagers
                           True -> returns only admin ResourceManagers
                           False -> returns only non admin ResourceManagers
    """
    res_mgrs = discover.itersubclasses(base.ResourceManager)
    if admin_required is not None:
        res_mgrs = filter(lambda cls: cls._admin_required == admin_required,
                          res_mgrs)

    names = set()
    for cls in res_mgrs:
        names.add(cls._service)
        names.add("%s.%s" % (cls._service, cls._resource))

    return names


def find_resource_managers(names=None, admin_required=None):
    """Returns resource managers.

    :param names: List of names in format <service> or <service>.<resource>
                  that is used for filtering resource manager classes
    :param admin_required: None -> returns all ResourceManagers
                           True -> returns only admin ResourceManagers
                           False -> returns only non admin ResourceManagers
    """
    names = set(names or [])

    resource_managers = []
    for manager in discover.itersubclasses(base.ResourceManager):
        if admin_required is not None:
            if admin_required != manager._admin_required:
                continue

        if (manager._service in names
                or "%s.%s" % (manager._service, manager._resource) in names):
            resource_managers.append(manager)

    resource_managers.sort(key=lambda x: x._order)

    found_names = set()
    for mgr in resource_managers:
        found_names.add(mgr._service)
        found_names.add("%s.%s" % (mgr._service, mgr._resource))

    missing = names - found_names
    if missing:
        LOG.warning("Missing resource managers: %s" % ", ".join(missing))

    return resource_managers


def cleanup(names=None, admin_required=None, admin=None, users=None):
    """Generic cleaner.

    This method goes through all plugins. Filter those and left only plugins
    with _service from services or _resource from resources.

    Then goes through all passed users and using cleaners cleans all related
    resources.

    :param names: Use only resource manages that has name from this list.
                  There are in as _service or
                  (%s.%s % (_service, _resource)) from

    :param admin_required: If None -> return all plugins
                           If True -> return only admin plugins
                           If False -> return only non admin plugins
    :param admin: rally.common.objects.Credential that corresponds to OpenStack
                  admin.
    :param users: List of OpenStack users that was used during benchmarking.
                  Every user has next structure:
                  {
                      "id": <uuid1>,
                      "tenant_id": <uuid2>,
                      "credential": <rally.common.objects.Credential>
                  }
    """
    for manager in find_resource_managers(names, admin_required):
        LOG.debug("Cleaning up %(service)s %(resource)s objects" %
                  {"service": manager._service,
                   "resource": manager._resource})
        SeekAndDestroy(manager, admin, users).exterminate()
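The name filtering done by `find_resource_managers()` above reduces to a simple set-membership test: a manager is selected when its service name, or `<service>.<resource>`, appears in the requested names, and results are ordered by `_order`. A standalone sketch of that core logic (the `Mgr` class and sample data are illustrative, not Rally's):

```python
# Sketch of find_resource_managers()-style name filtering and ordering.

class Mgr(object):
    def __init__(self, service, resource, order):
        self._service = service
        self._resource = resource
        self._order = order

MANAGERS = [Mgr("cinder", "volumes", 400),
            Mgr("nova", "servers", 200),
            Mgr("nova", "keypairs", 202)]

def find(names, managers=MANAGERS):
    names = set(names)
    # A bare service name matches every resource of that service; a
    # dotted "service.resource" name matches exactly one manager.
    picked = [m for m in managers
              if m._service in names
              or "%s.%s" % (m._service, m._resource) in names]
    return sorted(picked, key=lambda m: m._order)

print([(m._service, m._resource) for m in find(["nova.servers", "cinder"])])
```

Sorting by `_order` is what lets dependent resources (e.g. servers before their quotas) be deleted in a safe sequence.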
rally/plugins/openstack/cleanup/resources.py (new file, 654 lines)
@@ -0,0 +1,654 @@
# Copyright 2014: Mirantis Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from boto import exception as boto_exception
from neutronclient.common import exceptions as neutron_exceptions
from saharaclient.api import base as saharaclient_base

from rally.common import logging
from rally.common.plugin import discover
from rally.common import utils
from rally.plugins.openstack.cleanup import base
from rally.plugins.openstack.scenarios.fuel import utils as futils
from rally.plugins.openstack.scenarios.keystone import utils as kutils
from rally.plugins.openstack.scenarios.nova import utils as nova_utils
from rally.plugins.openstack.wrappers import keystone as keystone_wrapper

LOG = logging.getLogger(__name__)


def get_order(start):
    return iter(range(start, start + 99))


class SynchronizedDeletion(object):

    def is_deleted(self):
        return True


class QuotaMixin(SynchronizedDeletion):

    def id(self):
        return self.raw_resource

    def delete(self):
        self._manager().delete(self.raw_resource)

    def list(self):
        return [self.tenant_uuid] if self.tenant_uuid else []


# HEAT

@base.resource("heat", "stacks", order=100, tenant_resource=True)
class HeatStack(base.ResourceManager):
    pass


# NOVA

_nova_order = get_order(200)


@base.resource("nova", "servers", order=next(_nova_order))
class NovaServer(base.ResourceManager):
    def list(self):
        """List all servers."""

        if hasattr(self._manager().api, "api_version"):
            # NOTE(andreykurilin): novaclient v2.27.0 includes ability to
            #   return all servers(see https://review.openstack.org/#/c/217101
            #   for more details). This release can be identified by presence
            #   of "api_version" property of ``novaclient.client.Client`` cls.
            return self._manager().list(limit=-1)
        else:
            # FIXME(andreykurilin): Remove code below, when minimum version of
            #   novaclient in requirements will allow it.
            # NOTE(andreykurilin): Nova API returns only limited number(
            #   'osapi_max_limit' option in nova.conf) of servers, so we need
            #   to use 'marker' option to list all pages of servers.
            result = []
            marker = None
            while True:
                servers = self._manager().list(marker=marker)
                if not servers:
                    break
                result.extend(servers)
                marker = servers[-1].id
            return result

    def delete(self):
        if getattr(self.raw_resource, "OS-EXT-STS:locked", False):
            self.raw_resource.unlock()
        super(NovaServer, self).delete()


@base.resource("nova", "floating_ips", order=next(_nova_order))
class NovaFloatingIPs(SynchronizedDeletion, base.ResourceManager):
    pass


@base.resource("nova", "keypairs", order=next(_nova_order))
class NovaKeypair(SynchronizedDeletion, base.ResourceManager):
    pass


@base.resource("nova", "security_groups", order=next(_nova_order))
class NovaSecurityGroup(SynchronizedDeletion, base.ResourceManager):

    def list(self):
        return filter(lambda x: x.name != "default",
                      super(NovaSecurityGroup, self).list())


@base.resource("nova", "quotas", order=next(_nova_order),
               admin_required=True, tenant_resource=True)
class NovaQuotas(QuotaMixin, base.ResourceManager):
    pass


@base.resource("nova", "floating_ips_bulk", order=next(_nova_order),
               admin_required=True)
class NovaFloatingIpsBulk(SynchronizedDeletion, base.ResourceManager):

    def id(self):
        return self.raw_resource.address

    def list(self):
        return [floating_ip for floating_ip in self._manager().list()
                if utils.name_matches_object(floating_ip.pool,
                                             nova_utils.NovaScenario)]


@base.resource("nova", "networks", order=next(_nova_order),
               admin_required=True, tenant_resource=True)
class NovaNetworks(SynchronizedDeletion, base.ResourceManager):

    def list(self):
        # NOTE(stpierre): any plugin can create a nova network via the
        # network wrapper, and that network's name will be created
        # according to its owner's random name generation
        # parameters. so we need to check if there are nova networks
        # whose name pattern matches those of any loaded plugin that
        # implements RandomNameGeneratorMixin
        classes = list(discover.itersubclasses(utils.RandomNameGeneratorMixin))
        return [net for net in self._manager().list()
                if utils.name_matches_object(net.label, *classes)]


# EC2

_ec2_order = get_order(250)


class EC2Mixin(object):

    def _manager(self):
        return getattr(self.user, self._service)()


@base.resource("ec2", "servers", order=next(_ec2_order))
class EC2Server(EC2Mixin, base.ResourceManager):

    def is_deleted(self):
        try:
            instances = self._manager().get_only_instances(
                instance_ids=[self.id()])
        except boto_exception.EC2ResponseError as e:
            # NOTE(wtakase): Nova EC2 API returns 'InvalidInstanceID.NotFound'
            #                if instance not found. In this case, we consider
            #                instance has already been deleted.
            return getattr(e, "error_code") == "InvalidInstanceID.NotFound"

        # NOTE(wtakase): After instance deletion, instance can be 'terminated'
        #                state. If all instance states are 'terminated', this
        #                returns True. And if get_only_instaces() returns empty
        #                list, this also returns True because we consider
        #                instance has already been deleted.
        return all(map(lambda i: i.state == "terminated", instances))

    def delete(self):
        self._manager().terminate_instances(instance_ids=[self.id()])

    def list(self):
        return self._manager().get_only_instances()


# NEUTRON

_neutron_order = get_order(300)


@base.resource(service=None, resource=None, admin_required=True)
class NeutronMixin(SynchronizedDeletion, base.ResourceManager):
    # Neutron has the best client ever, so we need to override everything

    def supports_extension(self, extension):
        exts = self._manager().list_extensions().get("extensions", [])
        if any(ext.get("alias") == extension for ext in exts):
            return True
        return False

    def _manager(self):
        client = self._admin_required and self.admin or self.user
        return getattr(client, self._service)()

    def id(self):
        return self.raw_resource["id"]

    def delete(self):
        delete_method = getattr(self._manager(), "delete_%s" % self._resource)
        delete_method(self.id())

    def list(self):
        resources = self._resource + "s"
        list_method = getattr(self._manager(), "list_%s" % resources)

        return filter(lambda r: r["tenant_id"] == self.tenant_uuid,
                      list_method({"tenant_id": self.tenant_uuid})[resources])


class NeutronLbaasV1Mixin(NeutronMixin):

    def list(self):
        if self.supports_extension("lbaas"):
            return super(NeutronLbaasV1Mixin, self).list()
        return []


@base.resource("neutron", "vip", order=next(_neutron_order),
               tenant_resource=True)
class NeutronV1Vip(NeutronLbaasV1Mixin):
    pass


@base.resource("neutron", "health_monitor", order=next(_neutron_order),
               tenant_resource=True)
class NeutronV1Healthmonitor(NeutronLbaasV1Mixin):
    pass


@base.resource("neutron", "pool", order=next(_neutron_order),
               tenant_resource=True)
class NeutronV1Pool(NeutronLbaasV1Mixin):
    pass


@base.resource("neutron", "port", order=next(_neutron_order),
               tenant_resource=True)
class NeutronPort(NeutronMixin):

    def delete(self):
        if (self.raw_resource["device_owner"] == "network:router_interface" or
                self.raw_resource["device_owner"] ==
                "network:router_interface_distributed"):
            self._manager().remove_interface_router(
                self.raw_resource["device_id"],
                {"port_id": self.raw_resource["id"]})
        else:
            try:
                self._manager().delete_port(self.id())
            except neutron_exceptions.PortNotFoundClient:
                # Port can be already auto-deleted, skip silently
                LOG.debug("Port %s was not deleted. Skip silently because "
                          "port can be already auto-deleted."
                          % self.id())


@base.resource("neutron", "router", order=next(_neutron_order),
               tenant_resource=True)
class NeutronRouter(NeutronMixin):
    pass


@base.resource("neutron", "subnet", order=next(_neutron_order),
               tenant_resource=True)
class NeutronSubnet(NeutronMixin):
    pass


@base.resource("neutron", "network", order=next(_neutron_order),
               tenant_resource=True)
class NeutronNetwork(NeutronMixin):
    pass


@base.resource("neutron", "floatingip", order=next(_neutron_order),
               tenant_resource=True)
class NeutronFloatingIP(NeutronMixin):
    pass


@base.resource("neutron", "security_group", order=next(_neutron_order),
               tenant_resource=True)
class NeutronSecurityGroup(NeutronMixin):
    pass


@base.resource("neutron", "quota", order=next(_neutron_order),
               admin_required=True, tenant_resource=True)
class NeutronQuota(QuotaMixin, NeutronMixin):

    def delete(self):
        self._manager().delete_quota(self.tenant_uuid)


# CINDER

_cinder_order = get_order(400)


@base.resource("cinder", "backups", order=next(_cinder_order),
               tenant_resource=True)
class CinderVolumeBackup(base.ResourceManager):
    pass


@base.resource("cinder", "volume_snapshots", order=next(_cinder_order),
               tenant_resource=True)
class CinderVolumeSnapshot(base.ResourceManager):
    pass


@base.resource("cinder", "transfers", order=next(_cinder_order),
               tenant_resource=True)
class CinderVolumeTransfer(base.ResourceManager):
    pass


@base.resource("cinder", "volumes", order=next(_cinder_order),
               tenant_resource=True)
class CinderVolume(base.ResourceManager):
    pass


@base.resource("cinder", "quotas", order=next(_cinder_order),
               admin_required=True, tenant_resource=True)
class CinderQuotas(QuotaMixin, base.ResourceManager):
    pass


# MANILA

_manila_order = get_order(450)


@base.resource("manila", "shares", order=next(_manila_order),
               tenant_resource=True)
class ManilaShare(base.ResourceManager):
    pass


@base.resource("manila", "share_networks", order=next(_manila_order),
               tenant_resource=True)
class ManilaShareNetwork(base.ResourceManager):
    pass


@base.resource("manila", "security_services", order=next(_manila_order),
               tenant_resource=True)
class ManilaSecurityService(base.ResourceManager):
    pass


# GLANCE

@base.resource("glance", "images", order=500, tenant_resource=True)
class GlanceImage(base.ResourceManager):

    def list(self):
        return self._manager().list(owner=self.tenant_uuid)


# SAHARA

_sahara_order = get_order(600)


@base.resource("sahara", "job_executions", order=next(_sahara_order),
               tenant_resource=True)
class SaharaJobExecution(SynchronizedDeletion, base.ResourceManager):
    pass


@base.resource("sahara", "jobs", order=next(_sahara_order),
               tenant_resource=True)
class SaharaJob(SynchronizedDeletion, base.ResourceManager):
    pass


@base.resource("sahara", "job_binary_internals", order=next(_sahara_order),
               tenant_resource=True)
class SaharaJobBinaryInternals(SynchronizedDeletion, base.ResourceManager):
    pass


@base.resource("sahara", "job_binaries", order=next(_sahara_order),
               tenant_resource=True)
class SaharaJobBinary(SynchronizedDeletion, base.ResourceManager):
    pass


@base.resource("sahara", "data_sources", order=next(_sahara_order),
               tenant_resource=True)
class SaharaDataSource(SynchronizedDeletion, base.ResourceManager):
    pass


@base.resource("sahara", "clusters", order=next(_sahara_order),
               tenant_resource=True)
class SaharaCluster(base.ResourceManager):
# Need special treatment for Sahara Cluster because of the way the
|
||||
# exceptions are described in:
|
||||
# https://github.com/openstack/python-saharaclient/blob/master/
|
||||
# saharaclient/api/base.py#L145
|
||||
|
||||
def is_deleted(self):
|
||||
try:
|
||||
self._manager().get(self.id())
|
||||
return False
|
||||
except saharaclient_base.APIException as e:
|
||||
return e.error_code == 404
|
||||
|
||||
|
||||
@base.resource("sahara", "cluster_templates", order=next(_sahara_order),
|
||||
tenant_resource=True)
|
||||
class SaharaClusterTemplate(SynchronizedDeletion, base.ResourceManager):
|
||||
pass
|
||||
|
||||
|
||||
@base.resource("sahara", "node_group_templates", order=next(_sahara_order),
|
||||
tenant_resource=True)
|
||||
class SaharaNodeGroup(SynchronizedDeletion, base.ResourceManager):
|
||||
pass
|
||||
|
||||
|
||||
# CEILOMETER
|
||||
|
||||
@base.resource("ceilometer", "alarms", order=700, tenant_resource=True)
|
||||
class CeilometerAlarms(SynchronizedDeletion, base.ResourceManager):
|
||||
|
||||
def id(self):
|
||||
return self.raw_resource.alarm_id
|
||||
|
||||
def list(self):
|
||||
query = [{
|
||||
"field": "project_id",
|
||||
"op": "eq",
|
||||
"value": self.tenant_uuid
|
||||
}]
|
||||
return self._manager().list(q=query)
|
||||
|
||||
|
||||
# ZAQAR
|
||||
|
||||
@base.resource("zaqar", "queues", order=800)
|
||||
class ZaqarQueues(SynchronizedDeletion, base.ResourceManager):
|
||||
|
||||
def list(self):
|
||||
return self.user.zaqar().queues()
|
||||
|
||||
|
||||
# DESIGNATE
|
||||
|
||||
_designate_order = get_order(900)
|
||||
|
||||
|
||||
class DesignateResource(SynchronizedDeletion, base.ResourceManager):
|
||||
def _manager(self):
|
||||
# NOTE: service name contains version, so we should split them
|
||||
service_name, version = self._service.split("_v")
|
||||
return getattr(getattr(self.user, service_name)(version),
|
||||
self._resource)
|
||||
|
||||
|
||||
@base.resource("designate_v1", "domains", order=next(_designate_order))
|
||||
class DesignateDomain(DesignateResource):
|
||||
pass
|
||||
|
||||
|
||||
@base.resource("designate_v2", "zones", order=next(_designate_order))
|
||||
class DesignateZones(DesignateResource):
|
||||
pass
|
||||
|
||||
|
||||
@base.resource("designate_v1", "servers", order=next(_designate_order),
|
||||
admin_required=True, perform_for_admin_only=True)
|
||||
class DesignateServer(DesignateResource):
|
||||
pass
|
||||
|
||||
|
||||
# SWIFT
|
||||
|
||||
_swift_order = get_order(1000)
|
||||
|
||||
|
||||
class SwiftMixin(SynchronizedDeletion, base.ResourceManager):
|
||||
|
||||
def _manager(self):
|
||||
client = self._admin_required and self.admin or self.user
|
||||
return getattr(client, self._service)()
|
||||
|
||||
def id(self):
|
||||
return self.raw_resource
|
||||
|
||||
def delete(self):
|
||||
delete_method = getattr(self._manager(), "delete_%s" % self._resource)
|
||||
# NOTE(weiwu): *self.raw_resource is required because for deleting
|
||||
# container we are passing only container name, to delete object we
|
||||
# should pass as first argument container and second is object name.
|
||||
delete_method(*self.raw_resource)
|
||||
|
||||
|
||||
@base.resource("swift", "object", order=next(_swift_order),
|
||||
tenant_resource=True)
|
||||
class SwiftObject(SwiftMixin):
|
||||
|
||||
def list(self):
|
||||
object_list = []
|
||||
containers = self._manager().get_account(full_listing=True)[1]
|
||||
for con in containers:
|
||||
objects = self._manager().get_container(con["name"],
|
||||
full_listing=True)[1]
|
||||
for obj in objects:
|
||||
raw_resource = [con["name"], obj["name"]]
|
||||
object_list.append(raw_resource)
|
||||
return object_list
|
||||
|
||||
|
||||
@base.resource("swift", "container", order=next(_swift_order),
|
||||
tenant_resource=True)
|
||||
class SwiftContainer(SwiftMixin):
|
||||
|
||||
def list(self):
|
||||
containers = self._manager().get_account(full_listing=True)[1]
|
||||
return [[con["name"]] for con in containers]
|
||||
|
||||
|
||||
# MISTRAL
|
||||
|
||||
@base.resource("mistral", "workbooks", order=1100, tenant_resource=True)
|
||||
class MistralWorkbooks(SynchronizedDeletion, base.ResourceManager):
|
||||
def delete(self):
|
||||
self._manager().delete(self.raw_resource.name)
|
||||
|
||||
|
||||
# MURANO
|
||||
|
||||
_murano_order = get_order(1200)
|
||||
|
||||
|
||||
@base.resource("murano", "environments", tenant_resource=True,
|
||||
order=next(_murano_order))
|
||||
class MuranoEnvironments(SynchronizedDeletion, base.ResourceManager):
|
||||
pass
|
||||
|
||||
|
||||
@base.resource("murano", "packages", tenant_resource=True,
|
||||
order=next(_murano_order))
|
||||
class MuranoPackages(base.ResourceManager):
|
||||
def list(self):
|
||||
return filter(lambda x: x.name != "Core library",
|
||||
super(MuranoPackages, self).list())
|
||||
|
||||
|
||||
# IRONIC
|
||||
|
||||
_ironic_order = get_order(1300)
|
||||
|
||||
|
||||
@base.resource("ironic", "node", admin_required=True,
|
||||
order=next(_ironic_order), perform_for_admin_only=True)
|
||||
class IronicNodes(base.ResourceManager):
|
||||
|
||||
def id(self):
|
||||
return self.raw_resource.uuid
|
||||
|
||||
|
||||
# FUEL
|
||||
|
||||
@base.resource("fuel", "environment", order=1400,
|
||||
admin_required=True, perform_for_admin_only=True)
|
||||
class FuelEnvironment(base.ResourceManager):
|
||||
"""Fuel environment.
|
||||
|
||||
That is the only resource that can be deleted by fuelclient explicitly.
|
||||
"""
|
||||
|
||||
def id(self):
|
||||
return self.raw_resource["id"]
|
||||
|
||||
def is_deleted(self):
|
||||
return not self._manager().get(self.id())
|
||||
|
||||
def list(self):
|
||||
return [env for env in self._manager().list()
|
||||
if utils.name_matches_object(env["name"],
|
||||
futils.FuelScenario)]
|
||||
|
||||
|
||||
# KEYSTONE
|
||||
|
||||
_keystone_order = get_order(9000)
|
||||
|
||||
|
||||
class KeystoneMixin(SynchronizedDeletion):
|
||||
|
||||
def _manager(self):
|
||||
return keystone_wrapper.wrap(getattr(self.admin, self._service)())
|
||||
|
||||
def delete(self):
|
||||
delete_method = getattr(self._manager(), "delete_%s" % self._resource)
|
||||
delete_method(self.id())
|
||||
|
||||
def list(self):
|
||||
# TODO(boris-42): We should use such stuff in all list commands.
|
||||
resources = self._resource + "s"
|
||||
list_method = getattr(self._manager(), "list_%s" % resources)
|
||||
|
||||
return filter(kutils.is_temporary, list_method())
|
||||
|
||||
|
||||
@base.resource("keystone", "user", order=next(_keystone_order),
|
||||
admin_required=True, perform_for_admin_only=True)
|
||||
class KeystoneUser(KeystoneMixin, base.ResourceManager):
|
||||
pass
|
||||
|
||||
|
||||
@base.resource("keystone", "project", order=next(_keystone_order),
|
||||
admin_required=True, perform_for_admin_only=True)
|
||||
class KeystoneProject(KeystoneMixin, base.ResourceManager):
|
||||
pass
|
||||
|
||||
|
||||
@base.resource("keystone", "service", order=next(_keystone_order),
|
||||
admin_required=True, perform_for_admin_only=True)
|
||||
class KeystoneService(KeystoneMixin, base.ResourceManager):
|
||||
pass
|
||||
|
||||
|
||||
@base.resource("keystone", "role", order=next(_keystone_order),
|
||||
admin_required=True, perform_for_admin_only=True)
|
||||
class KeystoneRole(KeystoneMixin, base.ResourceManager):
|
||||
pass
|
||||
|
||||
|
||||
@base.resource("keystone", "ec2", tenant_resource=True,
|
||||
order=next(_keystone_order))
|
||||
class KeystoneEc2(SynchronizedDeletion, base.ResourceManager):
|
||||
def list(self):
|
||||
return self._manager().list(self.raw_resource)
|
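Every plugin above follows the same registration pattern: the `@base.resource` decorator stamps configuration attributes onto the class it wraps, and subclasses only override `list()`, `delete()` or `is_deleted()` when a client is unusual. A minimal standalone sketch of that pattern (the `resource` function and `DemoSubnet` class below are simplified stand-ins for illustration, not the real rally implementation):

```python
# Simplified stand-in for the @base.resource decorator: it only attaches
# configuration attributes to the wrapped class and returns it unchanged.
def resource(service, resource, order=0, admin_required=False,
             tenant_resource=False):
    def inner(cls):
        cls._service = service
        cls._resource = resource
        cls._order = order
        cls._admin_required = admin_required
        cls._tenant_resource = tenant_resource
        return cls
    return inner


# A hypothetical plugin registered the same way as the classes above.
@resource("neutron", "subnet", order=300, tenant_resource=True)
class DemoSubnet(object):
    pass
```

The cleanup machinery can then discover such classes by iterating subclasses and reading `_service`, `_resource` and `_order` without instantiating anything.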
@@ -16,7 +16,7 @@ from rally.common.i18n import _
 from rally.common import logging
 from rally.common import utils as rutils
 from rally import consts
-from rally.plugins.openstack.context.cleanup import manager as resource_manager
+from rally.plugins.openstack.cleanup import manager as resource_manager
 from rally.plugins.openstack.scenarios.cinder import utils as cinder_utils
 from rally.task import context
@@ -0,0 +1,94 @@
# Copyright 2014: Mirantis Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import sys

from rally.common.i18n import _
from rally.common import logging
from rally import consts
from rally import exceptions
from rally.plugins.openstack.cleanup import manager
from rally.task import context


LOG = logging.getLogger(__name__)


class NoSuchCleanupResources(exceptions.RallyException):
    msg_fmt = _("Missing cleanup resource managers: %(message)s")


class CleanupMixin(object):

    CONFIG_SCHEMA = {
        "type": "array",
        "$schema": consts.JSON_SCHEMA,
        "items": {
            "type": "string",
        },
        "additionalProperties": False
    }

    def setup(self):
        pass


# NOTE(amaretskiy): Set order to run this just before UserCleanup
@context.configure(name="admin_cleanup", order=(sys.maxsize - 1), hidden=True)
class AdminCleanup(CleanupMixin, context.Context):
    """Context class for admin resources cleanup."""

    @classmethod
    def validate(cls, config, non_hidden=False):
        super(AdminCleanup, cls).validate(config, non_hidden)

        missing = set(config)
        missing -= manager.list_resource_names(admin_required=True)
        missing = ", ".join(missing)
        if missing:
            LOG.info(_("Couldn't find cleanup resource managers: %s")
                     % missing)
            raise NoSuchCleanupResources(missing)

    @logging.log_task_wrapper(LOG.info, _("admin resources cleanup"))
    def cleanup(self):
        manager.cleanup(names=self.config,
                        admin_required=True,
                        admin=self.context["admin"],
                        users=self.context.get("users", []))


# NOTE(amaretskiy): Set maximum order to run this last
@context.configure(name="cleanup", order=sys.maxsize, hidden=True)
class UserCleanup(CleanupMixin, context.Context):
    """Context class for user resources cleanup."""

    @classmethod
    def validate(cls, config, non_hidden=False):
        super(UserCleanup, cls).validate(config, non_hidden)

        missing = set(config)
        missing -= manager.list_resource_names(admin_required=False)
        missing = ", ".join(missing)
        if missing:
            LOG.info(_("Couldn't find cleanup resource managers: %s")
                     % missing)
            raise NoSuchCleanupResources(missing)

    @logging.log_task_wrapper(LOG.info, _("user resources cleanup"))
    def cleanup(self):
        manager.cleanup(names=self.config,
                        admin_required=False,
                        users=self.context.get("users", []))
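The `validate()` methods above rely on plain set arithmetic: anything in the task config that no registered resource manager can handle is reported back. A standalone sketch of that check, with `known_names` standing in for the result of `manager.list_resource_names(admin_required=...)`:

```python
# find_missing is a hypothetical helper mirroring the validate() logic:
# subtract the known manager names from the configured names and report
# whatever is left over.
def find_missing(config, known_names):
    missing = set(config)
    missing -= known_names
    return ", ".join(sorted(missing))


# Assumed example data; real names come from list_resource_names().
known_names = {"nova", "nova.servers", "cinder", "cinder.volumes"}
```

A non-empty result would trigger `NoSuchCleanupResources`, so a bad config fails fast at validation time instead of silently skipping cleanup.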
@@ -1,4 +1,4 @@
# Copyright 2014: Mirantis Inc.
#
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
@@ -13,109 +13,15 @@
 # License for the specific language governing permissions and limitations
 # under the License.

+# NOTE(stpierre): This module is left for backward compatibility.
+
-from oslo_config import cfg
+import sys
+import warnings

-from rally.task import utils
+from rally.plugins.openstack.cleanup import base

+warnings.warn("Module rally.plugins.openstack.context.cleanup.base has been "
+              "moved to rally.plugins.openstack.cleanup.base, and will be "
+              "removed at some point in the future.")

-CONF = cfg.CONF
-
-CLEANUP_OPTS = [
-    cfg.IntOpt("resource_deletion_timeout", default=600,
-               help="A timeout in seconds for deleting resources")
-]
-cleanup_group = cfg.OptGroup(name="cleanup", title="Cleanup Options")
-CONF.register_group(cleanup_group)
-CONF.register_opts(CLEANUP_OPTS, cleanup_group)
-
-
-def resource(service, resource, order=0, admin_required=False,
-             perform_for_admin_only=False, tenant_resource=False,
-             max_attempts=3, timeout=CONF.cleanup.resource_deletion_timeout,
-             interval=1, threads=20):
-    """Decorator that overrides resource specification.
-
-    Just put it on top of your resource class and specify arguments that you
-    need.
-
-    :param service: It is equal to client name for corresponding service.
-                    E.g. "nova", "cinder" or "zaqar"
-    :param resource: Client manager name for resource. E.g. in case of
-                     nova.servers you should write here "servers"
-    :param order: Used to adjust priority of cleanup for different resource
-                  types
-    :param admin_required: Admin user is required
-    :param perform_for_admin_only: Perform cleanup for admin user only
-    :param tenant_resource: Perform deletion only 1 time per tenant
-    :param max_attempts: Max amount of attempts to delete single resource
-    :param timeout: Max duration of deletion in seconds
-    :param interval: Resource status pooling interval
-    :param threads: Amount of threads (workers) that are deleting resources
-                    simultaneously
-    """
-
-    def inner(cls):
-        # TODO(boris-42): This can be written better I believe =)
-        cls._service = service
-        cls._resource = resource
-        cls._order = order
-        cls._admin_required = admin_required
-        cls._perform_for_admin_only = perform_for_admin_only
-        cls._max_attempts = max_attempts
-        cls._timeout = timeout
-        cls._interval = interval
-        cls._threads = threads
-        cls._tenant_resource = tenant_resource
-
-        return cls
-
-    return inner
-
-
-@resource(service=None, resource=None)
-class ResourceManager(object):
-    """Base class for cleanup plugins for specific resources.
-
-    You should use @resource decorator to specify major configuration of
-    resource manager. Usually you should specify: service, resource and order.
-
-    If project python client is very specific, you can override delete(),
-    list() and is_deleted() methods to make them fit to your case.
-    """
-
-    def __init__(self, resource=None, admin=None, user=None, tenant_uuid=None):
-        self.admin = admin
-        self.user = user
-        self.raw_resource = resource
-        self.tenant_uuid = tenant_uuid
-
-    def _manager(self):
-        client = self._admin_required and self.admin or self.user
-        return getattr(getattr(client, self._service)(), self._resource)
-
-    def id(self):
-        """Returns id of resource."""
-        return self.raw_resource.id
-
-    def is_deleted(self):
-        """Checks if the resource is deleted.
-
-        Fetch resource by id from service and check it status.
-        In case of NotFound or status is DELETED or DELETE_COMPLETE returns
-        True, otherwise False.
-        """
-        try:
-            resource = self._manager().get(self.id())
-        except Exception as e:
-            return getattr(e, "code", getattr(e, "http_status", 400)) == 404
-
-        return utils.get_status(resource) in ("DELETED", "DELETE_COMPLETE")
-
-    def delete(self):
-        """Delete resource that corresponds to instance of this class."""
-        self._manager().delete(self.id())
-
-    def list(self):
-        """List all resources specific for admin or user."""
-        return self._manager().list()
+
+sys.modules["rally.plugins.openstack.context.cleanup.base"] = base
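The backward-compatibility shim above keeps old imports working with a single line: registering the relocated module object in `sys.modules` under its former dotted path. A standalone sketch of the same trick, using made-up module names (`new_cleanup_demo`, `legacy_cleanup_demo`) purely for illustration:

```python
import sys
import types

# Build a module object in memory; in the real shim this is the module
# imported from its new location (rally.plugins.openstack.cleanup.base).
new_cleanup_demo = types.ModuleType("new_cleanup_demo")
new_cleanup_demo.ANSWER = 42

# Alias it under the legacy import path, exactly like
# sys.modules["rally.plugins.openstack.context.cleanup.base"] = base
sys.modules["legacy_cleanup_demo"] = new_cleanup_demo

# Importing the old name now yields the very same module object.
import legacy_cleanup_demo
```

Because the import system consults `sys.modules` first, no file named `legacy_cleanup_demo` needs to exist; callers importing the old path transparently get the new module.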
@@ -1,94 +0,0 @@
# Copyright 2014: Mirantis Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import sys

from rally.common.i18n import _
from rally.common import logging
from rally import consts
from rally import exceptions
from rally.plugins.openstack.context.cleanup import manager
from rally.task import context


LOG = logging.getLogger(__name__)


class NoSuchCleanupResources(exceptions.RallyException):
    msg_fmt = _("Missing cleanup resource managers: %(message)s")


class CleanupMixin(object):

    CONFIG_SCHEMA = {
        "type": "array",
        "$schema": consts.JSON_SCHEMA,
        "items": {
            "type": "string",
        },
        "additionalProperties": False
    }

    def setup(self):
        pass


# NOTE(amaretskiy): Set order to run this just before UserCleanup
@context.configure(name="admin_cleanup", order=(sys.maxsize - 1), hidden=True)
class AdminCleanup(CleanupMixin, context.Context):
    """Context class for admin resources cleanup."""

    @classmethod
    def validate(cls, config, non_hidden=False):
        super(AdminCleanup, cls).validate(config, non_hidden)

        missing = set(config)
        missing -= manager.list_resource_names(admin_required=True)
        missing = ", ".join(missing)
        if missing:
            LOG.info(_("Couldn't find cleanup resource managers: %s")
                     % missing)
            raise NoSuchCleanupResources(missing)

    @logging.log_task_wrapper(LOG.info, _("admin resources cleanup"))
    def cleanup(self):
        manager.cleanup(names=self.config,
                        admin_required=True,
                        admin=self.context["admin"],
                        users=self.context.get("users", []))


# NOTE(amaretskiy): Set maximum order to run this last
@context.configure(name="cleanup", order=sys.maxsize, hidden=True)
class UserCleanup(CleanupMixin, context.Context):
    """Context class for user resources cleanup."""

    @classmethod
    def validate(cls, config, non_hidden=False):
        super(UserCleanup, cls).validate(config, non_hidden)

        missing = set(config)
        missing -= manager.list_resource_names(admin_required=False)
        missing = ", ".join(missing)
        if missing:
            LOG.info(_("Couldn't find cleanup resource managers: %s")
                     % missing)
            raise NoSuchCleanupResources(missing)

    @logging.log_task_wrapper(LOG.info, _("user resources cleanup"))
    def cleanup(self):
        manager.cleanup(names=self.config,
                        admin_required=False,
                        users=self.context.get("users", []))
@@ -1,4 +1,4 @@
# Copyright 2014: Mirantis Inc.
#
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
@@ -13,270 +13,15 @@
 # License for the specific language governing permissions and limitations
 # under the License.

-import time
+# NOTE(stpierre): This module is left for backward compatibility.

-from rally.common import broker
-from rally.common.i18n import _
-from rally.common import logging
-from rally.common.plugin import discover
-from rally.common import utils as rutils
-from rally import osclients
-from rally.plugins.openstack.context.cleanup import base
+import sys
+import warnings

+from rally.plugins.openstack.cleanup import manager

-LOG = logging.getLogger(__name__)
+warnings.warn("Module rally.plugins.openstack.context.cleanup.manager has "
+              "been moved to rally.plugins.openstack.cleanup.manager, and "
+              "will be removed at some point in the future.")
-
-
-class SeekAndDestroy(object):
-
-    def __init__(self, manager_cls, admin, users):
-        """Resource deletion class.
-
-        This class contains method exterminate() that finds and deletes
-        all resources created by Rally.
-
-        :param manager_cls: subclass of base.ResourceManager
-        :param admin: admin credential like in context["admin"]
-        :param users: users credentials like in context["users"]
-        """
-        self.manager_cls = manager_cls
-        self.admin = admin
-        self.users = users or []
-
-    @staticmethod
-    def _get_cached_client(user, cache=None):
-        """Simplifies initialization and caching OpenStack clients."""
-
-        if not user:
-            return None
-
-        if not isinstance(cache, dict):
-            return osclients.Clients(user["credential"])
-
-        key = user["credential"]
-        if key not in cache:
-            cache[key] = osclients.Clients(key)
-
-        return cache[key]
-
-    def _delete_single_resource(self, resource):
-        """Safe resource deletion with retries and timeouts.
-
-        Send request to delete resource, in case of failures repeat it few
-        times. After that pull status of resource until it's deleted.
-
-        Writes in LOG warning with UUID of resource that wasn't deleted
-
-        :param resource: instance of resource manager initiated with resource
-                         that should be deleted.
-        """
-
-        msg_kw = {
-            "uuid": resource.id(),
-            "service": resource._service,
-            "resource": resource._resource
-        }
-
-        LOG.debug("Deleting %(service)s %(resource)s object %(uuid)s" %
-                  msg_kw)
-
-        try:
-            rutils.retry(resource._max_attempts, resource.delete)
-        except Exception as e:
-            msg_kw["reason"] = e
-            LOG.warning(
-                _("Resource deletion failed, max retries exceeded for "
-                  "%(service)s.%(resource)s: %(uuid)s. Reason: %(reason)s")
-                % msg_kw)
-            if logging.is_debug():
-                LOG.exception(e)
-        else:
-            started = time.time()
-            failures_count = 0
-            while time.time() - started < resource._timeout:
-                try:
-                    if resource.is_deleted():
-                        return
-                except Exception as e:
-                    LOG.warning(
-                        _("Seems like %s.%s.is_deleted(self) method is broken "
-                          "It shouldn't raise any exceptions.")
-                        % (resource.__module__, type(resource).__name__))
-                    LOG.exception(e)
-
-                    # NOTE(boris-42): Avoid LOG spamming in case of bad
-                    #                 is_deleted() method
-                    failures_count += 1
-                    if failures_count > resource._max_attempts:
-                        break
-
-                finally:
-                    time.sleep(resource._interval)
-
-            LOG.warning(_("Resource deletion failed, timeout occurred for "
-                          "%(service)s.%(resource)s: %(uuid)s.")
-                        % msg_kw)
-
-    def _gen_publisher(self):
-        """Returns publisher for deletion jobs.
-
-        This method iterates over all users, lists all resources
-        (using manager_cls) and puts jobs for deletion.
-
-        Every deletion job contains tuple with two values: user and resource
-        uuid that should be deleted.
-
-        In case of tenant based resource, uuids are fetched only from one user
-        per tenant.
-        """
-
-        def publisher(queue):
-
-            def _publish(admin, user, manager):
-                try:
-                    for raw_resource in rutils.retry(3, manager.list):
-                        queue.append((admin, user, raw_resource))
-                except Exception as e:
-                    LOG.warning(
-                        _("Seems like %s.%s.list(self) method is broken. "
-                          "It shouldn't raise any exceptions.")
-                        % (manager.__module__, type(manager).__name__))
-                    LOG.exception(e)
-
-            if self.admin and (not self.users
-                               or self.manager_cls._perform_for_admin_only):
-                manager = self.manager_cls(
-                    admin=self._get_cached_client(self.admin))
-                _publish(self.admin, None, manager)
-
-            else:
-                visited_tenants = set()
-                admin_client = self._get_cached_client(self.admin)
-                for user in self.users:
-                    if (self.manager_cls._tenant_resource
-                            and user["tenant_id"] in visited_tenants):
-                        continue
-
-                    visited_tenants.add(user["tenant_id"])
-                    manager = self.manager_cls(
-                        admin=admin_client,
-                        user=self._get_cached_client(user),
-                        tenant_uuid=user["tenant_id"])
-
-                    _publish(self.admin, user, manager)
-
-        return publisher
-
-    def _gen_consumer(self):
-        """Generate method that consumes single deletion job."""
-
-        def consumer(cache, args):
-            """Execute deletion job."""
-            admin, user, raw_resource = args
-
-            manager = self.manager_cls(
-                resource=raw_resource,
-                admin=self._get_cached_client(admin, cache=cache),
-                user=self._get_cached_client(user, cache=cache),
-                tenant_uuid=user and user["tenant_id"])
-
-            self._delete_single_resource(manager)
-
-        return consumer
-
-    def exterminate(self):
-        """Delete all resources for passed users, admin and resource_mgr."""
-
-        broker.run(self._gen_publisher(), self._gen_consumer(),
-                   consumers_count=self.manager_cls._threads)
-
-
-def list_resource_names(admin_required=None):
-    """List all resource managers names.
-
-    Returns all service names and all combination of service.resource names.
-
-    :param admin_required: None -> returns all ResourceManagers
-                           True -> returns only admin ResourceManagers
-                           False -> returns only non admin ResourceManagers
-    """
-    res_mgrs = discover.itersubclasses(base.ResourceManager)
-    if admin_required is not None:
-        res_mgrs = filter(lambda cls: cls._admin_required == admin_required,
-                          res_mgrs)
-
-    names = set()
-    for cls in res_mgrs:
-        names.add(cls._service)
-        names.add("%s.%s" % (cls._service, cls._resource))
-
-    return names
-
-
-def find_resource_managers(names=None, admin_required=None):
-    """Returns resource managers.
-
-    :param names: List of names in format <service> or <service>.<resource>
-                  that is used for filtering resource manager classes
-    :param admin_required: None -> returns all ResourceManagers
-                           True -> returns only admin ResourceManagers
-                           False -> returns only non admin ResourceManagers
-    """
-    names = set(names or [])
-
-    resource_managers = []
-    for manager in discover.itersubclasses(base.ResourceManager):
-        if admin_required is not None:
-            if admin_required != manager._admin_required:
-                continue
-
-        if (manager._service in names
-                or "%s.%s" % (manager._service, manager._resource) in names):
-            resource_managers.append(manager)
-
-    resource_managers.sort(key=lambda x: x._order)
-
-    found_names = set()
-    for mgr in resource_managers:
-        found_names.add(mgr._service)
-        found_names.add("%s.%s" % (mgr._service, mgr._resource))
-
-    missing = names - found_names
-    if missing:
-        LOG.warning("Missing resource managers: %s" % ", ".join(missing))
-
-    return resource_managers
-
-
-def cleanup(names=None, admin_required=None, admin=None, users=None):
-    """Generic cleaner.
-
-    This method goes through all plugins. Filter those and left only plugins
-    with _service from services or _resource from resources.
-
-    Then goes through all passed users and using cleaners cleans all related
-    resources.
-
-    :param names: Use only resource manages that has name from this list.
-                  There are in as _service or
-                  (%s.%s % (_service, _resource)) from
-
-    :param admin_required: If None -> return all plugins
-                           If True -> return only admin plugins
-                           If False -> return only non admin plugins
-    :param admin: rally.common.objects.Credential that corresponds to OpenStack
-                  admin.
-    :param users: List of OpenStack users that was used during benchmarking.
-                  Every user has next structure:
-                  {
-                      "id": <uuid1>,
-                      "tenant_id": <uuid2>,
-                      "credential": <rally.common.objects.Credential>
-
-                  }
-    """
-    for manager in find_resource_managers(names, admin_required):
-        LOG.debug("Cleaning up %(service)s %(resource)s objects" %
-                  {"service": manager._service,
-                   "resource": manager._resource})
-        SeekAndDestroy(manager, admin, users).exterminate()
+
+sys.modules["rally.plugins.openstack.context.cleanup.manager"] = manager
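The removed `SeekAndDestroy` code above is built around a publish/consume flow: a publisher lists resources into a shared queue, and several worker threads consume the queue and delete each item. A toy standalone sketch of that flow (`run_broker` is a simplified stand-in for `rally.common.broker.run`, and the resource ids are made up):

```python
import collections
import threading

def run_broker(publisher, consumer, consumers_count=2):
    # Fill the queue up front, then let workers drain it concurrently.
    queue = collections.deque()
    publisher(queue)

    def worker():
        while True:
            try:
                item = queue.popleft()  # deque.popleft() is thread-safe
            except IndexError:
                return  # queue drained, worker exits
            consumer(item)

    threads = [threading.Thread(target=worker) for _ in range(consumers_count)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

deleted = []
lock = threading.Lock()

def publisher(queue):
    # Stands in for listing resources via a ResourceManager's list().
    for uuid in ("res-1", "res-2", "res-3"):
        queue.append(uuid)

def consumer(uuid):
    # Stands in for _delete_single_resource(); just record the id.
    with lock:
        deleted.append(uuid)

run_broker(publisher, consumer)
```

The real implementation additionally retries `delete()` up to `_max_attempts` times and polls `is_deleted()` until `_timeout` expires, but the queue-and-workers skeleton is the same.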
@@ -1,4 +1,4 @@
# Copyright 2014: Mirantis Inc.
#
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
@@ -13,642 +13,15 @@
# License for the specific language governing permissions and limitations
# under the License.

from boto import exception as boto_exception
from neutronclient.common import exceptions as neutron_exceptions
from saharaclient.api import base as saharaclient_base
# NOTE(stpierre): This module is left for backward compatibility.

from rally.common import logging
from rally.common.plugin import discover
from rally.common import utils
from rally.plugins.openstack.context.cleanup import base
from rally.plugins.openstack.scenarios.fuel import utils as futils
from rally.plugins.openstack.scenarios.keystone import utils as kutils
from rally.plugins.openstack.scenarios.nova import utils as nova_utils
from rally.plugins.openstack.wrappers import keystone as keystone_wrapper
import sys
import warnings

LOG = logging.getLogger(__name__)
from rally.plugins.openstack.cleanup import resources

warnings.warn("Module rally.plugins.openstack.context.cleanup.resources has "
              "been moved to rally.plugins.openstack.cleanup.resources, and "
              "will be removed at some point in the future.")


def get_order(start):
    return iter(range(start, start + 99))
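`get_order` hands out a bank of up to 99 consecutive order values per service, which the `@base.resource` decorators below consume one at a time with `next()`. In isolation:

```python
# Each service gets its own iterator over a block of 99 order values;
# every decorated resource manager takes the next value in the block.
def get_order(start):
    return iter(range(start, start + 99))


_example_order = get_order(200)  # the block nova uses below starts at 200

first = next(_example_order)
second = next(_example_order)
print(first, second)
```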
class SynchronizedDeletion(object):

    def is_deleted(self):
        return True


class QuotaMixin(SynchronizedDeletion):

    def id(self):
        return self.raw_resource

    def delete(self):
        self._manager().delete(self.raw_resource)

    def list(self):
        return [self.tenant_uuid] if self.tenant_uuid else []


# HEAT

@base.resource("heat", "stacks", order=100, tenant_resource=True)
class HeatStack(base.ResourceManager):
    pass


# NOVA

_nova_order = get_order(200)


@base.resource("nova", "servers", order=next(_nova_order))
class NovaServer(base.ResourceManager):
    def list(self):
        """List all servers."""

        if hasattr(self._manager().api, "api_version"):
            # NOTE(andreykurilin): novaclient v2.27.0 includes ability to
            #   return all servers (see https://review.openstack.org/#/c/217101
            #   for more details). This release can be identified by presence
            #   of "api_version" property of ``novaclient.client.Client`` cls.
            return self._manager().list(limit=-1)
        else:
            # FIXME(andreykurilin): Remove code below, when minimum version of
            #   novaclient in requirements will allow it.
            # NOTE(andreykurilin): Nova API returns only limited number (the
            #   'osapi_max_limit' option in nova.conf) of servers, so we need
            #   to use 'marker' option to list all pages of servers.
            result = []
            marker = None
            while True:
                servers = self._manager().list(marker=marker)
                if not servers:
                    break
                result.extend(servers)
                marker = servers[-1].id
            return result

    def delete(self):
        if getattr(self.raw_resource, "OS-EXT-STS:locked", False):
            self.raw_resource.unlock()
        super(NovaServer, self).delete()


@base.resource("nova", "floating_ips", order=next(_nova_order))
class NovaFloatingIPs(SynchronizedDeletion, base.ResourceManager):
    pass


@base.resource("nova", "keypairs", order=next(_nova_order))
class NovaKeypair(SynchronizedDeletion, base.ResourceManager):
    pass


@base.resource("nova", "security_groups", order=next(_nova_order))
class NovaSecurityGroup(SynchronizedDeletion, base.ResourceManager):

    def list(self):
        return filter(lambda x: x.name != "default",
                      super(NovaSecurityGroup, self).list())


@base.resource("nova", "quotas", order=next(_nova_order),
               admin_required=True, tenant_resource=True)
class NovaQuotas(QuotaMixin, base.ResourceManager):
    pass


@base.resource("nova", "floating_ips_bulk", order=next(_nova_order),
               admin_required=True)
class NovaFloatingIpsBulk(SynchronizedDeletion, base.ResourceManager):

    def id(self):
        return self.raw_resource.address

    def list(self):
        return [floating_ip for floating_ip in self._manager().list()
                if utils.name_matches_object(floating_ip.pool,
                                             nova_utils.NovaScenario)]


@base.resource("nova", "networks", order=next(_nova_order),
               admin_required=True, tenant_resource=True)
class NovaNetworks(SynchronizedDeletion, base.ResourceManager):

    def list(self):
        # NOTE(stpierre): any plugin can create a nova network via the
        # network wrapper, and that network's name will be created
        # according to its owner's random name generation
        # parameters. so we need to check if there are nova networks
        # whose name pattern matches those of any loaded plugin that
        # implements RandomNameGeneratorMixin
        classes = list(discover.itersubclasses(utils.RandomNameGeneratorMixin))
        return [net for net in self._manager().list()
                if utils.name_matches_object(net.label, *classes)]
# EC2

_ec2_order = get_order(250)


class EC2Mixin(object):

    def _manager(self):
        return getattr(self.user, self._service)()


@base.resource("ec2", "servers", order=next(_ec2_order))
class EC2Server(EC2Mixin, base.ResourceManager):

    def is_deleted(self):
        try:
            instances = self._manager().get_only_instances(
                instance_ids=[self.id()])
        except boto_exception.EC2ResponseError as e:
            # NOTE(wtakase): Nova EC2 API returns 'InvalidInstanceID.NotFound'
            #                if instance not found. In this case, we consider
            #                instance has already been deleted.
            return getattr(e, "error_code") == "InvalidInstanceID.NotFound"

        # NOTE(wtakase): After instance deletion, instance can be 'terminated'
        #                state. If all instance states are 'terminated', this
        #                returns True. And if get_only_instances() returns an
        #                empty list, this also returns True because we consider
        #                the instance has already been deleted.
        return all(map(lambda i: i.state == "terminated", instances))

    def delete(self):
        self._manager().terminate_instances(instance_ids=[self.id()])

    def list(self):
        return self._manager().get_only_instances()
# NEUTRON

_neutron_order = get_order(300)


@base.resource(service=None, resource=None, admin_required=True)
class NeutronMixin(SynchronizedDeletion, base.ResourceManager):
    # Neutron has the best client ever, so we need to override everything

    def supports_extension(self, extension):
        exts = self._manager().list_extensions().get("extensions", [])
        if any(ext.get("alias") == extension for ext in exts):
            return True
        return False

    def _manager(self):
        client = self._admin_required and self.admin or self.user
        return getattr(client, self._service)()

    def id(self):
        return self.raw_resource["id"]

    def delete(self):
        delete_method = getattr(self._manager(), "delete_%s" % self._resource)
        delete_method(self.id())

    def list(self):
        resources = self._resource + "s"
        list_method = getattr(self._manager(), "list_%s" % resources)

        return filter(lambda r: r["tenant_id"] == self.tenant_uuid,
                      list_method({"tenant_id": self.tenant_uuid})[resources])


class NeutronLbaasV1Mixin(NeutronMixin):

    def list(self):
        if self.supports_extension("lbaas"):
            return super(NeutronLbaasV1Mixin, self).list()
        return []


@base.resource("neutron", "vip", order=next(_neutron_order),
               tenant_resource=True)
class NeutronV1Vip(NeutronLbaasV1Mixin):
    pass


@base.resource("neutron", "health_monitor", order=next(_neutron_order),
               tenant_resource=True)
class NeutronV1Healthmonitor(NeutronLbaasV1Mixin):
    pass


@base.resource("neutron", "pool", order=next(_neutron_order),
               tenant_resource=True)
class NeutronV1Pool(NeutronLbaasV1Mixin):
    pass


@base.resource("neutron", "port", order=next(_neutron_order),
               tenant_resource=True)
class NeutronPort(NeutronMixin):

    def delete(self):
        if (self.raw_resource["device_owner"] == "network:router_interface" or
                self.raw_resource["device_owner"] ==
                "network:router_interface_distributed"):
            self._manager().remove_interface_router(
                self.raw_resource["device_id"],
                {"port_id": self.raw_resource["id"]})
        else:
            try:
                self._manager().delete_port(self.id())
            except neutron_exceptions.PortNotFoundClient:
                # Port can be already auto-deleted, skip silently
                LOG.debug("Port %s was not deleted. Skip silently because "
                          "port can be already auto-deleted."
                          % self.id())


@base.resource("neutron", "router", order=next(_neutron_order),
               tenant_resource=True)
class NeutronRouter(NeutronMixin):
    pass


@base.resource("neutron", "subnet", order=next(_neutron_order),
               tenant_resource=True)
class NeutronSubnet(NeutronMixin):
    pass


@base.resource("neutron", "network", order=next(_neutron_order),
               tenant_resource=True)
class NeutronNetwork(NeutronMixin):
    pass


@base.resource("neutron", "floatingip", order=next(_neutron_order),
               tenant_resource=True)
class NeutronFloatingIP(NeutronMixin):
    pass


@base.resource("neutron", "security_group", order=next(_neutron_order),
               tenant_resource=True)
class NeutronSecurityGroup(NeutronMixin):
    pass


@base.resource("neutron", "quota", order=next(_neutron_order),
               admin_required=True, tenant_resource=True)
class NeutronQuota(QuotaMixin, NeutronMixin):

    def delete(self):
        self._manager().delete_quota(self.tenant_uuid)
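NeutronMixin builds its `delete_*` and `list_*` calls dynamically from the resource name with `getattr`. The dispatch pattern can be shown in isolation (with a stand-in client rather than the real neutronclient):

```python
class FakeNeutronClient:
    """Stand-in for neutronclient with per-resource methods."""

    def delete_port(self, port_id):
        return "deleted %s" % port_id

    def list_ports(self, params):
        return {"ports": [{"id": "p1", "tenant_id": "t1"}]}


client = FakeNeutronClient()
resource = "port"

# "delete_%s" dispatch, as in NeutronMixin.delete()
delete_method = getattr(client, "delete_%s" % resource)
print(delete_method("p1"))

# "list_%s" dispatch with naive "+ s" pluralization, as in
# NeutronMixin.list(); the result dict is keyed by the plural name.
resources = resource + "s"
list_method = getattr(client, "list_%s" % resources)
print(list_method({"tenant_id": "t1"})[resources])
```

The same convention is what lets one mixin serve every Neutron resource type: only the `_resource` string changes per subclass.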
# CINDER

_cinder_order = get_order(400)


@base.resource("cinder", "backups", order=next(_cinder_order),
               tenant_resource=True)
class CinderVolumeBackup(base.ResourceManager):
    pass


@base.resource("cinder", "volume_snapshots", order=next(_cinder_order),
               tenant_resource=True)
class CinderVolumeSnapshot(base.ResourceManager):
    pass


@base.resource("cinder", "transfers", order=next(_cinder_order),
               tenant_resource=True)
class CinderVolumeTransfer(base.ResourceManager):
    pass


@base.resource("cinder", "volumes", order=next(_cinder_order),
               tenant_resource=True)
class CinderVolume(base.ResourceManager):
    pass


@base.resource("cinder", "quotas", order=next(_cinder_order),
               admin_required=True, tenant_resource=True)
class CinderQuotas(QuotaMixin, base.ResourceManager):
    pass
# MANILA

_manila_order = get_order(450)


@base.resource("manila", "shares", order=next(_manila_order),
               tenant_resource=True)
class ManilaShare(base.ResourceManager):
    pass


@base.resource("manila", "share_networks", order=next(_manila_order),
               tenant_resource=True)
class ManilaShareNetwork(base.ResourceManager):
    pass


@base.resource("manila", "security_services", order=next(_manila_order),
               tenant_resource=True)
class ManilaSecurityService(base.ResourceManager):
    pass


# GLANCE

@base.resource("glance", "images", order=500, tenant_resource=True)
class GlanceImage(base.ResourceManager):

    def list(self):
        return self._manager().list(owner=self.tenant_uuid)
# SAHARA

_sahara_order = get_order(600)


@base.resource("sahara", "job_executions", order=next(_sahara_order),
               tenant_resource=True)
class SaharaJobExecution(SynchronizedDeletion, base.ResourceManager):
    pass


@base.resource("sahara", "jobs", order=next(_sahara_order),
               tenant_resource=True)
class SaharaJob(SynchronizedDeletion, base.ResourceManager):
    pass


@base.resource("sahara", "job_binary_internals", order=next(_sahara_order),
               tenant_resource=True)
class SaharaJobBinaryInternals(SynchronizedDeletion, base.ResourceManager):
    pass


@base.resource("sahara", "job_binaries", order=next(_sahara_order),
               tenant_resource=True)
class SaharaJobBinary(SynchronizedDeletion, base.ResourceManager):
    pass


@base.resource("sahara", "data_sources", order=next(_sahara_order),
               tenant_resource=True)
class SaharaDataSource(SynchronizedDeletion, base.ResourceManager):
    pass


@base.resource("sahara", "clusters", order=next(_sahara_order),
               tenant_resource=True)
class SaharaCluster(base.ResourceManager):

    # Need special treatment for Sahara Cluster because of the way the
    # exceptions are described in:
    # https://github.com/openstack/python-saharaclient/blob/master/
    # saharaclient/api/base.py#L145

    def is_deleted(self):
        try:
            self._manager().get(self.id())
            return False
        except saharaclient_base.APIException as e:
            return e.error_code == 404


@base.resource("sahara", "cluster_templates", order=next(_sahara_order),
               tenant_resource=True)
class SaharaClusterTemplate(SynchronizedDeletion, base.ResourceManager):
    pass


@base.resource("sahara", "node_group_templates", order=next(_sahara_order),
               tenant_resource=True)
class SaharaNodeGroup(SynchronizedDeletion, base.ResourceManager):
    pass
# CEILOMETER

@base.resource("ceilometer", "alarms", order=700, tenant_resource=True)
class CeilometerAlarms(SynchronizedDeletion, base.ResourceManager):

    def id(self):
        return self.raw_resource.alarm_id

    def list(self):
        query = [{
            "field": "project_id",
            "op": "eq",
            "value": self.tenant_uuid
        }]
        return self._manager().list(q=query)
# ZAQAR

@base.resource("zaqar", "queues", order=800)
class ZaqarQueues(SynchronizedDeletion, base.ResourceManager):

    def list(self):
        return self.user.zaqar().queues()
# DESIGNATE

_designate_order = get_order(900)


class DesignateResource(SynchronizedDeletion, base.ResourceManager):
    def _manager(self):
        # NOTE: service name contains version, so we should split them
        service_name, version = self._service.split("_v")
        return getattr(getattr(self.user, service_name)(version),
                       self._resource)


@base.resource("designate_v1", "domains", order=next(_designate_order))
class DesignateDomain(DesignateResource):
    pass


@base.resource("designate_v2", "zones", order=next(_designate_order))
class DesignateZones(DesignateResource):
    pass


@base.resource("designate_v1", "servers", order=next(_designate_order),
               admin_required=True, perform_for_admin_only=True)
class DesignateServer(DesignateResource):
    pass
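The `split("_v")` in `DesignateResource._manager()` recovers the client name and API version from service names like `designate_v1`. In isolation:

```python
# DesignateResource derives the client name and API version from the
# _service string by splitting on "_v":
for service in ("designate_v1", "designate_v2"):
    name, version = service.split("_v")
    print(name, version)
```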
# SWIFT

_swift_order = get_order(1000)


class SwiftMixin(SynchronizedDeletion, base.ResourceManager):

    def _manager(self):
        client = self._admin_required and self.admin or self.user
        return getattr(client, self._service)()

    def id(self):
        return self.raw_resource

    def delete(self):
        delete_method = getattr(self._manager(), "delete_%s" % self._resource)
        # NOTE(weiwu): *self.raw_resource is required because to delete a
        # container we pass only the container name, while to delete an
        # object we pass the container name as the first argument and the
        # object name as the second.
        delete_method(*self.raw_resource)


@base.resource("swift", "object", order=next(_swift_order),
               tenant_resource=True)
class SwiftObject(SwiftMixin):

    def list(self):
        object_list = []
        containers = self._manager().get_account(full_listing=True)[1]
        for con in containers:
            objects = self._manager().get_container(con["name"],
                                                    full_listing=True)[1]
            for obj in objects:
                raw_resource = [con["name"], obj["name"]]
                object_list.append(raw_resource)
        return object_list


@base.resource("swift", "container", order=next(_swift_order),
               tenant_resource=True)
class SwiftContainer(SwiftMixin):

    def list(self):
        containers = self._manager().get_account(full_listing=True)[1]
        return [[con["name"]] for con in containers]
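The star-unpacking that `SwiftMixin.delete()` relies on adapts one call site to two different signatures: `raw_resource` is `[container]` for containers and `[container, object]` for objects. A standalone sketch (the `delete_*` functions are stand-ins, not swiftclient's API):

```python
def delete_container(container):
    return "rm container %s" % container


def delete_object(container, obj):
    return "rm object %s/%s" % (container, obj)


# raw_resource is [container] for containers and [container, object]
# for objects; *raw_resource expands to match either signature.
print(delete_container(*["logs"]))
print(delete_object(*["logs", "2015-10-01.gz"]))
```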
# MISTRAL

@base.resource("mistral", "workbooks", order=1100, tenant_resource=True)
class MistralWorkbooks(SynchronizedDeletion, base.ResourceManager):
    def delete(self):
        self._manager().delete(self.raw_resource.name)
# MURANO

_murano_order = get_order(1200)


@base.resource("murano", "environments", tenant_resource=True,
               order=next(_murano_order))
class MuranoEnvironments(SynchronizedDeletion, base.ResourceManager):
    pass


@base.resource("murano", "packages", tenant_resource=True,
               order=next(_murano_order))
class MuranoPackages(base.ResourceManager):
    def list(self):
        return filter(lambda x: x.name != "Core library",
                      super(MuranoPackages, self).list())
# IRONIC

_ironic_order = get_order(1300)


@base.resource("ironic", "node", admin_required=True,
               order=next(_ironic_order), perform_for_admin_only=True)
class IronicNodes(base.ResourceManager):

    def id(self):
        return self.raw_resource.uuid
# FUEL

@base.resource("fuel", "environment", order=1400,
               admin_required=True, perform_for_admin_only=True)
class FuelEnvironment(base.ResourceManager):
    """Fuel environment.

    That is the only resource that can be deleted by fuelclient explicitly.
    """

    def id(self):
        return self.raw_resource["id"]

    def is_deleted(self):
        return not self._manager().get(self.id())

    def list(self):
        return [env for env in self._manager().list()
                if utils.name_matches_object(env["name"],
                                             futils.FuelScenario)]
# KEYSTONE

_keystone_order = get_order(9000)


class KeystoneMixin(SynchronizedDeletion):

    def _manager(self):
        return keystone_wrapper.wrap(getattr(self.admin, self._service)())

    def delete(self):
        delete_method = getattr(self._manager(), "delete_%s" % self._resource)
        delete_method(self.id())

    def list(self):
        # TODO(boris-42): We should use such stuff in all list commands.
        resources = self._resource + "s"
        list_method = getattr(self._manager(), "list_%s" % resources)

        return filter(kutils.is_temporary, list_method())


@base.resource("keystone", "user", order=next(_keystone_order),
               admin_required=True, perform_for_admin_only=True)
class KeystoneUser(KeystoneMixin, base.ResourceManager):
    pass


@base.resource("keystone", "project", order=next(_keystone_order),
               admin_required=True, perform_for_admin_only=True)
class KeystoneProject(KeystoneMixin, base.ResourceManager):
    pass


@base.resource("keystone", "service", order=next(_keystone_order),
               admin_required=True, perform_for_admin_only=True)
class KeystoneService(KeystoneMixin, base.ResourceManager):
    pass


@base.resource("keystone", "role", order=next(_keystone_order),
               admin_required=True, perform_for_admin_only=True)
class KeystoneRole(KeystoneMixin, base.ResourceManager):
    pass


@base.resource("keystone", "ec2", tenant_resource=True,
               order=next(_keystone_order))
class KeystoneEc2(SynchronizedDeletion, base.ResourceManager):
    def list(self):
        return self._manager().list(self.raw_resource)


sys.modules["rally.plugins.openstack.context.cleanup.resources"] = resources
@@ -16,7 +16,7 @@ from rally.common.i18n import _
from rally.common import logging
from rally.common import utils as rutils
from rally import consts
from rally.plugins.openstack.context.cleanup import manager as resource_manager
from rally.plugins.openstack.cleanup import manager as resource_manager
from rally.plugins.openstack.scenarios.designate import utils
from rally.task import context


@@ -17,7 +17,7 @@ from rally.common import logging
from rally.common import utils as rutils
from rally import consts
from rally import osclients
from rally.plugins.openstack.context.cleanup import manager as resource_manager
from rally.plugins.openstack.cleanup import manager as resource_manager
from rally.plugins.openstack.scenarios.ec2 import utils as ec2_utils
from rally.task import context
from rally.task import types


@@ -16,7 +16,7 @@ from rally.common.i18n import _
from rally.common import logging
from rally.common import utils as rutils
from rally import consts
from rally.plugins.openstack.context.cleanup import manager as resource_manager
from rally.plugins.openstack.cleanup import manager as resource_manager
from rally.plugins.openstack.scenarios.glance import utils as glance_utils
from rally.task import context


@@ -17,7 +17,7 @@ from rally.common.i18n import _
from rally.common import logging
from rally.common import utils as rutils
from rally import consts
from rally.plugins.openstack.context.cleanup import manager as resource_manager
from rally.plugins.openstack.cleanup import manager as resource_manager
from rally.plugins.openstack.scenarios.heat import utils as heat_utils
from rally.task import context


@@ -24,7 +24,7 @@ from rally.common import utils
from rally import consts
from rally import exceptions
from rally import osclients
from rally.plugins.openstack.context.cleanup import manager as resource_manager
from rally.plugins.openstack.cleanup import manager as resource_manager
from rally.task import context


@@ -18,7 +18,7 @@ import novaclient.exceptions
from rally.common.i18n import _
from rally.common import logging
from rally import osclients
from rally.plugins.openstack.context.cleanup import manager as resource_manager
from rally.plugins.openstack.cleanup import manager as resource_manager
from rally.task import context


@@ -17,7 +17,7 @@ from rally.common import logging
from rally.common import utils as rutils
from rally import consts
from rally import osclients
from rally.plugins.openstack.context.cleanup import manager as resource_manager
from rally.plugins.openstack.cleanup import manager as resource_manager
from rally.plugins.openstack.scenarios.nova import utils as nova_utils
from rally.task import context
from rally.task import types


@@ -20,7 +20,7 @@ from rally.common import logging
from rally.common import utils as rutils
from rally import consts
from rally import exceptions
from rally.plugins.openstack.context.cleanup import manager as resource_manager
from rally.plugins.openstack.cleanup import manager as resource_manager
from rally.plugins.openstack.scenarios.sahara import utils
from rally.task import context
from rally.task import utils as bench_utils


@@ -18,7 +18,7 @@ from rally.common import utils as rutils
from rally import consts
from rally import exceptions
from rally import osclients
from rally.plugins.openstack.context.cleanup import manager as resource_manager
from rally.plugins.openstack.cleanup import manager as resource_manager
from rally.plugins.openstack.scenarios.glance import utils as glance_utils
from rally.plugins.openstack.scenarios.sahara import utils
from rally.task import context


@@ -21,8 +21,8 @@ from rally.common import logging
from rally.common import utils as rutils
from rally import consts
from rally import osclients
from rally.plugins.openstack.context.cleanup import manager as resource_manager
from rally.plugins.openstack.context.cleanup import resources as res_cleanup
from rally.plugins.openstack.cleanup import manager as resource_manager
from rally.plugins.openstack.cleanup import resources as res_cleanup
from rally.plugins.openstack.scenarios.sahara import utils
from rally.plugins.openstack.scenarios.swift import utils as swift_utils
from rally.task import context


@@ -21,7 +21,7 @@ from rally.common import utils as rutils
from rally import consts
from rally import exceptions
from rally import osclients
from rally.plugins.openstack.context.cleanup import manager as resource_manager
from rally.plugins.openstack.cleanup import manager as resource_manager
from rally.plugins.openstack.scenarios.sahara import utils
from rally.task import context


@@ -18,8 +18,8 @@ from rally.common import logging
from rally.common import utils as rutils
from rally import consts
from rally import osclients
from rally.plugins.openstack.context.cleanup import manager as resource_manager
from rally.plugins.openstack.context.cleanup import resources as res_cleanup
from rally.plugins.openstack.cleanup import manager as resource_manager
from rally.plugins.openstack.cleanup import resources as res_cleanup
from rally.plugins.openstack.scenarios.sahara import utils
from rally.plugins.openstack.scenarios.swift import utils as swift_utils
from rally.task import context
0	tests/unit/plugins/openstack/cleanup/__init__.py (new empty file)
@@ -15,11 +15,11 @@

import mock

from rally.plugins.openstack.context.cleanup import base
from rally.plugins.openstack.cleanup import base
from tests.unit import test


BASE = "rally.plugins.openstack.context.cleanup.base"
BASE = "rally.plugins.openstack.cleanup.base"


class ResourceDecoratorTestCase(test.TestCase):


@@ -16,12 +16,12 @@
import mock
import six

from rally.plugins.openstack.context.cleanup import base
from rally.plugins.openstack.context.cleanup import manager
from rally.plugins.openstack.cleanup import base
from rally.plugins.openstack.cleanup import manager
from tests.unit import test


BASE = "rally.plugins.openstack.context.cleanup.manager"
BASE = "rally.plugins.openstack.cleanup.manager"


class SeekAndDestroyTestCase(test.TestCase):


@@ -19,11 +19,11 @@ from neutronclient.common import exceptions as neutron_exceptions

from rally.common.plugin import discover
from rally.common import utils
from rally.plugins.openstack.context.cleanup import base
from rally.plugins.openstack.context.cleanup import resources
from rally.plugins.openstack.cleanup import base
from rally.plugins.openstack.cleanup import resources
from tests.unit import test

BASE = "rally.plugins.openstack.context.cleanup.resources"
BASE = "rally.plugins.openstack.cleanup.resources"


class AllResourceManagerTestCase(test.TestCase):
@@ -16,11 +16,11 @@
import jsonschema
import mock

from rally.plugins.openstack.context.cleanup import context
from rally.plugins.openstack.context import cleanup
from tests.unit import test


BASE = "rally.plugins.openstack.context.cleanup.context"
BASE = "rally.plugins.openstack.context.cleanup"


class AdminCleanupTestCase(test.TestCase):

@@ -28,21 +28,21 @@ class AdminCleanupTestCase(test.TestCase):
    @mock.patch("%s.manager" % BASE)
    def test_validate(self, mock_manager):
        mock_manager.list_resource_names.return_value = set(["a", "b", "c"])
        context.AdminCleanup.validate(["a"])
        cleanup.AdminCleanup.validate(["a"])
        mock_manager.list_resource_names.assert_called_once_with(
            admin_required=True)

    @mock.patch("%s.manager" % BASE)
    def test_validate_no_such_cleanup(self, mock_manager):
        mock_manager.list_resource_names.return_value = set(["a", "b", "c"])
        self.assertRaises(context.NoSuchCleanupResources,
                          context.AdminCleanup.validate, ["a", "d"])
        self.assertRaises(cleanup.NoSuchCleanupResources,
                          cleanup.AdminCleanup.validate, ["a", "d"])
        mock_manager.list_resource_names.assert_called_once_with(
            admin_required=True)

    def test_validate_invalid_config(self):
        self.assertRaises(jsonschema.ValidationError,
                          context.AdminCleanup.validate, {})
                          cleanup.AdminCleanup.validate, {})

    @mock.patch("%s.manager.find_resource_managers" % BASE,
                return_value=[mock.MagicMock(), mock.MagicMock()])

@@ -56,7 +56,7 @@ class AdminCleanupTestCase(test.TestCase):
            "task": mock.MagicMock()
        }

        admin_cleanup = context.AdminCleanup(ctx)
        admin_cleanup = cleanup.AdminCleanup(ctx)
        admin_cleanup.setup()
        admin_cleanup.cleanup()

@@ -80,21 +80,21 @@ class UserCleanupTestCase(test.TestCase):
    @mock.patch("%s.manager" % BASE)
    def test_validate(self, mock_manager):
        mock_manager.list_resource_names.return_value = set(["a", "b", "c"])
        context.UserCleanup.validate(["a"])
        cleanup.UserCleanup.validate(["a"])
        mock_manager.list_resource_names.assert_called_once_with(
            admin_required=False)

    @mock.patch("%s.manager" % BASE)
    def test_validate_no_such_cleanup(self, mock_manager):
        mock_manager.list_resource_names.return_value = set(["a", "b", "c"])
        self.assertRaises(context.NoSuchCleanupResources,
                          context.UserCleanup.validate, ["a", "b", "d"])
        self.assertRaises(cleanup.NoSuchCleanupResources,
                          cleanup.UserCleanup.validate, ["a", "b", "d"])
        mock_manager.list_resource_names.assert_called_once_with(
            admin_required=False)

    def test_validate_invalid_config(self):
        self.assertRaises(jsonschema.ValidationError,
                          context.UserCleanup.validate, {})
                          cleanup.UserCleanup.validate, {})

    @mock.patch("%s.manager.find_resource_managers" % BASE,
                return_value=[mock.MagicMock(), mock.MagicMock()])

@@ -107,7 +107,7 @@ class UserCleanupTestCase(test.TestCase):
            "task": mock.MagicMock()
        }

        admin_cleanup = context.UserCleanup(ctx)
        admin_cleanup = cleanup.UserCleanup(ctx)
        admin_cleanup.setup()
        admin_cleanup.cleanup()