Remove deprecated Grizzly code.

Now that Havana development has started, remove code deprecated in Grizzly.

Change-Id: Ie3e7611347c334c359dea98d759345b97c66c9c1
Joe Gordon 2013-04-03 16:54:08 -07:00
parent f3d6e5ccda
commit 820f43fc61
16 changed files with 14 additions and 953 deletions


@ -256,67 +256,10 @@ putting instance on an appropriate host would have low.
So let's find out how all this computation actually happens.
Before weighing, the Filter Scheduler creates a list of tuples containing the
weights and the cost functions to use for weighing hosts. These functions can
be taken from a cache if this has already been done (the cache depends on the
`topic` of the node; the Filter Scheduler works only with Compute Nodes, so
the topic is "`compute`" here). If there are no cost functions in the cache
associated with "compute", the Filter Scheduler tries to load them from
`nova.conf`. The weight in each tuple is the weight of the cost function it is
paired with, and it too can be taken from `nova.conf`. After that the
Scheduler weighs the hosts using the selected cost functions. It does this
with the `weighted_sum` method, whose parameters are:
* `weighted_fns` - a list of (weight, cost function) tuples;
* `host_states` - the hosts to be weighed;
* `weighing_properties` - a dictionary of values that can influence weights.
This method first builds a grid of function results called `scores` (it
simply evaluates each function with `host_state` and `weighing_properties`),
with one row per host and one column per cost function. The next step is to
multiply the value in each cell of the grid by the weight of the
corresponding cost function. The final step is to sum the values in each
row - that sum is the weight of the host described by that row. The method
returns the host with the lowest weight - the best one.
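In rough pseudocode, the computation described above looks like the following
sketch (an illustration only, not the actual nova implementation; the
(weight, function) tuple layout is assumed from the description above):
::

    def weighted_sum(weighted_fns, host_states, weighing_properties):
        # Grid of scores: one row per host, one column per cost function.
        scores = [[fn(host, weighing_properties) for _, fn in weighted_fns]
                  for host in host_states]
        # Multiply every cell by the weight of its cost function and sum
        # each row to get the total weight of the corresponding host.
        host_weights = [sum(weight * value
                            for (weight, _), value in zip(weighted_fns, row))
                        for row in scores]
        # The host with the lowest total weight is the best one.
        best = min(range(len(host_states)), key=lambda i: host_weights[i])
        return host_states[best]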
Looking at the cost functions themselves, the `compute_fill_first_cost_fn`
function is used by default; it simply returns the host's free RAM:
::
def compute_fill_first_cost_fn(host_state, weighing_properties):
"""More free ram = higher weight. So servers will less free ram will be
preferred."""
return host_state.free_ram_mb
You can implement your own cost function for whatever host capabilities you
want to take into account. Using different cost functions (several can be
used at the same time) makes the choice of the next host for a new instance
flexible.
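For instance, a hedged sketch of a custom cost function that prefers hosts
running fewer instances could look like the following (the `num_instances`
attribute on the host state is an assumption for illustration, not something
this document guarantees):
::

    def compute_num_instances_cost_fn(host_state, weighing_properties):
        """Fewer running instances = lower cost, so emptier hosts win."""
        return host_state.num_instances

Such a function would then be listed in `least_cost_functions` and given a
weight through a hypothetical `compute_num_instances_cost_fn_weight` flag,
following the naming convention described next.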
These cost functions are configured in `nova.conf` with the
`least_cost_functions` flag (more than one function can be listed, separated
by commas). By default the line looks like this:
::
--least_cost_functions=nova.scheduler.least_cost.compute_fill_first_cost_fn
The weights of the cost functions are also set in `nova.conf`, using flags
named after the function in the form **function_name_weight**.
For the default cost function this is `compute_fill_first_cost_fn_weight`,
and its default value is -1.0.
::
--compute_fill_first_cost_fn_weight=-1.0
A negative weight for this function means that the more free RAM a Compute
Node has, the better it scores, so Nova spreads instances across the Compute
Nodes as much as possible. A positive weight would instead make Nova fill up
a single Compute Node first.
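For example, flipping the sign of the weight in `nova.conf` switches from the
default spread-first behaviour to fill-first:
::

    --compute_fill_first_cost_fn_weight=1.0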
The Filter Scheduler weighs hosts based on the config option
`scheduler_weight_classes`, which defaults to
`nova.scheduler.weights.all_weighers` and selects the only weigher available
-- the RamWeigher. Hosts are then weighed and sorted, with the largest weight
winning.
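A weigher in this framework is simply a class made available through
`scheduler_weight_classes`. As a hedged sketch (the base class and the
`weigh_objects` signature follow the pattern used elsewhere in this change,
while the `num_instances` attribute is an assumption for illustration), a
custom weigher could look like this:
::

    from nova.scheduler import weights

    class FewestInstancesWeigher(weights.BaseHostWeigher):
        """Give the largest weight to hosts running the fewest instances."""

        def weigh_objects(self, weighted_hosts, weight_properties):
            for host in weighted_hosts:
                # Largest weight wins, so negate the instance count.
                host.weight = -host.obj.num_instances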
The Filter Scheduler finds a local list of acceptable hosts by repeated
filtering and weighing. Each time it chooses a host, it virtually consumes
resources on it,


@ -25,8 +25,6 @@ availability_zone_opts = [
default='internal',
help='availability_zone to show internal services under'),
cfg.StrOpt('default_availability_zone',
# deprecated in Grizzly release
deprecated_name='node_availability_zone',
default='nova',
help='default compute node availability_zone'),
]


@ -1,431 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# Copyright (c) 2011 X.commerce, a business unit of eBay Inc.
# Copyright 2010 United States Government as represented by the
# Administrator of the National Aeronautics and Space Administration.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
This version of the api is deprecated in Grizzly and will be removed.
It is provided just in case a third party manager is in use.
"""
from nova.compute import instance_types
from nova.db import base
from nova import exception
from nova.network import api as shiny_api
from nova.network import model as network_model
from nova.network import rpcapi as network_rpcapi
from nova.openstack.common import log as logging
LOG = logging.getLogger(__name__)
refresh_cache = shiny_api.refresh_cache
_update_instance_cache = shiny_api.update_instance_cache_with_nw_info
update_instance_cache_with_nw_info = _update_instance_cache
wrap_check_policy = shiny_api.wrap_check_policy
class API(base.Base):
"""API for doing networking via the nova-network network manager.
This is a pluggable module - other implementations do networking via
other services (such as Quantum).
"""
_sentinel = object()
def __init__(self, **kwargs):
self.network_rpcapi = network_rpcapi.NetworkAPI()
super(API, self).__init__(**kwargs)
@wrap_check_policy
def get_all(self, context):
return self.network_rpcapi.get_all_networks(context)
@wrap_check_policy
def get(self, context, network_uuid):
return self.network_rpcapi.get_network(context, network_uuid)
@wrap_check_policy
def create(self, context, **kwargs):
return self.network_rpcapi.create_networks(context, **kwargs)
@wrap_check_policy
def delete(self, context, network_uuid):
return self.network_rpcapi.delete_network(context, network_uuid, None)
@wrap_check_policy
def disassociate(self, context, network_uuid):
return self.network_rpcapi.disassociate_network(context, network_uuid)
@wrap_check_policy
def get_fixed_ip(self, context, id):
return self.network_rpcapi.get_fixed_ip(context, id)
@wrap_check_policy
def get_fixed_ip_by_address(self, context, address):
return self.network_rpcapi.get_fixed_ip_by_address(context, address)
@wrap_check_policy
def get_floating_ip(self, context, id):
return self.network_rpcapi.get_floating_ip(context, id)
@wrap_check_policy
def get_floating_ip_pools(self, context):
return self.network_rpcapi.get_floating_ip_pools(context)
@wrap_check_policy
def get_floating_ip_by_address(self, context, address):
return self.network_rpcapi.get_floating_ip_by_address(context, address)
@wrap_check_policy
def get_floating_ips_by_project(self, context):
return self.network_rpcapi.get_floating_ips_by_project(context)
@wrap_check_policy
def get_floating_ips_by_fixed_address(self, context, fixed_address):
args = (context, fixed_address)
return self.network_rpcapi.get_floating_ips_by_fixed_address(*args)
@wrap_check_policy
def get_backdoor_port(self, context, host):
return self.network_rpcapi.get_backdoor_port(context, host)
@wrap_check_policy
def get_instance_id_by_floating_address(self, context, address):
# NOTE(tr3buchet): i hate this
return self.network_rpcapi.get_instance_id_by_floating_address(context,
address)
@wrap_check_policy
def get_vifs_by_instance(self, context, instance):
return self.network_rpcapi.get_vifs_by_instance(context,
instance['id'])
@wrap_check_policy
def get_vif_by_mac_address(self, context, mac_address):
return self.network_rpcapi.get_vif_by_mac_address(context, mac_address)
@wrap_check_policy
def allocate_floating_ip(self, context, pool=None):
"""Adds (allocates) a floating ip to a project from a pool."""
# NOTE(vish): We don't know which network host should get the ip
# when we allocate, so just send it to any one. This
# will probably need to move into a network supervisor
# at some point.
return self.network_rpcapi.allocate_floating_ip(context,
context.project_id,
pool,
False)
@wrap_check_policy
def release_floating_ip(self, context, address,
affect_auto_assigned=False):
"""Removes (deallocates) a floating ip with address from a project."""
args = (context, address, affect_auto_assigned)
return self.network_rpcapi.deallocate_floating_ip(*args)
@wrap_check_policy
@refresh_cache
def associate_floating_ip(self, context, instance,
floating_address, fixed_address,
affect_auto_assigned=False):
"""Associates a floating ip with a fixed ip.
ensures floating ip is allocated to the project in context
"""
args = (context, floating_address, fixed_address, affect_auto_assigned)
orig_instance_uuid = self.network_rpcapi.associate_floating_ip(*args)
if orig_instance_uuid:
msg_dict = dict(address=floating_address,
instance_id=orig_instance_uuid)
LOG.info(_('re-assign floating IP %(address)s from '
'instance %(instance_id)s') % msg_dict)
orig_instance = self.db.instance_get_by_uuid(context,
orig_instance_uuid)
# purge cached nw info for the original instance
update_instance_cache_with_nw_info(self, context, orig_instance)
@wrap_check_policy
@refresh_cache
def disassociate_floating_ip(self, context, instance, address,
affect_auto_assigned=False):
"""Disassociates a floating ip from fixed ip it is associated with."""
self.network_rpcapi.disassociate_floating_ip(context, address,
affect_auto_assigned)
@wrap_check_policy
@refresh_cache
def allocate_for_instance(self, context, instance, vpn,
requested_networks, macs=None,
conductor_api=None, security_groups=None,
**kwargs):
"""Allocates all network structures for an instance.
TODO(someone): document the rest of these parameters.
:param macs: None or a set of MAC addresses that the instance
should use. macs is supplied by the hypervisor driver (contrast
with requested_networks which is user supplied).
NB: macs is ignored by nova-network.
:returns: network info as from get_instance_nw_info() below
"""
instance_type = instance_types.extract_instance_type(instance)
args = {}
args['vpn'] = vpn
args['requested_networks'] = requested_networks
args['instance_id'] = instance['uuid']
args['project_id'] = instance['project_id']
args['host'] = instance['host']
args['rxtx_factor'] = instance_type['rxtx_factor']
nw_info = self.network_rpcapi.allocate_for_instance(context, **args)
return network_model.NetworkInfo.hydrate(nw_info)
@wrap_check_policy
def deallocate_for_instance(self, context, instance, **kwargs):
"""Deallocates all network structures related to instance."""
args = {}
args['instance_id'] = instance['id']
args['project_id'] = instance['project_id']
args['host'] = instance['host']
self.network_rpcapi.deallocate_for_instance(context, **args)
@wrap_check_policy
@refresh_cache
def add_fixed_ip_to_instance(self, context, instance, network_id,
conductor_api=None, **kwargs):
"""Adds a fixed ip to instance from specified network."""
args = {'instance_id': instance['uuid'],
'host': instance['host'],
'network_id': network_id,
'rxtx_factor': None}
self.network_rpcapi.add_fixed_ip_to_instance(context, **args)
@wrap_check_policy
@refresh_cache
def remove_fixed_ip_from_instance(self, context, instance, address,
conductor=None, **kwargs):
"""Removes a fixed ip from instance from specified network."""
args = {'instance_id': instance['uuid'],
'host': instance['host'],
'address': address,
'rxtx_factor': None}
self.network_rpcapi.remove_fixed_ip_from_instance(context, **args)
@wrap_check_policy
def add_network_to_project(self, context, project_id, network_uuid=None):
"""Force adds another network to a project."""
self.network_rpcapi.add_network_to_project(context, project_id,
network_uuid)
@wrap_check_policy
def associate(self, context, network_uuid, host=_sentinel,
project=_sentinel):
"""Associate or disassociate host or project to network."""
associations = {}
if host is not API._sentinel:
associations['host'] = host
if project is not API._sentinel:
associations['project'] = project
self.network_rpcapi.associate(context, network_uuid, associations)
@wrap_check_policy
def get_instance_nw_info(self, context, instance, conductor_api=None,
**kwargs):
"""Returns all network info related to an instance."""
result = self._get_instance_nw_info(context, instance)
update_instance_cache_with_nw_info(self, context, instance,
result, conductor_api)
return result
def _get_instance_nw_info(self, context, instance):
"""Returns all network info related to an instance."""
instance_type = instance_types.extract_instance_type(instance)
args = {'instance_id': instance['uuid'],
'rxtx_factor': instance_type['rxtx_factor'],
'host': instance['host'],
'project_id': instance['project_id']}
nw_info = self.network_rpcapi.get_instance_nw_info(context, **args)
return network_model.NetworkInfo.hydrate(nw_info)
@wrap_check_policy
def validate_networks(self, context, requested_networks):
"""validate the networks passed at the time of creating
the server
"""
return self.network_rpcapi.validate_networks(context,
requested_networks)
@wrap_check_policy
def get_instance_uuids_by_ip_filter(self, context, filters):
"""Returns a list of dicts in the form of
{'instance_uuid': uuid, 'ip': ip} that matched the ip_filter
"""
return self.network_rpcapi.get_instance_uuids_by_ip_filter(context,
filters)
@wrap_check_policy
def get_dns_domains(self, context):
"""Returns a list of available dns domains.
These can be used to create DNS entries for floating ips.
"""
return self.network_rpcapi.get_dns_domains(context)
@wrap_check_policy
def add_dns_entry(self, context, address, name, dns_type, domain):
"""Create specified DNS entry for address."""
args = {'address': address,
'name': name,
'dns_type': dns_type,
'domain': domain}
return self.network_rpcapi.add_dns_entry(context, **args)
@wrap_check_policy
def modify_dns_entry(self, context, name, address, domain):
"""Create specified DNS entry for address."""
args = {'address': address,
'name': name,
'domain': domain}
return self.network_rpcapi.modify_dns_entry(context, **args)
@wrap_check_policy
def delete_dns_entry(self, context, name, domain):
"""Delete the specified dns entry."""
args = {'name': name, 'domain': domain}
return self.network_rpcapi.delete_dns_entry(context, **args)
@wrap_check_policy
def delete_dns_domain(self, context, domain):
"""Delete the specified dns domain."""
return self.network_rpcapi.delete_dns_domain(context, domain=domain)
@wrap_check_policy
def get_dns_entries_by_address(self, context, address, domain):
"""Get entries for address and domain."""
args = {'address': address, 'domain': domain}
return self.network_rpcapi.get_dns_entries_by_address(context, **args)
@wrap_check_policy
def get_dns_entries_by_name(self, context, name, domain):
"""Get entries for name and domain."""
args = {'name': name, 'domain': domain}
return self.network_rpcapi.get_dns_entries_by_name(context, **args)
@wrap_check_policy
def create_private_dns_domain(self, context, domain, availability_zone):
"""Create a private DNS domain with nova availability zone."""
args = {'domain': domain, 'av_zone': availability_zone}
return self.network_rpcapi.create_private_dns_domain(context, **args)
@wrap_check_policy
def create_public_dns_domain(self, context, domain, project=None):
"""Create a public DNS domain with optional nova project."""
args = {'domain': domain, 'project': project}
return self.network_rpcapi.create_public_dns_domain(context, **args)
@wrap_check_policy
def setup_networks_on_host(self, context, instance, host=None,
teardown=False):
"""Setup or teardown the network structures on hosts related to
instance"""
host = host or instance['host']
# NOTE(tr3buchet): host is passed in cases where we need to setup
# or teardown the networks on a host which has been migrated to/from
# and instance['host'] is not yet or is no longer equal to
args = {'instance_id': instance['id'],
'host': host,
'teardown': teardown}
self.network_rpcapi.setup_networks_on_host(context, **args)
def _is_multi_host(self, context, instance):
try:
fixed_ips = self.db.fixed_ip_get_by_instance(context,
instance['uuid'])
except exception.FixedIpNotFoundForInstance:
return False
network = self.db.network_get(context, fixed_ips[0]['network_id'],
project_only='allow_none')
return network['multi_host']
def _get_floating_ip_addresses(self, context, instance):
args = (context, instance['uuid'])
floating_ips = self.db.instance_floating_address_get_all(*args)
return [floating_ip['address'] for floating_ip in floating_ips]
@wrap_check_policy
def migrate_instance_start(self, context, instance, migration):
"""Start to migrate the network of an instance."""
instance_type = instance_types.extract_instance_type(instance)
args = dict(
instance_uuid=instance['uuid'],
rxtx_factor=instance_type['rxtx_factor'],
project_id=instance['project_id'],
source_compute=migration['source_compute'],
dest_compute=migration['dest_compute'],
floating_addresses=None,
)
if self._is_multi_host(context, instance):
args['floating_addresses'] = \
self._get_floating_ip_addresses(context, instance)
args['host'] = migration['source_compute']
self.network_rpcapi.migrate_instance_start(context, **args)
@wrap_check_policy
def migrate_instance_finish(self, context, instance, migration):
"""Finish migrating the network of an instance."""
instance_type = instance_types.extract_instance_type(instance)
args = dict(
instance_uuid=instance['uuid'],
rxtx_factor=instance_type['rxtx_factor'],
project_id=instance['project_id'],
source_compute=migration['source_compute'],
dest_compute=migration['dest_compute'],
floating_addresses=None,
)
if self._is_multi_host(context, instance):
args['floating_addresses'] = \
self._get_floating_ip_addresses(context, instance)
args['host'] = migration['dest_compute']
self.network_rpcapi.migrate_instance_finish(context, **args)
# NOTE(jkoelker) These functions were added to the api after
# deprecation. Stubs provided for support documentation
def allocate_port_for_instance(self, context, instance, port_id,
network_id=None, requested_ip=None,
conductor_api=None):
raise NotImplementedError()
def deallocate_port_for_instance(self, context, instance, port_id,
conductor_api=None):
raise NotImplementedError()
def list_ports(self, *args, **kwargs):
raise NotImplementedError()
def show_port(self, *args, **kwargs):
raise NotImplementedError()


@ -48,10 +48,3 @@ def all_filters():
and should return a list of all filter classes available.
"""
return HostFilterHandler().get_all_classes()
def standard_filters():
"""Deprecated. Configs should change to use all_filters()."""
LOG.deprecated(_("Use 'nova.scheduler.filters.all_filters' instead "
"of 'nova.scheduler.filters.standard_filters'"))
return all_filters()


@ -61,24 +61,18 @@ LOG = logging.getLogger(__name__)
trusted_opts = [
cfg.StrOpt('attestation_server',
# deprecated in Grizzly
deprecated_name='server',
default=None,
help='attestation server http'),
cfg.StrOpt('attestation_server_ca_file',
deprecated_name='server_ca_file',
default=None,
help='attestation server Cert file for Identity verification'),
cfg.StrOpt('attestation_port',
deprecated_name='port',
default='8443',
help='attestation server port'),
cfg.StrOpt('attestation_api_url',
deprecated_name='api_url',
default='/OpenAttestationWebServices/V1.0',
help='attestation web API URL'),
cfg.StrOpt('attestation_auth_blob',
deprecated_name='auth_blob',
default=None,
help='attestation authorization blob - must change'),
cfg.IntOpt('attestation_auth_timeout',


@ -21,8 +21,8 @@ from nova.scheduler import filters
class TypeAffinityFilter(filters.BaseHostFilter):
"""TypeAffinityFilter doesn't allow more then one VM type per host.
Note: this works best with compute_fill_first_cost_fn_weight
(dispersion) set to 1 (-1 by default).
Note: this works best with ram_weight_multiplier
(spread) set to 1 (default).
"""
def host_passes(self, host_state, filter_properties):


@ -20,7 +20,6 @@ Scheduler host weights
from oslo.config import cfg
from nova.openstack.common import log as logging
from nova.scheduler.weights import least_cost
from nova import weights
LOG = logging.getLogger(__name__)
@ -52,10 +51,4 @@ class HostWeightHandler(weights.BaseWeightHandler):
def all_weighers():
"""Return a list of weight plugin classes found in this directory."""
if (CONF.least_cost_functions is not None or
CONF.compute_fill_first_cost_fn_weight is not None):
LOG.deprecated(_('least_cost has been deprecated in favor of '
'the RAM Weigher.'))
return least_cost.get_least_cost_weighers()
return HostWeightHandler().get_all_classes()


@ -1,126 +0,0 @@
# Copyright (c) 2011-2012 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Least Cost is an algorithm for choosing which host machines to
provision a set of resources to. The input is a WeightedHost object which
is decided upon by a set of objective-functions, called the 'cost-functions'.
The WeightedHost contains a combined weight for each cost-function.
The cost-function and weights are tabulated, and the host with the least cost
is then selected for provisioning.
NOTE(comstud): This is deprecated. One should use the RAMWeigher and/or
create other weight modules.
"""
from oslo.config import cfg
from nova import exception
from nova.openstack.common import importutils
from nova.openstack.common import log as logging
LOG = logging.getLogger(__name__)
least_cost_opts = [
cfg.ListOpt('least_cost_functions',
default=None,
help='Which cost functions the LeastCostScheduler should use'),
cfg.FloatOpt('noop_cost_fn_weight',
default=1.0,
help='How much weight to give the noop cost function'),
cfg.FloatOpt('compute_fill_first_cost_fn_weight',
default=None,
help='How much weight to give the fill-first cost function. '
'A negative value will reverse behavior: '
'e.g. spread-first'),
]
CONF = cfg.CONF
CONF.register_opts(least_cost_opts)
def noop_cost_fn(host_state, weight_properties):
"""Return a pre-weight cost of 1 for each host."""
return 1
def compute_fill_first_cost_fn(host_state, weight_properties):
"""Higher weights win, so we should return a lower weight
when there's more free ram available.
Note: the weight modifier for this function in default configuration
is -1.0. With -1.0 this function runs in reverse, so systems
with the most free memory will be preferred.
"""
return -host_state.free_ram_mb
def _get_cost_functions():
"""Returns a list of tuples containing weights and cost functions to
use for weighing hosts
"""
cost_fns_conf = CONF.least_cost_functions
if cost_fns_conf is None:
# The old default. This will get fixed up below.
fn_str = 'nova.scheduler.least_cost.compute_fill_first_cost_fn'
cost_fns_conf = [fn_str]
cost_fns = []
for cost_fn_str in cost_fns_conf:
short_name = cost_fn_str.split('.')[-1]
if not (short_name.startswith('compute_') or
short_name.startswith('noop')):
continue
# Fix up any old paths to the new paths
if cost_fn_str.startswith('nova.scheduler.least_cost.'):
cost_fn_str = ('nova.scheduler.weights.least_cost' +
cost_fn_str[25:])
try:
# NOTE: import_class is somewhat misnamed since
# the weighing function can be any non-class callable
# (i.e., no 'self')
cost_fn = importutils.import_class(cost_fn_str)
except ImportError:
raise exception.SchedulerCostFunctionNotFound(
cost_fn_str=cost_fn_str)
try:
flag_name = "%s_weight" % cost_fn.__name__
weight = getattr(CONF, flag_name)
except AttributeError:
raise exception.SchedulerWeightFlagNotFound(
flag_name=flag_name)
# Set the original default.
if (flag_name == 'compute_fill_first_cost_fn_weight' and
weight is None):
weight = -1.0
cost_fns.append((weight, cost_fn))
return cost_fns
def get_least_cost_weighers():
cost_functions = _get_cost_functions()
# Unfortunately we need to import this late so we don't have an
# import loop.
from nova.scheduler import weights
class _LeastCostWeigher(weights.BaseHostWeigher):
def weigh_objects(self, weighted_hosts, weight_properties):
for host in weighted_hosts:
host.weight = sum(weight * fn(host.obj, weight_properties)
for weight, fn in cost_functions)
return [_LeastCostWeigher]


@ -1,80 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# Copyright 2013 Openstack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Tests for the deprecated network API."""
import inspect
from nova.network import api
from nova.network import api_deprecated
from nova import test
# NOTE(jkoelker) These tests require that decorators in the apis
# "do the right thing" and set __name__ properly
# they should all be using functools.wraps or similar
# functionality.
def isapimethod(obj):
if inspect.ismethod(obj) and not obj.__name__.startswith('_'):
return True
return False
def discover_real_method(name, method):
if method.func_closure:
for closure in method.func_closure:
if closure.cell_contents.__name__ == name:
return closure.cell_contents
return method
class DeprecatedApiTestCase(test.TestCase):
def setUp(self):
super(DeprecatedApiTestCase, self).setUp()
self.api = api.API()
self.api_deprecated = api_deprecated.API()
self.api_methods = inspect.getmembers(self.api, isapimethod)
def test_api_compat(self):
methods = [m[0] for m in self.api_methods]
deprecated_methods = [getattr(self.api_deprecated, n, None)
for n in methods]
missing = [m[0] for m in zip(methods, deprecated_methods)
if m[1] is None]
self.assertFalse(missing,
'Deprecated api needs methods: %s' % missing)
def test_method_signatures(self):
for name, method in self.api_methods:
deprecated_method = getattr(self.api_deprecated, name, None)
self.assertIsNotNone(deprecated_method,
'Deprecated api has no method %s' % name)
method = discover_real_method(name, method)
deprecated_method = discover_real_method(name,
deprecated_method)
api_argspec = inspect.getargspec(method)
deprecated_argspec = inspect.getargspec(deprecated_method)
# NOTE/TODO(jkoelker) Should probably handle the case where
# varargs/keywords are used.
self.assertEqual(api_argspec.args, deprecated_argspec.args,
"API method %s arguments differ" % name)


@ -256,22 +256,6 @@ class HostFiltersTestCase(test.TestCase):
for cls in classes:
self.class_map[cls.__name__] = cls
def test_standard_filters_is_deprecated(self):
info = {'called': False}
def _fake_deprecated(*args, **kwargs):
info['called'] = True
self.stubs.Set(filters.LOG, 'deprecated', _fake_deprecated)
filter_handler = filters.HostFilterHandler()
filter_handler.get_matching_classes(
['nova.scheduler.filters.standard_filters'])
self.assertTrue(info['called'])
self.assertIn('AllHostsFilter', self.class_map)
self.assertIn('ComputeFilter', self.class_map)
def test_all_filters(self):
# Double check at least a couple of known filters exist
self.assertIn('AllHostsFilter', self.class_map)


@ -1,143 +0,0 @@
# Copyright 2011-2012 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Tests For Least Cost functions.
"""
from oslo.config import cfg
from nova import context
from nova.scheduler import weights
from nova.scheduler.weights import least_cost
from nova import test
from nova.tests.scheduler import fakes
test_least_cost_opts = [
cfg.FloatOpt('compute_fake_weigher1_weight',
default=2.0,
help='How much weight to give the fake_weigher1 function'),
cfg.FloatOpt('compute_fake_weigher2_weight',
default=1.0,
help='How much weight to give the fake_weigher2 function'),
]
CONF = cfg.CONF
CONF.import_opt('least_cost_functions', 'nova.scheduler.weights.least_cost')
CONF.import_opt('compute_fill_first_cost_fn_weight',
'nova.scheduler.weights.least_cost')
CONF.register_opts(test_least_cost_opts)
def compute_fake_weigher1(hostinfo, options):
return hostinfo.free_ram_mb + 10000
def compute_fake_weigher2(hostinfo, options):
return hostinfo.free_ram_mb * 2
class LeastCostTestCase(test.TestCase):
def setUp(self):
super(LeastCostTestCase, self).setUp()
self.host_manager = fakes.FakeHostManager()
self.weight_handler = weights.HostWeightHandler()
def _get_weighed_host(self, hosts, weight_properties=None):
weigher_classes = least_cost.get_least_cost_weighers()
if weight_properties is None:
weight_properties = {}
return self.weight_handler.get_weighed_objects(weigher_classes,
hosts, weight_properties)[0]
def _get_all_hosts(self):
ctxt = context.get_admin_context()
fakes.mox_host_manager_db_calls(self.mox, ctxt)
self.mox.ReplayAll()
host_states = self.host_manager.get_all_host_states(ctxt)
self.mox.VerifyAll()
self.mox.ResetAll()
return host_states
def test_default_of_spread_first(self):
# Default modifier is -1.0, so it turns out that hosts with
# the most free memory win
hostinfo_list = self._get_all_hosts()
# host1: free_ram_mb=512
# host2: free_ram_mb=1024
# host3: free_ram_mb=3072
# host4: free_ram_mb=8192
# so, host4 should win:
weighed_host = self._get_weighed_host(hostinfo_list)
self.assertEqual(weighed_host.weight, 8192)
self.assertEqual(weighed_host.obj.host, 'host4')
def test_filling_first(self):
self.flags(compute_fill_first_cost_fn_weight=1.0)
hostinfo_list = self._get_all_hosts()
# host1: free_ram_mb=-512
# host2: free_ram_mb=-1024
# host3: free_ram_mb=-3072
# host4: free_ram_mb=-8192
# so, host1 should win:
weighed_host = self._get_weighed_host(hostinfo_list)
self.assertEqual(weighed_host.weight, -512)
self.assertEqual(weighed_host.obj.host, 'host1')
def test_weighted_sum_provided_method(self):
fns = ['nova.tests.scheduler.test_least_cost.compute_fake_weigher1',
'nova.tests.scheduler.test_least_cost.compute_fake_weigher2']
self.flags(least_cost_functions=fns)
hostinfo_list = self._get_all_hosts()
# host1: free_ram_mb=512
# host2: free_ram_mb=1024
# host3: free_ram_mb=3072
# host4: free_ram_mb=8192
# [offset, scale]=
# [10512, 11024, 13072, 18192]
# [1024, 2048, 6144, 16384]
# adjusted [ 2.0 * x + 1.0 * y] =
# [22048, 24096, 32288, 52768]
# so, host4 should win:
weighed_host = self._get_weighed_host(hostinfo_list)
self.assertEqual(weighed_host.weight, 52768)
self.assertEqual(weighed_host.obj.host, 'host4')
def test_weighted_sum_single_function(self):
fns = ['nova.tests.scheduler.test_least_cost.compute_fake_weigher1']
self.flags(least_cost_functions=fns)
hostinfo_list = self._get_all_hosts()
# host1: free_ram_mb=512
# host2: free_ram_mb=1024
# host3: free_ram_mb=3072
# host4: free_ram_mb=8192
# [offset, ]=
# [10512, 11024, 13072, 18192]
# adjusted [ 2.0 * x ]=
# [21024, 22048, 26144, 36384]
# so, host4 should win:
weighed_host = self._get_weighed_host(hostinfo_list)
self.assertEqual(weighed_host.weight, 36384)
self.assertEqual(weighed_host.obj.host, 'host4')


@ -37,20 +37,6 @@ class TestWeighedHost(test.TestCase):
self.assertEqual(len(classes), 1)
self.assertIn('RAMWeigher', class_names)
def test_all_weighers_with_deprecated_config1(self):
self.flags(compute_fill_first_cost_fn_weight=-1.0)
classes = weights.all_weighers()
class_names = [cls.__name__ for cls in classes]
self.assertEqual(len(classes), 1)
self.assertIn('_LeastCostWeigher', class_names)
def test_all_weighers_with_deprecated_config2(self):
self.flags(least_cost_functions=['something'])
classes = weights.all_weighers()
class_names = [cls.__name__ for cls in classes]
self.assertEqual(len(classes), 1)
self.assertIn('_LeastCostWeigher', class_names)
class RamWeigherTestCase(test.TestCase):
def setUp(self):


@ -29,8 +29,7 @@ hyperv_opts = [
default=None,
help='External virtual switch Name, '
'if not provided, the first external virtual '
'switch is used',
deprecated_group='DEFAULT'),
'switch is used'),
]
CONF = cfg.CONF


@ -44,22 +44,18 @@ hyperv_opts = [
cfg.BoolOpt('limit_cpu_features',
default=False,
help='Required for live migration among '
'hosts with different CPU features',
deprecated_group='DEFAULT'),
'hosts with different CPU features'),
cfg.BoolOpt('config_drive_inject_password',
default=False,
help='Sets the admin password in the config drive image',
deprecated_group='DEFAULT'),
help='Sets the admin password in the config drive image'),
cfg.StrOpt('qemu_img_cmd',
default="qemu-img.exe",
help='qemu-img is used to convert between '
'different image types',
deprecated_group='DEFAULT'),
'different image types'),
cfg.BoolOpt('config_drive_cdrom',
default=False,
help='Attaches the Config Drive image as a cdrom drive '
'instead of a disk drive',
deprecated_group='DEFAULT')
'instead of a disk drive')
]
CONF = cfg.CONF


@ -36,18 +36,13 @@ LOG = logging.getLogger(__name__)
hyper_volumeops_opts = [
cfg.IntOpt('volume_attach_retry_count',
default=10,
help='The number of times to retry to attach a volume',
deprecated_name='hyperv_attaching_volume_retry_count',
deprecated_group='DEFAULT'),
help='The number of times to retry to attach a volume'),
cfg.IntOpt('volume_attach_retry_interval',
default=5,
help='Interval between volume attachment attempts, in seconds',
deprecated_name='hyperv_wait_between_attach_retry',
deprecated_group='DEFAULT'),
help='Interval between volume attachment attempts, in seconds'),
cfg.BoolOpt('force_volumeutils_v1',
default=False,
help='Force volumeutils v1',
deprecated_group='DEFAULT'),
help='Force volumeutils v1'),
]
CONF = cfg.CONF


@ -1,40 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# Copyright (c) 2012 NetApp, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Deprecated file, kept for back-compat only. To be removed in Hxxxx."""
from nova.openstack.common import log as logging
from nova.virt.libvirt import volume
LOG = logging.getLogger(__name__)
class NfsVolumeDriver(volume.LibvirtNFSVolumeDriver):
"""Deprecated driver for NFS, renamed to LibvirtNFSVolumeDriver
and moved into the main volume.py module. Kept for backwards
compatibility in the Grizzly cycle to give users opportunity
to configure before its removal in the Hxxxx cycle."""
def __init__(self, *args, **kwargs):
super(NfsVolumeDriver,
self).__init__(*args, **kwargs)
LOG.deprecated(
_("The nova.virt.libvirt.volume_nfs.NfsVolumeDriver "
"class is deprecated and will be removed in the "
"Hxxxx release. Please update nova.conf so that "
"the 'libvirt_volume_drivers' parameter refers to "
"nova.virt.libvirt.volume.LibvirtNFSVolumeDriver."))