Rename DistributedScheduler as FilterScheduler

Change-Id: I1091609d5997c4ba9c26a3f2426496ff7f1e64fa
Author: Joe Gordon
Date:   2012-03-05 17:53:57 -08:00
parent b8e11afa40
commit 1b73b78e57
6 changed files with 23 additions and 23 deletions

@@ -18,14 +18,14 @@
 (OpenOffice Impress format) Illustrations are "exported" to png and then scaled
 to 400x300 or 640x480 as needed and placed in the doc/source/images directory.

-Distributed Scheduler
+Filter Scheduler
 =====================

-The Scheduler is akin to a Dating Service. Requests for the creation of new instances come in and the most applicable Compute nodes are selected from a large pool of potential candidates. In a small deployment we may be happy with the currently available Chance Scheduler which randomly selects a Host from the available pool. Or if you need something a little more fancy you may want to use the Distributed Scheduler, which selects Compute hosts from a logical partitioning of available hosts (within a single Zone).
+The Scheduler is akin to a Dating Service. Requests for the creation of new instances come in and the most applicable Compute nodes are selected from a large pool of potential candidates. In a small deployment we may be happy with the currently available Chance Scheduler which randomly selects a Host from the available pool. Or if you need something a little more fancy you may want to use the Filter Scheduler, which selects Compute hosts from a logical partitioning of available hosts.

 .. image:: /images/dating_service.png

-The Distributed Scheduler (DS) supports filtering and weighing to make informed decisions on where a new instance should be created.
+The Filter Scheduler supports filtering and weighing to make informed decisions on where a new instance should be created.

 So, how does this all work?
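
In outline, the answer is a two-pass selection. A minimal sketch of the idea in
Python (illustrative only -- the host list, filter callables, and cost function
here are assumed stand-ins, not the actual Nova API)::

    def select_host(request_spec, hosts, filters, cost_fn):
        """Filter out incapable hosts, then weigh the survivors."""
        # Filtering: exclude compute nodes incapable of fulfilling the request.
        candidates = [host for host in hosts
                      if all(f(host, request_spec) for f in filters)]
        if not candidates:
            raise RuntimeError('no valid host found')
        # Weighing: compute each host's relative "fitness" as a cost
        # (lower is better) and pick the best-scoring host.
        return min(candidates, key=lambda host: cost_fn(host, request_spec))
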
@@ -48,14 +48,14 @@ This Weight is computed for each Instance requested. If the customer asked for 1
 Filtering and Weighing
 ----------------------

-The filtering (excluding compute nodes incapable of fulfilling the request) and weighing (computing the relative "fitness" of a compute node to fulfill the request) rules used are very subjective operations ... Service Providers will probably have a very different set of filtering and weighing rules than private cloud administrators. The filtering and weighing aspects of the `DistributedScheduler` are flexible and extensible.
+The filtering (excluding compute nodes incapable of fulfilling the request) and weighing (computing the relative "fitness" of a compute node to fulfill the request) rules used are very subjective operations ... Service Providers will probably have a very different set of filtering and weighing rules than private cloud administrators. The filtering and weighing aspects of the `FilterScheduler` are flexible and extensible.

 .. image:: /images/filtering.png
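
On the weighing side, each cost function maps a host to a number and the
scheduler combines the weighted costs, preferring the lowest total. A rough
sketch of that shape (assumed signatures, not the exact `least_cost` API; the
(weight, fn) pairing matches what `get_cost_functions()` returns in the tests
below)::

    def ram_cost_fn(host_state, weighing_properties):
        # Hypothetical cost function: favor hosts with more free RAM
        # (more free RAM => lower cost => better score).
        return -host_state.free_ram_mb

    def weighted_sum(host_state, weighted_fns, weighing_properties):
        # weighted_fns is a list of (weight, cost_fn) pairs.
        return sum(weight * fn(host_state, weighing_properties)
                   for weight, fn in weighted_fns)
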
 Host Filter
 -----------

-As we mentioned earlier, filtering hosts is a very deployment-specific process. Service Providers may have a different set of criteria for filtering Compute nodes than a University. To facilitate this, the `DistributedScheduler` supports a variety of filtering strategies as well as an easy means for plugging in your own algorithms. Specifying filters involves 2 settings. One makes filters available for use. The second specifies which filters to use by default (out of the filters available). The reason for this second option is that there may be support to allow end-users to specify specific filters during a build at some point in the future.
+As we mentioned earlier, filtering hosts is a very deployment-specific process. Service Providers may have a different set of criteria for filtering Compute nodes than a University. To facilitate this, the `FilterScheduler` supports a variety of filtering strategies as well as an easy means for plugging in your own algorithms. Specifying filters involves 2 settings. One makes filters available for use. The second specifies which filters to use by default (out of the filters available). The reason for this second option is that there may be support to allow end-users to specify specific filters during a build at some point in the future.

 Making filters available:
@@ -85,12 +85,12 @@ Here are some of the main flags you should set in your `nova.conf` file:
 ::

-    --scheduler_driver=nova.scheduler.distributed_scheduler.DistributedScheduler
+    --scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler
     --scheduler_available_filters=nova.scheduler.filters.standard_filters
     # --scheduler_available_filters=myfilter.MyOwnFilter
     --scheduler_default_filters=RamFilter,ComputeFilter,MyOwnFilter

-`scheduler_driver` is the real workhorse of the operation. For Distributed Scheduler, you need to specify a class derived from `nova.scheduler.distributed_scheduler.DistributedScheduler`.
+`scheduler_driver` is the real workhorse of the operation. For Filter Scheduler, you need to specify a class derived from `nova.scheduler.filter_scheduler.FilterScheduler`.

 `scheduler_default_filters` are the host filters to be used for filtering candidate Compute nodes.
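
For illustration, a custom filter such as the hypothetical `MyOwnFilter` above
is just a small class with a single predicate method. This sketch assumes the
Essex-era filter interface (a `host_passes` method taking the host's state and
the request's filter properties), so treat the exact names as assumptions::

    class MyOwnFilter(object):
        """Example filter: only pass hosts with enough free RAM."""

        def host_passes(self, host_state, filter_properties):
            # Compare the host's free RAM against the memory requested
            # by the instance type (both assumed fields for this sketch).
            instance_type = filter_properties.get('instance_type', {})
            return host_state.free_ram_mb >= instance_type.get('memory_mb', 0)
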
 Some optional flags which are handy for debugging are:

@@ -38,7 +38,7 @@ Background Concepts for Nova
    threading
    il8n
-   distributed_scheduler
+   filter_scheduler
    multinic
    zone
    rabbit

@@ -14,7 +14,7 @@
 #    under the License.

 """
-The DistributedScheduler is for creating instances locally.
+The FilterScheduler is for creating instances locally.
 You can customize this scheduler by specifying your own Host Filters and
 Weighing Functions.
 """
@@ -35,10 +35,10 @@ FLAGS = flags.FLAGS
 LOG = logging.getLogger(__name__)


-class DistributedScheduler(driver.Scheduler):
+class FilterScheduler(driver.Scheduler):
     """Scheduler that can be used for filtering and weighing."""

     def __init__(self, *args, **kwargs):
-        super(DistributedScheduler, self).__init__(*args, **kwargs)
+        super(FilterScheduler, self).__init__(*args, **kwargs)
         self.cost_function_cache = {}
         self.options = scheduler_options.SchedulerOptions()

@@ -30,7 +30,7 @@ from nova.scheduler import driver
 multi_scheduler_opts = [
     cfg.StrOpt('compute_scheduler_driver',
                default='nova.scheduler.'
-                       'distributed_scheduler.DistributedScheduler',
+                       'filter_scheduler.FilterScheduler',
                help='Driver to use for scheduling compute calls'),
     cfg.StrOpt('volume_scheduler_driver',
                default='nova.scheduler.chance.ChanceScheduler',
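
These two options let a deployment pick a different driver per service; with
the defaults above, the equivalent `nova.conf` flags would be (same flag style
as the documentation section earlier)::

    --compute_scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler
    --volume_scheduler_driver=nova.scheduler.chance.ChanceScheduler
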

@@ -21,7 +21,7 @@ import mox
 from nova import db
 from nova.compute import instance_types
 from nova.compute import vm_states
-from nova.scheduler import distributed_scheduler
+from nova.scheduler import filter_scheduler
 from nova.scheduler import host_manager
@@ -56,9 +56,9 @@ INSTANCES = [
 ]


-class FakeDistributedScheduler(distributed_scheduler.DistributedScheduler):
+class FakeFilterScheduler(filter_scheduler.FilterScheduler):
     def __init__(self, *args, **kwargs):
-        super(FakeDistributedScheduler, self).__init__(*args, **kwargs)
+        super(FakeFilterScheduler, self).__init__(*args, **kwargs)
         self.host_manager = host_manager.HostManager()

@@ -22,7 +22,7 @@ from nova import context
 from nova import exception
 from nova.scheduler import least_cost
 from nova.scheduler import host_manager
-from nova.scheduler import distributed_scheduler
+from nova.scheduler import filter_scheduler
 from nova import test
 from nova.tests.scheduler import fakes
 from nova.tests.scheduler import test_scheduler
@@ -32,10 +32,10 @@ def fake_filter_hosts(hosts, filter_properties):
     return list(hosts)


-class DistributedSchedulerTestCase(test_scheduler.SchedulerTestCase):
+class FilterSchedulerTestCase(test_scheduler.SchedulerTestCase):
     """Test case for Distributed Scheduler."""

-    driver_cls = distributed_scheduler.DistributedScheduler
+    driver_cls = filter_scheduler.FilterScheduler

     def test_run_instance_no_hosts(self):
         """
@@ -44,7 +44,7 @@ class DistributedSchedulerTestCase(test_scheduler.SchedulerTestCase):
         def _fake_empty_call_zone_method(*args, **kwargs):
             return []

-        sched = fakes.FakeDistributedScheduler()
+        sched = fakes.FakeFilterScheduler()
         fake_context = context.RequestContext('user', 'project')

         request_spec = {'instance_type': {'memory_mb': 1, 'root_gb': 1,
@@ -64,7 +64,7 @@ class DistributedSchedulerTestCase(test_scheduler.SchedulerTestCase):
             self.was_admin = context.is_admin
             return {}

-        sched = fakes.FakeDistributedScheduler()
+        sched = fakes.FakeFilterScheduler()
         self.stubs.Set(sched.host_manager, 'get_all_host_states', fake_get)

         fake_context = context.RequestContext('user', 'project')
@@ -77,7 +77,7 @@ class DistributedSchedulerTestCase(test_scheduler.SchedulerTestCase):
     def test_schedule_bad_topic(self):
         """Parameter checking."""
-        sched = fakes.FakeDistributedScheduler()
+        sched = fakes.FakeFilterScheduler()
         fake_context = context.RequestContext('user', 'project')
         self.assertRaises(NotImplementedError, sched._schedule, fake_context,
                           "foo", {})
@@ -139,7 +139,7 @@ class DistributedSchedulerTestCase(test_scheduler.SchedulerTestCase):
             return least_cost.WeightedHost(self.next_weight,
                                            host_state=host_state)

-        sched = fakes.FakeDistributedScheduler()
+        sched = fakes.FakeFilterScheduler()
         fake_context = context.RequestContext('user', 'project',
                                               is_admin=True)
@@ -166,7 +166,7 @@ class DistributedSchedulerTestCase(test_scheduler.SchedulerTestCase):
     def test_get_cost_functions(self):
         self.flags(reserved_host_memory_mb=128)
-        fixture = fakes.FakeDistributedScheduler()
+        fixture = fakes.FakeFilterScheduler()
         fns = fixture.get_cost_functions()
         self.assertEquals(len(fns), 1)
         weight, fn = fns[0]