Documentation for the region placement policy

This patch adds documentation for the region placement policy.

Change-Id: I798e76a93c8cb956d982e6f1a6653005a0d3cf00
tengqm 2016-03-17 10:34:11 -04:00
parent 8aae0726c9
commit 9546ab887b
8 changed files with 226 additions and 94 deletions

View File

@ -57,7 +57,7 @@ in a collaborative way to meet the needs of complicated usage scenarios.
policies/affinity_v1
policies/deletion_v1
policies/load_balance_v1
policies/region_placement_v1
policies/region_v1
policies/scaling_v1
policies/zone_v1

View File

@ -200,7 +200,7 @@ zone ``AZ-1``, one of the nodes is from availability zone ``AZ-2``.
S6: Deletion across Multiple Regions
------------------------------------
When you have a :doc:`region placement policy <region_placement_v1>` attached
When you have a :doc:`region placement policy <region_v1>` attached
to a cluster, the region placement policy will decide to which region(s) new
nodes will be placed and from which region(s) old nodes should be deleted to
maintain an expected node distribution. Such a region placement policy will be

View File

@ -1,20 +0,0 @@
..
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
============================
Region Placement Policy V1.0
============================
This policy is designed to make sure the nodes in a cluster are distributed
across multiple regions according to a specified scheme.

View File

@ -0,0 +1,217 @@
..
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
============================
Region Placement Policy V1.0
============================
This policy is designed to make sure the nodes in a cluster are distributed
across multiple regions according to a specified scheme.
Applicable Profiles
~~~~~~~~~~~~~~~~~~~
The policy is designed to handle any profile type.
Actions Handled
~~~~~~~~~~~~~~~
The policy is capable of handling the following actions:
- ``CLUSTER_SCALE_IN``: an action that carries an optional integer value named
``count`` in its ``inputs``.
- ``CLUSTER_SCALE_OUT``: an action that carries an optional integer value
named ``count`` in its ``inputs``.
- ``CLUSTER_RESIZE``: an action that accepts a map of input parameters in
  its ``inputs`` property, such as "``adjustment_type``" and "``number``"
  (see the illustration below).
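For illustration only, the ``inputs`` carried by these actions may look like
the following sketch (the parameter names follow the list above; the values
are made up)::

  # CLUSTER_SCALE_IN / CLUSTER_SCALE_OUT
  inputs = {'count': 2}

  # CLUSTER_RESIZE
  inputs = {'adjustment_type': 'CHANGE_IN_CAPACITY', 'number': 2}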
The policy will be checked **BEFORE** any of the above-mentioned actions is
executed. Because the same policy implementation covers both the case of
scaling out a cluster and the case of scaling it in, the region placement
policy needs to parse its inputs differently in different scenarios.
The placement policy can be used independently, with or without other policies
attached to the same cluster. The policy therefore needs to find out whether
there are policy decisions from other policies (such as a
:doc:`scaling policy <scaling_v1>`).
When the policy is checked, it first attempts to get the proper ``count``
input value, which may be an outcome from other policies or come from the
inputs of the action. For more details, check the scenarios described in the
following sections.
Scenarios
~~~~~~~~~
S1: ``CLUSTER_SCALE_IN``
------------------------
The placement policy first checks if there are policy decisions from other
policies by looking into the ``deletion`` field of the action's ``data``
property. If there is such a field, the policy attempts to extract the
``count`` value from the ``deletion`` field. If the ``count`` value is not
found, 1 is assumed to be the default.
If, however, the policy fails to find the ``deletion`` field, it checks
whether there is a ``count`` field in the action's ``inputs`` property. If
there is, the policy uses that value; otherwise it falls back to 1 as the
default count.
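In other words, the count resolution for this scenario could be sketched as
follows (a minimal illustration; the helper name is made up, and
``action.data`` / ``action.inputs`` are the dictionaries described above)::

  def _get_count_for_scale_in(action):
      # A decision from another policy (e.g. a scaling policy) takes
      # precedence; it is recorded under the 'deletion' key of action.data.
      if 'deletion' in action.data:
          return action.data['deletion'].get('count', 1)
      # Otherwise honor the raw action inputs, defaulting to 1.
      return action.inputs.get('count', 1)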
After the policy has found out the ``count`` value (i.e. the number of nodes
to be deleted), it validates the list of region names provided to the policy.
If, for some reason, none of the provided names passes validation, the policy
check fails with the following data recorded in the action's ``data``
property:
::

  {
    "status": "ERROR",
    "reason": "No region is found usable."
  }
With the list of regions known to be good and the map of node distribution
specified in the policy spec, the Senlin engine continues to calculate a
placement plan that best matches the desired distribution.
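The plan calculation is an implementation detail of the engine; the following
is only a simplified sketch of how such a plan could be derived, assuming the
policy spec provides a weight per region and ignoring any per-region capacity
limits::

  def distribute(size, weights):
      # Split 'size' nodes across regions in proportion to their weights.
      total = float(sum(weights.values()))
      shares = dict((r, size * w / total) for r, w in weights.items())
      plan = dict((r, int(s)) for r, s in shares.items())
      # Hand out the remainder to the regions with the largest fractions.
      leftover = size - sum(plan.values())
      order = sorted(shares, key=lambda r: shares[r] - plan[r], reverse=True)
      for r in order[:leftover]:
          plan[r] += 1
      return plan

  def deletion_plan(current, weights, count):
      # current: {region: existing nodes}; returns {region: nodes to delete}.
      desired = distribute(sum(current.values()) - count, weights)
      plan = dict((r, current[r] - desired.get(r, 0)) for r in current)
      if any(n < 0 for n in plan.values()):
          return None   # no feasible plan by pure deletion
      return plan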
If there are nodes that cannot fit into the distribution plan, the policy
check fails with an error recorded in the action's ``data``, as shown below:
::

  {
    "status": "ERROR",
    "reason": "There is no feasible plan to handle all nodes."
  }
If there is a feasible plan to remove nodes from each region, the policy saves
the plan into the ``data`` property of the action as exemplified below:
::

  {
    "status": "OK",
    "deletion": {
      "count": 3,
      "regions": {
        "RegionOne": 2,
        "RegionTwo": 1
      }
    }
  }
This means that, in total, 3 nodes should be removed from the cluster. Among
them, 2 nodes should be selected from region "``RegionOne``" and the remaining
one should be selected from region "``RegionTwo``".
**NOTE**: When there is a :doc:`deletion policy <deletion_v1>` attached to the
same cluster, that deletion policy will be evaluated after the region
placement policy and is expected to rebase its candidate selection on the
region distribution enforced here. For example, if the deletion policy is
tasked to select the oldest nodes for deletion, it will adapt its behavior to
select the oldest nodes from each region; the number of nodes chosen from
each region is based on the output from this placement policy.
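For instance, an "oldest first" selection that respects the per-region counts
produced above might look like the following sketch (``node.region`` and
``node.created_at`` are illustrative attribute names, not necessarily the
real ones)::

  def select_victims(nodes, plan):
      # plan: e.g. {'RegionOne': 2, 'RegionTwo': 1} as produced by this policy
      victims = []
      for region, num in plan.items():
          in_region = [n for n in nodes if n.region == region]
          in_region.sort(key=lambda n: n.created_at)   # oldest first
          victims.extend(in_region[:num])
      return victims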
S2: ``CLUSTER_SCALE_OUT``
-------------------------
The placement policy first checks if there are policy decisions from other
policies by looking into the ``creation`` field of the action's ``data``
property. If there is such a field, the policy attempts to extract the
``count`` value from the ``creation`` field. If the ``count`` value is not
found, 1 is assumed to be the default.
If, however, the policy fails to find the ``creation`` field, it checks
whether there is a ``count`` field in the action's ``inputs`` property. If
there is, the policy uses that value; otherwise it falls back to 1 as the
default node count.
After the policy has found out the ``count`` value (i.e. the number of nodes
to be created), it validates the list of region names provided to the policy
and extracts the current distribution of nodes among those regions.
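A minimal sketch of these two lookups is shown below (the count resolution
mirrors scenario *S1*; ``node.region`` is again an illustrative attribute
name)::

  def _get_count_for_scale_out(action):
      # Prefer a decision already recorded under action.data['creation'];
      # otherwise fall back to the raw inputs, defaulting to 1.
      if 'creation' in action.data:
          return action.data['creation'].get('count', 1)
      return action.inputs.get('count', 1)

  def current_distribution(nodes, regions):
      # Count the existing nodes living in each of the known-good regions.
      dist = dict((r, 0) for r in regions)
      for node in nodes:
          if node.region in dist:
              dist[node.region] += 1
      return dist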
If, for some reason, none of the provided names passes validation, the policy
check fails with the following data recorded in the action's ``data``
property:
::

  {
    "status": "ERROR",
    "reason": "No region is found usable."
  }
The logic of generating a distribution plan is almost identical to what has
been described in scenario *S1*, except for the output format. When there is
a feasible plan to accommodate all nodes, the plan is saved into the ``data``
property of the action as shown in the following example:
::

  {
    "status": "OK",
    "creation": {
      "count": 3,
      "regions": {
        "RegionOne": 1,
        "RegionTwo": 2
      }
    }
  }
This means that, in total, 3 nodes should be added to the cluster. Among them,
1 node should be created in region "``RegionOne``" and the remaining 2 should
be created in region "``RegionTwo``".
S3: ``CLUSTER_RESIZE``
----------------------
The placement policy first checks if there are policy decisions from other
policies by looking into the ``creation`` field of the action's ``data``
property. If there is such a field, the policy extracts the ``count`` value
from it. If the ``creation`` field is not found, the policy checks whether
there is a ``deletion`` field in the action's ``data`` property. If there is,
the policy extracts the ``count`` value from the ``deletion`` field. If
neither ``creation`` nor ``deletion`` is found in the action's ``data``
property, the policy proceeds to parse the raw inputs of the action.
The output from the parser may indicate an invalid combination of input
values. If that is the case, the policy check fails with the action's
``data`` set to something like the following example:
::

  {
    "status": "ERROR",
    "reason": <error message from the parser>
  }
If the parser successfully parses the action's raw inputs, the policy again
checks whether there is a ``creation`` or a ``deletion`` field in the action's
``data`` property, and uses the ``count`` value from whichever field is found
as the number of nodes to be handled.
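Putting this together, the decision flow for ``CLUSTER_RESIZE`` can be
sketched as follows (``parse_resize_params`` stands in for the engine's input
parser; its name and return value are made up for illustration, and a negative
result denotes a deletion)::

  def _get_count_for_resize(action, cluster):
      if 'creation' in action.data:
          return action.data['creation']['count']
      if 'deletion' in action.data:
          return -action.data['deletion']['count']
      # Neither field present: parse the raw inputs first.
      error = parse_resize_params(action, cluster)
      if error:
          return None   # invalid input combination; fail the policy check
      # The parser is assumed to record its outcome back into action.data.
      if 'creation' in action.data:
          return action.data['creation']['count']
      return -action.data['deletion']['count']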
When the placement policy has found out the number of nodes to create (or
delete), it proceeds to calculate a distribution plan. If the action is about
growing the size of the cluster, the logic and the output format are the same
as those outlined in scenario *S2*. Otherwise, the logic and the output
format are identical to those described in scenario *S1*.

View File

@ -141,7 +141,7 @@ S3: Cross-region or Cross-AZ Scaling
When scaling a cluster across multiple regions or multiple availability zones,
the scaling policy will be evaluated before the
:doc:`region placement policy <region_placement_v1>` or the
:doc:`region placement policy <region_v1>` or the
:doc:`zone placement policy <zone_v1>` respectively. Based on
builtin priority settings, checking of this scaling policy always happen
before the region placement policy or the zone placement policy.

View File

@ -223,7 +223,7 @@ For built-in policy types, the protocol is documented below:
policies/affinity_v1
policies/deletion_v1
policies/load_balance_v1
policies/region_placement_v1
policies/region_v1
policies/scaling_v1
policies/zone_v1

View File

@ -13,40 +13,8 @@
"""
Policy for scheduling nodes across multiple regions.
NOTE: How placement policy works
Input:
cluster: cluster whose nodes are to be manipulated.
action.data['creation']:
- count: number of nodes to create; it can be decision from a scaling
policy. If no scaling policy is in effect, the count will be
assumed to be 1.
action.data['deletion']:
- count: number of nodes to delete. It can be a decision from a scaling
policy. If there is no scaling policy in effect, we assume the
count value to be 1.
Output:
action.data: A dictionary containing scheduling decisions made.
For actions that increase the size of a cluster, the output will look like::
{
'status': 'OK',
'creation': {
'count': 2,
'regions': {'RegionOne': 1, 'RegionTwo': 1}
}
}
For actions that shrink the size of a cluster, the output will look like::
{
'status': 'OK',
'deletion': {
'count': 3,
'regions': {'RegionOne': 1, 'RegionTwo': 2}
}
}
NOTE: For full documentation about how the policy works, check:
http://docs.openstack.org/developer/senlin/developer/policies/region_v1.html
"""
import math
@ -79,8 +47,7 @@ class RegionPlacementPolicy(base.Policy):
]
PROFILE_TYPE = [
'os.nova.server-1.0',
'os.heat.stack-1.0',
'ANY'
]
KEYS = (

View File

@ -13,40 +13,8 @@
"""
Policy for scheduling nodes across availability zones.
NOTE: How this policy works
Input:
cluster: cluster whose nodes are to be manipulated.
action.data['creation']:
- count: number of nodes to create. It can be decision from a scaling
policy. If no scaling policy is in effect, the count will be
assumed to be 1.
action.data['deleteion']:
- count: number of nodes to delete. It can be decision from a scaling
policy. If no scaling policy is in effect, the count will be
assumed to be 1.
Output:
action.data: A dictionary containing scheduling decisions made.
For actions that increase the size of a cluster, the output looks like::
{
'status': 'OK',
'creation': {
'count': 2,
'zones': {'nova-1': 1, 'nova-2': 1}
}
}
For actions that decrease the size of a cluster, the output looks like::
{
'status': 'OK',
'deletion': {
'count': 3,
'zones': {'nova-1': 2, 'nova-2': 1}
}
}
NOTE: For full documentation about how the policy works, check:
http://docs.openstack.org/developer/senlin/developer/policies/zone_v1.html
"""
import math