Capture initial set of Use Cases

Pulling from the etherpad at the Denver 2019 PTG and inserting into
the template format. Much more detail can be fleshed out, but this
gives a framework to start.

Change-Id: If15093cb80f1230f3a626253676c88d162cbb2b2
Story: 2005751
Task: 33433
Joseph Davis 2019-05-29 14:13:44 -07:00 committed by Adam Spiers
parent 9b1840cd65
commit 8e2ce29cdf
4 changed files with 368 additions and 7 deletions


@@ -1,13 +1,43 @@
Use cases
=========

The Auto-Scaling SIG intends to gather use cases for many different
auto-scaling scenarios in OpenStack. Not all use cases will be
immediately available, and some may never be implemented, but
collecting all of the possible use cases will aid communication and
build a picture of what is possible.

Compute resource scaling
------------------------

.. toctree::
   :glob:
   :maxdepth: 1

   use-cases/*

Control plane scaling
---------------------

While the possibility of scaling the Control Plane was discussed at the
Denver PTG, there are currently no defined use cases or requirements.
Contributions are welcome.

Other scaling
-------------

Similar to Control Plane scaling, there may be other categories of
scaling which could be handled by an Orchestration Engine but have yet
to be defined.

History
-------

An initial list of use cases was captured at the Denver 2019 PTG:
https://etherpad.openstack.org/p/DEN-auto-scaling-SIG

How to submit a new use case
----------------------------

When submitting a new use case, please use ``use-cases/template.rst``
as a starting point.


@@ -0,0 +1,125 @@
..
This template is intended to encourage a certain level of
consistency between different use cases. Adherence to the structure
of this template is recommended but not strictly required.
This template should be written in reStructuredText. For help with syntax,
see <http://sphinx-doc.org/rest.html>. To test out your formatting,
see <http://www.tele3.cz/jbar/rest/rest.html>.
====================================================
Auto-scale Compute based on Consumption of Resources
====================================================

As a user of OpenStack I want to define a logical group of compute
resources which are increased or decreased automatically based on the
consumption of discrete physical resources within my instances, for
example CPU or memory utilization, disk IOPS, etc.
Problem description
===================
..
A more detailed description of the auto-scaling scenario;
however it is not advised to duplicate details covered in the
sections below. If the problem is not too complex, it may be more
appropriate to simply delete this section and provide the details in
the sections below.
In many ways, this is the basic use case for auto-scaling in OpenStack.
An OpenStack user, whether admin or operator, defines logic for triggering
auto-scaling based on the consumption of resources in the cloud.
The resources could be CPU cycles, storage, memory, network, or a combination.
The logic will typically define both scale-up and scale-down thresholds,
and include an upper and lower bound for scaling.
An orchestration engine for the cloud will be instructed to perform the
scale-up or scale-down operations as specified.

This use case was called out at the Denver 2019 PTG:
https://etherpad.openstack.org/p/DEN-auto-scaling-SIG
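A minimal Heat (HOT) sketch of such a setup is shown below: a scaling
group with upper and lower bounds plus explicit scale-up and scale-down
policies. The flavor and image names are placeholders, and the threshold
logic that signals the policies is defined separately (see the alarm
examples later in this document).

.. code-block:: yaml

   heat_template_version: 2017-02-24

   description: >
     Sketch of a compute scaling group with upper and lower bounds
     and explicit scale-up and scale-down policies.

   resources:
     app_group:
       type: OS::Heat::AutoScalingGroup
       properties:
         min_size: 1            # lower bound for scaling
         max_size: 5            # upper bound for scaling
         resource:
           type: OS::Nova::Server
           properties:
             flavor: m1.small   # placeholder flavor
             image: cirros      # placeholder image

     scale_up_policy:
       type: OS::Heat::ScalingPolicy
       properties:
         adjustment_type: change_in_capacity
         auto_scaling_group_id: { get_resource: app_group }
         scaling_adjustment: 1
         cooldown: 300

     scale_down_policy:
       type: OS::Heat::ScalingPolicy
       properties:
         adjustment_type: change_in_capacity
         auto_scaling_group_id: { get_resource: app_group }
         scaling_adjustment: -1
         cooldown: 300

   outputs:
     scale_up_url:
       description: Signal this URL to add one instance to the group
       value: { get_attr: [scale_up_policy, alarm_url] }
     scale_down_url:
       description: Signal this URL to remove one instance from the group
       value: { get_attr: [scale_down_policy, alarm_url] }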
OpenStack projects used
=======================
..
Please provide a list of projects (OpenStack and otherwise) which
may be used in order to implement this use case. If no
implementation exists yet, suggestions are sufficient here.
* ...
* ...
Inputs and decision-making
==========================
..
Describe how decisions about when/how to auto-scale are taken. In
particular list any other components or inputs which may provide
additional context to help determine the correct action.
The resource monitoring used to determine auto-scaling can be based on
any monitored metric. A few classic examples of metrics are:

* Overall CPU percentage on a host of virtual machines
* CPU percentage utilization averaged across a group of virtual machines
  (see https://github.com/sjamgade/monasca-autoscaling/blob/master/autoscaling.yaml
  for an example)
* System memory average utilization percentage
* Storage or disk space utilization percentage

In most cases, a high and a low threshold are defined for the metric,
corresponding to scale-up and scale-down actions respectively.
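As a hedged illustration, the two thresholds could be expressed as
Monasca alarm definitions in the same Heat template as the scaling
group sketched above; the metric name, dimension, and threshold values
are placeholders, and the notification resources are the ones sketched
under Existing implementation(s) below.

.. code-block:: yaml

   # Fragment of the resources section; metric name, dimension and
   # thresholds are illustrative. The notifications it references are
   # sketched under "Existing implementation(s)".
   cpu_high_alarm:
     type: OS::Monasca::AlarmDefinition
     properties:
       name: cpu-high
       description: Average CPU above the scale-up threshold
       expression: "avg(cpu.utilization_perc{scale_group=my_asg}) > 80"
       severity: high
       alarm_actions:
         - { get_resource: scale_up_notification }

   cpu_low_alarm:
     type: OS::Monasca::AlarmDefinition
     properties:
       name: cpu-low
       description: Average CPU below the scale-down threshold
       expression: "avg(cpu.utilization_perc{scale_group=my_asg}) < 20"
       severity: low
       alarm_actions:
         - { get_resource: scale_down_notification }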
Auto-scaling
============
..
Describe how the auto-scaling may occur. If there may be different
approaches available, please list them all.
Existing implementation(s)
==========================
..
If there are one or more existing implementations of this use case,
please give as many details as possible, in order that operators can
re-implement the use case in their own clouds. However any
information is better than no information! Linking to external
documents is perfectly acceptable.
* Monasca and Heat

  Monasca can monitor the physical resources and generate alarms if
  resource usage exceeds a threshold. The alarm notification can then
  trigger a Heat scaling policy to scale the topology appropriately;
  a sketch of the webhook wiring is shown below.

* <TODO: record other options>
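The following is a hedged fragment showing that wiring: Monasca webhook
notifications whose addresses are the signed alarm URLs exposed by the
Heat scaling policies sketched earlier. Resource names are assumptions
carried over from the earlier fragments.

.. code-block:: yaml

   # Fragment of the resources section; pairs with the alarm and
   # scaling policy fragments earlier in this document.
   scale_up_notification:
     type: OS::Monasca::Notification
     properties:
       name: scale-up
       type: webhook
       # When the alarm transitions, Monasca POSTs to this signed URL
       # and Heat performs the scale-up adjustment.
       address: { get_attr: [scale_up_policy, alarm_url] }

   scale_down_notification:
     type: OS::Monasca::Notification
     properties:
       name: scale-down
       type: webhook
       address: { get_attr: [scale_down_policy, alarm_url] }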
Future work
===========
..
Please link from here to any relevant specs. If a cross-project
spec is required, it can be placed under ../specs/ in this
repository.
Please also make sure that any linked specs contain back-links
to this use case for maximum discoverability.
Dependencies
============
..
- Include specific references to specs and/or blueprints in
auto-scaling-sig, or in other projects, that this one either depends
on or is related to.
- Does this feature require any new library dependencies or code
otherwise not included in OpenStack? Or does it depend on a specific
version of library?


@@ -0,0 +1,98 @@
============================================================
Auto-scale Compute based on Custom Metrics from Applications
============================================================
As a user of OpenStack I want to define a logical group of compute
resources which are increased or decreased automatically based on
metrics which are emitted from an application running in the cloud.
Problem description
===================
..
A more detailed description of the auto-scaling scenario;
however it is not advised to duplicate details covered in the
sections below. If the problem is not too complex, it may be more
appropriate to simply delete this section and provide the details in
the sections below.
An application developer will define metrics which their application
can emit to a Monitoring Service. This could be a report of requests
per second from a web application, or information about the
distribution of objects reported by Swift. The Monitoring Service
would generate alarms based on the metrics, or report them to a
Decision Service for analysis. Commands would then be passed to an
Orchestration Engine to scale up or down.

This use case was called out at the Denver 2019 PTG:
https://etherpad.openstack.org/p/DEN-auto-scaling-SIG
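As a hedged illustration with Monasca, an application-specific metric
can be alarmed on just like any built-in metric and wired to a scaling
action; the metric name, threshold, and notification resource below
are hypothetical.

.. code-block:: yaml

   # Sketch only: "myapp.requests_per_sec" is a hypothetical metric the
   # application (or an agent plugin) would emit to Monasca, and
   # scale_out_notification is assumed to be a webhook notification
   # pointing at a Heat scaling policy, as in the resource-consumption
   # use case.
   request_rate_high:
     type: OS::Monasca::AlarmDefinition
     properties:
       name: myapp-rps-high
       description: Application request rate above the scale-out threshold
       expression: "avg(myapp.requests_per_sec) > 500"
       severity: high
       alarm_actions:
         - { get_resource: scale_out_notification }

A matching low-threshold alarm would drive the scale-in side in the
same way.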
OpenStack projects used
=======================
..
Please provide a list of projects (OpenStack and otherwise) which
may be used in order to implement this use case. If no
implementation exists yet, suggestions are sufficient here.
* ...
* ...
Inputs and decision-making
==========================
..
Describe how decisions about when/how to auto-scale are taken. In
particular list any other components or inputs which may provide
additional context to help determine the correct action.
Auto-scaling
============
..
Describe how the auto-scaling may occur. If there may be different
approaches available, please list them all.
Existing implementation(s)
==========================
..
If there are one or more existing implementations of this use case,
please give as many details as possible, in order that operators can
re-implement the use case in their own clouds. However any
information is better than no information! Linking to external
documents is perfectly acceptable.
* Monasca and Heat

  Monasca can receive metrics from an application and generate alarms
  if a metric exceeds a threshold. The alarm notification can then
  trigger a Heat scaling policy to scale the topology appropriately.

* <TODO: record other options>
Future work
===========
..
Please link from here to any relevant specs. If a cross-project
spec is required, it can be placed under ../specs/ in this
repository.
Please also make sure that any linked specs contain back-links
to this use case for maximum discoverability.
Dependencies
============
..
- Include specific references to specs and/or blueprints in
auto-scaling-sig, or in other projects, that this one either depends
on or is related to.
- Does this feature require any new library dependencies or code
otherwise not included in OpenStack? Or does it depend on a specific
version of library?


@@ -0,0 +1,108 @@
============================================
Auto-scale Compute to Balance Resource Usage
============================================
* As a deployer and operator of OpenStack I want to be able to
  configure highly available auto-scaling services with Free Open
  Source Software.

* As an operator of OpenStack I want to be able to add additional
  compute nodes to my cluster from a pool of available bare metal
  inventory automatically, in response to resource consumption within
  my cloud.

* As an operator of OpenStack I want to be able to remove compute
  nodes from my cluster and return them to the pool of available bare
  metal inventory in response to an excess of available compute
  resources within my cloud.

* As an app deployer I want to automatically scale in one app to free
  up physical infrastructure to scale out another app which needs the
  resources more. More generally, I want to scale various apps
  in/out/up/down based on load/priority/custom policy, subject to some
  global resource constraints.
Problem description
===================
* Global constraints: as an app deployer I want to automatically scale
  in one app to free up physical infrastructure to scale out another
  app which needs the resources more. More generally, I want to scale
  various apps in/out/up/down based on load/priority/custom policy,
  subject to some global resource constraints.

  * Sort of like pre-emptible resources?
  * Yes, but maybe more dynamic and with more levels of priority. One
    workload may be high priority under one load condition, but become
    low priority under a different load condition.
  * Interesting! So each auto-scaling group would have some concept of
    priority and timeframe (critical from 0900-1700, medium priority
    from 1800-2000, low priority from 2100-0800)?
  * Could be something like that. Here's a more concrete example (see
    the sketch below):

    * I have two apps, A and B. Both apps are monitored for request
      completion time.
    * App A has the targets: good: 0-10 ms; ok: 10-30 ms; bad: > 30 ms.
    * App B has the targets: good: 0-100 ms; ok: 100-500 ms; bad: > 500 ms.
    * Based on the current load condition and request completion time,
      I want to allocate the physical compute resources between the two
      apps according to some optimization criteria.
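A hedged sketch of how the per-app latency targets could be expressed
as Monasca alarm definitions follows; the metric names are hypothetical,
and arbitrating capacity between the two apps under a global constraint
would still require a decision component beyond plain per-group scaling
policies.

.. code-block:: yaml

   # Sketch only: metric names are hypothetical and encode only the
   # "bad" latency targets from the example above. How capacity is
   # shifted between the two groups is left to a separate decision or
   # optimization component.
   app_a_latency_bad:
     type: OS::Monasca::AlarmDefinition
     properties:
       name: app-a-latency-bad
       expression: "avg(app_a.request_time_ms) > 30"
       severity: high

   app_b_latency_bad:
     type: OS::Monasca::AlarmDefinition
     properties:
       name: app-b-latency-bad
       expression: "avg(app_b.request_time_ms) > 500"
       severity: high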

This use case was called out at the Denver 2019 PTG:
https://etherpad.openstack.org/p/DEN-auto-scaling-SIG
OpenStack projects used
=======================
..
Please provide a list of projects (OpenStack and otherwise) which
may be used in order to implement this use case. If no
implementation exists yet, suggestions are sufficient here.
* ...
* ...
Inputs and decision-making
==========================
..
Describe how decisions about when/how to auto-scale are taken. In
particular list any other components or inputs which may provide
additional context to help determine the correct action.
Auto-scaling
============
..
Describe how the auto-scaling may occur. If there may be different
approaches available, please list them all.
Existing implementation(s)
==========================
..
If there are one or more existing implementations of this use case,
please give as many details as possible, in order that operators can
re-implement the use case in their own clouds. However any
information is better than no information! Linking to external
documents is perfectly acceptable.
Future work
===========
..
Please link from here to any relevant specs. If a cross-project
spec is required, it can be placed under ../specs/ in this
repository.
Please also make sure that any linked specs contain back-links
to this use case for maximum discoverability.
Dependencies
============
..
- Include specific references to specs and/or blueprints in
auto-scaling-sig, or in other projects, that this one either depends
on or is related to.
- Does this feature require any new library dependencies or code
otherwise not included in OpenStack? Or does it depend on a specific
version of library?