Added an autoscaling heat template

The template scales the workers both up and down,
depending on the load across all of the workers. The
orchestration text has been modified to describe
how it all hangs together.

Change-Id: I5959a734ecb21476ab6359cacf49317d370cd0a5
Martin Paulo 2015-11-18 14:23:03 +11:00 committed by Diane Fleming
parent 8c93cdaf67
commit f799d51ef1
2 changed files with 589 additions and 38 deletions

firstapp/samples/heat/faafo_autoscaling_workers.yaml

@ -0,0 +1,278 @@
heat_template_version: 2014-10-16
description: |
A template that starts the faafo application with auto-scaling workers
parameters:
key_name:
type: string
description: Name of an existing keypair to enable SSH access to the instances
default: id_rsa
constraints:
- custom_constraint: nova.keypair
description: Must already exist on your cloud
flavor:
type: string
description: The flavor that the application uses
constraints:
- custom_constraint: nova.flavor
description: Must be a valid flavor provided by your cloud provider.
image_id:
type: string
description: The ID of the image to use to create the instance
constraints:
- custom_constraint: glance.image
description: Must be a valid image on your cloud
period:
type: number
description: The period to use to calculate the ceilometer statistics (in seconds)
default: 60
faafo_source:
type: string
description: The location of the faafo application install script on the Internet
# allows you to clone and play with the faafo code if you like
default: https://git.openstack.org/cgit/openstack/faafo/plain/contrib/install.sh
resources:
api:
type: OS::Neutron::SecurityGroup
properties:
description: "For ssh and http on an api node"
rules: [
{remote_ip_prefix: 0.0.0.0/0,
protocol: tcp,
port_range_min: 22,
port_range_max: 22},
{remote_ip_prefix: 0.0.0.0/0,
protocol: tcp,
port_range_min: 80,
port_range_max: 80},]
worker:
type: OS::Neutron::SecurityGroup
properties:
description: "For ssh on a worker node"
rules: [
{remote_ip_prefix: 0.0.0.0/0,
protocol: tcp,
port_range_min: 22,
port_range_max: 22},]
services:
type: OS::Neutron::SecurityGroup
properties:
description: "For ssh, DB and AMPQ on the services node"
rules: [
{remote_ip_prefix: 0.0.0.0/0,
protocol: tcp,
port_range_min: 80,
port_range_max: 80},
{remote_ip_prefix: 0.0.0.0/0,
protocol: tcp,
port_range_min: 22,
port_range_max: 22},
{remote_ip_prefix: 0.0.0.0/0,
protocol: tcp,
port_range_min: 5672,
port_range_max: 5672,
remote_mode: remote_group_id,
remote_group_id: { get_resource: worker } },
{remote_ip_prefix: 0.0.0.0/0,
protocol: tcp,
port_range_min: 5672,
port_range_max: 5672,
remote_mode: remote_group_id,
remote_group_id: { get_resource: api } },
{remote_ip_prefix: 0.0.0.0/0,
protocol: tcp,
port_range_min: 3306,
port_range_max: 3306,
remote_mode: remote_group_id,
remote_group_id: { get_resource: api } },
]
app_services:
# The database and AMQP services run on this instance.
type: OS::Nova::Server
properties:
image: { get_param: image_id }
flavor: { get_param: flavor }
key_name: { get_param: key_name }
name: services
security_groups:
- {get_resource: services}
user_data_format: RAW
user_data:
str_replace:
template: |
#!/usr/bin/env bash
curl -L -s faafo_installer | bash -s -- \
-i database -i messaging
wc_notify --data-binary '{"status": "SUCCESS"}'
params:
wc_notify: { get_attr: ['wait_handle', 'curl_cli'] }
faafo_installer: { get_param: faafo_source }
api_instance:
# The web interface runs on this instance
type: OS::Nova::Server
properties:
image: { get_param: image_id }
flavor: { get_param: flavor }
key_name: { get_param: key_name }
name: api
security_groups:
- {get_resource: api}
user_data_format: RAW
user_data:
str_replace:
template: |
#!/usr/bin/env bash
curl -L -s faafo_installer | bash -s -- \
-i faafo -r api -m 'amqp://guest:guest@services_ip:5672/' \
-d 'mysql+pymysql://faafo:password@services_ip:3306/faafo'
wc_notify --data-binary '{"status": "SUCCESS"}'
params:
wc_notify: { get_attr: ['wait_handle', 'curl_cli'] }
services_ip: { get_attr: [app_services, first_address] }
faafo_installer: { get_param: faafo_source }
worker_auto_scaling_group:
# The worker instances are managed by this auto-scaling group
type: OS::Heat::AutoScalingGroup
properties:
resource:
type: OS::Nova::Server
properties:
key_name: { get_param: key_name }
image: { get_param: image_id }
flavor: { get_param: flavor }
# The metadata used for ceilometer monitoring
metadata: {"metering.stack": {get_param: "OS::stack_id"}}
name: worker
security_groups:
- {get_resource: worker}
user_data_format: RAW
user_data:
str_replace:
template: |
#!/usr/bin/env bash
curl -L -s faafo_installer | bash -s -- \
-i faafo -r worker -e 'http://api_1_ip' -m 'amqp://guest:guest@services_ip:5672/'
wc_notify --data-binary '{"status": "SUCCESS"}'
params:
wc_notify: { get_attr: ['wait_handle', 'curl_cli'] }
api_1_ip: { get_attr: [api_instance, first_address] }
services_ip: { get_attr: [app_services, first_address] }
faafo_installer: { get_param: faafo_source }
min_size: 1
desired_capacity: 1
max_size: 3
wait_handle:
type: OS::Heat::WaitConditionHandle
wait_condition:
type: OS::Heat::WaitCondition
depends_on: [ app_services, api_instance, worker_auto_scaling_group ]
properties:
handle: { get_resource: wait_handle }
# All three initial servers clock in when they finish installing their software
count: 3
# 10 minute limit for installation
timeout: 600
scale_up_policy:
type: OS::Heat::ScalingPolicy
properties:
adjustment_type: change_in_capacity
auto_scaling_group_id: {get_resource: worker_auto_scaling_group}
cooldown: { get_param: period }
scaling_adjustment: 1
scale_down_policy:
type: OS::Heat::ScalingPolicy
properties:
adjustment_type: change_in_capacity
auto_scaling_group_id: {get_resource: worker_auto_scaling_group}
cooldown: { get_param: period }
scaling_adjustment: '-1'
cpu_alarm_high:
type: OS::Ceilometer::Alarm
properties:
description: Scale-up if the average CPU > 90% for period seconds
meter_name: cpu_util
statistic: avg
period: { get_param: period }
evaluation_periods: 1
threshold: 90
alarm_actions:
- {get_attr: [scale_up_policy, alarm_url]}
matching_metadata: {'metadata.user_metadata.stack': {get_param: "OS::stack_id"}}
comparison_operator: gt
cpu_alarm_low:
type: OS::Ceilometer::Alarm
properties:
description: Scale-down if the average CPU < 15% for period seconds
meter_name: cpu_util
statistic: avg
period: { get_param: period }
evaluation_periods: 1
threshold: 15
alarm_actions:
- {get_attr: [scale_down_policy, alarm_url]}
matching_metadata: {'metadata.user_metadata.stack': {get_param: "OS::stack_id"}}
comparison_operator: lt
outputs:
api_url:
description: The URL for the api server
value:
list_join: ['', ['http://', get_attr: [api_instance, first_address]]]
scale_workers_up_url:
description: >
HTTP POST to this webhook URL to scale up the worker group.
It does not accept request headers or a body. Place quotes around the URL.
value: {get_attr: [scale_up_policy, alarm_url]}
scale_workers_down_url:
description: >
HTTP POST to this webhook URL to scale down the worker group.
It does not accept request headers or a body. Place quotes around the URL.
value: {get_attr: [scale_down_policy, alarm_url]}
ceilometer_statistics_query:
value:
str_replace:
template: >
ceilometer statistics -m cpu_util -q metadata.user_metadata.stack=stackval -p period -a avg
params:
stackval: { get_param: "OS::stack_id" }
period: { get_param: period }
description: >
This query shows the cpu_util sample statistics of the worker group in this stack.
These statistics trigger the alarms.
ceilometer_sample_query:
value:
str_replace:
template: >
ceilometer sample-list -m cpu_util -q metadata.user_metadata.stack=stackval
params:
stackval: { get_param: "OS::stack_id" }
description: >
This query shows the cpu_util meter samples of the worker group in this stack.
These samples are used to calculate the statistics.


@ -15,11 +15,8 @@ information, volumes, security groups, and even users. It also provides
more advanced functionality, such as instance high availability,
instance auto-scaling, and nested stacks.
The OpenStack Orchestration API uses the stacks, resources, and templates
constructs.
You create stacks from templates, which contain resources. Resources are an
abstraction in the HOT (Heat Orchestration Template) template language, which
@ -28,10 +25,10 @@ attribute.
For example, you might use the Orchestration API to create two compute
instances by creating a stack and by passing a template to the Orchestration
API. That template contains two resources with the :code:`type` attribute set
to :code:`OS::Nova::Server`.
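For example, a minimal sketch of such a two-instance template might look like
this (the resource names and the image and flavor values are illustrative only):

::

   heat_template_version: 2014-10-16

   description: A sketch of a template that creates two compute instances

   resources:
     server_one:
       type: OS::Nova::Server
       properties:
         image: cirros-0.3.4   # hypothetical image name
         flavor: m1.tiny       # hypothetical flavor name
     server_two:
       type: OS::Nova::Server
       properties:
         image: cirros-0.3.4
         flavor: m1.tiny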
That example is simplistic, of course, but the flexibility of the resource
object enables the creation of templates that contain all the required cloud
infrastructure to run an application, such as load balancers, block storage
volumes, compute instances, networking topology, and security policies.
@ -44,15 +41,14 @@ This section introduces the
`HOT templating language <http://docs.openstack.org/developer/heat/template_guide/hot_guide.html>`_,
and takes you through some common OpenStack Orchestration calls.
In previous sections, you used your SDK to programmatically interact with
OpenStack. In this section, you use the 'heat' command-line client to access
the Orchestration API directly through template files.
Install the 'heat' command-line client by following this guide:
http://docs.openstack.org/cli-reference/content/install_clients.html
Use this guide to set up the necessary variables for your cloud in an 'openrc' file:
http://docs.openstack.org/cli-reference/content/cli_openrc.html
.. only:: dotnet
@ -98,15 +94,16 @@ Work with stacks: Basics
**Stack create**
The
`hello_faafo <https://git.openstack.org/cgit/openstack/api-site/plain/firstapp/samples/heat/hello_faafo.yaml>`_ HOT template demonstrates
how to create a compute instance that builds and runs the Fractal application
as an all-in-one installation.
You pass in these configuration settings as parameters:
- The flavor
- Your ssh key name
- The unique identifier (UUID) of the image
::
@ -119,7 +116,7 @@ passed in as parameters:
| 0db2c026-fb9a-4849-b51d-b1df244096cd | hello_faafo | CREATE_IN_PROGRESS | 2015-04-01T03:20:25Z |
+--------------------------------------+-------------+--------------------+----------------------+
The stack automatically creates a Nova instance, as follows:
::
@ -130,7 +127,7 @@ The resulting stack automatically creates a Nova instance, as follows:
| 9bdf0e2f-415e-43a0-90ea-63a5faf86cf9 | hello_faafo-server-dwmwhzfxgoor | ACTIVE | - | Running | private=10.0.0.3 |
+--------------------------------------+---------------------------------+--------+------------+-------------+------------------+
Verify that the stack was successfully created:
::
@ -142,21 +139,21 @@ Use the following command to verify that the stack was successfully created:
+--------------------------------------+-------------+-----------------+----------------------+
The stack reports an initial :code:`CREATE_IN_PROGRESS` status. When all
software is installed, the status changes to :code:`CREATE_COMPLETE`.
You might have to run the :code:`stack-list` command a few times before
the stack creation is complete.
**Show information about the stack**
Get more information about the stack:
::
$ heat stack-show hello_faafo
The `outputs` property shows the URL through which you can access the Fractal
application. You can SSH into the instance.
**Remove the stack**
@ -179,30 +176,306 @@ Verify the nova instance was deleted when the stack was removed:
+----+------+--------+------------+-------------+----------+
+----+------+--------+------------+-------------+----------+
While this stack starts a single instance that builds and runs the Fractal
application as an all-in-one installation, you can make very complicated
templates that impact dozens of instances or that add and remove instances on
demand. Continue to the next section to learn more.
Work with stacks: Advanced
With the Orchestration API, the Fractal application can create an auto-scaling
group for all parts of the application to dynamically provision more compute
resources during periods of heavy utilization, and also terminate compute
instances to scale down as demand decreases.
To learn about auto-scaling with the Orchestration API, read these articles:
* http://superuser.openstack.org/articles/simple-auto-scaling-environment-with-heat
* http://superuser.openstack.org/articles/understanding-openstack-heat-auto-scaling
For an example template that creates an auto-scaling WordPress instance, see
`the heat template repository <https://github.com/openstack/heat-templates/blob/master/hot/autoscaling.yaml>`_.
Initially, the focus is on scaling the workers because they consume the most
resources.
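In the template shown earlier, the workers live inside an
:code:`OS::Heat::AutoScalingGroup` that starts with a single instance and can
grow to three. The relevant excerpt, reindented here for readability, is:

::

   worker_auto_scaling_group:
     type: OS::Heat::AutoScalingGroup
     properties:
       min_size: 1
       desired_capacity: 1
       max_size: 3
       resource:
         type: OS::Nova::Server
         properties:
           # The metadata used for ceilometer monitoring
           metadata: {"metering.stack": {get_param: "OS::stack_id"}}
           # image, flavor, key_name, security_groups, and user_data are as in
           # the full template above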
The example template depends on the ceilometer project, which is part of the
`Telemetry service <https://wiki.openstack.org/wiki/Telemetry>`_.
.. note:: The Telemetry service is not deployed by default in every cloud.
If the ceilometer commands do not work, this example does not work;
ask your support team for assistance.
To better understand how the template works, use this guide to install the
'ceilometer' command-line client:
* http://docs.openstack.org/cli-reference/content/install_clients.html
To set up the necessary variables for your cloud in an 'openrc' file, use this
guide:
* http://docs.openstack.org/cli-reference/content/cli_openrc.html
The Telemetry service uses meters to measure a given aspect of a resource's
usage. The meter that we are interested in is the :code:`cpu_util` meter.
The value of a meter is regularly sampled and saved with a timestamp.
These saved samples are aggregated to produce a statistic. The statistic that
we are interested in is **avg**: the average of the samples over a given period.
These statistics matter because the Telemetry service supports alarms: an alarm
fires when the average statistic breaches a configured threshold. When the
alarm fires, an associated action is performed.
The stack that you build in this section uses these alarms to control the
addition and removal of worker instances.
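In the template shown earlier, each alarm's :code:`alarm_actions` entry points
at the webhook URL of a scaling policy, and each policy adjusts the worker
auto-scaling group. The scale-up pair, reindented here for readability, is:

::

   scale_up_policy:
     type: OS::Heat::ScalingPolicy
     properties:
       adjustment_type: change_in_capacity
       auto_scaling_group_id: {get_resource: worker_auto_scaling_group}
       cooldown: { get_param: period }
       scaling_adjustment: 1

   cpu_alarm_high:
     type: OS::Ceilometer::Alarm
     properties:
       meter_name: cpu_util
       statistic: avg
       period: { get_param: period }
       evaluation_periods: 1
       threshold: 90
       comparison_operator: gt
       alarm_actions:
         - {get_attr: [scale_up_policy, alarm_url]}
       matching_metadata: {'metadata.user_metadata.stack': {get_param: "OS::stack_id"}}

The scale-down policy and :code:`cpu_alarm_low` mirror this pair, with a
scaling adjustment of -1 and a threshold of 15.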
To verify that ceilometer is installed, list the known meters:
::
$ ceilometer meter-list
This command returns a very long list of meters. Once a meter is created, it
is never thrown away!
Launch the stack with auto-scaling workers:
::
$ wget https://git.openstack.org/cgit/openstack/api-site/plain/firstapp/samples/heat/faafo_autoscaling_workers.yaml
$ heat stack-create --template-file faafo_autoscaling_workers.yaml \
--parameters flavor=m1.small\;key_name=test\;image_id=5bbe4073-90c0-4ec9-833c-092459cc4539 \
faafo_autoscaling_workers
+--------------------------------------+---------------------------+--------------------+----------------------+
| id | stack_name | stack_status | creation_time |
+--------------------------------------+---------------------------+--------------------+----------------------+
| 0db2c026-fb9a-4849-b51d-b1df244096cd | faafo_autoscaling_workers | CREATE_IN_PROGRESS | 2015-11-17T05:12:06Z |
+--------------------------------------+---------------------------+--------------------+----------------------+
As before, pass in configuration settings as parameters.
And as before, the stack takes a few minutes to build!
Wait for it to reach the :code:`CREATE_COMPLETE` status:
::
$ heat stack-list
+--------------------------------------+---------------------------+-----------------+----------------------+
| id | stack_name | stack_status | creation_time |
+--------------------------------------+---------------------------+-----------------+----------------------+
| 0db2c026-fb9a-4849-b51d-b1df244096cd | faafo_autoscaling_workers | CREATE_COMPLETE | 2015-11-17T05:12:06Z |
+--------------------------------------+---------------------------+-----------------+----------------------+
Run the :code:`nova list` command. This template created three instances:
::
$ nova list
+--------------------------------------+----------+--------+------------+-------------+----------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+----------+--------+------------+-------------+----------------------+
| 0de89b0a-5bfd-497b-bfa2-c13f6ef7a67e | api | ACTIVE | - | Running | public=115.146.89.75 |
| a6b9b334-e8ba-4c56-ab53-cacfc6f3ad43 | services | ACTIVE | - | Running | public=115.146.89.74 |
| 10122bfb-881b-4122-9955-7e801dfc5a22 | worker | ACTIVE | - | Running | public=115.146.89.80 |
+--------------------------------------+----------+--------+------------+-------------+----------------------+
Note that the worker instance is part of an :code:`OS::Heat::AutoScalingGroup`.
Confirm that the stack created two alarms:
::
$ ceilometer alarm-list
+--------------------------------------+---------------------------------------+-------+----------+---------+------------+--------------------------------+------------------+
| Alarm ID | Name | State | Severity | Enabled | Continuous | Alarm condition | Time constraints |
+--------------------------------------+---------------------------------------+-------+----------+---------+------------+--------------------------------+------------------+
| 2bc8433f-9f8a-4c2c-be88-d841d9de1506 | testFaafo-cpu_alarm_low-torkcwquons4 | ok | low | True | True | cpu_util < 15.0 during 1 x 60s | None |
| 7755cc9a-26f3-4e2b-a9af-a285ec8524da | testFaafo-cpu_alarm_high-qqtbvk36l6nq | ok | low | True | True | cpu_util > 90.0 during 1 x 60s | None |
+--------------------------------------+---------------------------------------+-------+----------+---------+------------+--------------------------------+------------------+
.. note:: If either alarm reports the :code:`insufficient data` state, the
default sampling period of the stack is probably too low for your
cloud; ask your support team for assistance. You can set the
period through the :code:`period` parameter of the stack to match your
cloud's requirements.
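If you need a longer period, one way to supply it is an environment file that
you pass to :code:`heat stack-create` with the :code:`-e` option (a sketch; the
file name is arbitrary):

::

   # env.yaml -- a hypothetical environment file
   # Use it with: heat stack-create -e env.yaml --template-file faafo_autoscaling_workers.yaml ...
   parameters:
     period: 600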
Use the stack ID to get more information about the stack:
::
$ heat stack-show 0db2c026-fb9a-4849-b51d-b1df244096cd
The outputs section of the stack contains two ceilometer command-line queries:
* :code:`ceilometer_sample_query`: shows the samples used to build the statistics.
* :code:`ceilometer_statistics_query`: shows the statistics used to trigger the alarms.
These queries provide a view into the behavior of the stack.
In a new terminal window, SSH into the 'api' instance. Use the key pair whose
name you passed in as a parameter.
::
$ ssh -i ~/.ssh/test USERNAME@IP_API
In your SSH session, confirm that no fractals were generated:
::
$ faafo list
2015-11-18 11:07:20.464 8079 INFO faafo.client [-] listing all fractals
+------+------------+----------+
| UUID | Dimensions | Filesize |
+------+------------+----------+
+------+------------+----------+
Then, create a pair of large fractals:
::
$ faafo create --height 9999 --width 9999 --tasks 2
In the terminal window where you run the ceilometer commands, run the
:code:`ceilometer_sample_query` command to see the samples.
::
$ ceilometer sample-list -m cpu_util -q metadata.user_metadata.stack=0db2c026-fb9a-4849-b51d-b1df244096cd
+--------------------------------------+----------+-------+----------------+------+---------------------+
| Resource ID | Name | Type | Volume | Unit | Timestamp |
+--------------------------------------+----------+-------+----------------+------+---------------------+
| 10122bfb-881b-4122-9955-7e801dfc5a22 | cpu_util | gauge | 100.847457627 | % | 2015-11-18T00:15:50 |
| 10122bfb-881b-4122-9955-7e801dfc5a22 | cpu_util | gauge | 82.4754098361 | % | 2015-11-18T00:14:51 |
| 10122bfb-881b-4122-9955-7e801dfc5a22 | cpu_util | gauge | 0.45 | % | 2015-11-18T00:13:50 |
| 10122bfb-881b-4122-9955-7e801dfc5a22 | cpu_util | gauge | 0.466666666667 | % | 2015-11-18T00:12:50 |
+--------------------------------------+----------+-------+----------------+------+---------------------+
The CPU utilization across workers increases as workers start to create the fractals.
Run the :code:`ceilometer_statistics_query` command to see the derived statistics.
::
$ ceilometer statistics -m cpu_util -q metadata.user_metadata.stack=0db2c026-fb9a-4849-b51d-b1df244096cd -p 60 -a avg
+--------+---------------------+---------------------+----------------+----------+---------------------+---------------------+
| Period | Period Start | Period End | Avg | Duration | Duration Start | Duration End |
+--------+---------------------+---------------------+----------------+----------+---------------------+---------------------+
| 60 | 2015-11-18T00:12:45 | 2015-11-18T00:13:45 | 0.466666666667 | 0.0 | 2015-11-18T00:12:50 | 2015-11-18T00:12:50 |
| 60 | 2015-11-18T00:13:45 | 2015-11-18T00:14:45 | 0.45 | 0.0 | 2015-11-18T00:13:50 | 2015-11-18T00:13:50 |
| 60 | 2015-11-18T00:14:45 | 2015-11-18T00:15:45 | 82.4754098361 | 0.0 | 2015-11-18T00:14:51 | 2015-11-18T00:14:51 |
| 60 | 2015-11-18T00:15:45 | 2015-11-18T00:16:45 | 100.847457627 | 0.0 | 2015-11-18T00:15:50 | 2015-11-18T00:15:50 |
+--------+---------------------+---------------------+----------------+----------+---------------------+---------------------+
.. note:: The samples and the statistics are listed in opposite time order!
See the state of the alarms set up by the template:
::
$ ceilometer alarm-list
+--------------------------------------+---------------------------------------+-------+----------+---------+------------+--------------------------------+------------------+
| Alarm ID | Name | State | Severity | Enabled | Continuous | Alarm condition | Time constraints |
+--------------------------------------+---------------------------------------+-------+----------+---------+------------+--------------------------------+------------------+
| 56c3022e-f23c-49ad-bf59-16a6875f3bdf | testFaafo-cpu_alarm_low-miw5tmomewot | ok | low | True | True | cpu_util < 15.0 during 1 x 60s | None |
| 70ff7b00-d56d-4a43-bbb2-e18952ae6605 | testFaafo-cpu_alarm_high-ffhsmylfzx43 | alarm | low | True | True | cpu_util > 90.0 during 1 x 60s | None |
+--------------------------------------+---------------------------------------+-------+----------+---------+------------+--------------------------------+------------------+
Run the :code:`nova list` command to confirm that the
:code:`OS::Heat::AutoScalingGroup` has created more instances:
::
$ nova list
+--------------------------------------+----------+--------+------------+-------------+----------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+----------+--------+------------+-------------+----------------------+
| 0de89b0a-5bfd-497b-bfa2-c13f6ef7a67e | api | ACTIVE | - | Running | public=115.146.89.96 |
| a6b9b334-e8ba-4c56-ab53-cacfc6f3ad43 | services | ACTIVE | - | Running | public=115.146.89.95 |
| 10122bfb-881b-4122-9955-7e801dfc5a22 | worker | ACTIVE | - | Running | public=115.146.89.97 |
| 31e7c020-c37c-4311-816b-be8afcaef8fa | worker | ACTIVE | - | Running | public=115.146.89.99 |
| 3fff2489-488c-4458-99f1-0cc50363ae33 | worker | ACTIVE | - | Running | public=115.146.89.98 |
+--------------------------------------+----------+--------+------------+-------------+----------------------+
Now, wait until all the fractals are generated and the instances have idled
for some time.
Run the :code:`nova list` command to confirm that the
:code:`OS::Heat::AutoScalingGroup` removed the unneeded instances:
::
$ nova list
+--------------------------------------+----------+--------+------------+-------------+----------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+----------+--------+------------+-------------+----------------------+
| 0de89b0a-5bfd-497b-bfa2-c13f6ef7a67e | api | ACTIVE | - | Running | public=115.146.89.96 |
| a6b9b334-e8ba-4c56-ab53-cacfc6f3ad43 | services | ACTIVE | - | Running | public=115.146.89.95 |
| 3fff2489-488c-4458-99f1-0cc50363ae33 | worker | ACTIVE | - | Running | public=115.146.89.98 |
+--------------------------------------+----------+--------+------------+-------------+----------------------+
.. note:: The :code:`OS::Heat::AutoScalingGroup` removes instances in creation order.
So the worker instance that was created first is the first instance
to be removed.
The outputs section of the stack contains two webhook URLs:
* :code:`scale_workers_up_url`: A POST to this URL adds worker instances.
* :code:`scale_workers_down_url`: A POST to this URL removes worker instances.
Posting to these URLs demonstrates how the ceilometer alarms add and remove instances.
To use them:
::
$ curl -X POST "Put the very long URL from the template outputs section between these quotes"
To recap:
The auto-scaling stack sets up an API instance, a services instance, and an
auto-scaling group with a single worker instance. It also sets up ceilometer
alarms that add worker instances to the auto-scaling group when it is under
load and remove instances when the group is idle. To do this, the alarms
post to URLs.
In this template, the alarms use metadata that is attached to each worker
instance. The metadata here takes the :code:`metering.stack=stack_id` form; in
general, any key with the `metering.` prefix works, for example `metering.some_name`.
::
$ nova show <instance_id>
...
| metadata | {"metering.some_name": "some_value"} |
...
You can aggregate samples and calculate statistics across all instances that
have the `metering.some_name` metadata set to `some_value` by using a query of
the form:
::
-q metadata.user_metadata.some_name=some_value
For example:
::
$ ceilometer sample-list -m cpu_util -q metadata.user_metadata.some_name=some_value
$ ceilometer statistics -m cpu_util -q metadata.user_metadata.some_name=some_value -p 60
The alarms have the form:
::
matching_metadata: {'metadata.user_metadata.stack': {get_param: "OS::stack_id"}}
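The servers in the worker auto-scaling group attach the corresponding metadata
(excerpt from the template above):

::

   metadata: {"metering.stack": {get_param: "OS::stack_id"}}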
Spend some time playing with the stack and the Fractal app to see how it works.
.. note:: The message queue can take a while to notice that worker instances have died.
Next steps
----------