[admin-guide] Fix rst markups for telemetry project files

In this patch:
1. Fix RST markups.
2. Update a config-ref link in one file from the Kilo version to the
Liberty version.

Change-Id: I832b61b0c605002653a8e8abb5c206c7c6197e8f
This commit is contained in:
venkatamahesh 2015-12-15 00:16:09 +05:30
parent 84fce47ab4
commit 40535aa092
9 changed files with 549 additions and 459 deletions


@ -143,6 +143,7 @@ minute periods. The notification in this case is simply a log message,
though it could alternatively be a webhook URL.
.. note::
Alarm names must be unique for the alarms associated with an
individual project.
The cloud administrator can limit the maximum resulting actions
@ -163,16 +164,17 @@ meter. This period matching illustrates an important general
principle to keep in mind for alarms:
.. note::
The alarm period should be a whole number multiple (1 or more)
of the interval configured in the pipeline corresponding to the
target meter.
Otherwise the alarm will tend to flit in and out of the
``insufficient data`` state due to the mismatch between the actual
frequency of datapoints in the metering store and the statistics
queries used to compare against the alarm threshold. If a shorter
alarm period is needed, then the corresponding interval should be
adjusted in the :file:`pipeline.yaml` file.
adjusted in the ``pipeline.yaml`` file.
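As a minimal sketch (the meter, source, and sink names are illustrative, not from the original), matching the alarm period to the pipeline interval might look like this in ``pipeline.yaml``:

.. code-block:: yaml

   # Hypothetical source polling the cpu meter every 600 seconds; an
   # alarm on a cpu-derived meter should then use a period of 600 or a
   # whole-number multiple such as 1200.
   sources:
     - name: cpu_source
       interval: 600
       meters:
         - "cpu"
       sinks:
         - cpu_sink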
Other notable alarm attributes that may be set on creation, or via a
subsequent update, include:
@ -186,11 +188,11 @@ description
enabled
True if evaluation and actioning is to be enabled for this alarm
(defaults to True).
(defaults to ``True``).
repeat-actions
True if actions should be repeatedly notified while the alarm
remains in the target state (defaults to False).
remains in the target state (defaults to ``False``).
ok-action
An action to invoke when the alarm state transitions to ``ok``.
@ -250,6 +252,7 @@ could indicate that:
minute).
.. note::
The visibility of alarms depends on the role and project
associated with the user issuing the query:
@ -316,4 +319,5 @@ or even deleted permanently (an irreversible step):
$ ceilometer alarm-delete ALARM_ID
.. note::
By default, alarm history is retained for deleted alarms.


@ -11,13 +11,13 @@ Data collection
#. The Telemetry service collects a continuously growing set of data. Not
all the data will be relevant for a cloud administrator to monitor.
- Based on your needs, you can edit the :file:`pipeline.yaml` configuration
- Based on your needs, you can edit the ``pipeline.yaml`` configuration
file to include a selected number of meters while disregarding the
rest.
- By default, the Telemetry service polls the service APIs every 10
minutes. You can change the polling interval on a per meter basis by
editing the :file:`pipeline.yaml` configuration file.
editing the ``pipeline.yaml`` configuration file.
.. warning::
@ -36,7 +36,7 @@ Data collection
polling requests by enabling the jitter support. This adds a random
delay on how the polling agents send requests to the service APIs. To
enable jitter, set ``shuffle_time_before_polling_task`` in the
:file:`ceilometer.conf` configuration file to an integer greater
``ceilometer.conf`` configuration file to an integer greater
than 0.
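A hedged sketch of this setting, assuming it belongs in the ``[DEFAULT]`` section of ``ceilometer.conf`` and using an illustrative delay of 60 seconds:

.. code-block:: ini

   [DEFAULT]
   # Delay each polling task by a random amount of up to 60 seconds so
   # that agents do not hit the service APIs at the same moment.
   shuffle_time_before_polling_task = 60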
#. If you are using Juno or later releases, based on the number of
@ -68,22 +68,22 @@ Data storage
For example, this open-ended query might return an unpredictable amount
of data:
::
.. code-block:: console
$ ceilometer sample-list --meter cpu -q 'resource_id=INSTANCE_ID_1'
$ ceilometer sample-list --meter cpu -q 'resource_id=INSTANCE_ID_1'
Whereas, this well-formed query returns a more reasonable amount of
data, hence better performance:
::
.. code-block:: console
$ ceilometer sample-list --meter cpu -q 'resource_id=INSTANCE_ID_1;timestamp > 2015-05-01T00:00:00;timestamp < 2015-06-01T00:00:00'
$ ceilometer sample-list --meter cpu -q 'resource_id=INSTANCE_ID_1;timestamp > 2015-05-01T00:00:00;timestamp < 2015-06-01T00:00:00'
.. note::
As of the Liberty release, the number of items returned will be
restricted to the value defined by ``default_api_return_limit`` in the
:file:`ceilometer.conf` configuration file. Alternatively, the value can
``ceilometer.conf`` configuration file. Alternatively, the value can
be set per query by passing the ``limit`` option in the request.
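A minimal sketch, assuming the option is placed in the ``[api]`` section and using an illustrative limit of 100:

.. code-block:: ini

   [api]
   # Cap the number of items a single API query may return by default.
   default_api_return_limit = 100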
#. You can install the API behind ``mod_wsgi``, as it provides more


@ -167,22 +167,23 @@ each OpenStack service that are transformed to samples by Telemetry.
above table for each OpenStack service that needs it.
Specific notifications from the Compute service are important for
administrators and users. Configuring nova_notifications in the
:file:`nova.conf` file allows administrators to respond to events
administrators and users. Configuring ``nova_notifications`` in the
``nova.conf`` file allows administrators to respond to events
rapidly. For more information on configuring notifications for the
compute service, see
`Chapter 11 on Telemetry services <http://docs.openstack.org/
`Telemetry services <http://docs.openstack.org/
liberty/install-guide-ubuntu/ceilometer-nova.html>`__ in the
OpenStack Installation Guide.
.. note::
When the ``store_events`` option is set to True in
:file:`ceilometer.conf`, Prior to the Kilo release, the notification agent
Prior to the Kilo release, the notification agent needed database
access in order to work properly when the ``store_events`` option was
set to ``True`` in ``ceilometer.conf``.
Middleware for the OpenStack Object Storage service
---------------------------------------------------
A subset of Object Store statistics requires additional middleware to
be installed behind the proxy of Object Store. This additional component
emits notifications containing data-flow-oriented meters, namely the
@ -197,6 +198,7 @@ section in the OpenStack Installation Guide.
Telemetry middleware
--------------------
Telemetry provides the capability of counting the HTTP requests and
responses for each API endpoint in OpenStack. This is achieved by
storing a sample for each event marked as ``audit.http.request``,
@ -210,6 +212,7 @@ notifications.
Polling
~~~~~~~
The Telemetry service is intended to store a complex picture of the
infrastructure. This goal requires more information than is
provided by the events and notifications published by each service. Some
@ -224,34 +227,39 @@ Telemetry uses an agent based architecture to fulfill the requirements
against the data collection.
There are three types of agents supporting the polling mechanism, the
compute agent, the central agent, and the IPMI agent. Under the hood,
all the types of polling agents are the same ``ceilometer-polling`` agent,
except that they load different polling plug-ins (pollsters) from
different namespaces to gather data. The following subsections give
further information regarding the architectural and configuration
details of these components.
``compute agent``, the ``central agent``, and the ``IPMI agent``. Under
the hood, all the types of polling agents are the same
``ceilometer-polling`` agent, except that they load different polling
plug-ins (pollsters) from different namespaces to gather data. The following
subsections give further information regarding the architectural and
configuration details of these components.
Running ceilometer-agent-compute is exactly the same as::
Running :command:`ceilometer-agent-compute` is exactly the same as:
$ ceilometer-polling --polling-namespaces compute
.. code-block:: console
$ ceilometer-polling --polling-namespaces compute
Running ceilometer-agent-central is exactly the same as::
Running :command:`ceilometer-agent-central` is exactly the same as:
$ ceilometer-polling --polling-namespaces central
.. code-block:: console
$ ceilometer-polling --polling-namespaces central
Running ceilometer-agent-ipmi is exactly the same as::
Running :command:`ceilometer-agent-ipmi` is exactly the same as:
$ ceilometer-polling --polling-namespaces ipmi
.. code-block:: console
$ ceilometer-polling --polling-namespaces ipmi
In addition to loading all the polling plug-ins registered in the
specified namespaces, the ceilometer-polling agent can also specify the
polling plug-ins to be loaded by using the ``pollster-list`` option::
specified namespaces, the ``ceilometer-polling`` agent can also specify the
polling plug-ins to be loaded by using the ``pollster-list`` option:
$ ceilometer-polling --polling-namespaces central \
--pollster-list image image.size storage.*
.. code-block:: console
$ ceilometer-polling --polling-namespaces central \
--pollster-list image image.size storage.*
.. note::
@ -260,10 +268,11 @@ polling plug-ins to be loaded by using the ``pollster-list`` option::
.. note::
The ceilometer-polling service is available since Kilo release.
The ``ceilometer-polling`` service is available since the Kilo release.
Central agent
-------------
This agent is responsible for polling public REST APIs to retrieve additional
information on OpenStack resources not already surfaced via notifications,
and also for polling hardware resources over SNMP.
@ -296,6 +305,7 @@ processed.
Compute agent
-------------
This agent is responsible for collecting resource usage data of VM
instances on individual compute nodes within an OpenStack deployment.
This mechanism requires a closer interaction with the hypervisor,
@ -330,6 +340,7 @@ each hypervisor supported by the Telemetry service.
IPMI agent
----------
This agent is responsible for collecting IPMI sensor data and Intel Node
Manager data on individual compute nodes within an OpenStack deployment.
This agent requires an IPMI capable node with the ipmitool utility installed,
@ -352,9 +363,9 @@ The list of collected meters can be found in
.. note::
Do not deploy both the IPMI agent and the Bare metal service on one
compute node. If ``conductor.send_sensor_data`` is set, this
misconfiguration causes duplicated IPMI sensor samples.
.. _ha-deploy-services:
@ -386,20 +397,21 @@ You must configure a supported Tooz driver for the HA deployment of the
Telemetry services.
For information about the required configuration options that have to be
set in the :file:`ceilometer.conf` configuration file for both the central
set in the ``ceilometer.conf`` configuration file for both the central
and compute agents, see the `Coordination section
<http://docs.openstack.org/liberty/config-reference/content/ch_configuring-openstack-telemetry.html>`__
in the OpenStack Configuration Reference.
Notification agent HA deployment
--------------------------------
In the Kilo release, workload partitioning support was added to the
notification agent. This is particularly useful as the pipeline processing
is handled exclusively by the notification agent now, which may result
in a larger amount of load.
To enable workload partitioning by notification agent, the ``backend_url``
option must be set in the :file:`ceilometer.conf` configuration file.
option must be set in the ``ceilometer.conf`` configuration file.
Additionally, ``workload_partitioning`` should be enabled in the
`Notification section <http://docs.openstack.org/liberty/config-reference/content/ch_configuring-openstack-telemetry.html>`__ in the OpenStack Configuration Reference.
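A hedged sketch of these two settings in ``ceilometer.conf`` (the Redis host name is hypothetical, and the section names are assumptions based on the option descriptions):

.. code-block:: ini

   [coordination]
   # Tooz back end used to coordinate the notification agents.
   backend_url = redis://controller:6379

   [notification]
   # Divide pipeline processing across all active notification agents.
   workload_partitioning = True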
@ -407,7 +419,7 @@ Additionally, ``workload_partitioning`` should be enabled in the
In Liberty, the notification agent creates multiple queues to divide the
workload across all active agents. The number of queues can be controlled by
the ``pipeline_processing_queues`` option in the :file:`ceilometer.conf`
the ``pipeline_processing_queues`` option in the ``ceilometer.conf``
configuration file. A larger value will result in better distribution of
tasks but will also require more memory and longer startup time. It is
recommended to have a value approximately three times the number of active
@ -447,7 +459,7 @@ in the OpenStack Configuration Reference.
For each sub-group of the central agent pool with the same
``partitioning_group_prefix``, a disjoint subset of meters must be
polled, otherwise samples may be missing or duplicated. The list of
meters to poll can be set in the :file:`/etc/ceilometer/pipeline.yaml`
meters to poll can be set in the ``/etc/ceilometer/pipeline.yaml``
configuration file. For more information about pipelines see
:ref:`data-collection-and-processing`.
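As an illustrative sketch (the prefix value is hypothetical, and the section name is an assumption), one sub-group of central agents could be defined as:

.. code-block:: ini

   [central]
   # Agents configured with the same prefix form one sub-group; each
   # sub-group must poll a disjoint set of meters.
   partitioning_group_prefix = snmp-pollers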
@ -455,11 +467,12 @@ To enable the compute agent to run multiple instances simultaneously
with workload partitioning, the ``workload_partitioning`` option has to
be set to ``True`` under the `Compute section
<http://docs.openstack.org/liberty/config-reference/content/ch_configuring-openstack-telemetry.html>`__
in the :file:`ceilometer.conf` configuration file.
in the ``ceilometer.conf`` configuration file.
Send samples to Telemetry
~~~~~~~~~~~~~~~~~~~~~~~~~
While most parts of the data collection in the Telemetry service are
automated, Telemetry provides the possibility to submit samples via the
REST API to allow users to send custom samples into this service.
@ -475,14 +488,14 @@ POST request.
If the sample corresponds to an existing meter, then the fields like
``meter-type`` and the meter name should match accordingly.
The required fields for sending a sample using the command line client
The required fields for sending a sample using the command-line client
are:
- ID of the corresponding resource. (``--resource-id``)
- ID of the corresponding resource. (:option:`--resource-id`)
- Name of meter. (``--meter-name``)
- Name of meter. (:option:`--meter-name`)
- Type of meter. (``--meter-type``)
- Type of meter. (:option:`--meter-type`)
Predefined meter types:
@ -492,41 +505,44 @@ are:
- Cumulative
- Unit of meter. (``--meter-unit``)
- Unit of meter. (:option:`--meter-unit`)
- Volume of sample. (``--sample-volume``)
- Volume of sample. (:option:`--sample-volume`)
To send samples to Telemetry using the command line client, the
following command should be invoked::
To send samples to Telemetry using the command-line client, the
following command should be invoked:
$ ceilometer sample-create -r 37128ad6-daaa-4d22-9509-b7e1c6b08697 \
-m memory.usage --meter-type gauge --meter-unit MB --sample-volume 48
+-------------------+--------------------------------------------+
| Property | Value |
+-------------------+--------------------------------------------+
| message_id | 6118820c-2137-11e4-a429-08002715c7fb |
| name | memory.usage |
| project_id | e34eaa91d52a4402b4cb8bc9bbd308c1 |
| resource_id | 37128ad6-daaa-4d22-9509-b7e1c6b08697 |
| resource_metadata | {} |
| source | e34eaa91d52a4402b4cb8bc9bbd308c1:openstack |
| timestamp | 2014-08-11T09:10:46.358926 |
| type | gauge |
| unit | MB |
| user_id | 679b0499e7a34ccb9d90b64208401f8e |
| volume | 48.0 |
+-------------------+--------------------------------------------+
.. code-block:: console
$ ceilometer sample-create -r 37128ad6-daaa-4d22-9509-b7e1c6b08697 \
-m memory.usage --meter-type gauge --meter-unit MB --sample-volume 48
+-------------------+--------------------------------------------+
| Property | Value |
+-------------------+--------------------------------------------+
| message_id | 6118820c-2137-11e4-a429-08002715c7fb |
| name | memory.usage |
| project_id | e34eaa91d52a4402b4cb8bc9bbd308c1 |
| resource_id | 37128ad6-daaa-4d22-9509-b7e1c6b08697 |
| resource_metadata | {} |
| source | e34eaa91d52a4402b4cb8bc9bbd308c1:openstack |
| timestamp | 2014-08-11T09:10:46.358926 |
| type | gauge |
| unit | MB |
| user_id | 679b0499e7a34ccb9d90b64208401f8e |
| volume | 48.0 |
+-------------------+--------------------------------------------+
.. _data-collection-and-processing:
Data collection and processing
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The mechanism by which data is collected and processed is called a
pipeline. Pipelines, at the configuration level, describe a coupling
between sources of data and the corresponding sinks for transformation
and publication of data.
A source is a producer of data: samples or events. In effect, it is a
A source is a producer of data: ``samples`` or ``events``. In effect, it is a
set of pollsters or notification handlers emitting datapoints for a set
of matching meters and event types.
@ -541,11 +557,11 @@ meter may be needed for performance tuning every minute.
.. warning::
Rapid polling cadences should be avoided, as they result in a huge
amount of data in a short time frame, which may negatively affect
the performance of both Telemetry and the underlying database back
end. We therefore strongly recommend you do not use small
granularity values like 10 seconds.
A sink, on the other hand, is a consumer of data, providing logic for
the transformation and publication of data emitted from related sources.
@ -562,8 +578,8 @@ next step that is described in :ref:`telemetry-publishers`.
Pipeline configuration
----------------------
Pipeline configuration by default, is stored in separate configuration
files, called :file:`pipeline.yaml` and :file:`event_pipeline.yaml`, next to
the :file:`ceilometer.conf` file. The meter pipeline and event pipeline
files, called ``pipeline.yaml`` and ``event_pipeline.yaml``, next to
the ``ceilometer.conf`` file. The meter pipeline and event pipeline
configuration files can be set by the ``pipeline_cfg_file`` and
``event_pipeline_cfg_file`` options listed in the `Description of
configuration options for api table
@ -571,23 +587,25 @@ configuration options for api table
section in the OpenStack Configuration Reference respectively. Multiple
pipelines can be defined in one pipeline configuration file.
The meter pipeline definition looks like the following::
The meter pipeline definition looks like:
---
sources:
- name: 'source name'
interval: 'how often should the samples be injected into the pipeline'
meters:
- 'meter filter'
resources:
- 'list of resource URLs'
sinks
- 'sink name'
sinks:
- name: 'sink name'
transformers: 'definition of transformers'
publishers:
- 'list of publishers'
.. code-block:: yaml

   ---
   sources:
     - name: 'source name'
       interval: 'how often should the samples be injected into the pipeline'
       meters:
         - 'meter filter'
       resources:
         - 'list of resource URLs'
       sinks:
         - 'sink name'
   sinks:
     - name: 'sink name'
       transformers: 'definition of transformers'
       publishers:
         - 'list of publishers'
The interval parameter in the sources section should be defined in
seconds. It determines the polling cadence of sample injection into the
@ -612,15 +630,15 @@ meters, with which a source should operate:
syntax.
- For meters, which have variants identified by a complex name
field, use the wildcard symbol to select all, e.g. for
"instance:m1.tiny", use "instance:\*".
field, use the wildcard symbol to select all, for example,
for ``instance:m1.tiny``, use ``instance:\*``.
.. note::
Please be aware that there is no duplication check between
pipelines; if you add a meter to multiple pipelines, the duplication
is assumed to be intentional and the samples may be stored multiple
times according to the specified sinks.
The above definition methods can be used in the following combinations:
@ -634,10 +652,10 @@ The above definition methods can be used in the following combinations:
.. note::
At least one of the above variations should be included in the
meters section. Included and excluded meters cannot co-exist in the
same pipeline. Wildcard and included meters cannot co-exist in the
same pipeline definition section.
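The allowed combinations above can be sketched as follows (the meter, source, and sink names are illustrative); this source pairs a wildcard with excluded meters:

.. code-block:: yaml

   sources:
     - name: disk_source
       interval: 600
       meters:
         # Wildcard plus excluded meters: poll every disk meter except
         # write requests.
         - "disk.*"
         - "!disk.write.requests"
       sinks:
         - disk_sink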
The optional resources section of a pipeline source allows a static list
of resource URLs to be configured for polling.
@ -664,19 +682,21 @@ add a list of transformer definitions. The available transformers are:
The publishers section contains the list of publishers, where the
samples data should be sent after the possible transformations.
Similarly, the event pipeline definition looks like the following::
Similarly, the event pipeline definition looks like:
---
sources:
- name: 'source name'
events:
- 'event filter'
sinks
- 'sink name'
sinks:
- name: 'sink name'
publishers:
- 'list of publishers'
.. code-block:: yaml

   ---
   sources:
     - name: 'source name'
       events:
         - 'event filter'
       sinks:
         - 'sink name'
   sinks:
     - name: 'sink name'
       publishers:
         - 'list of publishers'
The event filter uses the same filtering logic as the meter pipeline.
@ -698,39 +718,43 @@ source and target fields with different subfields in case of the rate of
change, which depends on the implementation of the transformer.
In the case of the transformer that creates the ``cpu_util`` meter, the
definition looks like the following::
definition looks like:
transformers:
- name: "rate_of_change"
parameters:
target:
name: "cpu_util"
unit: "%"
type: "gauge"
scale: "100.0 / (10**9 * (resource_metadata.cpu_number or 1))"
.. code-block:: yaml

   transformers:
     - name: "rate_of_change"
       parameters:
         target:
           name: "cpu_util"
           unit: "%"
           type: "gauge"
           scale: "100.0 / (10**9 * (resource_metadata.cpu_number or 1))"
The rate of change transformer generates the ``cpu_util`` meter
from the sample values of the ``cpu`` counter, which represents
cumulative CPU time in nanoseconds. The transformer definition above
defines a scale factor (for nanoseconds and multiple CPUs), which is
applied before the transformation derives a sequence of gauge samples
with unit '%', from sequential values of the ``cpu`` meter.
with unit ``%``, from sequential values of the ``cpu`` meter.
The definition for the disk I/O rate, which is also generated by the
rate of change transformer::
rate of change transformer:
transformers:
- name: "rate_of_change"
parameters:
source:
map_from:
name: "disk\\.(read|write)\\.(bytes|requests)"
unit: "(B|request)"
target:
map_to:
name: "disk.\\1.\\2.rate"
unit: "\\1/s"
type: "gauge"
.. code-block:: yaml

   transformers:
     - name: "rate_of_change"
       parameters:
         source:
           map_from:
             name: "disk\\.(read|write)\\.(bytes|requests)"
             unit: "(B|request)"
         target:
           map_to:
             name: "disk.\\1.\\2.rate"
             unit: "\\1/s"
           type: "gauge"
**Unit conversion transformer**
@ -738,29 +762,33 @@ Transformer to apply a unit conversion. It takes the volume of the meter
and multiplies it with the given ``scale`` expression. Also supports
``map_from`` and ``map_to`` like the rate of change transformer.
Sample configuration::
Sample configuration:
transformers:
- name: "unit_conversion"
parameters:
target:
name: "disk.kilobytes"
unit: "KB"
scale: "volume * 1.0 / 1024.0"
.. code-block:: yaml

   transformers:
     - name: "unit_conversion"
       parameters:
         target:
           name: "disk.kilobytes"
           unit: "KB"
           scale: "volume * 1.0 / 1024.0"
transformers:
- name: "unit_conversion"
parameters:
source:
map_from:
name: "disk\\.(read|write)\\.bytes"
target:
map_to:
name: "disk.\\1.kilobytes"
scale: "volume * 1.0 / 1024.0"
unit: "KB"
With ``map_from`` and ``map_to``:
.. code-block:: yaml
transformers:
- name: "unit_conversion"
parameters:
source:
map_from:
name: "disk\\.(read|write)\\.bytes"
target:
map_to:
name: "disk.\\1.kilobytes"
scale: "volume * 1.0 / 1024.0"
unit: "KB"
**Aggregator transformer**
@ -780,41 +808,49 @@ first sample's attribute, last to take the last sample's attribute, and
drop to discard the attribute).
To aggregate 60s worth of samples by ``resource_metadata`` and keep the
``resource_metadata`` of the latest received sample::
``resource_metadata`` of the latest received sample:
transformers:
- name: "aggregator"
parameters:
retention_time: 60
resource_metadata: last
.. code-block:: yaml

   transformers:
     - name: "aggregator"
       parameters:
         retention_time: 60
         resource_metadata: last
To aggregate every 15 samples by ``user_id`` and ``resource_metadata`` and keep
the ``user_id`` of the first received sample and drop the
``resource_metadata``::
``resource_metadata``:
transformers:
- name: "aggregator"
parameters:
size: 15
user_id: first
resource_metadata: drop
.. code-block:: yaml

   transformers:
     - name: "aggregator"
       parameters:
         size: 15
         user_id: first
         resource_metadata: drop
**Accumulator transformer**
This transformer simply caches the samples until enough samples have
arrived and then flushes them all down the pipeline at once::
arrived and then flushes them all down the pipeline at once:
transformers:
- name: "accumulator"
parameters:
size: 15
.. code-block:: yaml

   transformers:
     - name: "accumulator"
       parameters:
         size: 15
**Multi meter arithmetic transformer**
This transformer enables us to perform arithmetic calculations over one
or more meters and/or their metadata, for example::
or more meters and/or their metadata, for example:
memory_util = 100 * memory.usage / memory
.. code-block:: none

   memory_util = 100 * memory.usage / memory
A new sample is created with the properties described in the ``target``
section of the transformer's configuration. The sample's
@ -823,49 +859,55 @@ performed on samples from the same resource.
.. note::
The calculation is limited to meters with the same interval.
Example configuration::
Example configuration:
transformers:
- name: "arithmetic"
parameters:
target:
name: "memory_util"
unit: "%"
type: "gauge"
expr: "100 * $(memory.usage) / $(memory)"
.. code-block:: yaml

   transformers:
     - name: "arithmetic"
       parameters:
         target:
           name: "memory_util"
           unit: "%"
           type: "gauge"
           expr: "100 * $(memory.usage) / $(memory)"
To demonstrate the use of metadata, here is the implementation of a
silly meter that shows average CPU time per core::
silly meter that shows average CPU time per core:
transformers:
- name: "arithmetic"
parameters:
target:
name: "avg_cpu_per_core"
unit: "ns"
type: "cumulative"
expr: "$(cpu) / ($(cpu).resource_metadata.cpu_number or 1)"
.. code-block:: yaml

   transformers:
     - name: "arithmetic"
       parameters:
         target:
           name: "avg_cpu_per_core"
           unit: "ns"
           type: "cumulative"
           expr: "$(cpu) / ($(cpu).resource_metadata.cpu_number or 1)"
.. note::
Expression evaluation gracefully handles NaNs and exceptions. In
such a case it does not create a new sample but only logs a warning.
**Delta transformer**
This transformer calculates the change between two sample datapoints of a
resource. It can be configured to capture only the positive growth deltas.
Example configuration::
Example configuration:
transformers:
- name: "delta"
parameters:
target:
name: "cpu.delta"
growth_only: True
.. code-block:: yaml

   transformers:
     - name: "delta"
       parameters:
         target:
           name: "cpu.delta"
         growth_only: True
.. _telemetry-meter-definitions:
@ -874,35 +916,37 @@ Meter definitions
The Telemetry service collects a subset of the meters by filtering
notifications emitted by other OpenStack services. Starting with the Liberty
release, you can find the meter definitions in a separate configuration file,
called :file:`ceilometer/meter/data/meter.yaml`. This enables
called ``ceilometer/meter/data/meter.yaml``. This enables
operators/administrators to add new meters to the Telemetry project by updating
the :file:`meter.yaml` file without any need for additional code changes.
the ``meter.yaml`` file without any need for additional code changes.
.. note::
The :file:`meter.yaml` file should be modified with care. Unless intended
The ``meter.yaml`` file should be modified with care. Unless intended,
do not remove any existing meter definitions from the file. Also, the
collected meters can differ in some cases from what is referenced in the
documentation.
A standard meter definition looks like the following::
A standard meter definition looks like:
---
metric:
- name: 'meter name'
event_type: 'event name'
type: 'type of meter eg: gauge, cumulative or delta'
unit: 'name of unit eg: MB'
volume: 'path to a measurable value eg: $.payload.size'
resource_id: 'path to resouce id eg: $.payload.id'
project_id: 'path to project id eg: $.payload.owner'
.. code-block:: yaml

   ---
   metric:
     - name: 'meter name'
       event_type: 'event name'
       type: 'type of meter eg: gauge, cumulative or delta'
       unit: 'name of unit eg: MB'
       volume: 'path to a measurable value eg: $.payload.size'
       resource_id: 'path to resource id eg: $.payload.id'
       project_id: 'path to project id eg: $.payload.owner'
The definition above shows a simple meter definition with some fields,
from which ``name``, ``event_type``, ``type``, ``unit``, and ``volume``
are required. If there is a match on the event type, samples are generated
for the meter.
If you take a look at the :file:`meter.yaml` file, it contains the sample
If you take a look at the ``meter.yaml`` file, it contains the sample
definitions for all the meters that Telemetry is collecting from
notifications. The value of each field is specified by using json path in
order to find the right value from the notification message. In order to be
@ -914,69 +958,78 @@ the ``size`` information from the payload you can define it like
A notification message may contain multiple meters. You can use ``*`` in
the meter definition to capture all the meters and generate samples
respectively. You can use wild cards as shown in the following example::
respectively. You can use wild cards as shown in the following example:
---
metric:
- name: $.payload.measurements.[*].metric.[*].name
event_type: 'event_name.*'
type: 'delta'
unit: $.payload.measurements.[*].metric.[*].unit
volume: payload.measurements.[*].result
resource_id: $.payload.target
user_id: $.payload.initiator.id
project_id: $.payload.initiator.project_id
.. code-block:: yaml

   ---
   metric:
     - name: $.payload.measurements.[*].metric.[*].name
       event_type: 'event_name.*'
       type: 'delta'
       unit: $.payload.measurements.[*].metric.[*].unit
       volume: payload.measurements.[*].result
       resource_id: $.payload.target
       user_id: $.payload.initiator.id
       project_id: $.payload.initiator.project_id
In the above example, the ``name`` field is a json path matching
a list of meter names defined in the notification message.
You can even use complex operations on json paths. In the following example,
``volume`` and ``resource_id`` fields perform arithmetic
and string concatenation::
and string concatenation:
---
metric:
- name: 'compute.node.cpu.idle.percent'
event_type: 'compute.metrics.update'
type: 'gauge'
unit: 'percent'
volume: payload.metrics[?(@.name='cpu.idle.percent')].value * 100
resource_id: $.payload.host + "_" + $.payload.nodename
.. code-block:: yaml

   ---
   metric:
     - name: 'compute.node.cpu.idle.percent'
       event_type: 'compute.metrics.update'
       type: 'gauge'
       unit: 'percent'
       volume: payload.metrics[?(@.name='cpu.idle.percent')].value * 100
       resource_id: $.payload.host + "_" + $.payload.nodename
You will find some existence meters in the :file:`meter.yaml`. These
You will find some existence meters in the ``meter.yaml``. These
meters have a ``volume`` of ``1`` and are at the bottom of the yaml file
with a note suggesting that these will be removed in the Mitaka release.
For example, the meter definition for existence meters is as follows::
For example, the meter definition for existence meters is as follows:
---
metric:
- name: 'meter name'
type: 'delta'
unit: 'volume'
volume: 1
event_type:
- 'event type'
resource_id: $.payload.volume_id
user_id: $.payload.user_id
project_id: $.payload.tenant_id
.. code-block:: yaml
---
metric:
- name: 'meter name'
type: 'delta'
unit: 'volume'
volume: 1
event_type:
- 'event type'
resource_id: $.payload.volume_id
user_id: $.payload.user_id
project_id: $.payload.tenant_id
These meters are not loaded by default. To load them, set the
``disable_non_metric_meters`` option to ``False`` in the ``ceilometer.conf``
file.
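A minimal sketch of that setting in ``ceilometer.conf`` (section placement follows the stock sample configuration; verify it against your release):

```ini
[DEFAULT]
# Load the existence (non-metric) meters as well
disable_non_metric_meters = False
```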
Block Storage audit script setup to get notifications
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If you want to collect OpenStack Block Storage notifications on demand,
you can use :command:`cinder-volume-usage-audit` from OpenStack Block Storage.
This script becomes available when you install OpenStack Block Storage,
so you can use it without any specific settings and you do not need to
authenticate to access the data. To use it, you must run this command in
the following format:

.. code-block:: console

   $ cinder-volume-usage-audit \
     --start_time='YYYY-MM-DD HH:MM:SS' --end_time='YYYY-MM-DD HH:MM:SS' --send_actions
This script outputs what volumes or snapshots were created, deleted, or
existed in a given period of time and some information about these
@ -994,6 +1047,7 @@ example, every 5 minutes::
Storing samples
~~~~~~~~~~~~~~~
The Telemetry service has a separate service that is responsible for
persisting the data that comes from the pollsters or is received as
notifications. The data can be stored in a file or a database back end,
@ -1009,17 +1063,18 @@ dispatcher.
.. note::

   Multiple dispatchers can be configured for Telemetry at one time.
Multiple ``ceilometer-collector`` processes can be run at a time. Starting
multiple worker threads per collector process is also supported. The
``collector_workers`` configuration option has to be modified in the
`Collector section
<http://docs.openstack.org/liberty/config-reference/content/ch_configuring-openstack-telemetry.html>`__
of the ``ceilometer.conf`` configuration file.
Database dispatcher
-------------------
When the database dispatcher is configured as the data store, you have the
option to set a ``time_to_live`` value (TTL) for samples. By default
the time to live value for samples is set to ``-1``, which means that they
@ -1034,7 +1089,7 @@ database.
Certain databases support native TTL expiration. In cases where this is
not possible, you can use the ``ceilometer-expirer`` command-line script
for this purpose. You can run it in a cron job, which helps to keep
your database in a consistent state.
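For example, a crontab entry (an illustrative sketch; adjust the schedule and the binary path to your deployment) that purges expired samples nightly could look like:

```shell
# Run ceilometer-expirer every night at 01:00 to delete expired samples
0 1 * * * /usr/bin/ceilometer-expirer
```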
The level of support differs depending on the configured back end:
@ -1046,41 +1101,45 @@ The level of support differs in case of the configured back end:
|                    |                   | deleting samples that are older    |
|                    |                   | than the configured ttl value.     |
+--------------------+-------------------+------------------------------------+
| SQL-based back     | Yes               | ``ceilometer-expirer`` has to be   |
| ends               |                   | used for deleting samples and its  |
|                    |                   | related data from the database.    |
+--------------------+-------------------+------------------------------------+
| HBase              | No                | Telemetry's HBase support does not |
|                    |                   | include native TTL nor             |
|                    |                   | ``ceilometer-expirer`` support.    |
+--------------------+-------------------+------------------------------------+
| DB2 NoSQL          | No                | DB2 NoSQL does not have native TTL |
|                    |                   | nor ``ceilometer-expirer``         |
|                    |                   | support.                           |
+--------------------+-------------------+------------------------------------+
HTTP dispatcher
---------------
The Telemetry service supports sending samples to an external HTTP
target. The samples are sent without any modification. To set this
option as the collector's target, the ``dispatcher`` has to be changed
to ``http`` in the ``ceilometer.conf`` configuration file. For the list
of options that you need to set, see the `dispatcher_http
section <http://docs.openstack.org/liberty/config-reference/content/ch_configuring-openstack-telemetry.html>`__
in the OpenStack Configuration Reference.
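A sketch of the relevant ``ceilometer.conf`` settings (the target URL is a placeholder for your own endpoint; verify the option names against your release's ``dispatcher_http`` section):

```ini
[DEFAULT]
dispatcher = http

[dispatcher_http]
# Placeholder endpoint; replace with your own HTTP target
target = http://10.0.0.10:8080/metering
# Seconds to wait for the target to respond
timeout = 5
```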
File dispatcher
---------------
You can store samples in a file by setting the ``dispatcher`` option in the
``ceilometer.conf`` file. For the list of configuration options,
see the `dispatcher_file section
<http://docs.openstack.org/liberty/config-reference/content/ch_configuring-openstack-telemetry.html>`__
in the OpenStack Configuration Reference.
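A sketch of a file-dispatcher configuration (illustrative path and rotation values; option names follow the Liberty ``dispatcher_file`` section):

```ini
[DEFAULT]
dispatcher = file

[dispatcher_file]
# Illustrative destination; samples are appended to this file
file_path = /var/log/ceilometer/samples.log
# Rotate the file after ~10 MB, keeping 5 backups
max_bytes = 10000000
backup_count = 5
```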
Gnocchi dispatcher
------------------
The Telemetry service supports sending the metering data to the Gnocchi
back end through the gnocchi dispatcher. To set this option as the target,
change the ``dispatcher`` to ``gnocchi`` in the ``ceilometer.conf``
configuration file.

For the list of options that you need to set, see the


@ -9,13 +9,14 @@ one or more database back ends, which are hidden by the Telemetry RESTful API.
.. note::

   It is highly recommended not to access the database directly and
   read or modify any data in it. The API layer hides all the changes
   in the actual database schema and provides a standard interface to
   expose the samples, alarms and so forth.
Telemetry v2 API
~~~~~~~~~~~~~~~~
The Telemetry service provides a RESTful API, from which the collected
samples and all the related information can be retrieved, like the list
of meters, alarm definitions and so forth.
@ -31,6 +32,7 @@ the `Telemetry API Reference
Query
-----
The API provides some additional functionalities, like querying the
collected data set. For the samples and alarms API endpoints, both
simple and complex query styles are available, whereas for the other
@ -102,14 +104,14 @@ The following logical operators can be used:
.. note::

   The ``not`` operator has different behavior in MongoDB and in the
   SQLAlchemy-based database engines. If the ``not`` operator is
   applied to a non-existent metadata field, then the result depends on
   the database engine. In case of MongoDB, it will return every sample,
   as the ``not`` operator is evaluated true for every sample where the
   given field does not exist. On the other hand, the SQL-based database
   engine will return an empty result because of the underlying
   ``join`` operation.
Complex query supports specifying a list of ``orderby`` expressions.
This means that the result of the query can be ordered based on the
@ -125,12 +127,13 @@ The ``filter``, ``orderby`` and ``limit`` fields are optional.
.. note::

   As opposed to the simple query, complex query is available via a
   separate API endpoint. For more information see the `Telemetry v2 Web API
   Reference <http://docs.openstack.org/developer/ceilometer/webapi/v2.html#v2-web-api>`__.
Statistics
----------
The sample data can be used in various ways for several purposes, like
billing or profiling. In external systems the data is often used in the
form of aggregated statistics. The Telemetry API provides several
@ -155,7 +158,7 @@ Telemetry supports the following statistics and aggregation functions:
.. note::

   The ``aggregate.param`` option is required.
``count``
   Number of samples in each period.
@ -175,14 +178,15 @@ Telemetry supports the following statistics and aggregation functions:
The simple query and the statistics functionality can be used together
in a single API request.
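For instance, a combined request might look like the following sketch (the meter name, resource ID, and period values are illustrative):

```console
$ ceilometer statistics -m cpu_util \
  -q 'resource=bb52e52b-1e42-4751-b3ac-45c52d83ba07' -p 3600 -a avg
```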
Telemetry command-line client and SDK
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The Telemetry service provides a command-line client, with which the
collected data is accessible, as well as alarm definition and retrieval
options. The client uses the Telemetry RESTful API in order to execute
the requested operations.

To be able to use the :command:`ceilometer` command, the
python-ceilometerclient package needs to be installed and configured
properly. For details about the installation process, see the `Telemetry
chapter <http://docs.openstack.org/liberty/install-guide-ubuntu/ceilometer.html>`__
@ -190,14 +194,14 @@ in the OpenStack Installation Guide.
.. note::

   The Telemetry service captures the user-visible resource usage data.
   Therefore the database will not contain any data without the
   existence of these resources, like VM images in the OpenStack Image
   service.
Similarly to other OpenStack command-line clients, the ``ceilometer``
client uses OpenStack Identity for authentication. The proper
credentials and the ``--auth_url`` parameter have to be defined via command
line parameters or environment variables.
This section provides some examples, without aiming for completeness.
@ -205,7 +209,9 @@ These commands can be used for instance for validating an installation
of Telemetry.
To retrieve the list of collected meters, the following command should
be used:

.. code-block:: console
$ ceilometer meter-list
+------------------------+------------+------+------------------------------------------+----------------------------------+----------------------------------+
@ -222,7 +228,7 @@ be used::
| ... |
+------------------------+------------+------+------------------------------------------+----------------------------------+----------------------------------+
The :command:`ceilometer` command was run with ``admin`` rights, which means
that all the data is accessible in the database. For more information
about access rights see :ref:`telemetry-users-roles-tenants`. As can be seen
in the above example, there are two VM instances existing in the system, as
@ -234,7 +240,7 @@ resource, in an ascending order based on the name of the meter.
Samples are collected for each meter that is present in the list of
meters, except in the case of instances that are not running or deleted from
the OpenStack Compute database. If an instance no longer exists and
there is a ``time_to_live`` value set in the ``ceilometer.conf``
configuration file, then a group of samples is deleted in each
expiration cycle. When the last sample is deleted for a meter, the
database can be cleaned up by running ``ceilometer-expirer`` and the meter
@ -242,12 +248,16 @@ will not be present in the list above anymore. For more information
about the expiration procedure see :ref:`telemetry-storing-samples`.
The Telemetry API supports simple query on the meter endpoint. The query
functionality has the following syntax:

.. code-block:: console

   --query <field1><operator1><value1>;...;<field_n><operator_n><value_n>
The following command needs to be invoked to request the meters of one
VM instance:

.. code-block:: console
$ ceilometer meter-list --query resource=bb52e52b-1e42-4751-b3ac-45c52d83ba07
+-------------------------+------------+-----------+--------------------------------------+----------------------------------+----------------------------------+
@ -274,46 +284,50 @@ VM instance::
As described above, you can either retrieve the whole set of samples
that are stored for a meter, or filter the result set by using one of
the available query types. The request for all the samples of the
``cpu`` meter without any additional filtering looks like the following:

.. code-block:: console

   $ ceilometer sample-list --meter cpu
   +--------------------------------------+-------+------------+------------+------+---------------------+
   | Resource ID                          | Meter | Type       | Volume     | Unit | Timestamp           |
   +--------------------------------------+-------+------------+------------+------+---------------------+
   | c8d2e153-a48f-4cec-9e93-86e7ac6d4b0b | cpu   | cumulative | 5.4863e+11 | ns   | 2014-08-31T11:17:03 |
   | bb52e52b-1e42-4751-b3ac-45c52d83ba07 | cpu   | cumulative | 5.7848e+11 | ns   | 2014-08-31T11:17:03 |
   | c8d2e153-a48f-4cec-9e93-86e7ac6d4b0b | cpu   | cumulative | 5.4811e+11 | ns   | 2014-08-31T11:07:05 |
   | bb52e52b-1e42-4751-b3ac-45c52d83ba07 | cpu   | cumulative | 5.7797e+11 | ns   | 2014-08-31T11:07:05 |
   | c8d2e153-a48f-4cec-9e93-86e7ac6d4b0b | cpu   | cumulative | 5.3589e+11 | ns   | 2014-08-31T10:27:19 |
   | bb52e52b-1e42-4751-b3ac-45c52d83ba07 | cpu   | cumulative | 5.6397e+11 | ns   | 2014-08-31T10:27:19 |
   | ...                                  |       |            |            |      |                     |
   +--------------------------------------+-------+------------+------------+------+---------------------+
The result set of the request contains the samples for both instances
ordered by the timestamp field in the default descending order.
The simple query makes it possible to retrieve only a subset of the
collected samples. The following command can be executed to request the
``cpu`` samples of only one of the VM instances:

.. code-block:: console

   $ ceilometer sample-list --meter cpu --query resource=bb52e52b-1e42-4751-b3ac-45c52d83ba07
   +--------------------------------------+------+------------+------------+------+---------------------+
   | Resource ID                          | Name | Type       | Volume     | Unit | Timestamp           |
   +--------------------------------------+------+------------+------------+------+---------------------+
   | bb52e52b-1e42-4751-b3ac-45c52d83ba07 | cpu  | cumulative | 5.7906e+11 | ns   | 2014-08-31T11:27:08 |
   | bb52e52b-1e42-4751-b3ac-45c52d83ba07 | cpu  | cumulative | 5.7848e+11 | ns   | 2014-08-31T11:17:03 |
   | bb52e52b-1e42-4751-b3ac-45c52d83ba07 | cpu  | cumulative | 5.7797e+11 | ns   | 2014-08-31T11:07:05 |
   | bb52e52b-1e42-4751-b3ac-45c52d83ba07 | cpu  | cumulative | 5.6397e+11 | ns   | 2014-08-31T10:27:19 |
   | bb52e52b-1e42-4751-b3ac-45c52d83ba07 | cpu  | cumulative | 5.6207e+11 | ns   | 2014-08-31T10:17:03 |
   | bb52e52b-1e42-4751-b3ac-45c52d83ba07 | cpu  | cumulative | 5.3831e+11 | ns   | 2014-08-31T08:41:57 |
   | ...                                  |      |            |            |      |                     |
   +--------------------------------------+------+------------+------------+------+---------------------+
As can be seen in the output above, the result set contains samples
for only one of the two instances.

The :command:`ceilometer query-samples` command is used to execute rich
queries. This command accepts the following parameters:
``--filter``
@ -333,30 +347,32 @@ For more information about complex queries see
As the complex query functionality provides the possibility of using
complex operators, it is possible to retrieve a subset of samples for a
given VM instance. To request the first six samples for the ``cpu``
and ``disk.read.bytes`` meters, the following command should be invoked:

.. code-block:: console

   $ ceilometer query-samples --filter '{"and": \
     [{"=":{"resource":"bb52e52b-1e42-4751-b3ac-45c52d83ba07"}},{"or":[{"=":{"counter_name":"cpu"}}, \
     {"=":{"counter_name":"disk.read.bytes"}}]}]}' --orderby '[{"timestamp":"asc"}]' --limit 6
   +--------------------------------------+-----------------+------------+------------+------+---------------------+
   | Resource ID                          | Meter           | Type       | Volume     | Unit | Timestamp           |
   +--------------------------------------+-----------------+------------+------------+------+---------------------+
   | bb52e52b-1e42-4751-b3ac-45c52d83ba07 | disk.read.bytes | cumulative | 385334.0   | B    | 2014-08-30T13:00:46 |
   | bb52e52b-1e42-4751-b3ac-45c52d83ba07 | cpu             | cumulative | 1.2132e+11 | ns   | 2014-08-30T13:00:47 |
   | bb52e52b-1e42-4751-b3ac-45c52d83ba07 | cpu             | cumulative | 1.4295e+11 | ns   | 2014-08-30T13:10:51 |
   | bb52e52b-1e42-4751-b3ac-45c52d83ba07 | disk.read.bytes | cumulative | 601438.0   | B    | 2014-08-30T13:10:51 |
   | bb52e52b-1e42-4751-b3ac-45c52d83ba07 | disk.read.bytes | cumulative | 601438.0   | B    | 2014-08-30T13:20:33 |
   | bb52e52b-1e42-4751-b3ac-45c52d83ba07 | cpu             | cumulative | 1.4795e+11 | ns   | 2014-08-30T13:20:34 |
   +--------------------------------------+-----------------+------------+------------+------+---------------------+
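Hand-writing the JSON filter shown above is error-prone. A small sketch (plain Python, standard library only; the resource ID is just the example value from above) shows how the same structure can be assembled and serialized programmatically:

```python
import json

# Example instance ID taken from the query-samples example above
resource_id = "bb52e52b-1e42-4751-b3ac-45c52d83ba07"

# Build the same "and"/"or" filter tree used on the command line
query_filter = {
    "and": [
        {"=": {"resource": resource_id}},
        {"or": [
            {"=": {"counter_name": "cpu"}},
            {"=": {"counter_name": "disk.read.bytes"}},
        ]},
    ]
}
orderby = [{"timestamp": "asc"}]

# Serialize to the JSON strings expected by --filter and --orderby
print(json.dumps(query_filter))
print(json.dumps(orderby))
```

The printed strings can be passed directly as the ``--filter`` and ``--orderby`` arguments, which avoids quoting mistakes in nested operators.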
Ceilometer also captures data as events, which represent the state of a
resource. Refer to ``/telemetry-events`` for more information regarding
events.
To retrieve a list of recent events that occurred in the system, the
following command can be executed:

.. code-block:: console
$ ceilometer event-list
+--------------------------------------+---------------+----------------------------+-----------------------------------------------------------------+
@ -395,7 +411,7 @@ following command can be executed:
Similar to querying meters, additional filter parameters can be given to
retrieve specific events:

.. code-block:: console
$ ceilometer event-list -q 'event_type=compute.instance.exists; \
instance_type=m1.tiny'
@ -430,31 +446,36 @@ retrieve specific events:
As of the Liberty release, the number of items returned will be
restricted to the value defined by ``default_api_return_limit`` in the
``ceilometer.conf`` configuration file. Alternatively, the value can
be set per query by passing the ``limit`` option in the request.
Telemetry Python bindings
-------------------------

The command-line client library provides Python bindings in order to use
the Telemetry Python API directly from Python programs.
The first step in setting up the client is to create a client instance
with the proper credentials:

.. code-block:: python

   >>> import ceilometerclient.client
   >>> cclient = ceilometerclient.client.get_client(VERSION, username=USERNAME, password=PASSWORD, tenant_name=PROJECT_NAME, auth_url=AUTH_URL)
The ``VERSION`` parameter can be ``1`` or ``2``, specifying the API
version to be used.
The method calls look like the following:

.. code-block:: python

   >>> cclient.meters.list()
   [<Meter ...>, ...]

   >>> cclient.samples.list()
   [<Sample ...>, ...]
For further details about the python-ceilometerclient package, see the
`Python bindings to the OpenStack Ceilometer
@ -465,8 +486,9 @@ reference.
Publishers
~~~~~~~~~~
The Telemetry service provides several transport methods to forward the
data collected to the ``ceilometer-collector`` service or to an external
system. The consumers of this data are widely different, like monitoring
systems, for which data loss is acceptable, and billing systems, which
require reliable data transportation. Telemetry provides methods to
@ -515,9 +537,9 @@ file
.. note::

   If a file name and location is not specified, this publisher
   does not log any meters; instead, it logs a warning message in
   the configured log file for Telemetry.
kafka
It can be specified in the form of:
@ -528,10 +550,10 @@ kafka
.. note::

   If the topic parameter is missing, this publisher publishes
   metering data under the topic name ``ceilometer``. When the port
   number is not specified, this publisher uses 9092 as the
   broker's port.
The following options are available for ``rpc`` and ``notifier``. The
``policy`` option can also be used by the ``kafka`` publisher:
@ -583,9 +605,11 @@ The following options are available for the ``file`` publisher:
The default publisher is ``notifier``, without any additional options
specified. A sample ``publishers`` section in the
``/etc/ceilometer/pipeline.yaml`` looks like the following:

.. code-block:: yaml

   publishers:
       - udp://10.0.0.2:1234
       - rpc://?per_meter_topic=1 (deprecated in Liberty)
       - notifier://?policy=drop&max_queue_length=512&topic=custom_target


@ -14,10 +14,11 @@ general, events represent any action made in the OpenStack system.
Event configuration
~~~~~~~~~~~~~~~~~~~
To enable the creation and storage of events in the Telemetry service, the
``store_events`` option needs to be set to ``True``. For further configuration
options, see the event section in the `OpenStack Configuration Reference
<http://docs.openstack.org/liberty/config-reference/content/ch_configuring-openstack-telemetry.html>`__.
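A sketch of the setting in ``ceilometer.conf`` (the section name follows the Liberty sample configuration; verify it against your release):

```ini
[notification]
# Convert incoming notifications to events and persist them
store_events = True
```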
.. note::
@ -29,6 +30,7 @@ options, see the event section in the `OpenStack Configuration Reference
Event structure
~~~~~~~~~~~~~~~
Events captured by the Telemetry service are represented by five key
attributes:
@ -70,7 +72,7 @@ conversion is specified by a config file.
Event conversion
----------------
The conversion from notifications to events is driven by a configuration
file defined by the ``definitions_cfg_file`` option in the ``ceilometer.conf``
configuration file.
This includes descriptions of how to map fields in the notification body
@ -93,7 +95,7 @@ empty set of definitions will be assumed. By default, any notifications
that do not have a corresponding event definition in the definitions
file will be converted to events with a set of minimal traits. This can
be changed by setting the option ``drop_unmatched_notifications`` in the
``ceilometer.conf`` file. If this is set to ``True``, any unmapped
notifications will be dropped.
The basic set of traits (all are TEXT type) that will be added to all
@ -104,6 +106,7 @@ added, but their definitions can be overridden for a given event\_type.
Event definitions format
------------------------
The event definitions file is in YAML format. It consists of a list of
event definitions, which are mappings. Order is significant: the list of
definitions is scanned in reverse order to find a definition which


@ -17,13 +17,13 @@ below.
.. note::

   You may need to configure Telemetry or other OpenStack services in
   order to be able to collect all the samples you need. For further
   information about configuration requirements see the `Telemetry chapter
   <http://docs.openstack.org/liberty/install-guide-ubuntu/ceilometer.html>`__
   in the OpenStack Installation Guide. Also check the `Telemetry manual
   installation <http://docs.openstack.org/developer/ceilometer/install/manual.html>`__
   description.
Telemetry uses the following meter types:
@ -49,15 +49,17 @@ have two options to choose from. The first one is to specify them when
you boot up a new instance. The additional information will be stored
with the sample in the form of ``resource_metadata.user_metadata.*``.
The new field should be defined by using the prefix ``metering.``. The
modified boot command looks like the following:

.. code-block:: console

   $ nova boot --meta metering.custom_metadata=a_value my_vm
The other option is to set the ``reserved_metadata_keys`` option to the list of
metadata keys that you would like to be included in
``resource_metadata`` of the instance-related samples that are collected
for OpenStack Compute. This option is included in the ``DEFAULT``
section of the ``ceilometer.conf`` configuration file.
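An illustrative sketch of that option (the key names are hypothetical examples, not defaults):

```ini
[DEFAULT]
# Also copy these metadata keys into resource_metadata of Compute samples
reserved_metadata_keys = custom_metadata,another_key
```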
You might also specify headers whose values will be stored along with
the sample data of OpenStack Object Storage. The additional information
@ -75,22 +77,23 @@ Telemetry or emit notifications that this service consumes.
.. note::

   The Telemetry service supports storing notifications as events. This
   functionality was added later, therefore the list of meters still
   contains existence type and other event related items. The proper
   way of using Telemetry is to configure it to use the event store and
   turn off the collection of the event related meters. For further
   information about events see the `Events section
   <http://docs.openstack.org/developer/ceilometer/events.html>`__
   in the Telemetry documentation. For further information about how to
   turn meters on and off see :ref:`telemetry-pipeline-configuration`. Please
   also note that currently no migration is available to move the already
   existing event type samples to the event store.
.. _telemetry-compute-meters:
OpenStack Compute
~~~~~~~~~~~~~~~~~
The following meters are collected for OpenStack Compute:
+-----------+-------+------+----------+----------+---------+------------------+
@ -361,9 +364,10 @@ The following meters are collected for OpenStack Compute:
.. note::
The ``instance:<type>`` meter can be replaced by using extra parameters in
both the samples and statistics queries. Sample queries look like the
following::
The ``instance:<type>`` meter can be replaced by using extra parameters in
both the samples and statistics queries. Sample queries look like:
.. code-block:: console
statistics:
@ -418,7 +422,7 @@ above table is the following:
OpenStack Compute is capable of collecting ``CPU`` related meters from
the compute host machines. To use this feature, you need to set the
``compute_monitors`` option to ``ComputeDriverCPUMonitor`` in the
:file:`nova.conf` configuration file. For further information see the
``nova.conf`` configuration file. For further information see the
Compute configuration section in the `Compute chapter
<http://docs.openstack.org/liberty/config-reference/content/list-of-compute-config-options.html>`__
of the OpenStack Configuration Reference.
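A minimal sketch of the corresponding ``nova.conf`` fragment; the option
group shown is an assumption, so verify it against the configuration
reference for your release:

```ini
[DEFAULT]
# Enables collection of host CPU metrics by the compute agent
# (option name taken from the text above; group is assumed).
compute_monitors = ComputeDriverCPUMonitor
```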
@ -466,17 +470,18 @@ Compute:
Bare metal service
~~~~~~~~~~~~~~~~~~
Telemetry captures notifications that are emitted by the Bare metal
service. The notifications originate from IPMI sensors that collect
data from the host machine.
.. note::
The sensor data is not available in the Bare metal service by
default. To enable the meters and configure this module to emit
notifications about the measured values see the `Installation
Guide <http://docs.openstack.org/developer/ironic/deploy/install-guide.html>`__
for the Bare metal service.
The sensor data is not available in the Bare metal service by
default. To enable the meters and configure this module to emit
notifications about the measured values see the `Installation
Guide <http://docs.openstack.org/developer/ironic/deploy/install-guide.html>`__
for the Bare metal service.
The following meters are recorded for the Bare metal service:
@ -515,11 +520,11 @@ IPMI agent see :ref:`telemetry-ipmi-agent`.
.. warning::
To avoid duplication of metering data and unnecessary load on the
IPMI interface, do not deploy the IPMI agent on nodes that are
managed by the Bare metal service and keep the
``conductor.send_sensor_data`` option set to ``False`` in the
:file:`ironic.conf` configuration file.
To avoid duplication of metering data and unnecessary load on the
IPMI interface, do not deploy the IPMI agent on nodes that are
managed by the Bare metal service and keep the
``conductor.send_sensor_data`` option set to ``False`` in the
``ironic.conf`` configuration file.
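As an illustration, the option above maps to a fragment like the
following in ``ironic.conf`` (a sketch based on the option name in the
text; defaults may differ between releases):

```ini
[conductor]
# When false, the conductor does not emit IPMI sensor data
# notifications, which avoids duplicate samples when another
# collector (such as the standalone IPMI agent) is in use.
send_sensor_data = false
```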
Besides generic IPMI sensor data, the following Intel Node Manager
meters are recorded from capable platforms:
@ -579,6 +584,7 @@ meters are recorded from capable platform:
SNMP based meters
~~~~~~~~~~~~~~~~~
Telemetry supports gathering SNMP-based generic host meters. To collect
this data, you need to run snmpd on each target host.
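To verify that snmpd on a target host is reachable from the machine
running the polling agent, a manual walk of the UCD-SNMP load table can
help; the host name and community string below are placeholder
assumptions:

```shell
# Queries the 1-minute load average (UCD-SNMP-MIB::laLoad.1), one of
# the values the SNMP-based host meters rely on. Host and community
# string are hypothetical; adjust to your deployment.
snmpwalk -v2c -c public compute01.example.org 1.3.6.1.4.1.2021.10.1.3.1
```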
@ -658,6 +664,7 @@ SNMP:
OpenStack Image service
~~~~~~~~~~~~~~~~~~~~~~~
The following meters are collected for OpenStack Image service:
+--------------------+--------+------+----------+----------+------------------+
@ -691,6 +698,7 @@ The following meters are collected for OpenStack Image service:
OpenStack Block Storage
~~~~~~~~~~~~~~~~~~~~~~~
The following meters are collected for OpenStack Block Storage:
+--------------------+-------+--------+----------+----------+-----------------+
@ -756,6 +764,7 @@ The following meters are collected for OpenStack Block Storage:
OpenStack Object Storage
~~~~~~~~~~~~~~~~~~~~~~~~
The following meters are collected for OpenStack Object Storage:
+--------------------+-------+-------+------------+---------+-----------------+
@ -843,6 +852,7 @@ The following meters are collected for Ceph Object Storage:
OpenStack Identity
~~~~~~~~~~~~~~~~~~
The following meters are collected for OpenStack Identity:
+-------------------+------+--------+-----------+-----------+-----------------+
@ -914,6 +924,7 @@ The following meters are collected for OpenStack Identity:
OpenStack Networking
~~~~~~~~~~~~~~~~~~~~
The following meters are collected for OpenStack Networking:
+-----------------+-------+--------+-----------+-----------+------------------+
@ -977,6 +988,7 @@ The following meters are collected for OpenStack Networking:
SDN controllers
~~~~~~~~~~~~~~~
The following meters are collected for SDN:
+-----------------+---------+--------+-----------+----------+-----------------+
@ -1063,6 +1075,7 @@ enable these meters, each driver needs to be properly configured.
Load-Balancer-as-a-Service (LBaaS)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The following meters are collected for LBaaS:
+---------------+---------+---------+-----------+-----------+-----------------+
@ -1139,8 +1152,9 @@ The following meters are collected for LBaaS:
| pdate | | | | | |
+---------------+---------+---------+-----------+-----------+-----------------+
VPN as a Service (VPNaaS)
VPN-as-a-Service (VPNaaS)
~~~~~~~~~~~~~~~~~~~~~~~~~
The following meters are collected for VPNaaS:
+---------------+-------+---------+------------+-----------+------------------+
@ -1204,8 +1218,9 @@ The following meters are collected for VPNaaS:
| policy.update | | | | | |
+---------------+-------+---------+------------+-----------+------------------+
Firewall as a Service (FWaaS)
Firewall-as-a-Service (FWaaS)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The following meters are collected for FWaaS:
+---------------+-------+---------+------------+-----------+------------------+
@ -1282,6 +1297,7 @@ The following meters are collected for the Orchestration service:
Data processing service for OpenStack
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The following meters are collected for the Data processing service for
OpenStack:
@ -1306,6 +1322,7 @@ OpenStack:
Key Value Store module
~~~~~~~~~~~~~~~~~~~~~~
The following meters are collected for the Key Value Store module:
+------------------+-------+------+----------+-------------+------------------+
@ -1326,6 +1343,7 @@ The following meters are collected for the Key Value Store module:
Energy
~~~~~~
The following energy related meters are available:
+---------------+------------+------+----------+----------+-------------------+
View File
@ -54,8 +54,8 @@ ceilometer-alarm-notifier
.. note::
The ``ceilometer-polling`` service is available since the Kilo release.
It is intended to replace ceilometer-agent-central,
ceilometer-agent-compute, and ceilometer-agent-ipmi.
It is intended to replace ``ceilometer-agent-central``,
``ceilometer-agent-compute``, and ``ceilometer-agent-ipmi``.
Besides the ``ceilometer-agent-compute`` and the ``ceilometer-agent-ipmi``
services, all the other services are placed on one or more controller
@ -65,7 +65,6 @@ The Telemetry architecture highly depends on the AMQP service both for
consuming notifications coming from OpenStack services and internal
communication.
|
.. _telemetry-supported-databases:
@ -73,12 +72,12 @@ Supported databases
~~~~~~~~~~~~~~~~~~~
The other key external component of Telemetry is the database, where
events, samples, alarm definitions and alarms are stored.
events, samples, alarm definitions, and alarms are stored.
.. note::
Multiple database back ends can be configured in order to store
events, samples and alarms separately.
events, samples, and alarms separately.
The list of supported database back ends:
@ -92,14 +91,6 @@ The list of supported database back ends:
- `HBase <http://hbase.apache.org/>`__
- `DB2(deprecated) <http://www-01.ibm.com/software/data/db2/>`__
.. note::
DB2 nosql support is deprecated as of Liberty and will be removed in Mitaka
as the product is no longer in development.
|
.. _telemetry-supported-hypervisors:
@ -110,20 +101,17 @@ The Telemetry service collects information about the virtual machines,
which requires close connection to the hypervisor that runs on the
compute hosts.
The list of supported hypervisors is:
The following is a list of supported hypervisors:
- The following hypervisors are supported via
`Libvirt <http://libvirt.org/>`__:
- The following hypervisors are supported via `libvirt <http://libvirt.org/>`__
- `Kernel-based Virtual Machine
(KVM) <http://www.linux-kvm.org/page/Main_Page>`__
* `Kernel-based Virtual Machine (KVM) <http://www.linux-kvm.org/page/Main_Page>`__
- `Quick Emulator (QEMU) <http://wiki.qemu.org/Main_Page>`__
* `Quick Emulator (QEMU) <http://wiki.qemu.org/Main_Page>`__
- `Linux Containers (LXC) <https://linuxcontainers.org/>`__
* `Linux Containers (LXC) <https://linuxcontainers.org/>`__
- `User-mode Linux
(UML) <http://user-mode-linux.sourceforge.net/>`__
* `User-mode Linux (UML) <http://user-mode-linux.sourceforge.net/>`__
.. note::
@ -134,10 +122,8 @@ The list of supported hypervisors is:
- `XEN <http://www.xenproject.org/help/documentation.html>`__
- `VMWare
vSphere <http://www.vmware.com/products/vsphere-hypervisor/support.html>`__
- `VMware vSphere <http://www.vmware.com/products/vsphere-hypervisor/support.html>`__
|
Supported networking services
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@ -161,22 +147,21 @@ external networking services:
- `OpenContrail <http://www.opencontrail.org/>`__
|
.. _telemetry-users-roles-tenants:
Users, roles and tenants
~~~~~~~~~~~~~~~~~~~~~~~~
Users, roles, and tenants
~~~~~~~~~~~~~~~~~~~~~~~~~
The Telemetry service uses OpenStack Identity for authenticating and
authorizing users. The required configuration options are listed in the
`Telemetry
section <http://docs.openstack.org/kilo/config-reference/content/ch_configuring-openstack-telemetry.html>`__
section <http://docs.openstack.org/liberty/config-reference/content/ch_configuring-openstack-telemetry.html>`__
in the *OpenStack Configuration Reference*.
The system uses two roles:`admin` and `non-admin`. The authorization happens
before processing each API request. The amount of returned data depends on the
role the requestor owns.
The system uses two roles: ``admin`` and ``non-admin``. The authorization
happens before processing each API request. The amount of returned data
depends on the role of the requestor.
The creation of alarm definitions also highly depends on the role of the
user, who initiated the action. Further details about :ref:`telemetry-alarms`
View File
@ -8,7 +8,7 @@ The Telemetry service has similar log settings as the other OpenStack
services. Multiple options are available to change the target of
logging, the format of the log entries, and the log levels.
The log settings can be changed in :file:`ceilometer.conf`. The list of
The log settings can be changed in ``ceilometer.conf``. The list of
configuration options are listed in the logging configuration options
table in the `Telemetry
section <http://docs.openstack.org/liberty/config-reference/content/ch_configuring-openstack-telemetry.html>`__
@ -20,7 +20,6 @@ It can be changed to either a log file or syslog. The ``debug`` and
default log levels of the corresponding modules can be found in the
table referred to above.
|
Recommended order of starting services
--------------------------------------
@ -31,23 +30,22 @@ ordering of service startup can result in data loss.
When the services are started for the first time, or after a restart of
the message queue service, it takes time until the
**ceilometer-collector** service establishes the connection and joins or
``ceilometer-collector`` service establishes the connection and joins or
rejoins to the configured exchanges. Therefore, if the
**ceilometer-agent-compute**, **ceilometer-agent-central**, and the
**ceilometer-agent-notification** services are started before
the **ceilometer-collector** service, the **ceilometer-collector** service
``ceilometer-agent-compute``, ``ceilometer-agent-central``, and the
``ceilometer-agent-notification`` services are started before
the ``ceilometer-collector`` service, the ``ceilometer-collector`` service
may lose some messages while connecting to the message queue service.
This issue is more likely to occur when the polling
interval is set to a relatively short period. To avoid this
situation, the recommended order of service startup is to start or
restart the **ceilometer-collector** service after the message queue. All
restart the ``ceilometer-collector`` service after the message queue. All
the other Telemetry services should be started or restarted after and
the **ceilometer-agent-compute** should be the last in the sequence, as this
the ``ceilometer-agent-compute`` should be the last in the sequence, as this
component emits metering messages in order to send the samples to the
collector.
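The ordering above can be sketched with systemd; the unit names vary by
distribution and are assumptions here:

```shell
# Restart the message queue first, then the collector; the other
# agents follow, with the compute agent last because it emits samples
# as soon as it starts.
systemctl restart rabbitmq-server
systemctl restart openstack-ceilometer-collector
systemctl restart openstack-ceilometer-notification \
                  openstack-ceilometer-central
systemctl restart openstack-ceilometer-compute
```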
|
Notification agent
------------------
@ -56,7 +54,7 @@ In the Icehouse release of OpenStack a new service was introduced to be
responsible for consuming notifications coming from other
OpenStack services.
If the **ceilometer-agent-notification** service is not installed and
If the ``ceilometer-agent-notification`` service is not installed and
started, samples originating from notifications will not be generated.
If notification-based samples are missing, check the state of this
service and the Telemetry log file first.
@ -65,12 +63,11 @@ For the list of meters that are originated from notifications, see the
`Telemetry Measurements
Reference <http://docs.openstack.org/developer/ceilometer/measurements.html>`__.
|
Recommended ``auth_url`` to be used
-----------------------------------
When using the Telemetry command line client, the credentials and the
When using the Telemetry command-line client, the credentials and the
``os_auth_url`` have to be set in order for the client to authenticate
against OpenStack Identity. For further details
about the credentials that have to be provided see the `Telemetry Python
@ -90,11 +87,11 @@ the OpenStack Identity API. If the ``adminURL`` is used as
``os_auth_url``, the :command:`ceilometer` command results in the following
error message:
::
.. code-block:: console
$ ceilometer meter-list
Unable to determine the Keystone version to authenticate with \
using the given auth_url: http://10.0.2.15:35357/v2.0
$ ceilometer meter-list
Unable to determine the Keystone version to authenticate with \
using the given auth_url: http://10.0.2.15:35357/v2.0
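A minimal sketch of pointing the client at a non-admin endpoint instead;
every value below is a placeholder assumption:

```shell
# Use the internalURL or publicURL Identity endpoint (port 5000)
# rather than the adminURL (port 35357) as the auth URL.
export OS_AUTH_URL=http://10.0.2.15:5000/v2.0
export OS_USERNAME=admin
export OS_PASSWORD=secret
export OS_TENANT_NAME=demo
# ceilometer meter-list   # now authenticates against this endpoint
```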
Therefore, when specifying the ``os_auth_url`` parameter on the command
line or via an environment variable, use the ``internalURL`` or
View File
@ -10,7 +10,7 @@ environment are metering, rating, and billing. Because the provider's
requirements may be far too specific for a shared solution, rating
and billing solutions cannot be designed in a common module that
satisfies all providers. Providing users with measurements on cloud services is
required to meet the "measured service" definition of cloud computing.
required to meet the ``measured service`` definition of cloud computing.
The Telemetry service was originally designed to support billing
systems for OpenStack cloud resources. This project only covers the
@ -40,7 +40,7 @@ troubleshooting guide, which mentions error situations and possible
solutions to the problems.
You can retrieve the collected samples in three different ways: with
the REST API, with the command line interface, or with the Metering
the REST API, with the command-line interface, or with the Metering
tab on an OpenStack dashboard.