Merge "telemetry: cleanup pipline docs"
@@ -98,7 +98,7 @@ Data storage
.. note::

   For more information on how to set the TTL, see
   :ref:`telemetry-storing-samples`.
   :ref:`telemetry-expiry`.

#. We recommend that you do not run MongoDB on the same node as the
   controller. Keep it on a separate node optimized for fast storage for

@@ -470,7 +470,7 @@ in the OpenStack Configuration Reference.
polled, otherwise samples may be missing or duplicated. The list of
meters to poll can be set in the ``/etc/ceilometer/pipeline.yaml``
configuration file. For more information about pipelines see
:ref:`data-collection-and-processing`.
:ref:`telemetry-data-pipelines`.

To enable the Compute agent to run multiple instances simultaneously
with workload partitioning, the ``workload_partitioning`` option has to
@@ -673,129 +673,3 @@ Using this script via cron you can get notifications periodically, for
example, every 5 minutes::

   */5 * * * * /path/to/cinder-volume-usage-audit --send_actions

.. _telemetry-storing-samples:

Storing samples
~~~~~~~~~~~~~~~

The Telemetry service has a separate service that is responsible for
persisting the data that comes from the pollsters or is received as
notifications. The data can be stored in a file or a database back end,
for which the list of supported databases can be found in
:ref:`telemetry-supported-databases`. The data can also be sent to an external
data store by using an HTTP publisher.

The ``ceilometer-agent-notification`` service receives the data as messages
from the message bus of the configured AMQP service. It sends these datapoints
without any modification to the configured target.

.. note::

   Multiple publishers can be configured for Telemetry at one time by editing
   the pipeline definition.

Multiple ``ceilometer-agent-notification`` agents can be run at a time. It is
also supported to start multiple worker threads per agent. The
``workers`` configuration option has to be modified in the
`notification section
<https://docs.openstack.org/ocata/config-reference/telemetry/telemetry-config-options.html>`__
of the ``ceilometer.conf`` configuration file.

.. note::

   Prior to Ocata, this functionality was provided via dispatchers in
   ``ceilometer-collector``. This can now be handled exclusively by
   ``ceilometer-agent-notification`` to minimize messaging load.
   Dispatchers can still be leveraged by setting ``meter_dispatchers`` and
   ``event_dispatchers`` in ``ceilometer.conf``.

Gnocchi publisher
-----------------

When the gnocchi publisher is enabled, measurement and resource information is
pushed to gnocchi for time-series optimized storage. ``gnocchi://`` should be
added as a publisher endpoint in the ``pipeline.yaml`` and
``event_pipeline.yaml`` files. Gnocchi must be registered in the Identity
service as Ceilometer discovers the exact path via the Identity service.

More details on how to enable and configure gnocchi can be found on its
`official documentation page <http://gnocchi.xyz>`__.

Panko publisher
---------------

Event data in Ceilometer can be stored in panko, which provides an HTTP REST
interface to query system events in OpenStack. To push data to panko,
set the publisher to ``direct://?dispatcher=panko``. Beginning in panko's
Pike release, the publisher can be set as ``panko://``.

HTTP publisher
---------------

The Telemetry service supports sending samples to an external HTTP
target. The samples are sent without any modification. To set this
option as the notification agents' target, set ``http://`` as a publisher
endpoint in the pipeline definition files. The HTTP target should be set along
with the publisher declaration. For example, various additional configuration
options can be passed in such as:
``http://localhost:80/?timeout=1&max_retries=2&batch=False&poolsize=10``

File dispatcher
---------------

You can store samples in a file by setting the publisher to ``file`` in the
``pipeline.yaml`` file. You can also pass in configuration options
such as ``file:///path/to/file?max_bytes=1000&backup_count=5``.

Database publisher
-------------------

.. note::

   As of the Ocata release, this publisher is deprecated. Database storage
   should use gnocchi and/or panko publishers depending on requirements.

When the database dispatcher is configured as a data store, you have the
option to set a ``time_to_live`` option (ttl) for samples. By default
the time to live value for samples is set to -1, which means that they
are kept in the database forever.

The time to live value is specified in seconds. Each sample has a time
stamp, and the ``ttl`` value indicates that a sample will be deleted
from the database when the number of seconds has elapsed since that
sample reading was stamped. For example, if the time to live is set to
600, all samples older than 600 seconds will be purged from the
database.

Certain databases support native TTL expiration. In cases where this is
not possible, you can use the ``ceilometer-expirer`` command-line script
instead. You can run it in a cron job, which helps to keep
your database in a consistent state.

The level of support differs depending on the configured back end:

.. list-table::
   :widths: 33 33 33
   :header-rows: 1

   * - Database
     - TTL value support
     - Note
   * - MongoDB
     - Yes
     - MongoDB has native TTL support for deleting samples
       that are older than the configured ttl value.
   * - SQL-based back ends
     - Yes
     - ``ceilometer-expirer`` has to be used for deleting
       samples and its related data from the database.
   * - HBase
     - No
     - Telemetry's HBase support does not include native TTL
       nor ``ceilometer-expirer`` support.
   * - DB2 NoSQL
     - No
     - DB2 NoSQL does not have native TTL
       nor ``ceilometer-expirer`` support.

@@ -1,35 +1,21 @@
.. _data-collection-and-processing:
.. _telemetry-data-pipelines:

==========================================
Data collection, processing, and pipelines
==========================================
=============================
Data processing and pipelines
=============================

The mechanism by which data is collected and processed is called a
pipeline. Pipelines, at the configuration level, describe a coupling
between sources of data and the corresponding sinks for transformation
and publication of data.
The mechanism by which data is processed is called a pipeline. Pipelines,
at the configuration level, describe a coupling between sources of data and
the corresponding sinks for transformation and publication of data. This
functionality is handled by the notification agents.

A source is a producer of data: ``samples`` or ``events``. In effect, it is a
set of pollsters or notification handlers emitting datapoints for a set
of matching meters and event types.
set of notification handlers emitting datapoints for a set of matching meters
and event types.

Each source configuration encapsulates name matching, polling interval
determination, optional resource enumeration or discovery, and mapping
Each source configuration encapsulates name matching and mapping
to one or more sinks for publication.

Data gathered can be used for different purposes, which can impact how
frequently it needs to be published. Typically, a meter published for
billing purposes needs to be updated every 30 minutes while the same
meter may be needed for performance tuning every minute.

.. warning::

   Rapid polling cadences should be avoided, as it results in a huge
   amount of data in a short time frame, which may negatively affect
   the performance of both Telemetry and the underlying database back
   end. We strongly recommend you do not use small granularity
   values like 10 seconds.

A sink, on the other hand, is a consumer of data, providing logic for
the transformation and publication of data emitted from related sources.

@@ -37,8 +23,7 @@ In effect, a sink describes a chain of handlers. The chain starts with
zero or more transformers and ends with one or more publishers. The
first transformer in the chain is passed data from the corresponding
source, takes some action such as deriving rate of change, performing
unit conversion, or aggregating, before passing the modified data to the
next step that is described in :ref:`telemetry-publishers`.
unit conversion, or aggregating, before publishing_.

.. _telemetry-pipeline-configuration:

@@ -62,11 +47,8 @@ The meter pipeline definition looks like:

   ---
   sources:
       - name: 'source name'
         interval: 'how often should the samples be injected into the pipeline'
         meters:
             - 'meter filter'
         resources:
             - 'list of resource URLs'
         sinks
             - 'sink name'
         sinks:
@@ -75,11 +57,6 @@ The meter pipeline definition looks like:
         publishers:
             - 'list of publishers'

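
Filled in with concrete values, the schema above could look like the following minimal sketch (the source and sink names are illustrative, not part of the reference definition):

.. code-block:: yaml

   ---
   sources:
       - name: cpu_source
         meters:
             - "cpu"
         sinks:
             - cpu_sink
   sinks:
       - name: cpu_sink
         transformers:
         publishers:
             - notifier://
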
The interval parameter in the sources section should be defined in
seconds. It determines the polling cadence of sample injection into the
pipeline, where samples are produced under the direct control of an
agent.

There are several ways to define the list of meters for a pipeline
source. The list of valid meters can be found in :ref:`telemetry-measurements`.
There is a possibility to define all the meters, or just included or excluded
@@ -97,10 +74,6 @@ meters, with which a source should operate:

- To define the list of excluded meters, use the ``!meter_name``
  syntax.

- For meters, which have variants identified by a complex name
  field, use the wildcard symbol to select all, for example,
  for ``instance:m1.tiny``, use ``instance:\*``.

.. note::

   The OpenStack Telemetry service does not have any duplication check
@@ -125,9 +98,6 @@ The above definition methods can be used in the following combinations:
same pipeline. Wildcard and included meters cannot co-exist in the
same pipeline definition section.

The optional resources section of a pipeline source allows a static list
of resource URLs to be configured for polling.

The transformers section of a pipeline sink provides the possibility to
add a list of transformer definitions. The available transformers are:

@@ -188,6 +158,11 @@ The parameters section can contain transformer specific fields, like
source and target fields with different subfields in case of the rate of
change, which depends on the implementation of the transformer.

The following are supported transformers:

Rate of change transformer
``````````````````````````
Transformer that computes the change in value between two data points in time.
In the case of the transformer that creates the ``cpu_util`` meter, the
definition looks like:

@@ -202,7 +177,7 @@ definition looks like:
             type: "gauge"
             scale: "100.0 / (10**9 * (resource_metadata.cpu_number or 1))"

The rate of change the transformer generates is the ``cpu_util`` meter
The rate of change transformer generates the ``cpu_util`` meter
from the sample values of the ``cpu`` counter, which represents
cumulative CPU time in nanoseconds. The transformer definition above
defines a scale factor (for nanoseconds and multiple CPUs), which is
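
The hunk above shows only the tail of this definition. Reconstructed from the fragments shown here as a sketch, the full ``cpu_util`` transformer entry in ``pipeline.yaml`` looks roughly like:

.. code-block:: yaml

   transformers:
       - name: "rate_of_change"
         parameters:
             target:
                 name: "cpu_util"
                 unit: "%"
                 type: "gauge"
                 scale: "100.0 / (10**9 * (resource_metadata.cpu_number or 1))"
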
@@ -228,7 +203,7 @@ rate of change transformer:
             type: "gauge"

Unit conversion transformer
---------------------------
```````````````````````````

Transformer to apply a unit conversion. It takes the volume of the meter
and multiplies it with the given ``scale`` expression. Also supports
@@ -263,7 +238,7 @@ With ``map_from`` and ``map_to``:
             unit: "KB"

Aggregator transformer
----------------------
``````````````````````

A transformer that sums up the incoming samples until enough samples
have come in or a timeout has been reached.
@@ -305,7 +280,7 @@ the ``user_id`` of the first received sample and drop the
             resource_metadata: drop

Accumulator transformer
-----------------------
```````````````````````

This transformer simply caches the samples until enough samples have
arrived and then flushes them all down the pipeline at once:
@@ -318,7 +293,7 @@ arrived and then flushes them all down the pipeline at once:
             size: 15

Multi meter arithmetic transformer
----------------------------------
``````````````````````````````````

This transformer enables us to perform arithmetic calculations over one
or more meters and/or their metadata, for example:
@@ -369,7 +344,7 @@ novel meter shows average CPU time per core:
such a case it does not create a new sample but only logs a warning.

Delta transformer
-----------------
`````````````````

This transformer calculates the change between two sample datapoints of a
resource. It can be configured to capture only the positive growth deltas.
@@ -384,3 +359,259 @@ Example configuration:
             target:
                 name: "cpu.delta"
                 growth_only: True

.. _publishing:

Publishers
----------

The Telemetry service provides several transport methods to transfer the
data collected to an external system. The consumers of this data are widely
different, like monitoring systems, for which data loss is acceptable, and
billing systems, which require reliable data transportation. Telemetry provides
methods to fulfill the requirements of both kinds of systems.

The publisher component makes it possible to save the data into persistent
storage through the message bus or to send it to one or more external
consumers. One chain can contain multiple publishers.

To solve this problem, the multi-publisher can
be configured for each data point within the Telemetry service, allowing
the same technical meter or event to be published multiple times to
multiple destinations, each potentially using a different transport.

Publishers are specified in the ``publishers`` section for each
pipeline that is defined in the `pipeline.yaml
<https://git.openstack.org/cgit/openstack/ceilometer/plain/ceilometer/pipeline/data/pipeline.yaml>`__
and the `event_pipeline.yaml
<https://git.openstack.org/cgit/openstack/ceilometer/plain/ceilometer/pipeline/data/event_pipeline.yaml>`__
files.

The following publisher types are supported:

gnocchi (default)
`````````````````

When the gnocchi publisher is enabled, measurement and resource information is
pushed to gnocchi for time-series optimized storage. Gnocchi must be registered
in the Identity service as Ceilometer discovers the exact path via the Identity
service.

More details on how to enable and configure gnocchi can be found on its
`official documentation page <http://gnocchi.xyz>`__.

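
Since this is the default, enabling it is just a matter of listing it in the ``publishers`` section of both pipeline definition files, for example:

.. code-block:: yaml

   publishers:
       - gnocchi://
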
panko
`````

Event data in Ceilometer can be stored in panko, which provides an HTTP REST
interface to query system events in OpenStack. To push data to panko,
set the publisher to ``direct://?dispatcher=panko``. Beginning in panko's
Pike release, the publisher can be set as ``panko://``.

notifier
````````

The notifier publisher can be specified in the form of
``notifier://?option1=value1&option2=value2``. It emits data over AMQP using
oslo.messaging. Any consumer can then subscribe to the published topic
for additional processing.

.. note::

   Prior to Ocata, this publisher's output was consumed by the collector, but
   the collector has since been deprecated and is no longer required.

The following customization options are available:

``per_meter_topic``
   The value of this parameter is 1. It is used for publishing the samples on
   an additional ``metering_topic.sample_name`` topic queue besides the
   default ``metering_topic`` queue.

``policy``
   Used for configuring the behavior for the case when the
   publisher fails to send the samples, where the possible predefined
   values are:

   default
      Used for waiting and blocking until the samples have been sent.

   drop
      Used for dropping the samples which failed to be sent.

   queue
      Used for creating an in-memory queue and retrying to send the
      samples on the queue in the next samples publishing period (the
      queue length can be configured with ``max_queue_length``, where
      1024 is the default value).

``topic``
   The topic name of the queue to publish to. Setting this will override the
   default topic defined by the ``metering_topic`` and ``event_topic`` options.
   This option can be used to support multiple consumers.

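
Putting these options together, a notifier publisher that queues samples on failure, caps the retry queue, and publishes to a custom topic could be declared as follows (the topic name is illustrative):

.. code-block:: yaml

   publishers:
       - notifier://?policy=queue&max_queue_length=512&topic=custom_target
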
udp
```

This publisher can be specified in the form of ``udp://<host>:<port>/``. It
emits metering data over UDP.

file
````

The file publisher can be specified in the form of
``file://path?option1=value1&option2=value2``. This publisher
records metering data into a file.

.. note::

   If a file name and location is not specified, the ``file`` publisher
   does not log any meters, instead it logs a warning message in
   the configured log file for Telemetry.

The following options are available for the ``file`` publisher:

``max_bytes``
   When this option is greater than zero, it will cause a rollover.
   When the specified size is about to be exceeded, the file is closed and a
   new file is silently opened for output. If its value is zero, rollover
   never occurs.

``backup_count``
   If this value is non-zero, an extension will be appended to the
   filename of the old log, as '.1', '.2', and so forth until the
   specified value is reached. The file that is written and contains
   the newest data is always the one that is specified without any
   extensions.

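
Combining these options, a file publisher that rolls the output over at roughly 1 MB and keeps five old files could be declared as follows (the path is illustrative):

.. code-block:: yaml

   publishers:
       - file:///var/log/ceilometer/samples.log?max_bytes=1000000&backup_count=5
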
http
````

The Telemetry service supports sending samples to an external HTTP
target. The samples are sent without any modification. To set this
option as the notification agents' target, set ``http://`` as a publisher
endpoint in the pipeline definition files. The HTTP target should be set along
with the publisher declaration. For example, additional configuration options
can be passed in: ``http://localhost:80/?option1=value1&option2=value2``

The following options are available:

``timeout``
   The number of seconds before the HTTP request times out.

``max_retries``
   The number of times to retry a request before failing.

``batch``
   If false, the publisher will send each sample and event individually,
   whether or not the notification agent is configured to process in batches.

``poolsize``
   The maximum number of open connections the publisher will maintain.
   Increasing this value may improve performance but will also increase memory
   and socket consumption requirements.

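
As a combined sketch of the options above, an HTTP publisher posting to a local endpoint (the host and port are illustrative) might be declared as:

.. code-block:: yaml

   publishers:
       - http://localhost:80/?timeout=1&max_retries=2&batch=False&poolsize=10
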
The default publisher is ``gnocchi``, without any additional options
specified. A sample ``publishers`` section in the
``/etc/ceilometer/pipeline.yaml`` looks like the following:

.. code-block:: yaml

   publishers:
       - gnocchi://
       - panko://
       - udp://10.0.0.2:1234
       - notifier://?policy=drop&max_queue_length=512&topic=custom_target
       - direct://?dispatcher=http

Deprecated publishers
---------------------

The following publishers are deprecated as of Ocata and may be removed in
subsequent releases.

direct
``````

This publisher can be specified in the form of ``direct://?dispatcher=http``.
The dispatcher's options include: ``database``, ``file``, ``http``, and
``gnocchi``. It emits data to the configured dispatcher directly; the default
configuration (the form ``direct://``) is the database dispatcher.
In the Mitaka release, this method can only emit data to the database
dispatcher, and the form is ``direct://``.

kafka
`````

.. note::

   We recommend you use oslo.messaging if possible, as it provides a
   consistent OpenStack API.

The ``kafka`` publisher can be specified in the form of
``kafka://kafka_broker_ip:kafka_broker_port?topic=kafka_topic&option1=value1``.

This publisher sends metering data to a kafka broker. The kafka publisher
offers similar options to the ``notifier`` publisher.

.. note::

   If the topic parameter is missing, this publisher brings out
   metering data under a topic name, ``ceilometer``. When the port
   number is not specified, this publisher uses 9092 as the
   broker's port.

.. _telemetry-expiry:

database
````````

.. note::

   This functionality was replaced by ``gnocchi`` and ``panko`` publishers.

When the database dispatcher is configured as a data store, you have the
option to set a ``time_to_live`` option (ttl) for samples. By default
the ttl value for samples is set to -1, which means that they
are kept in the database forever.

The time to live value is specified in seconds. Each sample has a time
stamp, and the ``ttl`` value indicates that a sample will be deleted
from the database when the number of seconds has elapsed since that
sample reading was stamped. For example, if the time to live is set to
600, all samples older than 600 seconds will be purged from the
database.

Certain databases support native TTL expiration. In cases where this is
not possible, you can use the ``ceilometer-expirer`` command-line script
instead. You can run it in a cron job, which helps to keep
your database in a consistent state.

The level of support differs depending on the configured back end:

.. list-table::
   :widths: 33 33 33
   :header-rows: 1

   * - Database
     - TTL value support
     - Note
   * - MongoDB
     - Yes
     - MongoDB has native TTL support for deleting samples
       that are older than the configured ttl value.
   * - SQL-based back ends
     - Yes
     - ``ceilometer-expirer`` has to be used for deleting
       samples and its related data from the database.
   * - HBase
     - No
     - Telemetry's HBase support does not include native TTL
       nor ``ceilometer-expirer`` support.
   * - DB2 NoSQL
     - No
     - DB2 NoSQL does not have native TTL
       nor ``ceilometer-expirer`` support.

@@ -245,7 +245,7 @@ configuration file, then a group of samples are deleted in each
expiration cycle. When the last sample is deleted for a meter, the
database can be cleaned up by running ceilometer-expirer and the meter
will not be present in the list above anymore. For more information
about the expiration procedure see :ref:`telemetry-storing-samples`.
about the expiration procedure see :ref:`telemetry-expiry`.

The Telemetry API supports simple query on the meter endpoint. The query
functionality has the following syntax:
@@ -481,145 +481,3 @@ For further details about the python-ceilometerclient package, see the
`Python bindings to the OpenStack Ceilometer
API <https://docs.openstack.org/developer/python-ceilometerclient/>`__
reference.

.. _telemetry-publishers:

Publishers
~~~~~~~~~~

The Telemetry service provides several transport methods to forward the
data collected to the ``ceilometer-collector`` service or to an external
system. The consumers of this data are widely different, like monitoring
systems, for which data loss is acceptable and billing systems, which
require reliable data transportation. Telemetry provides methods to
fulfill the requirements of both kinds of systems, as it is described
below.

The publisher component makes it possible to persist the data into
storage through the message bus or to send it to one or more external
consumers. One chain can contain multiple publishers.

To solve the above mentioned problem, the notion of multi-publisher can
be configured for each datapoint within the Telemetry service, allowing
the same technical meter or event to be published multiple times to
multiple destinations, each potentially using a different transport.

Publishers can be specified in the ``publishers`` section for each
pipeline (for further details about pipelines see
:ref:`data-collection-and-processing`) that is defined in
the `pipeline.yaml
<https://git.openstack.org/cgit/openstack/ceilometer/plain/etc/ceilometer/pipeline.yaml>`__
file.

The following publisher types are supported:

direct
   It can be specified in the form of ``direct://?dispatcher=http``. The
   dispatcher's options include database, file, http, and gnocchi. For
   more details on dispatcher, see :ref:`telemetry-storing-samples`.
   It emits data in the configured dispatcher directly, default configuration
   (the form is ``direct://``) is database dispatcher.
   In the Mitaka release, this method can only emit data to the database
   dispatcher, and the form is ``direct://``.

notifier
   It can be specified in the form of
   ``notifier://?option1=value1&option2=value2``. It emits data over
   AMQP using oslo.messaging. This is the recommended method of
   publishing.

rpc
   It can be specified in the form of
   ``rpc://?option1=value1&option2=value2``. It emits metering data
   over lossy AMQP. This method is synchronous and may experience
   performance issues. This publisher was deprecated in Liberty in favor of
   the notifier publisher.

udp
   It can be specified in the form of ``udp://<host>:<port>/``. It emits
   metering data over UDP.

file
   It can be specified in the form of
   ``file://path?option1=value1&option2=value2``. This publisher
   records metering data into a file.

   .. note::

      If a file name and location is not specified, this publisher
      does not log any meters, instead it logs a warning message in
      the configured log file for Telemetry.

kafka
   It can be specified in the form of
   ``kafka://kafka_broker_ip:kafka_broker_port?topic=kafka_topic&option1=value1``.

   This publisher sends metering data to a kafka broker.

   .. note::

      If the topic parameter is missing, this publisher brings out
      metering data under a topic name, ``ceilometer``. When the port
      number is not specified, this publisher uses 9092 as the
      broker's port.

The following options are available for ``rpc`` and ``notifier``. The
policy option can be used by the ``kafka`` publisher:

``per_meter_topic``
   The value of it is 1. It is used for publishing the samples on an
   additional ``metering_topic.sample_name`` topic queue besides the
   default ``metering_topic`` queue.

``policy``
   It is used for configuring the behavior for the case when the
   publisher fails to send the samples, where the possible predefined
   values are the following:

   default
      Used for waiting and blocking until the samples have been sent.

   drop
      Used for dropping the samples which failed to be sent.

   queue
      Used for creating an in-memory queue and retrying to send the
      samples on the queue on the next samples publishing period (the
      queue length can be configured with ``max_queue_length``, where
      1024 is the default value).

The following option is additionally available for the ``notifier`` publisher:

``topic``
   The topic name of the queue to publish to. Setting this will override the
   default topic defined by the ``metering_topic`` and ``event_topic`` options.
   This option can be used to support multiple consumers. Support for this
   feature was added in Kilo.

The following options are available for the ``file`` publisher:

``max_bytes``
   When this option is greater than zero, it will cause a rollover.
   When the size is about to be exceeded, the file is closed and a new
   file is silently opened for output. If its value is zero, rollover
   never occurs.

``backup_count``
   If this value is non-zero, an extension will be appended to the
   filename of the old log, as '.1', '.2', and so forth until the
   specified value is reached. The file that is written and contains
   the newest data is always the one that is specified without any
   extensions.

The default publisher is ``notifier``, without any additional options
specified. A sample ``publishers`` section in the
``/etc/ceilometer/pipeline.yaml`` looks like the following:

.. code-block:: yaml

   publishers:
       - udp://10.0.0.2:1234
       - rpc://?per_meter_topic=1 (deprecated in Liberty)
       - notifier://?policy=drop&max_queue_length=512&topic=custom_target
       - direct://?dispatcher=http

@@ -70,9 +70,8 @@ in the OpenStack Configuration Reference.
``partitioning_group_prefix``, a disjoint subset of meters must be polled
to avoid samples being missing or duplicated. The list of meters to poll
can be set in the :file:`/etc/ceilometer/pipeline.yaml` configuration file.
For more information about pipelines see the `Data collection and
processing
<https://docs.openstack.org/admin-guide/telemetry-data-collection.html#data-collection-and-processing>`_
For more information about pipelines see the `Data processing and pipelines
<https://docs.openstack.org/admin-guide/telemetry-data-pipelines.html>`_
section.

To enable the compute agent to run multiple instances simultaneously with
