==============================
Highly available Telemetry API
==============================

The `Telemetry service
<http://docs.openstack.org/admin-guide/common/get-started-telemetry.html>`__
provides a data collection service and an alarming service.

Telemetry central agent
~~~~~~~~~~~~~~~~~~~~~~~
The Telemetry central agent can be configured to partition its polling
workload between multiple agents, enabling high availability.

Both the central and the compute agent can run in an HA deployment,
which means that multiple instances of these services can run in
parallel with workload partitioning among these running instances.

The `Tooz <https://pypi.python.org/pypi/tooz>`__ library provides
the coordination within the groups of service instances.
It provides an API above several back ends that can be used for building
distributed applications.
Tooz supports
`various drivers <http://docs.openstack.org/developer/tooz/drivers.html>`__
including the following back end solutions:

* `Zookeeper <http://zookeeper.apache.org/>`__.
  Recommended solution by the Tooz project.

* `Redis <http://redis.io/>`__.
  Recommended solution by the Tooz project.

* `Memcached <http://memcached.org/>`__.
  Recommended for testing.
You must configure a supported Tooz driver for the HA deployment of
the Telemetry services.

For information about the required configuration options that have
to be set in the :file:`ceilometer.conf` configuration file for both
the central and compute agents, see the `coordination section
<http://docs.openstack.org/newton/config-reference/telemetry.html>`__
in the OpenStack Configuration Reference.

.. note::

   Without the ``backend_url`` option being set, only one
   instance of both the central and compute agent service is able to run
   and function correctly.

The availability check of the instances is provided by heartbeat messages.
When the connection with an instance is lost, the workload will be
reassigned within the remaining instances in the next polling cycle.
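As an illustration, a Redis-backed coordination setup might look like the
following fragment of :file:`ceilometer.conf`. The host name ``controller``
and port ``6379`` are placeholder values for this sketch:

.. code-block:: ini

   [coordination]
   # Tooz connection URL for the coordination back end. The scheme
   # selects the driver (here: Redis); host and port are examples only.
   backend_url = redis://controller:6379

With this option set, multiple central and compute agent instances can join
the same coordination group and share the polling workload.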
.. note::

   Memcached uses a timeout value, which should always be set to
   a value that is higher than the heartbeat value set for Telemetry.
For backward compatibility and supporting existing deployments, the central
agent configuration also supports using different configuration files for
groups of service instances of this type that are running in parallel.
To enable this configuration, set a value for the
``partitioning_group_prefix`` option described in the
`polling section <http://docs.openstack.org/newton/config-reference/telemetry/telemetry-config-options.html>`__
of the OpenStack Configuration Reference.
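For example, a dedicated sub-group of central agents that polls only
image-related meters might use a fragment like the following in its
:file:`ceilometer.conf`. The prefix name is illustrative, and the section
name assumes the option lives under ``[polling]`` as in the linked reference:

.. code-block:: ini

   [polling]
   # All agents configured with the same prefix form one
   # partitioning sub-group and share a workload among themselves.
   partitioning_group_prefix = image-pollsters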
.. warning::

   For each sub-group of the central agent pool with the same
   ``partitioning_group_prefix``, a disjoint subset of meters must be polled,
   otherwise samples may be missing or duplicated. The list of meters to poll
   can be set in the :file:`/etc/ceilometer/pipeline.yaml` configuration file.
   For more information about pipelines, see the `Data collection and
   processing
   <http://docs.openstack.org/admin-guide/telemetry-data-collection.html#data-collection-and-processing>`__
   section.
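Continuing the sub-group example, the meter list for such a group could be
narrowed in :file:`/etc/ceilometer/pipeline.yaml` along these lines. This is
a sketch only; the source name, interval, and meter list are illustrative:

.. code-block:: yaml

   sources:
       - name: image_source
         # Polling interval in seconds (example value).
         interval: 600
         # Disjoint subset of meters polled by this sub-group.
         meters:
             - "image"
             - "image.size"
         sinks:
             - meter_sink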
To enable the compute agent to run multiple instances simultaneously with
workload partitioning, the ``workload_partitioning`` option has to be set to
``True`` under the `compute section <http://docs.openstack.org/newton/config-reference/telemetry.html>`__
in the :file:`ceilometer.conf` configuration file.
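The relevant fragment of :file:`ceilometer.conf` might then look like this,
assuming a coordination back end has already been configured via
``backend_url``:

.. code-block:: ini

   [compute]
   # Let multiple compute agent instances split the instance polling
   # workload among themselves through the coordination group.
   workload_partitioning = True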