openstack-manuals/doc/common/tables/ceilometer-common.xml
<?xml version='1.0' encoding='UTF-8'?>
<para xmlns="http://docbook.org/ns/docbook" version="5.0">
<!-- Warning: Do not edit this file. It is automatically
generated and your changes will be overwritten.
The tool to do so lives in openstack-doc-tools repository. -->
<table rules="all" xml:id="config_table_ceilometer_common">
<caption>Description of common configuration options</caption>
<col width="50%"/>
<col width="50%"/>
<thead>
<tr>
<th>Configuration option = Default value</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<th colspan="2">[DEFAULT]</th>
</tr>
<tr>
<td>host = localhost</td>
<td>(StrOpt) Name of this node, which must be valid in an AMQP key. Can be an opaque identifier. For ZeroMQ only, must be a valid host name, FQDN, or IP address.</td>
</tr>
<tr>
<td>memcached_servers = None</td>
<td>(ListOpt) Memcached servers, or None for an in-process cache.</td>
</tr>
<tr>
<td>notification_workers = 1</td>
<td>(IntOpt) Number of workers for the notification service. A single notification agent is enabled by default.</td>
</tr>
<tr>
<td>rootwrap_config = /etc/ceilometer/rootwrap.conf</td>
<td>(StrOpt) Path to the rootwrap configuration file to use for running commands as root.</td>
</tr>
<tr>
<th colspan="2">[central]</th>
</tr>
<tr>
<td>partitioning_group_prefix = None</td>
<td>(StrOpt) Work-load partitioning group prefix. Use only if you want to run multiple central agents with different config files. For each sub-group of the central agent pool with the same partitioning_group_prefix, a disjoint subset of pollsters should be loaded.</td>
</tr>
<tr>
<th colspan="2">[compute]</th>
</tr>
<tr>
<td>workload_partitioning = False</td>
<td>(BoolOpt) Enable work-load partitioning, allowing multiple compute agents to be run simultaneously.</td>
</tr>
<tr>
<th colspan="2">[coordination]</th>
</tr>
<tr>
<td>backend_url = None</td>
<td>(StrOpt) The backend URL to use for distributed coordination. If left empty, per-deployment central agent and per-host compute agent won't do workload partitioning and will only function correctly if a single instance of that service is running.</td>
</tr>
<tr>
<td>heartbeat = 1.0</td>
<td>(FloatOpt) Number of seconds between heartbeats for distributed coordination.</td>
</tr>
<tr>
<th colspan="2">[keystone_authtoken]</th>
</tr>
<tr>
<td>memcached_servers = None</td>
<td>(ListOpt) Optionally specify a list of memcached server(s) to use for caching. If left undefined, tokens will instead be cached in-process.</td>
</tr>
</tbody>
</table>
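For illustration, a minimal <filename>ceilometer.conf</filename> fragment that exercises the options above might look as follows. This is a sketch, not a recommended configuration: the <literal>controller</literal> host name and the Redis coordination backend URL are assumptions, not defaults.
<programlisting language="ini">[DEFAULT]
# Identify this node; the value must be valid in an AMQP key
# (assumed host name for this example).
host = controller
# Run two notification agents instead of the default single worker.
notification_workers = 2

[compute]
# Allow several compute agents to share the polling workload.
workload_partitioning = True

[coordination]
# Hypothetical tooz backend URL; without it, agents do not
# partition work and only a single instance runs correctly.
backend_url = redis://controller:6379
# Keep the default one-second heartbeat between coordinated agents.
heartbeat = 1.0</programlisting>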
</para>