Description of common configuration options
Format: configuration option = default value (Type) Description
[DEFAULT]
host = localhost (StrOpt) Name of this node, which must be valid in an AMQP key. It can be an opaque identifier. For ZeroMQ only, it must be a valid host name, FQDN, or IP address.
http_timeout = 600 (IntOpt) Timeout in seconds for HTTP requests. Set it to None to disable the timeout.
memcached_servers = None (ListOpt) Memcached servers, or None for an in-process cache.
notification_workers = 1 (IntOpt) Number of workers for the notification service. A single notification agent is enabled by default.
polling_namespaces = ['compute', 'central'] (MultiChoicesOpt) Polling namespace(s) to be used while polling resources.
pollster_list = [] (MultiChoicesOpt) List of pollsters (or wildcard templates) to be used while polling.
rootwrap_config = /etc/ceilometer/rootwrap.conf (StrOpt) Path to the rootwrap configuration file to use for running commands as root.
shuffle_time_before_polling_task = 0 (IntOpt) Shuffle the start time of each polling task to keep many compute agents from sending large simultaneous requests to Nova or other components.
sql_expire_samples_only = False (BoolOpt) Indicates whether the expirer expires only samples. If set to true, expired samples are deleted, but residual resource and meter definition data remain.
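Several of the [DEFAULT] options above can be combined in ceilometer.conf; a minimal sketch, where the host name and shuffle window are illustrative values rather than recommendations:

```ini
[DEFAULT]
# Node name; must be valid in an AMQP key, and a resolvable
# host name, FQDN, or IP address when using ZeroMQ.
host = compute01.example.org

# Timeout in seconds for HTTP requests.
http_timeout = 600

# Poll only the compute namespace on this agent.
polling_namespaces = compute

# Spread polling start times over up to 30 seconds so many
# agents do not hit Nova simultaneously.
shuffle_time_before_polling_task = 30
```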
[compute]
workload_partitioning = False (BoolOpt) Enable workload partitioning, allowing multiple compute agents to run simultaneously.
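Enabling partitioning on the compute agent is a one-line change, sketched below; note that it only takes effect when a coordination backend is also configured in the [coordination] section:

```ini
[compute]
# Allow multiple compute agents to share the polling workload.
workload_partitioning = True
```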
[coordination]
backend_url = None (StrOpt) The backend URL to use for distributed coordination. If left empty, the per-deployment central agent and per-host compute agent will not do workload partitioning and will only function correctly if a single instance of that service is running.
check_watchers = 10.0 (FloatOpt) Number of seconds between checks to see whether group membership has changed.
heartbeat = 1.0 (FloatOpt) Number of seconds between heartbeats for distributed coordination.
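A sketch of a [coordination] section using a Redis backend, assuming Redis runs on a controller node (the address is a placeholder):

```ini
[coordination]
# Backend URL for distributed coordination; required for
# workload partitioning to function.
backend_url = redis://controller.example.org:6379

# Seconds between group-membership checks and heartbeats.
check_watchers = 10.0
heartbeat = 1.0
```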
[keystone_authtoken]
memcached_servers = None (ListOpt) Optionally specify a list of memcached servers to use for caching. If left undefined, tokens are instead cached in-process.
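To cache validated tokens in memcached rather than in-process, point the option at one or more servers; the address below is illustrative:

```ini
[keystone_authtoken]
# Cache validated Keystone tokens in memcached.
memcached_servers = controller.example.org:11211
```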
[meter]
meter_definitions_cfg_file = meters.yaml (StrOpt) Configuration file for defining meter notifications.
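A hedged sketch of a meter definition in that file; the meter name, event type, and payload fields are hypothetical and would need to match the notifications emitted in a real deployment:

```yaml
---
metric:
  # Hypothetical gauge meter derived from a notification payload.
  - name: 'my.custom.meter'
    event_type: 'my.service.event'
    type: 'gauge'
    unit: 'B'
    # JSONPath expressions into the notification payload.
    volume: $.payload.size
    resource_id: $.payload.resource_id
```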
[polling]
partitioning_group_prefix = None (StrOpt) Workload partitioning group prefix. Use this only if you want to run multiple polling agents with different configuration files. For each sub-group of the agent pool sharing the same partitioning_group_prefix, a disjoint subset of pollsters should be loaded.
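To run separate pools of polling agents against disjoint pollster sets, each pool's configuration file can carry its own prefix; the prefix value here is illustrative:

```ini
[polling]
# Agents sharing this prefix form one partitioning sub-group.
partitioning_group_prefix = rack1
```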