Configuring the Telemetry (ceilometer) service (optional)
The Telemetry module (ceilometer) performs the following functions:
- Efficiently polls metering data related to OpenStack services.
- Collects event and metering data by monitoring notifications sent from services.
- Publishes collected data to various targets including data stores and message queues.
Note
As of Liberty, the alarming functionality is in a separate component. The metering-alarm containers handle the functionality through aodh services. For configuring these services, see the aodh docs: http://docs.openstack.org/developer/aodh/
Configure a MongoDB backend prior to running the ceilometer playbooks. The connection data is set in the user_variables.yml file (see the section Configuring the user data below).
Setting up a MongoDB database for ceilometer
- Install the MongoDB package:
# apt-get install mongodb-server mongodb-clients python-pymongo
- Edit the /etc/mongodb.conf file and change the bind_ip to the management interface:
bind_ip = 10.0.0.11
- Edit the /etc/mongodb.conf file and enable smallfiles:
smallfiles = true
- Restart the MongoDB service:
# service mongodb restart
- Create the ceilometer database:
# mongo --host controller --eval 'db = db.getSiblingDB("ceilometer"); db.addUser({user: "ceilometer", pwd: "CEILOMETER_DBPASS", roles: [ "readWrite", "dbAdmin" ]})'

This returns:

MongoDB shell version: 2.4.x
connecting to: controller:27017/test
{
    "user" : "ceilometer",
    "pwd" : "72f25aeee7ad4be52437d7cd3fc60f6f",
    "roles" : [
        "readWrite",
        "dbAdmin"
    ],
    "_id" : ObjectId("5489c22270d7fad1ba631dc3")
}

Note
Ensure CEILOMETER_DBPASS matches the ceilometer_container_db_password in the /etc/openstack_deploy/user_secrets.yml file. Ansible uses this value to build the connection string within the ceilometer configuration files.
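For reference, the connection string that ends up in ceilometer.conf would look roughly like the following. This is an illustrative sketch: the IP address and password are the placeholder values used in the examples above, not values generated by this guide.

```ini
[database]
# Placeholder host and password from the examples above.
connection = mongodb://ceilometer:CEILOMETER_DBPASS@10.0.0.11:27017/ceilometer
```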
Configuring the hosts
Configure ceilometer by specifying the
metering-compute_hosts and
metering-infra_hosts directives in the
/etc/openstack_deploy/conf.d/ceilometer.yml file. Below is
the example included in the
etc/openstack_deploy/conf.d/ceilometer.yml.example
file:
# The compute host that the ceilometer compute agent runs on
metering-compute_hosts:
  compute1:
    ip: 172.20.236.110

# The infra node that the central agents runs on
metering-infra_hosts:
  infra1:
    ip: 172.20.236.111
  # Adding more than one host requires further configuration for ceilometer
  # to work properly.
  infra2:
    ip: 172.20.236.112
  infra3:
    ip: 172.20.236.113
The metering-compute_hosts group houses the ceilometer-agent-compute service. It runs on each compute node and polls for resource utilization statistics. The metering-infra_hosts group houses several services:
- A central agent (ceilometer-agent-central): Runs on a central management server to poll for resource utilization statistics for resources not tied to instances or compute nodes. Multiple agents can be started to enable workload partitioning (See HA section below).
- A notification agent (ceilometer-agent-notification): Runs on a central management server(s) and consumes messages from the message queue(s) to build event and metering data. Multiple notification agents can be started to enable workload partitioning (See HA section below).
- A collector (ceilometer-collector): Runs on central management server(s) and dispatches data to a data store or external consumer without modification.
- An API server (ceilometer-api): Runs on one or more central management servers to provide data access from the data store.
Configuring the hosts for an HA deployment
Ceilometer supports running the polling and notification agents in an HA deployment.
The Tooz library provides the coordination within the groups of service instances. Tooz can be used with several backends. At the time of this writing, the following backends are supported:
- Zookeeper: Recommended solution by the Tooz project.
- Redis: Recommended solution by the Tooz project.
- Memcached: Recommended for testing.
Important
The OpenStack-Ansible project does not deploy these backends. The backends must already exist before deploying the ceilometer service.
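The backend is selected by the URL scheme of the coordination backend_url override. The general URL shapes for the three supported backends are sketched below; the addresses and ports are placeholders, not values from this guide's inventory.

```yaml
ceilometer_ceilometer_conf_overrides:
  coordination:
    # Pick exactly one backend_url; the scheme selects the Tooz driver.
    backend_url: "zookeeper://203.0.113.10:2181"    # Zookeeper (recommended)
    # backend_url: "redis://203.0.113.10:6379"      # Redis (recommended)
    # backend_url: "memcached://203.0.113.10:11211" # Memcached (testing only)
```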
Achieve HA by configuring the proper directives in
ceilometer.conf using
ceilometer_ceilometer_conf_overrides in the
user_variables.yml file. The ceilometer admin guide[1]
details the options used in ceilometer.conf for HA
deployment. The following is an example of
ceilometer_ceilometer_conf_overrides:
ceilometer_ceilometer_conf_overrides:
  coordination:
    backend_url: "zookeeper://172.20.1.110:2181"
  notification:
    workload_partitioning: True

Configuring the user data
Specify the following configurations in the
/etc/openstack_deploy/user_variables.yml file:
- The type of database backend ceilometer uses. Currently, only MongoDB is supported:
ceilometer_db_type: mongodb
- The IP address of the MongoDB host:
ceilometer_db_ip: localhost
- The port of the MongoDB service:
ceilometer_db_port: 27017
- This configures swift to send notifications to the message bus:
swift_ceilometer_enabled: False
- This configures heat to send notifications to the message bus:
heat_ceilometer_enabled: False
- This configures cinder to send notifications to the message bus:
cinder_ceilometer_enabled: False
- This configures glance to send notifications to the message bus:
glance_ceilometer_enabled: False
- This configures nova to send notifications to the message bus:
nova_ceilometer_enabled: False
- This configures neutron to send notifications to the message bus:
neutron_ceilometer_enabled: False
- This configures keystone to send notifications to the message bus:
keystone_ceilometer_enabled: False
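Taken together, a minimal user_variables.yml fragment covering the options above might look like the following sketch. The MongoDB address is a placeholder; replace it with your own host, and set the notification toggles to True for each service you want to meter.

```yaml
ceilometer_db_type: mongodb
ceilometer_db_ip: 10.0.0.11      # placeholder; your MongoDB host
ceilometer_db_port: 27017
# Set to True for each service whose notifications ceilometer should consume.
swift_ceilometer_enabled: False
heat_ceilometer_enabled: False
cinder_ceilometer_enabled: False
glance_ceilometer_enabled: False
nova_ceilometer_enabled: False
neutron_ceilometer_enabled: False
keystone_ceilometer_enabled: False
```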
Run the os-ceilometer-install.yml playbook. If deploying
a new OpenStack (instead of only ceilometer), run
setup-openstack.yml. The ceilometer playbooks run as part
of this playbook.
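For example, the playbooks can be run with the openstack-ansible wrapper from the playbooks directory of the deployment checkout. The path below is an assumption; adjust it to your environment.

```shell
# Assumed checkout location; adjust to your deployment.
cd /opt/openstack-ansible/playbooks

# Deploy only the Telemetry (ceilometer) service:
openstack-ansible os-ceilometer-install.yml

# Or, for a new OpenStack deployment, ceilometer is included in:
openstack-ansible setup-openstack.yml
```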