Better follow conventions in HA Guide

Follow our conventions for title capitalization and usage of
project/service names.

Follow conventions on using active voice, no -ing for titles.

Co-Authored-By: Diane Fleming <diane.fleming@rackspace.com>

Change-Id: I05e368f8dda436454d2daca1a6775105032a39fc
Partial-Bug: #1217503
Andreas Jaeger, 2014-05-11 15:43:07 +02:00
commit 18be804cdb, parent 41f9ff0834
23 changed files with 147 additions and 146 deletions


@ -1,4 +1,4 @@
[[ha-aa-automated-deployment]]
=== Automating Deployment with Puppet
=== Automate deployment with Puppet
(Coming soon)


@ -1,4 +1,4 @@
[[ha-aa-computes]]
=== OpenStack Compute Nodes
=== OpenStack compute nodes
(Coming soon)


@ -1,26 +1,26 @@
[[ha-aa-controllers]]
=== OpenStack Controller Nodes
=== OpenStack controller nodes
OpenStack Controller Nodes contains:
OpenStack controller nodes contain:
* All OpenStack API services
* All OpenStack schedulers
* Memcached service
==== Running OpenStack API & schedulers
==== Run OpenStack API and schedulers
===== API Services
===== API services
All OpenStack projects have an API service for controlling all the resources in the cloud.
In Active / Active mode, the most common setup is to scale out these services on at least two nodes
and use load balancing and a virtual IP (with HAProxy and Keepalived in this setup).
*Configuring API OpenStack services*
*Configure API OpenStack services*
To configure our cloud with highly available and scalable API services, we need to ensure that:
* Using Virtual IP when configuring OpenStack Identity Endpoints.
* You use Virtual IP when configuring OpenStack Identity endpoints.
* All OpenStack configuration files should refer to Virtual IP.
*In case of failure*
@ -49,17 +49,18 @@ Please refer to the RabbitMQ section for configure these services with multiple
==== Memcached
Most of OpenStack services use an application to offer persistence and store ephemeral datas (like tokens).
Most OpenStack services use an application to offer persistence and to store ephemeral data (such as tokens).
Memcached is one of them and can scale out easily without any specific tricks.
To install and configure it, you can read the http://code.google.com/p/memcached/wiki/NewStart[official documentation].
To install and configure it, read the http://code.google.com/p/memcached/wiki/NewStart[official documentation].
Memory caching is managed by Oslo-incubator for so the way to use multiple memcached servers is the same for all projects.
Memory caching is managed by oslo-incubator, so the way to use multiple memcached servers is the same for all projects.
Example with two hosts:
----
memcached_servers = controller1:11211,controller2:11211
----
By default, controller1 will handle the caching service but if the host goes down, controller2 will do the job.
More informations about memcached installation are in the OpenStack Compute Manual.
By default, controller1 handles the caching service but if the host goes down, controller2 does the job.
For more information about memcached installation, see the OpenStack
Cloud Administrator Guide.


@ -2,9 +2,8 @@
=== Database
The first step is installing the database that sits at the heart of the
cluster. When we're talking about High Availability, however, we're
talking about not just one database, but several (for redundancy) and a
means to keep them synchronized. In this case, we're going to choose the
cluster. When we talk about High Availability, we talk about several databases (for redundancy) and a
means to keep them synchronized. In this case, we must choose the
MySQL database, along with Galera for synchronous multi-master replication.
The choice of database isn't a foregone conclusion; you're not required
@ -81,7 +80,7 @@ ports.
Now you're ready to actually create the cluster.
===== Creating the cluster
===== Create the cluster
In creating a cluster, you first start a single instance, which creates the cluster. The rest of the MySQL instances then connect to that cluster. For example, if you started on +10.0.0.10+ by executing the command:
@ -157,11 +156,11 @@ mysql> show status like 'wsrep%';
[[ha-aa-db-galera-monitoring]]
==== Galera Monitoring Scripts
==== Galera monitoring scripts
(Coming soon)
==== Other ways to provide a Highly Available database
==== Other ways to provide a highly available database
MySQL with Galera is by no means the only way to achieve database HA. MariaDB
(https://mariadb.org/) and Percona (http://www.percona.com/) also work with Galera.


@ -1,5 +1,5 @@
[[ha-aa-haproxy]]
=== HAproxy Nodes
=== HAproxy nodes
HAProxy is a very fast and reliable solution offering high availability, load balancing, and proxying
for TCP and HTTP-based applications. It is particularly suited for web sites crawling under very high loads
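For orientation only, a minimal sketch of the kind of +listen+ block such a setup uses in +haproxy.cfg+: the virtual IP is the API VIP used elsewhere in this guide, while the backend controller addresses and the choice of the Identity API port are assumptions, not values from this guide.

----
# Hypothetical front end for the Identity API behind the virtual IP
listen keystone_public
  bind 192.168.42.103:5000
  balance roundrobin
  option tcpka
  server controller1 192.168.42.1:5000 check inter 2000 rise 2 fall 5
  server controller2 192.168.42.2:5000 check inter 2000 rise 2 fall 5
----

Keepalived (or Pacemaker, in the active / passive sections of this guide) then keeps the virtual IP itself alive on one of the HAProxy nodes.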


@ -1,39 +1,39 @@
[[ha-aa-network]]
=== OpenStack Network Nodes
=== OpenStack network nodes
OpenStack Network Nodes contains:
OpenStack network nodes contain:
* Neutron DHCP Agent
* Neutron L2 Agent
* Neutron L3 Agent
* Neutron Metadata Agent
* Neutron LBaaS Agent
* neutron DHCP agent
* neutron L2 agent
* neutron L3 agent
* neutron metadata agent
* neutron LBaaS agent
NOTE: The Neutron L2 Agent does not need to be highly available. It has to be
NOTE: The neutron L2 agent does not need to be highly available. It has to be
installed on each Data Forwarding Node and controls the virtual networking
drivers as Open-vSwitch or Linux Bridge. One L2 agent runs per node
drivers, such as Open vSwitch or Linux Bridge. One L2 agent runs per node
and controls its virtual interfaces. That's why it cannot be distributed and
highly available.
==== Running Neutron DHCP Agent
==== Run neutron DHCP agent
The OpenStack Networking service has a scheduler that
lets you run multiple agents across nodes. Also, the DHCP agent can be natively
highly available. For details, see http://docs.openstack.org/trunk/config-reference/content/app_demo_multi_dhcp_agents.html[OpenStack Configuration Reference].
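As a brief illustration of the native approach (the option lives in +neutron.conf+ on the host running neutron-server; the value chosen here is an assumption), the scheduler can spawn more than one DHCP agent per network:

----
# /etc/neutron/neutron.conf
# Schedule each network onto two DHCP agents instead of one
dhcp_agents_per_network = 2
----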
==== Running Neutron L3 Agent
==== Run neutron L3 agent
The Neutron L3 Agent is scalable thanks to the scheduler
The neutron L3 agent is scalable thanks to the scheduler
that allows distribution of virtual routers across multiple nodes.
But there is no native feature to make these routers highly available.
At this time, the Active / Passive solution exists to run the Neutron L3
agent in failover mode with Pacemaker. See the Active / Passive
section of this guide.
==== Running Neutron Metadata Agent
==== Run neutron metadata agent
There is no native feature to make this service highly available.
At this time, the Active / Passive solution exists to run the Neutron
Metadata agent in failover mode with Pacemaker. See the Active /
At this time, the Active / Passive solution exists to run the neutron
metadata agent in failover mode with Pacemaker. See the Active /
Passive section of this guide.


@ -1,2 +1,2 @@
[[ha-using-active-active]]
== HA Using Active/Active
== HA using active/active


@ -81,7 +81,7 @@ rabbitmqctl set_policy HA '^(?!amq\.).*' '{"ha-mode": "all"}'
http://www.rabbitmq.com/ha.html[More information about High availability in RabbitMQ]
==== Configure OpenStack Services to use RabbitMQ
==== Configure OpenStack services to use RabbitMQ
We have to configure the OpenStack components to use at least two RabbitMQ nodes.
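A minimal sketch, assuming the two brokers are reachable as +rabbit1+ and +rabbit2+ (host names not taken from this guide); the same options apply to the other services' configuration files:

----
# e.g. /etc/nova/nova.conf
rabbit_hosts = rabbit1:5672,rabbit2:5672
# Use the mirrored queues declared by the HA policy above
rabbit_ha_queues = True
----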


@ -1,5 +1,5 @@
[[ha-aa-storage]]
=== Storage Backends for OpenStack Image and Cinder
=== Storage back ends for OpenStack Image Service and Block Storage
(Coming soon)


@ -1,8 +1,8 @@
[[ch-api]]
=== API Node Cluster Stack
=== API node cluster stack
The API node exposes OpenStack API endpoints onto the external network (Internet).
It needs to talk to the Cloud Controller on the management network.
It must talk to the cloud controller on the management network.
include::ap-api-vip.txt[]
include::ap-keystone.txt[]


@ -1,7 +1,7 @@
[[s-api-pacemaker]]
==== Configure Pacemaker Group
==== Configure Pacemaker group
Finally, we need to create a service +group+ to ensure that virtual IP is linked to the API services resources :
Finally, we need to create a service +group+ to ensure that virtual IP is linked to the API services resources:
----
include::includes/pacemaker-api.crm[]


@ -1,7 +1,7 @@
[[s-api-vip]]
==== Configure the VIP
First of all, we need to select and assign a virtual IP address (VIP) that can freely float between cluster nodes.
First, you must select and assign a virtual IP address (VIP) that can freely float between cluster nodes.
This configuration creates +p_ip_api+, a virtual IP address for use by the API node (192.168.42.103):
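The actual resource definition is pulled in from an include file; as a rough sketch only (the netmask and monitor interval below are assumptions), such a virtual IP primitive in +crm configure+ typically looks like:

----
primitive p_ip_api ocf:heartbeat:IPaddr2 \
  params ip="192.168.42.103" cidr_netmask="24" \
  op monitor interval="30s"
----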


@ -1,23 +1,23 @@
[[s-ceilometer-agent-central]]
==== Highly available Ceilometer Central Agent
==== Highly available Telemetry central agent
Ceilometer is the metering service in OpenStack.
Central Agent polls for resource utilization statistics for resources not tied
to instances or compute nodes.
Telemetry (ceilometer) is the metering and monitoring service in
OpenStack. The Central agent polls for resource utilization
statistics for resources not tied to instances or compute nodes.
NOTE: Due to limitations of the polling model, only a single instance of this agent
can poll a given list of meters. In this setup, we also install this service
on the API nodes, in active / passive mode.
Making the Ceilometer Central Agent service highly available in active / passive mode involves
Making the Telemetry central agent service highly available in active / passive mode involves
managing its daemon with the Pacemaker cluster manager.
NOTE: You will find at http://docs.openstack.org/developer/ceilometer/install/manual.html#installing-the-central-agent[this page]
the process to install the Ceilometer Central Agent.
the process to install the Telemetry central agent.
===== Adding the Ceilometer Central Agent resource to Pacemaker
===== Add the Telemetry central agent resource to Pacemaker
First of all, you need to download the resource agent to your system:
@ -28,7 +28,7 @@ chmod a+rx *
----
You may then proceed with adding the Pacemaker configuration for
the Ceilometer Central Agent resource. Connect to the Pacemaker cluster with +crm
the Telemetry central agent resource. Connect to the Pacemaker cluster with +crm
configure+, and add the following cluster resources:
----
@ -47,7 +47,7 @@ Once completed, commit your configuration changes by entering +commit+
from the +crm configure+ menu. Pacemaker will then start the Ceilometer Central Agent
service, and its dependent resources, on one of your nodes.
===== Configuring Ceilometer Central Agent service
===== Configure Telemetry central agent service
Edit +/etc/ceilometer/ceilometer.conf+:
----
@ -59,6 +59,6 @@ notifier_strategy = rabbit
rabbit_host = 192.168.42.102
[database]
# We have to use MySQL connection to store datas :
# We have to use a MySQL connection to store data:
sql_connection=mysql://ceilometer:password@192.168.42.101/ceilometer
----


@ -1,17 +1,18 @@
[[s-cinder-api]]
==== Highly available Cinder API
==== Highly available Block Storage API
Cinder is the block storage service in OpenStack.
Making the Cinder API service highly available in active / passive mode involves
To make the Block Storage (cinder) API service highly available in active / passive mode, you must:
* configuring Cinder to listen on the VIP address,
* managing Cinder API daemon with the Pacemaker cluster manager,
* configuring OpenStack services to use this IP address.
* Configure Block Storage to listen on the VIP address.
* Manage the Block Storage API daemon with the Pacemaker cluster manager.
* Configure OpenStack services to use this IP address.
NOTE: Here is the http://docs.openstack.org/trunk/install-guide/install/apt/content/cinder-install.html[documentation] for installing Cinder service.
NOTE: Here is the
http://docs.openstack.org/trunk/install-guide/install/apt/content/ch_cinder.html[documentation]
for installing Block Storage service.
===== Adding Cinder API resource to Pacemaker
===== Add Block Storage API resource to Pacemaker
First of all, you need to download the resource agent to your system:
@ -21,8 +22,8 @@ wget https://raw.github.com/madkiss/openstack-resource-agents/master/ocf/cinder-
chmod a+rx *
----
You may now proceed with adding the Pacemaker configuration for
Cinder API resource. Connect to the Pacemaker cluster with +crm
You can now add the Pacemaker configuration for
Block Storage API resource. Connect to the Pacemaker cluster with +crm
configure+, and add the following cluster resources:
----
@ -31,7 +32,7 @@ include::includes/pacemaker-cinder_api.crm[]
This configuration creates
* +p_cinder-api+, a resource for manage Cinder API service
* +p_cinder-api+, a resource to manage the Block Storage API service
+crm configure+ supports batch input, so you may copy and paste the
above into your live pacemaker configuration, and then make changes as
@ -40,17 +41,17 @@ required. For example, you may enter +edit p_ip_cinder-api+ from the
virtual IP address.
Once completed, commit your configuration changes by entering +commit+
from the +crm configure+ menu. Pacemaker will then start the Cinder API
from the +crm configure+ menu. Pacemaker will then start the Block Storage API
service, and its dependent resources, on one of your nodes.
===== Configuring Cinder API service
===== Configure Block Storage API service
Edit +/etc/cinder/cinder.conf+:
----
# We have to use MySQL connection to store data:
sql_connection=mysql://cinder:password@192.168.42.101/cinder
# We bind Cinder API to the VIP :
# We bind Block Storage API to the VIP:
osapi_volume_listen = 192.168.42.103
# We send notifications to the highly available RabbitMQ:
@ -59,13 +60,13 @@ rabbit_host = 192.168.42.102
----
===== Configuring OpenStack Services to use High Available Cinder API
===== Configure OpenStack services to use highly available Block Storage API
Your OpenStack services must now point their Cinder API configuration to
Your OpenStack services must now point their Block Storage API configuration to
the highly available, virtual cluster IP address -- rather than a
Cinder API server's physical IP address as you normally would.
Block Storage API server's physical IP address as you normally would.
You need to create the Cinder API Endpoint with this IP.
You must create the Block Storage API endpoint with this IP.
NOTE: If you are using both private and public IP addresses, you should create two virtual IPs and define your endpoint like this:
----


@ -1,7 +1,7 @@
[[ch-controller]]
=== Cloud Controller Cluster Stack
=== Cloud controller cluster stack
The Cloud Controller sits on the management network and needs to talk to all other services.
The cloud controller runs on the management network and must talk to all other services.
include::ap-mysql.txt[]
include::ap-rabbitmq.txt[]


@ -2,16 +2,16 @@
==== Highly available OpenStack Image API
OpenStack Image Service offers a service for discovering, registering, and retrieving virtual machine images.
Making the OpenStack Image API service highly available in active / passive mode involves
To make the OpenStack Image API service highly available in active / passive mode, you must:
* configuring OpenStack Image to listen on the VIP address,
* managing OpenStack Image API daemon with the Pacemaker cluster manager,
* configuring OpenStack services to use this IP address.
* Configure OpenStack Image to listen on the VIP address.
* Manage OpenStack Image API daemon with the Pacemaker cluster manager.
* Configure OpenStack services to use this IP address.
NOTE: Here is the http://docs.openstack.org/trunk/install-guide/install/apt/content/ch_installing-openstack-image.html[documentation] for installing OpenStack Image API service.
===== Adding OpenStack Image API resource to Pacemaker
===== Add OpenStack Image API resource to Pacemaker
First of all, you need to download the resource agent to your system:
@ -21,7 +21,7 @@ wget https://raw.github.com/madkiss/openstack-resource-agents/master/ocf/glance-
chmod a+rx *
----
You may now proceed with adding the Pacemaker configuration for
You can now add the Pacemaker configuration for
OpenStack Image API resource. Connect to the Pacemaker cluster with +crm
configure+, and add the following cluster resources:
@ -43,7 +43,7 @@ Once completed, commit your configuration changes by entering +commit+
from the +crm configure+ menu. Pacemaker will then start the OpenStack Image API
service, and its dependent resources, on one of your nodes.
===== Configuring OpenStack Image API service
===== Configure OpenStack Image Service API
Edit +/etc/glance/glance-api.conf+:
----
@ -62,7 +62,7 @@ rabbit_host = 192.168.42.102
----
===== Configuring OpenStack Services to use High Available OpenStack Image API
===== Configure OpenStack services to use highly available OpenStack Image API
Your OpenStack services must now point their OpenStack Image API configuration to
the highly available, virtual cluster IP address -- rather than an
@ -76,7 +76,7 @@ the following line in your +nova.conf+ file:
glance_api_servers = 192.168.42.103
----
You need also to create the OpenStack Image API Endpoint with this IP.
You must also create the OpenStack Image API endpoint with this IP.
NOTE: If you are using both private and public IP addresses, you should create two Virtual IP addresses and define your endpoint like this:
----


@ -4,14 +4,14 @@
OpenStack Identity is the Identity Service in OpenStack and is used by many services.
To make the OpenStack Identity service highly available in active / passive mode, you must:
* configuring OpenStack Identity to listen on the VIP address,
* Configure OpenStack Identity to listen on the VIP address.
* Manage the OpenStack Identity daemon with the Pacemaker cluster manager.
* configuring OpenStack services to use this IP address.
* Configure OpenStack services to use this IP address.
NOTE: Here is the http://docs.openstack.org/trunk/install-guide/install/apt/content/ch_installing-openstack-identity-service.html[documentation] for installing OpenStack Identity service.
===== Adding OpenStack Identity resource to Pacemaker
===== Add OpenStack Identity resource to Pacemaker
First of all, you need to download the resource agent to your system:
@ -23,7 +23,7 @@ wget https://raw.github.com/madkiss/openstack-resource-agents/master/ocf/keyston
chmod a+rx *
----
You may now proceed with adding the Pacemaker configuration for
You can now add the Pacemaker configuration for
OpenStack Identity resource. Connect to the Pacemaker cluster with +crm
configure+, and add the following cluster resources:
@ -43,7 +43,7 @@ Once completed, commit your configuration changes by entering +commit+
from the +crm configure+ menu. Pacemaker will then start the OpenStack Identity
service, and its dependent resources, on one of your nodes.
===== Configuring OpenStack Identity service
===== Configure OpenStack Identity service
You need to edit your OpenStack Identity configuration file (+keystone.conf+) and change the bind parameters:
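The exact listing follows in the guide; as a sketch, assuming the classic +bind_host+ option of that release, the change amounts to pointing the Identity API at the virtual IP:

----
# /etc/keystone/keystone.conf
[DEFAULT]
bind_host = 192.168.42.103
----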
@ -69,7 +69,7 @@ driver = keystone.identity.backends.sql.Identity
----
===== Configuring OpenStack Services to use the Highly Available OpenStack Identity
===== Configure OpenStack services to use the highly available OpenStack Identity
Your OpenStack services must now point their OpenStack Identity configuration to
the highly available, virtual cluster IP address -- rather than a
@ -91,7 +91,7 @@ NOTE : If you are using both private and public IP addresses, you should create
keystone endpoint-create --region $KEYSTONE_REGION --service-id $service-id --publicurl 'http://PUBLIC_VIP:5000/v2.0' --adminurl 'http://192.168.42.103:35357/v2.0' --internalurl 'http://192.168.42.103:5000/v2.0'
----
If you are using the Horizon Dashboard, you should edit the +local_settings.py+ file:
If you are using the horizon dashboard, you should edit the +local_settings.py+ file:
----
OPENSTACK_HOST = 192.168.42.103
----


@ -4,23 +4,23 @@
MySQL is the default database server used by many OpenStack
services. To make the MySQL service highly available, you must:
* configuring a DRBD device for use by MySQL,
* configuring MySQL to use a data directory residing on that DRBD
* Configure a DRBD device for use by MySQL.
* Configure MySQL to use a data directory residing on that DRBD
device.
* Select and assign a virtual IP address (VIP) that can freely
float between cluster nodes.
* configuring MySQL to listen on that IP address,
* Configure MySQL to listen on that IP address.
* Manage all resources, including the MySQL daemon itself, with
the Pacemaker cluster manager.
NOTE: http://codership.com/products/mysql_galera[MySQL/Galera] is an
alternative method of configuring MySQL for high availability. It is
alternative method to configure MySQL for high availability. It is
likely to become the preferred method of achieving MySQL high
availability once it has sufficiently matured. At the time of writing,
however, the Pacemaker/DRBD based approach remains the recommended one
for OpenStack environments.
===== Configuring DRBD
===== Configure DRBD
The Pacemaker based MySQL server requires a DRBD resource from
which it mounts the +/var/lib/mysql+ directory. In this example,
@ -93,7 +93,7 @@ background:
drbdadm secondary mysql
----
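For orientation, a sketch of what the +mysql+ DRBD resource referenced above might look like; the host names, backing device, and addresses below are placeholders, not values from this guide:

----
# /etc/drbd.d/mysql.res (hypothetical values)
resource mysql {
  device    /dev/drbd0;
  disk      /dev/data/mysql;
  meta-disk internal;
  on node1 {
    address 10.0.42.100:7700;
  }
  on node2 {
    address 10.0.42.254:7700;
  }
}
----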
===== Preparing MySQL for Pacemaker high availability
===== Prepare MySQL for Pacemaker high availability
In order for Pacemaker monitoring to function properly, you must
ensure that MySQL's database files reside on the DRBD device. If you
@ -122,9 +122,9 @@ node1:# umount /mnt
Regardless of the approach, the steps outlined here must be completed
on only one cluster node.
===== Adding MySQL resources to Pacemaker
===== Add MySQL resources to Pacemaker
You may now proceed with adding the Pacemaker configuration for
You can now add the Pacemaker configuration for
MySQL resources. Connect to the Pacemaker cluster with +crm
configure+, and add the following cluster resources:
@ -154,7 +154,7 @@ Once completed, commit your configuration changes by entering +commit+
from the +crm configure+ menu. Pacemaker will then start the MySQL
service, and its dependent resources, on one of your nodes.
===== Configuring OpenStack services for highly available MySQL
===== Configure OpenStack services for highly available MySQL
Your OpenStack services must now point their MySQL configuration to
the highly available, virtual cluster IP address -- rather than a
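As an illustration (the credentials and database name are placeholders), a service such as Compute would then point at the MySQL virtual IP used elsewhere in this guide rather than at an individual database node:

----
# e.g. /etc/nova/nova.conf
sql_connection = mysql://nova:password@192.168.42.101/nova
----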


@ -1,21 +1,21 @@
[[ch-network]]
=== Network Controller Cluster Stack
=== Network controller cluster stack
The Network controller sits on the management and data network, and needs to be connected to the Internet if a VM needs access to it.
The network controller sits on the management and data network, and needs to be connected to the Internet if a VM needs access to it.
NOTE: Both nodes should have the same hostname since the Neutron scheduler will be
NOTE: Both nodes should have the same hostname since the Networking scheduler will be
aware of one node, for example a virtual router attached to a single L3 node.
==== Highly available Neutron L3 Agent
==== Highly available neutron L3 agent
The Neutron L3 agent provides L3/NAT forwarding to ensure external network access
for VMs on tenant networks. High Availability for the L3 agent is achieved by
The neutron L3 agent provides L3/NAT forwarding to ensure external network access
for VMs on tenant networks. High availability for the L3 agent is achieved by
adopting Pacemaker.
NOTE: Here is the http://docs.openstack.org/trunk/config-reference/content/section_adv_cfg_l3_agent.html[documentation] for installing Neutron L3 Agent.
NOTE: Here is the http://docs.openstack.org/trunk/config-reference/content/section_adv_cfg_l3_agent.html[documentation] for installing neutron L3 agent.
===== Adding Neutron L3 Agent resource to Pacemaker
===== Add neutron L3 agent resource to Pacemaker
First of all, you need to download the resource agent to your system:
----
@ -25,7 +25,7 @@ chmod a+rx neutron-l3-agent
----
You may now proceed with adding the Pacemaker configuration for
Neutron L3 Agent resource. Connect to the Pacemaker cluster with +crm
neutron L3 agent resource. Connect to the Pacemaker cluster with +crm
configure+, and add the following cluster resources:
----
@ -41,23 +41,23 @@ above into your live pacemaker configuration, and then make changes as
required.
Once completed, commit your configuration changes by entering +commit+
from the +crm configure+ menu. Pacemaker will then start the Neutron L3 Agent
from the +crm configure+ menu. Pacemaker will then start the neutron L3 agent
service, and its dependent resources, on one of your nodes.
NOTE: This method does not ensure zero downtime since it has to recreate all
the namespaces and virtual routers on the node.
==== Highly available Neutron DHCP Agent
==== Highly available neutron DHCP agent
Neutron DHCP agent distributes IP addresses to the VMs with dnsmasq (by
default). High Availability for the DHCP agent is achieved by adopting
default). High availability for the DHCP agent is achieved by adopting
Pacemaker.
NOTE: Here is the http://docs.openstack.org/trunk/config-reference/content/section_adv_cfg_dhcp_agent.html[documentation] for installing Neutron DHCP Agent.
NOTE: Here is the http://docs.openstack.org/trunk/config-reference/content/section_adv_cfg_dhcp_agent.html[documentation] for installing neutron DHCP agent.
===== Adding Neutron DHCP Agent resource to Pacemaker
===== Add neutron DHCP agent resource to Pacemaker
First of all, you need to download the resource agent to your system:
----
@ -67,7 +67,7 @@ chmod a+rx neutron-dhcp-agent
----
You may now proceed with adding the Pacemaker configuration for
Neutron DHCP Agent resource. Connect to the Pacemaker cluster with +crm
neutron DHCP agent resource. Connect to the Pacemaker cluster with +crm
configure+, and add the following cluster resources:
----
@ -84,20 +84,20 @@ above into your live pacemaker configuration, and then make changes as
required.
Once completed, commit your configuration changes by entering +commit+
from the +crm configure+ menu. Pacemaker will then start the Neutron DHCP
Agent service, and its dependent resources, on one of your nodes.
from the +crm configure+ menu. Pacemaker will then start the neutron DHCP
agent service, and its dependent resources, on one of your nodes.
==== Highly available Neutron Metadata Agent
==== Highly available neutron metadata agent
Neutron Metadata agent allows Nova API Metadata to be reachable by VMs on tenant
networks. High Availability for the Metadata agent is achieved by adopting
The neutron metadata agent allows Compute API metadata to be reachable by VMs on tenant
networks. High availability for the metadata agent is achieved by adopting
Pacemaker.
NOTE: Here is the http://docs.openstack.org/trunk/config-reference/content/networking-options-metadata.html[documentation] for installing Neutron Metadata Agent.
===== Adding Neutron Metadata Agent resource to Pacemaker
===== Add neutron metadata agent resource to Pacemaker
First of all, you need to download the resource agent to your system:
----
@ -107,7 +107,7 @@ chmod a+rx neutron-metadata-agent
----
You may now proceed with adding the Pacemaker configuration for
Neutron Metadata Agent resource. Connect to the Pacemaker cluster with +crm
neutron metadata agent resource. Connect to the Pacemaker cluster with +crm
configure+, and add the following cluster resources:
----
@ -124,12 +124,12 @@ above into your live pacemaker configuration, and then make changes as
required.
Once completed, commit your configuration changes by entering +commit+
from the +crm configure+ menu. Pacemaker will then start the Neutron Metadata
Agent service, and its dependent resources, on one of your nodes.
from the +crm configure+ menu. Pacemaker will then start the neutron metadata
agent service, and its dependent resources, on one of your nodes.
==== Manage network resources
You may now proceed with adding the Pacemaker configuration for
You can now add the Pacemaker configuration for
managing all network resources together with a group.
Connect to the Pacemaker cluster with +crm configure+, and add the following
cluster resources:


@ -1,17 +1,17 @@
[[s-neutron-server]]
==== Highly available OpenStack Networking Server
==== Highly available OpenStack Networking server
OpenStack Networking is the network connectivity service in OpenStack.
To make the OpenStack Networking server highly available in active / passive mode, you must:
* configuring OpenStack Networking to listen on the VIP address,
* Configure OpenStack Networking to listen on the VIP address.
* Manage the OpenStack Networking API server daemon with the Pacemaker cluster manager.
* configuring OpenStack services to use this IP address.
* Configure OpenStack services to use this IP address.
NOTE: Here is the http://docs.openstack.org/trunk/install-guide/install/apt/content/ch_installing-openstack-networking.html[documentation] for installing OpenStack Networking service.
===== Adding OpenStack Networking Server resource to Pacemaker
===== Add OpenStack Networking Server resource to Pacemaker
First of all, you need to download the resource agent to your system:
----
@ -20,7 +20,7 @@ wget https://raw.github.com/madkiss/openstack-resource-agents/master/ocf/neutron
chmod a+rx *
----
You may now proceed with adding the Pacemaker configuration for
You can now add the Pacemaker configuration for
OpenStack Networking Server resource. Connect to the Pacemaker cluster with +crm
configure+, and add the following cluster resources:
----
@ -39,7 +39,7 @@ Once completed, commit your configuration changes by entering +commit+
from the +crm configure+ menu. Pacemaker will then start the OpenStack Networking API
service, and its dependent resources, on one of your nodes.
===== Configuring OpenStack Networking Server
===== Configure OpenStack Networking server
Edit +/etc/neutron/neutron.conf+:
----
@ -59,20 +59,20 @@ connection = mysql://neutron:password@192.168.42.101/neutron
----
===== Configuring OpenStack Services to use Highly available OpenStack Networking Server
===== Configure OpenStack services to use highly available OpenStack Networking server
Your OpenStack services must now point their OpenStack Networking Server configuration to
the highly available, virtual cluster IP address -- rather than an
OpenStack Networking server's physical IP address as you normally would.
For example, you should configure OpenStack Compute for using Highly Available OpenStack Networking Server in editing +nova.conf+ file:
For example, you should configure OpenStack Compute to use the highly available OpenStack Networking server by editing the +nova.conf+ file:
----
neutron_url = http://192.168.42.103:9696
----
You need to create the OpenStack Networking Server Endpoint with this IP.
You need to create the OpenStack Networking server endpoint with this IP.
NOTE : If you are using both private and public IP addresses, you should create two Virtual IP addresses and define your endpoint like this:
NOTE: If you are using both private and public IP addresses, you should create two Virtual IP addresses and define your endpoint like this:
----
keystone endpoint-create --region $KEYSTONE_REGION --service-id $service-id --publicurl 'http://PUBLIC_VIP:9696/' --adminurl 'http://192.168.42.103:9696/' --internalurl 'http://192.168.42.103:9696/'
----


@ -1,2 +1,2 @@
[[ha-using-active-passive]]
== HA Using Active/Passive
== HA using active/passive


@ -1,5 +1,5 @@
[[ch-pacemaker]]
=== The Pacemaker Cluster Stack
=== The Pacemaker cluster stack
OpenStack infrastructure high availability relies on the
http://www.clusterlabs.org[Pacemaker] cluster stack, the
@ -21,11 +21,11 @@ databases or virtual IP addresses), existing third-party RAs (such as
for RabbitMQ), and native OpenStack RAs (such as those managing the
OpenStack Identity and Image Services).
==== Installing Packages
==== Install packages
On any host that is meant to be part of a Pacemaker cluster, you must
first establish cluster communications through the Corosync messaging
layer. This involves installing the following packages (and their
layer. To do this, install the following packages (and their
dependencies, which your package manager will normally install
automatically):
@ -37,9 +37,9 @@ automatically):
agents from +cluster-glue+)
* +resource-agents+
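For example, on an apt-based distribution the packages listed above can usually be pulled in with a single command (package names are assumptions and may differ between distributions):

----
apt-get install pacemaker corosync cluster-glue resource-agents
----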
==== Setting up Corosync
==== Set up Corosync
Besides installing the +corosync+ package, you will also have to
Besides installing the +corosync+ package, you must also
create a configuration file, stored in
+/etc/corosync/corosync.conf+. Most distributions ship an example
configuration file (+corosync.conf.example+) as part of the
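For orientation only, a heavily trimmed sketch of the +totem+ section such a file contains; the network and multicast values below are placeholders, not the guide's recommendations:

----
totem {
  version: 2
  secauth: off
  interface {
    ringnumber: 0
    bindnetaddr: 192.168.42.0
    mcastaddr: 239.255.42.1
    mcastport: 5405
  }
}
----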
@ -141,7 +141,7 @@ runtime.totem.pg.mrp.srp.983895584.status=joined
You should see a +status=joined+ entry for each of your constituent
cluster nodes.
==== Starting Pacemaker
==== Start Pacemaker
Once the Corosync services have been started, and you have established
that the cluster is communicating properly, it is safe to start
@ -170,7 +170,7 @@ Version: 1.1.6-9971ebba4494012a93c03b40a2c58ec0eb60f50c
Online: [ node2 node1 ]
----
==== Setting basic cluster properties
==== Set basic cluster properties
Once your Pacemaker cluster is set up, it is recommended to set a few
basic cluster properties. To do so, start the +crm+ shell and change
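The properties the guide recommends appear in the elided listing; as a sketch of the kind of settings typically applied to a small OpenStack cluster (the values here are assumptions), the +crm configure+ input looks roughly like:

----
property no-quorum-policy="ignore" \
  pe-warn-series-max="1000" \
  pe-input-series-max="1000" \
  pe-error-series-max="1000" \
  cluster-recheck-interval="5min"
----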


@ -23,7 +23,7 @@ approach remains the recommended one for OpenStack environments,
although this may change in the near future as RabbitMQ active-active
mirrored queues mature.
===== Configuring DRBD
===== Configure DRBD
The Pacemaker based RabbitMQ server requires a DRBD resource from
which it mounts the +/var/lib/rabbitmq+ directory. In this example,
@ -69,7 +69,7 @@ the primary and secondary roles in DRBD. Must be completed _on one
node only,_ namely the one where you are about to continue with
creating your filesystem.
===== Creating a file system
===== Create a file system
Once the DRBD resource is running and in the primary role (and
potentially still in the process of running the initial device
@ -96,7 +96,7 @@ background:
drbdadm secondary rabbitmq
----
===== Preparing RabbitMQ for Pacemaker high availability
===== Prepare RabbitMQ for Pacemaker high availability
In order for Pacemaker monitoring to function properly, you must
ensure that RabbitMQ's +.erlang.cookie+ files are identical on all
@ -112,7 +112,7 @@ node1:# cp -a /var/lib/rabbitmq/.erlang.cookie /mnt
node1:# umount /mnt
----
===== Adding RabbitMQ resources to Pacemaker
===== Add RabbitMQ resources to Pacemaker
You may now proceed with adding the Pacemaker configuration for
RabbitMQ resources. Connect to the Pacemaker cluster with +crm
@ -144,7 +144,7 @@ Once completed, commit your configuration changes by entering +commit+
from the +crm configure+ menu. Pacemaker will then start the RabbitMQ
service, and its dependent resources, on one of your nodes.
===== Configuring OpenStack services for highly available RabbitMQ
===== Configure OpenStack services for highly available RabbitMQ
Your OpenStack services must now point their RabbitMQ configuration to
the highly available, virtual cluster IP address -- rather than a
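For illustration, the services then reference the RabbitMQ virtual IP used earlier in this guide instead of a single broker's own address:

----
# e.g. /etc/nova/nova.conf
rabbit_host = 192.168.42.102
----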