Better follow conventions in HA Guide

Follow our conventions for title capitalization and usage of
project/service names. Follow conventions on using active voice,
no -ing for titles.

Co-Authored-By: Diane Fleming <diane.fleming@rackspace.com>
Change-Id: I05e368f8dda436454d2daca1a6775105032a39fc
Partial-Bug: #1217503

commit 18be804cdb (parent 41f9ff0834)
@@ -1,4 +1,4 @@
 [[ha-aa-automated-deployment]]
-=== Automating Deployment with Puppet
+=== Automate deployment with Puppet
 
 (Coming soon)
@@ -1,4 +1,4 @@
 [[ha-aa-computes]]
-=== OpenStack Compute Nodes
+=== OpenStack compute nodes
 
 (Coming soon)
@@ -1,26 +1,26 @@
 [[ha-aa-controllers]]
-=== OpenStack Controller Nodes
+=== OpenStack controller nodes
 
-OpenStack Controller Nodes contains:
+OpenStack controller nodes contain:
 
 * All OpenStack API services
 * All OpenStack schedulers
 * Memcached service
 
-==== Running OpenStack API & schedulers
+==== Run OpenStack API and schedulers
 
-===== API Services
+===== API services
 
 All OpenStack projects have an API service for controlling all the resources in the Cloud.
 In Active / Active mode, the most common setup is to scale-out these services on at least two nodes
 and use load-balancing and virtual IP (with HAproxy & Keepalived in this setup).
 
 
-*Configuring API OpenStack services*
+*Configure API OpenStack services*
 
 To configure our Cloud using Highly available and scalable API services, we need to ensure that:
 
-* Using Virtual IP when configuring OpenStack Identity Endpoints.
+* You use Virtual IP when configuring OpenStack Identity endpoints.
 * All OpenStack configuration files should refer to Virtual IP.
 
 *In case of failure*
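As an aside for reviewers, the active/active API layout this hunk describes could be sketched as an HAProxy fragment like the following (hostnames, backend IPs, and the Identity port are illustrative assumptions, not part of this change):

```
# haproxy.cfg fragment (sketch; IPs and hostnames assumed for illustration)
listen keystone_public
  bind 192.168.42.103:5000          # the virtual IP held by Keepalived
  balance roundrobin
  option tcpka
  server controller1 192.168.42.101:5000 check inter 2000 rise 2 fall 5
  server controller2 192.168.42.102:5000 check inter 2000 rise 2 fall 5
```

If controller1 stops answering health checks, HAProxy sends new connections to controller2 while the VIP itself stays pinned by Keepalived.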
@@ -49,17 +49,18 @@ Please refer to the RabbitMQ section for configure these services with multiple
 
 ==== Memcached
 
-Most of OpenStack services use an application to offer persistence and store ephemeral datas (like tokens).
+Most of OpenStack services use an application to offer persistence and store ephemeral data (like tokens).
 Memcached is one of them and can scale-out easily without specific trick.
 
-To install and configure it, you can read the http://code.google.com/p/memcached/wiki/NewStart[official documentation].
+To install and configure it, read the http://code.google.com/p/memcached/wiki/NewStart[official documentation].
 
-Memory caching is managed by Oslo-incubator for so the way to use multiple memcached servers is the same for all projects.
+Memory caching is managed by oslo-incubator so the way to use multiple memcached servers is the same for all projects.
 
 Example with two hosts:
 ----
 memcached_servers = controller1:11211,controller2:11211
 ----
 
-By default, controller1 will handle the caching service but if the host goes down, controller2 will do the job.
-More informations about memcached installation are in the OpenStack Compute Manual.
+By default, controller1 handles the caching service but if the host goes down, controller2 does the job.
+For more information about memcached installation, see the OpenStack
+Cloud Administrator Guide.
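To illustrate the failover behavior this hunk describes (controller1 serves the cache; controller2 takes over if it goes down), here is a minimal, hypothetical sketch of client-side server selection — real memcached clients implement this internally:

```python
# Hypothetical sketch of client-side failover across memcached servers,
# mirroring memcached_servers = controller1:11211,controller2:11211.
# Real clients (e.g. python-memcached) handle this internally.

def pick_server(servers, is_up):
    """Return the first reachable server, preserving configured order."""
    for server in servers:
        if is_up(server):
            return server
    raise RuntimeError("no memcached server reachable")

servers = ["controller1:11211", "controller2:11211"]

# controller1 healthy: it handles the caching service
print(pick_server(servers, lambda s: True))                      # controller1:11211
# controller1 down: controller2 does the job
print(pick_server(servers, lambda s: s != "controller1:11211"))  # controller2:11211
```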
@@ -2,9 +2,8 @@
 === Database
 
 The first step is installing the database that sits at the heart of the
-cluster. When we’re talking about High Availability, however, we’re
-talking about not just one database, but several (for redundancy) and a
-means to keep them synchronized. In this case, we’re going to choose the
+cluster. When we talk about High Availability, we talk about several databases (for redundancy) and a
+means to keep them synchronized. In this case, we must choose the
 MySQL database, along with Galera for synchronous multi-master replication.
 
 The choice of database isn’t a foregone conclusion; you’re not required
@@ -81,7 +80,7 @@ ports.
 
 Now you’re ready to actually create the cluster.
 
-===== Creating the cluster
+===== Create the cluster
 
 In creating a cluster, you first start a single instance, which creates the cluster. The rest of the MySQL instances then connect to that cluster. For example, if you started on +10.0.0.10+ by executing the command:
 
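For context, bootstrapping the first node versus joining later nodes differs mainly in the cluster address; a wsrep configuration fragment might look like this (paths, IPs, and the cluster name are assumptions, and option spellings vary across Galera versions):

```
# my.cnf wsrep fragment (sketch, not from this change)
[mysqld]
wsrep_provider=/usr/lib/galera/libgalera_smm.so
wsrep_cluster_name="openstack"
# the first node bootstraps the cluster; later nodes join it here:
wsrep_cluster_address="gcomm://10.0.0.10"
wsrep_sst_method=rsync
```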
@@ -157,11 +156,11 @@ mysql> show status like 'wsrep%';
 
 
 [[ha-aa-db-galera-monitoring]]
-==== Galera Monitoring Scripts
+==== Galera monitoring scripts
 
 (Coming soon)
 
-==== Other ways to provide a Highly Available database
+==== Other ways to provide a highly available database
 
 MySQL with Galera is by no means the only way to achieve database HA. MariaDB
 (https://mariadb.org/) and Percona (http://www.percona.com/) also work with Galera.
@@ -1,5 +1,5 @@
 [[ha-aa-haproxy]]
-=== HAproxy Nodes
+=== HAproxy nodes
 
 HAProxy is a very fast and reliable solution offering high availability, load balancing, and proxying
 for TCP and HTTP-based applications. It is particularly suited for web sites crawling under very high loads
@@ -1,39 +1,39 @@
 [[ha-aa-network]]
-=== OpenStack Network Nodes
+=== OpenStack network nodes
 
-OpenStack Network Nodes contains:
+OpenStack network nodes contain:
 
-* Neutron DHCP Agent
-* Neutron L2 Agent
-* Neutron L3 Agent
-* Neutron Metadata Agent
-* Neutron LBaaS Agent
+* neutron DHCP agent
+* neutron L2 agent
+* Neutron L3 agent
+* neutron metadata agent
+* neutron lbaas agent
 
-NOTE: The Neutron L2 Agent does not need to be highly available. It has to be
+NOTE: The neutron L2 agent does not need to be highly available. It has to be
 installed on each Data Forwarding Node and controls the virtual networking
-drivers as Open-vSwitch or Linux Bridge. One L2 agent runs per node
+drivers as Open vSwitch or Linux Bridge. One L2 agent runs per node
 and controls its virtual interfaces. That's why it cannot be distributed and
 highly available.
 
 
-==== Running Neutron DHCP Agent
+==== Run neutron DHCP agent
 
 OpenStack Networking service has a scheduler that
 lets you run multiple agents across nodes. Also, the DHCP agent can be natively
 highly available. For details, see http://docs.openstack.org/trunk/config-reference/content/app_demo_multi_dhcp_agents.html[OpenStack Configuration Reference].
 
-==== Running Neutron L3 Agent
+==== Run neutron L3 agent
 
-The Neutron L3 Agent is scalable thanks to the scheduler
+The neutron L3 agent is scalable thanks to the scheduler
 that allows distribution of virtual routers across multiple nodes.
 But there is no native feature to make these routers highly available.
 At this time, the Active / Passive solution exists to run the Neutron L3
 agent in failover mode with Pacemaker. See the Active / Passive
 section of this guide.
 
-==== Running Neutron Metadata Agent
+==== Run neutron metadata agent
 
 There is no native feature to make this service highly available.
-At this time, the Active / Passive solution exists to run the Neutron
-Metadata agent in failover mode with Pacemaker. See the Active /
+At this time, the Active / Passive solution exists to run the neutron
+metadata agent in failover mode with Pacemaker. See the Active /
 Passive section of this guide.
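The native DHCP-agent redundancy mentioned in this hunk is driven by a single scheduler option; a hedged example (the option name is as documented for the Networking service of this era):

```
# /etc/neutron/neutron.conf fragment (illustrative)
[DEFAULT]
# Schedule each tenant network onto two DHCP agents so one can fail
dhcp_agents_per_network = 2
```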
@@ -1,2 +1,2 @@
 [[ha-using-active-active]]
-== HA Using Active/Active
+== HA using active/active
@@ -81,7 +81,7 @@ rabbitmqctl set_policy HA '^(?!amq\.).*' '{"ha-mode": "all"}'
 http://www.rabbitmq.com/ha.html[More information about High availability in RabbitMQ]
 
 
-==== Configure OpenStack Services to use RabbitMQ
+==== Configure OpenStack services to use RabbitMQ
 
 We have to configure the OpenStack components to use at least two RabbitMQ nodes.
 
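As a sketch of what "at least two RabbitMQ nodes" looks like in practice, services using the oslo messaging layer of this era can list both brokers (option names assumed from that period's configuration; hostnames are illustrative):

```
# nova.conf / cinder.conf fragment (sketch, not from this change)
rabbit_hosts = rabbit1:5672,rabbit2:5672
# mirror queues so the second broker holds a usable copy on failover
rabbit_ha_queues = true
```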
@@ -1,5 +1,5 @@
 [[ha-aa-storage]]
-=== Storage Backends for OpenStack Image and Cinder
+=== Storage back ends for OpenStack Image Service and Block Storage
 
 (Coming soon)
 
@@ -1,8 +1,8 @@
 [[ch-api]]
-=== API Node Cluster Stack
+=== API node cluster stack
 
 The API node exposes OpenStack API endpoints onto external network (Internet).
-It needs to talk to the Cloud Controller on the management network.
+It must talk to the cloud controller on the management network.
 
 include::ap-api-vip.txt[]
 include::ap-keystone.txt[]
@@ -1,5 +1,5 @@
 [[s-api-pacemaker]]
-==== Configure Pacemaker Group
+==== Configure Pacemaker group
 
 Finally, we need to create a service +group+ to ensure that virtual IP is linked to the API services resources:
 
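For illustration, such a +group+ stanza might look like this in +crm configure+ (the member resource names are assumptions drawn from other sections of the guide, not from this change):

```
# crm configure fragment (sketch)
group g_services_api p_ip_api p_keystone p_glance-api p_cinder-api
```

Pacemaker starts group members in order and keeps them co-located, so the API services always run on the node that currently holds the virtual IP.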
@@ -1,7 +1,7 @@
 [[s-api-vip]]
 ==== Configure the VIP
 
-First of all, we need to select and assign a virtual IP address (VIP) that can freely float between cluster nodes.
+First, you must select and assign a virtual IP address (VIP) that can freely float between cluster nodes.
 
 This configuration creates +p_ip_api+, a virtual IP address for use by the API node (192.168.42.103) :
 
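The +p_ip_api+ resource referenced here is typically an +ocf:heartbeat:IPaddr2+ primitive; a minimal sketch, assuming a /24 netmask and a 30-second monitor interval (both assumptions):

```
# crm configure fragment (sketch)
primitive p_ip_api ocf:heartbeat:IPaddr2 \
    params ip="192.168.42.103" cidr_netmask="24" \
    op monitor interval="30s"
```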
@@ -1,23 +1,23 @@
 [[s-ceilometer-agent-central]]
-==== Highly available Ceilometer Central Agent
+==== Highly available Telemetry central agent
 
-Ceilometer is the metering service in OpenStack.
-Central Agent polls for resource utilization statistics for resources not tied
-to instances or compute nodes.
+Telemetry (ceilometer) is the metering and monitoring service in
+OpenStack. The Central agent polls for resource utilization
+statistics for resources not tied to instances or compute nodes.
 
 NOTE: Due to limitations of a polling model, a single instance of this agent
 can be polling a given list of meters. In this setup, we install this service
 on the API nodes also in the active / passive mode.
 
 
-Making the Ceilometer Central Agent service highly available in active / passive mode involves
+Making the Telemetry central agent service highly available in active / passive mode involves
 managing its daemon with the Pacemaker cluster manager.
 
 NOTE: You will find at http://docs.openstack.org/developer/ceilometer/install/manual.html#installing-the-central-agent[this page]
-the process to install the Ceilometer Central Agent.
+the process to install the Telemetry central agent.
 
 
-===== Adding the Ceilometer Central Agent resource to Pacemaker
+===== Add the Telemetry central agent resource to Pacemaker
 
 First of all, you need to download the resource agent to your system:
 
@@ -28,7 +28,7 @@ chmod a+rx *
 ----
 
 You may then proceed with adding the Pacemaker configuration for
-the Ceilometer Central Agent resource. Connect to the Pacemaker cluster with +crm
+the Telemetry central agent resource. Connect to the Pacemaker cluster with +crm
 configure+, and add the following cluster resources:
 
 ----
@@ -47,7 +47,7 @@ Once completed, commit your configuration changes by entering +commit+
 from the +crm configure+ menu. Pacemaker will then start the Ceilometer Central Agent
 service, and its dependent resources, on one of your nodes.
 
-===== Configuring Ceilometer Central Agent service
+===== Configure Telemetry central agent service
 
 Edit +/etc/ceilometer/ceilometer.conf+ :
 ----
@@ -59,6 +59,6 @@ notifier_strategy = rabbit
 rabbit_host = 192.168.42.102
 
 [database]
-# We have to use MySQL connection to store datas :
+# We have to use MySQL connection to store data :
 sql_connection=mysql://ceilometer:password@192.168.42.101/ceilometer
 ----
@@ -1,17 +1,18 @@
 [[s-cinder-api]]
-==== Highly available Cinder API
+==== Highly available Block Storage API
 
-Cinder is the block storage service in OpenStack.
-Making the Cinder API service highly available in active / passive mode involves
+Making the Block Storage (cinder) API service highly available in active / passive mode involves
 
-* configuring Cinder to listen on the VIP address,
-* managing Cinder API daemon with the Pacemaker cluster manager,
-* configuring OpenStack services to use this IP address.
+* Configure Block Storage to listen on the VIP address,
+* managing Block Storage API daemon with the Pacemaker cluster manager,
+* Configure OpenStack services to use this IP address.
 
-NOTE: Here is the http://docs.openstack.org/trunk/install-guide/install/apt/content/cinder-install.html[documentation] for installing Cinder service.
+NOTE: Here is the
+http://docs.openstack.org/trunk/install-guide/install/apt/content/ch_cinder.html[documentation]
+for installing Block Storage service.
 
 
-===== Adding Cinder API resource to Pacemaker
+===== Add Block Storage API resource to Pacemaker
 
 First of all, you need to download the resource agent to your system:
 
@@ -21,8 +22,8 @@ wget https://raw.github.com/madkiss/openstack-resource-agents/master/ocf/cinder-
 chmod a+rx *
 ----
 
-You may now proceed with adding the Pacemaker configuration for
-Cinder API resource. Connect to the Pacemaker cluster with +crm
+You can now add the Pacemaker configuration for
+Block Storage API resource. Connect to the Pacemaker cluster with +crm
 configure+, and add the following cluster resources:
 
 ----
@@ -31,7 +32,7 @@ include::includes/pacemaker-cinder_api.crm[]
 
 This configuration creates
 
-* +p_cinder-api+, a resource for manage Cinder API service
+* +p_cinder-api+, a resource for manage Block Storage API service
 
 +crm configure+ supports batch input, so you may copy and paste the
 above into your live pacemaker configuration, and then make changes as
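For readers without the included +pacemaker-cinder_api.crm+ file, a +p_cinder-api+ primitive built on the downloaded resource agent might look roughly like this (all parameter names and values here are illustrative assumptions, not taken from this change):

```
# crm configure fragment (sketch; parameters are assumptions)
primitive p_cinder-api ocf:openstack:cinder-api \
    params config="/etc/cinder/cinder.conf" \
    op monitor interval="30s" timeout="30s"
```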
@@ -40,17 +41,17 @@ required. For example, you may enter +edit p_ip_cinder-api+ from the
 virtual IP address.
 
 Once completed, commit your configuration changes by entering +commit+
-from the +crm configure+ menu. Pacemaker will then start the Cinder API
+from the +crm configure+ menu. Pacemaker will then start the Block Storage API
 service, and its dependent resources, on one of your nodes.
 
-===== Configuring Cinder API service
+===== Configure Block Storage API service
 
 Edit +/etc/cinder/cinder.conf+:
 ----
 # We have to use MySQL connection to store data:
 sql_connection=mysql://cinder:password@192.168.42.101/cinder
 
-# We bind Cinder API to the VIP :
+# We bind Block Storage API to the VIP:
 osapi_volume_listen = 192.168.42.103
 
 # We send notifications to High Available RabbitMQ:
@@ -59,13 +60,13 @@ rabbit_host = 192.168.42.102
 ----
 
 
-===== Configuring OpenStack Services to use High Available Cinder API
+===== Configure OpenStack services to use highly available Block Storage API
 
-Your OpenStack services must now point their Cinder API configuration to
+Your OpenStack services must now point their Block Storage API configuration to
 the highly available, virtual cluster IP address -- rather than a
-Cinder API server's physical IP address as you normally would.
+Block Storage API server's physical IP address as you normally would.
 
-You need to create the Cinder API Endpoint with this IP.
+You must create the Block Storage API endpoint with this IP.
 
 NOTE: If you are using both private and public IP, you should create two Virtual IPs and define your endpoint like this:
 ----
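By analogy with the Identity endpoint example elsewhere in this change, a two-VIP Block Storage endpoint could be registered like this (the variable names mirror the Identity example; port 8776 is the conventional cinder API port, and the whole command is a sketch):

```
# sketch, not from this change
keystone endpoint-create --region $KEYSTONE_REGION --service-id $service-id \
  --publicurl 'http://PUBLIC_VIP:8776/v1/%(tenant_id)s' \
  --adminurl 'http://192.168.42.103:8776/v1/%(tenant_id)s' \
  --internalurl 'http://192.168.42.103:8776/v1/%(tenant_id)s'
```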
@@ -1,7 +1,7 @@
 [[ch-controller]]
-=== Cloud Controller Cluster Stack
+=== Cloud controller cluster stack
 
-The Cloud Controller sits on the management network and needs to talk to all other services.
+The cloud controller runs on the management network and must talk to all other services.
 
 include::ap-mysql.txt[]
 include::ap-rabbitmq.txt[]
@@ -2,16 +2,16 @@
 ==== Highly available OpenStack Image API
 
 OpenStack Image Service offers a service for discovering, registering, and retrieving virtual machine images.
-Making the OpenStack Image API service highly available in active / passive mode involves
+To make the OpenStack Image API service highly available in active / passive mode, you must:
 
-* configuring OpenStack Image to listen on the VIP address,
-* managing OpenStack Image API daemon with the Pacemaker cluster manager,
-* configuring OpenStack services to use this IP address.
+* Configure OpenStack Image to listen on the VIP address.
+* Manage OpenStack Image API daemon with the Pacemaker cluster manager.
+* Configure OpenStack services to use this IP address.
 
 NOTE: Here is the http://docs.openstack.org/trunk/install-guide/install/apt/content/ch_installing-openstack-image.html[documentation] for installing OpenStack Image API service.
 
 
-===== Adding OpenStack Image API resource to Pacemaker
+===== Add OpenStack Image API resource to Pacemaker
 
 First of all, you need to download the resource agent to your system:
 
@@ -21,7 +21,7 @@ wget https://raw.github.com/madkiss/openstack-resource-agents/master/ocf/glance-
 chmod a+rx *
 ----
 
-You may now proceed with adding the Pacemaker configuration for
+You can now add the Pacemaker configuration for
 OpenStack Image API resource. Connect to the Pacemaker cluster with +crm
 configure+, and add the following cluster resources:
 
@@ -43,7 +43,7 @@ Once completed, commit your configuration changes by entering +commit+
 from the +crm configure+ menu. Pacemaker will then start the OpenStack Image API
 service, and its dependent resources, on one of your nodes.
 
-===== Configuring OpenStack Image API service
+===== Configure OpenStack Image Service API
 
 Edit +/etc/glance/glance-api.conf+:
 ----
@@ -62,7 +62,7 @@ rabbit_host = 192.168.42.102
 ----
 
 
-===== Configuring OpenStack Services to use High Available OpenStack Image API
+===== Configure OpenStack services to use high available OpenStack Image API
 
 Your OpenStack services must now point their OpenStack Image API configuration to
 the highly available, virtual cluster IP address -- rather than an
@@ -76,7 +76,7 @@ the following line in your +nova.conf+ file:
 glance_api_servers = 192.168.42.103
 ----
 
-You need also to create the OpenStack Image API Endpoint with this IP.
+You must also create the OpenStack Image API endpoint with this IP.
 
 NOTE: If you are using both private and public IP addresses, you should create two Virtual IP addresses and define your endpoint like this:
 ----
@@ -4,14 +4,14 @@
 OpenStack Identity is the Identity Service in OpenStack and used by many services.
 Making the OpenStack Identity service highly available in active / passive mode involves
 
-* configuring OpenStack Identity to listen on the VIP address,
+* Configure OpenStack Identity to listen on the VIP address,
 * managing OpenStack Identity daemon with the Pacemaker cluster manager,
-* configuring OpenStack services to use this IP address.
+* Configure OpenStack services to use this IP address.
 
 NOTE: Here is the http://docs.openstack.org/trunk/install-guide/install/apt/content/ch_installing-openstack-identity-service.html[documentation] for installing OpenStack Identity service.
 
 
-===== Adding OpenStack Identity resource to Pacemaker
+===== Add OpenStack Identity resource to Pacemaker
 
 First of all, you need to download the resource agent to your system:
 
@@ -23,7 +23,7 @@ wget https://raw.github.com/madkiss/openstack-resource-agents/master/ocf/keyston
 chmod a+rx *
 ----
 
-You may now proceed with adding the Pacemaker configuration for
+You can now add the Pacemaker configuration for
 OpenStack Identity resource. Connect to the Pacemaker cluster with +crm
 configure+, and add the following cluster resources:
 
@@ -43,7 +43,7 @@ Once completed, commit your configuration changes by entering +commit+
 from the +crm configure+ menu. Pacemaker will then start the OpenStack Identity
 service, and its dependent resources, on one of your nodes.
 
-===== Configuring OpenStack Identity service
+===== Configure OpenStack Identity service
 
 You need to edit your OpenStack Identity configuration file (+keystone.conf+) and change the bind parameters:
 
@@ -69,7 +69,7 @@ driver = keystone.identity.backends.sql.Identity
 ----
 
 
-===== Configuring OpenStack Services to use the Highly Available OpenStack Identity
+===== Configure OpenStack services to use the highly available OpenStack Identity
 
 Your OpenStack services must now point their OpenStack Identity configuration to
 the highly available, virtual cluster IP address -- rather than a
@@ -91,7 +91,7 @@ NOTE : If you are using both private and public IP addresses, you should create
 keystone endpoint-create --region $KEYSTONE_REGION --service-id $service-id --publicurl 'http://PUBLIC_VIP:5000/v2.0' --adminurl 'http://192.168.42.103:35357/v2.0' --internalurl 'http://192.168.42.103:5000/v2.0'
 ----
 
-If you are using the Horizon Dashboard, you should edit the +local_settings.py+ file:
+If you are using the horizon dashboard, you should edit the +local_settings.py+ file:
 ----
 OPENSTACK_HOST = 192.168.42.103
 ----
@@ -4,23 +4,23 @@
 MySQL is the default database server used by many OpenStack
 services. Making the MySQL service highly available involves
 
-* configuring a DRBD device for use by MySQL,
-* configuring MySQL to use a data directory residing on that DRBD
+* Configure a DRBD device for use by MySQL,
+* Configure MySQL to use a data directory residing on that DRBD
 device,
 * selecting and assigning a virtual IP address (VIP) that can freely
 float between cluster nodes,
-* configuring MySQL to listen on that IP address,
+* Configure MySQL to listen on that IP address,
 * managing all resources, including the MySQL daemon itself, with
 the Pacemaker cluster manager.
 
 NOTE: http://codership.com/products/mysql_galera[MySQL/Galera] is an
-alternative method of configuring MySQL for high availability. It is
+alternative method of Configure MySQL for high availability. It is
 likely to become the preferred method of achieving MySQL high
 availability once it has sufficiently matured. At the time of writing,
 however, the Pacemaker/DRBD based approach remains the recommended one
 for OpenStack environments.
 
-===== Configuring DRBD
+===== Configure DRBD
 
 The Pacemaker based MySQL server requires a DRBD resource from
 which it mounts the +/var/lib/mysql+ directory. In this example,
@ -93,7 +93,7 @@ background:
|
|||||||
drbdadm secondary mysql
|
drbdadm secondary mysql
|
||||||
----
|
----
|
||||||
|
|
||||||
===== Preparing MySQL for Pacemaker high availability
|
===== Prepare MySQL for Pacemaker high availability
|
||||||
|
|
||||||
In order for Pacemaker monitoring to function properly, you must
|
In order for Pacemaker monitoring to function properly, you must
|
||||||
ensure that MySQL's database files reside on the DRBD device. If you
|
ensure that MySQL's database files reside on the DRBD device. If you
|
||||||
@ -122,9 +122,9 @@ node1:# umount /mnt
|
|||||||
Regardless of the approach, the steps outlined here must be completed
|
Regardless of the approach, the steps outlined here must be completed
|
||||||
on only one cluster node.
|
on only one cluster node.
|
||||||
|
|
||||||
===== Adding MySQL resources to Pacemaker
|
===== Add MySQL resources to Pacemaker
|
||||||
|
|
||||||
You may now proceed with adding the Pacemaker configuration for
|
You can now add the Pacemaker configuration for
|
||||||
MySQL resources. Connect to the Pacemaker cluster with +crm
|
MySQL resources. Connect to the Pacemaker cluster with +crm
|
||||||
configure+, and add the following cluster resources:
|
configure+, and add the following cluster resources:
|
||||||
|
|
||||||
@ -154,7 +154,7 @@ Once completed, commit your configuration changes by entering +commit+
|
|||||||
from the +crm configure+ menu. Pacemaker will then start the MySQL
|
from the +crm configure+ menu. Pacemaker will then start the MySQL
|
||||||
service, and its dependent resources, on one of your nodes.
|
service, and its dependent resources, on one of your nodes.
|
||||||
|
|
||||||
===== Configuring OpenStack services for highly available MySQL
|
===== Configure OpenStack services for highly available MySQL
|
||||||
|
|
||||||
Your OpenStack services must now point their MySQL configuration to
|
Your OpenStack services must now point their MySQL configuration to
|
||||||
the highly available, virtual cluster IP address -- rather than a
|
the highly available, virtual cluster IP address -- rather than a
|
||||||
|
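The MySQL repointing described above follows the same one-line pattern. A sketch (the file and the +sql_connection+ option are illustrative; 192.168.42.101 matches the example database VIP shown in the +neutron.conf+ excerpt later in this change):

```shell
# Sketch: point a service's MySQL connection string at the cluster VIP.
# /tmp/example-service.conf is a stand-in config file for the example.
cat > /tmp/example-service.conf <<'EOF'
[DEFAULT]
sql_connection = mysql://nova:password@node1/nova
EOF

# Swap the physical node's address for the highly available virtual IP.
sed -i 's|@node1/|@192.168.42.101/|' /tmp/example-service.conf

grep '^sql_connection' /tmp/example-service.conf
```

Because the VIP floats with the Pacemaker-managed MySQL resource, services reconnect to whichever node currently holds the database.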
@@ -1,21 +1,21 @@
 [[ch-network]]
-=== Network Controller Cluster Stack
+=== Network controller cluster stack

-The Network controller sits on the management and data network, and needs to be connected to the Internet if a VM needs access to it.
+The network controller sits on the management and data network, and needs to be connected to the Internet if a VM needs access to it.

-NOTE: Both nodes should have the same hostname since the Neutron scheduler will be
+NOTE: Both nodes should have the same hostname since the Networking scheduler will be
 aware of only one node, for example a virtual router attached to a single L3 node.

-==== Highly available Neutron L3 Agent
+==== Highly available neutron L3 agent

-The Neutron L3 agent provides L3/NAT forwarding to ensure external network access
-for VMs on tenant networks. High Availability for the L3 agent is achieved by
+The neutron L3 agent provides L3/NAT forwarding to ensure external network access
+for VMs on tenant networks. High availability for the L3 agent is achieved by
 adopting Pacemaker.

-NOTE: Here is the http://docs.openstack.org/trunk/config-reference/content/section_adv_cfg_l3_agent.html[documentation] for installing Neutron L3 Agent.
+NOTE: Here is the http://docs.openstack.org/trunk/config-reference/content/section_adv_cfg_l3_agent.html[documentation] for installing the neutron L3 agent.

-===== Adding Neutron L3 Agent resource to Pacemaker
+===== Add neutron L3 agent resource to Pacemaker
 First of all, you need to download the resource agent to your system:

 ----
@@ -25,7 +25,7 @@ chmod a+rx neutron-l3-agent
 ----

 You may now proceed with adding the Pacemaker configuration for
-Neutron L3 Agent resource. Connect to the Pacemaker cluster with +crm
+the neutron L3 agent resource. Connect to the Pacemaker cluster with +crm
 configure+, and add the following cluster resources:

 ----
@@ -41,23 +41,23 @@ above into your live pacemaker configuration, and then make changes as
 required.

 Once completed, commit your configuration changes by entering +commit+
-from the +crm configure+ menu. Pacemaker will then start the Neutron L3 Agent
+from the +crm configure+ menu. Pacemaker will then start the neutron L3 agent
 service, and its dependent resources, on one of your nodes.

 NOTE: This method does not ensure zero downtime, since it has to recreate all
 the namespaces and virtual routers on the node.

-==== Highly available Neutron DHCP Agent
+==== Highly available neutron DHCP agent

 The neutron DHCP agent distributes IP addresses to the VMs with dnsmasq (by
-default). High Availability for the DHCP agent is achieved by adopting
+default). High availability for the DHCP agent is achieved by adopting
 Pacemaker.

-NOTE: Here is the http://docs.openstack.org/trunk/config-reference/content/section_adv_cfg_dhcp_agent.html[documentation] for installing Neutron DHCP Agent.
+NOTE: Here is the http://docs.openstack.org/trunk/config-reference/content/section_adv_cfg_dhcp_agent.html[documentation] for installing the neutron DHCP agent.

-===== Adding Neutron DHCP Agent resource to Pacemaker
+===== Add neutron DHCP agent resource to Pacemaker
 First of all, you need to download the resource agent to your system:

 ----
@@ -67,7 +67,7 @@ chmod a+rx neutron-dhcp-agent
 ----

 You may now proceed with adding the Pacemaker configuration for
-Neutron DHCP Agent resource. Connect to the Pacemaker cluster with +crm
+the neutron DHCP agent resource. Connect to the Pacemaker cluster with +crm
 configure+, and add the following cluster resources:

 ----
@@ -84,20 +84,20 @@ above into your live pacemaker configuration, and then make changes as
 required.

 Once completed, commit your configuration changes by entering +commit+
-from the +crm configure+ menu. Pacemaker will then start the Neutron DHCP
-Agent service, and its dependent resources, on one of your nodes.
+from the +crm configure+ menu. Pacemaker will then start the neutron DHCP
+agent service, and its dependent resources, on one of your nodes.

-==== Highly available Neutron Metadata Agent
+==== Highly available neutron metadata agent

-Neutron Metadata agent allows Nova API Metadata to be reachable by VMs on tenant
-networks. High Availability for the Metadata agent is achieved by adopting
+The neutron metadata agent allows Compute API metadata to be reachable by VMs on tenant
+networks. High availability for the metadata agent is achieved by adopting
 Pacemaker.

-NOTE: Here is the http://docs.openstack.org/trunk/config-reference/content/networking-options-metadata.html[documentation] for installing Neutron Metadata Agent.
+NOTE: Here is the http://docs.openstack.org/trunk/config-reference/content/networking-options-metadata.html[documentation] for installing the neutron metadata agent.

-===== Adding Neutron Metadata Agent resource to Pacemaker
+===== Add neutron metadata agent resource to Pacemaker
 First of all, you need to download the resource agent to your system:

 ----
@@ -107,7 +107,7 @@ chmod a+rx neutron-metadata-agent
 ----

 You may now proceed with adding the Pacemaker configuration for
-Neutron Metadata Agent resource. Connect to the Pacemaker cluster with +crm
+the neutron metadata agent resource. Connect to the Pacemaker cluster with +crm
 configure+, and add the following cluster resources:

 ----
@@ -124,12 +124,12 @@ above into your live pacemaker configuration, and then make changes as
 required.

 Once completed, commit your configuration changes by entering +commit+
-from the +crm configure+ menu. Pacemaker will then start the Neutron Metadata
-Agent service, and its dependent resources, on one of your nodes.
+from the +crm configure+ menu. Pacemaker will then start the neutron metadata
+agent service, and its dependent resources, on one of your nodes.

 ==== Manage network resources
-You may now proceed with adding the Pacemaker configuration for
+You can now add the Pacemaker configuration for
 managing all network resources together with a group.
 Connect to the Pacemaker cluster with +crm configure+, and add the following
 cluster resources:
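The group mentioned above could look like the following +crm configure+ fragment. This is a sketch: the primitive names (+p-neutron-l3-agent+, +p-neutron-dhcp-agent+, +p-neutron-metadata-agent+) are assumptions for illustration, and must match the primitives you actually defined in the earlier sections:

----
group g-network p-neutron-l3-agent p-neutron-dhcp-agent p-neutron-metadata-agent
----

A group both collocates its members and orders their startup, so all three agents move to the same node together on failover.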
@@ -1,17 +1,17 @@
 [[s-neutron-server]]
-==== Highly available OpenStack Networking Server
+==== Highly available OpenStack Networking server

 OpenStack Networking is the network connectivity service in OpenStack.
-Making the OpenStack Networking Server service highly available in active / passive mode involves
+Making the OpenStack Networking server highly available in active/passive mode involves:

 * configuring OpenStack Networking to listen on the VIP address,
 * managing the OpenStack Networking API server daemon with the Pacemaker cluster manager,
 * configuring OpenStack services to use this IP address.

 NOTE: Here is the http://docs.openstack.org/trunk/install-guide/install/apt/content/ch_installing-openstack-networking.html[documentation] for installing the OpenStack Networking service.

-===== Adding OpenStack Networking Server resource to Pacemaker
+===== Add OpenStack Networking server resource to Pacemaker

 First of all, you need to download the resource agent to your system:
 ----
@@ -20,7 +20,7 @@ wget https://raw.github.com/madkiss/openstack-resource-agents/master/ocf/neutron
 chmod a+rx *
 ----

-You may now proceed with adding the Pacemaker configuration for
+You can now add the Pacemaker configuration for
 the OpenStack Networking server resource. Connect to the Pacemaker cluster with +crm
 configure+, and add the following cluster resources:
 ----
@@ -39,7 +39,7 @@ Once completed, commit your configuration changes by entering +commit+
 from the +crm configure+ menu. Pacemaker will then start the OpenStack Networking API
 service, and its dependent resources, on one of your nodes.

-===== Configuring OpenStack Networking Server
+===== Configure OpenStack Networking server

 Edit +/etc/neutron/neutron.conf+:
 ----
@@ -59,18 +59,18 @@ connection = mysql://neutron:password@192.168.42.101/neutron
 ----

-===== Configuring OpenStack Services to use Highly available OpenStack Networking Server
+===== Configure OpenStack services to use the highly available OpenStack Networking server

 Your OpenStack services must now point their OpenStack Networking configuration to
 the highly available, virtual cluster IP address -- rather than an
 OpenStack Networking server's physical IP address as you normally would.

-For example, you should configure OpenStack Compute for using Highly Available OpenStack Networking Server in editing +nova.conf+ file:
+For example, configure OpenStack Compute to use the highly available OpenStack Networking server by editing the +nova.conf+ file:
 ----
 neutron_url = http://192.168.42.103:9696
 ----

-You need to create the OpenStack Networking Server Endpoint with this IP.
+You need to create the OpenStack Networking server endpoint with this IP address.

 NOTE: If you are using both private and public IP addresses, you should create two virtual IP addresses and define your endpoint like this:
 ----
@@ -1,2 +1,2 @@
 [[ha-using-active-passive]]
-== HA Using Active/Passive
+== HA using active/passive
@@ -1,5 +1,5 @@
 [[ch-pacemaker]]
-=== The Pacemaker Cluster Stack
+=== The Pacemaker cluster stack

 OpenStack infrastructure high availability relies on the
 http://www.clusterlabs.org[Pacemaker] cluster stack, the
@@ -21,11 +21,11 @@ databases or virtual IP addresses), existing third-party RAs (such as
 for RabbitMQ), and native OpenStack RAs (such as those managing the
 OpenStack Identity and Image services).

-==== Installing Packages
+==== Install packages

 On any host that is meant to be part of a Pacemaker cluster, you must
 first establish cluster communications through the Corosync messaging
 layer. This involves installing the following packages (and their
 dependencies, which your package manager will normally install
 automatically):

@@ -37,9 +37,9 @@
 agents from +cluster-glue+)
 * +resource-agents+

-==== Setting up Corosync
+==== Set up Corosync

-Besides installing the +corosync+ package, you will also have to
+Besides installing the +corosync+ package, you must also
 create a configuration file, stored in
 +/etc/corosync/corosync.conf+. Most distributions ship an example
 configuration file (+corosync.conf.example+) as part of the
@@ -141,7 +141,7 @@ runtime.totem.pg.mrp.srp.983895584.status=joined
 You should see a +status=joined+ entry for each of your constituent
 cluster nodes.

-==== Starting Pacemaker
+==== Start Pacemaker

 Once the Corosync services have been started, and you have established
 that the cluster is communicating properly, it is safe to start
@@ -170,7 +170,7 @@ Version: 1.1.6-9971ebba4494012a93c03b40a2c58ec0eb60f50c
 Online: [ node2 node1 ]
 ----

-==== Setting basic cluster properties
+==== Set basic cluster properties

 Once your Pacemaker cluster is set up, it is recommended to set a few
 basic cluster properties. To do so, start the +crm+ shell and change
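Properties of the kind meant here can be set from the +crm configure+ prompt. A sketch only; the property names are standard Pacemaker cluster options, but the values shown are illustrative choices, not the guide's prescribed settings:

----
property no-quorum-policy="ignore" \
  stonith-enabled="false" \
  pe-warn-series-max="1000" \
  pe-input-series-max="1000" \
  pe-error-series-max="1000"
----

For a two-node cluster, +no-quorum-policy="ignore"+ keeps resources running when one node fails (a two-node cluster can never have quorum with a node down); disabling STONITH is only acceptable if you have no fencing hardware and accept the associated risk.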
@@ -23,7 +23,7 @@ approach remains the recommended one for OpenStack environments,
 although this may change in the near future as RabbitMQ active-active
 mirrored queues mature.

-===== Configuring DRBD
+===== Configure DRBD

 The Pacemaker based RabbitMQ server requires a DRBD resource from
 which it mounts the +/var/lib/rabbitmq+ directory. In this example,
@@ -69,7 +69,7 @@ the primary and secondary roles in DRBD. Must be completed _on one
 node only,_ namely the one where you are about to continue with
 creating your filesystem.

-===== Creating a file system
+===== Create a file system

 Once the DRBD resource is running and in the primary role (and
 potentially still in the process of running the initial device
@@ -96,7 +96,7 @@ background:
 drbdadm secondary rabbitmq
 ----

-===== Preparing RabbitMQ for Pacemaker high availability
+===== Prepare RabbitMQ for Pacemaker high availability

 In order for Pacemaker monitoring to function properly, you must
 ensure that RabbitMQ's +.erlang.cookie+ files are identical on all
@@ -112,7 +112,7 @@ node1:# cp -a /var/lib/rabbitmq/.erlang.cookie /mnt
 node1:# umount /mnt
 ----

-===== Adding RabbitMQ resources to Pacemaker
+===== Add RabbitMQ resources to Pacemaker

-You may now proceed with adding the Pacemaker configuration for
+You can now add the Pacemaker configuration for
 RabbitMQ resources. Connect to the Pacemaker cluster with +crm
@@ -144,7 +144,7 @@ Once completed, commit your configuration changes by entering +commit+
 from the +crm configure+ menu. Pacemaker will then start the RabbitMQ
 service, and its dependent resources, on one of your nodes.

-===== Configuring OpenStack services for highly available RabbitMQ
+===== Configure OpenStack services for highly available RabbitMQ

 Your OpenStack services must now point their RabbitMQ configuration to
 the highly available, virtual cluster IP address -- rather than a
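The RabbitMQ repointing follows the same pattern as the MySQL and Identity cases. A sketch (the file, the +rabbit_host+ option, and the VIP value are illustrative placeholders; substitute your cluster's actual virtual IP):

```shell
# Sketch: point a service's RabbitMQ settings at the cluster VIP.
# RABBIT_VIP is a placeholder value for the example, not a prescribed address.
RABBIT_VIP=192.168.42.100
cat > /tmp/example-rabbit.conf <<'EOF'
[DEFAULT]
rabbit_host = node1
EOF

# Replace the per-node hostname with the highly available virtual IP.
sed -i "s/^rabbit_host = .*/rabbit_host = ${RABBIT_VIP}/" /tmp/example-rabbit.conf

grep '^rabbit_host' /tmp/example-rabbit.conf
```

As with MySQL, the VIP follows the Pacemaker-managed RabbitMQ resource, so services always reach the node currently running the broker.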