Removes the doc/high-availability-guide directory
This document has moved to a new repo, openstack/ha-guide
(https://git.openstack.org/cgit/openstack/ha-guide/).

Also, adjust files referencing it.

Change-Id: Ia6c0747fd00d0447ab26c61a2e24f5546bce2f0a
Parent: fda913df7e
Commit: 1a201a9f8c
@@ -36,13 +36,6 @@ source_file = doc/glossary/locale/glossary.pot
source_lang = en
|
||||
type = PO
|
||||
|
||||
[openstack-manuals-i18n.high-availability-guide]
|
||||
file_filter = doc/high-availability-guide/locale/<lang>.po
|
||||
minimum_perc = 75
|
||||
source_file = doc/high-availability-guide/locale/high-availability-guide.pot
|
||||
source_lang = en
|
||||
type = PO
|
||||
|
||||
[openstack-manuals-i18n.image-guide]
|
||||
file_filter = doc/image-guide/locale/<lang>.po
|
||||
minimum_perc = 75
|
||||
|
@@ -2,13 +2,13 @@
|
||||
# directories to be set up
|
||||
declare -A DIRECTORIES=(
|
||||
["ja"]="common glossary high-availability-guide image-guide install-guide user-guide user-guide-admin"
|
||||
["ja"]="common glossary image-guide install-guide user-guide user-guide-admin"
|
||||
["fr"]="common glossary user-guide"
|
||||
)
|
||||
|
||||
# books to be built
|
||||
declare -A BOOKS=(
|
||||
["ja"]="high-availability-guide image-guide install-guide user-guide user-guide-admin"
|
||||
["ja"]="image-guide install-guide user-guide user-guide-admin"
|
||||
["fr"]="user-guide"
|
||||
)
|
||||
|
||||
|
@@ -1,12 +0,0 @@
<section xmlns="http://docbook.org/ns/docbook"
|
||||
xmlns:xi="http://www.w3.org/2001/XInclude"
|
||||
xmlns:xlink="http://www.w3.org/1999/xlink"
|
||||
version="5.0"
|
||||
xml:id="s-api-pacemaker">
|
||||
|
||||
<title>Configure Pacemaker group</title>
|
||||
|
||||
<para>Finally, we need to create a service <literal>group</literal> to ensure that the virtual IP is linked to the API service resources:</para>
<screen>group g_services_api p_api-ip p_keystone p_glance-api p_cinder-api \
|
||||
p_neutron-server p_glance-registry p_ceilometer-agent-central</screen>
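<para>As a quick sanity check (not part of the original procedure; it assumes the <literal>crm</literal> shell and <literal>crm_mon</literal> are available), you can confirm that the grouped resources start together on one node:</para>
<screen><prompt>#</prompt> <userinput>crm_mon -1</userinput>
<prompt>#</prompt> <userinput>crm resource status g_services_api</userinput></screen>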
|
||||
</section>
|
@@ -1,14 +0,0 @@
<section xmlns="http://docbook.org/ns/docbook"
|
||||
xmlns:xi="http://www.w3.org/2001/XInclude"
|
||||
xmlns:xlink="http://www.w3.org/1999/xlink"
|
||||
version="5.0"
|
||||
xml:id="s-api-vip">
|
||||
|
||||
<title>Configure the VIP</title>
|
||||
|
||||
<para>First, you must select and assign a virtual IP address (VIP) that can freely float between cluster nodes.</para>
|
||||
<para>This configuration creates <literal>p_api-ip</literal>, a virtual IP address for use by the API node (192.168.42.103):</para>
<screen>primitive p_api-ip ocf:heartbeat:IPaddr2 \
|
||||
params ip="192.168.42.103" cidr_netmask="24" \
|
||||
op monitor interval="30s"</screen>
|
||||
</section>
|
@@ -1,67 +0,0 @@
<section xmlns="http://docbook.org/ns/docbook"
|
||||
xmlns:xi="http://www.w3.org/2001/XInclude"
|
||||
xmlns:xlink="http://www.w3.org/1999/xlink"
|
||||
version="5.0"
|
||||
xml:id="s-ceilometer-agent-central">
|
||||
|
||||
<title>Highly available Telemetry central agent</title>
|
||||
|
||||
<para>Telemetry (ceilometer) is the metering and monitoring service in
|
||||
OpenStack. The Central agent polls for resource utilization
|
||||
statistics for resources not tied to instances or compute nodes.</para>
|
||||
<note>
|
||||
<para>Due to limitations of the polling model, only a single instance of this agent
can poll a given list of meters at a time. In this setup, we also install this service
on the API nodes, in active/passive mode.</para>
|
||||
</note>
|
||||
<para>Making the Telemetry central agent service highly available in active / passive mode involves
|
||||
managing its daemon with the Pacemaker cluster manager.</para>
|
||||
<note>
|
||||
<para>See <link xlink:href="http://docs.openstack.org/developer/ceilometer/install/manual.html#installing-the-central-agent">this page</link>
for the process to install the Telemetry central agent.</para>
|
||||
</note>
|
||||
<section xml:id="_add_the_telemetry_central_agent_resource_to_pacemaker">
|
||||
|
||||
<title>Add the Telemetry central agent resource to Pacemaker</title>
|
||||
|
||||
<para>First of all, you need to download the resource agent to your system:</para>
|
||||
<screen><prompt>#</prompt> <userinput>cd /usr/lib/ocf/resource.d/openstack</userinput>
|
||||
<prompt>#</prompt> <userinput>wget https://raw.github.com/madkiss/openstack-resource-agents/master/ocf/ceilometer-agent-central</userinput>
|
||||
<prompt>#</prompt> <userinput>chmod a+rx *</userinput></screen>
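<para>Optionally, verify that Pacemaker can see the downloaded agent before configuring it (a sketch assuming the <literal>crm</literal> shell is installed):</para>
<screen><prompt>#</prompt> <userinput>crm ra info ocf:openstack:ceilometer-agent-central</userinput></screen>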
|
||||
<para>You may then proceed with adding the Pacemaker configuration for
|
||||
the Telemetry central agent resource. Connect to the Pacemaker cluster with <literal>crm
|
||||
configure</literal>, and add the following cluster resources:</para>
|
||||
<programlisting>primitive p_ceilometer-agent-central ocf:openstack:ceilometer-agent-central \
|
||||
params config="/etc/ceilometer/ceilometer.conf" \
|
||||
op monitor interval="30s" timeout="30s"</programlisting>
|
||||
<para>This configuration creates</para>
|
||||
<itemizedlist>
|
||||
<listitem>
|
||||
<para><literal>p_ceilometer-agent-central</literal>, a resource for managing the Ceilometer central agent service
</para>
|
||||
</listitem>
|
||||
</itemizedlist>
|
||||
<para><literal>crm configure</literal> supports batch input, so you may copy and paste the
above into your live Pacemaker configuration, and then make changes as
required.</para>
|
||||
<para>Once completed, commit your configuration changes by entering <literal>commit</literal>
|
||||
from the <literal>crm configure</literal> menu. Pacemaker will then start the Ceilometer Central Agent
|
||||
service, and its dependent resources, on one of your nodes.</para>
|
||||
</section>
|
||||
<section xml:id="_configure_telemetry_central_agent_service">
|
||||
|
||||
<title>Configure Telemetry central agent service</title>
|
||||
|
||||
<para>Edit <filename>/etc/ceilometer/ceilometer.conf</filename>:</para>
|
||||
<programlisting language="ini"># We use API VIP for Identity Service connection:
|
||||
os_auth_url=http://192.168.42.103:5000/v2.0
|
||||
|
||||
# We send notifications to highly available RabbitMQ:
|
||||
notifier_strategy = rabbit
|
||||
rabbit_host = 192.168.42.102
|
||||
|
||||
[database]
|
||||
# We have to use MySQL connection to store data:
|
||||
sql_connection=mysql://ceilometer:password@192.168.42.101/ceilometer</programlisting>
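<para>Because Pacemaker now manages the daemon, apply configuration changes by restarting the resource through the cluster rather than with the distribution's init script (an illustrative sequence, using the same crm shell as above):</para>
<screen><prompt>#</prompt> <userinput>crm resource stop p_ceilometer-agent-central</userinput>
<prompt>#</prompt> <userinput>crm resource start p_ceilometer-agent-central</userinput></screen>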
|
||||
</section>
|
||||
</section>
|
@@ -1,94 +0,0 @@
<section xmlns="http://docbook.org/ns/docbook"
|
||||
xmlns:xi="http://www.w3.org/2001/XInclude"
|
||||
xmlns:xlink="http://www.w3.org/1999/xlink"
|
||||
version="5.0"
|
||||
xml:id="s-cinder-api">
|
||||
|
||||
<title>Highly available Block Storage API</title>
|
||||
|
||||
<para>Making the Block Storage (cinder) API service highly available in active/passive mode involves:</para>
<itemizedlist>
<listitem>
<para>
Configuring Block Storage to listen on the VIP address,
</para>
</listitem>
<listitem>
<para>
Managing the Block Storage API daemon with the Pacemaker cluster manager,
</para>
</listitem>
<listitem>
<para>
Configuring OpenStack services to use this IP address.
</para>
</listitem>
</itemizedlist>
|
||||
<note>
|
||||
<para>Here is the
<link xlink:href="http://docs.openstack.org/trunk/install-guide/install/apt/content/ch_cinder.html">documentation</link>
for installing the Block Storage service.</para>
|
||||
</note>
|
||||
<section xml:id="_add_block_storage_api_resource_to_pacemaker">
|
||||
|
||||
<title>Add Block Storage API resource to Pacemaker</title>
|
||||
|
||||
<para>First of all, you need to download the resource agent to your system:</para>
|
||||
<screen><prompt>#</prompt> <userinput>cd /usr/lib/ocf/resource.d/openstack</userinput>
|
||||
<prompt>#</prompt> <userinput>wget https://raw.github.com/madkiss/openstack-resource-agents/master/ocf/cinder-api</userinput>
|
||||
<prompt>#</prompt> <userinput>chmod a+rx *</userinput></screen>
|
||||
<para>You can now add the Pacemaker configuration for
|
||||
Block Storage API resource. Connect to the Pacemaker cluster with <literal>crm
|
||||
configure</literal>, and add the following cluster resources:</para>
|
||||
<programlisting>primitive p_cinder-api ocf:openstack:cinder-api \
|
||||
params config="/etc/cinder/cinder.conf" os_password="secrete" os_username="admin" \
|
||||
os_tenant_name="admin" keystone_get_token_url="http://192.168.42.103:5000/v2.0/tokens" \
|
||||
op monitor interval="30s" timeout="30s"</programlisting>
|
||||
<para>This configuration creates</para>
|
||||
<itemizedlist>
|
||||
<listitem>
|
||||
<para><literal>p_cinder-api</literal>, a resource for managing the Block Storage API service
</para>
|
||||
</listitem>
|
||||
</itemizedlist>
|
||||
<para><literal>crm configure</literal> supports batch input, so you may copy and paste the
|
||||
above into your live Pacemaker configuration, and then make changes as
|
||||
required. For example, you may enter <literal>edit p_ip_cinder-api</literal> from the
|
||||
<literal>crm configure</literal> menu and edit the resource to match your preferred
|
||||
virtual IP address.</para>
|
||||
<para>Once completed, commit your configuration changes by entering <literal>commit</literal>
|
||||
from the <literal>crm configure</literal> menu. Pacemaker will then start the Block Storage API
|
||||
service, and its dependent resources, on one of your nodes.</para>
|
||||
</section>
|
||||
<section xml:id="_configure_block_storage_api_service">
|
||||
|
||||
<title>Configure Block Storage API service</title>
|
||||
|
||||
<para>Edit <filename>/etc/cinder/cinder.conf</filename>:</para>
|
||||
<programlisting language="ini"># We have to use MySQL connection to store data:
|
||||
sql_connection=mysql://cinder:password@192.168.42.101/cinder
|
||||
|
||||
# We bind Block Storage API to the VIP:
|
||||
osapi_volume_listen = 192.168.42.103
|
||||
|
||||
# We send notifications to highly available RabbitMQ:
|
||||
notifier_strategy = rabbit
|
||||
rabbit_host = 192.168.42.102</programlisting>
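<para>To confirm that the Block Storage API is bound to the VIP after Pacemaker starts it (an illustrative check; the port and address follow the configuration above):</para>
<screen><prompt>#</prompt> <userinput>netstat -tlnp | grep 8776</userinput></screen>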
|
||||
</section>
|
||||
<section xml:id="_configure_openstack_services_to_use_highly_available_block_storage_api">
|
||||
|
||||
<title>Configure OpenStack services to use highly available Block Storage API</title>
|
||||
|
||||
<para>Your OpenStack services must now point their Block Storage API configuration to
|
||||
the highly available, virtual cluster IP address — rather than a
|
||||
Block Storage API server’s physical IP address as you normally would.</para>
|
||||
<para>You must create the Block Storage API endpoint with this IP.</para>
|
||||
<note>
|
||||
<para>If you are using both private and public IP addresses, you should create two Virtual IP addresses and define your endpoint like this:</para>
|
||||
</note>
|
||||
<screen><prompt>$</prompt> <userinput>keystone endpoint-create --region $KEYSTONE_REGION \
|
||||
--service-id $service-id --publicurl 'http://PUBLIC_VIP:8776/v1/%(tenant_id)s' \
|
||||
--adminurl 'http://192.168.42.103:8776/v1/%(tenant_id)s' \
|
||||
--internalurl 'http://192.168.42.103:8776/v1/%(tenant_id)s'</userinput></screen>
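<para>If you want to double-check the result, list the endpoints and make sure the Block Storage entries point at the virtual IP rather than a physical node address (a sketch using the same legacy keystone client as above):</para>
<screen><prompt>$</prompt> <userinput>keystone endpoint-list</userinput></screen>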
|
||||
</section>
|
||||
</section>
|
@@ -1,111 +0,0 @@
<section xmlns="http://docbook.org/ns/docbook"
|
||||
xmlns:xi="http://www.w3.org/2001/XInclude"
|
||||
xmlns:xlink="http://www.w3.org/1999/xlink"
|
||||
version="5.0"
|
||||
xml:id="s-glance-api">
|
||||
|
||||
<title>Highly available OpenStack Image API</title>
|
||||
|
||||
<para>OpenStack Image Service offers a service for discovering, registering, and retrieving virtual machine images.
|
||||
To make the OpenStack Image API service highly available in active / passive mode, you must:</para>
|
||||
<itemizedlist>
|
||||
<listitem>
|
||||
<para>
|
||||
Configure OpenStack Image to listen on the VIP address.
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
Manage OpenStack Image API daemon with the Pacemaker cluster manager.
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
Configure OpenStack services to use this IP address.
|
||||
</para>
|
||||
</listitem>
|
||||
</itemizedlist>
|
||||
<note>
|
||||
<para>Here is the <link xlink:href="http://docs.openstack.org/trunk/install-guide/install/apt/content/ch_glance.html">documentation</link> for installing the OpenStack Image API service.</para>
|
||||
</note>
|
||||
<section xml:id="_add_openstack_image_api_resource_to_pacemaker">
|
||||
|
||||
<title>Add OpenStack Image API resource to Pacemaker</title>
|
||||
|
||||
<para>First of all, you need to download the resource agent to your system:</para>
|
||||
<screen><prompt>#</prompt> <userinput>cd /usr/lib/ocf/resource.d/openstack</userinput>
|
||||
<prompt>#</prompt> <userinput>wget https://raw.github.com/madkiss/openstack-resource-agents/master/ocf/glance-api</userinput>
|
||||
<prompt>#</prompt> <userinput>chmod a+rx *</userinput></screen>
|
||||
<para>You can now add the Pacemaker configuration for
|
||||
OpenStack Image API resource. Connect to the Pacemaker cluster with <literal>crm
|
||||
configure</literal>, and add the following cluster resources:</para>
|
||||
<programlisting>primitive p_glance-api ocf:openstack:glance-api \
|
||||
params config="/etc/glance/glance-api.conf" os_password="secrete" os_username="admin" os_tenant_name="admin" os_auth_url="http://192.168.42.103:5000/v2.0/" \
|
||||
op monitor interval="30s" timeout="30s"</programlisting>
|
||||
<para>This configuration creates</para>
|
||||
<itemizedlist>
|
||||
<listitem>
|
||||
<para><literal>p_glance-api</literal>, a resource for managing OpenStack Image API service
|
||||
</para>
|
||||
</listitem>
|
||||
</itemizedlist>
|
||||
<para><literal>crm configure</literal> supports batch input, so you may copy and paste the
|
||||
above into your live Pacemaker configuration, and then make changes as
|
||||
required. For example, you may enter <literal>edit p_ip_glance-api</literal> from the
|
||||
<literal>crm configure</literal> menu and edit the resource to match your preferred
|
||||
virtual IP address.</para>
|
||||
<para>Once completed, commit your configuration changes by entering <literal>commit</literal>
|
||||
from the <literal>crm configure</literal> menu. Pacemaker will then start the OpenStack Image API
|
||||
service, and its dependent resources, on one of your nodes.</para>
|
||||
</section>
|
||||
<section xml:id="_configure_openstack_image_service_api">
|
||||
|
||||
<title>Configure OpenStack Image Service API</title>
|
||||
|
||||
<para>Edit <filename>/etc/glance/glance-api.conf</filename>:</para>
|
||||
<programlisting language="ini"># We have to use MySQL connection to store data:
|
||||
sql_connection=mysql://glance:password@192.168.42.101/glance
|
||||
|
||||
# We bind OpenStack Image API to the VIP:
|
||||
bind_host = 192.168.42.103
|
||||
|
||||
# Connect to OpenStack Image Registry service:
|
||||
registry_host = 192.168.42.103
|
||||
|
||||
# We send notifications to highly available RabbitMQ:
|
||||
notifier_strategy = rabbit
|
||||
rabbit_host = 192.168.42.102</programlisting>
|
||||
</section>
|
||||
<section xml:id="_configure_openstack_services_to_use_high_available_openstack_image_api">
|
||||
|
||||
<title>Configure OpenStack services to use highly available OpenStack Image API</title>
|
||||
|
||||
<para>Your OpenStack services must now point their OpenStack Image API configuration to
|
||||
the highly available, virtual cluster IP address — rather than an
|
||||
OpenStack Image API server’s physical IP address as you normally would.</para>
|
||||
<para>For OpenStack Compute, for example, if your OpenStack
|
||||
Image API service IP address is 192.168.42.103 as in the
|
||||
configuration explained here, you would use the following
|
||||
configuration in your <filename>nova.conf</filename>
|
||||
file:</para>
|
||||
<programlisting language="ini">[glance]
|
||||
...
|
||||
api_servers = 192.168.42.103
|
||||
...</programlisting>
|
||||
<note>
|
||||
<para>In versions prior to Juno, this option was called
|
||||
<literal>glance_api_servers</literal> in the
|
||||
<literal>[DEFAULT]</literal> section.
|
||||
</para>
|
||||
</note>
|
||||
<para>You must also create
|
||||
the OpenStack Image API endpoint with this IP.</para>
|
||||
<note>
|
||||
<para>If you are using both private and public IP addresses, you should create two Virtual IP addresses and define your endpoint like this:</para>
|
||||
</note>
|
||||
<screen><prompt>$</prompt> <userinput>keystone endpoint-create --region $KEYSTONE_REGION \
|
||||
--service-id $service-id --publicurl 'http://PUBLIC_VIP:9292' \
|
||||
--adminurl 'http://192.168.42.103:9292' \
|
||||
--internalurl 'http://192.168.42.103:9292'</userinput></screen>
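<para>A quick way to confirm that the OpenStack Image API answers on the virtual IP (an illustrative check; it assumes the glance client is installed and credentials are sourced):</para>
<screen><prompt>$</prompt> <userinput>glance --os-image-url http://192.168.42.103:9292 image-list</userinput></screen>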
|
||||
</section>
|
||||
</section>
|
@@ -1,96 +0,0 @@
<section xmlns="http://docbook.org/ns/docbook"
|
||||
xmlns:xi="http://www.w3.org/2001/XInclude"
|
||||
xmlns:xlink="http://www.w3.org/1999/xlink"
|
||||
version="5.0"
|
||||
xml:id="s-keystone">
|
||||
|
||||
<title>Highly available OpenStack Identity</title>
|
||||
|
||||
<para>OpenStack Identity is the Identity Service in OpenStack and is used by many services.
Making the OpenStack Identity service highly available in active/passive mode involves:</para>
<itemizedlist>
<listitem>
<para>
Configuring OpenStack Identity to listen on the VIP address,
</para>
</listitem>
<listitem>
<para>
Managing the OpenStack Identity daemon with the Pacemaker cluster manager,
</para>
</listitem>
<listitem>
<para>
Configuring OpenStack services to use this IP address.
</para>
</listitem>
</itemizedlist>
|
||||
<note>
|
||||
<para>Here is the <link xlink:href="http://docs.openstack.org/trunk/install-guide/install/apt/content/ch_keystone.html">documentation</link> for installing OpenStack Identity service.</para>
|
||||
</note>
|
||||
<section xml:id="_add_openstack_identity_resource_to_pacemaker">
|
||||
|
||||
<title>Add OpenStack Identity resource to Pacemaker</title>
|
||||
|
||||
<para>First of all, you need to download the resource agent to your system:</para>
|
||||
<screen><prompt>#</prompt> <userinput>cd /usr/lib/ocf/resource.d</userinput>
|
||||
<prompt>#</prompt> <userinput>mkdir openstack</userinput>
|
||||
<prompt>#</prompt> <userinput>cd openstack</userinput>
|
||||
<prompt>#</prompt> <userinput>wget https://raw.github.com/madkiss/openstack-resource-agents/master/ocf/keystone</userinput>
|
||||
<prompt>#</prompt> <userinput>chmod a+rx *</userinput></screen>
|
||||
<para>You can now add the Pacemaker configuration for
|
||||
OpenStack Identity resource. Connect to the Pacemaker cluster with <literal>crm
|
||||
configure</literal>, and add the following cluster resources:</para>
|
||||
<programlisting>primitive p_keystone ocf:openstack:keystone \
|
||||
params config="/etc/keystone/keystone.conf" os_password="secret" os_username="admin" os_tenant_name="admin" os_auth_url="http://192.168.42.103:5000/v2.0/" \
|
||||
op monitor interval="30s" timeout="30s"</programlisting>
|
||||
<para>This configuration creates <literal>p_keystone</literal>, a resource for managing the OpenStack Identity service.</para>
|
||||
<para><literal>crm configure</literal> supports batch input, so you may copy and paste the
|
||||
above into your live Pacemaker configuration, and then make changes as
|
||||
required. For example, you may enter <literal>edit p_ip_keystone</literal> from the
|
||||
<literal>crm configure</literal> menu and edit the resource to match your preferred
|
||||
virtual IP address.</para>
|
||||
<para>Once completed, commit your configuration changes by entering <literal>commit</literal>
|
||||
from the <literal>crm configure</literal> menu. Pacemaker will then start the OpenStack Identity
|
||||
service, and its dependent resources, on one of your nodes.</para>
|
||||
</section>
|
||||
<section xml:id="_configure_openstack_identity_service">
|
||||
|
||||
<title>Configure OpenStack Identity service</title>
|
||||
|
||||
<para>You need to edit your OpenStack Identity configuration file (<filename>keystone.conf</filename>) and change the bind parameters:</para>
|
||||
<para>On Havana:</para>
|
||||
<programlisting language="ini">bind_host = 192.168.42.103</programlisting>
|
||||
<para>On Icehouse, the <literal>admin_bind_host</literal> option lets you use a private network for the admin access.</para>
|
||||
<programlisting language="ini">public_bind_host = 192.168.42.103
|
||||
admin_bind_host = 192.168.42.103</programlisting>
|
||||
<para>To ensure that all data is highly available, make sure that you store everything in the MySQL database (which is also highly available):</para>
|
||||
<programlisting language="ini">[catalog]
|
||||
driver = keystone.catalog.backends.sql.Catalog
|
||||
...
|
||||
[identity]
|
||||
driver = keystone.identity.backends.sql.Identity
|
||||
...</programlisting>
|
||||
</section>
|
||||
<section xml:id="_configure_openstack_services_to_use_the_highly_available_openstack_identity">
|
||||
|
||||
<title>Configure OpenStack services to use the highly available OpenStack Identity</title>
|
||||
|
||||
<para>Your OpenStack services must now point their OpenStack Identity configuration to
|
||||
the highly available, virtual cluster IP address — rather than an
OpenStack Identity server’s physical IP address as you normally would.</para>
|
||||
<para>For example with OpenStack Compute, if your OpenStack Identity service IP address is
|
||||
192.168.42.103 as in the configuration explained here, you would use
|
||||
the following line in your API configuration file
|
||||
(<literal>api-paste.ini</literal>):</para>
|
||||
<programlisting language="ini">auth_host = 192.168.42.103</programlisting>
|
||||
<para>You also need to create the OpenStack Identity Endpoint with this IP.</para>
|
||||
<note>
<para>If you are using both private and public IP addresses, you should create two Virtual IP addresses and define your endpoint like this:</para>
</note>
|
||||
<screen><prompt>$</prompt> <userinput>keystone endpoint-create --region $KEYSTONE_REGION \
|
||||
--service-id $service-id --publicurl 'http://PUBLIC_VIP:5000/v2.0' \
|
||||
--adminurl 'http://192.168.42.103:35357/v2.0' \
|
||||
--internalurl 'http://192.168.42.103:5000/v2.0'</userinput></screen>
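<para>To verify that OpenStack Identity is reachable through the VIP, you can request a token against it (an illustrative check using the legacy keystone client; the credentials shown match the example Pacemaker resource above):</para>
<screen><prompt>$</prompt> <userinput>keystone --os-auth-url http://192.168.42.103:5000/v2.0 \
  --os-username admin --os-password secret --os-tenant-name admin token-get</userinput></screen>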
|
||||
<para>If you are using the horizon dashboard, you should edit the <literal>local_settings.py</literal> file:</para>
|
||||
<programlisting>OPENSTACK_HOST = 192.168.42.103</programlisting>
|
||||
</section>
|
||||
</section>
|
@@ -1,93 +0,0 @@
<section xmlns="http://docbook.org/ns/docbook"
|
||||
xmlns:xi="http://www.w3.org/2001/XInclude"
|
||||
xmlns:xlink="http://www.w3.org/1999/xlink"
|
||||
version="5.0"
|
||||
xml:id="s-neutron-server">
|
||||
|
||||
<title>Highly available OpenStack Networking server</title>
|
||||
|
||||
<para>OpenStack Networking is the network connectivity service in OpenStack.
|
||||
Making the OpenStack Networking Server service highly available in active / passive mode involves the following tasks:</para>
|
||||
<itemizedlist>
|
||||
<listitem>
|
||||
<para>
|
||||
Configure OpenStack Networking to listen on the virtual IP address,
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
Manage the OpenStack Networking API Server daemon with the Pacemaker cluster manager,
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
Configure OpenStack services to use the virtual IP address.
|
||||
</para>
|
||||
</listitem>
|
||||
</itemizedlist>
|
||||
<note>
|
||||
<para>Here is the <link xlink:href="http://docs.openstack.org/trunk/install-guide/install/apt/content/ch_networking.html">documentation</link> for installing OpenStack Networking service.</para>
|
||||
</note>
|
||||
<section xml:id="_add_openstack_networking_server_resource_to_pacemaker">
|
||||
|
||||
<title>Add OpenStack Networking Server resource to Pacemaker</title>
|
||||
|
||||
<para>First of all, you need to download the resource agent to your system:</para>
|
||||
<screen><prompt>#</prompt> <userinput>cd /usr/lib/ocf/resource.d/openstack</userinput>
|
||||
<prompt>#</prompt> <userinput>wget https://raw.github.com/madkiss/openstack-resource-agents/master/ocf/neutron-server</userinput>
|
||||
<prompt>#</prompt> <userinput>chmod a+rx *</userinput></screen>
|
||||
<para>You can now add the Pacemaker configuration for
|
||||
OpenStack Networking Server resource. Connect to the Pacemaker cluster with <literal>crm
|
||||
configure</literal>, and add the following cluster resources:</para>
|
||||
<programlisting>primitive p_neutron-server ocf:openstack:neutron-server \
|
||||
params os_password="secret" os_username="admin" os_tenant_name="admin" \
|
||||
keystone_get_token_url="http://192.168.42.103:5000/v2.0/tokens" \
|
||||
op monitor interval="30s" timeout="30s"</programlisting>
|
||||
<para>This configuration creates <literal>p_neutron-server</literal>, a resource for managing the OpenStack Networking Server service.</para>
|
||||
<para><literal>crm configure</literal> supports batch input, so you may copy and paste the
|
||||
above into your live Pacemaker configuration, and then make changes as
|
||||
required. For example, you may enter <literal>edit p_neutron-server</literal> from the
|
||||
<literal>crm configure</literal> menu and edit the resource to match your preferred
|
||||
virtual IP address.</para>
|
||||
<para>Once completed, commit your configuration changes by entering <literal>commit</literal>
|
||||
from the <literal>crm configure</literal> menu. Pacemaker will then start the OpenStack Networking API
|
||||
service, and its dependent resources, on one of your nodes.</para>
|
||||
</section>
|
||||
<section xml:id="_configure_openstack_networking_server">
|
||||
|
||||
<title>Configure OpenStack Networking server</title>
|
||||
|
||||
<para>Edit <filename>/etc/neutron/neutron.conf</filename>:</para>
|
||||
<programlisting language="ini"># We bind the service to the VIP:
|
||||
bind_host = 192.168.42.103
|
||||
|
||||
# We bind OpenStack Networking Server to the VIP:
|
||||
bind_host = 192.168.42.103
|
||||
|
||||
# We send notifications to highly available RabbitMQ:
|
||||
notifier_strategy = rabbit
|
||||
rabbit_host = 192.168.42.102
|
||||
|
||||
[database]
|
||||
# We have to use MySQL connection to store data:
|
||||
connection = mysql://neutron:password@192.168.42.101/neutron</programlisting>
|
||||
</section>
|
||||
<section xml:id="_configure_openstack_services_to_use_highly_available_openstack_networking_server">
|
||||
|
||||
<title>Configure OpenStack services to use highly available OpenStack Networking server</title>
|
||||
|
||||
<para>Your OpenStack services must now point their OpenStack Networking Server configuration to
|
||||
the highly available, virtual cluster IP address — rather than an
|
||||
OpenStack Networking server’s physical IP address as you normally would.</para>
|
||||
<para>For example, to configure OpenStack Compute to use the highly available OpenStack Networking server, edit the <literal>nova.conf</literal> file:</para>
|
||||
<programlisting language="ini">neutron_url = http://192.168.42.103:9696</programlisting>
|
||||
<para>You need to create the OpenStack Networking server endpoint with this IP.</para>
|
||||
<note>
|
||||
<para>If you are using both private and public IP addresses, you should create two Virtual IP addresses and define your endpoint like this:</para>
|
||||
</note>
|
||||
<screen><prompt>$</prompt> <userinput>keystone endpoint-create --region $KEYSTONE_REGION --service-id $service-id \
|
||||
--publicurl 'http://PUBLIC_VIP:9696/' \
|
||||
--adminurl 'http://192.168.42.103:9696/' \
|
||||
--internalurl 'http://192.168.42.103:9696/'</userinput></screen>
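<para>Similarly, you can confirm that OpenStack Networking responds on the virtual IP (an illustrative check; it assumes the neutron client is installed and credentials are sourced):</para>
<screen><prompt>$</prompt> <userinput>neutron --os-url http://192.168.42.103:9696 net-list</userinput></screen>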
|
||||
</section>
|
||||
</section>
|
@@ -1,89 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
|
||||
<book xmlns="http://docbook.org/ns/docbook"
|
||||
xmlns:xi="http://www.w3.org/2001/XInclude"
|
||||
xmlns:xlink="http://www.w3.org/1999/xlink"
|
||||
version="5.0"
|
||||
xml:id="bk-ha-guide">
|
||||
|
||||
<title>OpenStack High Availability Guide</title>
|
||||
<info>
|
||||
<author>
|
||||
<personname>
|
||||
<firstname>Florian</firstname>
|
||||
<surname>Haas</surname>
|
||||
</personname>
|
||||
<email>florian@hastexo.com</email>
|
||||
<affiliation>
|
||||
<orgname>hastexo</orgname>
|
||||
</affiliation>
|
||||
</author>
|
||||
<copyright>
|
||||
<year>2012</year>
|
||||
<year>2013</year>
|
||||
<year>2014</year>
|
||||
<holder>OpenStack Contributors</holder>
|
||||
</copyright>
|
||||
<releaseinfo>current</releaseinfo>
|
||||
<productname>OpenStack</productname>
|
||||
<pubdate/>
|
||||
<legalnotice role="apache2">
|
||||
<annotation>
|
||||
<remark>Copyright details are filled in by the template.</remark>
|
||||
</annotation>
|
||||
</legalnotice>
|
||||
<abstract>
|
||||
<para>This guide describes how to install,
|
||||
configure, and manage OpenStack for high availability.</para>
|
||||
</abstract>
|
||||
<revhistory>
|
||||
<revision>
|
||||
<date>2014-05-16</date>
|
||||
<revdescription>
|
||||
<itemizedlist spacing="compact">
|
||||
<listitem>
|
||||
<para>Conversion to Docbook.</para>
|
||||
</listitem>
|
||||
</itemizedlist>
|
||||
</revdescription>
|
||||
</revision>
|
||||
<revision>
|
||||
<date>2014-04-17</date>
|
||||
<revdescription>
|
||||
<itemizedlist spacing="compact">
|
||||
<listitem>
|
||||
<para>Minor cleanup of typos, otherwise no
|
||||
major revisions for Icehouse release.</para>
|
||||
</listitem>
|
||||
</itemizedlist>
|
||||
</revdescription>
|
||||
</revision>
|
||||
<revision>
|
||||
<date>2012-01-16</date>
|
||||
<revdescription>
|
||||
<itemizedlist spacing="compact">
|
||||
<listitem>
|
||||
<para>Organizes guide based on cloud controller and compute nodes.</para>
|
||||
</listitem>
|
||||
</itemizedlist>
|
||||
</revdescription>
|
||||
</revision>
|
||||
<revision>
|
||||
<date>2012-05-24</date>
|
||||
<revdescription>
|
||||
<itemizedlist spacing="compact">
|
||||
<listitem>
|
||||
<para>Begin trunk designation.</para>
|
||||
</listitem>
|
||||
</itemizedlist>
|
||||
</revdescription>
|
||||
</revision>
|
||||
</revhistory>
|
||||
</info>
|
||||
|
||||
<xi:include href="../common/ch_preface.xml"/>
|
||||
<xi:include href="ch_intro.xml"/>
|
||||
<xi:include href="part_active_passive.xml"/>
|
||||
<xi:include href="part_active_active.xml"/>
|
||||
<xi:include href="../common/app_support.xml"/>
|
||||
|
||||
</book>
|
@@ -1,21 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
|
||||
<chapter xmlns="http://docbook.org/ns/docbook"
|
||||
xmlns:xi="http://www.w3.org/2001/XInclude"
|
||||
xmlns:xlink="http://www.w3.org/1999/xlink"
|
||||
version="5.0"
|
||||
xml:id="ch-api">
|
||||
|
||||
<title>API node cluster stack</title>
|
||||
|
||||
<para>The API node exposes OpenStack API endpoints onto external network (Internet).
|
||||
It must talk to the cloud controller on the management network.</para>
|
||||
|
||||
<xi:include href="api/section_api_vip.xml"/>
|
||||
<xi:include href="api/section_keystone.xml"/>
|
||||
<xi:include href="api/section_glance_api.xml"/>
|
||||
<xi:include href="api/section_cinder_api.xml"/>
|
||||
<xi:include href="api/section_neutron_server.xml"/>
|
||||
<xi:include href="api/section_ceilometer_agent_central.xml"/>
|
||||
<xi:include href="api/section_api_pacemaker.xml"/>
|
||||
|
||||
</chapter>
|
@@ -1,15 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
|
||||
<chapter xmlns="http://docbook.org/ns/docbook"
|
||||
xmlns:xi="http://www.w3.org/2001/XInclude"
|
||||
xmlns:xlink="http://www.w3.org/1999/xlink"
|
||||
version="5.0"
|
||||
xml:id="ch-controller">
|
||||
|
||||
<title>Cloud controller cluster stack</title>
|
||||
|
||||
<para>The cloud controller runs on the management network and must talk to all other services.</para>
|
||||
|
||||
<xi:include href="controller/section_mysql.xml"/>
|
||||
<xi:include href="controller/section_rabbitmq.xml"/>
|
||||
|
||||
</chapter>
|
@@ -1,32 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
|
||||
<chapter xmlns="http://docbook.org/ns/docbook"
|
||||
xmlns:xi="http://www.w3.org/2001/XInclude"
|
||||
xmlns:xlink="http://www.w3.org/1999/xlink"
|
||||
version="5.0"
|
||||
xml:id="ha-aa-controllers">
|
||||
|
||||
<title>OpenStack controller nodes</title>
|
||||
|
||||
<para>OpenStack controller nodes contain:</para>
|
||||
<itemizedlist>
|
||||
<listitem>
|
||||
<para>
|
||||
All OpenStack API services
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
All OpenStack schedulers
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
Memcached service
|
||||
</para>
|
||||
</listitem>
|
||||
</itemizedlist>
|
||||
|
||||
<xi:include href="ha_aa_controllers/section_run_openstack_api_and_schedulers.xml"/>
|
||||
<xi:include href="ha_aa_controllers/section_memcached.xml"/>
|
||||
|
||||
</chapter>
|
@@ -1,22 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
|
||||
<chapter xmlns="http://docbook.org/ns/docbook"
|
||||
xmlns:xi="http://www.w3.org/2001/XInclude"
|
||||
xmlns:xlink="http://www.w3.org/1999/xlink"
|
||||
version="5.0"
|
||||
xml:id="ha-aa-db">
|
||||
|
||||
<title>Database</title>
|
||||
|
||||
<para>The first step is installing the database that sits at the heart of the
|
||||
cluster. When we talk about High Availability, we talk about several databases (for redundancy) and a
|
||||
means to keep them synchronized. In this case, we choose the
MySQL database, along with Galera for synchronous multi-master replication.</para>
|
||||
<para>The choice of database isn’t a foregone conclusion; you’re not required
|
||||
to use MySQL. It is, however, a fairly common choice in OpenStack
|
||||
installations, so we’ll cover it here.</para>
|
||||
|
||||
<xi:include href="ha_aa_db/section_ha_aa_db_mysql_galera.xml"/>
|
||||
<xi:include href="ha_aa_db/section_ha_aa_db_galera_monitoring.xml"/>
|
||||
<xi:include href="ha_aa_db/section_other_ways_to_provide_a_highly_available_database.xml"/>
|
||||
|
||||
</chapter>
|
@@ -1,158 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
|
||||
<chapter xmlns="http://docbook.org/ns/docbook"
|
||||
xmlns:xi="http://www.w3.org/2001/XInclude"
|
||||
xmlns:xlink="http://www.w3.org/1999/xlink"
|
||||
version="5.0"
|
||||
xml:id="ha-aa-haproxy">
|
||||
|
||||
<title>HAProxy nodes</title>
|
||||
|
||||
<para>HAProxy is a very fast and reliable solution offering high availability, load balancing, and proxying
|
||||
for TCP and HTTP-based applications. It is particularly suited for web sites crawling under very high loads
|
||||
while needing persistence or Layer 7 processing. Supporting tens of thousands of connections is clearly
|
||||
realistic with today’s hardware.</para>
|
||||
<para>To install HAProxy on your nodes, refer to its <link xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="http://haproxy.1wt.eu/#docs">official documentation</link>.
Also, keep in mind that this service must not become a single point of failure, so you need at least two
nodes running HAProxy.</para>
<para>Here is an example HAProxy configuration file:</para>
|
||||
<programlisting>global
|
||||
chroot /var/lib/haproxy
|
||||
daemon
|
||||
group haproxy
|
||||
maxconn 4000
|
||||
pidfile /var/run/haproxy.pid
|
||||
user haproxy
|
||||
|
||||
defaults
|
||||
log global
|
||||
maxconn 8000
|
||||
option redispatch
|
||||
retries 3
|
||||
timeout http-request 10s
|
||||
timeout queue 1m
|
||||
timeout connect 10s
|
||||
timeout client 1m
|
||||
timeout server 1m
|
||||
timeout check 10s
|
||||
|
||||
listen dashboard_cluster
|
||||
bind <Virtual IP>:443
|
||||
balance source
|
||||
option tcpka
|
||||
option httpchk
|
||||
option tcplog
|
||||
server controller1 10.0.0.1:443 check inter 2000 rise 2 fall 5
|
||||
server controller2 10.0.0.2:443 check inter 2000 rise 2 fall 5
|
||||
|
||||
listen galera_cluster
|
||||
bind <Virtual IP>:3306
|
||||
balance source
|
||||
option httpchk
|
||||
server controller1 10.0.0.4:3306 check port 9200 inter 2000 rise 2 fall 5
|
||||
server controller2 10.0.0.5:3306 check port 9200 inter 2000 rise 2 fall 5
|
||||
server controller3 10.0.0.6:3306 check port 9200 inter 2000 rise 2 fall 5
|
||||
|
||||
listen glance_api_cluster
|
||||
bind <Virtual IP>:9292
|
||||
balance source
|
||||
option tcpka
|
||||
option httpchk
|
||||
option tcplog
|
||||
server controller1 10.0.0.1:9292 check inter 2000 rise 2 fall 5
|
||||
server controller2 10.0.0.2:9292 check inter 2000 rise 2 fall 5
|
||||
|
||||
listen glance_registry_cluster
|
||||
bind <Virtual IP>:9191
|
||||
balance source
|
||||
option tcpka
|
||||
option tcplog
|
||||
server controller1 10.0.0.1:9191 check inter 2000 rise 2 fall 5
|
||||
server controller2 10.0.0.2:9191 check inter 2000 rise 2 fall 5
|
||||
|
||||
listen keystone_admin_cluster
|
||||
bind <Virtual IP>:35357
|
||||
balance source
|
||||
option tcpka
|
||||
option httpchk
|
||||
option tcplog
|
||||
server controller1 10.0.0.1:35357 check inter 2000 rise 2 fall 5
|
||||
server controller2 10.0.0.2:35357 check inter 2000 rise 2 fall 5
|
||||
|
||||
listen keystone_public_internal_cluster
|
||||
bind <Virtual IP>:5000
|
||||
balance source
|
||||
option tcpka
|
||||
option httpchk
|
||||
option tcplog
|
||||
server controller1 10.0.0.1:5000 check inter 2000 rise 2 fall 5
|
||||
server controller2 10.0.0.2:5000 check inter 2000 rise 2 fall 5
|
||||
|
||||
listen nova_ec2_api_cluster
|
||||
bind <Virtual IP>:8773
|
||||
balance source
|
||||
option tcpka
|
||||
option tcplog
|
||||
server controller1 10.0.0.1:8773 check inter 2000 rise 2 fall 5
|
||||
server controller2 10.0.0.2:8773 check inter 2000 rise 2 fall 5
|
||||
|
||||
listen nova_compute_api_cluster
|
||||
bind <Virtual IP>:8774
|
||||
balance source
|
||||
option tcpka
|
||||
option httpchk
|
||||
option tcplog
|
||||
server controller1 10.0.0.1:8774 check inter 2000 rise 2 fall 5
|
||||
server controller2 10.0.0.2:8774 check inter 2000 rise 2 fall 5
|
||||
|
||||
listen nova_metadata_api_cluster
|
||||
bind <Virtual IP>:8775
|
||||
balance source
|
||||
option tcpka
|
||||
option tcplog
|
||||
server controller1 10.0.0.1:8775 check inter 2000 rise 2 fall 5
|
||||
server controller2 10.0.0.2:8775 check inter 2000 rise 2 fall 5
|
||||
|
||||
listen cinder_api_cluster
|
||||
bind <Virtual IP>:8776
|
||||
balance source
|
||||
option tcpka
|
||||
option httpchk
|
||||
option tcplog
|
||||
server controller1 10.0.0.1:8776 check inter 2000 rise 2 fall 5
|
||||
server controller2 10.0.0.2:8776 check inter 2000 rise 2 fall 5
|
||||
|
||||
listen ceilometer_api_cluster
|
||||
bind <Virtual IP>:8777
|
||||
balance source
|
||||
option tcpka
|
||||
option httpchk
|
||||
option tcplog
|
||||
server controller1 10.0.0.1:8777 check inter 2000 rise 2 fall 5
server controller2 10.0.0.2:8777 check inter 2000 rise 2 fall 5
|
||||
|
||||
listen spice_cluster
|
||||
bind <Virtual IP>:6082
|
||||
balance source
|
||||
option tcpka
|
||||
option tcplog
|
||||
server controller1 10.0.0.1:6080 check inter 2000 rise 2 fall 5
|
||||
server controller2 10.0.0.2:6080 check inter 2000 rise 2 fall 5
|
||||
|
||||
listen neutron_api_cluster
|
||||
bind <Virtual IP>:9696
|
||||
balance source
|
||||
option tcpka
|
||||
option httpchk
|
||||
option tcplog
|
||||
server controller1 10.0.0.1:9696 check inter 2000 rise 2 fall 5
|
||||
server controller2 10.0.0.2:9696 check inter 2000 rise 2 fall 5
|
||||
|
||||
listen swift_proxy_cluster
|
||||
bind <Virtual IP>:8080
|
||||
balance source
|
||||
option tcplog
|
||||
option tcpka
|
||||
server controller1 10.0.0.1:8080 check inter 2000 rise 2 fall 5
|
||||
server controller2 10.0.0.2:8080 check inter 2000 rise 2 fall 5</programlisting>
|
||||
<para>After each change of this file, you should restart HAProxy.</para>
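<para>Before restarting, it is a good idea to validate the file; HAProxy can check a configuration without affecting the running service (a minimal sketch; the configuration path and service command may differ on your distribution):</para>
<screen><prompt>#</prompt> <userinput>haproxy -c -f /etc/haproxy/haproxy.cfg</userinput>
<prompt>#</prompt> <userinput>service haproxy reload</userinput></screen>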
|
||||
</chapter>
|
@@ -1,50 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
|
||||
<chapter xmlns="http://docbook.org/ns/docbook"
|
||||
xmlns:xi="http://www.w3.org/2001/XInclude"
|
||||
xmlns:xlink="http://www.w3.org/1999/xlink"
|
||||
version="5.0"
|
||||
xml:id="ha-aa-network">
|
||||
|
||||
<title>OpenStack network nodes</title>
|
||||
|
||||
<para>OpenStack network nodes contain:</para>
|
||||
<itemizedlist>
|
||||
<listitem>
|
||||
<para>
|
||||
neutron DHCP agent
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
neutron L2 agent
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
neutron L3 agent
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
neutron metadata agent
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
neutron lbaas agent
|
||||
</para>
|
||||
</listitem>
|
||||
</itemizedlist>
|
||||
<note>
|
||||
<para>The neutron L2 agent does not need to be highly available. It has to be
installed on each Data Forwarding Node and controls the virtual networking
drivers, such as Open vSwitch or Linux Bridge. One L2 agent runs per node
and controls its virtual interfaces. That’s why it cannot be distributed and
highly available.</para>
|
||||
</note>
|
||||
|
||||
<xi:include href="ha_aa_network/section_run_neutron_dhcp_agent.xml"/>
|
||||
<xi:include href="ha_aa_network/section_run_neutron_l3_agent.xml"/>
|
||||
<xi:include href="ha_aa_network/section_run_neutron_metadata_agent.xml"/>
|
||||
|
||||
</chapter>
|
@@ -1,34 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
|
||||
<chapter xmlns="http://docbook.org/ns/docbook"
|
||||
xmlns:xi="http://www.w3.org/2001/XInclude"
|
||||
xmlns:xlink="http://www.w3.org/1999/xlink"
|
||||
version="5.0"
|
||||
xml:id="ha-aa-rabbitmq">
|
||||
|
||||
<title>RabbitMQ</title>
|
||||
|
||||
<para>RabbitMQ is the default AMQP server used by many OpenStack services. Making the RabbitMQ service
|
||||
highly available involves the following steps:</para>
|
||||
<itemizedlist>
|
||||
<listitem>
|
||||
<para>
|
||||
Install RabbitMQ
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
Configure RabbitMQ for HA queues
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
Configure OpenStack services to use Rabbit HA queues
|
||||
</para>
|
||||
</listitem>
|
||||
</itemizedlist>
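<para>Once the included sections have been applied, a quick way to confirm that the RabbitMQ brokers see each other is to check the cluster status on any member (an illustrative check):</para>
<screen><prompt>#</prompt> <userinput>rabbitmqctl cluster_status</userinput></screen>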
|
||||
|
||||
<xi:include href="ha_aa_rabbitmq/section_install_rabbitmq.xml"/>
|
||||
<xi:include href="ha_aa_rabbitmq/section_configure_rabbitmq.xml"/>
|
||||
<xi:include href="ha_aa_rabbitmq/section_configure_openstack_services_to_user_rabbitmq.xml"/>
|
||||
|
||||
</chapter>
|
@@ -1,74 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
|
||||
<chapter xmlns="http://docbook.org/ns/docbook"
|
||||
xmlns:xi="http://www.w3.org/2001/XInclude"
|
||||
xmlns:xlink="http://www.w3.org/1999/xlink"
|
||||
version="5.0"
|
||||
xml:id="ch-intro">
|
||||
|
||||
<title>Introduction to OpenStack High Availability</title>
|
||||
|
||||
<para>High Availability systems seek to minimize two things:</para>
|
||||
<itemizedlist>
|
||||
<listitem>
|
||||
<para><emphasis role="strong">System downtime</emphasis> — occurs when a <emphasis>user-facing</emphasis> service is unavailable beyond a specified maximum amount of time.
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para><emphasis role="strong">Data loss</emphasis> — accidental deletion or destruction of data.
|
||||
</para>
|
||||
</listitem>
|
||||
</itemizedlist>
|
||||
<para>Most high availability systems guarantee protection against system downtime and data loss only in the event of a single failure. However, they are also expected to protect against cascading failures, where a single failure deteriorates into a series of consequential failures.</para>
|
||||
<para>A crucial aspect of high availability is the elimination of single points of failure (SPOFs). A SPOF is an individual piece of equipment or software which will cause system downtime or data loss if it fails. In order to eliminate SPOFs, check that mechanisms exist for redundancy of:</para>
|
||||
<itemizedlist>
|
||||
<listitem>
|
||||
<para>
|
||||
Network components, such as switches and routers
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
Applications and automatic service migration
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
Storage components
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
Facility services such as power, air conditioning, and fire protection
|
||||
</para>
|
||||
</listitem>
|
||||
</itemizedlist>
|
||||
<para>Most high availability systems will fail in the event of multiple independent (non-consequential) failures. In this case, most systems will protect data over maintaining availability.</para>
|
||||
<para>High-availability systems typically achieve an uptime percentage of 99.99% or more, which roughly equates to less than an hour of cumulative downtime per year. In order to achieve this, high availability systems should keep recovery times after a failure to about one to two minutes, sometimes significantly less.</para>
|
||||
<para>OpenStack currently meets such availability requirements for its own infrastructure services, meaning that an uptime of 99.99% is feasible for the OpenStack infrastructure proper. However, OpenStack <emphasis>does</emphasis> <emphasis>not</emphasis> guarantee 99.99% availability for individual guest instances.</para>
|
||||
<para>Preventing single points of failure can depend on whether or not a service is stateless.</para>
|
||||
<section xml:id="stateless-vs-stateful">
|
||||
|
||||
<title>Stateless vs. Stateful services</title>
|
||||
|
||||
<para>A stateless service is one that provides a response after your request, and then requires no further attention. To make a stateless service highly available, you need to provide redundant instances and load balance them. OpenStack services that are stateless include nova-api, nova-conductor, glance-api, keystone-api, neutron-api and nova-scheduler.</para>
|
||||
<para>A stateful service is one where subsequent requests to the service depend on the results of the first request. Stateful services are more difficult to manage because a single action typically involves more than one request, so simply providing additional instances and load balancing will not solve the problem. For example, if the Horizon user interface reset itself every time you went to a new page, it wouldn’t be very useful. OpenStack services that are stateful include the OpenStack database and message queue.</para>
|
||||
<para>Making stateful services highly available can depend on whether you choose an active/passive or active/active configuration.</para>
|
||||
</section>
|
||||
<section xml:id="ap-intro">
|
||||
|
||||
<title>Active/Passive</title>
|
||||
|
||||
<para>In an active/passive configuration, systems are set up to bring additional resources online to replace those that have failed. For example, OpenStack would write to the main database while maintaining a disaster recovery database that can be brought online in the event that the main database fails.</para>
|
||||
<para>Typically, an active/passive installation for a stateless service would maintain a redundant instance that can be brought online when required. Requests may be handled using a virtual IP address to facilitate return to service with minimal reconfiguration required.</para>
|
||||
<para>A typical active/passive installation for a stateful service maintains a replacement resource that can be brought online when required. A separate application (such as Pacemaker or Corosync) monitors these services, bringing the backup online as necessary.</para>
|
||||
</section>
|
||||
<section xml:id="aa-intro">
|
||||
|
||||
<title>Active/Active</title>
|
||||
|
||||
<para>In an active/active configuration, systems also use a backup but will manage both the main and redundant systems concurrently. This way, if there is a failure the user is unlikely to notice. The backup system is already online, and takes on increased load while the main system is fixed and brought back online.</para>
|
||||
<para>Typically, an active/active installation for a stateless service would maintain a redundant instance, and requests are load balanced using a virtual IP address and a load balancer such as HAProxy.</para>
|
||||
<para>A typical active/active installation for a stateful service would include redundant services with all instances having an identical state. For example, updates to one instance of a database would also update all other instances. This way a request to one instance is the same as a request to any other. A load balancer manages the traffic to these systems, ensuring that operational systems always handle the request.</para>
|
||||
<para>These are some of the more common ways to implement these high availability architectures, but they are by no means the only ways to do it. The important thing is to make sure that your services are redundant, and available; how you achieve that is up to you. This document will cover some of the more common options for highly available systems.</para>
|
||||
</section>
|
||||
</chapter>
|
@@ -1,21 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
|
||||
<chapter xmlns="http://docbook.org/ns/docbook"
|
||||
xmlns:xi="http://www.w3.org/2001/XInclude"
|
||||
xmlns:xlink="http://www.w3.org/1999/xlink"
|
||||
version="5.0"
|
||||
xml:id="ch-network">
|
||||
|
||||
<title>Network controller cluster stack</title>
|
||||
|
||||
<para>The network controller sits on the management and data network, and needs to be connected to the Internet if an instance will need access to the Internet.</para>
|
||||
<note>
|
||||
<para>Both nodes should have the same hostname, because the Networking scheduler is
aware of only one node; for example, a virtual router is attached to a single L3 node.</para>
|
||||
</note>
|
||||
|
||||
<xi:include href="network/section_highly_available_neutron_l3_agent.xml"/>
|
||||
<xi:include href="network/section_highly_available_neutron_dhcp_agent.xml"/>
|
||||
<xi:include href="network/section_highly_available_neutron_metadata_agent.xml"/>
|
||||
<xi:include href="network/section_manage_network_resources.xml"/>
|
||||
|
||||
</chapter>
|
@@ -1,34 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
|
||||
<chapter xmlns="http://docbook.org/ns/docbook"
|
||||
xmlns:xi="http://www.w3.org/2001/XInclude"
|
||||
xmlns:xlink="http://www.w3.org/1999/xlink"
|
||||
version="5.0"
|
||||
xml:id="ch-pacemaker">
|
||||
|
||||
<title>The Pacemaker cluster stack</title>
|
||||
|
||||
<para>OpenStack infrastructure high availability relies on the
|
||||
<link xlink:href="http://www.clusterlabs.org">Pacemaker</link> cluster stack, the
|
||||
state-of-the-art high availability and load balancing stack for the
|
||||
Linux platform. Pacemaker is storage and application-agnostic, and is
|
||||
in no way specific to OpenStack.</para>
|
||||
<para>Pacemaker relies on the <link xlink:href="http://www.corosync.org">Corosync</link> messaging
|
||||
layer for reliable cluster communications. Corosync implements the
|
||||
Totem single-ring ordering and membership protocol. It also provides UDP
|
||||
and InfiniBand based messaging, quorum, and cluster membership to
|
||||
Pacemaker.</para>
|
||||
<para>Pacemaker interacts with applications through <emphasis>resource agents</emphasis> (RAs),
|
||||
of which it supports over 70 natively. Pacemaker can also easily use
|
||||
third-party RAs. An OpenStack high-availability configuration uses
|
||||
existing native Pacemaker RAs (such as those managing MySQL
|
||||
databases or virtual IP addresses), existing third-party RAs (such as
|
||||
for RabbitMQ), and native OpenStack RAs (such as those managing the
|
||||
OpenStack Identity and Image Services).</para>
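<para>To see which resource agents are available on a node, you can query the crm shell (an illustrative command; the <literal>openstack</literal> provider only appears after you download the OpenStack resource agents, as shown in the following chapters):</para>
<screen><prompt>#</prompt> <userinput>crm ra list ocf heartbeat</userinput>
<prompt>#</prompt> <userinput>crm ra list ocf openstack</userinput></screen>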
|
||||
|
||||
<xi:include href="pacemaker/section_install_packages.xml"/>
|
||||
<xi:include href="pacemaker/section_set_up_corosync.xml"/>
|
||||
<xi:include href="pacemaker/section_starting_corosync.xml"/>
|
||||
<xi:include href="pacemaker/section_start_pacemaker.xml"/>
|
||||
<xi:include href="pacemaker/section_set_basic_cluster_properties.xml"/>
|
||||
|
||||
</chapter>
|
@@ -1,255 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
|
||||
<!DOCTYPE section [
|
||||
<!ENTITY % openstack SYSTEM "../../common/entities/openstack.ent">
|
||||
%openstack;
|
||||
]>
|
||||
<section xmlns="http://docbook.org/ns/docbook"
|
||||
xmlns:xi="http://www.w3.org/2001/XInclude"
|
||||
xmlns:xlink="http://www.w3.org/1999/xlink"
|
||||
version="5.0"
|
||||
xml:id="s-mysql">
|
||||
|
||||
<title>Highly available MySQL</title>
|
||||
|
||||
<para>MySQL is the default database server used by many OpenStack
|
||||
services. Making the MySQL service highly available involves</para>
|
||||
<itemizedlist>
|
||||
<listitem>
|
||||
<para>
|
||||
Configure a DRBD device for use by MySQL,
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
Configure MySQL to use a data directory residing on that DRBD
|
||||
device,
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
Select and assign a virtual IP address (VIP) that can freely
|
||||
float between cluster nodes,
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
Configure MySQL to listen on that IP address,
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
Manage all resources, including the MySQL daemon itself, with
|
||||
the Pacemaker cluster manager.
|
||||
</para>
|
||||
</listitem>
|
||||
</itemizedlist>
|
||||
<note>
|
||||
<para><link xlink:href="http://galeracluster.com/">MySQL/Galera</link> is an
|
||||
alternative method of configuring MySQL for high availability. It is
|
||||
likely to become the preferred method of achieving MySQL high
|
||||
availability once it has sufficiently matured. At the time of writing,
|
||||
however, the Pacemaker/DRBD based approach remains the recommended one
|
||||
for OpenStack environments.</para>
|
||||
</note>
|
||||
<section xml:id="_configure_drbd">
|
||||
|
||||
<title>Configure DRBD</title>
|
||||
|
||||
<para>The Pacemaker based MySQL server requires a DRBD resource from
|
||||
which it mounts the <literal>/var/lib/mysql</literal> directory. In this example,
|
||||
the DRBD resource is simply named <literal>mysql</literal>:</para>
|
||||
<formalpara>
|
||||
|
||||
<title><literal>mysql</literal> DRBD resource configuration (<filename>/etc/drbd.d/mysql.res</filename>)</title>
|
||||
|
||||
<para>
|
||||
<programlisting>resource mysql {
|
||||
device minor 0;
|
||||
disk "/dev/data/mysql";
|
||||
meta-disk internal;
|
||||
on node1 {
|
||||
address ipv4 10.0.42.100:7700;
|
||||
}
|
||||
on node2 {
|
||||
address ipv4 10.0.42.254:7700;
|
||||
}
|
||||
}</programlisting>
|
||||
</para>
|
||||
</formalpara>
|
||||
<para>This resource uses an underlying local disk (in DRBD terminology, a
|
||||
<emphasis>backing device</emphasis>) named <literal>/dev/data/mysql</literal> on both cluster nodes,
|
||||
<literal>node1</literal> and <literal>node2</literal>. Normally, this would be an LVM Logical Volume
|
||||
specifically set aside for this purpose. The DRBD <literal>meta-disk</literal> is
|
||||
<literal>internal</literal>, meaning DRBD-specific metadata is being stored at the end
|
||||
of the <literal>disk</literal> device itself. The device is configured to communicate
|
||||
between IPv4 addresses 10.0.42.100 and 10.0.42.254, using TCP port
|
||||
7700. Once enabled, it will map to a local DRBD block device with the
|
||||
device minor number 0, that is, <filename>/dev/drbd0</filename>.</para>
|
||||
<para>Enabling a DRBD resource is explained in detail in
|
||||
<link xlink:href="http://www.drbd.org/users-guide-8.3/s-first-time-up.html">the DRBD
|
||||
User’s Guide</link>. In brief, the proper sequence of commands is this:</para>
|
||||
<screen><prompt>#</prompt> <userinput>drbdadm create-md mysql</userinput><co xml:id="CO3-1"/>
|
||||
<prompt>#</prompt> <userinput>drbdadm up mysql</userinput><co xml:id="CO3-2"/>
|
||||
<prompt>#</prompt> <userinput>drbdadm -- --force primary mysql</userinput><co xml:id="CO3-3"/></screen>
|
||||
<calloutlist>
|
||||
<callout arearefs="CO3-1">
|
||||
<para>
|
||||
Initializes DRBD metadata and writes the initial set of metadata
|
||||
to <literal>/dev/data/mysql</literal>. Must be completed on both nodes.
|
||||
</para>
|
||||
</callout>
|
||||
<callout arearefs="CO3-2">
|
||||
<para>
|
||||
Creates the <literal>/dev/drbd0</literal> device node, <emphasis>attaches</emphasis> the DRBD device
|
||||
to its backing store, and <emphasis>connects</emphasis> the DRBD node to its peer. Must
|
||||
be completed on both nodes.
|
||||
</para>
|
||||
</callout>
|
||||
<callout arearefs="CO3-3">
|
||||
<para>
|
||||
Kicks off the initial device synchronization, and puts the device
|
||||
into the <literal>primary</literal> (readable and writable) role. See
|
||||
<link xlink:href="http://www.drbd.org/users-guide-8.3/ch-admin.html#s-roles">Resource
|
||||
roles</link> (from the DRBD User’s Guide) for a more detailed description of
|
||||
the primary and secondary roles in DRBD. Must be completed <emphasis>on one
|
||||
node only,</emphasis> namely the one where you are about to continue with
|
||||
creating your filesystem.
|
||||
</para>
|
||||
</callout>
|
||||
</calloutlist>
|
||||
</section>
|
||||
<section xml:id="_creating_a_file_system">
|
||||
|
||||
<title>Creating a file system</title>
|
||||
|
||||
<para>Once the DRBD resource is running and in the primary role (and
|
||||
potentially still in the process of running the initial device
|
||||
synchronization), you may proceed with creating the filesystem for
|
||||
MySQL data. XFS is the generally recommended filesystem due to its journaling, efficient allocation, and performance:</para>
|
||||
<screen><prompt>#</prompt> <userinput>mkfs -t xfs /dev/drbd0</userinput></screen>
|
||||
<para>You may also use the alternate device path for the DRBD device, which
|
||||
may be easier to remember as it includes the self-explanatory resource
|
||||
name:</para>
|
||||
<screen><prompt>#</prompt> <userinput>mkfs -t xfs /dev/drbd/by-res/mysql</userinput></screen>
|
||||
<para>Once completed, you may safely return the device to the secondary
|
||||
role. Any ongoing device synchronization will continue in the
|
||||
background:</para>
|
||||
<screen><prompt>#</prompt> <userinput>drbdadm secondary mysql</userinput></screen>
|
||||
</section>
|
||||
<section xml:id="_prepare_mysql_for_pacemaker_high_availability">
|
||||
|
||||
<title>Prepare MySQL for Pacemaker high availability</title>
|
||||
|
||||
<para>In order for Pacemaker monitoring to function properly, you must
|
||||
ensure that MySQL’s database files reside on the DRBD device. If you
|
||||
already have an existing MySQL database, the simplest approach is to
|
||||
just move the contents of the existing <literal>/var/lib/mysql</literal> directory into
|
||||
the newly created filesystem on the DRBD device.</para>
|
||||
<warning>
|
||||
<para>You must complete the next step while the MySQL database
|
||||
server is shut down.</para>
|
||||
</warning>
|
||||
<screen><prompt>#</prompt> <userinput>mount /dev/drbd/by-res/mysql /mnt</userinput>
|
||||
<prompt>#</prompt> <userinput>mv /var/lib/mysql/* /mnt</userinput>
|
||||
<prompt>#</prompt> <userinput>umount /mnt</userinput></screen>
|
||||
<para>For a new MySQL installation with no existing data, you may also run
|
||||
the <literal>mysql_install_db</literal> command:</para>
|
||||
<screen><prompt>#</prompt> <userinput>mount /dev/drbd/by-res/mysql /mnt</userinput>
|
||||
<prompt>#</prompt> <userinput>mysql_install_db --datadir=/mnt</userinput>
|
||||
<prompt>#</prompt> <userinput>umount /mnt</userinput></screen>
|
||||
<para>Regardless of the approach, the steps outlined here must be completed
|
||||
on only one cluster node.</para>
|
||||
</section>
|
||||
<section xml:id="_add_mysql_resources_to_pacemaker">
|
||||
|
||||
<title>Add MySQL resources to Pacemaker</title>
|
||||
|
||||
<para>You can now add the Pacemaker configuration for
|
||||
MySQL resources. Connect to the Pacemaker cluster with <literal>crm
|
||||
configure</literal>, and add the following cluster resources:</para>
|
||||
<programlisting>primitive p_ip_mysql ocf:heartbeat:IPaddr2 \
|
||||
params ip="192.168.42.101" cidr_netmask="24" \
|
||||
op monitor interval="30s"
|
||||
primitive p_drbd_mysql ocf:linbit:drbd \
|
||||
params drbd_resource="mysql" \
|
||||
op start timeout="90s" \
|
||||
op stop timeout="180s" \
|
||||
op promote timeout="180s" \
|
||||
op demote timeout="180s" \
|
||||
op monitor interval="30s" role="Slave" \
|
||||
op monitor interval="29s" role="Master"
|
||||
primitive p_fs_mysql ocf:heartbeat:Filesystem \
|
||||
params device="/dev/drbd/by-res/mysql" \
|
||||
directory="/var/lib/mysql" \
|
||||
fstype="xfs" \
|
||||
options="relatime" \
|
||||
op start timeout="60s" \
|
||||
op stop timeout="180s" \
|
||||
op monitor interval="60s" timeout="60s"
|
||||
primitive p_mysql ocf:heartbeat:mysql \
|
||||
params additional_parameters="--bind-address=192.168.42.101" \
|
||||
config="/etc/mysql/my.cnf" \
|
||||
pid="/var/run/mysqld/mysqld.pid" \
|
||||
socket="/var/run/mysqld/mysqld.sock" \
|
||||
log="/var/log/mysql/mysqld.log" \
|
||||
op monitor interval="20s" timeout="10s" \
|
||||
op start timeout="120s" \
|
||||
op stop timeout="120s"
|
||||
group g_mysql p_ip_mysql p_fs_mysql p_mysql
|
||||
ms ms_drbd_mysql p_drbd_mysql \
|
||||
meta notify="true" clone-max="2"
|
||||
colocation c_mysql_on_drbd inf: g_mysql ms_drbd_mysql:Master
|
||||
order o_drbd_before_mysql inf: ms_drbd_mysql:promote g_mysql:start</programlisting>
|
||||
<para>This configuration creates</para>
|
||||
<itemizedlist>
|
||||
<listitem>
|
||||
<para><literal>p_ip_mysql</literal>, a virtual IP address for use by MySQL
|
||||
(192.168.42.101),
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para><literal>p_fs_mysql</literal>, a Pacemaker managed filesystem mounted to
|
||||
<literal>/var/lib/mysql</literal> on whatever node currently runs the MySQL
|
||||
service,
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para><literal>ms_drbd_mysql</literal>, the <emphasis>master/slave set</emphasis> managing the <literal>mysql</literal>
|
||||
DRBD resource,
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
a service <literal>group</literal> and <literal>order</literal> and <literal>colocation</literal> constraints to ensure
|
||||
resources are started on the correct nodes, and in the correct sequence.
|
||||
</para>
|
||||
</listitem>
|
||||
</itemizedlist>
|
||||
<para><literal>crm configure</literal> supports batch input, so you may copy and paste the
|
||||
above into your live pacemaker configuration, and then make changes as
|
||||
required. For example, you may enter <literal>edit p_ip_mysql</literal> from the
|
||||
<literal>crm configure</literal> menu and edit the resource to match your preferred
|
||||
virtual IP address.</para>
|
||||
<para>Once completed, commit your configuration changes by entering <literal>commit</literal>
|
||||
from the <literal>crm configure</literal> menu. Pacemaker will then start the MySQL
|
||||
service, and its dependent resources, on one of your nodes.</para>
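<para>To verify that the resources have come up on one of the nodes, you can, for example, display the cluster status once with a standard Pacemaker tool:</para>
<screen><prompt>#</prompt> <userinput>crm_mon -1</userinput></screen>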
|
||||
</section>
|
||||
<section xml:id="_configure_openstack_services_for_highly_available_mysql">
|
||||
|
||||
<title>Configure OpenStack services for highly available MySQL</title>
|
||||
|
||||
<para>Your OpenStack services must now point their MySQL configuration to
|
||||
the highly available, virtual cluster IP address—rather than a
|
||||
MySQL server’s physical IP address as you normally would.</para>
|
||||
<para>For OpenStack Image, for example, if your MySQL service IP address is
|
||||
192.168.42.101 as in the configuration explained here, you would use
|
||||
the following line in your OpenStack Image registry configuration file
|
||||
(<filename>glance-registry.conf</filename>):</para>
|
||||
<programlisting language="ini">sql_connection = mysql://glancedbadmin:<password>@192.168.42.101/glance</programlisting>
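<para>The same pattern applies to any other service that uses the database. For example, a <filename>nova.conf</filename> entry might look like the following sketch (the database name and user shown here are illustrative):</para>
<programlisting language="ini">sql_connection = mysql://novadbadmin:<password>@192.168.42.101/nova</programlisting>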
|
||||
<para>No other changes are necessary to your OpenStack configuration. If the
|
||||
node currently hosting your database experiences a problem
|
||||
necessitating service failover, your OpenStack services may experience
|
||||
a brief MySQL interruption, as they would in the event of a network
|
||||
hiccup, and then continue to run normally.</para>
|
||||
</section>
|
||||
</section>
|
@ -1,243 +0,0 @@
|
||||
<?xml version="1.0" encoding="UTF-8"?>
|
||||
<!DOCTYPE section [
|
||||
<!ENTITY % openstack SYSTEM "../../common/entities/openstack.ent">
|
||||
%openstack;
|
||||
]>
|
||||
<section xmlns="http://docbook.org/ns/docbook"
|
||||
xmlns:xi="http://www.w3.org/2001/XInclude"
|
||||
xmlns:xlink="http://www.w3.org/1999/xlink"
|
||||
version="5.0"
|
||||
xml:id="s-rabbitmq">
|
||||
|
||||
<title>Highly available RabbitMQ</title>
|
||||
|
||||
<para>RabbitMQ is the default AMQP server used by many OpenStack
|
||||
services. Making the RabbitMQ service highly available involves:</para>
|
||||
<itemizedlist>
|
||||
<listitem>
|
||||
<para>
|
||||
configuring a DRBD device for use by RabbitMQ,
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
configuring RabbitMQ to use a data directory residing on that DRBD
|
||||
device,
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
selecting and assigning a virtual IP address (VIP) that can freely
|
||||
float between cluster nodes,
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
configuring RabbitMQ to listen on that IP address,
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
managing all resources, including the RabbitMQ daemon itself, with
|
||||
the Pacemaker cluster manager.
|
||||
</para>
|
||||
</listitem>
|
||||
</itemizedlist>
|
||||
<note>
|
||||
<para>There is an alternative method of configuring RabbitMQ for high
|
||||
availability. That approach, known as
|
||||
<link xlink:href="http://www.rabbitmq.com/ha.html">active-active mirrored queues</link>,
|
||||
happens to be the one preferred by the RabbitMQ developers—however
|
||||
it has shown less than ideal consistency and reliability in OpenStack
|
||||
clusters. Thus, at the time of writing, the Pacemaker/DRBD based
|
||||
approach remains the recommended one for OpenStack environments,
|
||||
although this may change in the near future as RabbitMQ active-active
|
||||
mirrored queues mature.</para>
|
||||
</note>
|
||||
<section xml:id="_configure_drbd_2">
|
||||
|
||||
<title>Configure DRBD</title>
|
||||
|
||||
<para>The Pacemaker based RabbitMQ server requires a DRBD resource from
|
||||
which it mounts the <literal>/var/lib/rabbitmq</literal> directory. In this example,
|
||||
the DRBD resource is simply named <literal>rabbitmq</literal>:</para>
|
||||
<formalpara>
|
||||
|
||||
<title><literal>rabbitmq</literal> DRBD resource configuration (<filename>/etc/drbd.d/rabbitmq.res</filename>)</title>
|
||||
|
||||
<para>
|
||||
<programlisting>resource rabbitmq {
|
||||
device minor 1;
|
||||
disk "/dev/data/rabbitmq";
|
||||
meta-disk internal;
|
||||
on node1 {
|
||||
address ipv4 10.0.42.100:7701;
|
||||
}
|
||||
on node2 {
|
||||
address ipv4 10.0.42.254:7701;
|
||||
}
|
||||
}</programlisting>
|
||||
</para>
|
||||
</formalpara>
|
||||
<para>This resource uses an underlying local disk (in DRBD terminology, a
|
||||
<emphasis>backing device</emphasis>) named <literal>/dev/data/rabbitmq</literal> on both cluster nodes,
|
||||
<literal>node1</literal> and <literal>node2</literal>. Normally, this would be an LVM Logical Volume
|
||||
specifically set aside for this purpose. The DRBD <literal>meta-disk</literal> is
|
||||
<literal>internal</literal>, meaning DRBD-specific metadata is being stored at the end
|
||||
of the <literal>disk</literal> device itself. The device is configured to communicate
|
||||
between IPv4 addresses 10.0.42.100 and 10.0.42.254, using TCP port
|
||||
7701. Once enabled, it will map to a local DRBD block device with the
|
||||
device minor number 1, that is, <literal>/dev/drbd1</literal>.</para>
|
||||
<para>Enabling a DRBD resource is explained in detail in
|
||||
<link xlink:href="http://www.drbd.org/users-guide-8.3/s-first-time-up.html">the DRBD
|
||||
User’s Guide</link>. In brief, the proper sequence of commands is this:</para>
|
||||
<screen><prompt>#</prompt> <userinput>drbdadm create-md rabbitmq</userinput><co xml:id="CO4-1"/>
|
||||
<prompt>#</prompt> <userinput>drbdadm up rabbitmq</userinput><co xml:id="CO4-2"/>
|
||||
<prompt>#</prompt> <userinput>drbdadm -- --force primary rabbitmq</userinput><co xml:id="CO4-3"/></screen>
|
||||
<calloutlist>
|
||||
<callout arearefs="CO4-1">
|
||||
<para>
|
||||
Initializes DRBD metadata and writes the initial set of metadata
|
||||
to <literal>/dev/data/rabbitmq</literal>. Must be completed on both nodes.
|
||||
</para>
|
||||
</callout>
|
||||
<callout arearefs="CO4-2">
|
||||
<para>
|
||||
Creates the <literal>/dev/drbd1</literal> device node, <emphasis>attaches</emphasis> the DRBD device
|
||||
to its backing store, and <emphasis>connects</emphasis> the DRBD node to its peer. Must
|
||||
be completed on both nodes.
|
||||
</para>
|
||||
</callout>
|
||||
<callout arearefs="CO4-3">
|
||||
<para>
|
||||
Kicks off the initial device synchronization, and puts the device
|
||||
into the <literal>primary</literal> (readable and writable) role. See
|
||||
<link xlink:href="http://www.drbd.org/users-guide-8.3/ch-admin.html#s-roles">Resource
|
||||
roles</link> (from the DRBD User’s Guide) for a more detailed description of
|
||||
the primary and secondary roles in DRBD. Must be completed <emphasis>on one
|
||||
node only,</emphasis> namely the one where you are about to continue with
|
||||
creating your filesystem.
|
||||
</para>
|
||||
</callout>
|
||||
</calloutlist>
|
||||
</section>
|
||||
<section xml:id="_create_a_file_system">
|
||||
|
||||
<title>Create a file system</title>
|
||||
|
||||
<para>Once the DRBD resource is running and in the primary role (and
|
||||
potentially still in the process of running the initial device
|
||||
synchronization), you may proceed with creating the filesystem for
|
||||
RabbitMQ data. XFS is generally the recommended filesystem:</para>
|
||||
<screen><prompt>#</prompt> <userinput>mkfs -t xfs /dev/drbd1</userinput></screen>
|
||||
<para>You may also use the alternate device path for the DRBD device, which
|
||||
may be easier to remember as it includes the self-explanatory resource
|
||||
name:</para>
|
||||
<screen><prompt>#</prompt> <userinput>mkfs -t xfs /dev/drbd/by-res/rabbitmq</userinput></screen>
|
||||
<para>Once completed, you may safely return the device to the secondary
|
||||
role. Any ongoing device synchronization will continue in the
|
||||
background:</para>
|
||||
<screen><prompt>#</prompt> <userinput>drbdadm secondary rabbitmq</userinput></screen>
|
||||
</section>
|
||||
<section xml:id="_prepare_rabbitmq_for_pacemaker_high_availability">
|
||||
|
||||
<title>Prepare RabbitMQ for Pacemaker high availability</title>
|
||||
|
||||
<para>In order for Pacemaker monitoring to function properly, you must
|
||||
ensure that RabbitMQ’s <literal>.erlang.cookie</literal> files are identical on all
|
||||
nodes, regardless of whether DRBD is mounted there or not. The
|
||||
simplest way of doing so is to take an existing <literal>.erlang.cookie</literal> from
|
||||
one of your nodes, copying it to the RabbitMQ data directory on the
|
||||
other node, and also copying it to the DRBD-backed filesystem.</para>
|
||||
<screen><prompt>#</prompt> <userinput>scp -p /var/lib/rabbitmq/.erlang.cookie node2:/var/lib/rabbitmq/</userinput>
|
||||
<prompt>#</prompt> <userinput>mount /dev/drbd/by-res/rabbitmq /mnt</userinput>
|
||||
<prompt>#</prompt> <userinput>cp -a /var/lib/rabbitmq/.erlang.cookie /mnt</userinput>
|
||||
<prompt>#</prompt> <userinput>umount /mnt</userinput></screen>
|
||||
</section>
|
||||
<section xml:id="_add_rabbitmq_resources_to_pacemaker">
|
||||
|
||||
<title>Add RabbitMQ resources to Pacemaker</title>
|
||||
|
||||
<para>You may now proceed with adding the Pacemaker configuration for
|
||||
RabbitMQ resources. Connect to the Pacemaker cluster with <literal>crm
|
||||
configure</literal>, and add the following cluster resources:</para>
|
||||
<programlisting>primitive p_ip_rabbitmq ocf:heartbeat:IPaddr2 \
|
||||
params ip="192.168.42.100" cidr_netmask="24" \
|
||||
op monitor interval="10s"
|
||||
primitive p_drbd_rabbitmq ocf:linbit:drbd \
|
||||
params drbd_resource="rabbitmq" \
|
||||
op start timeout="90s" \
|
||||
op stop timeout="180s" \
|
||||
op promote timeout="180s" \
|
||||
op demote timeout="180s" \
|
||||
op monitor interval="30s" role="Slave" \
|
||||
op monitor interval="29s" role="Master"
|
||||
primitive p_fs_rabbitmq ocf:heartbeat:Filesystem \
|
||||
params device="/dev/drbd/by-res/rabbitmq" \
|
||||
directory="/var/lib/rabbitmq" \
|
||||
fstype="xfs" options="relatime" \
|
||||
op start timeout="60s" \
|
||||
op stop timeout="180s" \
|
||||
op monitor interval="60s" timeout="60s"
|
||||
primitive p_rabbitmq ocf:rabbitmq:rabbitmq-server \
|
||||
params nodename="rabbit@localhost" \
|
||||
mnesia_base="/var/lib/rabbitmq" \
|
||||
op monitor interval="20s" timeout="10s"
|
||||
group g_rabbitmq p_ip_rabbitmq p_fs_rabbitmq p_rabbitmq
|
||||
ms ms_drbd_rabbitmq p_drbd_rabbitmq \
|
||||
meta notify="true" master-max="1" clone-max="2"
|
||||
colocation c_rabbitmq_on_drbd inf: g_rabbitmq ms_drbd_rabbitmq:Master
|
||||
order o_drbd_before_rabbitmq inf: ms_drbd_rabbitmq:promote g_rabbitmq:start</programlisting>
|
||||
<para>This configuration creates</para>
|
||||
<itemizedlist>
|
||||
<listitem>
|
||||
<para><literal>p_ip_rabbitmq</literal>, a virtual IP address for use by RabbitMQ
|
||||
(192.168.42.100),
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para><literal>p_fs_rabbitmq</literal>, a Pacemaker managed filesystem mounted to
|
||||
<literal>/var/lib/rabbitmq</literal> on whatever node currently runs the RabbitMQ
|
||||
service,
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para><literal>ms_drbd_rabbitmq</literal>, the <emphasis>master/slave set</emphasis> managing the <literal>rabbitmq</literal>
|
||||
DRBD resource,
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
a service <literal>group</literal> and <literal>order</literal> and <literal>colocation</literal> constraints to ensure
|
||||
resources are started on the correct nodes, and in the correct sequence.
|
||||
</para>
|
||||
</listitem>
|
||||
</itemizedlist>
|
||||
<para><literal>crm configure</literal> supports batch input, so you may copy and paste the
|
||||
above into your live pacemaker configuration, and then make changes as
|
||||
required. For example, you may enter <literal>edit p_ip_rabbitmq</literal> from the
|
||||
<literal>crm configure</literal> menu and edit the resource to match your preferred
|
||||
virtual IP address.</para>
|
||||
<para>Once completed, commit your configuration changes by entering <literal>commit</literal>
|
||||
from the <literal>crm configure</literal> menu. Pacemaker will then start the RabbitMQ
|
||||
service, and its dependent resources, on one of your nodes.</para>
|
||||
</section>
|
||||
<section xml:id="_configure_openstack_services_for_highly_available_rabbitmq">
|
||||
|
||||
<title>Configure OpenStack services for highly available RabbitMQ</title>
|
||||
|
||||
<para>Your OpenStack services must now point their RabbitMQ configuration to
|
||||
the highly available, virtual cluster IP address—rather than a
|
||||
RabbitMQ server’s physical IP address as you normally would.</para>
|
||||
<para>For OpenStack Image, for example, if your RabbitMQ service IP address is
|
||||
192.168.42.100 as in the configuration explained here, you would use
|
||||
the following line in your OpenStack Image API configuration file
|
||||
(<filename>glance-api.conf</filename>):</para>
|
||||
<programlisting language="ini">rabbit_host = 192.168.42.100</programlisting>
|
||||
<para>No other changes are necessary to your OpenStack configuration. If the
|
||||
node currently hosting your RabbitMQ experiences a problem
|
||||
necessitating service failover, your OpenStack services may experience
|
||||
a brief RabbitMQ interruption, as they would in the event of a network
|
||||
hiccup, and then continue to run normally.</para>
|
||||
</section>
|
||||
</section>
|
@ -1 +0,0 @@
|
||||
Place any figures and illustrations in this directory.
|
@ -1,18 +0,0 @@
|
||||
<section xmlns="http://docbook.org/ns/docbook"
|
||||
xmlns:xi="http://www.w3.org/2001/XInclude"
|
||||
xmlns:xlink="http://www.w3.org/1999/xlink"
|
||||
version="5.0"
|
||||
xml:id="_memcached">
|
||||
|
||||
<title>Memcached</title>
|
||||
|
||||
<para>Most OpenStack services use an application to cache and store ephemeral data (such as tokens).
Memcached is one such application; it scales out easily and requires no special configuration.</para>
|
||||
<para>To install and configure it, read the <link xlink:href="http://code.google.com/p/memcached/wiki/NewStart">official documentation</link>.</para>
|
||||
<para>Memory caching is managed by oslo-incubator, so the way to use multiple memcached servers is the same for all projects.</para>
|
||||
<para>Example with two hosts:</para>
|
||||
<programlisting>memcached_servers = controller1:11211,controller2:11211</programlisting>
|
||||
<para>By default, controller1 handles the caching service. If that host goes down, controller2 takes over.
For more information about installing memcached, see the OpenStack
Cloud Administrator Guide.</para>
|
||||
</section>
|
@ -1,81 +0,0 @@
|
||||
<section xmlns="http://docbook.org/ns/docbook"
|
||||
xmlns:xi="http://www.w3.org/2001/XInclude"
|
||||
xmlns:xlink="http://www.w3.org/1999/xlink"
|
||||
version="5.0"
|
||||
xml:id="_run_openstack_api_and_schedulers">
|
||||
|
||||
<title>Run OpenStack API and schedulers</title>
|
||||
|
||||
<section xml:id="_api_services">
|
||||
|
||||
<title>API services</title>
|
||||
|
||||
<para>All OpenStack projects have an API service for controlling all the resources in the cloud.
In active/active mode, the most common setup is to scale these services out over at least two nodes
and put them behind load balancing and a virtual IP (provided by HAProxy and Keepalived in this setup), as sketched below.</para>
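<para>A minimal HAProxy stanza for one API service might look like the following sketch. It assumes the API virtual IP 192.168.42.103 used elsewhere in this guide and two controller nodes at the illustrative addresses 192.168.42.1 and 192.168.42.2; adapt names, addresses, and health checks to your environment:</para>
<programlisting>listen keystone_public
  bind 192.168.42.103:5000
  balance roundrobin
  option tcpka
  option tcplog
  server controller1 192.168.42.1:5000 check inter 2000 rise 2 fall 5
  server controller2 192.168.42.2:5000 check inter 2000 rise 2 fall 5</programlisting>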
|
||||
<para>
|
||||
<emphasis role="strong">Configure API OpenStack services</emphasis>
|
||||
</para>
|
||||
<para>To configure our cloud with highly available and scalable API services, we need to ensure that:</para>
|
||||
<itemizedlist>
|
||||
<listitem>
|
||||
<para>
|
||||
You use virtual IPs when configuring OpenStack Identity endpoints.
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
All OpenStack configuration files should refer to virtual IPs.
|
||||
</para>
|
||||
</listitem>
|
||||
</itemizedlist>
|
||||
<para>
|
||||
<emphasis role="strong">In case of failure</emphasis>
|
||||
</para>
|
||||
<para>The monitor check is quite simple: it only establishes a TCP connection to the API port. Unlike the
active/passive mode that uses Corosync and resource agents, this check does not verify whether the service is actually running.
That is why all OpenStack APIs should also be monitored by another tool, for example Nagios, to detect
failures in the cloud framework infrastructure.</para>
|
||||
</section>
|
||||
<section xml:id="_schedulers">
|
||||
|
||||
<title>Schedulers</title>
|
||||
|
||||
<para>OpenStack schedulers determine how to dispatch compute, network, and volume requests. The most
common setup is to use RabbitMQ as the messaging system, as already documented in this guide.
The following services connect to the messaging backend and can scale out:</para>
|
||||
<itemizedlist>
|
||||
<listitem>
|
||||
<para>
|
||||
nova-scheduler
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
nova-conductor
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
cinder-scheduler
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
neutron-server
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
ceilometer-collector
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
heat-engine
|
||||
</para>
|
||||
</listitem>
|
||||
</itemizedlist>
|
||||
<para>Please refer to the RabbitMQ section for configuring these services with multiple messaging servers.</para>
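<para>Scaling these services out usually means nothing more than running an additional instance of each on another controller node, for example (assuming Ubuntu-style service names):</para>
<screen><prompt>#</prompt> <userinput>service nova-scheduler start</userinput>
<prompt>#</prompt> <userinput>service cinder-scheduler start</userinput></screen>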
|
||||
</section>
|
||||
</section>
|
@ -1,10 +0,0 @@
|
||||
<section xmlns="http://docbook.org/ns/docbook"
|
||||
xmlns:xi="http://www.w3.org/2001/XInclude"
|
||||
xmlns:xlink="http://www.w3.org/1999/xlink"
|
||||
version="5.0"
|
||||
xml:id="ha-aa-db-galera-monitoring">
|
||||
|
||||
<title>Galera monitoring scripts</title>
|
||||
|
||||
<para>(Coming soon)</para>
|
||||
</section>
|
@ -1,179 +0,0 @@
|
||||
<section xmlns="http://docbook.org/ns/docbook"
|
||||
xmlns:xi="http://www.w3.org/2001/XInclude"
|
||||
xmlns:xlink="http://www.w3.org/1999/xlink"
|
||||
version="5.0"
|
||||
xml:id="ha-aa-db-mysql-galera">
|
||||
|
||||
<title>MySQL with Galera</title>
|
||||
|
||||
<para>Rather than starting with a vanilla version of MySQL, and then adding
|
||||
Galera, you will want to install a version of MySQL patched for wsrep
|
||||
(Write Set REPlication) from <link xlink:href="https://launchpad.net/codership-mysql/0.7">https://launchpad.net/codership-mysql/0.7</link>.
|
||||
The wsrep API is suitable for configuring MySQL High Availability in
|
||||
OpenStack because it supports synchronous replication.</para>
|
||||
<para>Note that the installation requirements call for careful attention. Read
|
||||
the guide <link xlink:href="https://launchpadlibrarian.net/66669857/README-wsrep">https://launchpadlibrarian.net/66669857/README-wsrep</link>
|
||||
to ensure you follow all the required steps.</para>
|
||||
<orderedlist numeration="arabic" inheritnum="ignore" continuation="restarts">
|
||||
|
||||
<title>Installing Galera through a MySQL version patched for wsrep:</title>
|
||||
|
||||
<listitem>
|
||||
<para>
|
||||
Download Galera from <link xlink:href="https://launchpad.net/galera/+download">https://launchpad.net/galera/+download</link>, and
|
||||
install the *.rpms or *.debs, which takes care of any dependencies
|
||||
that your system doesn’t already have installed.
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
Adjust the configuration:
|
||||
</para>
|
||||
<para>In the system-wide <filename>my.cnf</filename> file, make sure mysqld is not bound to
127.0.0.1, and that <filename>/etc/mysql/conf.d/</filename> is included.
Typically you can find this file at <filename>/etc/my.cnf</filename>:</para>
|
||||
<programlisting>[mysqld]
|
||||
...
|
||||
!includedir /etc/mysql/conf.d/
|
||||
...
|
||||
#bind-address = 127.0.0.1</programlisting>
|
||||
<para>When adding a new node, you must configure it with a MySQL account that
|
||||
can access the other nodes. The new node must be able to request a state
|
||||
snapshot from one of the existing nodes:</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
Specify your MySQL account information in <filename>/etc/mysql/conf.d/wsrep.cnf</filename>:
|
||||
</para>
|
||||
<programlisting>wsrep_sst_auth=wsrep_sst:wspass</programlisting>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
Connect as root and grant privileges to that user:
|
||||
</para>
|
||||
<screen><prompt>$</prompt> <userinput>mysql -e "SET wsrep_on=OFF; GRANT ALL ON *.* TO wsrep_sst@'%' IDENTIFIED BY 'wspass';"</userinput></screen>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
Remove user accounts with empty user names because they cause problems:
|
||||
</para>
|
||||
<screen><prompt>$</prompt> <userinput>mysql -e "SET wsrep_on=OFF; DELETE FROM mysql.user WHERE user='';"</userinput></screen>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
Set up certain mandatory configuration options within MySQL itself.
|
||||
These include:
|
||||
</para>
|
||||
<programlisting>query_cache_size=0
|
||||
binlog_format=ROW
|
||||
default_storage_engine=innodb
|
||||
innodb_autoinc_lock_mode=2
|
||||
innodb_doublewrite=1</programlisting>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
Check that the nodes can access each other through the firewall.
|
||||
Depending on your environment, this might mean adjusting iptables, as in:
|
||||
</para>
|
||||
<screen><prompt>#</prompt> <userinput>iptables --insert RH-Firewall-1-INPUT 1 --proto tcp \
|
||||
--source <my IP>/24 --destination <my IP>/32 --dport 3306 \
|
||||
-j ACCEPT</userinput>
|
||||
<prompt>#</prompt> <userinput>iptables --insert RH-Firewall-1-INPUT 1 --proto tcp \
|
||||
--source <my IP>/24 --destination <my IP>/32 --dport 4567 \
|
||||
-j ACCEPT</userinput></screen>
|
||||
<para>This might also mean configuring any NAT firewall between nodes to allow
|
||||
direct connections. You might need to disable SELinux, or configure it to
|
||||
allow mysqld to listen to sockets at unprivileged ports.</para>
|
||||
</listitem>
|
||||
</orderedlist>
|
||||
<para>Now you’re ready to create the cluster.</para>
|
||||
<section xml:id="_create_the_cluster">
|
||||
|
||||
<title>Create the cluster</title>
|
||||
|
||||
<para>In creating a cluster, you first start a single instance, which creates
|
||||
the cluster. The rest of the MySQL instances then connect to
|
||||
that cluster:</para>
|
||||
<orderedlist numeration="arabic" inheritnum="ignore" continuation="restarts">
|
||||
|
||||
<title>An example of creating the cluster:</title>
|
||||
|
||||
<listitem>
|
||||
<para>
|
||||
Start on the first node having IP address <literal>10.0.0.10</literal> by executing the command:
|
||||
</para>
|
||||
</listitem>
|
||||
</orderedlist>
|
||||
<screen><prompt>#</prompt> <userinput>service mysql start wsrep_cluster_address=gcomm://</userinput></screen>
|
||||
<orderedlist numeration="arabic" inheritnum="ignore" continuation="restarts">
|
||||
<listitem>
|
||||
<para>
|
||||
Connect to that cluster on the rest of the nodes by referencing the
|
||||
address of that node, as in:
|
||||
</para>
|
||||
</listitem>
|
||||
</orderedlist>
|
||||
<screen><prompt>#</prompt> <userinput>service mysql start wsrep_cluster_address=gcomm://10.0.0.10</userinput></screen>
|
||||
<para>You also have the option to set the <literal>wsrep_cluster_address</literal> in the
|
||||
<filename>/etc/mysql/conf.d/wsrep.cnf</filename> file, or within the client itself. (In
|
||||
fact, for some systems, such as MariaDB or Percona, this may be your
|
||||
only option.)</para>
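<para>A minimal <filename>/etc/mysql/conf.d/wsrep.cnf</filename> might therefore look like the following sketch. The provider library path varies by distribution, and the cluster name and SST method shown here are illustrative; the SST credentials match the account created earlier:</para>
<programlisting>[mysqld]
wsrep_provider=/usr/lib/galera/libgalera_smm.so
wsrep_cluster_name="openstack"
wsrep_cluster_address="gcomm://10.0.0.10"
wsrep_sst_method=mysqldump
wsrep_sst_auth=wsrep_sst:wspass</programlisting>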
|
||||
<orderedlist numeration="arabic" inheritnum="ignore" continuation="restarts">
|
||||
|
||||
<title>An example of checking the status of the cluster:</title>
|
||||
|
||||
<listitem>
|
||||
<para>
|
||||
Open the MySQL client and check the status of the various parameters:
|
||||
</para>
|
||||
</listitem>
|
||||
</orderedlist>
|
||||
<screen><prompt>mysql></prompt> <userinput>SET GLOBAL wsrep_cluster_address='<cluster address string>';</userinput>
|
||||
<prompt>mysql></prompt> <userinput>SHOW STATUS LIKE 'wsrep%';</userinput></screen>
|
||||
<para>You should see a status that looks something like this:</para>
|
||||
<screen><prompt>mysql></prompt> <userinput>show status like 'wsrep%';</userinput>
|
||||
<computeroutput>+----------------------------+--------------------------------------+
|
||||
| Variable_name | Value |
|
||||
+----------------------------+--------------------------------------+
|
||||
| wsrep_local_state_uuid | 111fc28b-1b05-11e1-0800-e00ec5a7c930 |
|
||||
| wsrep_protocol_version | 1 |
|
||||
| wsrep_last_committed | 0 |
|
||||
| wsrep_replicated | 0 |
|
||||
| wsrep_replicated_bytes | 0 |
|
||||
| wsrep_received | 2 |
|
||||
| wsrep_received_bytes | 134 |
|
||||
| wsrep_local_commits | 0 |
|
||||
| wsrep_local_cert_failures | 0 |
|
||||
| wsrep_local_bf_aborts | 0 |
|
||||
| wsrep_local_replays | 0 |
|
||||
| wsrep_local_send_queue | 0 |
|
||||
| wsrep_local_send_queue_avg | 0.000000 |
|
||||
| wsrep_local_recv_queue | 0 |
|
||||
| wsrep_local_recv_queue_avg | 0.000000 |
|
||||
| wsrep_flow_control_paused | 0.000000 |
|
||||
| wsrep_flow_control_sent | 0 |
|
||||
| wsrep_flow_control_recv | 0 |
|
||||
| wsrep_cert_deps_distance | 0.000000 |
|
||||
| wsrep_apply_oooe | 0.000000 |
|
||||
| wsrep_apply_oool | 0.000000 |
|
||||
| wsrep_apply_window | 0.000000 |
|
||||
| wsrep_commit_oooe | 0.000000 |
|
||||
| wsrep_commit_oool | 0.000000 |
|
||||
| wsrep_commit_window | 0.000000 |
|
||||
| wsrep_local_state | 4 |
|
||||
| wsrep_local_state_comment | Synced (6) |
|
||||
| wsrep_cert_index_size | 0 |
|
||||
| wsrep_cluster_conf_id | 1 |
|
||||
| wsrep_cluster_size | 1 |
|
||||
| wsrep_cluster_state_uuid | 111fc28b-1b05-11e1-0800-e00ec5a7c930 |
|
||||
| wsrep_cluster_status | Primary |
|
||||
| wsrep_connected | ON |
|
||||
| wsrep_local_index | 0 |
|
||||
| wsrep_provider_name | Galera |
|
||||
| wsrep_provider_vendor | Codership Oy |
|
||||
| wsrep_provider_version | 21.1.0(r86) |
|
||||
| wsrep_ready | ON |
|
||||
+----------------------------+--------------------------------------+
|
||||
38 rows in set (0.01 sec)</computeroutput></screen>
|
||||
</section>
|
||||
</section>
|
@ -1,13 +0,0 @@
|
||||
<section xmlns="http://docbook.org/ns/docbook"
|
||||
xmlns:xi="http://www.w3.org/2001/XInclude"
|
||||
xmlns:xlink="http://www.w3.org/1999/xlink"
|
||||
version="5.0"
|
||||
xml:id="_other_ways_to_provide_a_highly_available_database">
|
||||
|
||||
<title>Other ways to provide a highly available database</title>
|
||||
|
||||
<para>MySQL with Galera is by no means the only way to achieve database HA.
|
||||
MariaDB (<link xlink:href="https://mariadb.org/">https://mariadb.org/</link>) and Percona (<link xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="http://www.percona.com/">http://www.percona.com/</link>)
|
||||
also work with Galera. You also have the option to use PostgreSQL, which
|
||||
has its own replication, or another database HA option.</para>
|
||||
</section>
|
@ -1,12 +0,0 @@
|
||||
<section xmlns="http://docbook.org/ns/docbook"
|
||||
xmlns:xi="http://www.w3.org/2001/XInclude"
|
||||
xmlns:xlink="http://www.w3.org/1999/xlink"
|
||||
version="5.0"
|
||||
xml:id="_run_neutron_dhcp_agent">
|
||||
|
||||
<title>Run neutron DHCP agent</title>
|
||||
|
||||
<para>The OpenStack Networking service has a scheduler that
lets you run multiple agents across nodes, and the DHCP agent can be made natively
highly available; a minimal configuration sketch follows. For details, see the <link xlink:href="http://docs.openstack.org/trunk/config-reference/content/app_demo_multi_dhcp_agents.html">OpenStack Configuration Reference</link>.</para>
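<para>For example, assuming a release that supports the <literal>dhcp_agents_per_network</literal> option, you can ask the scheduler to assign more than one DHCP agent to each network in <filename>neutron.conf</filename>:</para>
<programlisting language="ini">dhcp_agents_per_network = 2</programlisting>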
|
||||
</section>
|
@ -1,15 +0,0 @@
|
||||
<section xmlns="http://docbook.org/ns/docbook"
|
||||
xmlns:xi="http://www.w3.org/2001/XInclude"
|
||||
xmlns:xlink="http://www.w3.org/1999/xlink"
|
||||
version="5.0"
|
||||
xml:id="_run_neutron_l3_agent">
|
||||
|
||||
<title>Run neutron L3 agent</title>
|
||||
|
||||
<para>The neutron L3 agent is scalable, thanks to the scheduler
that allows distribution of virtual routers across multiple nodes.
However, there is no native feature to make these routers highly available.
At this time, the only option is to run the neutron L3
agent in failover (active/passive) mode with Pacemaker. See the active/passive
section of this guide.</para>
|
||||
</section>
|
@ -1,13 +0,0 @@
|
||||
<section xmlns="http://docbook.org/ns/docbook"
|
||||
xmlns:xi="http://www.w3.org/2001/XInclude"
|
||||
xmlns:xlink="http://www.w3.org/1999/xlink"
|
||||
version="5.0"
|
||||
xml:id="_run_neutron_metadata_agent">
|
||||
|
||||
<title>Run neutron metadata agent</title>
|
||||
|
||||
<para>There is no native feature to make this service highly available.
At this time, the only option is to run the neutron
metadata agent in failover (active/passive) mode with Pacemaker. See the
active/passive section of this guide.</para>
|
||||
</section>
|
@ -1,28 +0,0 @@
|
||||
<section xmlns="http://docbook.org/ns/docbook"
|
||||
xmlns:xi="http://www.w3.org/2001/XInclude"
|
||||
xmlns:xlink="http://www.w3.org/1999/xlink"
|
||||
version="5.0"
|
||||
xml:id="_configure_openstack_services_to_use_rabbitmq">
|
||||
|
||||
<title>Configure OpenStack services to use RabbitMQ</title>
|
||||
|
||||
<para>We have to configure the OpenStack components to use at least two RabbitMQ nodes.</para>
|
||||
<para>Do this configuration on all services using RabbitMQ:</para>
|
||||
<para>RabbitMQ HA cluster host:port pairs:</para>
|
||||
<programlisting>rabbit_hosts=rabbit1:5672,rabbit2:5672</programlisting>
|
||||
<para>How frequently to retry connecting with RabbitMQ:</para>
|
||||
<programlisting>rabbit_retry_interval=1</programlisting>
|
||||
<para>How long to back off between retries when connecting to RabbitMQ:</para>
|
||||
<programlisting>rabbit_retry_backoff=2</programlisting>
|
||||
<para>Maximum number of retries when connecting to RabbitMQ (0, the default, means retry forever):</para>
|
||||
<programlisting>rabbit_max_retries=0</programlisting>
|
||||
<para>Use durable queues in RabbitMQ:</para>
|
||||
<programlisting>rabbit_durable_queues=false</programlisting>
|
||||
<para>Use HA queues in RabbitMQ (x-ha-policy: all):</para>
|
||||
<programlisting>rabbit_ha_queues=true</programlisting>
|
||||
<para>If you change the configuration from an old setup that did not use HA queues, you must stop, reset, and restart RabbitMQ:</para>
|
||||
<screen><prompt>#</prompt> <userinput>rabbitmqctl stop_app</userinput>
|
||||
<prompt>#</prompt> <userinput>rabbitmqctl reset</userinput>
|
||||
<prompt>#</prompt> <userinput>rabbitmqctl start_app</userinput></screen>
|
||||
<para>Services currently working with HA queues: OpenStack Compute, OpenStack Block Storage, OpenStack Networking, Telemetry.</para>
|
||||
</section>
|
@ -1,42 +0,0 @@
|
||||
<section xmlns="http://docbook.org/ns/docbook"
|
||||
xmlns:xi="http://www.w3.org/2001/XInclude"
|
||||
xmlns:xlink="http://www.w3.org/1999/xlink"
|
||||
version="5.0"
|
||||
xml:id="_configure_rabbitmq">
|
||||
|
||||
<title>Configure RabbitMQ</title>
|
||||
|
||||
<para>Here we build a cluster of RabbitMQ nodes to construct a RabbitMQ broker.
Mirrored queues in RabbitMQ improve service availability, since they are resilient to node failures.
Keep in mind that while exchanges and bindings survive the loss of individual nodes, queues
and their messages do not, because a queue and its contents live on a single node; if that node is lost,
so is the queue.</para>
|
||||
<para>We assume that you run (at least) two RabbitMQ servers. To build a broker, all nodes
must have the same Erlang cookie file. To achieve this, stop RabbitMQ everywhere and copy the cookie from the rabbit1 server
to the other server(s):</para>
|
||||
<screen><prompt>#</prompt> <userinput>scp /var/lib/rabbitmq/.erlang.cookie \
|
||||
root@rabbit2:/var/lib/rabbitmq/.erlang.cookie</userinput></screen>
|
||||
<para>Then, start RabbitMQ on all nodes.
If RabbitMQ fails to start on any node, do not continue to the next step.</para>
|
||||
<para>Now, we are building the HA cluster. From rabbit2, run these commands:</para>
|
||||
<screen><prompt>#</prompt> <userinput>rabbitmqctl stop_app</userinput>
|
||||
<prompt>#</prompt> <userinput>rabbitmqctl join_cluster rabbit@rabbit1</userinput>
|
||||
<prompt>#</prompt> <userinput>rabbitmqctl start_app</userinput></screen>
|
||||
<para>To verify the cluster status:</para>
|
||||
<screen><prompt>#</prompt> <userinput>rabbitmqctl cluster_status</userinput>
|
||||
<computeroutput>
|
||||
Cluster status of node rabbit@rabbit2 ...
|
||||
[{nodes,[{disc,[rabbit@rabbit1]},{ram,[rabbit@rabbit2]}]},{running_nodes,[rabbit@rabbit2,rabbit@rabbit1]}]
|
||||
</computeroutput></screen>
|
||||
<para>If the cluster is working, you can now proceed to creating users and passwords for queues.</para>
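<para>For example, a dedicated user for the OpenStack services could be created as follows (the user name and password here are illustrative):</para>
<screen><prompt>#</prompt> <userinput>rabbitmqctl add_user openstack RABBIT_PASS</userinput>
<prompt>#</prompt> <userinput>rabbitmqctl set_permissions openstack ".*" ".*" ".*"</userinput></screen>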
|
||||
<para>
|
||||
<emphasis role="strong">Note for RabbitMQ version 3</emphasis>
|
||||
</para>
|
||||
<para>Queue mirroring is no longer controlled by the <emphasis>x-ha-policy</emphasis> argument when declaring a queue. OpenStack can
|
||||
continue to declare this argument, but it won’t cause queues to be mirrored.
|
||||
We need to make sure that all queues (except those with auto-generated names) are mirrored across all running nodes:</para>
|
||||
<screen><prompt>#</prompt> <userinput>rabbitmqctl set_policy HA '^(?!amq\.).*' '{"ha-mode": "all"}'</userinput></screen>
|
||||
<para>
|
||||
<link xlink:href="http://www.rabbitmq.com/ha.html">More information about High availability in RabbitMQ</link>
|
||||
</para>
|
||||
</section>
|
@ -1,30 +0,0 @@
|
||||
<section xmlns="http://docbook.org/ns/docbook"
|
||||
xmlns:xi="http://www.w3.org/2001/XInclude"
|
||||
xmlns:xlink="http://www.w3.org/1999/xlink"
|
||||
version="5.0"
|
||||
xml:id="_install_rabbitmq">
|
||||
|
||||
<title>Install RabbitMQ</title>
|
||||
|
||||
<para>This setup has been tested with RabbitMQ 2.7.1.</para>
|
||||
<section xml:id="_on_ubuntu_debian">
|
||||
|
||||
<title>On Ubuntu / Debian</title>
|
||||
|
||||
<para>RabbitMQ is packaged on both distros:</para>
|
||||
<screen><prompt>#</prompt> <userinput>apt-get install rabbitmq-server</userinput></screen>
|
||||
<para>
|
||||
<link xlink:href="http://www.rabbitmq.com/install-debian.html">Official manual for installing RabbitMQ on Ubuntu / Debian</link>
|
||||
</para>
|
||||
</section>
|
||||
<section xml:id="_on_fedora_rhel">
|
||||
|
||||
<title>On Fedora / RHEL</title>
|
||||
|
||||
<para>RabbitMQ is packaged on both distros:</para>
|
||||
<screen><prompt>#</prompt> <userinput>yum install erlang rabbitmq-server</userinput></screen>
|
||||
<para>
|
||||
<link xlink:href="http://www.rabbitmq.com/install-rpm.html">Official manual for installing RabbitMQ on Fedora / RHEL</link>
|
||||
</para>
|
||||
</section>
|
||||
</section>
|
@ -1,67 +0,0 @@
|
||||
totem {
|
||||
version: 2
|
||||
|
||||
# Time (in ms) to wait for a token <1>
|
||||
token: 10000
|
||||
|
||||
# How many token retransmits before forming a new
|
||||
# configuration
|
||||
token_retransmits_before_loss_const: 10
|
||||
|
||||
# Turn off the virtual synchrony filter
|
||||
vsftype: none
|
||||
|
||||
# Enable encryption <2>
|
||||
secauth: on
|
||||
|
||||
# How many threads to use for encryption/decryption
|
||||
threads: 0
|
||||
|
||||
# This specifies the redundant ring protocol, which may be
|
||||
# none, active, or passive. <3>
|
||||
rrp_mode: active
|
||||
|
||||
# The following is a two-ring multicast configuration. <4>
|
||||
interface {
|
||||
ringnumber: 0
|
||||
bindnetaddr: 192.168.42.0
|
||||
mcastaddr: 239.255.42.1
|
||||
mcastport: 5405
|
||||
}
|
||||
interface {
|
||||
ringnumber: 1
|
||||
bindnetaddr: 10.0.42.0
|
||||
mcastaddr: 239.255.42.2
|
||||
mcastport: 5405
|
||||
}
|
||||
}
|
||||
|
||||
amf {
|
||||
mode: disabled
|
||||
}
|
||||
|
||||
service {
|
||||
# Load the Pacemaker Cluster Resource Manager <5>
|
||||
ver: 1
|
||||
name: pacemaker
|
||||
}
|
||||
|
||||
aisexec {
|
||||
user: root
|
||||
group: root
|
||||
}
|
||||
|
||||
logging {
|
||||
fileline: off
|
||||
to_stderr: yes
|
||||
to_logfile: no
|
||||
to_syslog: yes
|
||||
syslog_facility: daemon
|
||||
debug: off
|
||||
timestamp: on
|
||||
logger_subsys {
|
||||
subsys: AMF
|
||||
debug: off
|
||||
tags: enter|leave|trace1|trace2|trace3|trace4|trace6
|
||||
}
|
||||
}
|
@ -1,11 +0,0 @@
|
||||
resource mysql {
|
||||
device minor 0;
|
||||
disk "/dev/data/mysql";
|
||||
meta-disk internal;
|
||||
on node1 {
|
||||
address ipv4 10.0.42.100:7700;
|
||||
}
|
||||
on node2 {
|
||||
address ipv4 10.0.42.254:7700;
|
||||
}
|
||||
}
|
@ -1,2 +0,0 @@
|
||||
group g_services_api p_api-ip p_keystone p_glance-api p_cinder-api \
|
||||
p_neutron-server p_glance-registry p_ceilometer-agent-central
|
@ -1,3 +0,0 @@
|
||||
primitive p_api-ip ocf:heartbeat:IPaddr2 \
|
||||
params ip="192.168.42.103" cidr_netmask="24" \
|
||||
op monitor interval="30s"
|
@ -1,3 +0,0 @@
|
||||
primitive p_ceilometer-agent-central ocf:openstack:ceilometer-agent-central \
|
||||
params config="/etc/ceilometer/ceilometer.conf" \
|
||||
op monitor interval="30s" timeout="30s"
|
@ -1,4 +0,0 @@
|
||||
primitive p_cinder-api ocf:openstack:cinder-api \
|
||||
params config="/etc/cinder/cinder.conf" os_password="secrete" os_username="admin" \
|
||||
os_tenant_name="admin" keystone_get_token_url="http://192.168.42.103:5000/v2.0/tokens" \
|
||||
op monitor interval="30s" timeout="30s"
|
@ -1,3 +0,0 @@
|
||||
primitive p_glance-api ocf:openstack:glance-api \
|
||||
params config="/etc/glance/glance-api.conf" os_password="secrete" os_username="admin" os_tenant_name="admin" os_auth_url="http://192.168.42.103:5000/v2.0/" \
|
||||
op monitor interval="30s" timeout="30s"
|
@ -1,3 +0,0 @@
|
||||
primitive p_keystone ocf:openstack:keystone \
|
||||
params config="/etc/keystone/keystone.conf" os_password="secret" os_username="admin" os_tenant_name="admin" os_auth_url="http://192.168.42.103:5000/v2.0/" \
|
||||
op monitor interval="30s" timeout="30s"
|
@ -1,33 +0,0 @@
|
||||
primitive p_ip_mysql ocf:heartbeat:IPaddr2 \
|
||||
params ip="192.168.42.101" cidr_netmask="24" \
|
||||
op monitor interval="30s"
|
||||
primitive p_drbd_mysql ocf:linbit:drbd \
|
||||
params drbd_resource="mysql" \
|
||||
op start timeout="90s" \
|
||||
op stop timeout="180s" \
|
||||
op promote timeout="180s" \
|
||||
op demote timeout="180s" \
|
||||
op monitor interval="30s" role="Slave" \
|
||||
op monitor interval="29s" role="Master"
|
||||
primitive p_fs_mysql ocf:heartbeat:Filesystem \
|
||||
params device="/dev/drbd/by-res/mysql" \
|
||||
directory="/var/lib/mysql" \
|
||||
fstype="xfs" \
|
||||
options="relatime" \
|
||||
op start timeout="60s" \
|
||||
op stop timeout="180s" \
|
||||
op monitor interval="60s" timeout="60s"
|
||||
primitive p_mysql ocf:heartbeat:mysql \
|
||||
params additional_parameters="--bind-address=192.168.42.101" \
|
||||
config="/etc/mysql/my.cnf" \
|
||||
pid="/var/run/mysqld/mysqld.pid" \
|
||||
socket="/var/run/mysqld/mysqld.sock" \
|
||||
log="/var/log/mysql/mysqld.log" \
|
||||
op monitor interval="20s" timeout="10s" \
|
||||
op start timeout="120s" \
|
||||
op stop timeout="120s"
|
||||
group g_mysql p_ip_mysql p_fs_mysql p_mysql
|
||||
ms ms_drbd_mysql p_drbd_mysql \
|
||||
meta notify="true" clone-max="2"
|
||||
colocation c_mysql_on_drbd inf: g_mysql ms_drbd_mysql:Master
|
||||
order o_drbd_before_mysql inf: ms_drbd_mysql:promote g_mysql:start
|
@ -1,4 +0,0 @@
|
||||
primitive p_neutron-dhcp-agent ocf:openstack:neutron-agent-dhcp \
|
||||
params config="/etc/neutron/neutron.conf" \
|
||||
plugin_config="/etc/neutron/dhcp_agent.ini" \
|
||||
op monitor interval="30s" timeout="30s"
|
@ -1,4 +0,0 @@
|
||||
primitive p_neutron-l3-agent ocf:openstack:neutron-agent-l3 \
|
||||
params config="/etc/neutron/neutron.conf" \
|
||||
plugin_config="/etc/neutron/l3_agent.ini" \
|
||||
op monitor interval="30s" timeout="30s"
|
@ -1,4 +0,0 @@
|
||||
primitive p_neutron-metadata-agent ocf:openstack:neutron-metadata-agent \
|
||||
params config="/etc/neutron/neutron.conf" \
|
||||
plugin_config="/etc/neutron/metadata_agent.ini" \
|
||||
op monitor interval="30s" timeout="30s"
|
@ -1,2 +0,0 @@
|
||||
group g_services_network p_neutron-l3-agent p_neutron-dhcp-agent \
|
||||
p_neutron-metadata-agent
|
@ -1,4 +0,0 @@
|
||||
primitive p_neutron-server ocf:openstack:neutron-server \
|
||||
params os_password="secrete" os_username="admin" os_tenant_name="admin" \
|
||||
keystone_get_token_url="http://192.168.42.103:5000/v2.0/tokens" \
|
||||
op monitor interval="30s" timeout="30s"
|
@ -1,5 +0,0 @@
|
||||
property no-quorum-policy="ignore" \ # <1>
|
||||
pe-warn-series-max="1000" \ # <2>
|
||||
pe-input-series-max="1000" \
|
||||
pe-error-series-max="1000" \
|
||||
cluster-recheck-interval="5min" # <3>
|
@ -1,27 +0,0 @@
|
||||
primitive p_ip_rabbitmq ocf:heartbeat:IPaddr2 \
|
||||
params ip="192.168.42.100" cidr_netmask="24" \
|
||||
op monitor interval="10s"
|
||||
primitive p_drbd_rabbitmq ocf:linbit:drbd \
|
||||
params drbd_resource="rabbitmq" \
|
||||
op start timeout="90s" \
|
||||
op stop timeout="180s" \
|
||||
op promote timeout="180s" \
|
||||
op demote timeout="180s" \
|
||||
op monitor interval="30s" role="Slave" \
|
||||
op monitor interval="29s" role="Master"
|
||||
primitive p_fs_rabbitmq ocf:heartbeat:Filesystem \
|
||||
params device="/dev/drbd/by-res/rabbitmq" \
|
||||
directory="/var/lib/rabbitmq" \
|
||||
fstype="xfs" options="relatime" \
|
||||
op start timeout="60s" \
|
||||
op stop timeout="180s" \
|
||||
op monitor interval="60s" timeout="60s"
|
||||
primitive p_rabbitmq ocf:rabbitmq:rabbitmq-server \
|
||||
params nodename="rabbit@localhost" \
|
||||
mnesia_base="/var/lib/rabbitmq" \
|
||||
op monitor interval="20s" timeout="10s"
|
||||
group g_rabbitmq p_ip_rabbitmq p_fs_rabbitmq p_rabbitmq
|
||||
ms ms_drbd_rabbitmq p_drbd_rabbitmq \
|
||||
meta notify="true" master-max="1" clone-max="2"
|
||||
colocation c_rabbitmq_on_drbd inf: g_rabbitmq ms_drbd_rabbitmq:Master
|
||||
order o_drbd_before_rabbitmq inf: ms_drbd_rabbitmq:promote g_rabbitmq:start
|
@ -1,11 +0,0 @@
|
||||
resource rabbitmq {
|
||||
device minor 1;
|
||||
disk "/dev/data/rabbitmq";
|
||||
meta-disk internal;
|
||||
on node1 {
|
||||
address ipv4 10.0.42.100:7701;
|
||||
}
|
||||
on node2 {
|
||||
address ipv4 10.0.42.254:7701;
|
||||
}
|
||||
}
|
File diff suppressed because it is too large
File diff suppressed because it is too large
@ -1,44 +0,0 @@
|
||||
<section xmlns="http://docbook.org/ns/docbook"
|
||||
xmlns:xi="http://www.w3.org/2001/XInclude"
|
||||
xmlns:xlink="http://www.w3.org/1999/xlink"
|
||||
version="5.0"
|
||||
xml:id="_highly_available_neutron_dhcp_agent">
|
||||
|
||||
<title>Highly available neutron DHCP agent</title>
|
||||
|
||||
<para>Neutron DHCP agent distributes IP addresses to the VMs with dnsmasq (by
|
||||
default). High availability for the DHCP agent is achieved by adopting
|
||||
Pacemaker.</para>
|
||||
<note>
|
||||
<para>Here is the <link xlink:href="http://docs.openstack.org/trunk/config-reference/content/section_adv_cfg_dhcp_agent.html">documentation</link> for installing neutron DHCP agent.</para>
|
||||
</note>
|
||||
<section xml:id="_add_neutron_dhcp_agent_resource_to_pacemaker">
|
||||
|
||||
<title>Add neutron DHCP agent resource to Pacemaker</title>
|
||||
|
||||
<para>First of all, you need to download the resource agent to your system:</para>
|
||||
<screen><prompt>#</prompt> <userinput>cd /usr/lib/ocf/resource.d/openstack</userinput>
|
||||
<prompt>#</prompt> <userinput>wget https://raw.github.com/madkiss/openstack-resource-agents/master/ocf/neutron-agent-dhcp</userinput>
|
||||
<prompt>#</prompt> <userinput>chmod a+rx neutron-agent-dhcp</userinput></screen>
|
||||
<para>You may now proceed with adding the Pacemaker configuration for
|
||||
neutron DHCP agent resource. Connect to the Pacemaker cluster with <literal>crm
|
||||
configure</literal>, and add the following cluster resources:</para>
|
||||
<programlisting>primitive p_neutron-dhcp-agent ocf:openstack:neutron-agent-dhcp \
|
||||
params config="/etc/neutron/neutron.conf" \
|
||||
plugin_config="/etc/neutron/dhcp_agent.ini" \
|
||||
op monitor interval="30s" timeout="30s"</programlisting>
|
||||
<para>This configuration creates:</para>
|
||||
<itemizedlist>
|
||||
<listitem>
|
||||
<para><literal>p_neutron-dhcp-agent</literal>, a
resource to manage the neutron DHCP agent service.</para>
|
||||
</listitem>
|
||||
</itemizedlist>
|
||||
<para><literal>crm configure</literal> supports batch input, so you may copy and paste the
|
||||
above into your live pacemaker configuration, and then make changes as
|
||||
required.</para>
|
||||
<para>Once completed, commit your configuration changes by entering <literal>commit</literal>
|
||||
from the <literal>crm configure</literal> menu. Pacemaker will then start the neutron DHCP
|
||||
agent service, and its dependent resources, on one of your nodes.</para>
|
||||
</section>
|
||||
</section>
|
@ -1,48 +0,0 @@
|
||||
<section xmlns="http://docbook.org/ns/docbook"
|
||||
xmlns:xi="http://www.w3.org/2001/XInclude"
|
||||
xmlns:xlink="http://www.w3.org/1999/xlink"
|
||||
version="5.0"
|
||||
xml:id="_highly_available_neutron_l3_agent">
|
||||
|
||||
<title>Highly available neutron L3 agent</title>
|
||||
|
||||
<para>The neutron L3 agent provides L3/NAT forwarding to ensure external network access
|
||||
for VMs on tenant networks. High availability for the L3 agent is achieved by
|
||||
adopting Pacemaker.</para>
|
||||
<note>
|
||||
<para>Here is the <link xlink:href="http://docs.openstack.org/trunk/config-reference/content/section_adv_cfg_l3_agent.html">documentation</link> for installing neutron L3 agent.</para>
|
||||
</note>
|
||||
<section xml:id="_add_neutron_l3_agent_resource_to_pacemaker">
|
||||
|
||||
<title>Add neutron L3 agent resource to Pacemaker</title>
|
||||
|
||||
<para>First of all, you need to download the resource agent to your system:</para>
|
||||
<screen><prompt>#</prompt> <userinput>cd /usr/lib/ocf/resource.d/openstack</userinput>
|
||||
<prompt>#</prompt> <userinput>wget https://raw.github.com/madkiss/openstack-resource-agents/master/ocf/neutron-agent-l3</userinput>
|
||||
<prompt>#</prompt> <userinput>chmod a+rx neutron-agent-l3</userinput></screen>
|
||||
<para>You may now proceed with adding the Pacemaker configuration for
|
||||
neutron L3 agent resource. Connect to the Pacemaker cluster with <literal>crm
|
||||
configure</literal>, and add the following cluster resources:</para>
|
||||
<programlisting>primitive p_neutron-l3-agent ocf:openstack:neutron-agent-l3 \
|
||||
params config="/etc/neutron/neutron.conf" \
|
||||
plugin_config="/etc/neutron/l3_agent.ini" \
|
||||
op monitor interval="30s" timeout="30s"</programlisting>
|
||||
<para>This configuration creates</para>
|
||||
<itemizedlist>
|
||||
<listitem>
|
||||
<para><literal>p_neutron-l3-agent</literal>, a resource to manage the neutron L3 agent service.
</para>
|
||||
</listitem>
|
||||
</itemizedlist>
|
||||
<para><literal>crm configure</literal> supports batch input, so you may copy and paste the
|
||||
above into your live pacemaker configuration, and then make changes as
|
||||
required.</para>
|
||||
<para>Once completed, commit your configuration changes by entering <literal>commit</literal>
|
||||
from the <literal>crm configure</literal> menu. Pacemaker will then start the neutron L3 agent
|
||||
service, and its dependent resources, on one of your nodes.</para>
|
||||
<note>
|
||||
<para>This method does not ensure zero downtime, because all
namespaces and virtual routers must be re-created on the failover node.</para>
|
||||
</note>
|
||||
</section>
|
||||
</section>
|
@ -1,41 +0,0 @@
|
||||
<section xmlns="http://docbook.org/ns/docbook"
|
||||
xmlns:xi="http://www.w3.org/2001/XInclude"
|
||||
xmlns:xlink="http://www.w3.org/1999/xlink"
|
||||
version="5.0"
|
||||
xml:id="_highly_available_neutron_metadata_agent">
|
||||
<title>Highly available neutron metadata agent</title>
|
||||
<para>Neutron metadata agent allows Compute API metadata to be reachable by VMs on tenant
|
||||
networks. High availability for the metadata agent is achieved by adopting
|
||||
Pacemaker.</para>
|
||||
<note>
|
||||
<para>Here is the <link xlink:href="http://docs.openstack.org/trunk/config-reference/content/networking-options-metadata.html">documentation</link> for installing Neutron Metadata Agent.</para>
|
||||
</note>
|
||||
<section xml:id="_add_neutron_metadata_agent_resource_to_pacemaker">
|
||||
<title>Add neutron metadata agent resource to Pacemaker</title>
|
||||
<para>First of all, you need to download the resource agent to your system:</para>
|
||||
<screen><prompt>#</prompt> <userinput>cd /usr/lib/ocf/resource.d/openstack</userinput>
|
||||
<prompt>#</prompt> <userinput>wget https://raw.github.com/madkiss/openstack-resource-agents/master/ocf/neutron-metadata-agent</userinput>
|
||||
<prompt>#</prompt> <userinput>chmod a+rx neutron-metadata-agent</userinput></screen>
|
||||
<para>You may now proceed with adding the Pacemaker configuration for
|
||||
neutron metadata agent resource. Connect to the Pacemaker cluster with <literal>crm
|
||||
configure</literal>, and add the following cluster resources:</para>
|
||||
<programlisting>primitive p_neutron-metadata-agent ocf:openstack:neutron-metadata-agent \
|
||||
params config="/etc/neutron/neutron.conf" \
|
||||
plugin_config="/etc/neutron/metadata_agent.ini" \
|
||||
op monitor interval="30s" timeout="30s"</programlisting>
|
||||
<para>This configuration creates</para>
|
||||
<itemizedlist>
|
||||
<listitem>
|
||||
<para><literal>p_neutron-metadata-agent</literal>, a resource to manage the neutron metadata agent
service.
</para>
|
||||
</listitem>
|
||||
</itemizedlist>
|
||||
<para><literal>crm configure</literal> supports batch input, so you may copy and paste the
|
||||
above into your live Pacemaker configuration, and then make changes as
|
||||
required.</para>
|
||||
<para>Once completed, commit your configuration changes by entering <literal>commit</literal>
|
||||
from the <literal>crm configure</literal> menu. Pacemaker will then start the neutron metadata
|
||||
agent service, and its dependent resources, on one of your nodes.</para>
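<para>Once the resource is running, you can check which node it was started on,
for example with <literal>crm_resource</literal> (the resource name matches the
primitive defined above):</para>
<screen><prompt>#</prompt> <userinput>crm_resource --resource p_neutron-metadata-agent --locate</userinput></screen>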
|
||||
</section>
|
||||
</section>
|
@ -1,15 +0,0 @@
|
||||
<section xmlns="http://docbook.org/ns/docbook"
|
||||
xmlns:xi="http://www.w3.org/2001/XInclude"
|
||||
xmlns:xlink="http://www.w3.org/1999/xlink"
|
||||
version="5.0"
|
||||
xml:id="_manage_network_resources">
|
||||
|
||||
<title>Manage network resources</title>
|
||||
|
||||
<para>You can now add the Pacemaker configuration for
|
||||
managing all network resources together with a group.
|
||||
Connect to the Pacemaker cluster with <literal>crm configure</literal>, and add the following
|
||||
cluster resources:</para>
|
||||
<programlisting>group g_services_network p_neutron-l3-agent p_neutron-dhcp-agent \
  p_neutron-metadata-agent</programlisting>
|
||||
</section>
|
@ -1,43 +0,0 @@
|
||||
<section xmlns="http://docbook.org/ns/docbook"
|
||||
xmlns:xi="http://www.w3.org/2001/XInclude"
|
||||
xmlns:xlink="http://www.w3.org/1999/xlink"
|
||||
version="5.0"
|
||||
xml:id="_install_packages">
|
||||
<title>Install packages</title>
|
||||
<para>On any host that is meant to be part of a Pacemaker cluster, you must
|
||||
first establish cluster communications through the Corosync messaging
|
||||
layer. This involves installing the following packages (and their
|
||||
dependencies, which your package manager will normally install
|
||||
automatically):</para>
|
||||
<itemizedlist>
|
||||
<listitem>
|
||||
<para><literal>pacemaker</literal> (note that the <literal>crm</literal> shell is packaged separately, as <literal>crmsh</literal>)
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
<literal>crmsh</literal>
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
<literal>corosync</literal>
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
<literal>cluster-glue</literal>
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para><literal>fence-agents</literal> (Fedora only; all other distributions use fencing
|
||||
agents from <literal>cluster-glue</literal>)
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
<literal>resource-agents</literal>
|
||||
</para>
|
||||
</listitem>
|
||||
</itemizedlist>
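<para>As an example, on a Debian or Ubuntu based node the packages listed above
can usually be installed in one step; package names may vary slightly between
distributions and releases:</para>
<screen><prompt>#</prompt> <userinput>apt-get install pacemaker crmsh corosync cluster-glue resource-agents</userinput></screen>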
|
||||
</section>
|
@ -1,54 +0,0 @@
|
||||
<section xmlns="http://docbook.org/ns/docbook"
|
||||
xmlns:xi="http://www.w3.org/2001/XInclude"
|
||||
xmlns:xlink="http://www.w3.org/1999/xlink"
|
||||
version="5.0"
|
||||
xml:id="_set_basic_cluster_properties">
|
||||
|
||||
<title>Set basic cluster properties</title>
|
||||
|
||||
<para>Once your Pacemaker cluster is set up, it is recommended to set a few
|
||||
basic cluster properties. To do so, start the <literal>crm</literal> shell and change
|
||||
into the configuration menu by entering
|
||||
<literal>configure</literal>. Alternatively, you may jump straight into the Pacemaker
|
||||
configuration menu by typing <literal>crm configure</literal> directly from a shell
|
||||
prompt.</para>
|
||||
<para>Then, set the following properties:</para>
|
||||
<programlisting>property no-quorum-policy="ignore" \ # <co xml:id="CO2-1"/>
|
||||
pe-warn-series-max="1000" \ # <co xml:id="CO2-2"/>
|
||||
pe-input-series-max="1000" \
|
||||
pe-error-series-max="1000" \
|
||||
cluster-recheck-interval="5min" # <co xml:id="CO2-3"/></programlisting>
|
||||
<calloutlist>
|
||||
<callout arearefs="CO2-1">
|
||||
<para>
|
||||
Setting <literal>no-quorum-policy="ignore"</literal> is required in 2-node Pacemaker
clusters for the following reason: if quorum enforcement is enabled and one
of the two nodes fails, the remaining node cannot establish the
<emphasis>majority</emphasis> of quorum votes necessary to run services, and
is therefore unable to take over any resources. In this case, the appropriate
workaround is to ignore loss of quorum in the cluster. This should
<emphasis>only</emphasis> be done in 2-node clusters: do not set this property
in Pacemaker clusters with more than two nodes. Note that a two-node cluster
with this setting is exposed to a risk of split-brain, because either half of
the cluster, or both, may become active if the nodes remain online but lose
communication with one another. The preferred configuration is three or more
nodes per cluster.
      </para>
|
||||
</callout>
|
||||
<callout arearefs="CO2-2">
|
||||
<para>
|
||||
Setting <literal>pe-warn-series-max</literal>, <literal>pe-input-series-max</literal> and
|
||||
<literal>pe-error-series-max</literal> to 1000 instructs Pacemaker to keep a longer
|
||||
history of the inputs processed, and errors and warnings generated, by
|
||||
its Policy Engine. This history is typically useful in case cluster
|
||||
troubleshooting becomes necessary.
|
||||
</para>
|
||||
</callout>
|
||||
<callout arearefs="CO2-3">
|
||||
<para>
|
||||
Pacemaker uses an event-driven approach to cluster state
|
||||
processing. However, certain Pacemaker actions occur at a configurable
|
||||
interval, <literal>cluster-recheck-interval</literal>, which defaults to 15 minutes. It
|
||||
is usually prudent to reduce this to a shorter interval, such as 5 or
|
||||
3 minutes.
|
||||
</para>
|
||||
</callout>
|
||||
</calloutlist>
|
||||
<para>Once you have made these changes, you may <literal>commit</literal> the updated
|
||||
configuration.</para>
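<para>As a convenience, and assuming the <literal>crmsh</literal> shell is
installed, the same properties can also be set non-interactively from a shell
prompt, one at a time, for example:</para>
<screen><prompt>#</prompt> <userinput>crm configure property no-quorum-policy="ignore"</userinput>
<prompt>#</prompt> <userinput>crm configure property cluster-recheck-interval="5min"</userinput></screen>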
|
||||
</section>
|
@ -1,166 +0,0 @@
|
||||
<section xmlns="http://docbook.org/ns/docbook"
|
||||
xmlns:xi="http://www.w3.org/2001/XInclude"
|
||||
xmlns:xlink="http://www.w3.org/1999/xlink"
|
||||
version="5.0"
|
||||
xml:id="_set_up_corosync">
|
||||
<title>Set up Corosync</title>
|
||||
<para>Besides installing the <literal>corosync</literal> package, you must also
|
||||
create a configuration file, stored in
|
||||
<filename>/etc/corosync/corosync.conf</filename>. Most distributions ship an example
|
||||
configuration file (<filename>corosync.conf.example</filename>) as part of the
|
||||
documentation bundled with the <literal>corosync</literal> package. An example Corosync
|
||||
configuration file is shown below:</para>
|
||||
<formalpara>
|
||||
|
||||
<title>Corosync configuration file (<filename>corosync.conf</filename>)</title>
|
||||
|
||||
<para>
|
||||
<programlisting>totem {
|
||||
version: 2
|
||||
|
||||
# Time (in ms) to wait for a token <co xml:id="CO1-1"/>
|
||||
token: 10000
|
||||
|
||||
# How many token retransmits before forming a new
|
||||
# configuration
|
||||
token_retransmits_before_loss_const: 10
|
||||
|
||||
# Turn off the virtual synchrony filter
|
||||
vsftype: none
|
||||
|
||||
# Enable encryption <co xml:id="CO1-2"/>
|
||||
secauth: on
|
||||
|
||||
# How many threads to use for encryption/decryption
|
||||
threads: 0
|
||||
|
||||
# This specifies the redundant ring protocol, which may be
|
||||
# none, active, or passive. <co xml:id="CO1-3"/>
|
||||
rrp_mode: active
|
||||
|
||||
# The following is a two-ring multicast configuration. <co xml:id="CO1-4"/>
|
||||
interface {
|
||||
ringnumber: 0
|
||||
bindnetaddr: 192.168.42.0
|
||||
mcastaddr: 239.255.42.1
|
||||
mcastport: 5405
|
||||
}
|
||||
interface {
|
||||
ringnumber: 1
|
||||
bindnetaddr: 10.0.42.0
|
||||
mcastaddr: 239.255.42.2
|
||||
mcastport: 5405
|
||||
}
|
||||
}
|
||||
|
||||
amf {
|
||||
mode: disabled
|
||||
}
|
||||
|
||||
service {
|
||||
# Load the Pacemaker Cluster Resource Manager <co xml:id="CO1-5"/>
|
||||
ver: 1
|
||||
name: pacemaker
|
||||
}
|
||||
|
||||
aisexec {
|
||||
user: root
|
||||
group: root
|
||||
}
|
||||
|
||||
logging {
|
||||
fileline: off
|
||||
to_stderr: yes
|
||||
to_logfile: no
|
||||
to_syslog: yes
|
||||
syslog_facility: daemon
|
||||
debug: off
|
||||
timestamp: on
|
||||
logger_subsys {
|
||||
subsys: AMF
|
||||
debug: off
|
||||
tags: enter|leave|trace1|trace2|trace3|trace4|trace6
|
||||
}
|
||||
}</programlisting>
|
||||
</para>
|
||||
</formalpara>
|
||||
<calloutlist>
|
||||
<callout arearefs="CO1-1">
|
||||
<para>
|
||||
The <literal>token</literal> value specifies the time, in milliseconds, during
|
||||
which the Corosync token is expected to be transmitted around the
|
||||
ring. When this timeout expires, the token is declared lost, and after
|
||||
<literal>token_retransmits_before_loss_const</literal> lost tokens the non-responding
|
||||
<emphasis>processor</emphasis> (cluster node) is declared dead. In other words,
|
||||
<literal>token</literal> × <literal>token_retransmits_before_loss_const</literal> is the maximum
|
||||
time a node is allowed to not respond to cluster messages before being
|
||||
considered dead. The default for <literal>token</literal> is 1000 (1 second), with 4
|
||||
allowed retransmits. These defaults are intended to minimize failover
|
||||
times, but can cause frequent "false alarms" and unintended failovers
|
||||
in case of short network interruptions. The values used here are
|
||||
safer, albeit with slightly extended failover times.
|
||||
</para>
|
||||
</callout>
|
||||
<callout arearefs="CO1-2">
|
||||
<para>
|
||||
With <literal>secauth</literal> enabled, Corosync nodes mutually authenticate using
|
||||
a 128-byte shared secret stored in <literal>/etc/corosync/authkey</literal>, which may
|
||||
be generated with the <literal>corosync-keygen</literal> utility. When using <literal>secauth</literal>,
|
||||
cluster communications are also encrypted.
|
||||
</para>
|
||||
</callout>
|
||||
<callout arearefs="CO1-3">
|
||||
<para>
|
||||
In Corosync configurations using redundant networking (with more
|
||||
than one <literal>interface</literal>), you must select a Redundant Ring Protocol (RRP)
|
||||
mode other than <literal>none</literal>. <literal>active</literal> is the recommended RRP mode.
|
||||
</para>
|
||||
</callout>
|
||||
<callout arearefs="CO1-4">
|
||||
<para>
|
||||
There are several things to note about the recommended interface
|
||||
configuration:
|
||||
</para>
|
||||
<itemizedlist>
|
||||
<listitem>
|
||||
<para>
|
||||
The <literal>ringnumber</literal> must differ between all configured interfaces,
|
||||
starting with 0.
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
The <literal>bindnetaddr</literal> is the <emphasis>network</emphasis> address of the interface to bind
to. The example uses the network addresses of two <literal>/24</literal> IPv4 subnets.
      </para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
Multicast groups (<literal>mcastaddr</literal>) <emphasis>must not</emphasis> be reused across cluster
|
||||
boundaries. In other words, no two distinct clusters should ever use
|
||||
the same multicast group. Be sure to select multicast addresses
|
||||
compliant with <link xlink:href="http://www.ietf.org/rfc/rfc2365.txt">RFC 2365,
|
||||
"Administratively Scoped IP Multicast"</link>.
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para>
|
||||
For firewall configurations, note that Corosync communicates over
UDP only, and uses <literal>mcastport</literal> (for receives) and
<literal>mcastport</literal>-1 (for sends). An example firewall rule is
shown after these callouts.
      </para>
|
||||
</listitem>
|
||||
</itemizedlist>
|
||||
</callout>
|
||||
<callout arearefs="CO1-5">
|
||||
<para>
|
||||
The <literal>service</literal> declaration for the <literal>pacemaker</literal> service may be
|
||||
placed in the <filename>corosync.conf</filename> file directly, or in its own separate
|
||||
file, <filename>/etc/corosync/service.d/pacemaker</filename>.
|
||||
</para>
|
||||
</callout>
|
||||
</calloutlist>
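<para>As an illustration of the firewall note above, with the example ports used
in this configuration (<literal>mcastport</literal> 5405, and therefore 5404 for
sends), a matching iptables rule on each node might look like the following;
adjust the ports and policy to your environment:</para>
<screen><prompt>#</prompt> <userinput>iptables -A INPUT -p udp -m multiport --dports 5404,5405 -j ACCEPT</userinput></screen>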
|
||||
<para>Once created, the <filename>corosync.conf</filename> file (and the <filename>authkey</filename> file if the
|
||||
<literal>secauth</literal> option is enabled) must be synchronized across all cluster
|
||||
nodes.</para>
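<para>A minimal way to do this, assuming root SSH access between the nodes (the
host name <literal>node2</literal> is a placeholder), is to generate the key on
one node and copy both files to the others:</para>
<screen><prompt>#</prompt> <userinput>corosync-keygen</userinput>
<prompt>#</prompt> <userinput>scp /etc/corosync/corosync.conf /etc/corosync/authkey node2:/etc/corosync/</userinput></screen>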
|
||||
</section>
|
@ -1,42 +0,0 @@
|
||||
<section xmlns="http://docbook.org/ns/docbook"
|
||||
xmlns:xi="http://www.w3.org/2001/XInclude"
|
||||
xmlns:xlink="http://www.w3.org/1999/xlink"
|
||||
version="5.0"
|
||||
xml:id="_start_pacemaker">
|
||||
<title>Start Pacemaker</title>
|
||||
<para>Once the Corosync services have been started and you have established
|
||||
that the cluster is communicating properly, it is safe to start
|
||||
<literal>pacemakerd</literal>, the Pacemaker master control process:</para>
|
||||
<itemizedlist>
|
||||
<listitem>
|
||||
<para><literal>/etc/init.d/pacemaker start</literal> (LSB)
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para><literal>service pacemaker start</literal> (LSB, alternate)
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para><literal>start pacemaker</literal> (upstart)
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para><literal>systemctl start pacemaker</literal> (systemd)
|
||||
</para>
|
||||
</listitem>
|
||||
</itemizedlist>
|
||||
<para>Once the Pacemaker services have started, Pacemaker creates a default,
empty cluster configuration with no resources. You may observe
Pacemaker’s status with the <literal>crm_mon</literal> utility:</para>
|
||||
<screen><computeroutput>============
|
||||
Last updated: Sun Oct 7 21:07:52 2012
|
||||
Last change: Sun Oct 7 20:46:00 2012 via cibadmin on node2
|
||||
Stack: openais
|
||||
Current DC: node2 - partition with quorum
|
||||
Version: 1.1.6-9971ebba4494012a93c03b40a2c58ec0eb60f50c
|
||||
2 Nodes configured, 2 expected votes
|
||||
0 Resources configured.
|
||||
============
|
||||
|
||||
Online: [ node2 node1 ]</computeroutput></screen>
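<para>By default, <literal>crm_mon</literal> runs interactively and refreshes its
display continuously. To print the cluster status once and exit, for example
from a script, you can use the one-shot mode:</para>
<screen><prompt>#</prompt> <userinput>crm_mon -1</userinput></screen>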
|
||||
</section>
|
@ -1,54 +0,0 @@
|
||||
<section xmlns="http://docbook.org/ns/docbook"
|
||||
xmlns:xi="http://www.w3.org/2001/XInclude"
|
||||
xmlns:xlink="http://www.w3.org/1999/xlink"
|
||||
version="5.0"
|
||||
xml:id="_starting_corosync">
|
||||
|
||||
<title>Starting Corosync</title>
|
||||
|
||||
<para>Corosync is started as a regular system service. Depending on your
distribution, it may ship with an LSB init script, an upstart job, or a
systemd unit file. In any case, the service is usually named
<literal>corosync</literal>:</para>
|
||||
<itemizedlist>
|
||||
<listitem>
|
||||
<para><literal>/etc/init.d/corosync start</literal> (LSB)
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para><literal>service corosync start</literal> (LSB, alternate)
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para><literal>start corosync</literal> (upstart)
|
||||
</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para><literal>systemctl start corosync</literal> (systemd)
|
||||
</para>
|
||||
</listitem>
|
||||
</itemizedlist>
|
||||
<para>You can now check the Corosync connectivity with two tools.</para>
|
||||
<para>The <literal>corosync-cfgtool</literal> utility, when invoked with the <literal>-s</literal> option,
|
||||
gives a summary of the health of the communication rings:</para>
|
||||
<screen><prompt>#</prompt> <userinput>corosync-cfgtool -s</userinput>
|
||||
<computeroutput>Printing ring status.
|
||||
Local node ID 435324542
|
||||
RING ID 0
|
||||
id = 192.168.42.82
|
||||
status = ring 0 active with no faults
|
||||
RING ID 1
|
||||
id = 10.0.42.100
|
||||
status = ring 1 active with no faults</computeroutput></screen>
|
||||
<para>The <literal>corosync-objctl</literal> utility can be used to dump the Corosync cluster
|
||||
member list:</para>
|
||||
<screen><prompt>#</prompt> <userinput>corosync-objctl runtime.totem.pg.mrp.srp.members</userinput>
|
||||
<computeroutput>runtime.totem.pg.mrp.srp.435324542.ip=r(0) ip(192.168.42.82) r(1) ip(10.0.42.100)
|
||||
runtime.totem.pg.mrp.srp.435324542.join_count=1
|
||||
runtime.totem.pg.mrp.srp.435324542.status=joined
|
||||
runtime.totem.pg.mrp.srp.983895584.ip=r(0) ip(192.168.42.87) r(1) ip(10.0.42.254)
|
||||
runtime.totem.pg.mrp.srp.983895584.join_count=1
|
||||
runtime.totem.pg.mrp.srp.983895584.status=joined</computeroutput></screen>
|
||||
<para>You should see a <literal>status=joined</literal> entry for each of your constituent
|
||||
cluster nodes.</para>
|
||||
</section>
|
@ -1,17 +0,0 @@
|
||||
<?xml version="1.0" encoding="UTF-8"?>
|
||||
<part xmlns="http://docbook.org/ns/docbook"
|
||||
xmlns:xi="http://www.w3.org/2001/XInclude"
|
||||
xmlns:xlink="http://www.w3.org/1999/xlink"
|
||||
version="5.0"
|
||||
xml:id="ha-using-active-active">
|
||||
|
||||
<title>HA using active/active</title>
|
||||
|
||||
|
||||
<xi:include href="ch_ha_aa_db.xml"/>
|
||||
<xi:include href="ch_ha_aa_rabbitmq.xml"/>
|
||||
<xi:include href="ch_ha_aa_haproxy.xml"/>
|
||||
<xi:include href="ch_ha_aa_controllers.xml"/>
|
||||
<xi:include href="ch_ha_aa_network.xml"/>
|
||||
|
||||
</part>
|
@ -1,16 +0,0 @@
|
||||
<?xml version="1.0" encoding="UTF-8"?>
|
||||
<part xmlns="http://docbook.org/ns/docbook"
|
||||
xmlns:xi="http://www.w3.org/2001/XInclude"
|
||||
xmlns:xlink="http://www.w3.org/1999/xlink"
|
||||
version="5.0"
|
||||
xml:id="ha-using-active-passive">
|
||||
|
||||
<title>HA using active/passive</title>
|
||||
|
||||
|
||||
<xi:include href="ch_pacemaker.xml"/>
|
||||
<xi:include href="ch_controller.xml"/>
|
||||
<xi:include href="ch_api.xml"/>
|
||||
<xi:include href="ch_network.xml"/>
|
||||
|
||||
</part>
|
@ -1,78 +0,0 @@
|
||||
<?xml version="1.0" encoding="UTF-8"?>
|
||||
<project xmlns="http://maven.apache.org/POM/4.0.0"
|
||||
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
|
||||
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
|
||||
<parent>
|
||||
<groupId>org.openstack.docs</groupId>
|
||||
<artifactId>parent-pom</artifactId>
|
||||
<version>1.0.0-SNAPSHOT</version>
|
||||
<relativePath>../pom.xml</relativePath>
|
||||
</parent>
|
||||
<modelVersion>4.0.0</modelVersion>
|
||||
<artifactId>openstack-high-availability-guide</artifactId>
|
||||
<packaging>jar</packaging>
|
||||
<name>OpenStack High Availability Guide</name>
|
||||
<properties>
|
||||
<!-- This is set by Jenkins according to the branch. -->
|
||||
<release.path.name></release.path.name>
|
||||
<comments.enabled>0</comments.enabled>
|
||||
</properties>
|
||||
<!-- ################################################ -->
|
||||
<!-- USE "mvn clean generate-sources" to run this POM -->
|
||||
<!-- ################################################ -->
|
||||
<build>
|
||||
<plugins>
|
||||
<plugin>
|
||||
<groupId>com.rackspace.cloud.api</groupId>
|
||||
<artifactId>clouddocs-maven-plugin</artifactId>
|
||||
<!-- version set in ../pom.xml -->
|
||||
<executions>
|
||||
<execution>
|
||||
<id>generate-webhelp</id>
|
||||
<goals>
|
||||
<goal>generate-webhelp</goal>
|
||||
</goals>
|
||||
<phase>generate-sources</phase>
|
||||
<configuration>
|
||||
<!-- These parameters only apply to webhelp -->
|
||||
<enableDisqus>${comments.enabled}</enableDisqus>
|
||||
<disqusShortname>openstack-ha-guide</disqusShortname>
|
||||
<enableGoogleAnalytics>1</enableGoogleAnalytics>
|
||||
<googleAnalyticsId>UA-17511903-1</googleAnalyticsId>
|
||||
<generateToc>
|
||||
appendix toc,title
|
||||
article/appendix nop
|
||||
article toc,title
|
||||
book toc,title,figure,table,example,equation
|
||||
chapter toc,title
|
||||
section toc
|
||||
part toc,title
|
||||
qandadiv toc
|
||||
qandaset toc
|
||||
reference toc,title
|
||||
set toc,title
|
||||
</generateToc>
|
||||
<!-- The following elements sets the autonumbering of sections in output for chapter numbers but no numbered sections-->
|
||||
<sectionAutolabel>0</sectionAutolabel>
|
||||
<tocSectionDepth>1</tocSectionDepth>
|
||||
<sectionLabelIncludesComponentLabel>0</sectionLabelIncludesComponentLabel>
|
||||
<webhelpDirname>high-availability-guide</webhelpDirname>
|
||||
<pdfFilenameBase>high-availability-guide</pdfFilenameBase>
|
||||
</configuration>
|
||||
</execution>
|
||||
</executions>
|
||||
<configuration>
|
||||
<!-- These parameters apply to pdf and webhelp -->
|
||||
<xincludeSupported>true</xincludeSupported>
|
||||
<sourceDirectory>.</sourceDirectory>
|
||||
<includes>
|
||||
bk-ha-guide.xml
|
||||
</includes>
|
||||
<canonicalUrlBase>http://docs.openstack.org/high-availability-guide/content</canonicalUrlBase>
|
||||
<glossaryCollection>${basedir}/../glossary/glossary-terms.xml</glossaryCollection>
|
||||
<branding>openstack</branding>
|
||||
</configuration>
|
||||
</plugin>
|
||||
</plugins>
|
||||
</build>
|
||||
</project>
|
@ -88,11 +88,6 @@
|
||||
Configuration Reference
|
||||
</a>
|
||||
</dd>
|
||||
<dd>
|
||||
<a href="high-availability-guide/target/docbkx/webhelp/high-availability-guide/content/index.html">
|
||||
High Availability Guide
|
||||
</a>
|
||||
</dd>
|
||||
<dd>
|
||||
<a href="networking-guide/target/docbkx/webhelp/networking-guide/content/index.html">
|
||||
Networking Guide
|
||||
|
@ -15,7 +15,6 @@
|
||||
<module>cli-reference</module>
|
||||
<module>config-reference</module>
|
||||
<module>glossary</module>
|
||||
<module>high-availability-guide</module>
|
||||
<module>image-guide</module>
|
||||
<module>install-guide</module>
|
||||
<module>networking-guide</module>
|
||||
|