Sync IP addresses with Installation Guides

* Change the controller virtual IP to the controller IP used in the
  install guides (see the address mapping below).
* Use the controller virtual IP as the database IP address, to keep
  the guides in sync.
* Use the controller virtual IP as the messaging service IP address,
  to keep the guides in sync.
* Use the controller virtual IP as the storage node IP address,
  instead of standalone storage nodes, to keep the current contents.
* Do not update any other content; only change the IP addresses.
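
In practice, the old HA-guide example addresses map onto the
install-guide scheme as follows (derived from the diff below):

    192.168.42.101 (database)        -> 10.0.0.11 (controller virtual IP)
    192.168.42.102 (messaging)       -> 10.0.0.11 (controller virtual IP)
    192.168.42.103 (controller VIP)  -> 10.0.0.11 (controller virtual IP)
    NODE1 / NODE2 / NODE3            -> controller1 / controller2 / controller3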

Change-Id: Ibae9143142414fbec924d30ffa39dc0d51bb7411
Closes-Bug: #1529548
Author: KATO Tomoyuki
Date:   2016-01-04 18:00:41 +09:00
Commit: 069139da15 (parent 6f366378ec)
7 changed files with 58 additions and 48 deletions


@@ -46,7 +46,7 @@ Add OpenStack Identity resource to Pacemaker
    os_password="secretsecret" \
    os_username="admin"
    os_tenant_name="admin"
-   os_auth_url="http://192.168.42.103:5000/v2.0/" \
+   os_auth_url="http://10.0.0.11:5000/v2.0/" \
    op monitor interval="30s" timeout="30s"
 This configuration creates ``p_keystone``,
@@ -76,9 +76,9 @@ Configure OpenStack Identity service
 .. code-block:: ini
-   bind_host = 192.168.42.103
-   public_bind_host = 192.168.42.103
-   admin_bind_host = 192.168.42.103
+   bind_host = 10.0.0.11
+   public_bind_host = 10.0.0.11
+   admin_bind_host = 10.0.0.11
 The ``admin_bind_host`` parameter
 lets you use a private network for admin access.
@@ -110,12 +110,12 @@ of an OpenStack Identity server as you would do
 in a non-HA environment.
 #. For OpenStack Compute, for example,
-   if your OpenStack Identity service IP address is 192.168.42.103,
+   if your OpenStack Identity service IP address is 10.0.0.11,
    use the following configuration in your :file:`api-paste.ini` file:
 .. code-block:: ini
-   auth_host = 192.168.42.103
+   auth_host = 10.0.0.11
 #. You also need to create the OpenStack Identity Endpoint
    with this IP address.
@@ -131,9 +131,9 @@ in a non-HA environment.
    $ openstack endpoint create --region $KEYSTONE_REGION \
      $service-type public http://PUBLIC_VIP:5000/v2.0
    $ openstack endpoint create --region $KEYSTONE_REGION \
-     $service-type admin http://192.168.42.103:35357/v2.0
+     $service-type admin http://10.0.0.11:35357/v2.0
    $ openstack endpoint create --region $KEYSTONE_REGION \
-     $service-type internal http://192.168.42.103:5000/v2.0
+     $service-type internal http://10.0.0.11:5000/v2.0
 #. If you are using the horizon dashboard,
@@ -142,6 +142,6 @@ in a non-HA environment.
 .. code-block:: ini
-   OPENSTACK_HOST = 192.168.42.103
+   OPENSTACK_HOST = 10.0.0.11
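
After recreating the Identity endpoints against the VIP, a quick check
that Keystone answers on 10.0.0.11 might look like this (a sketch; the
exact output varies by client and release):

    $ curl http://10.0.0.11:5000/v2.0/
    $ openstack endpoint list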


@ -83,13 +83,13 @@ Set up the cluster with `pcs`
make up the cluster. The :option:`-p` option is used to give make up the cluster. The :option:`-p` option is used to give
the password on command line and makes it easier to script. the password on command line and makes it easier to script.
- :command:`pcs cluster auth NODE1 NODE2 NODE3 -u hacluster - :command:`pcs cluster auth controller1 controller2 controller3
-p my-secret-password-no-dont-use-this-one --force` -u hacluster -p my-secret-password-no-dont-use-this-one --force`
#. Create the cluster, giving it a name, and start it: #. Create the cluster, giving it a name, and start it:
- :command:`pcs cluster setup --force --name my-first-openstack-cluster - :command:`pcs cluster setup --force --name my-first-openstack-cluster
NODE1 NODE2 NODE3` controller1 controller2 controller3`
- :command:`pcs cluster start --all` - :command:`pcs cluster start --all`
Set up the cluster with `crmsh` Set up the cluster with `crmsh`
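
Once the cluster is created and started, all three controllers should
report as online. A minimal check, assuming the ``pcs`` workflow above:

    # pcs cluster status
    # pcs status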
@@ -150,7 +150,7 @@ An example Corosync configuration file is shown below:
    # The following is a two-ring multicast configuration. (4)
    interface {
           ringnumber: 0
-          bindnetaddr: 192.168.42.0
+          bindnetaddr: 10.0.0.0
           mcastaddr: 239.255.42.1
           mcastport: 5405
    }
@@ -291,7 +291,7 @@ for unicast is shown below:
    #...
    interface {
           ringnumber: 0
-          bindnetaddr: 192.168.42.0
+          bindnetaddr: 10.0.0.0
           broadcast: yes (1)
           mcastport: 5405
    }
@@ -306,12 +306,12 @@ for unicast is shown below:
    nodelist { (3)
           node {
-                 ring0_addr: 192.168.42.1
+                 ring0_addr: 10.0.0.1
                  ring1_addr: 10.0.42.1
                  nodeid: 1
           }
           node {
-                 ring0_addr: 192.168.42.2
+                 ring0_addr: 10.0.0.2
                  ring1_addr: 10.0.42.2
                  nodeid: 2
           }
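
Note that ``bindnetaddr`` is a network address, not a host address:
with ``ring0_addr`` values of 10.0.0.1 and 10.0.0.2 on a /24 netmask,
ring 0 binds to 10.0.0.0. One way to derive it on a node (a sketch;
Red Hat's ``ipcalc`` is assumed):

    $ ipcalc -n 10.0.0.1/24
    NETWORK=10.0.0.0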
@@ -471,7 +471,7 @@ to get a summary of the health of the communication rings:
    Printing ring status.
    Local node ID 435324542
    RING ID 0
-          id      = 192.168.42.82
+          id      = 10.0.0.82
           status  = ring 0 active with no faults
    RING ID 1
           id      = 10.0.42.100
@@ -483,10 +483,10 @@ to dump the Corosync cluster member list:
 .. code-block:: console
    # corosync-objctl runtime.totem.pg.mrp.srp.members
-   runtime.totem.pg.mrp.srp.435324542.ip=r(0) ip(192.168.42.82) r(1) ip(10.0.42.100)
+   runtime.totem.pg.mrp.srp.435324542.ip=r(0) ip(10.0.0.82) r(1) ip(10.0.42.100)
    runtime.totem.pg.mrp.srp.435324542.join_count=1
    runtime.totem.pg.mrp.srp.435324542.status=joined
-   runtime.totem.pg.mrp.srp.983895584.ip=r(0) ip(192.168.42.87) r(1) ip(10.0.42.254)
+   runtime.totem.pg.mrp.srp.983895584.ip=r(0) ip(10.0.0.87) r(1) ip(10.0.42.254)
    runtime.totem.pg.mrp.srp.983895584.join_count=1
    runtime.totem.pg.mrp.srp.983895584.status=joined
@@ -526,15 +526,15 @@ Use the :command:`crm_mon` utility to observe the status of Pacemaker:
    ============
    Last updated: Sun Oct 7 21:07:52 2012
-   Last change: Sun Oct 7 20:46:00 2012 via cibadmin on NODE2
+   Last change: Sun Oct 7 20:46:00 2012 via cibadmin on controller2
    Stack: openais
-   Current DC: NODE2 - partition with quorum
+   Current DC: controller2 - partition with quorum
    Version: 1.1.6-9971ebba4494012a93c03b40a2c58ec0eb60f50c
    3 Nodes configured, 3 expected votes
    0 Resources configured.
    ============
-   Online: [ NODE3 NODE2 NODE1 ]
+   Online: [ controller3 controller2 controller1 ]
 .. _pacemaker-cluster-properties:


@@ -7,18 +7,18 @@ You must select and assign a virtual IP address (VIP)
 that can freely float between cluster nodes.
 This configuration creates ``vip``,
-a virtual IP address for use by the API node (``192.168.42.103``):
+a virtual IP address for use by the API node (``10.0.0.11``):
 For ``crmsh``:
 .. code-block:: console
    primitive vip ocf:heartbeat:IPaddr2 \
-     params ip="192.168.42.103" cidr_netmask="24" op monitor interval="30s"
+     params ip="10.0.0.11" cidr_netmask="24" op monitor interval="30s"
 For ``pcs``:
 .. code-block:: console
    # pcs resource create vip ocf:heartbeat:IPaddr2 \
-     params ip="192.168.42.103" cidr_netmask="24" op monitor interval="30s"
+     params ip="10.0.0.11" cidr_netmask="24" op monitor interval="30s"
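
To confirm which node currently holds the VIP, and that the address is
actually configured there, a hedged check with standard Pacemaker and
iproute2 tools:

    # crm_resource --resource vip --locate
    # ip addr show | grep 10.0.0.11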


@@ -12,3 +12,13 @@ Follow the instructions in the OpenStack Installation Guides:
 The OpenStack Installation Guides also include a list of the services
 that use passwords with important notes about using them.
+
+This guide uses the following example IP addresses:
+
+.. code-block:: none
+
+   # controller
+   10.0.0.11 controller    # virtual IP
+   10.0.0.12 controller1
+   10.0.0.13 controller2
+   10.0.0.14 controller3
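
If these entries live in each node's /etc/hosts, as in the installation
guides, name resolution can be verified with getent (a minimal check;
output spacing varies):

    $ getent hosts controller
    10.0.0.11       controller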


@@ -38,7 +38,7 @@ and add the following cluster resources:
    os_password="secretsecret"
    os_username="admin" \
    os_tenant_name="admin"
-   keystone_get_token_url="http://192.168.42.103:5000/v2.0/tokens" \
+   keystone_get_token_url="http://10.0.0.11:5000/v2.0/tokens" \
    op monitor interval="30s" timeout="30s"
 This configuration creates ``p_cinder-api``,
@@ -67,19 +67,19 @@ Edit the :file:`/etc/cinder/cinder.conf` file:
    :linenos:
    # We have to use MySQL connection to store data:
-   sql_connection = mysql://cinder:password@192.168.42.101/cinder
+   sql_connection = mysql://cinder:password@10.0.0.11/cinder
    # Alternatively, you can switch to pymysql,
    # a new Python 3 compatible library and use
-   # sql_connection = mysql+pymysql://cinder:password@192.168.42.101/cinder
+   # sql_connection = mysql+pymysql://cinder:password@10.0.0.11/cinder
    # and be ready when everything moves to Python 3.
    # Ref: https://wiki.openstack.org/wiki/PyMySQL_evaluation
    # We bind Block Storage API to the VIP:
-   osapi_volume_listen = 192.168.42.103
+   osapi_volume_listen = 10.0.0.11
    # We send notifications to High Available RabbitMQ:
    notifier_strategy = rabbit
-   rabbit_host = 192.168.42.102
+   rabbit_host = 10.0.0.11
 .. _ha-cinder-services:
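
Before restarting the Block Storage services, the new ``sql_connection``
credentials can be verified directly. A sketch, assuming the MySQL
client is installed and the ``cinder`` database user exists as
configured:

    $ mysql -h 10.0.0.11 -u cinder -p -e 'SHOW TABLES;' cinder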
@@ -103,7 +103,7 @@ you should create two virtual IPs and define your endpoint like this:
    $ keystone endpoint-create --region $KEYSTONE_REGION \
      --service-id $service-id \
      --publicurl 'http://PUBLIC_VIP:8776/v1/%(tenant_id)s' \
-     --adminurl 'http://192.168.42.103:8776/v1/%(tenant_id)s' \
-     --internalurl 'http://192.168.42.103:8776/v1/%(tenant_id)s'
+     --adminurl 'http://10.0.0.11:8776/v1/%(tenant_id)s' \
+     --internalurl 'http://10.0.0.11:8776/v1/%(tenant_id)s'


@@ -41,7 +41,7 @@ and add the following cluster resources:
    params config="/etc/glance/glance-api.conf" \
    os_password="secretsecret" \
    os_username="admin" os_tenant_name="admin" \
-   os_auth_url="http://192.168.42.103:5000/v2.0/" \
+   os_auth_url="http://10.0.0.11:5000/v2.0/" \
    op monitor interval="30s" timeout="30s"
 This configuration creates ``p_glance-api``,
@@ -71,22 +71,22 @@ to configure the OpenStack image service:
 .. code-block:: ini
    # We have to use MySQL connection to store data:
-   sql_connection=mysql://glance:password@192.168.42.101/glance
+   sql_connection=mysql://glance:password@10.0.0.11/glance
    # Alternatively, you can switch to pymysql,
    # a new Python 3 compatible library and use
-   # sql_connection=mysql+pymysql://glance:password@192.168.42.101/glance
+   # sql_connection=mysql+pymysql://glance:password@10.0.0.11/glance
    # and be ready when everything moves to Python 3.
    # Ref: https://wiki.openstack.org/wiki/PyMySQL_evaluation
    # We bind OpenStack Image API to the VIP:
-   bind_host = 192.168.42.103
+   bind_host = 10.0.0.11
    # Connect to OpenStack Image registry service:
-   registry_host = 192.168.42.103
+   registry_host = 10.0.0.11
    # We send notifications to High Available RabbitMQ:
    notifier_strategy = rabbit
-   rabbit_host = 192.168.42.102
+   rabbit_host = 10.0.0.11
 [TODO: need more discussion of these parameters]
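
Once ``glance-api`` binds to the VIP, the versions endpoint offers a
quick liveness check (a sketch; the exact JSON varies by release):

    $ curl http://10.0.0.11:9292/versions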
@@ -103,7 +103,7 @@ of an OpenStack Image API server
 as you would in a non-HA cluster.
 For OpenStack Compute, for example,
-if your OpenStack Image API service IP address is 192.168.42.103
+if your OpenStack Image API service IP address is 10.0.0.11
 (as in the configuration explained here),
 you would use the following configuration in your :file:`nova.conf` file:
@@ -111,7 +111,7 @@ you would use the following configuration in your :file:`nova.conf` file:
    [glance]
    ...
-   api_servers = 192.168.42.103
+   api_servers = 10.0.0.11
    ...
@@ -124,7 +124,7 @@ and define your endpoint like this:
    $ keystone endpoint-create --region $KEYSTONE_REGION \
      --service-id $service-id --publicurl 'http://PUBLIC_VIP:9292' \
-     --adminurl 'http://192.168.42.103:9292' \
-     --internalurl 'http://192.168.42.103:9292'
+     --adminurl 'http://10.0.0.11:9292' \
+     --internalurl 'http://10.0.0.11:9292'


@@ -36,7 +36,7 @@ API resource. Connect to the Pacemaker cluster with the
    os_password="secretsecret"
    os_username="admin" \
    os_tenant_name="admin"
-   keystone_get_token_url="http://192.168.42.103:5000/v2.0/tokens" \
+   keystone_get_token_url="http://10.0.0.11:5000/v2.0/tokens" \
    op monitor interval="30s" timeout="30s"
 This configuration creates ``p_manila-api``, a resource for managing the
@@ -64,14 +64,14 @@ Edit the :file:`/etc/manila/manila.conf` file:
    :linenos:
    # We have to use MySQL connection to store data:
-   sql_connection = mysql+pymysql://manila:password@192.168.42.101/manila?charset=utf8
+   sql_connection = mysql+pymysql://manila:password@10.0.0.11/manila?charset=utf8
    # We bind Shared File Systems API to the VIP:
-   osapi_volume_listen = 192.168.42.103
+   osapi_volume_listen = 10.0.0.11
    # We send notifications to High Available RabbitMQ:
    notifier_strategy = rabbit
-   rabbit_host = 192.168.42.102
+   rabbit_host = 10.0.0.11
 .. _ha-manila-services:
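
The ``rabbit_host`` setting now points at the VIP as well; basic
reachability of the AMQP port can be probed before starting the Shared
File Systems services (a sketch, assuming ``nc`` is available):

    $ nc -zv 10.0.0.11 5672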
@@ -95,7 +95,7 @@ virtual IPs and define your endpoints like this:
      sharev2 public 'http://PUBLIC_VIP:8786/v2/%(tenant_id)s'
    $ openstack endpoint create --region RegionOne \
-     sharev2 internal 'http://192.168.42.103:8786/v2/%(tenant_id)s'
+     sharev2 internal 'http://10.0.0.11:8786/v2/%(tenant_id)s'
    $ openstack endpoint create --region RegionOne \
-     sharev2 admin 'http://192.168.42.103:8786/v2/%(tenant_id)s'
+     sharev2 admin 'http://10.0.0.11:8786/v2/%(tenant_id)s'