Retire repository

Retire repository following the process on retiring an OpenStack
repository:
http://docs.openstack.org/infra/manual/drivers.html#remove-project-content

This removes *all* content and just leaves a single README.rst that
explains how to get the content.

Depends-On: I9f4e21b44c717d11511fea48db54a52103e294b1
Change-Id: I4281697640489a779a3ef82b23174262a0baa3fc
Andreas Jaeger, 2 years ago
parent commit 1510c36ac5
64 changed files with 10 additions and 28064 deletions
 1. .gitignore (+0, -24)
 2. .gitreview (+0, -4)
 3. README.rst (+10, -62)
 4. doc-test.conf (+0, -2)
 5. doc-tools-check-languages.conf (+0, -31)
 6. doc/common/README.txt (+0, -7)
 7. doc/common/app_support.rst (+0, -256)
 8. doc/common/conventions.rst (+0, -47)
 9. doc/common/glossary.rst (+0, -3950)
10. doc/common/source/locale/ja/LC_MESSAGES/common.po (+0, -10567)
11. doc/ha-guide/setup.cfg (+0, -30)
12. doc/ha-guide/setup.py (+0, -30)
13. doc/ha-guide/source/common (+0, -1)
14. doc/ha-guide/source/compute-node-ha-api.rst (+0, -12)
15. doc/ha-guide/source/compute-node-ha.rst (+0, -10)
16. doc/ha-guide/source/conf.py (+0, -289)
17. doc/ha-guide/source/controller-ha-galera-config.rst (+0, -396)
18. doc/ha-guide/source/controller-ha-galera-install.rst (+0, -275)
19. doc/ha-guide/source/controller-ha-galera-manage.rst (+0, -256)
20. doc/ha-guide/source/controller-ha-galera.rst (+0, -33)
21. doc/ha-guide/source/controller-ha-haproxy.rst (+0, -229)
22. doc/ha-guide/source/controller-ha-keystone.rst (+0, -147)
23. doc/ha-guide/source/controller-ha-memcached.rst (+0, -21)
24. doc/ha-guide/source/controller-ha-pacemaker.rst (+0, -633)
25. doc/ha-guide/source/controller-ha-rabbitmq.rst (+0, -310)
26. doc/ha-guide/source/controller-ha-telemetry.rst (+0, -78)
27. doc/ha-guide/source/controller-ha-vip.rst (+0, -24)
28. doc/ha-guide/source/controller-ha.rst (+0, -20)
29. doc/ha-guide/source/figures/Cluster-deployment-collapsed.png (BIN)
30. doc/ha-guide/source/figures/Cluster-deployment-segregated.png (BIN)
31. doc/ha-guide/source/figures/keepalived-arch.jpg (BIN)
32. doc/ha-guide/source/hardware-ha-basic.rst (+0, -47)
33. doc/ha-guide/source/hardware-ha.rst (+0, -15)
34. doc/ha-guide/source/index.rst (+0, -43)
35. doc/ha-guide/source/install-ha-memcached.rst (+0, -42)
36. doc/ha-guide/source/install-ha-ntp.rst (+0, -9)
37. doc/ha-guide/source/install-ha-os.rst (+0, -24)
38. doc/ha-guide/source/install-ha.rst (+0, -12)
39. doc/ha-guide/source/intro-ha-arch-keepalived.rst (+0, -96)
40. doc/ha-guide/source/intro-ha-arch-pacemaker.rst (+0, -198)
41. doc/ha-guide/source/intro-ha-compute.rst (+0, -4)
42. doc/ha-guide/source/intro-ha-concepts.rst (+0, -213)
43. doc/ha-guide/source/intro-ha-controller.rst (+0, -62)
44. doc/ha-guide/source/intro-ha-other.rst (+0, -4)
45. doc/ha-guide/source/intro-ha-storage.rst (+0, -12)
46. doc/ha-guide/source/intro-ha.rst (+0, -15)
47. doc/ha-guide/source/locale/ha-guide.pot (+0, -4261)
48. doc/ha-guide/source/locale/ja/LC_MESSAGES/ha-guide.po (+0, -4398)
49. doc/ha-guide/source/networking-ha-dhcp.rst (+0, -17)
50. doc/ha-guide/source/networking-ha-l3.rst (+0, -37)
51. doc/ha-guide/source/networking-ha-lbaas.rst (+0, -17)
52. doc/ha-guide/source/networking-ha-metadata.rst (+0, -18)
53. doc/ha-guide/source/networking-ha.rst (+0, -60)
54. doc/ha-guide/source/noncore-ha.rst (+0, -4)
55. doc/ha-guide/source/storage-ha-backend.rst (+0, -84)
56. doc/ha-guide/source/storage-ha-cinder.rst (+0, -238)
57. doc/ha-guide/source/storage-ha-glance.rst (+0, -130)
58. doc/ha-guide/source/storage-ha-manila.rst (+0, -101)
59. doc/ha-guide/source/storage-ha.rst (+0, -13)
60. other-requirements.txt (+0, -13)
61. test-requirements.txt (+0, -9)
62. tools/build-all-rst.sh (+0, -6)
63. tools/generatepot-rst.sh (+0, -42)
64. tox.ini (+0, -76)

.gitignore (+0, -24)

@@ -1,24 +0,0 @@
-.DS_Store
-*.xpr
-
-# Packages
-.venv
-*.egg
-*.egg-info
-
-# Build directories
-target/
-publish-docs/
-build/
-/build-*.log.gz
-
-# Testenvironment
-.tox/
-
-# Transifex Client Setting
-.tx
-
-# Editors
-*~
-.*.swp
-.bak

.gitreview (+0, -4)

@@ -1,4 +0,0 @@
-[gerrit]
-host=review.openstack.org
-port=29418
-project=openstack/ha-guide.git

README.rst (+10, -62)

@@ -1,65 +1,13 @@
-OpenStack High Availability Guide
-+++++++++++++++++++++++++++++++++
+This project is no longer maintained.
 
-This repository contains the OpenStack High Availability Guide.
+The contents of this repository are still available in the Git
+source code management system.  To see the contents of this
+repository before it reached its end of life, please check out the
+previous commit with "git checkout HEAD^1".
 
-For more details, see the `OpenStack Documentation wiki page
-<http://wiki.openstack.org/Documentation>`_.
+The content has been merged into the openstack-manuals repository at
+http://git.openstack.org/cgit/openstack/openstack-manuals/
 
-Building
-========
-
-The root directory of the *OpenStack High Availability Guide*
-is ``doc/ha-guide``.
-
-To build the guide, run ``tox -e docs``.
-
-Testing of changes and building of the manual
-=============================================
-
-Install the python tox package and run ``tox`` from the top-level
-directory to use the same tests that are done as part of our Jenkins
-gating jobs.
-
-If you like to run individual tests, run:
-
- * ``tox -e checkniceness`` - to run the niceness tests
- * ``tox -e checkbuild`` - to actually build the manual
-
-tox will use the openstack-doc-tools package for execution of these
-tests.
-
-
-Contributing
-============
-
-Our community welcomes all people interested in open source cloud
-computing, and encourages you to join the `OpenStack Foundation
-<http://www.openstack.org/join>`_.
-
-The best way to get involved with the community is to talk with others
-online or at a meet up and offer contributions through our processes,
-the `OpenStack wiki <http://wiki.openstack.org>`_, blogs, or on IRC at
-``#openstack`` on ``irc.freenode.net``.
-
-We welcome all types of contributions, from blueprint designs to
-documentation to testing to deployment scripts.
-
-If you would like to contribute to the documents, please see the
-`OpenStack Documentation Contributor Guide
-<http://docs.openstack.org/contributor-guide/>`_.
-
-
-Bugs
-====
-
-Bugs should be filed on Launchpad, not GitHub:
-
-   https://bugs.launchpad.net/openstack-manuals
-
-
-Installing
-==========
-
-Refer to http://docs.openstack.org to see where these documents are published
-and to learn more about the OpenStack project.
+For any further questions, please email
+openstack-docs@lists.openstack.org or join #openstack-doc on
+Freenode.
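
The new README tells readers to run "git checkout HEAD^1" to reach the tree as it was before this commit. A minimal sketch of recovering the retired guide, assuming a clone of the openstack/ha-guide repository named in .gitreview above (the exact clone URL is an assumption; any mirror of the repository behaves the same way):

   $ git clone https://git.openstack.org/openstack/ha-guide   # assumed clone URL
   $ cd ha-guide
   $ git checkout HEAD^1   # detach HEAD at the parent of the retirement commit; the full guide tree is restored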

doc-test.conf (+0, -2)

@@ -1,2 +0,0 @@
-[DEFAULT]
-repo_name = ha-guide

doc-tools-check-languages.conf (+0, -31)

@@ -1,31 +0,0 @@
-# Configuration for translation setup.
-
-# directories to be set up
-declare -A DIRECTORIES=(
-)
-
-# books to be built
-declare -A BOOKS=(
-    ["ja"]="ha-guide"
-)
-
-# draft books
-declare -A DRAFTS=(
-    ["ja"]="ha-guide"
-)
-
-# Where does the top-level pom live?
-# Set to empty to not copy it.
-POM_FILE=""
-
-# Location of doc dir
-DOC_DIR="doc/"
-
-# Books with special handling
-# Values need to match content in project-config/jenkins/scripts/common_translation_update.sh
-declare -A SPECIAL_BOOKS
-SPECIAL_BOOKS=(
-    ["ha-guide"]="RST"
-    # These are translated in openstack-manuals
-    ["common"]="skip"
-)

doc/common/README.txt (+0, -7)

@@ -1,7 +0,0 @@
-Important note about this directory
-===================================
-
-Because this directory is synced from openstack-manuals, make any changes in
-openstack-manuals/doc/common. After changes to the synced files merge to
-openstack-manuals/doc/common, a patch is automatically proposed for this
-directory.

doc/common/app_support.rst (+0, -256)

@@ -1,256 +0,0 @@
-.. ## WARNING ##########################################################
-.. This file is synced from openstack/openstack-manuals repository to
-.. other related repositories. If you need to make changes to this file,
-.. make the changes in openstack-manuals. After any change merged to,
-.. openstack-manuals, automatically a patch for others will be proposed.
-.. #####################################################################
-
-=================
-Community support
-=================
-
-The following resources are available to help you run and use OpenStack.
-The OpenStack community constantly improves and adds to the main
-features of OpenStack, but if you have any questions, do not hesitate to
-ask. Use the following resources to get OpenStack support, and
-troubleshoot your installations.
-
-Documentation
-~~~~~~~~~~~~~
-
-For the available OpenStack documentation, see
-`docs.openstack.org <http://docs.openstack.org>`__.
-
-To provide feedback on documentation, join and use the
-openstack-docs@lists.openstack.org mailing list at `OpenStack
-Documentation Mailing
-List <http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-docs>`__,
-or `report a
-bug <https://bugs.launchpad.net/openstack-manuals/+filebug>`__.
-
-The following books explain how to install an OpenStack cloud and its
-associated components:
-
-*  `Installation Guide for openSUSE Leap 42.1 and SUSE Linux Enterprise
-   Server 12 SP1
-   <http://docs.openstack.org/mitaka/install-guide-obs/>`__
-
-*  `Installation Guide for Red Hat Enterprise Linux 7 and CentOS 7
-   <http://docs.openstack.org/mitaka/install-guide-rdo/>`__
-
-*  `Installation Guide for Ubuntu 14.04 (LTS)
-   <http://docs.openstack.org/mitaka/install-guide-ubuntu/>`__
-
-The following books explain how to configure and run an OpenStack cloud:
-
-*  `Architecture Design Guide <http://docs.openstack.org/arch-design/>`__
-
-*  `Administrator Guide <http://docs.openstack.org/admin-guide/>`__
-
-*  `Configuration Reference <http://docs.openstack.org/mitaka/config-reference/>`__
-
-*  `Operations Guide <http://docs.openstack.org/ops/>`__
-
-*  `Networking Guide <http://docs.openstack.org/mitaka/networking-guide>`__
-
-*  `High Availability Guide <http://docs.openstack.org/ha-guide/>`__
-
-*  `Security Guide <http://docs.openstack.org/sec/>`__
-
-*  `Virtual Machine Image Guide <http://docs.openstack.org/image-guide/>`__
-
-The following books explain how to use the OpenStack dashboard and
-command-line clients:
-
-*  `API Guide <http://developer.openstack.org/api-guide/quick-start/>`__
-
-*  `End User Guide <http://docs.openstack.org/user-guide/>`__
-
-*  `Command-Line Interface Reference
-   <http://docs.openstack.org/cli-reference/>`__
-
-The following documentation provides reference and guidance information
-for the OpenStack APIs:
-
-*  `API Complete Reference
-   (HTML) <http://developer.openstack.org/api-ref.html>`__
-
-*  `API Complete Reference
-   (PDF) <http://developer.openstack.org/api-ref-guides/bk-api-ref.pdf>`__
-
-The following guide provides how to contribute to OpenStack documentation:
-
-*  `Documentation Contributor Guide <http://docs.openstack.org/contributor-guide/>`__
-
-ask.openstack.org
-~~~~~~~~~~~~~~~~~
-
-During the set up or testing of OpenStack, you might have questions
-about how a specific task is completed or be in a situation where a
-feature does not work correctly. Use the
-`ask.openstack.org <https://ask.openstack.org>`__ site to ask questions
-and get answers. When you visit the https://ask.openstack.org site, scan
-the recently asked questions to see whether your question has already
-been answered. If not, ask a new question. Be sure to give a clear,
-concise summary in the title and provide as much detail as possible in
-the description. Paste in your command output or stack traces, links to
-screen shots, and any other information which might be useful.
-
-OpenStack mailing lists
-~~~~~~~~~~~~~~~~~~~~~~~
-
-A great way to get answers and insights is to post your question or
-problematic scenario to the OpenStack mailing list. You can learn from
-and help others who might have similar issues. To subscribe or view the
-archives, go to
-http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack. If you are
-interested in the other mailing lists for specific projects or development,
-refer to `Mailing Lists <https://wiki.openstack.org/wiki/Mailing_Lists>`__.
-
-The OpenStack wiki
-~~~~~~~~~~~~~~~~~~
-
-The `OpenStack wiki <https://wiki.openstack.org/>`__ contains a broad
-range of topics but some of the information can be difficult to find or
-is a few pages deep. Fortunately, the wiki search feature enables you to
-search by title or content. If you search for specific information, such
-as about networking or OpenStack Compute, you can find a large amount
-of relevant material. More is being added all the time, so be sure to
-check back often. You can find the search box in the upper-right corner
-of any OpenStack wiki page.
-
-The Launchpad Bugs area
-~~~~~~~~~~~~~~~~~~~~~~~
-
-The OpenStack community values your set up and testing efforts and wants
-your feedback. To log a bug, you must sign up for a Launchpad account at
-https://launchpad.net/+login. You can view existing bugs and report bugs
-in the Launchpad Bugs area. Use the search feature to determine whether
-the bug has already been reported or already been fixed. If it still
-seems like your bug is unreported, fill out a bug report.
-
-Some tips:
-
-*  Give a clear, concise summary.
-
-*  Provide as much detail as possible in the description. Paste in your
-   command output or stack traces, links to screen shots, and any other
-   information which might be useful.
-
-*  Be sure to include the software and package versions that you are
-   using, especially if you are using a development branch, such as,
-   ``"Kilo release" vs git commit bc79c3ecc55929bac585d04a03475b72e06a3208``.
-
-*  Any deployment-specific information is helpful, such as whether you
-   are using Ubuntu 14.04 or are performing a multi-node installation.
-
-The following Launchpad Bugs areas are available:
-
-*  `Bugs: OpenStack Block Storage
-   (cinder) <https://bugs.launchpad.net/cinder>`__
-
-*  `Bugs: OpenStack Compute (nova) <https://bugs.launchpad.net/nova>`__
-
-*  `Bugs: OpenStack Dashboard
-   (horizon) <https://bugs.launchpad.net/horizon>`__
-
-*  `Bugs: OpenStack Identity
-   (keystone) <https://bugs.launchpad.net/keystone>`__
-
-*  `Bugs: OpenStack Image service
-   (glance) <https://bugs.launchpad.net/glance>`__
-
-*  `Bugs: OpenStack Networking
-   (neutron) <https://bugs.launchpad.net/neutron>`__
-
-*  `Bugs: OpenStack Object Storage
-   (swift) <https://bugs.launchpad.net/swift>`__
-
-*  `Bugs: Application catalog (murano) <https://bugs.launchpad.net/murano>`__
-
-*  `Bugs: Bare metal service (ironic) <https://bugs.launchpad.net/ironic>`__
-
-*  `Bugs: Clustering service (senlin) <https://bugs.launchpad.net/senlin>`__
-
-*  `Bugs: Containers service (magnum) <https://bugs.launchpad.net/magnum>`__
-
-*  `Bugs: Data processing service
-   (sahara) <https://bugs.launchpad.net/sahara>`__
-
-*  `Bugs: Database service (trove) <https://bugs.launchpad.net/trove>`__
-
-*  `Bugs: Deployment service (fuel) <https://bugs.launchpad.net/fuel>`__
-
-*  `Bugs: DNS service (designate) <https://bugs.launchpad.net/designate>`__
-
-*  `Bugs: Key Manager Service (barbican) <https://bugs.launchpad.net/barbican>`__
-
-*  `Bugs: Monitoring (monasca) <https://bugs.launchpad.net/monasca>`__
-
-*  `Bugs: Orchestration (heat) <https://bugs.launchpad.net/heat>`__
-
-*  `Bugs: Rating (cloudkitty) <https://bugs.launchpad.net/cloudkitty>`__
-
-*  `Bugs: Shared file systems (manila) <https://bugs.launchpad.net/manila>`__
-
-*  `Bugs: Telemetry
-   (ceilometer) <https://bugs.launchpad.net/ceilometer>`__
-
-*  `Bugs: Telemetry v3
-   (gnocchi) <https://bugs.launchpad.net/gnocchi>`__
-
-*  `Bugs: Workflow service
-   (mistral) <https://bugs.launchpad.net/mistral>`__
-
-*  `Bugs: Messaging service
-   (zaqar) <https://bugs.launchpad.net/zaqar>`__
-
-*  `Bugs: OpenStack API Documentation
-   (developer.openstack.org) <https://bugs.launchpad.net/openstack-api-site>`__
-
-*  `Bugs: OpenStack Documentation
-   (docs.openstack.org) <https://bugs.launchpad.net/openstack-manuals>`__
-
-The OpenStack IRC channel
-~~~~~~~~~~~~~~~~~~~~~~~~~
-
-The OpenStack community lives in the #openstack IRC channel on the
-Freenode network. You can hang out, ask questions, or get immediate
-feedback for urgent and pressing issues. To install an IRC client or use
-a browser-based client, go to
-`https://webchat.freenode.net/ <https://webchat.freenode.net>`__. You can
-also use Colloquy (Mac OS X, http://colloquy.info/), mIRC (Windows,
-http://www.mirc.com/), or XChat (Linux). When you are in the IRC channel
-and want to share code or command output, the generally accepted method
-is to use a Paste Bin. The OpenStack project has one at
-http://paste.openstack.org. Just paste your longer amounts of text or
-logs in the web form and you get a URL that you can paste into the
-channel. The OpenStack IRC channel is ``#openstack`` on
-``irc.freenode.net``. You can find a list of all OpenStack IRC channels
-at https://wiki.openstack.org/wiki/IRC.
-
-Documentation feedback
-~~~~~~~~~~~~~~~~~~~~~~
-
-To provide feedback on documentation, join and use the
-openstack-docs@lists.openstack.org mailing list at `OpenStack
-Documentation Mailing
-List <http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-docs>`__,
-or `report a
-bug <https://bugs.launchpad.net/openstack-manuals/+filebug>`__.
-
-OpenStack distribution packages
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-The following Linux distributions provide community-supported packages
-for OpenStack:
-
-*  **Debian:** https://wiki.debian.org/OpenStack
-
-*  **CentOS, Fedora, and Red Hat Enterprise Linux:**
-   https://www.rdoproject.org/
-
-*  **openSUSE and SUSE Linux Enterprise Server:**
-   https://en.opensuse.org/Portal:OpenStack
-
-*  **Ubuntu:** https://wiki.ubuntu.com/ServerTeam/CloudArchive

doc/common/conventions.rst (+0, -47)

@@ -1,47 +0,0 @@
-.. ## WARNING ##########################################################
-.. This file is synced from openstack/openstack-manuals repository to
-.. other related repositories. If you need to make changes to this file,
-.. make the changes in openstack-manuals. After any change merged to,
-.. openstack-manuals, automatically a patch for others will be proposed.
-.. #####################################################################
-
-===========
-Conventions
-===========
-
-The OpenStack documentation uses several typesetting conventions.
-
-Notices
-~~~~~~~
-
-Notices take these forms:
-
-.. note:: A comment with additional information that explains a part of the
-          text.
-
-.. important:: Something you must be aware of before proceeding.
-
-.. tip:: An extra but helpful piece of practical advice.
-
-.. caution:: Helpful information that prevents the user from making mistakes.
-
-.. warning:: Critical information about the risk of data loss or security
-             issues.
-
-Command prompts
-~~~~~~~~~~~~~~~
-
-.. code-block:: console
-
-   $ command
-
-Any user, including the ``root`` user, can run commands that are
-prefixed with the ``$`` prompt.
-
-.. code-block:: console
-
-   # command
-
-The ``root`` user must run commands that are prefixed with the ``#``
-prompt. You can also prefix these commands with the :command:`sudo`
-command, if available, to run them.

doc/common/glossary.rst (+0, -3950)

File diff suppressed because it is too large


doc/common/source/locale/ja/LC_MESSAGES/common.po (+0, -10567)

File diff suppressed because it is too large


doc/ha-guide/setup.cfg (+0, -30)

@@ -1,30 +0,0 @@
-[metadata]
-name = openstackhaguide
-summary = OpenStack High Availability Guide
-author = OpenStack
-author-email = openstack-docs@lists.openstack.org
-home-page = http://docs.openstack.org/
-classifier =
-Environment :: OpenStack
-Intended Audience :: Information Technology
-Intended Audience :: System Administrators
-License :: OSI Approved :: Apache Software License
-Operating System :: POSIX :: Linux
-Topic :: Documentation
-
-[global]
-setup-hooks =
-    pbr.hooks.setup_hook
-
-[files]
-
-[build_sphinx]
-all_files = 1
-build-dir = build
-source-dir = source
-
-[wheel]
-universal = 1
-
-[pbr]
-warnerrors = True

doc/ha-guide/setup.py (+0, -30)

@@ -1,30 +0,0 @@
-#!/usr/bin/env python
-# Copyright (c) 2013 Hewlett-Packard Development Company, L.P.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#    http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
-# implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-# THIS FILE IS MANAGED BY THE GLOBAL REQUIREMENTS REPO - DO NOT EDIT
-import setuptools
-
-# In python < 2.7.4, a lazy loading of package `pbr` will break
-# setuptools if some other modules registered functions in `atexit`.
-# solution from: http://bugs.python.org/issue15881#msg170215
-try:
-    import multiprocessing  # noqa
-except ImportError:
-    pass
-
-setuptools.setup(
-    setup_requires=['pbr'],
-    pbr=True)

doc/ha-guide/source/common (+0, -1)

@@ -1 +0,0 @@
-../../common

doc/ha-guide/source/compute-node-ha-api.rst (+0, -12)

@@ -1,12 +0,0 @@
-
-============================================
-Configure high availability on compute nodes
-============================================
-
-The `Installation Guide
-<http://docs.openstack.org/liberty/#install-guides>`_
-gives instructions for installing multiple compute nodes.
-To make them highly available,
-you must configure the environment
-to include multiple instances of the API
-and other services.

doc/ha-guide/source/compute-node-ha.rst (+0, -10)

@@ -1,10 +0,0 @@
-
-==================================================
-Configuring the compute node for high availability
-==================================================
-
-.. toctree::
-   :maxdepth: 2
-
-   compute-node-ha-api.rst
-

doc/ha-guide/source/conf.py (+0, -289)

@@ -1,289 +0,0 @@
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#    http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
-# implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-# This file is execfile()d with the current directory set to its
-# containing dir.
-#
-# Note that not all possible configuration values are present in this
-# autogenerated file.
-#
-# All configuration values have a default; values that are commented out
-# serve to show the default.
-
-import os
-# import sys
-
-import openstackdocstheme
-
-# If extensions (or modules to document with autodoc) are in another directory,
-# add these directories to sys.path here. If the directory is relative to the
-# documentation root, use os.path.abspath to make it absolute, like shown here.
-# sys.path.insert(0, os.path.abspath('.'))
-
-# -- General configuration ------------------------------------------------
-
-# If your documentation needs a minimal Sphinx version, state it here.
-# needs_sphinx = '1.0'
-
-# Add any Sphinx extension module names here, as strings. They can be
-# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
-# ones.
-extensions = []
-
-# Add any paths that contain templates here, relative to this directory.
-# templates_path = ['_templates']
-
-# The suffix of source filenames.
-source_suffix = '.rst'
-
-# The encoding of source files.
-# source_encoding = 'utf-8-sig'
-
-# The master toctree document.
-master_doc = 'index'
-
-# General information about the project.
-project = u'High Availability Guide'
-bug_tag = u'ha-guide'
-copyright = u'2015, OpenStack contributors'
-
-# The version info for the project you're documenting, acts as replacement for
-# |version| and |release|, also used in various other places throughout the
-# built documents.
-#
-# The short X.Y version.
-version = '0.0.1'
-# The full version, including alpha/beta/rc tags.
-release = '0.0.1'
-
-# A few variables have to be set for the log-a-bug feature.
-#   giturl: The location of conf.py on Git. Must be set manually.
-#   gitsha: The SHA checksum of the bug description. Automatically extracted from git log.
-#   bug_tag: Tag for categorizing the bug. Must be set manually.
-# These variables are passed to the logabug code via html_context.
-giturl = u'http://git.openstack.org/cgit/openstack/ha-guide/tree/doc/ha-guide/source'
-git_cmd = "/usr/bin/git log | head -n1 | cut -f2 -d' '"
-gitsha = os.popen(git_cmd).read().strip('\n')
-html_context = {"gitsha": gitsha, "bug_tag": bug_tag,
-                "giturl": giturl}
-
-# The language for content autogenerated by Sphinx. Refer to documentation
-# for a list of supported languages.
-# language = None
-
-# There are two options for replacing |today|: either, you set today to some
-# non-false value, then it is used:
-# today = ''
-# Else, today_fmt is used as the format for a strftime call.
-# today_fmt = '%B %d, %Y'
-
-# List of patterns, relative to source directory, that match files and
-# directories to ignore when looking for source files.
-exclude_patterns = []
-
-# The reST default role (used for this markup: `text`) to use for all
-# documents.
-# default_role = None
-
-# If true, '()' will be appended to :func: etc. cross-reference text.
-# add_function_parentheses = True
-
-# If true, the current module name will be prepended to all description
-# unit titles (such as .. function::).
-# add_module_names = True
-
-# If true, sectionauthor and moduleauthor directives will be shown in the
-# output. They are ignored by default.
-# show_authors = False
-
-# The name of the Pygments (syntax highlighting) style to use.
-pygments_style = 'sphinx'
-
-# A list of ignored prefixes for module index sorting.
-# modindex_common_prefix = []
-
-# If true, keep warnings as "system message" paragraphs in the built documents.
-# keep_warnings = False
-
-
-# -- Options for HTML output ----------------------------------------------
-
-# The theme to use for HTML and HTML Help pages.  See the documentation for
-# a list of builtin themes.
-html_theme = 'openstackdocs'
-
-# Theme options are theme-specific and customize the look and feel of a theme
-# further.  For a list of options available for each theme, see the
-# documentation.
-# html_theme_options = {}
-
-# Add any paths that contain custom themes here, relative to this directory.
-html_theme_path = [openstackdocstheme.get_html_theme_path()]
-
-# The name for this set of Sphinx documents.  If None, it defaults to
-# "<project> v<release> documentation".
-# html_title = None
-
-# A shorter title for the navigation bar.  Default is the same as html_title.
-# html_short_title = None
-
-# The name of an image file (relative to this directory) to place at the top
-# of the sidebar.
-# html_logo = None
-
-# The name of an image file (within the static path) to use as favicon of the
-# docs.  This file should be a Windows icon file (.ico) being 16x16 or 32x32
-# pixels large.
-# html_favicon = None
-
-# Add any paths that contain custom static files (such as style sheets) here,
-# relative to this directory. They are copied after the builtin static files,
-# so a file named "default.css" will overwrite the builtin "default.css".
-# html_static_path = []
-
-# Add any extra paths that contain custom files (such as robots.txt or
-# .htaccess) here, relative to this directory. These files are copied
-# directly to the root of the documentation.
-# html_extra_path = []
-
-# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
-# using the given strftime format.
-# So that we can enable "log-a-bug" links from each output HTML page, this
-# variable must be set to a format that includes year, month, day, hours and
-# minutes.
-html_last_updated_fmt = '%Y-%m-%d %H:%M'
-
-# If true, SmartyPants will be used to convert quotes and dashes to
-# typographically correct entities.
-# html_use_smartypants = True
-
-# Custom sidebar templates, maps document names to template names.
-# html_sidebars = {}
-
-# Additional templates that should be rendered to pages, maps page names to
-# template names.
-# html_additional_pages = {}
-
-# If false, no module index is generated.
-# html_domain_indices = True
-
-# If false, no index is generated.
-html_use_index = False
-
-# If true, the index is split into individual pages for each letter.
-# html_split_index = False
-
-# If true, links to the reST sources are added to the pages.
-html_show_sourcelink = False
-
-# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
-# html_show_sphinx = True
-
-# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
-# html_show_copyright = True
-
-# If true, an OpenSearch description file will be output, and all pages will
-# contain a <link> tag referring to it.  The value of this option must be the
-# base URL from which the finished HTML is served.
-# html_use_opensearch = ''
-
-# This is the file name suffix for HTML files (e.g. ".xhtml").
-# html_file_suffix = None
-
-# Output file base name for HTML help builder.
-htmlhelp_basename = 'ha-guide'
-
-# If true, publish source files
-html_copy_source = False
-
-# -- Options for LaTeX output ---------------------------------------------
-
-latex_elements = {
-    # The paper size ('letterpaper' or 'a4paper').
-    # 'papersize': 'letterpaper',
-
-    # The font size ('10pt', '11pt' or '12pt').
-    # 'pointsize': '10pt',
-
-    # Additional stuff for the LaTeX preamble.
-    # 'preamble': '',
-}
-
-# Grouping the document tree into LaTeX files. List of tuples
-# (source start file, target name, title,
-#  author, documentclass [howto, manual, or own class]).
-latex_documents = [
-    ('index', 'HAGuide.tex', u'High Availability Guide',
-     u'OpenStack contributors', 'manual'),
-]
-
-# The name of an image file (relative to this directory) to place at the top of
-# the title page.
-# latex_logo = None
-
-# For "manual" documents, if this is true, then toplevel headings are parts,
-# not chapters.
-# latex_use_parts = False
-
-# If true, show page references after internal links.
-# latex_show_pagerefs = False
-
-# If true, show URL addresses after external links.
-# latex_show_urls = False
-
-# Documents to append as an appendix to all manuals.
-# latex_appendices = []
-
-# If false, no module index is generated.
-# latex_domain_indices = True
-
-
-# -- Options for manual page output ---------------------------------------
-
-# One entry per manual page. List of tuples
-# (source start file, name, description, authors, manual section).
-man_pages = [
-    ('index', 'haguide', u'High Availability Guide',
-     [u'OpenStack contributors'], 1)
-]
-
-# If true, show URL addresses after external links.
-# man_show_urls = False
-
-
-# -- Options for Texinfo output -------------------------------------------
-
-# Grouping the document tree into Texinfo files. List of tuples
-# (source start file, target name, title, author,
-#  dir menu entry, description, category)
-texinfo_documents = [
-    ('index', 'HAGuide', u'High Availability Guide',
-     u'OpenStack contributors', 'HAGuide',
-     'This guide shows OpenStack operators and deployers how to configure '
-     'OpenStack Networking to be robust and fault-tolerant.', 'Miscellaneous'),
-]
-
-# Documents to append as an appendix to all manuals.
-# texinfo_appendices = []
-
-# If false, no module index is generated.
-# texinfo_domain_indices = True
-
-# How to display URL addresses: 'footnote', 'no', or 'inline'.
-# texinfo_show_urls = 'footnote'
-
-# If true, do not generate a @detailmenu in the "Top" node's menu.
-# texinfo_no_detailmenu = False
-
-# -- Options for Internationalization output ------------------------------
-locale_dirs = ['locale/']

doc/ha-guide/source/controller-ha-galera-config.rst (+0, -396)

@@ -1,396 +0,0 @@
-Configuration
-==============
-
-Before you launch Galera Cluster, you need to configure the server
-and the database to operate as part of the cluster.
-
-Configuring the server
-~~~~~~~~~~~~~~~~~~~~~~~
-
-Certain services running on the underlying operating system of your
-OpenStack database may block Galera Cluster from normal operation
-or prevent ``mysqld`` from achieving network connectivity with the cluster.
-
-
-Firewall
----------
-
-Galera Cluster requires that you open four ports to network traffic:
-
-- On ``3306``, Galera Cluster uses TCP for database client connections
-  and State Snapshot Transfer methods that require the client
-  (that is, ``mysqldump``).
-- On ``4567``, Galera Cluster uses TCP for replication traffic. Multicast
-  replication uses both TCP and UDP on this port.
-- On ``4568``, Galera Cluster uses TCP for Incremental State Transfers.
-- On ``4444``, Galera Cluster uses TCP for all other State Snapshot Transfer
-  methods.
-
-.. seealso:: For more information on firewalls, see `Firewalls and default ports
-   <http://docs.openstack.org/liberty/config-reference/content/firewalls-default-ports.html>`_ in the Configuration Reference.
-
-
-
-``iptables``
-^^^^^^^^^^^^^
-
-For many Linux distributions, you can configure the firewall using
-the ``iptables`` utility. To do so, complete the following steps:
-
-#. For each cluster node, run the following commands, replacing
-   ``NODE-IP-ADDRESS`` with the IP address of the cluster node
-   you want to open the firewall to:
-
-   .. code-block:: console
-
-      # iptables --append INPUT --in-interface eth0 \
-         --protocol tcp --match tcp --dport 3306 \
-         --source NODE-IP-ADDRESS --jump ACCEPT
-      # iptables --append INPUT --in-interface eth0 \
-         --protocol tcp --match tcp --dport 4567 \
-         --source NODE-IP-ADDRESS --jump ACCEPT
-      # iptables --append INPUT --in-interface eth0 \
-         --protocol tcp --match tcp --dport 4568 \
-         --source NODE-IP-ADDRESS --jump ACCEPT
-      # iptables --append INPUT --in-interface eth0 \
-         --protocol tcp --match tcp --dport 4444 \
-         --source NODE-IP-ADDRESS --jump ACCEPT
-
-   In the event that you also want to configure multicast replication,
-   run this command as well:
-
-   .. code-block:: console
-
-      # iptables --append INPUT --in-interface eth0 \
-         --protocol udp --match udp --dport 4567 \
-         --source NODE-IP-ADDRESS --jump ACCEPT
-
-
-#. Make the changes persistent. For servers that use ``init``, use
-   the :command:`save` command:
-
-   .. code-block:: console
-
-      # service iptables save
-
-   For servers that use ``systemd``, you need to save the current packet
-   filtering to the path of the file that ``iptables`` reads when it starts.
-   This path can vary by distribution, but common locations are in the
-   ``/etc`` directory, such as:
-
-   - ``/etc/sysconfig/iptables``
-   - ``/etc/iptables/iptables.rules``
-
-   When you find the correct path, run the :command:`iptables-save` command:
-
-   .. code-block:: console
-
-      # iptables-save > /etc/sysconfig/iptables
-
-With the firewall configuration saved, the rules remain in effect
-whenever your OpenStack database starts.
-
-``firewall-cmd``
-^^^^^^^^^^^^^^^^^
-
-For many Linux distributions, you can configure the firewall using the
-``firewall-cmd`` utility for FirewallD. To do so, complete the following
-steps on each cluster node:
-
-#. Add the Galera Cluster service:
-
-   .. code-block:: console
-
-      # firewall-cmd --add-service=mysql
-
-#. For each instance of OpenStack database in your cluster, run the
-   following commands:
-
-   .. code-block:: console
-
-      # firewall-cmd --add-port=3306/tcp
-      # firewall-cmd --add-port=4567/tcp
-      # firewall-cmd --add-port=4568/tcp
-      # firewall-cmd --add-port=4444/tcp
-
-   In the event that you also want to configure multicast replication,
-   run this command as well:
-
-   .. code-block:: console
-
-      # firewall-cmd --add-port=4567/udp
-
-#. To make this configuration persistent, repeat the above commands
-   with the :option:`--permanent` option.
-
-   .. code-block:: console
-
-      # firewall-cmd --add-service=mysql --permanent
-      # firewall-cmd --add-port=3306/tcp --permanent
-      # firewall-cmd --add-port=4567/tcp --permanent
-      # firewall-cmd --add-port=4568/tcp --permanent
-      # firewall-cmd --add-port=4444/tcp --permanent
-      # firewall-cmd --add-port=4567/udp --permanent
-
-
-With the firewall configuration saved, the rules remain in effect
-whenever your OpenStack database starts.
-
-SELinux
--------
-
-Security-Enhanced Linux is a kernel module for improving security on Linux
-operating systems. It is commonly enabled and configured by default on
-Red Hat-based distributions. In the context of Galera Cluster, systems with
-SELinux may block the database service, keep it from starting, or prevent it
-from establishing network connections with the cluster.
-
-To configure SELinux to permit Galera Cluster to operate, complete
-the following steps on each cluster node:
-
-#. Using the ``semanage`` utility, open the relevant ports:
-
-   .. code-block:: console
-
-      # semanage port -a -t mysqld_port_t -p tcp 3306
-      # semanage port -a -t mysqld_port_t -p tcp 4567
-      # semanage port -a -t mysqld_port_t -p tcp 4568
-      # semanage port -a -t mysqld_port_t -p tcp 4444
-
-   In the event that you use multicast replication, you also need to
-   open ``4567`` to UDP traffic:
-
-   .. code-block:: console
-
-      # semanage port -a -t mysqld_port_t -p udp 4567
-
-#. Set SELinux to allow the database server to run:
-
-   .. code-block:: console
-
-      # semanage permissive -a mysqld_t
-
-With these options set, SELinux now permits Galera Cluster to operate.
-
-.. note:: Bear in mind, leaving SELinux in permissive mode is not a good
-          security practice. Over the longer term, you need to develop a
-          security policy for Galera Cluster and then switch SELinux back
-          into enforcing mode.
-
-          For more information on configuring SELinux to work with
-          Galera Cluster, see the `Documentation
-          <http://galeracluster.com/documentation-webpages/selinux.html>`_.
-
-
-AppArmor
----------
-
-AppArmor is a kernel module for improving security on Linux
-operating systems. It is developed by Canonical and commonly used on
-Ubuntu-based distributions. In the context of Galera Cluster, systems
-with AppArmor may block the database service from operating normally.
-
-To configure AppArmor to work with Galera Cluster, complete the
-following steps on each cluster node:
-
-#. Create a symbolic link for the database server in the ``disable`` directory:
-
-   .. code-block:: console
-
-      # ln -s /etc/apparmor.d/usr.sbin.mysqld /etc/apparmor.d/disable/
-
-#. Restart AppArmor. For servers that use ``init``, run the following command:
-
-   .. code-block:: console
-
-      # service apparmor restart
-
-   For servers that use ``systemd``, instead run this command:
-
-   .. code-block:: console
-
-      # systemctl restart apparmor
-
-AppArmor now permits Galera Cluster to operate.
-
-
-Database configuration
-~~~~~~~~~~~~~~~~~~~~~~~
-
-MySQL databases, including MariaDB and Percona XtraDB, manage their
-configurations using a ``my.cnf`` file, which is typically located in the
-``/etc`` directory. Configuration options available in these databases are
-also available in Galera Cluster, with some restrictions and several
-additions.
-
-.. code-block:: ini
-
-   [mysqld]
-   datadir=/var/lib/mysql
-   socket=/var/lib/mysql/mysql.sock
-   user=mysql
-   binlog_format=ROW
-   bind-address=0.0.0.0
-
-   # InnoDB Configuration
-   default_storage_engine=innodb
-   innodb_autoinc_lock_mode=2
-   innodb_flush_log_at_trx_commit=0
-   innodb_buffer_pool_size=122M
-
-   # Galera Cluster Configuration
-   wsrep_provider=/usr/lib/libgalera_smm.so
-   wsrep_provider_options="pc.recovery=TRUE;gcache.size=300M"
-   wsrep_cluster_name="my_example_cluster"
-   wsrep_cluster_address="gcomm://GALERA1-IP,GALERA2-IP,GALERA3-IP"
-   wsrep_sst_method=rsync
-
-
-
-Configuring ``mysqld``
-----------------------
-
-While all of the configuration parameters available to the standard MySQL,
-MariaDB, or Percona XtraDB database server are available in Galera Cluster,
-there are some that you must define at the outset to avoid conflicts or
-unexpected behavior.
-
-- Ensure that the database server is not bound only to the localhost,
-  ``127.0.0.1``. Instead, bind it to ``0.0.0.0`` to ensure it listens on
-  all available interfaces.
-
-  .. code-block:: ini
-
-     bind-address=0.0.0.0
-
-- Ensure that the binary log format is set to use row-level replication,
-  as opposed to statement-level replication:
-
-  .. code-block:: ini
-
-     binlog_format=ROW
-
-
-Configuring InnoDB
-------------------
-
-Galera Cluster does not support non-transactional storage engines and
-requires that you use InnoDB by default. There are some additional
-parameters that you must define to avoid conflicts.
-
-- Ensure that the default storage engine is set to InnoDB:
-
-  .. code-block:: ini
-
-     default_storage_engine=InnoDB
-
-- Ensure that the InnoDB locking mode for generating auto-increment values
-  is set to ``2``, which is the interleaved locking mode:
-
-  .. code-block:: ini
-
-     innodb_autoinc_lock_mode=2
-
-  Do not change this value. Other modes may cause ``INSERT`` statements
-  on tables with auto-increment columns to fail, as well as unresolved
-  deadlocks that leave the system unresponsive.
-
-- Ensure that the InnoDB log buffer is written to file once per second,
-  rather than on each commit, to improve performance:
-
-  .. code-block:: ini
-
-     innodb_flush_log_at_trx_commit=0
-
-  Bear in mind, while setting this parameter to ``0`` or ``2`` can improve
-  performance, it introduces certain dangers. Operating system failures can
-  erase the last second of transactions. While you can recover this data
-  from another node, if the cluster goes down at the same time
-  (in the event of a data center power outage), you lose this data permanently.
-
-- Define the InnoDB memory buffer pool size. The default value is 128 MB,
-  but to compensate for Galera Cluster's additional memory usage, scale
-  your usual value back by 5%:
-
-  .. code-block:: ini
-
-     innodb_buffer_pool_size=122M
-
-
-Configuring wsrep replication
------------------------------
-
-Galera Cluster configuration parameters all have the ``wsrep_`` prefix.
-There are five that you must define for each cluster node in your
-OpenStack database.
-
-- **wsrep Provider**: The Galera Replication Plugin serves as the wsrep
-  provider for Galera Cluster. It is installed on your system as the
-  ``libgalera_smm.so`` file. You must define the path to this file in
-  your ``my.cnf``.
-
-  .. code-block:: ini
-
-     wsrep_provider="/usr/lib/libgalera_smm.so"
-
-- **Cluster Name**: Define an arbitrary name for your cluster.
-
-  .. code-block:: ini
-
-     wsrep_cluster_name="my_example_cluster"
-
-  You must use the same name on every cluster node. The connection fails
-  when this value does not match.
-
-- **Cluster Address**: List the IP addresses for each cluster node.
-
-  .. code-block:: ini
-
-     wsrep_cluster_address="gcomm://192.168.1.1,192.168.1.2,192.168.1.3"
-
-  Replace the IP addresses given here with the comma-separated list of each
-  OpenStack database in your cluster.
-
-- **Node Name**: Define the logical name of the cluster node.
-
-  .. code-block:: ini
-
-     wsrep_node_name="Galera1"
-
-- **Node Address**: Define the IP address of the cluster node.
-
-  .. code-block:: ini
-
-     wsrep_node_address="192.168.1.1"
-
-
-
-
-Additional parameters
-^^^^^^^^^^^^^^^^^^^^^^
-
-For a complete list of the available parameters, run the
-``SHOW VARIABLES`` command from within the database client:
-
-.. code-block:: mysql
-
-   SHOW VARIABLES LIKE 'wsrep_%';
-
-   +------------------------------+-------+
-   | Variable_name                | Value |
-   +------------------------------+-------+
-   | wsrep_auto_increment_control | ON    |
-   +------------------------------+-------+
-   | wsrep_causal_reads           | OFF   |
-   +------------------------------+-------+
-   | wsrep_certify_nonPK          | ON    |
-   +------------------------------+-------+
-   | ...                          | ...   |
-   +------------------------------+-------+
-   | wsrep_sync_wait              | 0     |
-   +------------------------------+-------+
-
-For the documentation of these parameters, wsrep provider options, and status
-variables available in Galera Cluster, see `Reference
-<http://galeracluster.com/documentation-webpages/reference.html>`_.

+ 0
- 275
doc/ha-guide/source/controller-ha-galera-install.rst View File

@@ -1,275 +0,0 @@
1
-Installation
2
-=============
3
-
4
-Using Galera Cluster requires that you install two packages. The first is
5
-the database server, which must include the wsrep API patch. The second
6
-package is the Galera Replication Plugin, which enables the write-set
7
-replication service functionality with the database server.
8
-
9
-There are three implementations of Galera Cluster: MySQL, MariaDB and
10
-Percona XtraDB. For each implementation, there is a software repository that
11
-provides binary packages for Debian, Red Hat, and SUSE-based Linux
12
-distributions.
13
-
14
-
15
-Enabling the repository
16
-~~~~~~~~~~~~~~~~~~~~~~~~
17
-
18
-Galera Cluster is not available in the base repositories of Linux
19
-distributions. In order to install it with your package manage, you must
20
-first enable the repository on your system. The particular methods for
21
-doing so vary depending on which distribution you use for OpenStack and
22
-which database server you want to use.
23
-
24
-Debian
25
--------
26
-
27
-For Debian and Debian-based distributions, such as Ubuntu, complete the
28
-following steps:
29
-
30
-#. Add the GnuPG key for the database repository that you want to use.
31
-
32
-   .. code-block:: console
33
-
34
-      # apt-key adv --recv-keys --keyserver \
35
-             keyserver.ubuntu.com BC19DDBA
36
-
37
-   Note that the particular key value in this command varies depending on
38
-   which database software repository you want to use.
39
-
40
-   +--------------------------+------------------------+
41
-   | Database                 | Key                    |
42
-   +==========================+========================+
43
-   | Galera Cluster for MySQL | ``BC19DDBA``           |
44
-   +--------------------------+------------------------+
45
-   | MariaDB Galera Cluster   | ``0xcbcb082a1bb943db`` |
46
-   +--------------------------+------------------------+
47
-   | Percona XtraDB Cluster   | ``1C4CBDCDCD2EFD2A``   |
48
-   +--------------------------+------------------------+
49
-
50
-#. Add the repository to your sources list. Using your preferred text
51
-   editor, create a ``galera.list`` file in the ``/etc/apt/sources.list.d/``
52
-   directory. For the contents of this file, use the lines that pertain to
53
-   the software repository you want to install:
54
-
55
-   .. code-block:: linux-config
56
-
57
-     # Galera Cluster for MySQL
58
-     deb http://releases.galeracluster.com/DISTRO RELEASE main
59
-
60
-     # MariaDB Galera Cluster
61
-     deb http://mirror.jmu.edu/pub/mariadb/repo/VERSION/DISTRO RELEASE main
62
-
63
-     # Percona XtraDB Cluster
64
-     deb http://repo.percona.com/apt RELEASE main
65
-
66
-   For each entry: Replace all instances of ``DISTRO`` with the distribution
67
-   that you use, such as ``debian`` or ``ubuntu``. Replace all instances of
68
-   ``RELEASE`` with the release of that distribution, such as ``wheezy`` or
69
-   ``trusty``. Replace all instances of ``VERSION`` with the version of the
70
-   database server that you want to install, such as ``5.6`` or ``10.0``.
71
-
72
-   .. note:: In the event that you do not know the release code-name for
73
-             your distribution, you can use the following command to
74
-             find it out:
75
-
76
-             .. code-block:: console
77
-
78
-                $ lsb_release -a
79
-
80
-
81
-#. Update the local cache.
82
-
83
-   .. code-block:: console
84
-
85
-      # apt-get update
86
-
87
-Packages in the Galera Cluster Debian repository are now available for
88
-installation on your system.
89
-
90
-Red Hat
91
---------
92
-
93
-For Red Hat Enterprise Linux and Red Hat-based Linux distributions, the
94
-process is more straightforward. In this file, only enter the text for
95
-the repository you want to use.
96
-
97
-- For Galera Cluster for MySQL, using your preferred text editor, create a
98
-  ``Galera.repo`` file in the ``/etc/yum.repos.d/`` directory.
99
-
100
-  .. code-block:: linux-config
101
-
102
-     [galera]
103
-     name = Galera Cluster for MySQL
104
-     baseurl = http://releases.galeracluster.com/DISTRO/RELEASE/ARCH
105
-     gpgkey = http://releases.galeracluster.com/GPG-KEY-galeracluster.com
106
-     gpgcheck = 1
107
-
108
-  Replace ``DISTRO`` with the name of the distribution you use, such as
109
-  ``centos`` or ``fedora``. Replace ``RELEASE`` with the release number,
110
-  such as ``7`` for CentOS 7. Replace ``ARCH`` with your system
111
-  architecture, such as ``x86_64``
112
-
113
-- For MariaDB Galera Cluster, using your preferred text editor, create a
114
-  ``Galera.repo`` file in the ``/etc/yum.repos.d/`` directory.
115
-
116
-  .. code-block:: linux-config
117
-
118
-     [mariadb]
119
-     name = MariaDB Galera Cluster
120
-     baseurl = http://yum.mariadb.org/VERSION/PACKAGE
121
-     gpgkey = https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
122
-     gpgcheck = 1
123
-
124
-  Replace ``VERSION`` with the version of MariaDB you want to install, such
125
-  as ``5.6`` or ``10.0``. Replace ``PACKAGE`` with the package type and
126
-  architecture, such as ``rhel6-amd64`` for Red Hat 6 on 64-bit
127
-  architecture.
128
-
129
-- For Percona XtraDB Cluster, run the following command:
130
-
131
-  .. code-block:: console
132
-
133
-     # yum install http://www.percona.com/downloads/percona-release/redhat/0.1-3/percona-release-0.1-3.noarch.rpm
134
-
135
-  Bear in mind that the Percona repository only supports Red Hat Enterprise
136
-  Linux and CentOS distributions.
137
-
138
-Packages in the Galera Cluster Red Hat repository are not available for
139
-installation on your system.
140
-
141
-
142
-
143
-SUSE
144
------
145
-
146
-For SUSE Enterprise Linux and SUSE-based distributions, such as openSUSE
147
-binary installations are only available for Galera Cluster for MySQL and
148
-MariaDB Galera Cluster.
149
-
150
-#. Create a ``Galera.repo`` file in the local directory. For Galera Cluster
151
-   for MySQL, use the following content:
152
-
153
-   .. code-block:: linux-config
154
-
155
-      [galera]
156
-      name = Galera Cluster for MySQL
157
-      baseurl = http://releases.galeracluster.com/DISTRO/RELEASE
158
-      gpgkey = http://releases.galeracluster.com/GPG-KEY-galeracluster.com
159
-      gpgcheck = 1
160
-
161
-   In the text: Replace ``DISTRO`` with the name of the distribution you
162
-   use, such as ``sles`` or ``opensuse``. Replace ``RELEASE`` with the
163
-   version number of that distribution.
164
-
165
-   For MariaDB Galera Cluster, instead use this content:
166
-
167
-   .. code-block:: linux-config
168
-
169
-      [mariadb]
170
-      name = MariaDB Galera Cluster
171
-      baseurl = http://yum.mariadb.org/VERSION/PACKAGE
172
-      gpgkey = https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
173
-      gpgcheck = 1
174
-
175
-   In this file, replace ``VERSION`` with the version of MariaDB you want to
176
-   install, such as ``5.6`` or ``10.0``. Replace ``PACKAGE`` with the package
177
-   type and architecture you want to use, such as ``opensuse13-amd64``.
178
-
179
-#. Add the repository to your system:
180
-
181
-   .. code-block:: console
182
-
183
-      $ sudo zypper addrepo Galera.repo
184
-
185
-#. Refresh ``zypper``:
186
-
187
-   .. code-block:: console
188
-
189
-      $ sudo zypper refresh
190
-
191
-Packages in the Galera Cluster SUSE repository are now available for
192
-installation.
193
-
194
-
195
-Installing Galera Cluster
196
-~~~~~~~~~~~~~~~~~~~~~~~~~~
197
-
198
-When you finish enabling the software repository for Galera Cluster, you can
199
-install it using your package manager. The particular command and packages
200
-you need to install vary depending on which database server you want to
201
-install and which Linux distribution you use:
202
-
203
-Galera Cluster for MySQL:
204
-
205
-
206
-- For Debian and Debian-based distributions, such as Ubuntu, run the
207
-  following command:
208
-
209
-  .. code-block:: console
210
-
211
-     # apt-get install galera-3 mysql-wsrep-5.6
212
-
213
-- For Red Hat Enterprise Linux and Red Hat-based distributions, such as
214
-  Fedora or CentOS, instead run this command:
215
-
216
-  .. code-block:: console
217
-
218
-     # yum install galera-3 mysql-wsrep-5.6
219
-
220
-- For SUSE Enterprise Linux Server and SUSE-based distributions, such as
221
-  openSUSE, instead run this command:
222
-
223
-  .. code-block:: console
224
-
225
-     # zypper install galera-3 mysql-wsrep-5.6
226
-
227
-
228
-MariaDB Galera Cluster:
229
-
230
-- For Debian and Debian-based distributions, such as Ubuntu, run the
231
-  following command:
232
-
233
-  .. code-block:: console
234
-
235
-     # apt-get install galera mariadb-galera-server
236
-
237
-- For Red Hat Enterprise Linux and Red Hat-based distributions, such as
238
-  Fedora or CentOS, instead run this command:
239
-
240
-  .. code-block:: console
241
-
242
-     # yum install galera MariaDB-Galera-server
243
-
244
-- For SUSE Enterprise Linux Server and SUSE-based distributions, such as
245
-  openSUSE, instead run this command:
246
-
247
-  .. code-block:: console
248
-
249
-     # zypper install galera MariaDB-Galera-server
250
-
251
-
252
-Percona XtraDB Cluster:
253
-
254
-
255
-- For Debian and Debian-based distributions, such as Ubuntu, run the
256
-  following command:
257
-
258
-  .. code-block:: console
259
-
260
-     # apt-get install percona-xtradb-cluster
261
-
262
-- For Red Hat Enterprise Linux and Red Hat-based distributions, such as
263
-  Fedora or CentOS, instead run this command:
264
-
265
-  .. code-block:: console
266
-
267
-     # yum install Percona-XtraDB-Cluster
268
-
269
-Galera Cluster is now installed on your system. You must repeat this
270
-process for each controller node in your cluster.
271
-
272
-.. warning:: In the event that you already installed the standalone version
273
-             of MySQL, MariaDB, or Percona XtraDB, this installation purges all
274
-             privileges on your OpenStack database server. You must reapply the
275
-             privileges listed in the installation guide.
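-
-As an illustrative sketch only: re-granting privileges for one service
-database might look like the following. The database name, user, and
-password here are placeholders; use the values from your installation
-guide.
-
-.. code-block:: mysql
-
-   GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost'
-   IDENTIFIED BY 'KEYSTONE_DBPASS';
-
-   FLUSH PRIVILEGES;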

+ 0
- 256
doc/ha-guide/source/controller-ha-galera-manage.rst View File

@@ -1,256 +0,0 @@
1
-Management
2
-===========
3
-
4
-When you finish the installation and configuration process on each
5
-cluster node for your OpenStack database, you can initialize Galera Cluster.
6
-
7
-Before you attempt this, verify that you have the following ready:
8
-
9
-- Database hosts with Galera Cluster installed. You need a
10
-  minimum of three hosts;
11
-- No firewalls between the hosts;
12
-- SELinux and AppArmor set to permit access to ``mysqld``;
13
-- The correct path to ``libgalera_smm.so`` given to the
14
-  ``wsrep_provider`` parameter (see the configuration sketch below).
15
-
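-For reference, the sketch below shows a minimal ``wsrep`` configuration.
-The file location (for example, ``/etc/mysql/my.cnf``), the library path,
-and the member addresses are illustrative; adjust them to match your
-distribution and layout.
-
-.. code-block:: ini
-
-   [mysqld]
-   wsrep_provider = /usr/lib/galera/libgalera_smm.so
-   wsrep_cluster_name = "my_openstack_cluster"
-   wsrep_cluster_address = "gcomm://10.0.0.12,10.0.0.13,10.0.0.14"
-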
16
-Initializing the cluster
17
-~~~~~~~~~~~~~~~~~~~~~~~~~
18
-
19
-In Galera Cluster, the Primary Component is the cluster of database
20
-servers that replicate into each other. In the event that a
21
-cluster node loses connectivity with the Primary Component, it
22
-defaults to a non-operational state, to avoid creating or serving
23
-inconsistent data.
24
-
25
-By default, cluster nodes do not start as part of a Primary
26
-Component. Instead, they assume that one exists somewhere and
27
-attempt to establish a connection with it. To create a Primary
28
-Component, you must start one cluster node using the
29
-``--wsrep-new-cluster`` option. You can do this using any cluster
30
-node; it is not important which one you choose. In the Primary
31
-Component, replication and state transfers bring all databases to
32
-the same state.
33
-
34
-To start the cluster, complete the following steps:
35
-
36
-#. Initialize the Primary Component on one cluster node. For
37
-   servers that use ``init``, run the following command:
38
-
39
-   .. code-block:: console
40
-
41
-      # service mysql start --wsrep-new-cluster
42
-
43
-   For servers that use ``systemd``, instead run this command:
44
-
45
-   .. code-block:: console
46
-
47
-      # systemctl start mysql --wsrep-new-cluster
48
-
49
-#. Once the database server starts, check the cluster status using
50
-   the ``wsrep_cluster_size`` status variable. From the database
51
-   client, run the following command:
52
-
53
-   .. code-block:: mysql
54
-
55
-      SHOW STATUS LIKE 'wsrep_cluster_size';
56
-
57
-      +--------------------+-------+
58
-      | Variable_name      | Value |
59
-      +--------------------+-------+
60
-      | wsrep_cluster_size | 1     |
61
-      +--------------------+-------+
62
-
63
-#. Start the database server on all other cluster nodes. For
64
-   servers that use ``init``, run the following command:
65
-
66
-   .. code-block:: console
67
-
68
-      # service mysql start
69
-
70
-   For servers that use ``systemd``, instead run this command:
71
-
72
-   .. code-block:: console
73
-
74
-      # systemctl start mysql
75
-
76
-#. When you have all cluster nodes started, log into the database
77
-   client on one of them and check the ``wsrep_cluster_size``
78
-   status variable again.
79
-
80
-   .. code-block:: mysql
81
-
82
-      SHOW STATUS LIKE 'wsrep_cluster_size';
83
-
84
-      +--------------------+-------+
85
-      | Variable_name      | Value |
86
-      +--------------------+-------+
87
-      | wsrep_cluster_size | 3     |
88
-      +--------------------+-------+
89
-
90
-When each cluster node starts, it checks the IP addresses given to
91
-the ``wsrep_cluster_address`` parameter and attempts to establish
92
-network connectivity with a database server running there. Once it
93
-establishes a connection, it attempts to join the Primary
94
-Component, requesting a state transfer as needed to bring itself
95
-into sync with the cluster.
96
-
97
-In the event that you need to restart any cluster node, you can do
98
-so. When the database server comes back up, it establishes
99
-connectivity with the Primary Component and updates itself to any
100
-changes it may have missed while down.
101
-
102
-
103
-Restarting the cluster
104
------------------------
105
-
106
-Individual cluster nodes can be stopped and restarted without issue.
107
-When a database loses its connection or restarts, Galera Cluster
108
-brings it back into sync once it reestablishes connection with the
109
-Primary Component. In the event that you need to restart the
110
-entire cluster, identify the most advanced cluster node and
111
-initialize the Primary Component on that node.
112
-
113
-To find the most advanced cluster node, you need to check the
114
-sequence numbers, or seqnos, on the last committed transaction for
115
-each. You can find this by viewing the ``grastate.dat`` file in the
116
-database directory:
117
-
118
-.. code-block:: console
119
-
120
-   $ cat /path/to/datadir/grastate.dat
121
-
122
-   # Galera saved state
123
-   version: 3.8
124
-   uuid:    5ee99582-bb8d-11e2-b8e3-23de375c1d30
125
-   seqno:   8204503945773
126
-
127
-Alternatively, if the database server is running, use the
128
-``wsrep_last_committed`` status variable:
129
-
130
-.. code-block:: mysql
131
-
132
-   SHOW STATUS LIKE 'wsrep_last_committed';
133
-
134
-   +----------------------+--------+
135
-   | Variable_name        | Value  |
136
-   +----------------------+--------+
137
-   | wsrep_last_committed | 409745 |
138
-   +----------------------+--------+
139
-
140
-This value increments with each transaction, so the most advanced
141
-node has the highest sequence number, and therefore is the most up to date.
142
-
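-As an illustrative example, assuming SSH access to each node and a
-datadir of ``/var/lib/mysql``, you can compare the seqnos across the
-cluster with a loop like this (host names are placeholders):
-
-.. code-block:: console
-
-   $ for node in controller1 controller2 controller3; \
-     do ssh $node grep seqno /var/lib/mysql/grastate.dat; done
-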
143
-
144
-Configuration tips
145
-~~~~~~~~~~~~~~~~~~~
146
-
147
-
148
-Deployment strategies
149
-----------------------
150
-
151
-Galera can be configured using one of the following
152
-strategies:
153
-
154
-- Each instance has its own IP address;
155
-
156
-  OpenStack services are configured with the list of these IP
157
-  addresses so they can select one of the addresses from those
158
-  available.
159
-
160
-- Galera runs behind HAProxy.
161
-
162
-  HAProxy load balances incoming requests and exposes just one IP
163
-  address for all the clients.
164
-
165
-  Galera synchronous replication guarantees a zero slave lag. The
166
-  failover procedure completes once HAProxy detects that the active
167
-  back end has gone down and switches to the backup one, which is
168
-  then marked as 'UP'. If no back ends are up (in other words, the
169
-  Galera cluster is not ready to accept connections), the failover
170
-  procedure finishes only when the Galera cluster has been
171
-  successfully reassembled. The SLA is normally no more than 5
172
-  minutes.
173
-
174
-- Use MySQL/Galera in active/passive mode to avoid deadlocks on
175
-  ``SELECT ... FOR UPDATE`` type queries (used, for example, by nova
176
-  and neutron). This issue is discussed more in the following:
177
-
178
-  - http://lists.openstack.org/pipermail/openstack-dev/2014-May/035264.html
179
-  - http://www.joinfu.com/
180
-
181
-Of these options, the second one is highly recommended. Although Galera
182
-supports active/active configurations, we recommend active/passive
183
-(enforced by the load balancer) in order to avoid lock contention.
184
-
185
-
186
-
187
-Configuring HAProxy
188
---------------------
189
-
190
-If you use HAProxy for load-balancing client access to Galera
191
-Cluster as described in :doc:`controller-ha-haproxy`, you can
192
-use the ``clustercheck`` utility to improve health checks.
193
-
194
-#. Create a configuration file for ``clustercheck`` at
195
-   ``/etc/sysconfig/clustercheck``:
196
-
197
-   .. code-block:: ini
198
-
199
-      MYSQL_USERNAME="clustercheck_user"
200
-      MYSQL_PASSWORD="my_clustercheck_password"
201
-      MYSQL_HOST="localhost"
202
-      MYSQL_PORT="3306"
203
-
204
-#. Log in to the database client and grant the ``clustercheck`` user
205
-   ``PROCESS`` privileges.
206
-
207
-   .. code-block:: mysql
208
-
209
-      GRANT PROCESS ON *.* TO 'clustercheck_user'@'localhost'
210
-      IDENTIFIED BY 'my_clustercheck_password';
211
-
212
-      FLUSH PRIVILEGES;
213
-
214
-   You only need to do this on one cluster node. Galera Cluster
215
-   replicates the user to all the others.
216
-
217
-#. Create a configuration file for the HAProxy monitor service, at
218
-   ``/etc/xinetd.d/galera-monitor``:
219
-
220
-   .. code-block:: ini
221
-
222
-      service galera-monitor
223
-      {
224
-         port = 9200
225
-         disable = no
226
-         socket_type = stream
227
-         protocol = tcp
228
-         wait = no
229
-         user = root
230
-         group = root
231
-         groups = yes
232
-         server = /usr/bin/clustercheck
233
-         type = UNLISTED
234
-         per_source = UNLIMITED
235
-         log_on_success =
236
-         log_on_failure = HOST
237
-         flags = REUSE
238
-      }
239
-
240
-#. Start the ``xinetd`` daemon for ``clustercheck``. For servers
241
-   that use ``init``, run the following commands:
242
-
243
-   .. code-block:: console
244
-
245
-      # chkconfig xinetd on
246
-      # service xinetd start
247
-
248
-   For servers that use ``systemd``, instead run these commands:
249
-
250
-   .. code-block:: console
251
-
252
-      # systemctl daemon-reload
253
-      # systemctl enable xinetd
254
-      # systemctl start xinetd
255
-
256
-
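-To verify the monitor, you can query port 9200 on a cluster node. A synced
-node returns an HTTP 200 response; the exact body text varies between
-``clustercheck`` versions, and the address below is illustrative:
-
-.. code-block:: console
-
-   $ curl http://10.0.0.12:9200
-   Percona XtraDB Cluster Node is synced.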

+ 0
- 33
doc/ha-guide/source/controller-ha-galera.rst View File

@@ -1,33 +0,0 @@
1
-Database (Galera Cluster)
2
-==========================
3
-
4
-The first step is to install the database that sits at the heart of the
5
-cluster. To implement high availability, run an instance of the database on
6
-each controller node and use Galera Cluster to provide replication between
7
-them. Galera Cluster is a synchronous multi-master database cluster, based
8
-on MySQL and the InnoDB storage engine. It is a high-availability service
9
-that provides high system uptime, no data loss, and scalability for growth.
10
-
11
-You can achieve high availability for the OpenStack database in many
12
-different ways, depending on the type of database that you want to use.
13
-There are three implementations of Galera Cluster available to you:
14
-
15
-- `Galera Cluster for MySQL <http://galeracluster.com/>`_ The MySQL
16
-  reference implementation from Codership Oy;
17
-- `MariaDB Galera Cluster <https://mariadb.org/>`_ The MariaDB
18
-  implementation of Galera Cluster, which is commonly supported in
19
-  environments based on Red Hat distributions;
20
-- `Percona XtraDB Cluster <http://www.percona.com/>`_ The XtraDB
21
-  implementation of Galera Cluster from Percona.
22
-
23
-In addition to Galera Cluster, you can also achieve high availability
24
-through other database options, such as PostgreSQL, which has its own
25
-replication system.
26
-
27
-
28
-.. toctree::
29
-  :maxdepth: 2
30
-
31
-  controller-ha-galera-install
32
-  controller-ha-galera-config
33
-  controller-ha-galera-manage

+ 0
- 229
doc/ha-guide/source/controller-ha-haproxy.rst View File

@@ -1,229 +0,0 @@
1
-=======
2
-HAProxy
3
-=======
4
-
5
-HAProxy provides a fast and reliable HTTP reverse proxy and load balancer
6
-for TCP or HTTP applications. It is particularly suited for web sites
7
-under very high load that need persistence or Layer 7 processing.
8
-It realistically supports tens of thousands of connections with recent
9
-hardware.
10
-
11
-Each instance of HAProxy configures its front end to accept connections
12
-only on the virtual IP (VIP) address, and its back end as the list
13
-of all instances of the corresponding service under load balancing,
14
-such as any OpenStack API service.
15
-
16
-This makes the instances of HAProxy act independently and fail over
17
-transparently together with the network endpoints (VIP addresses),
18
-and therefore share the same SLA.
19
-
20
-You can alternatively use a commercial load balancer, available as either
21
-hardware or software. A hardware load balancer generally has good performance.
22
-
23
-For detailed instructions about installing HAProxy on your nodes,
24
-see its `official documentation <http://www.haproxy.org/#docs>`_.
25
-
26
-.. note::
27
-
28
-   HAProxy should not be a single point of failure.
29
-   It is advisable to have multiple HAProxy instances running,
30
-   where the number of these instances is a small odd number like 3 or 5.
31
-   You need to ensure its availability by other means,
32
-   such as Keepalived or Pacemaker.
33
-
34
-The common practice is to locate an HAProxy instance on each OpenStack
35
-controller in the environment.
36
-
37
-Once configured (see example file below), add HAProxy to the cluster
38
-and ensure the VIPs can only run on machines where HAProxy is active:
39
-
40
-``pcs``
41
-
42
-.. code-block:: console
43
-
44
-   # pcs resource create lb-haproxy systemd:haproxy --clone
45
-   # pcs constraint order start p_api-ip then lb-haproxy-clone kind=Optional
46
-   # pcs constraint colocation add p_api-ip with lb-haproxy-clone
47
-
48
-``crmsh``
49
-
50
-TBA
51
-
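-Until the ``crmsh`` steps are documented, a rough equivalent of the ``pcs``
-commands above might look like the following sketch; the resource and
-constraint names are illustrative:
-
-.. code-block:: none
-
-   primitive lb-haproxy systemd:haproxy \
-     op monitor interval="30s"
-   clone lb-haproxy-clone lb-haproxy
-   order o_api-ip_haproxy Optional: p_api-ip lb-haproxy-clone
-   colocation c_api-ip_haproxy inf: p_api-ip lb-haproxy-clone
-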
52
-Example Config File
53
-~~~~~~~~~~~~~~~~~~~
54
-
55
-Here is an example ``/etc/haproxy/haproxy.cfg`` configuration file.
56
-You need a copy of it on each controller node.
57
-
58
-.. note::
59
-
60
-   To implement any changes made to this file, you must restart the HAProxy service.
61
-
62
-.. code-block:: none
63
-
64
-   global
65
-     chroot  /var/lib/haproxy
66
-     daemon
67
-     group  haproxy
68
-     maxconn  4000
69
-     pidfile  /var/run/haproxy.pid
70
-     user  haproxy
71
-
72
-   defaults
73
-     log  global
74
-     maxconn  4000
75
-     option  redispatch
76
-     retries  3
77
-     timeout  http-request 10s
78
-     timeout  queue 1m
79
-     timeout  connect 10s
80
-     timeout  client 1m
81
-     timeout  server 1m
82
-     timeout  check 10s
83
-
84
-   listen dashboard_cluster
85
-     bind <Virtual IP>:443
86
-     balance  source
87
-     option  tcpka
88
-     option  httpchk
89
-     option  tcplog
90
-     server controller1 10.0.0.12:443 check inter 2000 rise 2 fall 5
91
-     server controller2 10.0.0.13:443 check inter 2000 rise 2 fall 5
92
-     server controller3 10.0.0.14:443 check inter 2000 rise 2 fall 5
93
-
94
-   listen galera_cluster
95
-     bind <Virtual IP>:3306
96
-     balance  source
97
-     option  mysql-check
98
-     server controller1 10.0.0.12:3306 check port 9200 inter 2000 rise 2 fall 5
99
-     server controller2 10.0.0.13:3306 backup check port 9200 inter 2000 rise 2 fall 5
100
-     server controller3 10.0.0.14:3306 backup check port 9200 inter 2000 rise 2 fall 5
101
-
102
-   listen glance_api_cluster
103
-     bind <Virtual IP>:9292
104
-     balance  source
105
-     option  tcpka
106
-     option  httpchk
107
-     option  tcplog
108
-     server controller1 10.0.0.12:9292 check inter 2000 rise 2 fall 5
109
-     server controller2 10.0.0.13:9292 check inter 2000 rise 2 fall 5
110
-     server controller3 10.0.0.14:9292 check inter 2000 rise 2 fall 5
111
-
112
-   listen glance_registry_cluster
113
-     bind <Virtual IP>:9191
114
-     balance  source
115
-     option  tcpka
116
-     option  tcplog
117
-     server controller1 10.0.0.12:9191 check inter 2000 rise 2 fall 5
118
-     server controller2 10.0.0.13:9191 check inter 2000 rise 2 fall 5
119
-     server controller3 10.0.0.14:9191 check inter 2000 rise 2 fall 5
120
-
121
-   listen keystone_admin_cluster
122
-     bind <Virtual IP>:35357
123
-     balance  source
124
-     option  tcpka
125
-     option  httpchk
126
-     option  tcplog
127
-     server controller1 10.0.0.12:35357 check inter 2000 rise 2 fall 5
128
-     server controller2 10.0.0.13:35357 check inter 2000 rise 2 fall 5
129
-     server controller3 10.0.0.14:35357 check inter 2000 rise 2 fall 5
130
-
131
-   listen keystone_public_internal_cluster
132
-     bind <Virtual IP>:5000
133
-     balance  source
134
-     option  tcpka
135
-     option  httpchk
136
-     option  tcplog
137
-     server controller1 10.0.0.12:5000 check inter 2000 rise 2 fall 5
138
-     server controller2 10.0.0.13:5000 check inter 2000 rise 2 fall 5
139
-     server controller3 10.0.0.14:5000 check inter 2000 rise 2 fall 5
140
-
141
-   listen nova_ec2_api_cluster
142
-     bind <Virtual IP>:8773
143
-     balance  source
144
-     option  tcpka
145
-     option  tcplog
146
-     server controller1 10.0.0.12:8773 check inter 2000 rise 2 fall 5
147
-     server controller2 10.0.0.13:8773 check inter 2000 rise 2 fall 5
148
-     server controller3 10.0.0.14:8773 check inter 2000 rise 2 fall 5
149
-
150
-   listen nova_compute_api_cluster
151
-     bind <Virtual IP>:8774
152
-     balance  source
153
-     option  tcpka
154
-     option  httpchk
155
-     option  tcplog
156
-     server controller1 10.0.0.12:8774 check inter 2000 rise 2 fall 5
157
-     server controller2 10.0.0.13:8774 check inter 2000 rise 2 fall 5
158
-     server controller3 10.0.0.14:8774 check inter 2000 rise 2 fall 5
159
-
160
-   listen nova_metadata_api_cluster
161
-     bind <Virtual IP>:8775
162
-     balance  source
163
-     option  tcpka
164
-     option  tcplog
165
-     server controller1 10.0.0.12:8775 check inter 2000 rise 2 fall 5
166
-     server controller2 10.0.0.13:8775 check inter 2000 rise 2 fall 5
167
-     server controller3 10.0.0.14:8775 check inter 2000 rise 2 fall 5
168
-
169
-   listen cinder_api_cluster
170
-     bind <Virtual IP>:8776
171
-     balance  source
172
-     option  tcpka
173
-     option  httpchk
174
-     option  tcplog
175
-     server controller1 10.0.0.12:8776 check inter 2000 rise 2 fall 5
176
-     server controller2 10.0.0.13:8776 check inter 2000 rise 2 fall 5
177
-     server controller3 10.0.0.14:8776 check inter 2000 rise 2 fall 5
178
-
179
-   listen ceilometer_api_cluster
180
-     bind <Virtual IP>:8777
181
-     balance  source
182
-     option  tcpka
183
-     option  tcplog
184
-     server controller1 10.0.0.12:8777 check inter 2000 rise 2 fall 5
185
-     server controller2 10.0.0.13:8777 check inter 2000 rise 2 fall 5
186
-     server controller3 10.0.0.14:8777 check inter 2000 rise 2 fall 5
187
-
188
-   listen nova_vncproxy_cluster
189
-     bind <Virtual IP>:6080
190
-     balance  source
191
-     option  tcpka
192
-     option  tcplog
193
-     server controller1 10.0.0.12:6080 check inter 2000 rise 2 fall 5
194
-     server controller2 10.0.0.13:6080 check inter 2000 rise 2 fall 5
195
-     server controller3 10.0.0.14:6080 check inter 2000 rise 2 fall 5
196
-
197
-   listen neutron_api_cluster
198
-     bind <Virtual IP>:9696
199
-     balance  source
200
-     option  tcpka
201
-     option  httpchk
202
-     option  tcplog
203
-     server controller1 10.0.0.12:9696 check inter 2000 rise 2 fall 5
204
-     server controller2 10.0.0.13:9696 check inter 2000 rise 2 fall 5
205
-     server controller3 10.0.0.14:9696 check inter 2000 rise 2 fall 5
206
-
207
-   listen swift_proxy_cluster
208
-     bind <Virtual IP>:8080
209
-     balance  source
210
-     option  tcplog
211
-     option  tcpka
212
-     server controller1 10.0.0.12:8080 check inter 2000 rise 2 fall 5
213
-     server controller2 10.0.0.13:8080 check inter 2000 rise 2 fall 5
214
-     server controller3 10.0.0.14:8080 check inter 2000 rise 2 fall 5
215
-
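-After editing the file, restart HAProxy to apply the changes. On
-systemd-based systems, for example:
-
-.. code-block:: console
-
-   # systemctl restart haproxy
-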
216
-.. note::
217
-
218
-   The Galera cluster configuration directive ``backup`` indicates
219
-   that two of the three controllers are standby nodes.
220
-   This ensures that only one node services write requests
221
-   because OpenStack support for multi-node writes is not yet production-ready.
222
-
223
-.. note::
224
-
225
-   The Telemetry API service configuration does not have the ``option httpchk``
226
-   directive as it cannot process this check properly.
227
-   TODO: explain why the Telemetry API is so special
228
-
229
-[TODO: we need more commentary about the contents and format of this file]

+ 0
- 147
doc/ha-guide/source/controller-ha-keystone.rst View File

@@ -1,147 +0,0 @@
1
-
2
-============================
3
-Identity services (keystone)
4
-============================
5
-
6
-OpenStack Identity (keystone)
7
-is the Identity service in OpenStack that is used by many services.
8
-You should be familiar with
9
-`OpenStack identity concepts
10
-<http://docs.openstack.org/liberty/install-guide-ubuntu/common/get_started_identity.html>`_
11
-before proceeding.
12
-
13
-Making the OpenStack Identity service highly available
14
-in active/passive mode involves:
15
-
16
-- :ref:`keystone-pacemaker`
17
-- :ref:`keystone-config-identity`
18
-- :ref:`keystone-services-config`
19
-
20
-.. _keystone-pacemaker:
21
-
22
-Add OpenStack Identity resource to Pacemaker
23
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
24
-
25
-#. You must first download the OpenStack Identity resource agent
26
-   by running the following commands:
27
-
28
-   .. code-block:: console
29
-
30
-      # cd /usr/lib/ocf/resource.d
31
-      # mkdir openstack
32
-      # cd openstack
33
-      # wget https://git.openstack.org/cgit/openstack/openstack-resource-agents/plain/ocf/keystone
34
-      # chmod a+rx *
35
-
36
-#. You can now add the Pacemaker configuration
37
-   for the OpenStack Identity resource
38
-   by running the :command:`crm configure` command
39
-   to connect to the Pacemaker cluster.
40
-   Add the following cluster resources:
41
-
42
-   ::
43
-
44
-      primitive p_keystone ocf:openstack:keystone \
45
-      params config="/etc/keystone/keystone.conf"
46
-          os_password="secretsecret" \
47
-          os_username="admin"
48
-          os_tenant_name="admin"
49
-          os_auth_url="http://10.0.0.11:5000/v2.0/" \
50
-          op monitor interval="30s" timeout="30s"
51
-
52
-   This configuration creates ``p_keystone``,
53
-   a resource for managing the OpenStack Identity service.
54
-
55
-   :command:`crm configure` supports batch input
56
-   so you may copy and paste the above lines
57
-   into your live Pacemaker configuration,
58
-   and then make changes as required.
59
-   For example, you may enter ``edit p_keystone``
60
-   from the :command:`crm configure` menu
61
-   and edit the resource to match your environment.
62
-
63
-#. After you add these resources,
64
-   commit your configuration changes by entering :command:`commit`
65
-   from the :command:`crm configure` menu.
66
-   Pacemaker then starts the OpenStack Identity service
67
-   and its dependent resources on one of your nodes.
68
-
69
-.. _keystone-config-identity:
70
-
71
-Configure OpenStack Identity service
72
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
73
-
74
-#. Edit the :file:`keystone.conf` file
75
-   to change the values of the :manpage:`bind(2)` parameters:
76
-
77
-   .. code-block:: ini
78
-
79
-      bind_host = 10.0.0.11
80
-      public_bind_host = 10.0.0.11
81
-      admin_bind_host = 10.0.0.11
82
-
83
-   The ``admin_bind_host`` parameter
84
-   lets you use a private network for admin access.
85
-
86
-#. To be sure that all data is highly available,
87
-   ensure that everything is stored in the MySQL database
88
-   (which is also highly available):
89
-
90
-   .. code-block:: ini
91
-
92
-      [catalog]
93
-      driver = keystone.catalog.backends.sql.Catalog
94
-      ...
95
-      [identity]
96
-      driver = keystone.identity.backends.sql.Identity
97
-      ...
98
-
99
-
100
-.. _keystone-services-config:
101
-
102
-Configure OpenStack services to use the highly available OpenStack Identity
103
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
104
-
105
-Your OpenStack services must now point
106
-their OpenStack Identity configuration
107
-to the highly available virtual cluster IP address
108
-rather than point to the physical IP address
109
-of an OpenStack Identity server as you would do
110
-in a non-HA environment.
111
-
112
-#. For OpenStack Compute, for example,
113
-   if your OpenStack Identity service IP address is 10.0.0.11,
114
-   use the following configuration in your :file:`api-paste.ini` file:
115
-
116
-   .. code-block:: ini
117
-
118
-      auth_host = 10.0.0.11
119
-
120
-#. You also need to create the OpenStack Identity Endpoint
121
-   with this IP address.
122
-
123
-   .. note::
124
-
125
-      If you are using both private and public IP addresses,
126
-      you should create two virtual IP addresses
127
-      and define your endpoint like this:
128
-
129
-      .. code-block:: console
130
-
131
-         $ openstack endpoint create --region $KEYSTONE_REGION \
132
-           $service-type public http://PUBLIC_VIP:5000/v2.0
133
-         $ openstack endpoint create --region $KEYSTONE_REGION \
134
-           $service-type admin http://10.0.0.11:35357/v2.0
135
-         $ openstack endpoint create --region $KEYSTONE_REGION \
136
-           $service-type internal http://10.0.0.11:5000/v2.0
137
-
138
-
139
-#. If you are using the horizon dashboard,
140
-   edit the :file:`local_settings.py` file
141
-   to include the following:
142
-
143
-   .. code-block:: ini
144
-
145
-      OPENSTACK_HOST = 10.0.0.11
146
-
147
-

+ 0
- 21
doc/ha-guide/source/controller-ha-memcached.rst View File

@@ -1,21 +0,0 @@
1
-===================
2
-Memcached
3
-===================
4
-
5
-Memcached is a general-purpose distributed memory caching system. It
6
-is used to speed up dynamic database-driven websites by caching data
7
-and objects in RAM to reduce the number of times an external data
8
-source must be read.
9
-
10
-Memcached is a memory cache daemon that can be used by most OpenStack
11
-services to store ephemeral data, such as tokens.
12
-
13
-Access to memcached is not handled by HAProxy because replicated
14
-access is currently only in an experimental state.  Instead, OpenStack
15
-services must be supplied with the full list of hosts running
16
-memcached.
17
-
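-As a sketch of what this looks like in practice, many services accept a
-comma-separated list in their configuration file. The option name and
-section vary by service and release, and the host names below are
-illustrative:
-
-.. code-block:: ini
-
-   memcached_servers = controller1:11211,controller2:11211,controller3:11211
-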
18
-The Memcached client implements hashing to balance objects among the
19
-instances.  Failure of an instance only impacts a percentage of the
20
-objects and the client automatically removes it from the list of
21
-instances.  The SLA is several minutes.

+ 0
- 633
doc/ha-guide/source/controller-ha-pacemaker.rst View File

@@ -1,633 +0,0 @@
1
-=======================
2
-Pacemaker cluster stack
3
-=======================
4
-
5
-`Pacemaker <http://clusterlabs.org/>`_ cluster stack is the state-of-the-art
6
-high availability and load balancing stack for the Linux platform.
7
-Pacemaker is useful to make OpenStack infrastructure highly available.
8
-Also, it is storage and application-agnostic, and in no way
9
-specific to OpenStack.
10
-
11
-Pacemaker relies on the
12
-`Corosync <http://corosync.github.io/corosync/>`_ messaging layer
13
-for reliable cluster communications.
14
-Corosync implements the Totem single-ring ordering and membership protocol.
15
-It also provides UDP and InfiniBand based messaging,
16
-quorum, and cluster membership to Pacemaker.
17
-
18
-Pacemaker does not inherently (need or want to) understand the
19
-applications it manages. Instead, it relies on resource agents (RAs),
20
-scripts that encapsulate the knowledge of how to start, stop, and
21
-check the health of each application managed by the cluster.
22
-
23
-These agents must conform to one of the `OCF <https://github.com/ClusterLabs/
24
-OCF-spec/blob/master/ra/resource-agent-api.md>`_,
25
-`SysV Init <http://refspecs.linux-foundation.org/LSB_3.0.0/LSB-Core-generic/
26
-LSB-Core-generic/iniscrptact.html>`_, Upstart, or Systemd standards.
27
-
28
-Pacemaker ships with a large set of OCF agents (such as those managing
29
-MySQL databases, virtual IP addresses, and RabbitMQ), but can also use
30
-any agents already installed on your system and can be extended with
31
-your own (see the
32
-`developer guide <http://www.linux-ha.org/doc/dev-guides/ra-dev-guide.html>`_).
33
-
34
-The steps to implement the Pacemaker cluster stack are:
35
-
36
-- :ref:`pacemaker-install`
37
-- :ref:`pacemaker-corosync-setup`
38
-- :ref:`pacemaker-corosync-start`
39
-- :ref:`pacemaker-start`
40
-- :ref:`pacemaker-cluster-properties`
41
-
42
-.. _pacemaker-install:
43
-
44
-Install packages
45
-~~~~~~~~~~~~~~~~
46
-
47
-On any host that is meant to be part of a Pacemaker cluster,
48
-you must first establish cluster communications
49
-through the Corosync messaging layer.
50
-This involves installing the following packages
51
-(and their dependencies, which your package manager
52
-usually installs automatically):
53
-
54
-- pacemaker
55
-
56
-- pcs (CentOS or RHEL) or crmsh
57
-
58
-- corosync
59
-
60
-- fence-agents (CentOS or RHEL) or cluster-glue
61
-
62
-- resource-agents
63
-
64
-- libqb0
65
-
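-On CentOS or RHEL, for example, a single command might pull in the whole
-set; package names can differ across distributions and releases:
-
-.. code-block:: console
-
-   # yum install pacemaker pcs corosync fence-agents resource-agents
-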
66
-.. _pacemaker-corosync-setup:
67
-
68
-Set up the cluster with `pcs`
69
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
70
-
71
-#. Make sure ``pcsd`` is running and configured to start at boot time:
72
-
73
-   .. code-block:: console
74
-
75
-      # systemctl enable pcsd
76
-      # systemctl start pcsd
77
-
78
-#. Set a password for the ``hacluster`` user **on each host**.
79
-
80
-   Since the cluster is a single administrative domain, it is generally
81
-   accepted to use the same password on all nodes.
82
-
83
-   .. code-block:: console
84
-
85
-      # echo my-secret-password-no-dont-use-this-one \
86
-        | passwd --stdin hacluster
87
-
88
-#. Use that password to authenticate to the nodes which will
89
-   make up the cluster. The :option:`-p` option is used to give
90
-   the password on the command line, which makes it easier to script.
91
-
92
-   .. code-block:: console
93
-
94
-      # pcs cluster auth controller1 controller2 controller3 \
95
-        -u hacluster -p my-secret-password-no-dont-use-this-one --force
96
-
97
-#. Create the cluster, giving it a name, and start it:
98
-
99
-   .. code-block:: console
100
-
101
-      # pcs cluster setup --force --name my-first-openstack-cluster \
101
-        controller1 controller2 controller3
102
-      # pcs cluster start --all
104
-
105
-.. note::
106
-
107
-   In Red Hat Enterprise Linux or CentOS environments, this is a recommended
108
-   path to perform configuration. For more information, see the `RHEL docs
109
-   <https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/High_Availability_Add-On_Reference/ch-clusteradmin-HAAR.html#s1-clustercreate-HAAR>`_.
110
-
111
-Set up the cluster with `crmsh`
112
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
113
-
114
-After installing the Corosync package, you must create
115
-the :file:`/etc/corosync/corosync.conf` configuration file.
116
-
117
-.. note::
118
-         For Ubuntu, you should also enable the Corosync service
119
-         in the ``/etc/default/corosync`` configuration file.
120
-
121
-Corosync can be configured to work
122
-with either multicast or unicast IP addresses
123
-or to use the votequorum library.
124
-
125
-- :ref:`corosync-multicast`
126
-- :ref:`corosync-unicast`
127
-- :ref:`corosync-votequorum`
128
-
129
-.. _corosync-multicast:
130
-
131
-Set up Corosync with multicast
132
-------------------------------
133
-
134
-Most distributions ship an example configuration file
135
-(:file:`corosync.conf.example`)
136
-as part of the documentation bundled with the Corosync package.
137
-An example Corosync configuration file is shown below:
138
-
139
-**Example Corosync configuration file for multicast (corosync.conf)**
140
-
141
-.. code-block:: ini
142
-
143
-   totem {
144
-        version: 2
145
-
146
-        # Time (in ms) to wait for a token (1)
147
-        token: 10000
148
-
149
-        # How many token retransmits before forming a new
150
-        # configuration
151
-        token_retransmits_before_loss_const: 10
152
-
153
-        # Turn off the virtual synchrony filter
154
-        vsftype: none
155
-
156
-        # Enable encryption (2)
157
-        secauth: on
158
-
159
-        # How many threads to use for encryption/decryption
160
-        threads: 0
161
-
162
-        # This specifies the redundant ring protocol, which may be
163
-        # none, active, or passive. (3)
164
-        rrp_mode: active
165
-
166
-        # The following is a two-ring multicast configuration. (4)
167
-        interface {
168
-                ringnumber: 0
169
-                bindnetaddr: 10.0.0.0
170
-                mcastaddr: 239.255.42.1
171
-                mcastport: 5405
172
-        }
173
-        interface {
174
-                ringnumber: 1
175
-                bindnetaddr: 10.0.42.0
176
-                mcastaddr: 239.255.42.2
177
-                mcastport: 5405
178
-        }
179
-   }
180
-
181
-   amf {
182
-        mode: disabled
183
-   }
184
-
185
-   service {
186
-           # Load the Pacemaker Cluster Resource Manager (5)
187
-           ver:       1
188
-           name:      pacemaker
189
-   }
190
-
191
-   aisexec {
192
-           user:   root
193
-           group:  root
194
-   }
195
-
196
-   logging {
197
-           fileline: off
198
-           to_stderr: yes
199
-           to_logfile: no
200
-           to_syslog: yes
201
-           syslog_facility: daemon
202
-           debug: off
203
-           timestamp: on
204
-           logger_subsys {
205
-                   subsys: AMF
206
-                   debug: off
207
-                   tags: enter|leave|trace1|trace2|trace3|trace4|trace6
208
-           }
-   }
209
-
210
-Note the following:
211
-
212
-- The ``token`` value specifies the time, in milliseconds,
213
-  during which the Corosync token is expected
214
-  to be transmitted around the ring.
215
-  When this timeout expires, the token is declared lost,
216
-  and after ``token_retransmits_before_loss_const`` lost tokens,
217
-  the non-responding processor (cluster node) is declared dead.
218
-  In other words, ``token × token_retransmits_before_loss_const``
219
-  is the maximum time a node is allowed to not respond to cluster messages
220
-  before being considered dead (with the values in the example above,
-  10000 ms × 10 = 100 seconds).
221
-  The default for token is 1000 milliseconds (1 second),
222
-  with 4 allowed retransmits.
223
-  These defaults are intended to minimize failover times,
224
-  but can cause frequent "false alarms" and unintended failovers
225
-  in case of short network interruptions. The values used here are safer,
226
-  albeit with slightly extended failover times.
227
-
228
-- With ``secauth`` enabled,
229
-  Corosync nodes mutually authenticate using a 128-byte shared secret
230
-  stored in the :file:`/etc/corosync/authkey` file,
231
-  which may be generated with the :command:`corosync-keygen` utility.
232
-  When using ``secauth``, cluster communications are also encrypted.
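-
-  For example, you might generate the key on one node and copy it to the
-  others (the host name is illustrative):
-
-  .. code-block:: console
-
-     # corosync-keygen
-     # scp /etc/corosync/authkey controller2:/etc/corosync/authkey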
233
-
234
-- In Corosync configurations using redundant networking
235
-  (with more than one interface),
236
-  you must select a Redundant Ring Protocol (RRP) mode other than none.
237
-  ``active`` is the recommended RRP mode.
238
-
239
-  Note the following about the recommended interface configuration:
240
-
241
-  - Each configured interface must have a unique ``ringnumber``,
242
-    starting with 0.
243
-
244
-  - The ``bindnetaddr`` is the network address of the interfaces to bind to.
245
-    The example uses two network addresses of /24 IPv4 subnets.
246
-
247
-  - Multicast groups (``mcastaddr``) must not be reused
248
-    across cluster boundaries.
249
-    In other words, no two distinct clusters
250
-    should ever use the same multicast group.
251
-    Be sure to select multicast addresses compliant with
252
-    `RFC 2365, "Administratively Scoped IP Multicast"
253
-    <http://www.ietf.org/rfc/rfc2365.txt>`_.
254
-
255
-  - For firewall configurations,
256
-    note that Corosync communicates over UDP only,
257
-    and uses ``mcastport`` (for receives)
258
-    and ``mcastport - 1`` (for sends).
259
-
260
-- The service declaration for the pacemaker service
261
-  may be placed in the :file:`corosync.conf` file directly
262
-  or in its own separate file, :file:`/etc/corosync/service.d/pacemaker`.
263
-
264
-  .. note::
265
-
266
-           If you are using Corosync version 2 on Ubuntu 14.04,
267
-           remove or comment out lines under the service stanza,
268
-           which enables Pacemaker to start up. Another potential
269