
Messaging service for high availability

Most OpenStack components require an AMQP (Advanced Message Queuing Protocol) compliant message bus to coordinate the execution of jobs entered into the system.

The most popular AMQP implementation used in OpenStack installations is RabbitMQ.

RabbitMQ nodes fail over on the application and the infrastructure layers.

The application layer is controlled by the oslo.messaging configuration options for multiple AMQP hosts. If an AMQP node fails, the application reconnects to the next configured node within the specified reconnect interval. That reconnect interval constitutes the SLA at this layer.

On the infrastructure layer, the SLA is the time it takes for the RabbitMQ cluster to reassemble. Several cases are possible. The Mnesia keeper node is the master of the corresponding Pacemaker resource for RabbitMQ; when it fails, the result is a full AMQP cluster downtime interval, with an SLA of normally no more than several minutes. Failure of any other node, a slave of the corresponding Pacemaker resource, results in no AMQP cluster downtime at all.

Making the RabbitMQ service highly available involves the following steps:

  • Install RabbitMQ
  • Configure RabbitMQ for HA queues
  • Configure OpenStack services to use RabbitMQ HA queues

Note

Access to RabbitMQ is not normally handled by HAProxy. Instead, consumers must be supplied with the full list of hosts running RabbitMQ via the rabbit_hosts option, and the rabbit_ha_queues option must be turned on.
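
For example, a minimal sketch of these two options in a service configuration file (the host names rabbit1 and rabbit2 are placeholders for your RabbitMQ nodes):

rabbit_hosts=rabbit1:5672,rabbit2:5672
rabbit_ha_queues=true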

Install RabbitMQ

The commands for installing RabbitMQ are specific to the Linux distribution you are using.

For Ubuntu or Debian:

# apt-get install rabbitmq-server

For RHEL, Fedora, or CentOS:

# yum install rabbitmq-server

For openSUSE:

# zypper install rabbitmq-server

For SLES 12:

# zypper addrepo -f obs://Cloud:OpenStack:Kilo/SLE_12 Kilo
[Verify the fingerprint of the imported GPG key. See below.]
# zypper install rabbitmq-server

Note

For SLES 12, the packages are signed by GPG key 893A90DAD85F9316. You should verify the fingerprint of the imported GPG key before using it.

Key ID: 893A90DAD85F9316
Key Name: Cloud:OpenStack OBS Project <Cloud:OpenStack@build.opensuse.org>
Key Fingerprint: 35B34E18ABC1076D66D5A86B893A90DAD85F9316
Key Created: Tue Oct  8 13:34:21 2013
Key Expires: Thu Dec 17 13:34:21 2015
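
To double-check which keys the package tools have imported, you can query the RPM database (a generic check; key package naming varies by distribution):

# rpm -q gpg-pubkey --qf '%{NAME}-%{VERSION}-%{RELEASE}\t%{SUMMARY}\n'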

For more information, see the official installation manual for your distribution.

Configure RabbitMQ for HA queues

The following components/services can work with HA queues:

  • OpenStack Compute
  • OpenStack Block Storage
  • OpenStack Networking
  • Telemetry

Consider that, while exchanges and bindings survive the loss of individual nodes, queues and their messages do not, because a queue and its contents reside on a single node. If that node is lost, the queue is lost with it.

Mirrored queues in RabbitMQ improve service availability, since the cluster remains resilient to the failure of individual nodes.

Production environments should run at least three RabbitMQ servers; for testing and demonstration purposes, however, it is possible to run only two. In this section, we configure two nodes, called rabbit1 and rabbit2. To build a broker, ensure that all nodes have the same Erlang cookie file.

  1. Stop RabbitMQ and copy the cookie from the first node to each of the other nodes:

    # scp /var/lib/rabbitmq/.erlang.cookie root@NODE:/var/lib/rabbitmq/.erlang.cookie
  2. On each target node, verify the correct owner, group, and permissions of the .erlang.cookie file:

    # chown rabbitmq:rabbitmq /var/lib/rabbitmq/.erlang.cookie
    # chmod 400 /var/lib/rabbitmq/.erlang.cookie
  3. Start the message queue service on all nodes and configure it to start when the system boots. On Ubuntu, the service starts at boot by default.

    On CentOS, RHEL, openSUSE, and SLES:

    # systemctl enable rabbitmq-server.service
    # systemctl start rabbitmq-server.service
  4. Verify that the nodes are running:

    # rabbitmqctl cluster_status
    Cluster status of node rabbit@NODE...
    [{nodes,[{disc,[rabbit@NODE]}]},
     {running_nodes,[rabbit@NODE]},
     {partitions,[]}]
    ...done.
  5. Run the following commands on each node except the first one:

    # rabbitmqctl stop_app
    Stopping node rabbit@NODE...
    ...done.
    # rabbitmqctl join_cluster --ram rabbit@rabbit1
    # rabbitmqctl start_app
    Starting node rabbit@NODE ...
    ...done.

Note

The default node type is a disc node. In this guide, nodes join the cluster as RAM nodes.
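
To join a node as a disc node instead, omit the --ram flag:

# rabbitmqctl join_cluster rabbit@rabbit1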

  6. Verify the cluster status:

    # rabbitmqctl cluster_status
    Cluster status of node rabbit@NODE...
    [{nodes,[{disc,[rabbit@rabbit1]},{ram,[rabbit@NODE]}]}, \
        {running_nodes,[rabbit@NODE,rabbit@rabbit1]}]

    If the cluster is working, you can create usernames and passwords for the queues.

  7. To ensure that all queues except those with auto-generated names are mirrored across all running nodes, set the ha-mode policy key to all by running the following command on one of the nodes:

    # rabbitmqctl set_policy ha-all '^(?!amq\.).*' '{"ha-mode": "all"}'
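
    To verify that the policy is in place, list the configured policies (the output format differs between RabbitMQ versions):

    # rabbitmqctl list_policies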

More information is available in the RabbitMQ documentation on clustering and highly available queues.

Note

As another option to make RabbitMQ highly available, RabbitMQ has shipped OCF scripts for the Pacemaker cluster resource agents since version 3.5.7. They provide an active/active RabbitMQ cluster with mirrored queues. For more information, see Auto-configuration of a cluster with a Pacemaker in the RabbitMQ documentation.
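
A minimal crm configure sketch, assuming the ocf:rabbitmq:rabbitmq-server-ha resource agent name used by those scripts; the agent's parameters vary by release, so treat this as an outline and consult the linked documentation before using it:

primitive p_rabbitmq-server ocf:rabbitmq:rabbitmq-server-ha \
  op monitor interval=30 timeout=60
ms ms_p_rabbitmq-server p_rabbitmq-server \
  meta notify=true master-max=1 interleave=true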

Configure OpenStack services to use RabbitMQ HA queues

Configure the OpenStack components to use at least two RabbitMQ nodes.

Use the following options to configure all services that use RabbitMQ; a combined example follows the list:

  1. RabbitMQ HA cluster host:port pairs:

    rabbit_hosts=rabbit1:5672,rabbit2:5672,rabbit3:5672
  2. Retry connecting with RabbitMQ:

    rabbit_retry_interval=1
  3. How long to back off between retries when connecting to RabbitMQ:

    rabbit_retry_backoff=2
  4. Maximum number of retries when connecting to RabbitMQ (0, the default, means infinite):

    rabbit_max_retries=0
  5. Use durable queues in RabbitMQ:

    rabbit_durable_queues=true
  6. Use HA queues in RabbitMQ (x-ha-policy: all):

    rabbit_ha_queues=true
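
Taken together, a minimal sketch of the resulting configuration block follows. The section name is an assumption: depending on the service and release, these options are read from [oslo_messaging_rabbit] or from [DEFAULT].

[oslo_messaging_rabbit]
rabbit_hosts=rabbit1:5672,rabbit2:5672,rabbit3:5672
rabbit_retry_interval=1
rabbit_retry_backoff=2
rabbit_max_retries=0
rabbit_durable_queues=true
rabbit_ha_queues=true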

Note

If you change the configuration from an old set-up that did not use HA queues, restart the service:

# rabbitmqctl stop_app
# rabbitmqctl reset
# rabbitmqctl start_app