# # Translators: msgid "" msgstr "" "Project-Id-Version: OpenStack Manuals\n" "POT-Creation-Date: 2014-03-13 06:25+0000\n" "PO-Revision-Date: 2014-03-12 23:47+0000\n" "Last-Translator: Tom Fifield \n" "Language-Team: Serbian (http://www.transifex.com/projects/p/openstack/language/sr/)\n" "MIME-Version: 1.0\n" "Content-Type: text/plain; charset=UTF-8\n" "Content-Transfer-Encoding: 8bit\n" "Language: sr\n" "Plural-Forms: nplurals=3; plural=(n%10==1 && n%100!=11 ? 0 : n%10>=2 && n%10<=4 && (n%100<10 || n%100>=20) ? 1 : 2);\n" #: ./doc/high-availability-guide/ha-guide-docinfo.xml4(firstname) #: ./doc/high-availability-guide/bk-ha-guide.xml8(firstname) msgid "Florian" msgstr "" #: ./doc/high-availability-guide/ha-guide-docinfo.xml5(surname) #: ./doc/high-availability-guide/bk-ha-guide.xml9(surname) msgid "Haas" msgstr "" #: ./doc/high-availability-guide/ha-guide-docinfo.xml7(email) #: ./doc/high-availability-guide/bk-ha-guide.xml11(email) msgid "florian@hastexo.com" msgstr "" #: ./doc/high-availability-guide/ha-guide-docinfo.xml9(orgname) #: ./doc/high-availability-guide/bk-ha-guide.xml13(orgname) msgid "hastexo" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml5(title) msgid "OpenStack High Availability Guide" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml17(year) msgid "2012" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml18(year) msgid "2013" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml19(holder) msgid "OpenStack Contributors" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml21(releaseinfo) msgid "master" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml22(productname) msgid "OpenStack" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml26(remark) msgid "Copyright details are filled in by the template." msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml31(date) msgid "2012-01-16" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml35(para) msgid "Organizes guide based on cloud controller and compute nodes." msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml41(date) msgid "2012-05-24" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml45(para) msgid "Begin trunk designation." msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml54(title) msgid "Introduction to OpenStack High Availability" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml56(simpara) msgid "High Availability systems seek to minimize two things:" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml59(simpara) msgid "" "System downtime — occurs when a " "user-facing service is unavailable beyond a specified " "maximum amount of time, and" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml63(simpara) msgid "" "Data loss — accidental deletion or " "destruction of data." msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml67(simpara) msgid "" "Most high availability systems guarantee protection against system downtime " "and data loss only in the event of a single failure. However, they are also " "expected to protect against cascading failures, where a single failure " "deteriorates into a series of consequential failures." msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml68(simpara) msgid "" "A crucial aspect of high availability is the elimination of single points of" " failure (SPOFs). A SPOF is an individual piece of equipment or software " "which will cause system downtime or data loss if it fails. 
In order to " "eliminate SPOFs, check that mechanisms exist for redundancy of:" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml71(simpara) msgid "Network components, such as switches and routers" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml76(simpara) msgid "Applications and automatic service migration" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml81(simpara) msgid "Storage components" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml86(simpara) msgid "Facility services such as power, air conditioning, and fire protection" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml91(simpara) msgid "" "Most high availability systems will fail in the event of multiple " "independent (non-consequential) failures. In this case, most systems will " "protect data over maintaining availability." msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml92(simpara) msgid "" "High-availability systems typically achieve uptime of 99.99% or more, which " "roughly equates to less than an hour of cumulative downtime per year. In " "order to achieve this, high availability systems should keep recovery times " "after a failure to about one to two minutes, sometimes significantly less." msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml93(simpara) msgid "" "OpenStack currently meets such availability requirements for its own " "infrastructure services, meaning that an uptime of 99.99% is feasible for " "the OpenStack infrastructure proper. However, OpenStack " "doesnot guarantee 99.99% " "availability for individual guest instances." msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml94(simpara) msgid "" "Preventing single points of failure can depend on whether or not a service " "is stateless." msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml97(title) msgid "Stateless vs. Stateful services" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml99(simpara) msgid "" "A stateless service is one that provides a response after your request, and " "then requires no further attention. To make a stateless service highly " "available, you need to provide redundant instances and load balance them. " "OpenStack services that are stateless include nova-api, nova-conductor, " "glance-api, keystone-api, neutron-api and nova-scheduler." msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml100(simpara) msgid "" "A stateful service is one where subsequent requests to the service depend on" " the results of the first request. Stateful services are more difficult to " "manage because a single action typically involves more than one request, so " "simply providing additional instances and load balancing will not solve the " "problem. For example, if the Horizon user interface reset itself every time " "you went to a new page, it wouldn’t be very useful. OpenStack services that " "are stateful include the OpenStack database and message queue." msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml101(simpara) msgid "" "Making stateful services highly available can depend on whether you choose " "an active/passive or active/active configuration." msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml105(title) msgid "Active/Passive" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml107(simpara) msgid "" "In an active/passive configuration, systems are set up to bring additional " "resources online to replace those that have failed. 
For example, OpenStack " "would write to the main database while maintaining a disaster recovery " "database that can be brought online in the event that the main database " "fails." msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml108(simpara) msgid "" "Typically, an active/passive installation for a stateless service would " "maintain a redundant instance that can be brought online when required. " "Requests are load balanced using a virtual IP address and a load balancer " "such as HAProxy." msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml109(simpara) msgid "" "A typical active/passive installation for a stateful service maintains a " "replacement resource that can be brought online when required. A separate " "application (such as Pacemaker or Corosync) monitors these services, " "bringing the backup online as necessary." msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml113(title) msgid "Active/Active" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml115(simpara) msgid "" "In an active/active configuration, systems also use a backup but will manage" " both the main and redundant systems concurrently. This way, if there is a " "failure the user is unlikely to notice. The backup system is already online," " and takes on increased load while the main system is fixed and brought back" " online." msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml116(simpara) msgid "" "Typically, an active/active installation for a stateless service would " "maintain a redundant instance, and requests are load balanced using a " "virtual IP address and a load balancer such as HAProxy." msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml117(simpara) msgid "" "A typical active/active installation for a stateful service would include " "redundant services with all instances having an identical state. For " "example, updates to one instance of a database would also update all other " "instances. This way a request to one instance is the same as a request to " "any other. A load balancer manages the traffic to these systems, ensuring " "that operational systems always handle the request." msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml118(simpara) msgid "" "These are some of the more common ways to implement these high availability " "architectures, but they are by no means the only ways to do it. The " "important thing is to make sure that your services are redundant, and " "available; how you achieve that is up to you. This document will cover some " "of the more common options for highly available systems." msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml123(title) msgid "HA Using Active/Passive" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml127(title) msgid "The Pacemaker Cluster Stack" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml129(simpara) msgid "" "OpenStack infrastructure high availability relies on the Pacemaker cluster stack, the " "state-of-the-art high availability and load balancing stack for the Linux " "platform. Pacemaker is storage and application-agnostic, and is in no way " "specific to OpenStack." msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml134(simpara) msgid "" "Pacemaker relies on the Corosync messaging layer for " "reliable cluster communications. Corosync implements the Totem single-ring " "ordering and membership protocol. It also provides UDP and InfiniBand based " "messaging, quorum, and cluster membership to Pacemaker." 
msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml139(simpara) msgid "" "Pacemaker interacts with applications through resource " "agents (RAs), of which it supports over 70 natively. Pacemaker " "can also easily use third-party RAs. An OpenStack high-availability " "configuration uses existing native Pacemaker RAs (such as those managing " "MySQL databases or virtual IP addresses), existing third-party RAs (such as " "for RabbitMQ), and native OpenStack RAs (such as those managing the " "OpenStack Identity and Image Services)." msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml148(title) msgid "Installing Packages" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml150(simpara) msgid "" "On any host that is meant to be part of a Pacemaker cluster, you must first " "establish cluster communications through the Corosync messaging layer. This " "involves installing the following packages (and their dependencies, which " "your package manager will normally install automatically):" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml158(literal) msgid "pacemaker" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml163(literal) msgid "corosync" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml168(literal) msgid "cluster-glue" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml172(simpara) msgid "" "fence-agents (Fedora only; all other distributions use " "fencing agents from cluster-glue)" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml178(literal) msgid "resource-agents" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml185(title) msgid "Setting up Corosync" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml187(simpara) msgid "" "Besides installing the corosync package, you will also " "have to create a configuration file, stored in " "/etc/corosync/corosync.conf. Most distributions ship an " "example configuration file (corosync.conf.example) as " "part of the documentation bundled with the corosync " "package. An example Corosync configuration file is shown below:" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml195(title) msgid "Corosync configuration file (corosync.conf)" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml269(para) msgid "" "The token value specifies the time, in milliseconds, " "during which the Corosync token is expected to be transmitted around the " "ring. When this timeout expires, the token is declared lost, and after " "token_retransmits_before_loss_const lost tokens the non-" "responding processor (cluster node) is declared dead. " "In other words, token × " "token_retransmits_before_loss_const is the maximum time a" " node is allowed to not respond to cluster messages before being considered " "dead. The default for token is 1000 (1 second), with 4 " "allowed retransmits. These defaults are intended to minimize failover times," " but can cause frequent \"false alarms\" and unintended failovers in case of" " short network interruptions. The values used here are safer, albeit with " "slightly extended failover times." msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml285(para) msgid "" "With secauth enabled, Corosync nodes mutually " "authenticate using a 128-byte shared secret stored in " "/etc/corosync/authkey, which may be generated with the " "corosync-keygen utility. When using " "secauth, cluster communications are also encrypted." 
msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml293(para) msgid "" "In Corosync configurations using redundant networking (with more than one " "interface), you must select a Redundant Ring Protocol " "(RRP) mode other than none. active is " "the recommended RRP mode." msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml300(para) msgid "" "There are several things to note about the recommended interface " "configuration:" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml306(simpara) msgid "" "The ringnumber must differ between all configured " "interfaces, starting with 0." msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml312(simpara) msgid "" "The bindnetaddr is the network " "address of the interfaces to bind to. The example uses two network addresses" " of /24 IPv4 subnets." msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml318(simpara) msgid "" "Multicast groups (mcastaddr) must " "not be reused across cluster boundaries. In other words, no two " "distinct clusters should ever use the same multicast group. Be sure to " "select multicast addresses compliant with RFC 2365, \"Administratively " "Scoped IP Multicast\"." msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml327(simpara) msgid "" "For firewall configurations, note that Corosync communicates over UDP only, " "and uses mcastport (for receives) and " "mcastport-1 (for sends)." msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml336(para) msgid "" "The service declaration for the " "pacemaker service may be placed in the " "corosync.conf file directly, or in its own separate file," " /etc/corosync/service.d/pacemaker." msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml343(simpara) msgid "" "Once created, the corosync.conf file (and the " "authkey file if the secauth option is " "enabled) must be synchronized across all cluster nodes." msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml349(title) msgid "Starting Corosync" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml351(simpara) msgid "" "Corosync is started as a regular system service. Depending on your " "distribution, it may ship with a LSB (System V style) init script, an " "upstart job, or a systemd unit file. Either way, the service is usually " "named corosync:" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml357(simpara) msgid "/etc/init.d/corosync start (LSB)" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml361(simpara) msgid "service corosync start (LSB, alternate)" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml365(simpara) msgid "start corosync (upstart)" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml369(simpara) msgid "systemctl start corosync (systemd)" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml373(simpara) msgid "You can now check the Corosync connectivity with two tools." msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml374(simpara) msgid "" "The corosync-cfgtool utility, when invoked with the " "-s option, gives a summary of the health of the " "communication rings:" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml385(simpara) msgid "" "The corosync-objctl utility can be used to dump the " "Corosync cluster member list:" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml394(simpara) msgid "" "You should see a status=joined entry for each of your " "constituent cluster nodes." 
msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml399(title) msgid "Starting Pacemaker" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml401(simpara) msgid "" "Once the Corosync services have been started, and you have established that " "the cluster is communicating properly, it is safe to start " "pacemakerd, the Pacemaker master control process:" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml406(simpara) msgid "/etc/init.d/pacemaker start (LSB)" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml410(simpara) msgid "service pacemaker start (LSB, alternate)" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml414(simpara) msgid "start pacemaker (upstart)" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml418(simpara) msgid "systemctl start pacemaker (systemd)" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml422(simpara) msgid "" "Once Pacemaker services have started, Pacemaker will create a default empty " "cluster configuration with no resources. You may observe Pacemaker’s status " "with the crm_mon utility:" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml439(title) msgid "Setting basic cluster properties" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml441(simpara) msgid "" "Once your Pacemaker cluster is set up, it is recommended to set a few basic " "cluster properties. To do so, start the crm shell and " "change into the configuration menu by entering configure." " Alternatively. you may jump straight into the Pacemaker configuration menu " "by typing crm configure directly from a shell prompt." msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml447(simpara) msgid "Then, set the following properties:" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml455(para) msgid "" "Setting no-quorum-policy=\"ignore\" is required in 2-node" " Pacemaker clusters for the following reason: if quorum enforcement is " "enabled, and one of the two nodes fails, then the remaining node can not " "establish a majority of quorum votes necessary to run " "services, and thus it is unable to take over any resources. The appropriate " "workaround is to ignore loss of quorum in the cluster. This is safe and " "necessary only in 2-node clusters. Do not set this " "property in Pacemaker clusters with more than two nodes." msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml467(para) msgid "" "Setting pe-warn-series-max, pe-input-series-" "max and pe-error-series-max to 1000 instructs " "Pacemaker to keep a longer history of the inputs processed, and errors and " "warnings generated, by its Policy Engine. This history is typically useful " "in case cluster troubleshooting becomes necessary." msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml476(para) msgid "" "Pacemaker uses an event-driven approach to cluster state processing. " "However, certain Pacemaker actions occur at a configurable interval, " "cluster-recheck-interval, which defaults to 15 minutes. " "It is usually prudent to reduce this to a shorter interval, such as 5 or 3 " "minutes." msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml485(simpara) msgid "" "Once you have made these changes, you may commit the " "updated configuration." msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml491(title) msgid "Cloud Controller Cluster Stack" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml493(simpara) msgid "" "The Cloud Controller sits on the management network and needs to talk to all" " other services." 
msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml496(title) msgid "Highly available MySQL" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml498(simpara) msgid "" "MySQL is the default database server used by many OpenStack services. Making" " the MySQL service highly available involves" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml502(simpara) msgid "configuring a DRBD device for use by MySQL," msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml507(simpara) msgid "" "configuring MySQL to use a data directory residing on that DRBD device," msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml513(simpara) #: ./doc/high-availability-guide/bk-ha-guide.xml759(simpara) msgid "" "selecting and assigning a virtual IP address (VIP) that can freely float " "between cluster nodes," msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml519(simpara) msgid "configuring MySQL to listen on that IP address," msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml524(simpara) msgid "" "managing all resources, including the MySQL daemon itself, with the " "Pacemaker cluster manager." msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml531(simpara) msgid "" "MySQL/Galera is " "an alternative method of configuring MySQL for high availability. It is " "likely to become the preferred method of achieving MySQL high availability " "once it has sufficiently matured. At the time of writing, however, the " "Pacemaker/DRBD based approach remains the recommended one for OpenStack " "environments." msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml540(title) #: ./doc/high-availability-guide/bk-ha-guide.xml789(title) msgid "Configuring DRBD" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml542(simpara) msgid "" "The Pacemaker based MySQL server requires a DRBD resource from which it " "mounts the /var/lib/mysql directory. In this example, the" " DRBD resource is simply named mysql:" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml547(title) msgid "" "mysql DRBD resource configuration " "(/etc/drbd.d/mysql.res)" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml563(simpara) msgid "" "This resource uses an underlying local disk (in DRBD terminology, a " "backing device) named " "/dev/data/mysql on both cluster nodes, " "node1 and node2. Normally, this would " "be an LVM Logical Volume specifically set aside for this purpose. The DRBD " "meta-disk is internal, meaning DRBD-" "specific metadata is being stored at the end of the disk " "device itself. The device is configured to communicate between IPv4 " "addresses 10.0.42.100 and 10.0.42.254, using TCP port 7700. Once enabled, it" " will map to a local DRBD block device with the device minor number 0, that " "is, /dev/drbd0." msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml572(simpara) #: ./doc/high-availability-guide/bk-ha-guide.xml821(simpara) msgid "" "Enabling a DRBD resource is explained in detail in the DRBD " "User’s Guide. In brief, the proper sequence of commands is this:" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml580(para) msgid "" "Initializes DRBD metadata and writes the initial set of metadata to " "/dev/data/mysql. Must be completed on both nodes." msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml586(para) msgid "" "Creates the /dev/drbd0 device node, " "attaches the DRBD device to its backing store, and " "connects the DRBD node to its peer. Must be completed " "on both nodes." 
msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml593(para) #: ./doc/high-availability-guide/bk-ha-guide.xml842(para) msgid "" "Kicks off the initial device synchronization, and puts the device into the " "primary (readable and writable) role. See Resource " "roles (from the DRBD User’s Guide) for a more detailed description of" " the primary and secondary roles in DRBD. Must be completed on one" " node only, namely the one where you are about to continue with " "creating your filesystem." msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml607(title) #: ./doc/high-availability-guide/bk-ha-guide.xml856(title) msgid "Creating a file system" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml609(simpara) msgid "" "Once the DRBD resource is running and in the primary role (and potentially " "still in the process of running the initial device synchronization), you may" " proceed with creating the filesystem for MySQL data. XFS is the generally " "recommended filesystem:" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml614(simpara) #: ./doc/high-availability-guide/bk-ha-guide.xml863(simpara) msgid "" "You may also use the alternate device path for the DRBD device, which may be" " easier to remember as it includes the self-explanatory resource name:" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml618(simpara) #: ./doc/high-availability-guide/bk-ha-guide.xml867(simpara) msgid "" "Once completed, you may safely return the device to the secondary role. Any " "ongoing device synchronization will continue in the background:" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml625(title) msgid "Preparing MySQL for Pacemaker high availability" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml627(simpara) msgid "" "In order for Pacemaker monitoring to function properly, you must ensure that" " MySQL’s database files reside on the DRBD device. If you already have an " "existing MySQL database, the simplest approach is to just move the contents " "of the existing /var/lib/mysql directory into the newly " "created filesystem on the DRBD device." msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml633(simpara) msgid "" "You must complete the next step while the MySQL database server is shut " "down." msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml639(simpara) msgid "" "For a new MySQL installation with no existing data, you may also run the " "mysql_install_db command:" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml644(simpara) msgid "" "Regardless of the approach, the steps outlined here must be completed on " "only one cluster node." msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml649(title) msgid "Adding MySQL resources to Pacemaker" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml651(simpara) msgid "" "You may now proceed with adding the Pacemaker configuration for MySQL " "resources. 
Connect to the Pacemaker cluster with crm " "configure, and add the following cluster resources:" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml687(simpara) #: ./doc/high-availability-guide/bk-ha-guide.xml921(simpara) #: ./doc/high-availability-guide/bk-ha-guide.xml1120(simpara) #: ./doc/high-availability-guide/bk-ha-guide.xml1213(simpara) #: ./doc/high-availability-guide/bk-ha-guide.xml1374(simpara) #: ./doc/high-availability-guide/bk-ha-guide.xml1448(simpara) #: ./doc/high-availability-guide/bk-ha-guide.xml1492(simpara) #: ./doc/high-availability-guide/bk-ha-guide.xml1533(simpara) msgid "This configuration creates" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml690(simpara) msgid "" "p_ip_mysql, a virtual IP address for use by MySQL " "(192.168.42.101)," msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml695(simpara) msgid "" "p_fs_mysql, a Pacemaker managed filesystem mounted to " "/var/lib/mysql on whatever node currently runs the MySQL " "service," msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml701(simpara) msgid "" "ms_drbd_mysql, the master/slave set " "managing the mysql DRBD resource," msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml706(simpara) #: ./doc/high-availability-guide/bk-ha-guide.xml940(simpara) msgid "" "a service group and order and " "colocation constraints to ensure resources are started on" " the correct nodes, and in the correct sequence." msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml712(simpara) msgid "" "crm configure supports batch input, so you may copy and " "paste the above into your live pacemaker configuration, and then make " "changes as required. For example, you may enter edit " "p_ip_mysql from the crm configure menu and edit" " the resource to match your preferred virtual IP address." msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml717(simpara) msgid "" "Once completed, commit your configuration changes by entering " "commit from the crm configure menu. " "Pacemaker will then start the MySQL service, and its dependent resources, on" " one of your nodes." msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml723(title) msgid "Configuring OpenStack services for highly available MySQL" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml725(simpara) msgid "" "Your OpenStack services must now point their MySQL configuration to the " "highly available, virtual cluster IP address — rather than a MySQL server’s " "physical IP address as you normally would." msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml728(simpara) msgid "" "For OpenStack Image, for example, if your MySQL service IP address is " "192.168.42.101 as in the configuration explained here, you would use the " "following line in your OpenStack Image registry configuration file (glance-registry.conf):" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml733(simpara) msgid "" "No other changes are necessary to your OpenStack configuration. If the node " "currently hosting your database experiences a problem necessitating service " "failover, your OpenStack services may experience a brief MySQL interruption," " as they would in the event of a network hiccup, and then continue to run " "normally." msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml742(title) msgid "Highly available RabbitMQ" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml744(simpara) msgid "" "RabbitMQ is the default AMQP server used by many OpenStack services. 
Making " "the RabbitMQ service highly available involves:" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml748(simpara) msgid "configuring a DRBD device for use by RabbitMQ," msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml753(simpara) msgid "" "configuring RabbitMQ to use a data directory residing on that DRBD device," msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml765(simpara) msgid "configuring RabbitMQ to listen on that IP address," msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml770(simpara) msgid "" "managing all resources, including the RabbitMQ daemon itself, with the " "Pacemaker cluster manager." msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml777(simpara) msgid "" "There is an alternative method of configuring RabbitMQ for high " "availability. That approach, known as active-active mirrored " "queues, happens to be the one preferred by the RabbitMQ developers — " "however it has shown less than ideal consistency and reliability in " "OpenStack clusters. Thus, at the time of writing, the Pacemaker/DRBD based " "approach remains the recommended one for OpenStack environments, although " "this may change in the near future as RabbitMQ active-active mirrored queues" " mature." msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml791(simpara) msgid "" "The Pacemaker based RabbitMQ server requires a DRBD resource from which it " "mounts the /var/lib/rabbitmq directory. In this example, " "the DRBD resource is simply named rabbitmq:" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml796(title) msgid "" "rabbitmq DRBD resource configuration " "(/etc/drbd.d/rabbitmq.res)" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml812(simpara) msgid "" "This resource uses an underlying local disk (in DRBD terminology, a " "backing device) named " "/dev/data/rabbitmq on both cluster nodes, " "node1 and node2. Normally, this would " "be an LVM Logical Volume specifically set aside for this purpose. The DRBD " "meta-disk is internal, meaning DRBD-" "specific metadata is being stored at the end of the disk " "device itself. The device is configured to communicate between IPv4 " "addresses 10.0.42.100 and 10.0.42.254, using TCP port 7701. Once enabled, it" " will map to a local DRBD block device with the device minor number 1, that " "is, /dev/drbd1." msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml829(para) msgid "" "Initializes DRBD metadata and writes the initial set of metadata to " "/dev/data/rabbitmq. Must be completed on both nodes." msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml835(para) msgid "" "Creates the /dev/drbd1 device node, " "attaches the DRBD device to its backing store, and " "connects the DRBD node to its peer. Must be completed " "on both nodes." msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml858(simpara) msgid "" "Once the DRBD resource is running and in the primary role (and potentially " "still in the process of running the initial device synchronization), you may" " proceed with creating the filesystem for RabbitMQ data. XFS is generally " "the recommended filesystem:" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml874(title) msgid "Preparing RabbitMQ for Pacemaker high availability" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml876(simpara) msgid "" "In order for Pacemaker monitoring to function properly, you must ensure that" " RabbitMQ’s .erlang.cookie files are identical on all " "nodes, regardless of whether DRBD is mounted there or not. 
The simplest way " "of doing so is to take an existing .erlang.cookie from " "one of your nodes, copy it to the RabbitMQ data directory on the other " "node, and also copy it to the DRBD-backed filesystem." msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml889(title) msgid "Adding RabbitMQ resources to Pacemaker" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml891(simpara) msgid "" "You may now proceed with adding the Pacemaker configuration for RabbitMQ " "resources. Connect to the Pacemaker cluster with crm " "configure, and add the following cluster resources:" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml924(simpara) msgid "" "p_ip_rabbitmq, a virtual IP address for use by RabbitMQ " "(192.168.42.100)," msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml929(simpara) msgid "" "p_fs_rabbitmq, a Pacemaker managed filesystem mounted to " "/var/lib/rabbitmq on whatever node currently runs the " "RabbitMQ service," msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml935(simpara) msgid "" "ms_drbd_rabbitmq, the master/slave " "set managing the rabbitmq DRBD resource," msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml946(simpara) msgid "" "crm configure supports batch input, so you may copy and " "paste the above into your live pacemaker configuration, and then make " "changes as required. For example, you may enter edit " "p_ip_rabbitmq from the crm configure menu and " "edit the resource to match your preferred virtual IP address." msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml951(simpara) msgid "" "Once completed, commit your configuration changes by entering " "commit from the crm configure menu. " "Pacemaker will then start the RabbitMQ service, and its dependent resources," " on one of your nodes." msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml957(title) msgid "Configuring OpenStack services for highly available RabbitMQ" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml959(simpara) msgid "" "Your OpenStack services must now point their RabbitMQ configuration to the " "highly available, virtual cluster IP address — rather than a RabbitMQ " "server’s physical IP address as you normally would." msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml962(simpara) msgid "" "For OpenStack Image, for example, if your RabbitMQ service IP address is " "192.168.42.100 as in the configuration explained here, you would use the " "following line in your OpenStack Image API configuration file (glance-api.conf):" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml967(simpara) msgid "" "No other changes are necessary to your OpenStack configuration. If the node " "currently hosting your RabbitMQ experiences a problem necessitating service " "failover, your OpenStack services may experience a brief RabbitMQ " "interruption, as they would in the event of a network hiccup, and then " "continue to run normally." msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml977(title) msgid "API Node Cluster Stack" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml979(simpara) msgid "" "The API node exposes OpenStack API endpoints onto the external network " "(Internet). It needs to talk to the Cloud Controller on the management " "network."
msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml983(title) msgid "Configure the VIP" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml985(simpara) msgid "" "First of all, we need to select and assign a virtual IP address (VIP) that " "can freely float between cluster nodes." msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml986(simpara) msgid "" "This configuration creates p_ip_api, a virtual IP address" " for use by the API node (192.168.42.103) :" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml993(title) msgid "Highly available OpenStack Identity" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml995(simpara) msgid "" "OpenStack Identity is the Identity Service in OpenStack and used by many " "services. Making the OpenStack Identity service highly available in active /" " passive mode involves" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml999(simpara) msgid "configuring OpenStack Identity to listen on the VIP address," msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1004(simpara) msgid "managing OpenStack Identity daemon with the Pacemaker cluster manager," msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1009(simpara) #: ./doc/high-availability-guide/bk-ha-guide.xml1098(simpara) #: ./doc/high-availability-guide/bk-ha-guide.xml1190(simpara) #: ./doc/high-availability-guide/bk-ha-guide.xml1276(simpara) msgid "configuring OpenStack services to use this IP address." msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1015(simpara) msgid "" "Here is the documentation for installing OpenStack Identity " "service." msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1019(title) msgid "Adding OpenStack Identity resource to Pacemaker" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1021(simpara) #: ./doc/high-availability-guide/bk-ha-guide.xml1110(simpara) #: ./doc/high-availability-guide/bk-ha-guide.xml1202(simpara) #: ./doc/high-availability-guide/bk-ha-guide.xml1288(simpara) #: ./doc/high-availability-guide/bk-ha-guide.xml1364(simpara) #: ./doc/high-availability-guide/bk-ha-guide.xml1437(simpara) #: ./doc/high-availability-guide/bk-ha-guide.xml1481(simpara) #: ./doc/high-availability-guide/bk-ha-guide.xml1522(simpara) msgid "First of all, you need to download the resource agent to your system:" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1027(simpara) msgid "" "You may now proceed with adding the Pacemaker configuration for OpenStack " "Identity resource. Connect to the Pacemaker cluster with crm " "configure, and add the following cluster resources:" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1033(simpara) msgid "" "This configuration creates p_keystone, a resource for " "managing the OpenStack Identity service." msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1034(simpara) msgid "" "crm configure supports batch input, so you may copy and " "paste the above into your live pacemaker configuration, and then make " "changes as required. For example, you may enter edit " "p_ip_keystone from the crm configure menu and " "edit the resource to match your preferred virtual IP address." msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1039(simpara) msgid "" "Once completed, commit your configuration changes by entering " "commit from the crm configure menu. " "Pacemaker will then start the OpenStack Identity service, and its dependent " "resources, on one of your nodes." 
msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1045(title) msgid "Configuring OpenStack Identity service" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1047(simpara) msgid "" "You need to edit your OpenStack Identity configuration file " "(keystone.conf) and change the bind parameters:" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1048(simpara) msgid "On Havana:" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1050(simpara) msgid "" "On Icehouse, the admin_bind_host option lets you use a " "private network for the admin access." msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1053(simpara) msgid "" "To be sure all data will be highly available, you should be sure that you " "store everything in the MySQL database (which is also highly available):" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1063(title) msgid "" "Configuring OpenStack Services to use the Highly Available OpenStack " "Identity" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1065(simpara) msgid "" "Your OpenStack services must now point their OpenStack Identity " "configuration to the highly available, virtual cluster IP address — rather " "than a OpenStack Identity server’s physical IP address as you normally " "would." msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1068(simpara) msgid "" "For example with OpenStack Compute, if your OpenStack Identity service IP " "address is 192.168.42.103 as in the configuration explained here, you would " "use the following line in your API configuration file (api-" "paste.ini):" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1073(simpara) msgid "You also need to create the OpenStack Identity Endpoint with this IP." msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1074(simpara) #: ./doc/high-availability-guide/bk-ha-guide.xml1338(simpara) msgid "" "NOTE : If you are using both private and public IP addresses, you should " "create two Virtual IP addresses and define your endpoint like this:" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1076(simpara) msgid "" "If you are using the Horizon Dashboard, you should edit the " "local_settings.py file:" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1082(title) msgid "Highly available OpenStack Image API" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1084(simpara) msgid "" "OpenStack Image Service offers a service for discovering, registering, and " "retrieving virtual machine images. Making the OpenStack Image API service " "highly available in active / passive mode involves" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1088(simpara) msgid "configuring OpenStack Image to listen on the VIP address," msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1093(simpara) msgid "" "managing OpenStack Image API daemon with the Pacemaker cluster manager," msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1104(simpara) msgid "" "Here is the documentation for installing OpenStack Image API " "service." msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1108(title) msgid "Adding OpenStack Image API resource to Pacemaker" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1114(simpara) msgid "" "You may now proceed with adding the Pacemaker configuration for OpenStack " "Image API resource. 
Connect to the Pacemaker cluster with crm " "configure, and add the following cluster resources:" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1123(simpara) msgid "" "p_glance-api, a resource for managing the OpenStack Image API " "service" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1127(simpara) msgid "" "crm configure supports batch input, so you may copy and " "paste the above into your live pacemaker configuration, and then make " "changes as required. For example, you may enter edit p_ip_glance-" "api from the crm configure menu and edit the " "resource to match your preferred virtual IP address." msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1132(simpara) msgid "" "Once completed, commit your configuration changes by entering " "commit from the crm configure menu. " "Pacemaker will then start the OpenStack Image API service, and its dependent" " resources, on one of your nodes." msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1138(title) msgid "Configuring OpenStack Image API service" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1140(simpara) msgid "Edit /etc/glance/glance-api.conf:" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1156(title) msgid "" "Configuring OpenStack Services to use Highly Available OpenStack Image API" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1158(simpara) msgid "" "Your OpenStack services must now point their OpenStack Image API " "configuration to the highly available, virtual cluster IP address — rather " "than an OpenStack Image API server’s physical IP address as you normally " "would." msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1161(simpara) msgid "" "For OpenStack Compute, for example, if your OpenStack Image API service IP " "address is 192.168.42.104 as in the configuration explained here, you would " "use the following line in your nova.conf file:" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1165(simpara) msgid "You also need to create the OpenStack Image API Endpoint with this IP." msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1167(simpara) msgid "" "If you are using both private and public IP addresses, you should create two" " Virtual IP addresses and define your endpoint like this:" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1174(title) msgid "Highly available Cinder API" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1176(simpara) msgid "" "Cinder is the block storage service in OpenStack. Making the Cinder API " "service highly available in active / passive mode involves" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1180(simpara) msgid "configuring Cinder to listen on the VIP address," msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1185(simpara) msgid "managing Cinder API daemon with the Pacemaker cluster manager," msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1196(simpara) msgid "" "Here is the documentation for " "installing Cinder service." msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1200(title) msgid "Adding Cinder API resource to Pacemaker" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1206(simpara) msgid "" "You may now proceed with adding the Pacemaker configuration for Cinder API " "resource. 
Connect to the Pacemaker cluster with crm " "configure, and add the following cluster resources:" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1216(simpara) msgid "" "p_cinder-api, a resource for managing the Cinder API service" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1220(simpara) msgid "" "crm configure supports batch input, so you may copy and " "paste the above into your live pacemaker configuration, and then make " "changes as required. For example, you may enter edit p_ip_cinder-" "api from the crm configure menu and edit the " "resource to match your preferred virtual IP address." msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1225(simpara) msgid "" "Once completed, commit your configuration changes by entering " "commit from the crm configure menu. " "Pacemaker will then start the Cinder API service, and its dependent " "resources, on one of your nodes." msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1231(title) msgid "Configuring Cinder API service" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1233(simpara) msgid "Edit /etc/cinder/cinder.conf:" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1246(title) msgid "Configuring OpenStack Services to use Highly Available Cinder API" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1248(simpara) msgid "" "Your OpenStack services must now point their Cinder API configuration to the" " highly available, virtual cluster IP address — rather than a Cinder API " "server’s physical IP address as you normally would." msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1251(simpara) msgid "You need to create the Cinder API Endpoint with this IP." msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1253(simpara) msgid "" "If you are using both private and public IP addresses, you should create two " "Virtual IP addresses and define your endpoint like this:" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1260(title) msgid "Highly available OpenStack Networking Server" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1262(simpara) msgid "" "OpenStack Networking is the network connectivity service in OpenStack. " "Making the OpenStack Networking Server service highly available in active / " "passive mode involves" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1266(simpara) msgid "configuring OpenStack Networking to listen on the VIP address," msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1271(simpara) msgid "" "managing OpenStack Networking API Server daemon with the Pacemaker cluster " "manager," msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1282(simpara) msgid "" "Here is the documentation for installing OpenStack Networking " "service." msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1286(title) msgid "Adding OpenStack Networking Server resource to Pacemaker" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1292(simpara) msgid "" "You may now proceed with adding the Pacemaker configuration for OpenStack " "Networking Server resource. 
Connect to the Pacemaker cluster with " "crm configure, and add the following cluster resources:" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1299(simpara) msgid "" "This configuration creates p_neutron-server, a resource " "for managing the OpenStack Networking Server service" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1300(simpara) msgid "" "crm configure supports batch input, so you may copy and " "paste the above into your live pacemaker configuration, and then make " "changes as required. For example, you may enter edit p_neutron-" "server from the crm configure menu and edit the" " resource as required." msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1305(simpara) msgid "" "Once completed, commit your configuration changes by entering " "commit from the crm configure menu. " "Pacemaker will then start the OpenStack Networking API service, and its " "dependent resources, on one of your nodes." msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1311(title) msgid "Configuring OpenStack Networking Server" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1313(simpara) msgid "Edit /etc/neutron/neutron.conf:" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1330(title) msgid "" "Configuring OpenStack Services to use Highly Available OpenStack Networking " "Server" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1332(simpara) msgid "" "Your OpenStack services must now point their OpenStack Networking Server " "configuration to the highly available, virtual cluster IP address — rather " "than an OpenStack Networking server’s physical IP address as you normally " "would." msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1335(simpara) msgid "" "For example, you should configure OpenStack Compute to use the Highly " "Available OpenStack Networking Server by editing the " "nova.conf file:" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1337(simpara) msgid "" "You need to create the OpenStack Networking Server Endpoint with this IP." msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1344(title) msgid "Highly available Ceilometer Central Agent" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1346(simpara) msgid "" "Ceilometer is the metering service in OpenStack. The Central Agent polls for " "resource utilization statistics for resources not tied to instances or " "compute nodes." msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1350(simpara) msgid "" "Due to limitations of the polling model, only a single instance of this agent " "can poll a given list of meters. In this setup, we install this service on" " the API nodes, also in active / passive mode." msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1354(simpara) msgid "" "Making the Ceilometer Central Agent service highly available in active / " "passive mode involves managing its daemon with the Pacemaker cluster " "manager." msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1357(simpara) msgid "" "You will find the process to install the " "Ceilometer Central Agent at this page." msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1362(title) msgid "Adding the Ceilometer Central Agent resource to Pacemaker" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1368(simpara) msgid "" "You may then proceed with adding the Pacemaker configuration for the " "Ceilometer Central Agent resource. 
Connect to the Pacemaker cluster with " "crm configure, and add the following cluster resources:" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1377(simpara) msgid "" "p_ceilometer-agent-central, a resource for managing the " "Ceilometer Central Agent service" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1381(simpara) #: ./doc/high-availability-guide/bk-ha-guide.xml1455(simpara) #: ./doc/high-availability-guide/bk-ha-guide.xml1500(simpara) #: ./doc/high-availability-guide/bk-ha-guide.xml1541(simpara) msgid "" "crm configure supports batch input, so you may copy and " "paste the above into your live pacemaker configuration, and then make " "changes as required." msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1384(simpara) msgid "" "Once completed, commit your configuration changes by entering " "commit from the crm configure menu. " "Pacemaker will then start the Ceilometer Central Agent service, and its " "dependent resources, on one of your nodes." msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1390(title) msgid "Configuring Ceilometer Central Agent service" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1392(simpara) msgid "Edit /etc/ceilometer/ceilometer.conf:" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1407(title) msgid "Configure Pacemaker Group" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1409(simpara) msgid "" "Finally, we need to create a service group to ensure that" " the virtual IP is linked to the API service resources:" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1416(title) msgid "Network Controller Cluster Stack" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1418(simpara) msgid "" "The Network controller sits on the management and data network, and needs to" " be connected to the Internet if a VM needs access to it." msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1420(simpara) msgid "" "Both nodes should have the same hostname since the Neutron scheduler will be" " aware of one node, for example a virtual router attached to a single L3 " "node." msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1425(title) msgid "Highly available Neutron L3 Agent" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1427(simpara) msgid "" "The Neutron L3 agent provides L3/NAT forwarding to ensure external network " "access for VMs on tenant networks. High Availability for the L3 agent is " "achieved by adopting Pacemaker." msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1431(simpara) msgid "" "Here is the documentation for " "installing Neutron L3 Agent." msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1435(title) msgid "Adding Neutron L3 Agent resource to Pacemaker" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1441(simpara) msgid "" "You may now proceed with adding the Pacemaker configuration for Neutron L3 " "Agent resource. Connect to the Pacemaker cluster with crm " "configure, and add the following cluster resources:" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1451(simpara) msgid "" "p_neutron-l3-agent, a resource for managing the Neutron L3 " "Agent service" msgstr "" #: ./doc/high-availability-guide/bk-ha-guide.xml1458(simpara) msgid "" "Once completed, commit your configuration changes by entering " "commit from the crm configure menu. " "Pacemaker will then start the Neutron L3 Agent service, and its dependent " "resources, on one of your nodes." 
#: ./doc/high-availability-guide/bk-ha-guide.xml1458(simpara) msgid "Once completed, commit your configuration changes by entering commit from the crm configure menu. Pacemaker will then start the Neutron L3 Agent service, and its dependent resources, on one of your nodes." msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1462(simpara) msgid "This method does not ensure zero downtime, since all the namespaces and virtual routers have to be recreated on the surviving node." msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1469(title) msgid "Highly available Neutron DHCP Agent" msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1471(simpara) msgid "The Neutron DHCP agent distributes IP addresses to VMs using dnsmasq (by default). High availability for the DHCP agent is achieved by adopting Pacemaker." msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1475(simpara) msgid "Here is the documentation for installing the Neutron DHCP Agent." msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1479(title) msgid "Adding Neutron DHCP Agent resource to Pacemaker" msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1485(simpara) msgid "You may now proceed with adding the Pacemaker configuration for the Neutron DHCP Agent resource. Connect to the Pacemaker cluster with crm configure, and add the following cluster resources:" msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1495(simpara) msgid "p_neutron-dhcp-agent, a resource for managing the Neutron DHCP Agent service" msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1503(simpara) msgid "Once completed, commit your configuration changes by entering commit from the crm configure menu. Pacemaker will then start the Neutron DHCP Agent service, and its dependent resources, on one of your nodes." msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1510(title) msgid "Highly available Neutron Metadata Agent" msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1512(simpara) msgid "The Neutron Metadata agent allows VMs on tenant networks to reach the Nova API metadata service. High availability for the Metadata agent is achieved by adopting Pacemaker." msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1516(simpara) msgid "Here is the documentation for installing the Neutron Metadata Agent." msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1520(title) msgid "Adding Neutron Metadata Agent resource to Pacemaker" msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1526(simpara) msgid "You may now proceed with adding the Pacemaker configuration for the Neutron Metadata Agent resource. Connect to the Pacemaker cluster with crm configure, and add the following cluster resources:" msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1536(simpara) msgid "p_neutron-metadata-agent, a resource for managing the Neutron Metadata Agent service" msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1544(simpara) msgid "Once completed, commit your configuration changes by entering commit from the crm configure menu. Pacemaker will then start the Neutron Metadata Agent service, and its dependent resources, on one of your nodes." msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1551(title) msgid "Manage network resources" msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1553(simpara) msgid "You may now proceed with adding the Pacemaker configuration for managing all network resources together with a group. Connect to the Pacemaker cluster with crm configure, and add the following cluster resources:" msgstr ""
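# The grouping itself is a single crm configure statement. A minimal sketch, assuming the
# p_neutron-l3-agent, p_neutron-dhcp-agent and p_neutron-metadata-agent primitives defined
# above (the group name g_services_network is illustrative), might be:
#
#   group g_services_network p_neutron-l3-agent p_neutron-dhcp-agent p_neutron-metadata-agent
#
# Pacemaker then starts and stops these agents together, in the listed order, on whichever
# node currently holds the group.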
#: ./doc/high-availability-guide/bk-ha-guide.xml1564(title) msgid "HA Using Active/Active" msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1568(title) msgid "Database" msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1570(simpara) msgid "The first step is installing the database that sits at the heart of the cluster. When we’re talking about high availability, however, we’re talking about not just one database but several (for redundancy), along with a means to keep them synchronized. In this case, we choose the MySQL database, along with Galera for synchronous multi-master replication." msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1575(simpara) msgid "The choice of database isn’t a foregone conclusion; you’re not required to use MySQL. It is, however, a fairly common choice in OpenStack installations, so we’ll cover it here." msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1580(title) msgid "MySQL with Galera" msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1582(simpara) msgid "Rather than starting with a vanilla version of MySQL and then adding Galera to it, you will want to install a version of MySQL patched for wsrep (Write Set REPlication) from https://launchpad.net/codership-mysql/0.7. Note that the installation requirements are a bit touchy; you will want to make sure to read the README file so you don’t miss any steps." msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1587(simpara) msgid "Next, download Galera itself from https://launchpad.net/galera/+download. Go ahead and install the *.rpm or *.deb packages, taking care of any dependencies that your system doesn’t already have installed." msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1590(simpara) msgid "Once you’ve completed the installation, you’ll need to make a few configuration changes:" msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1591(simpara) msgid "In the system-wide my.cnf file, make sure mysqld isn’t bound to 127.0.0.1 and that /etc/mysql/conf.d/ is included. Typically you can find this file at /etc/my.cnf:" msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1598(simpara) msgid "When adding a new node, you must configure it with a MySQL account that can access the other nodes so that it can request a state snapshot from one of those existing nodes. First specify that account information in /etc/mysql/conf.d/wsrep.cnf:" msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1602(simpara) msgid "Next, connect as root and grant privileges to that user:" msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1604(simpara) msgid "You’ll also need to remove user accounts with empty usernames, as they cause problems:" msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1606(simpara) msgid "You’ll also need to set certain mandatory configuration options within MySQL itself. These include:" msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1612(simpara) msgid "Finally, make sure that the nodes can access each other through the firewall. This might mean adjusting iptables, as in:" msgstr ""
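# A minimal sketch of such iptables rules, assuming the cluster nodes live on the
# illustrative subnet 10.0.0.0/24 (3306 is the MySQL port; 4567, 4568 and 4444 are the
# Galera group communication, IST and SST ports):
#
#   iptables -A INPUT -p tcp -s 10.0.0.0/24 --dport 3306 -j ACCEPT
#   iptables -A INPUT -p tcp -s 10.0.0.0/24 --dport 4567 -j ACCEPT
#   iptables -A INPUT -p tcp -s 10.0.0.0/24 --dport 4568 -j ACCEPT
#   iptables -A INPUT -p tcp -s 10.0.0.0/24 --dport 4444 -j ACCEPT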
#: ./doc/high-availability-guide/bk-ha-guide.xml1616(simpara) msgid "It might also mean configuring any NAT firewall between the nodes to allow direct connections, or disabling SELinux, or configuring SELinux to allow mysqld to listen to sockets at unprivileged ports." msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1619(simpara) msgid "Now you’re ready to actually create the cluster." msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1622(title) msgid "Creating the cluster" msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1624(simpara) msgid "To create a cluster, you first start a single instance, which creates the cluster. The rest of the MySQL instances then connect to that cluster. For example, if you started the first node on 10.0.0.10 by executing the command:" msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1626(simpara) msgid "you could then connect to that cluster on the rest of the nodes by referencing the address of that node, as in:" msgstr ""
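# A sketch of both commands, assuming the wsrep-patched mysqld is installed and 10.0.0.10
# is the illustrative address of the first node:
#
#   # on the first node, bootstrap a new cluster with an empty cluster address
#   mysqld_safe --wsrep_cluster_address=gcomm:// &
#
#   # on each additional node, join by pointing at a node that is already running
#   mysqld_safe --wsrep_cluster_address=gcomm://10.0.0.10 &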
#: ./doc/high-availability-guide/bk-ha-guide.xml1628(simpara) msgid "You also have the option to set the wsrep_cluster_address in the /etc/mysql/conf.d/wsrep.cnf file, or within the client itself. (In fact, for some systems, such as MariaDB or Percona, this may be your only option.) For example, to check the status of the cluster, open the MySQL client and check the status of the various parameters:" msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1635(simpara) msgid "You should see a status that looks something like this:" msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1684(title) msgid "Galera Monitoring Scripts" msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1686(simpara) msgid "(Coming soon)" msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1690(title) msgid "Other ways to provide a Highly Available database" msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1692(simpara) msgid "MySQL with Galera is by no means the only way to achieve database HA. MariaDB (https://mariadb.org/) and Percona (http://www.percona.com/) also work with Galera. You also have the option to use PostgreSQL, which has its own replication, or some other database HA option." msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1700(title) msgid "RabbitMQ" msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1702(simpara) msgid "RabbitMQ is the default AMQP server used by many OpenStack services. Making the RabbitMQ service highly available involves the following steps:" msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1706(simpara) #: ./doc/high-availability-guide/bk-ha-guide.xml1723(title) msgid "Install RabbitMQ" msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1711(simpara) msgid "Configure RabbitMQ for HA queues" msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1716(simpara) msgid "Configure OpenStack services to use RabbitMQ HA queues" msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1725(simpara) msgid "This setup has been tested with RabbitMQ 2.7.1." msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1728(title) msgid "On Ubuntu / Debian" msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1730(simpara) #: ./doc/high-availability-guide/bk-ha-guide.xml1740(simpara) msgid "RabbitMQ is packaged on both distros:" msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1733(link) msgid "Official manual for installing RabbitMQ on Ubuntu / Debian" msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1738(title) msgid "On Fedora / RHEL" msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1743(link) msgid "Official manual for installing RabbitMQ on Fedora / RHEL" msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1749(title) msgid "Configure RabbitMQ" msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1751(simpara) msgid "Here we are building a cluster of RabbitMQ nodes to construct a RabbitMQ broker. Mirrored queues in RabbitMQ improve the availability of the service, since it becomes resilient to failures. We have to consider that while exchanges and bindings survive the loss of individual nodes, queues and their messages do not, because a queue and its contents are located on a single node. If we lose this node, we also lose the queue." msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1756(simpara) msgid "We assume that we run (at least) two RabbitMQ servers. To build a broker, we need to ensure that all nodes have the same Erlang cookie file. To do so, stop RabbitMQ everywhere and copy the cookie from the rabbit1 server to the other server(s):" msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1761(simpara) msgid "Then, start RabbitMQ on all nodes. If RabbitMQ fails to start, you can’t continue to the next step." msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1763(simpara) msgid "Now we are building the HA cluster. From rabbit2, run these commands:" msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1767(simpara) msgid "To verify the cluster status:" msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1772(simpara) msgid "If the cluster is working, you can now proceed to creating users and passwords for queues." msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1774(emphasis) msgid "Note for RabbitMQ version 3" msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1776(simpara) msgid "Queue mirroring is no longer controlled by the x-ha-policy argument when declaring a queue. OpenStack can continue to declare this argument, but it won’t cause queues to be mirrored. We need to make sure that all queues (except those with auto-generated names) are mirrored across all running nodes:" msgstr ""
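# One way to do this with RabbitMQ 3.x is an ha-all policy that matches every queue except
# the auto-named amq.* ones, for example:
#
#   rabbitmqctl set_policy ha-all '^(?!amq\.).*' '{"ha-mode": "all"}'
#
# Run it once on any node of the cluster; policies apply cluster-wide.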
#: ./doc/high-availability-guide/bk-ha-guide.xml1781(link) msgid "More information about high availability in RabbitMQ" msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1786(title) msgid "Configure OpenStack Services to use RabbitMQ" msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1788(simpara) msgid "Since the Grizzly release, most of the OpenStack components that use queuing have supported HA queues. We have to configure them to use at least two RabbitMQ nodes." msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1790(simpara) msgid "Apply the following configuration to all services that use RabbitMQ:" msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1791(simpara) msgid "RabbitMQ HA cluster host:port pairs:" msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1793(simpara) msgid "How frequently to retry connecting with RabbitMQ:" msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1795(simpara) msgid "How long to back off between retries when connecting to RabbitMQ:" msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1797(simpara) msgid "Maximum number of retries when connecting to RabbitMQ (infinite by default):" msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1799(simpara) msgid "Use durable queues in RabbitMQ:" msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1801(simpara) msgid "Use H/A queues in RabbitMQ (x-ha-policy: all):" msgstr ""
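# A consolidated sketch of these options as they might appear in a service configuration
# file such as nova.conf (host names follow the rabbit1/rabbit2 naming used above; the
# values are illustrative):
#
#   rabbit_hosts=rabbit1:5672,rabbit2:5672
#   rabbit_retry_interval=1
#   rabbit_retry_backoff=2
#   rabbit_max_retries=0
#   rabbit_durable_queues=false
#   rabbit_ha_queues=true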
#: ./doc/high-availability-guide/bk-ha-guide.xml1803(simpara) msgid "If you change the configuration from an old setup that did not use HA queues, you should interrupt the service:" msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1807(simpara) msgid "Services currently working with HA queues: OpenStack Compute, OpenStack Block Storage, OpenStack Networking, Telemetry." msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1812(title) msgid "HAProxy Nodes" msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1814(simpara) msgid "HAProxy is a very fast and reliable solution offering high availability, load balancing, and proxying for TCP and HTTP-based applications. It is particularly suited for web sites crawling under very high loads while needing persistence or Layer 7 processing. Supporting tens of thousands of connections is clearly realistic with today’s hardware." msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1818(simpara) msgid "To install HAProxy on your nodes, refer to its official documentation. Also, keep in mind that this service should not be a single point of failure, so you need at least two nodes running HAProxy." msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1821(simpara) msgid "Here is an example HAProxy configuration file:" msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1961(simpara) msgid "After each change to this file, you should restart HAProxy." msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1965(title) msgid "OpenStack Controller Nodes" msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1967(simpara) msgid "OpenStack controller nodes contain:" msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1970(simpara) msgid "All OpenStack API services" msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1975(simpara) msgid "All OpenStack schedulers" msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1980(simpara) msgid "Memcached service" msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1987(title) msgid "Running OpenStack API & schedulers" msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1991(title) msgid "API Services" msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1993(simpara) msgid "All OpenStack projects have an API service for controlling all the resources in the cloud. In active/active mode, the most common setup is to scale out these services on at least two nodes and use load balancing and a virtual IP (with HAProxy & Keepalived in this setup)." msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1997(emphasis) msgid "Configuring OpenStack API services" msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1999(simpara) msgid "To configure our cloud with highly available and scalable API services, we need to ensure that:" msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml2002(simpara) msgid "The virtual IP is used when configuring OpenStack Identity endpoints." msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml2007(simpara) msgid "All OpenStack configuration files refer to the virtual IP." msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml2013(emphasis) msgid "In case of failure" msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml2015(simpara) msgid "The monitor check is quite simple, since it just establishes a TCP connection to the API port. Compared to the active/passive mode using Corosync and resource agents, we do not check whether the service is actually running. That is why all OpenStack API services should be monitored by another tool (for example, Nagios) in order to detect failures in the cloud framework infrastructure." msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml2022(title) msgid "Schedulers" msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml2024(simpara) msgid "OpenStack schedulers are used to determine how to dispatch compute, network and volume requests. The most common setup is to use RabbitMQ as the messaging system, as already documented in this guide. These services are connected to the messaging backend and can scale out:" msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml2029(simpara) msgid "nova-scheduler" msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml2034(simpara) msgid "nova-conductor" msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml2039(simpara) msgid "cinder-scheduler" msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml2044(simpara) msgid "neutron-server" msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml2049(simpara) msgid "ceilometer-collector" msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml2054(simpara) msgid "heat-engine" msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml2059(simpara) msgid "Please refer to the RabbitMQ section to configure these services with multiple messaging servers." msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml2064(title) msgid "Memcached" msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml2066(simpara) msgid "Most OpenStack services use an application to offer persistence and to store ephemeral data (like tokens). Memcached is one such application and can scale out easily without any specific tricks." msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml2068(simpara) msgid "To install and configure it, you can read the official documentation." msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml2069(simpara) msgid "Memory caching is managed by oslo-incubator, so the way to use multiple memcached servers is the same for all projects." msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml2070(simpara) msgid "Example with two hosts:" msgstr ""
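# A sketch of such a configuration, as it might appear in a service configuration file
# (controller1 and controller2 follow the host names used in the next paragraph; 11211 is
# the default memcached port):
#
#   memcached_servers = controller1:11211,controller2:11211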
#: ./doc/high-availability-guide/bk-ha-guide.xml2072(simpara) msgid "By default, controller1 handles the caching service, but if that host goes down, controller2 does the job. More information about memcached installation is available in the OpenStack Compute Manual." msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml2078(title) msgid "OpenStack Network Nodes" msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml2080(simpara) msgid "OpenStack network nodes contain:" msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml2083(simpara) msgid "Neutron DHCP Agent" msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml2088(simpara) msgid "Neutron L2 Agent" msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml2093(simpara) msgid "Neutron L3 Agent" msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml2098(simpara) msgid "Neutron Metadata Agent" msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml2103(simpara) msgid "Neutron LBaaS Agent" msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml2109(simpara) msgid "The Neutron L2 agent does not need to be highly available. It has to be installed on each data forwarding node and controls the virtual networking driver, such as Open vSwitch or Linux Bridge. One L2 agent runs per node and controls its virtual interfaces; that is why it cannot be distributed and made highly available." msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml2117(title) msgid "Running Neutron DHCP Agent" msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml2119(simpara) msgid "Since the Grizzly release, the OpenStack Networking service has had a scheduler that lets you run multiple agents across nodes. In addition, the DHCP agent can be natively highly available. For details, see the OpenStack Configuration Reference." msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml2125(title) msgid "Running Neutron L3 Agent" msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml2127(simpara) msgid "Since the Grizzly release, the Neutron L3 agent has been scalable thanks to the scheduler that allows distribution of virtual routers across multiple nodes. However, there is no native feature to make these routers highly available. At this time, the active/passive solution exists to run the Neutron L3 agent in failover mode with Pacemaker. See the active/passive section of this guide." msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml2136(title) msgid "Running Neutron Metadata Agent" msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml2138(simpara) msgid "There is no native feature to make this service highly available. At this time, the active/passive solution exists to run the Neutron Metadata agent in failover mode with Pacemaker. See the active/passive section of this guide." msgstr ""
#. Put one translator per line, in the form of NAME , YEAR1, YEAR2 #: ./doc/high-availability-guide/bk-ha-guide.xml0(None) msgid "translator-credits" msgstr ""