openstack-manuals/doc/high-availability-guide/locale/ko_KR.po
OpenStack Proposal Bot 6dd5f96e58 Imported Translations from Transifex
Change-Id: Id003c3f1b43bb006c9413c7393e2c2f442ba96a7
2014-04-19 06:44:27 +00:00

#
# Translators:
msgid ""
msgstr ""
"Project-Id-Version: OpenStack Manuals\n"
"POT-Creation-Date: 2014-04-19 04:10+0000\n"
"PO-Revision-Date: 2014-04-18 18:25+0000\n"
"Last-Translator: openstackjenkins <jenkins@openstack.org>\n"
"Language-Team: Korean (Korea) (http://www.transifex.com/projects/p/openstack/language/ko_KR/)\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Language: ko_KR\n"
"Plural-Forms: nplurals=1; plural=0;\n"
#: ./doc/high-availability-guide/ha-guide-docinfo.xml4(firstname)
#: ./doc/high-availability-guide/bk-ha-guide.xml8(firstname)
msgid "Florian"
msgstr "Florian"
#: ./doc/high-availability-guide/ha-guide-docinfo.xml5(surname)
#: ./doc/high-availability-guide/bk-ha-guide.xml9(surname)
msgid "Haas"
msgstr "Haas"
#: ./doc/high-availability-guide/ha-guide-docinfo.xml7(email)
#: ./doc/high-availability-guide/bk-ha-guide.xml11(email)
msgid "florian@hastexo.com"
msgstr "florian@hastexo.com"
#: ./doc/high-availability-guide/ha-guide-docinfo.xml9(orgname)
#: ./doc/high-availability-guide/bk-ha-guide.xml13(orgname)
msgid "hastexo"
msgstr "hastexo"
#: ./doc/high-availability-guide/bk-ha-guide.xml5(title)
msgid "OpenStack High Availability Guide"
msgstr "OpenStack 고 가용성 가이드"
#: ./doc/high-availability-guide/bk-ha-guide.xml17(year)
msgid "2012"
msgstr "2012"
#: ./doc/high-availability-guide/bk-ha-guide.xml18(year)
msgid "2013"
msgstr "2013"
#: ./doc/high-availability-guide/bk-ha-guide.xml19(holder)
msgid "OpenStack Contributors"
msgstr "OpenStack 기여자"
#: ./doc/high-availability-guide/bk-ha-guide.xml21(releaseinfo)
msgid "current"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml22(productname)
msgid "OpenStack"
msgstr "OpenStack"
#: ./doc/high-availability-guide/bk-ha-guide.xml26(remark)
msgid "Copyright details are filled in by the template."
msgstr "저작권 상세 정보는 템플릿에 채워집니다."
#: ./doc/high-availability-guide/bk-ha-guide.xml31(date)
msgid "2012-01-16"
msgstr "2012-01-16"
#: ./doc/high-availability-guide/bk-ha-guide.xml35(para)
msgid "Organizes guide based on cloud controller and compute nodes."
msgstr "Compute 노드와 Cloud Controller를 기초로 하여 가이드를 작성하였습니다."
#: ./doc/high-availability-guide/bk-ha-guide.xml41(date)
msgid "2012-05-24"
msgstr "2012-05-24"
#: ./doc/high-availability-guide/bk-ha-guide.xml45(para)
msgid "Begin trunk designation."
msgstr "Trunk 작업 시작"
#: ./doc/high-availability-guide/bk-ha-guide.xml54(title)
msgid "Introduction to OpenStack High Availability"
msgstr "OpenStack 고가용성 소개"
#: ./doc/high-availability-guide/bk-ha-guide.xml56(simpara)
msgid "High Availability systems seek to minimize two things:"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml59(simpara)
msgid ""
"<emphasis role=\"strong\">System downtime</emphasis> occurs when a "
"<emphasis>user-facing</emphasis> service is unavailable beyond a specified "
"maximum amount of time, and"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml63(simpara)
msgid ""
"<emphasis role=\"strong\">Data loss</emphasis>: accidental deletion or "
"destruction of data."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml67(simpara)
msgid ""
"Most high availability systems guarantee protection against system downtime "
"and data loss only in the event of a single failure. However, they are also "
"expected to protect against cascading failures, where a single failure "
"deteriorates into a series of consequential failures."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml68(simpara)
msgid ""
"A crucial aspect of high availability is the elimination of single points of"
" failure (SPOFs). A SPOF is an individual piece of equipment or software "
"which will cause system downtime or data loss if it fails. In order to "
"eliminate SPOFs, check that mechanisms exist for redundancy of:"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml71(simpara)
msgid "Network components, such as switches and routers"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml76(simpara)
msgid "Applications and automatic service migration"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml81(simpara)
msgid "Storage components"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml86(simpara)
msgid "Facility services such as power, air conditioning, and fire protection"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml91(simpara)
msgid ""
"Most high availability systems will fail in the event of multiple "
"independent (non-consequential) failures. In this case, most systems will "
"protect data over maintaining availability."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml92(simpara)
msgid ""
"High-availability systems typically achieve uptime of 99.99% or more, which "
"roughly equates to less than an hour of cumulative downtime per year. In "
"order to achieve this, high availability systems should keep recovery times "
"after a failure to about one to two minutes, sometimes significantly less."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml93(simpara)
msgid ""
"OpenStack currently meets such availability requirements for its own "
"infrastructure services, meaning that an uptime of 99.99% is feasible for "
"the OpenStack infrastructure proper. However, OpenStack "
"<emphasis>does</emphasis> <emphasis>not</emphasis> guarantee 99.99% "
"availability for individual guest instances."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml94(simpara)
msgid ""
"Preventing single points of failure can depend on whether or not a service "
"is stateless."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml97(title)
msgid "Stateless vs. Stateful services"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml99(simpara)
msgid ""
"A stateless service is one that provides a response after your request, and "
"then requires no further attention. To make a stateless service highly "
"available, you need to provide redundant instances and load balance them. "
"OpenStack services that are stateless include nova-api, nova-conductor, "
"glance-api, keystone-api, neutron-api and nova-scheduler."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml100(simpara)
msgid ""
"A stateful service is one where subsequent requests to the service depend on"
" the results of the first request. Stateful services are more difficult to "
"manage because a single action typically involves more than one request, so "
"simply providing additional instances and load balancing will not solve the "
"problem. For example, if the Horizon user interface reset itself every time "
"you went to a new page, it wouldn't be very useful. OpenStack services that "
"are stateful include the OpenStack database and message queue."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml101(simpara)
msgid ""
"Making stateful services highly available can depend on whether you choose "
"an active/passive or active/active configuration."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml105(title)
msgid "Active/Passive"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml107(simpara)
msgid ""
"In an active/passive configuration, systems are set up to bring additional "
"resources online to replace those that have failed. For example, OpenStack "
"would write to the main database while maintaining a disaster recovery "
"database that can be brought online in the event that the main database "
"fails."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml108(simpara)
msgid ""
"Typically, an active/passive installation for a stateless service would "
"maintain a redundant instance that can be brought online when required. "
"Requests are load balanced using a virtual IP address and a load balancer "
"such as HAProxy."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml109(simpara)
msgid ""
"A typical active/passive installation for a stateful service maintains a "
"replacement resource that can be brought online when required. A separate "
"application (such as Pacemaker or Corosync) monitors these services, "
"bringing the backup online as necessary."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml113(title)
msgid "Active/Active"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml115(simpara)
msgid ""
"In an active/active configuration, systems also use a backup but will manage"
" both the main and redundant systems concurrently. This way, if there is a "
"failure the user is unlikely to notice. The backup system is already online,"
" and takes on increased load while the main system is fixed and brought back"
" online."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml116(simpara)
msgid ""
"Typically, an active/active installation for a stateless service would "
"maintain a redundant instance, and requests are load balanced using a "
"virtual IP address and a load balancer such as HAProxy."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml117(simpara)
msgid ""
"A typical active/active installation for a stateful service would include "
"redundant services with all instances having an identical state. For "
"example, updates to one instance of a database would also update all other "
"instances. This way a request to one instance is the same as a request to "
"any other. A load balancer manages the traffic to these systems, ensuring "
"that operational systems always handle the request."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml118(simpara)
msgid ""
"These are some of the more common ways to implement these high availability "
"architectures, but they are by no means the only ways to do it. The "
"important thing is to make sure that your services are redundant, and "
"available; how you achieve that is up to you. This document will cover some "
"of the more common options for highly available systems."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml123(title)
msgid "HA Using Active/Passive"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml127(title)
msgid "The Pacemaker Cluster Stack"
msgstr "The Pacemaker Cluster Stack"
#: ./doc/high-availability-guide/bk-ha-guide.xml129(simpara)
msgid ""
"OpenStack infrastructure high availability relies on the <link "
"href=\"http://www.clusterlabs.org\">Pacemaker</link> cluster stack, the "
"state-of-the-art high availability and load balancing stack for the Linux "
"platform. Pacemaker is storage and application-agnostic, and is in no way "
"specific to OpenStack."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml134(simpara)
msgid ""
"Pacemaker relies on the <link "
"href=\"http://www.corosync.org\">Corosync</link> messaging layer for "
"reliable cluster communications. Corosync implements the Totem single-ring "
"ordering and membership protocol. It also provides UDP and InfiniBand based "
"messaging, quorum, and cluster membership to Pacemaker."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml139(simpara)
msgid ""
"Pacemaker interacts with applications through <emphasis>resource "
"agents</emphasis> (RAs), of which it supports over 70 natively. Pacemaker "
"can also easily use third-party RAs. An OpenStack high-availability "
"configuration uses existing native Pacemaker RAs (such as those managing "
"MySQL databases or virtual IP addresses), existing third-party RAs (such as "
"for RabbitMQ), and native OpenStack RAs (such as those managing the "
"OpenStack Identity and Image Services)."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml148(title)
msgid "Installing Packages"
msgstr "패키지 설치"
#: ./doc/high-availability-guide/bk-ha-guide.xml150(simpara)
msgid ""
"On any host that is meant to be part of a Pacemaker cluster, you must first "
"establish cluster communications through the Corosync messaging layer. This "
"involves installing the following packages (and their dependencies, which "
"your package manager will normally install automatically):"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml157(simpara)
msgid ""
"<literal>pacemaker</literal> (note that the crm shell should be downloaded "
"separately)."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml162(literal)
msgid "crmsh"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml167(literal)
msgid "corosync"
msgstr "corosync"
#: ./doc/high-availability-guide/bk-ha-guide.xml172(literal)
msgid "cluster-glue"
msgstr "cluster-glue"
#: ./doc/high-availability-guide/bk-ha-guide.xml176(simpara)
msgid ""
"<literal>fence-agents</literal> (Fedora only; all other distributions use "
"fencing agents from <literal>cluster-glue</literal>)"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml182(literal)
msgid "resource-agents"
msgstr "resource-agents"
#: ./doc/high-availability-guide/bk-ha-guide.xml189(title)
msgid "Setting up Corosync"
msgstr "Corosync 설정"
#: ./doc/high-availability-guide/bk-ha-guide.xml191(simpara)
msgid ""
"Besides installing the <literal>corosync</literal> package, you will also "
"have to create a configuration file, stored in "
"<literal>/etc/corosync/corosync.conf</literal>. Most distributions ship an "
"example configuration file (<literal>corosync.conf.example</literal>) as "
"part of the documentation bundled with the <literal>corosync</literal> "
"package. An example Corosync configuration file is shown below:"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml199(title)
msgid "Corosync configuration file (<literal>corosync.conf</literal>)"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml273(para)
msgid ""
"The <literal>token</literal> value specifies the time, in milliseconds, "
"during which the Corosync token is expected to be transmitted around the "
"ring. When this timeout expires, the token is declared lost, and after "
"<literal>token_retransmits_before_loss_const</literal> lost tokens the non-"
"responding <emphasis>processor</emphasis> (cluster node) is declared dead. "
"In other words, <literal>token</literal> × "
"<literal>token_retransmits_before_loss_const</literal> is the maximum time a"
" node is allowed to not respond to cluster messages before being considered "
"dead. The default for <literal>token</literal> is 1000 (1 second), with 4 "
"allowed retransmits. These defaults are intended to minimize failover times,"
" but can cause frequent \"false alarms\" and unintended failovers in case of"
" short network interruptions. The values used here are safer, albeit with "
"slightly extended failover times."
msgstr ""
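# The corosync.conf listing itself is a non-translatable programlisting in the
# source document. A minimal totem sketch consistent with the timeout
# discussion above; the values shown are illustrative, not settings mandated
# by the guide:
#   totem {
#       version: 2
#       # longer, "safer" timeouts than the 1000 ms / 4 retransmit defaults
#       token: 10000
#       token_retransmits_before_loss_const: 10
#   }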
#: ./doc/high-availability-guide/bk-ha-guide.xml289(para)
msgid ""
"With <literal>secauth</literal> enabled, Corosync nodes mutually "
"authenticate using a 128-byte shared secret stored in "
"<literal>/etc/corosync/authkey</literal>, which may be generated with the "
"<literal>corosync-keygen</literal> utility. When using "
"<literal>secauth</literal>, cluster communications are also encrypted."
msgstr ""
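# A short sketch of generating and distributing the shared secret described
# above; the node name "node2" is a placeholder:
#   corosync-keygen                                  # writes /etc/corosync/authkey
#   scp /etc/corosync/authkey node2:/etc/corosync/authkey
#   chmod 0400 /etc/corosync/authkey                 # keep the key readable by root only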
#: ./doc/high-availability-guide/bk-ha-guide.xml297(para)
msgid ""
"In Corosync configurations using redundant networking (with more than one "
"<literal>interface</literal>), you must select a Redundant Ring Protocol "
"(RRP) mode other than <literal>none</literal>. <literal>active</literal> is "
"the recommended RRP mode."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml304(para)
msgid ""
"There are several things to note about the recommended interface "
"configuration:"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml310(simpara)
msgid ""
"The <literal>ringnumber</literal> must differ between all configured "
"interfaces, starting with 0."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml316(simpara)
msgid ""
"The <literal>bindnetaddr</literal> is the <emphasis>network</emphasis> "
"address of the interfaces to bind to. The example uses two network addresses"
" of <literal>/24</literal> IPv4 subnets."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml322(simpara)
msgid ""
"Multicast groups (<literal>mcastaddr</literal>) <emphasis>must "
"not</emphasis> be reused across cluster boundaries. In other words, no two "
"distinct clusters should ever use the same multicast group. Be sure to "
"select multicast addresses compliant with <link "
"href=\"http://www.ietf.org/rfc/rfc2365.txt\">RFC 2365, \"Administratively "
"Scoped IP Multicast\"</link>."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml331(simpara)
msgid ""
"For firewall configurations, note that Corosync communicates over UDP only, "
"and uses <literal>mcastport</literal> (for receives) and "
"<literal>mcastport</literal>-1 (for sends)."
msgstr ""
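# Pulling the interface notes above together, a redundant-ring configuration
# inside the totem section might look like the following; addresses, ports,
# and the RRP mode are illustrative examples only:
#   rrp_mode: active
#   interface {
#       ringnumber: 0
#       bindnetaddr: 192.168.42.0     # network address of a /24 subnet
#       mcastaddr: 239.255.42.1       # RFC 2365 administratively scoped group
#       mcastport: 5405               # 5405 for receives, 5404 for sends
#   }
#   interface {
#       ringnumber: 1
#       bindnetaddr: 10.0.42.0
#       mcastaddr: 239.255.42.2
#       mcastport: 5405
#   }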
#: ./doc/high-availability-guide/bk-ha-guide.xml340(para)
msgid ""
"The <literal>service</literal> declaration for the "
"<literal>pacemaker</literal> service may be placed in the "
"<literal>corosync.conf</literal> file directly, or in its own separate file,"
" <literal>/etc/corosync/service.d/pacemaker</literal>."
msgstr ""
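# If you opt for the separate file mentioned above, a typical
# /etc/corosync/service.d/pacemaker could contain the following; "ver: 1"
# assumes Pacemaker is started as its own service, as in the "Starting
# Pacemaker" section below:
#   service {
#       name: pacemaker
#       ver: 1
#   }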
#: ./doc/high-availability-guide/bk-ha-guide.xml347(simpara)
msgid ""
"Once created, the <literal>corosync.conf</literal> file (and the "
"<literal>authkey</literal> file if the <literal>secauth</literal> option is "
"enabled) must be synchronized across all cluster nodes."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml353(title)
msgid "Starting Corosync"
msgstr "Corosync 시작"
#: ./doc/high-availability-guide/bk-ha-guide.xml355(simpara)
msgid ""
"Corosync is started as a regular system service. Depending on your "
"distribution, it may ship with an LSB (System V style) init script, an "
"upstart job, or a systemd unit file. Either way, the service is usually "
"named <literal>corosync</literal>:"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml361(simpara)
msgid "<literal>/etc/init.d/corosync start</literal> (LSB)"
msgstr "<literal>/etc/init.d/corosync start</literal> (LSB)"
#: ./doc/high-availability-guide/bk-ha-guide.xml365(simpara)
msgid "<literal>service corosync start</literal> (LSB, alternate)"
msgstr "<literal>service corosync start</literal> (LSB, alternate)"
#: ./doc/high-availability-guide/bk-ha-guide.xml369(simpara)
msgid "<literal>start corosync</literal> (upstart)"
msgstr "<literal>start corosync</literal> (upstart)"
#: ./doc/high-availability-guide/bk-ha-guide.xml373(simpara)
msgid "<literal>systemctl start corosync</literal> (systemd)"
msgstr "<literal>systemctl start corosync</literal> (systemd)"
#: ./doc/high-availability-guide/bk-ha-guide.xml377(simpara)
msgid "You can now check the Corosync connectivity with two tools."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml378(simpara)
msgid ""
"The <literal>corosync-cfgtool</literal> utility, when invoked with the "
"<literal>-s</literal> option, gives a summary of the health of the "
"communication rings:"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml389(simpara)
msgid ""
"The <literal>corosync-objctl</literal> utility can be used to dump the "
"Corosync cluster member list:"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml398(simpara)
msgid ""
"You should see a <literal>status=joined</literal> entry for each of your "
"constituent cluster nodes."
msgstr ""
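# As a concrete example of the two checks described above (the corosync-objctl
# key shown is the one commonly used with Corosync 1.x and may differ on newer
# versions):
#   corosync-cfgtool -s                                 # each ring should report no faults
#   corosync-objctl runtime.totem.pg.mrp.srp.members    # look for status=joined per node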
#: ./doc/high-availability-guide/bk-ha-guide.xml403(title)
msgid "Starting Pacemaker"
msgstr "Pacemaker 시작"
#: ./doc/high-availability-guide/bk-ha-guide.xml405(simpara)
msgid ""
"Once the Corosync services have been started, and you have established that "
"the cluster is communicating properly, it is safe to start "
"<literal>pacemakerd</literal>, the Pacemaker master control process:"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml410(simpara)
msgid "<literal>/etc/init.d/pacemaker start</literal> (LSB)"
msgstr "<literal>/etc/init.d/pacemaker start</literal> (LSB)"
#: ./doc/high-availability-guide/bk-ha-guide.xml414(simpara)
msgid "<literal>service pacemaker start</literal> (LSB, alternate)"
msgstr "<literal>service pacemaker start</literal> (LSB, alternate)"
#: ./doc/high-availability-guide/bk-ha-guide.xml418(simpara)
msgid "<literal>start pacemaker</literal> (upstart)"
msgstr "<literal>start pacemaker</literal> (upstart)"
#: ./doc/high-availability-guide/bk-ha-guide.xml422(simpara)
msgid "<literal>systemctl start pacemaker</literal> (systemd)"
msgstr "<literal>systemctl start pacemaker</literal> (systemd)"
#: ./doc/high-availability-guide/bk-ha-guide.xml426(simpara)
msgid ""
"Once Pacemaker services have started, Pacemaker will create a default empty "
"cluster configuration with no resources. You may observe Pacemaker's status "
"with the <literal>crm_mon</literal> utility:"
msgstr ""
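# For example, a one-shot status check rather than the default continuously
# refreshing display:
#   crm_mon -1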
#: ./doc/high-availability-guide/bk-ha-guide.xml443(title)
msgid "Setting basic cluster properties"
msgstr "기본 클러스터 특성 설정"
#: ./doc/high-availability-guide/bk-ha-guide.xml445(simpara)
msgid ""
"Once your Pacemaker cluster is set up, it is recommended to set a few basic "
"cluster properties. To do so, start the <literal>crm</literal> shell and "
"change into the configuration menu by entering <literal>configure</literal>."
" Alternatively, you may jump straight into the Pacemaker configuration menu "
"by typing <literal>crm configure</literal> directly from a shell prompt."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml451(simpara)
msgid "Then, set the following properties:"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml459(para)
msgid ""
"Setting <literal>no-quorum-policy=\"ignore\"</literal> is required in 2-node"
" Pacemaker clusters for the following reason: if quorum enforcement is "
"enabled, and one of the two nodes fails, then the remaining node can not "
"establish a <emphasis>majority</emphasis> of quorum votes necessary to run "
"services, and thus it is unable to take over any resources. The appropriate "
"workaround is to ignore loss of quorum in the cluster. This is safe and "
"necessary <emphasis>only</emphasis> in 2-node clusters. Do not set this "
"property in Pacemaker clusters with more than two nodes."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml471(para)
msgid ""
"Setting <literal>pe-warn-series-max</literal>, <literal>pe-input-series-"
"max</literal> and <literal>pe-error-series-max</literal> to 1000 instructs "
"Pacemaker to keep a longer history of the inputs processed, and errors and "
"warnings generated, by its Policy Engine. This history is typically useful "
"in case cluster troubleshooting becomes necessary."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml480(para)
msgid ""
"Pacemaker uses an event-driven approach to cluster state processing. "
"However, certain Pacemaker actions occur at a configurable interval, "
"<literal>cluster-recheck-interval</literal>, which defaults to 15 minutes. "
"It is usually prudent to reduce this to a shorter interval, such as 5 or 3 "
"minutes."
msgstr ""
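# Combining the properties discussed above into one crm configure sketch; the
# 5-minute recheck interval is one reasonable choice, not a requirement:
#   property no-quorum-policy="ignore" \
#       pe-warn-series-max="1000" \
#       pe-input-series-max="1000" \
#       pe-error-series-max="1000" \
#       cluster-recheck-interval="5min"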
#: ./doc/high-availability-guide/bk-ha-guide.xml489(simpara)
msgid ""
"Once you have made these changes, you may <literal>commit</literal> the "
"updated configuration."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml495(title)
msgid "Cloud Controller Cluster Stack"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml497(simpara)
msgid ""
"The Cloud Controller sits on the management network and needs to talk to all"
" other services."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml500(title)
msgid "Highly available MySQL"
msgstr "고 가용성 MySQL"
#: ./doc/high-availability-guide/bk-ha-guide.xml502(simpara)
msgid ""
"MySQL is the default database server used by many OpenStack services. Making"
" the MySQL service highly available involves"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml506(simpara)
msgid "configuring a DRBD device for use by MySQL,"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml511(simpara)
msgid ""
"configuring MySQL to use a data directory residing on that DRBD device,"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml517(simpara)
#: ./doc/high-availability-guide/bk-ha-guide.xml763(simpara)
msgid ""
"selecting and assigning a virtual IP address (VIP) that can freely float "
"between cluster nodes,"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml523(simpara)
msgid "configuring MySQL to listen on that IP address,"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml528(simpara)
msgid ""
"managing all resources, including the MySQL daemon itself, with the "
"Pacemaker cluster manager."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml535(simpara)
msgid ""
"<link "
"href=\"http://codership.com/products/mysql_galera\">MySQL/Galera</link> is "
"an alternative method of configuring MySQL for high availability. It is "
"likely to become the preferred method of achieving MySQL high availability "
"once it has sufficiently matured. At the time of writing, however, the "
"Pacemaker/DRBD based approach remains the recommended one for OpenStack "
"environments."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml544(title)
#: ./doc/high-availability-guide/bk-ha-guide.xml793(title)
msgid "Configuring DRBD"
msgstr "DRBD 구성"
#: ./doc/high-availability-guide/bk-ha-guide.xml546(simpara)
msgid ""
"The Pacemaker based MySQL server requires a DRBD resource from which it "
"mounts the <literal>/var/lib/mysql</literal> directory. In this example, the"
" DRBD resource is simply named <literal>mysql</literal>:"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml551(title)
msgid ""
"<literal>mysql</literal> DRBD resource configuration "
"(<literal>/etc/drbd.d/mysql.res</literal>)"
msgstr "<literal>mysql</literal> DRBD 자원 구성 (<literal>/etc/drbd.d/mysql.res</literal>)"
#: ./doc/high-availability-guide/bk-ha-guide.xml567(simpara)
msgid ""
"This resource uses an underlying local disk (in DRBD terminology, a "
"<emphasis>backing device</emphasis>) named "
"<literal>/dev/data/mysql</literal> on both cluster nodes, "
"<literal>node1</literal> and <literal>node2</literal>. Normally, this would "
"be an LVM Logical Volume specifically set aside for this purpose. The DRBD "
"<literal>meta-disk</literal> is <literal>internal</literal>, meaning DRBD-"
"specific metadata is being stored at the end of the <literal>disk</literal> "
"device itself. The device is configured to communicate between IPv4 "
"addresses 10.0.42.100 and 10.0.42.254, using TCP port 7700. Once enabled, it"
" will map to a local DRBD block device with the device minor number 0, that "
"is, <literal>/dev/drbd0</literal>."
msgstr ""
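# A sketch of a DRBD resource definition matching the description above; the
# node names, addresses, port, and backing logical volume are the example
# values from the surrounding text:
#   resource mysql {
#     device    minor 0;
#     disk      "/dev/data/mysql";
#     meta-disk internal;
#     on node1 {
#       address ipv4 10.0.42.100:7700;
#     }
#     on node2 {
#       address ipv4 10.0.42.254:7700;
#     }
#   }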
#: ./doc/high-availability-guide/bk-ha-guide.xml576(simpara)
#: ./doc/high-availability-guide/bk-ha-guide.xml825(simpara)
msgid ""
"Enabling a DRBD resource is explained in detail in <link "
"href=\"http://www.drbd.org/users-guide-8.3/s-first-time-up.html\">the DRBD "
"User's Guide</link>. In brief, the proper sequence of commands is this:"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml584(para)
msgid ""
"Initializes DRBD metadata and writes the initial set of metadata to "
"<literal>/dev/data/mysql</literal>. Must be completed on both nodes."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml590(para)
msgid ""
"Creates the <literal>/dev/drbd0</literal> device node, "
"<emphasis>attaches</emphasis> the DRBD device to its backing store, and "
"<emphasis>connects</emphasis> the DRBD node to its peer. Must be completed "
"on both nodes."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml597(para)
#: ./doc/high-availability-guide/bk-ha-guide.xml846(para)
msgid ""
"Kicks off the initial device synchronization, and puts the device into the "
"<literal>primary</literal> (readable and writable) role. See <link "
"href=\"http://www.drbd.org/users-guide-8.3/ch-admin.html#s-roles\">Resource "
"roles</link> (from the DRBD User's Guide) for a more detailed description of"
" the primary and secondary roles in DRBD. Must be completed <emphasis>on one"
" node only,</emphasis> namely the one where you are about to continue with "
"creating your filesystem."
msgstr ""
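# The command sequence being described, for the "mysql" resource; the syntax
# shown is for DRBD 8.3, while 8.4 uses "drbdadm primary --force" instead:
#   drbdadm create-md mysql                             # on both nodes
#   drbdadm up mysql                                    # on both nodes
#   drbdadm -- --overwrite-data-of-peer primary mysql   # on one node only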
#: ./doc/high-availability-guide/bk-ha-guide.xml611(title)
#: ./doc/high-availability-guide/bk-ha-guide.xml860(title)
msgid "Creating a file system"
msgstr "파일 시스템 생성"
#: ./doc/high-availability-guide/bk-ha-guide.xml613(simpara)
msgid ""
"Once the DRBD resource is running and in the primary role (and potentially "
"still in the process of running the initial device synchronization), you may"
" proceed with creating the filesystem for MySQL data. XFS is the generally "
"recommended filesystem:"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml618(simpara)
#: ./doc/high-availability-guide/bk-ha-guide.xml867(simpara)
msgid ""
"You may also use the alternate device path for the DRBD device, which may be"
" easier to remember as it includes the self-explanatory resource name:"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml622(simpara)
#: ./doc/high-availability-guide/bk-ha-guide.xml871(simpara)
msgid ""
"Once completed, you may safely return the device to the secondary role. Any "
"ongoing device synchronization will continue in the background:"
msgstr ""
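# For example, for the "mysql" resource:
#   mkfs -t xfs /dev/drbd0
#   # or, using the name-based device path:
#   mkfs -t xfs /dev/drbd/by-res/mysql
#   # once done, return the device to the secondary role:
#   drbdadm secondary mysql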
#: ./doc/high-availability-guide/bk-ha-guide.xml629(title)
msgid "Preparing MySQL for Pacemaker high availability"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml631(simpara)
msgid ""
"In order for Pacemaker monitoring to function properly, you must ensure that"
" MySQL's database files reside on the DRBD device. If you already have an "
"existing MySQL database, the simplest approach is to just move the contents "
"of the existing <literal>/var/lib/mysql</literal> directory into the newly "
"created filesystem on the DRBD device."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml637(simpara)
msgid ""
"You must complete the next step while the MySQL database server is shut "
"down."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml643(simpara)
msgid ""
"For a new MySQL installation with no existing data, you may also run the "
"<literal>mysql_install_db</literal> command:"
msgstr ""
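# A sketch of both approaches, run on one node only while MySQL is stopped;
# the /mnt mount point is a placeholder:
#   mount /dev/drbd/by-res/mysql /mnt
#   mv /var/lib/mysql/* /mnt              # existing database
#   # or, for a brand-new installation:
#   mysql_install_db --datadir=/mnt
#   umount /mnt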
#: ./doc/high-availability-guide/bk-ha-guide.xml648(simpara)
msgid ""
"Regardless of the approach, the steps outlined here must be completed on "
"only one cluster node."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml653(title)
msgid "Adding MySQL resources to Pacemaker"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml655(simpara)
msgid ""
"You may now proceed with adding the Pacemaker configuration for MySQL "
"resources. Connect to the Pacemaker cluster with <literal>crm "
"configure</literal>, and add the following cluster resources:"
msgstr ""
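# The cluster resource listing itself is a non-translatable programlisting in
# the source document. An abridged sketch consistent with the resource
# descriptions that follow; the IP address, device paths, and monitor
# intervals are illustrative:
#   primitive p_ip_mysql ocf:heartbeat:IPaddr2 \
#     params ip="192.168.42.101" cidr_netmask="24" \
#     op monitor interval="30s"
#   primitive p_drbd_mysql ocf:linbit:drbd \
#     params drbd_resource="mysql" \
#     op monitor interval="30s" role="Slave" \
#     op monitor interval="29s" role="Master"
#   primitive p_fs_mysql ocf:heartbeat:Filesystem \
#     params device="/dev/drbd/by-res/mysql" directory="/var/lib/mysql" fstype="xfs" \
#     op monitor interval="60s"
#   primitive p_mysql ocf:heartbeat:mysql \
#     op monitor interval="20s" timeout="30s"
#   group g_mysql p_ip_mysql p_fs_mysql p_mysql
#   ms ms_drbd_mysql p_drbd_mysql meta notify="true" clone-max="2"
#   colocation c_mysql_on_drbd inf: g_mysql ms_drbd_mysql:Master
#   order o_drbd_before_mysql inf: ms_drbd_mysql:promote g_mysql:start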
#: ./doc/high-availability-guide/bk-ha-guide.xml691(simpara)
#: ./doc/high-availability-guide/bk-ha-guide.xml925(simpara)
#: ./doc/high-availability-guide/bk-ha-guide.xml1124(simpara)
#: ./doc/high-availability-guide/bk-ha-guide.xml1217(simpara)
#: ./doc/high-availability-guide/bk-ha-guide.xml1378(simpara)
#: ./doc/high-availability-guide/bk-ha-guide.xml1452(simpara)
#: ./doc/high-availability-guide/bk-ha-guide.xml1496(simpara)
#: ./doc/high-availability-guide/bk-ha-guide.xml1537(simpara)
msgid "This configuration creates"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml694(simpara)
msgid ""
"<literal>p_ip_mysql</literal>, a virtual IP address for use by MySQL "
"(192.168.42.101),"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml699(simpara)
msgid ""
"<literal>p_fs_mysql</literal>, a Pacemaker managed filesystem mounted to "
"<literal>/var/lib/mysql</literal> on whatever node currently runs the MySQL "
"service,"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml705(simpara)
msgid ""
"<literal>ms_drbd_mysql</literal>, the <emphasis>master/slave set</emphasis> "
"managing the <literal>mysql</literal> DRBD resource,"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml710(simpara)
#: ./doc/high-availability-guide/bk-ha-guide.xml944(simpara)
msgid ""
"a service <literal>group</literal> and <literal>order</literal> and "
"<literal>colocation</literal> constraints to ensure resources are started on"
" the correct nodes, and in the correct sequence."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml716(simpara)
msgid ""
"<literal>crm configure</literal> supports batch input, so you may copy and "
"paste the above into your live pacemaker configuration, and then make "
"changes as required. For example, you may enter <literal>edit "
"p_ip_mysql</literal> from the <literal>crm configure</literal> menu and edit"
" the resource to match your preferred virtual IP address."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml721(simpara)
msgid ""
"Once completed, commit your configuration changes by entering "
"<literal>commit</literal> from the <literal>crm configure</literal> menu. "
"Pacemaker will then start the MySQL service, and its dependent resources, on"
" one of your nodes."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml727(title)
msgid "Configuring OpenStack services for highly available MySQL"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml729(simpara)
msgid ""
"Your OpenStack services must now point their MySQL configuration to the "
"highly available, virtual cluster IP address, rather than a MySQL server's "
"physical IP address as you normally would."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml732(simpara)
msgid ""
"For OpenStack Image, for example, if your MySQL service IP address is "
"192.168.42.101 as in the configuration explained here, you would use the "
"following line in your OpenStack Image registry configuration file (<literal"
">glance-registry.conf</literal>):"
msgstr ""
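# For example; the database user and password are placeholders:
#   sql_connection = mysql://glance:GLANCE_DBPASS@192.168.42.101/glance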
#: ./doc/high-availability-guide/bk-ha-guide.xml737(simpara)
msgid ""
"No other changes are necessary to your OpenStack configuration. If the node "
"currently hosting your database experiences a problem necessitating service "
"failover, your OpenStack services may experience a brief MySQL interruption,"
" as they would in the event of a network hiccup, and then continue to run "
"normally."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml746(title)
msgid "Highly available RabbitMQ"
msgstr "고 가용성 RabbitMQ"
#: ./doc/high-availability-guide/bk-ha-guide.xml748(simpara)
msgid ""
"RabbitMQ is the default AMQP server used by many OpenStack services. Making "
"the RabbitMQ service highly available involves:"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml752(simpara)
msgid "configuring a DRBD device for use by RabbitMQ,"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml757(simpara)
msgid ""
"configuring RabbitMQ to use a data directory residing on that DRBD device,"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml769(simpara)
msgid "configuring RabbitMQ to listen on that IP address,"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml774(simpara)
msgid ""
"managing all resources, including the RabbitMQ daemon itself, with the "
"Pacemaker cluster manager."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml781(simpara)
msgid ""
"There is an alternative method of configuring RabbitMQ for high "
"availability. That approach, known as <link "
"href=\"http://www.rabbitmq.com/ha.html\">active-active mirrored "
"queues</link>, happens to be the one preferred by the RabbitMQ developers; "
"however, it has shown less than ideal consistency and reliability in "
"OpenStack clusters. Thus, at the time of writing, the Pacemaker/DRBD based "
"approach remains the recommended one for OpenStack environments, although "
"this may change in the near future as RabbitMQ active-active mirrored queues"
" mature."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml795(simpara)
msgid ""
"The Pacemaker based RabbitMQ server requires a DRBD resource from which it "
"mounts the <literal>/var/lib/rabbitmq</literal> directory. In this example, "
"the DRBD resource is simply named <literal>rabbitmq</literal>:"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml800(title)
msgid ""
"<literal>rabbitmq</literal> DRBD resource configuration "
"(<literal>/etc/drbd.d/rabbitmq.res</literal>)"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml816(simpara)
msgid ""
"This resource uses an underlying local disk (in DRBD terminology, a "
"<emphasis>backing device</emphasis>) named "
"<literal>/dev/data/rabbitmq</literal> on both cluster nodes, "
"<literal>node1</literal> and <literal>node2</literal>. Normally, this would "
"be an LVM Logical Volume specifically set aside for this purpose. The DRBD "
"<literal>meta-disk</literal> is <literal>internal</literal>, meaning DRBD-"
"specific metadata is being stored at the end of the <literal>disk</literal> "
"device itself. The device is configured to communicate between IPv4 "
"addresses 10.0.42.100 and 10.0.42.254, using TCP port 7701. Once enabled, it"
" will map to a local DRBD block device with the device minor number 1, that "
"is, <literal>/dev/drbd1</literal>."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml833(para)
msgid ""
"Initializes DRBD metadata and writes the initial set of metadata to "
"<literal>/dev/data/rabbitmq</literal>. Must be completed on both nodes."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml839(para)
msgid ""
"Creates the <literal>/dev/drbd1</literal> device node, "
"<emphasis>attaches</emphasis> the DRBD device to its backing store, and "
"<emphasis>connects</emphasis> the DRBD node to its peer. Must be completed "
"on both nodes."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml862(simpara)
msgid ""
"Once the DRBD resource is running and in the primary role (and potentially "
"still in the process of running the initial device synchronization), you may"
" proceed with creating the filesystem for RabbitMQ data. XFS is generally "
"the recommended filesystem:"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml878(title)
msgid "Preparing RabbitMQ for Pacemaker high availability"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml880(simpara)
msgid ""
"In order for Pacemaker monitoring to function properly, you must ensure that"
" RabbitMQ's <literal>.erlang.cookie</literal> files are identical on all "
"nodes, regardless of whether DRBD is mounted there or not. The simplest way "
"of doing so is to take an existing <literal>.erlang.cookie</literal> from "
"one of your nodes and copy it to the RabbitMQ data directory on the other "
"node, as well as to the DRBD-backed filesystem."
msgstr ""
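# A sketch of the copy steps described above; "node2" and the /mnt mount
# point are placeholders:
#   scp /var/lib/rabbitmq/.erlang.cookie node2:/var/lib/rabbitmq/
#   mount /dev/drbd/by-res/rabbitmq /mnt
#   cp -a /var/lib/rabbitmq/.erlang.cookie /mnt
#   umount /mnt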
#: ./doc/high-availability-guide/bk-ha-guide.xml893(title)
msgid "Adding RabbitMQ resources to Pacemaker"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml895(simpara)
msgid ""
"You may now proceed with adding the Pacemaker configuration for RabbitMQ "
"resources. Connect to the Pacemaker cluster with <literal>crm "
"configure</literal>, and add the following cluster resources:"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml928(simpara)
msgid ""
"<literal>p_ip_rabbitmp</literal>, a virtual IP address for use by RabbitMQ "
"(192.168.42.100),"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml933(simpara)
msgid ""
"<literal>p_fs_rabbitmq</literal>, a Pacemaker managed filesystem mounted to "
"<literal>/var/lib/rabbitmq</literal> on whatever node currently runs the "
"RabbitMQ service,"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml939(simpara)
msgid ""
"<literal>ms_drbd_rabbitmq</literal>, the <emphasis>master/slave "
"set</emphasis> managing the <literal>rabbitmq</literal> DRBD resource,"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml950(simpara)
msgid ""
"<literal>crm configure</literal> supports batch input, so you may copy and "
"paste the above into your live pacemaker configuration, and then make "
"changes as required. For example, you may enter <literal>edit "
"p_ip_rabbitmq</literal> from the <literal>crm configure</literal> menu and "
"edit the resource to match your preferred virtual IP address."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml955(simpara)
msgid ""
"Once completed, commit your configuration changes by entering "
"<literal>commit</literal> from the <literal>crm configure</literal> menu. "
"Pacemaker will then start the RabbitMQ service, and its dependent resources,"
" on one of your nodes."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml961(title)
msgid "Configuring OpenStack services for highly available RabbitMQ"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml963(simpara)
msgid ""
"Your OpenStack services must now point their RabbitMQ configuration to the "
"highly available, virtual cluster IP address, rather than a RabbitMQ "
"server's physical IP address as you normally would."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml966(simpara)
msgid ""
"For OpenStack Image, for example, if your RabbitMQ service IP address is "
"192.168.42.100 as in the configuration explained here, you would use the "
"following line in your OpenStack Image API configuration file (<literal"
">glance-api.conf</literal>):"
msgstr ""
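# For example, in glance-api.conf:
#   rabbit_host = 192.168.42.100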
#: ./doc/high-availability-guide/bk-ha-guide.xml971(simpara)
msgid ""
"No other changes are necessary to your OpenStack configuration. If the node "
"currently hosting your RabbitMQ experiences a problem necessitating service "
"failover, your OpenStack services may experience a brief RabbitMQ "
"interruption, as they would in the event of a network hiccup, and then "
"continue to run normally."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml981(title)
msgid "API Node Cluster Stack"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml983(simpara)
msgid ""
"The API node exposes OpenStack API endpoints onto external network "
"(Internet). It needs to talk to the Cloud Controller on the management "
"network."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml987(title)
msgid "Configure the VIP"
msgstr "VIP 구성"
#: ./doc/high-availability-guide/bk-ha-guide.xml989(simpara)
msgid ""
"First of all, we need to select and assign a virtual IP address (VIP) that "
"can freely float between cluster nodes."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml990(simpara)
msgid ""
"This configuration creates <literal>p_ip_api</literal>, a virtual IP address"
" for use by the API node (192.168.42.103) :"
msgstr ""
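# The primitive itself is a non-translatable listing in the source document;
# a sketch consistent with the description above, with an illustrative
# netmask and monitor interval:
#   primitive p_ip_api ocf:heartbeat:IPaddr2 \
#     params ip="192.168.42.103" cidr_netmask="24" \
#     op monitor interval="30s"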
#: ./doc/high-availability-guide/bk-ha-guide.xml997(title)
msgid "Highly available OpenStack Identity"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml999(simpara)
msgid ""
"OpenStack Identity is the Identity Service in OpenStack and is used by many "
"services. Making the OpenStack Identity service highly available in active /"
" passive mode involves"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1003(simpara)
msgid "configuring OpenStack Identity to listen on the VIP address,"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1008(simpara)
msgid "managing OpenStack Identity daemon with the Pacemaker cluster manager,"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1013(simpara)
#: ./doc/high-availability-guide/bk-ha-guide.xml1102(simpara)
#: ./doc/high-availability-guide/bk-ha-guide.xml1194(simpara)
#: ./doc/high-availability-guide/bk-ha-guide.xml1280(simpara)
msgid "configuring OpenStack services to use this IP address."
msgstr "이 IP 주소를 사용하도록 OpenStack 서비스 구성"
#: ./doc/high-availability-guide/bk-ha-guide.xml1019(simpara)
msgid ""
"Here is the <link href=\"http://docs.openstack.org/trunk/install-"
"guide/install/apt/content/ch_installing-openstack-identity-"
"service.html\">documentation</link> for installing OpenStack Identity "
"service."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1023(title)
msgid "Adding OpenStack Identity resource to Pacemaker"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1025(simpara)
#: ./doc/high-availability-guide/bk-ha-guide.xml1114(simpara)
#: ./doc/high-availability-guide/bk-ha-guide.xml1206(simpara)
#: ./doc/high-availability-guide/bk-ha-guide.xml1292(simpara)
#: ./doc/high-availability-guide/bk-ha-guide.xml1368(simpara)
#: ./doc/high-availability-guide/bk-ha-guide.xml1441(simpara)
#: ./doc/high-availability-guide/bk-ha-guide.xml1485(simpara)
#: ./doc/high-availability-guide/bk-ha-guide.xml1526(simpara)
msgid "First of all, you need to download the resource agent to your system:"
msgstr ""
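# For example, for the OpenStack Identity agent; the URL below is the
# openstack-resource-agents location referenced by guide-era documentation
# and may since have moved:
#   mkdir -p /usr/lib/ocf/resource.d/openstack
#   cd /usr/lib/ocf/resource.d/openstack
#   wget https://raw.github.com/madkiss/openstack-resource-agents/master/ocf/keystone
#   chmod a+rx keystone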
#: ./doc/high-availability-guide/bk-ha-guide.xml1031(simpara)
msgid ""
"You may now proceed with adding the Pacemaker configuration for OpenStack "
"Identity resource. Connect to the Pacemaker cluster with <literal>crm "
"configure</literal>, and add the following cluster resources:"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1037(simpara)
msgid ""
"This configuration creates <literal>p_keystone</literal>, a resource for "
"managing the OpenStack Identity service."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1038(simpara)
msgid ""
"<literal>crm configure</literal> supports batch input, so you may copy and "
"paste the above into your live pacemaker configuration, and then make "
"changes as required. For example, you may enter <literal>edit "
"p_ip_keystone</literal> from the <literal>crm configure</literal> menu and "
"edit the resource to match your preferred virtual IP address."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1043(simpara)
msgid ""
"Once completed, commit your configuration changes by entering "
"<literal>commit</literal> from the <literal>crm configure</literal> menu. "
"Pacemaker will then start the OpenStack Identity service, and its dependent "
"resources, on one of your nodes."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1049(title)
msgid "Configuring OpenStack Identity service"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1051(simpara)
msgid ""
"You need to edit your OpenStack Identity configuration file "
"(<literal>keystone.conf</literal>) and change the bind parameters:"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1052(simpara)
msgid "On Havana:"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1054(simpara)
msgid ""
"On Icehouse, the <literal>admin_bind_host</literal> option lets you use a "
"private network for the admin access."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1057(simpara)
msgid ""
"To make sure all data is highly available, ensure that everything is stored "
"in the MySQL database (which is also highly available):"
msgstr ""
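# A keystone.conf sketch covering both points above; option names reflect the
# Havana and Icehouse releases discussed, and the database credentials are
# placeholders:
#   bind_host = 192.168.42.103            # Havana
#   public_bind_host = 192.168.42.103     # Icehouse
#   admin_bind_host = 192.168.42.103      # Icehouse
#
#   [sql]                                 # [database] on newer releases
#   connection = mysql://keystone:KEYSTONE_DBPASS@192.168.42.101/keystone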
#: ./doc/high-availability-guide/bk-ha-guide.xml1067(title)
msgid ""
"Configuring OpenStack Services to use the Highly Available OpenStack "
"Identity"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1069(simpara)
msgid ""
"Your OpenStack services must now point their OpenStack Identity "
"configuration to the highly available, virtual cluster IP address, rather "
"than an OpenStack Identity server's physical IP address as you normally "
"would."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1072(simpara)
msgid ""
"For example with OpenStack Compute, if your OpenStack Identity service IP "
"address is 192.168.42.103 as in the configuration explained here, you would "
"use the following line in your API configuration file (<literal>api-"
"paste.ini</literal>):"
msgstr ""
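# For example, in the authtoken filter section of api-paste.ini:
#   auth_host = 192.168.42.103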
#: ./doc/high-availability-guide/bk-ha-guide.xml1077(simpara)
msgid "You also need to create the OpenStack Identity Endpoint with this IP."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1078(simpara)
#: ./doc/high-availability-guide/bk-ha-guide.xml1342(simpara)
msgid ""
"NOTE: If you are using both private and public IP addresses, you should "
"create two Virtual IP addresses and define your endpoint like this:"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1080(simpara)
msgid ""
"If you are using the Horizon Dashboard, you should edit the "
"<literal>local_settings.py</literal> file:"
msgstr ""
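# For example; the Keystone URL pattern shown is the usual Horizon default:
#   OPENSTACK_HOST = "192.168.42.103"
#   OPENSTACK_KEYSTONE_URL = "http://%s:5000/v2.0" % OPENSTACK_HOST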
#: ./doc/high-availability-guide/bk-ha-guide.xml1086(title)
msgid "Highly available OpenStack Image API"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1088(simpara)
msgid ""
"OpenStack Image Service offers a service for discovering, registering, and "
"retrieving virtual machine images. Making the OpenStack Image API service "
"highly available in active / passive mode involves"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1092(simpara)
msgid "configuring OpenStack Image to listen on the VIP address,"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1097(simpara)
msgid ""
"managing OpenStack Image API daemon with the Pacemaker cluster manager,"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1108(simpara)
msgid ""
"Here is the <link href=\"http://docs.openstack.org/trunk/install-"
"guide/install/apt/content/ch_installing-openstack-"
"image.html\">documentation</link> for installing OpenStack Image API "
"service."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1112(title)
msgid "Adding OpenStack Image API resource to Pacemaker"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1118(simpara)
msgid ""
"You may now proceed with adding the Pacemaker configuration for OpenStack "
"Image API resource. Connect to the Pacemaker cluster with <literal>crm "
"configure</literal>, and add the following cluster resources:"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1127(simpara)
msgid ""
"<literal>p_glance-api</literal>, a resource for managing the OpenStack Image"
" API service"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1131(simpara)
msgid ""
"<literal>crm configure</literal> supports batch input, so you may copy and "
"paste the above into your live pacemaker configuration, and then make "
"changes as required. For example, you may enter <literal>edit p_ip_glance-"
"api</literal> from the <literal>crm configure</literal> menu and edit the "
"resource to match your preferred virtual IP address."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1136(simpara)
msgid ""
"Once completed, commit your configuration changes by entering "
"<literal>commit</literal> from the <literal>crm configure</literal> menu. "
"Pacemaker will then start the OpenStack Image API service, and its dependent"
" resources, on one of your nodes."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1142(title)
msgid "Configuring OpenStack Image API service"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1144(simpara)
msgid "Edit <literal>/etc/glance/glance-api.conf</literal>:"
msgstr ""
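# A sketch of the relevant glance-api.conf settings, assuming the
# 192.168.42.104 Image API VIP referenced below and the MySQL VIP from the
# earlier section; the database credentials are placeholders:
#   bind_host = 192.168.42.104
#   registry_host = 192.168.42.104
#   sql_connection = mysql://glance:GLANCE_DBPASS@192.168.42.101/glance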
#: ./doc/high-availability-guide/bk-ha-guide.xml1160(title)
msgid ""
"Configuring OpenStack Services to use the Highly Available OpenStack Image "
"API"
#: ./doc/high-availability-guide/bk-ha-guide.xml1162(simpara)
msgid ""
"Your OpenStack services must now point their OpenStack Image API "
"configuration to the highly available, virtual cluster IP address, rather "
"than an OpenStack Image API server's physical IP address as you normally "
"would."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1165(simpara)
msgid ""
"For OpenStack Compute, for example, if your OpenStack Image API service IP "
"address is 192.168.42.104 as in the configuration explained here, you would "
"use the following line in your <literal>nova.conf</literal> file:"
msgstr ""
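# For example, in nova.conf; 9292 is the standard Image API port:
#   glance_api_servers = 192.168.42.104:9292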
#: ./doc/high-availability-guide/bk-ha-guide.xml1169(simpara)
msgid "You also need to create the OpenStack Image API Endpoint with this IP."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1171(simpara)
msgid ""
"If you are using both private and public IP addresses, you should create two"
" Virtual IP addresses and define your endpoint like this:"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1178(title)
msgid "Highly available Cinder API"
msgstr "고 가용성 Cinder API"
#: ./doc/high-availability-guide/bk-ha-guide.xml1180(simpara)
msgid ""
"Cinder is the block storage service in OpenStack. Making the Cinder API "
"service highly available in active / passive mode involves"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1184(simpara)
msgid "configuring Cinder to listen on the VIP address,"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1189(simpara)
msgid "managing Cinder API daemon with the Pacemaker cluster manager,"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1200(simpara)
msgid ""
"Here is the <link href=\"http://docs.openstack.org/trunk/install-"
"guide/install/apt/content/cinder-install.html\">documentation</link> for "
"installing Cinder service."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1204(title)
msgid "Adding Cinder API resource to Pacemaker"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1210(simpara)
msgid ""
"You may now proceed with adding the Pacemaker configuration for Cinder API "
"resource. Connect to the Pacemaker cluster with <literal>crm "
"configure</literal>, and add the following cluster resources:"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1220(simpara)
msgid ""
"<literal>p_cinder-api</literal>, a resource for managing the Cinder API "
"service"
#: ./doc/high-availability-guide/bk-ha-guide.xml1224(simpara)
msgid ""
"<literal>crm configure</literal> supports batch input, so you may copy and "
"paste the above into your live pacemaker configuration, and then make "
"changes as required. For example, you may enter <literal>edit p_ip_cinder-"
"api</literal> from the <literal>crm configure</literal> menu and edit the "
"resource to match your preferred virtual IP address."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1229(simpara)
msgid ""
"Once completed, commit your configuration changes by entering "
"<literal>commit</literal> from the <literal>crm configure</literal> menu. "
"Pacemaker will then start the Cinder API service, and its dependent "
"resources, on one of your nodes."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1235(title)
msgid "Configuring Cinder API service"
msgstr "Cinder API 서비스 구성"
#: ./doc/high-availability-guide/bk-ha-guide.xml1237(simpara)
msgid "Edit <literal>/etc/cinder/cinder.conf</literal>:"
msgstr ""
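# A minimal sketch of the listener setting, assuming the API-node VIP
# (192.168.42.103) from the "Configure the VIP" section:
#   osapi_volume_listen = 192.168.42.103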
#: ./doc/high-availability-guide/bk-ha-guide.xml1250(title)
msgid "Configuring OpenStack Services to use the Highly Available Cinder API"
msgstr "고 가용성 Cinder API를 사용하여 OpenStack 서비스 구성"
#: ./doc/high-availability-guide/bk-ha-guide.xml1252(simpara)
msgid ""
"Your OpenStack services must now point their Cinder API configuration to the"
" highly available, virtual cluster IP address, rather than a Cinder API "
"server's physical IP address as you normally would."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1255(simpara)
msgid "You need to create the Cinder API Endpoint with this IP."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1257(simpara)
msgid ""
"If you are using both private and public IP, you should create two Virtual "
"IPs and define your endpoint like this:"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1264(title)
msgid "Highly available OpenStack Networking Server"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1266(simpara)
msgid ""
"OpenStack Networking is the network connectivity service in OpenStack. "
"Making the OpenStack Networking Server service highly available in active / "
"passive mode involves"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1270(simpara)
msgid "configuring OpenStack Networking to listen on the VIP address,"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1275(simpara)
msgid ""
"managing OpenStack Networking API Server daemon with the Pacemaker cluster "
"manager,"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1286(simpara)
msgid ""
"Here is the <link href=\"http://docs.openstack.org/trunk/install-"
"guide/install/apt/content/ch_installing-openstack-"
"networking.html\">documentation</link> for installing OpenStack Networking "
"service."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1290(title)
msgid "Adding OpenStack Networking Server resource to Pacemaker"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1296(simpara)
msgid ""
"You may now proceed with adding the Pacemaker configuration for OpenStack "
"Networking Server resource. Connect to the Pacemaker cluster with "
"<literal>crm configure</literal>, and add the following cluster resources:"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1303(simpara)
msgid ""
"This configuration creates <literal>p_neutron-server</literal>, a resource "
"for manage OpenStack Networking Server service"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1304(simpara)
msgid ""
"<literal>crm configure</literal> supports batch input, so you may copy and "
"paste the above into your live pacemaker configuration, and then make "
"changes as required. For example, you may enter <literal>edit p_neutron-"
"server</literal> from the <literal>crm configure</literal> menu and edit the"
" resource to match your preferred virtual IP address."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1309(simpara)
msgid ""
"Once completed, commit your configuration changes by entering "
"<literal>commit</literal> from the <literal>crm configure</literal> menu. "
"Pacemaker will then start the OpenStack Networking API service, and its "
"dependent resources, on one of your nodes."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1315(title)
msgid "Configuring OpenStack Networking Server"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1317(simpara)
msgid "Edit <literal>/etc/neutron/neutron.conf</literal> :"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1334(title)
msgid ""
"Configuring OpenStack Services to use Highly available OpenStack Networking "
"Server"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1336(simpara)
msgid ""
"Your OpenStack services must now point their OpenStack Networking Server "
"configuration to the highly available, virtual cluster IP addressrather "
"than an OpenStack Networking servers physical IP address as you normally "
"would."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1339(simpara)
msgid ""
"For example, you should configure OpenStack Compute for using Highly "
"Available OpenStack Networking Server in editing "
"<literal>nova.conf</literal> file:"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1341(simpara)
msgid ""
"You need to create the OpenStack Networking Server Endpoint with this IP."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1348(title)
msgid "Highly available Ceilometer Central Agent"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1350(simpara)
msgid ""
"Ceilometer is the metering service in OpenStack. Central Agent polls for "
"resource utilization statistics for resources not tied to instances or "
"compute nodes."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1354(simpara)
msgid ""
"Due to limitations of a polling model, a single instance of this agent can "
"be polling a given list of meters. In this setup, we install this service on"
" the API nodes also in the active / passive mode."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1358(simpara)
msgid ""
"Making the Ceilometer Central Agent service highly available in active / "
"passive mode involves managing its daemon with the Pacemaker cluster "
"manager."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1361(simpara)
msgid ""
"You will find at <link "
"href=\"http://docs.openstack.org/developer/ceilometer/install/manual.html"
"#installing-the-central-agent\">this page</link> the process to install the "
"Ceilometer Central Agent."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1366(title)
msgid "Adding the Ceilometer Central Agent resource to Pacemaker"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1372(simpara)
msgid ""
"You may then proceed with adding the Pacemaker configuration for the "
"Ceilometer Central Agent resource. Connect to the Pacemaker cluster with "
"<literal>crm configure</literal>, and add the following cluster resources:"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1381(simpara)
msgid ""
"<literal>p_ceilometer-agent-central</literal>, a resource for manage "
"Ceilometer Central Agent service"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1385(simpara)
#: ./doc/high-availability-guide/bk-ha-guide.xml1459(simpara)
#: ./doc/high-availability-guide/bk-ha-guide.xml1504(simpara)
#: ./doc/high-availability-guide/bk-ha-guide.xml1545(simpara)
msgid ""
"<literal>crm configure</literal> supports batch input, so you may copy and "
"paste the above into your live pacemaker configuration, and then make "
"changes as required."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1388(simpara)
msgid ""
"Once completed, commit your configuration changes by entering "
"<literal>commit</literal> from the <literal>crm configure</literal> menu. "
"Pacemaker will then start the Ceilometer Central Agent service, and its "
"dependent resources, on one of your nodes."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1394(title)
msgid "Configuring Ceilometer Central Agent service"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1396(simpara)
msgid "Edit <literal>/etc/ceilometer/ceilometer.conf</literal> :"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1411(title)
msgid "Configure Pacemaker Group"
msgstr "Pacemaker 그룹 구성"
#: ./doc/high-availability-guide/bk-ha-guide.xml1413(simpara)
msgid ""
"Finally, we need to create a service <literal>group</literal> to ensure that"
" virtual IP is linked to the API services resources :"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1420(title)
msgid "Network Controller Cluster Stack"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1422(simpara)
msgid ""
"The Network controller sits on the management and data network, and needs to"
" be connected to the Internet if a VM needs access to it."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1424(simpara)
msgid ""
"Both nodes should have the same hostname since the Neutron scheduler will be"
" aware of one node, for example a virtual router attached to a single L3 "
"node."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1429(title)
msgid "Highly available Neutron L3 Agent"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1431(simpara)
msgid ""
"The Neutron L3 agent provides L3/NAT forwarding to ensure external network "
"access for VMs on tenant networks. High Availability for the L3 agent is "
"achieved by adopting Pacemaker."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1435(simpara)
msgid ""
"Here is the <link href=\"http://docs.openstack.org/trunk/config-"
"reference/content/section_adv_cfg_l3_agent.html\">documentation</link> for "
"installing Neutron L3 Agent."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1439(title)
msgid "Adding Neutron L3 Agent resource to Pacemaker"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1445(simpara)
msgid ""
"You may now proceed with adding the Pacemaker configuration for Neutron L3 "
"Agent resource. Connect to the Pacemaker cluster with <literal>crm "
"configure</literal>, and add the following cluster resources:"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1455(simpara)
msgid ""
"<literal>p_neutron-l3-agent</literal>, a resource for manage Neutron L3 "
"Agent service"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1462(simpara)
msgid ""
"Once completed, commit your configuration changes by entering "
"<literal>commit</literal> from the <literal>crm configure</literal> menu. "
"Pacemaker will then start the Neutron L3 Agent service, and its dependent "
"resources, on one of your nodes."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1466(simpara)
msgid ""
"This method does not ensure a zero downtime since it has to recreate all the"
" namespaces and virtual routers on the node."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1473(title)
msgid "Highly available Neutron DHCP Agent"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1475(simpara)
msgid ""
"Neutron DHCP agent distributes IP addresses to the VMs with dnsmasq (by "
"default). High Availability for the DHCP agent is achieved by adopting "
"Pacemaker."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1479(simpara)
msgid ""
"Here is the <link href=\"http://docs.openstack.org/trunk/config-"
"reference/content/section_adv_cfg_dhcp_agent.html\">documentation</link> for"
" installing Neutron DHCP Agent."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1483(title)
msgid "Adding Neutron DHCP Agent resource to Pacemaker"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1489(simpara)
msgid ""
"You may now proceed with adding the Pacemaker configuration for Neutron DHCP"
" Agent resource. Connect to the Pacemaker cluster with <literal>crm "
"configure</literal>, and add the following cluster resources:"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1499(simpara)
msgid ""
"<literal>p_neutron-dhcp-agent</literal>, a resource for manage Neutron DHCP "
"Agent service"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1507(simpara)
msgid ""
"Once completed, commit your configuration changes by entering "
"<literal>commit</literal> from the <literal>crm configure</literal> menu. "
"Pacemaker will then start the Neutron DHCP Agent service, and its dependent "
"resources, on one of your nodes."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1514(title)
msgid "Highly available Neutron Metadata Agent"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1516(simpara)
msgid ""
"Neutron Metadata agent allows Nova API Metadata to be reachable by VMs on "
"tenant networks. High Availability for the Metadata agent is achieved by "
"adopting Pacemaker."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1520(simpara)
msgid ""
"Here is the <link href=\"http://docs.openstack.org/trunk/config-"
"reference/content/networking-options-metadata.html\">documentation</link> "
"for installing Neutron Metadata Agent."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1524(title)
msgid "Adding Neutron Metadata Agent resource to Pacemaker"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1530(simpara)
msgid ""
"You may now proceed with adding the Pacemaker configuration for Neutron "
"Metadata Agent resource. Connect to the Pacemaker cluster with <literal>crm "
"configure</literal>, and add the following cluster resources:"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1540(simpara)
msgid ""
"<literal>p_neutron-metadata-agent</literal>, a resource for manage Neutron "
"Metadata Agent service"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1548(simpara)
msgid ""
"Once completed, commit your configuration changes by entering "
"<literal>commit</literal> from the <literal>crm configure</literal> menu. "
"Pacemaker will then start the Neutron Metadata Agent service, and its "
"dependent resources, on one of your nodes."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1555(title)
msgid "Manage network resources"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1557(simpara)
msgid ""
"You may now proceed with adding the Pacemaker configuration for managing all"
" network resources together with a group. Connect to the Pacemaker cluster "
"with <literal>crm configure</literal>, and add the following cluster "
"resources:"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1568(title)
msgid "HA Using Active/Active"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1572(title)
msgid "Database"
msgstr "데이터베이스"
#: ./doc/high-availability-guide/bk-ha-guide.xml1574(simpara)
msgid ""
"The first step is installing the database that sits at the heart of the "
"cluster. When were talking about High Availability, however, were talking "
"about not just one database, but several (for redundancy) and a means to "
"keep them synchronized. In this case, were going to choose the MySQL "
"database, along with Galera for synchronous multi-master replication."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1579(simpara)
msgid ""
"The choice of database isnt a foregone conclusion; youre not required to "
"use MySQL. It is, however, a fairly common choice in OpenStack "
"installations, so well cover it here."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1584(title)
msgid "MySQL with Galera"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1586(simpara)
msgid ""
"Rather than starting with a vanilla version of MySQL and then adding Galera "
"to it, you will want to install a version of MySQL patched for wsrep (Write "
"Set REPlication) from <link href=\"https://launchpad.net/codership-"
"mysql/0.7\">https://launchpad.net/codership-mysql/0.7</link>. Note that the "
"installation requirements are a bit touchy; you will want to make sure to "
"read the README file so you dont miss any steps."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1591(simpara)
msgid ""
"Next, download Galera itself from <link "
"href=\"https://launchpad.net/galera/+download\">https://launchpad.net/galera/+download</link>."
" Go ahead and install the *.rpms or *.debs, taking care of any dependencies "
"that your system doesnt already have installed."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1594(simpara)
msgid ""
"Once youve completed the installation, youll need to make a few "
"configuration changes:"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1595(simpara)
msgid ""
"In the system-wide <literal>my.conf</literal> file, make sure mysqld isnt "
"bound to 127.0.0.1, and that <literal>/etc/mysql/conf.d/</literal> is "
"included. Typically you can find this file at "
"<literal>/etc/my.cnf</literal>:"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1602(simpara)
msgid ""
"When adding a new node, you must configure it with a MySQL account that can "
"access the other nodes so that it can request a state snapshot from one of "
"those existing nodes. First specify that account information in "
"<literal>/etc/mysql/conf.d/wsrep.cnf</literal>:"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1606(simpara)
msgid "Next connect as root and grant privileges to that user:"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1608(simpara)
msgid ""
"Youll also need to remove user accounts with empty usernames, as they cause"
" problems:"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1610(simpara)
msgid ""
"Youll also need to set certain mandatory configuration options within MySQL"
" itself. These include:"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1616(simpara)
msgid ""
"Finally, make sure that the nodes can access each other through the "
"firewall. This might mean adjusting iptables, as in:"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1620(simpara)
msgid ""
"It might also mean configuring any NAT firewall between nodes to allow "
"direct connections, or disabling SELinux or configuring it to allow mysqld "
"to listen to sockets at unprivileged ports."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1623(simpara)
msgid "Now youre ready to actually create the cluster."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1626(title)
msgid "Creating the cluster"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1628(simpara)
msgid ""
"In creating a cluster, you first start a single instance, which creates the "
"cluster. The rest of the MySQL instances then connect to that cluster. For "
"example, if you started on <literal>10.0.0.10</literal> by executing the "
"command:"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1630(simpara)
msgid ""
"you could then connect to that cluster on the rest of the nodes by "
"referencing the address of that node, as in:"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1632(simpara)
msgid ""
"You also have the option to set the <literal>wsrep_cluster_address</literal>"
" in the <literal>/etc/mysql/conf.d/wsrep.cnf</literal> file, or within the "
"client itself. (In fact, for some systems, such as MariaDB or Percona, this "
"may be your only option.) For example, to check the status of the cluster, "
"open the MySQL client and check the status of the various parameters:"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1639(simpara)
msgid "You should see a status that looks something like this:"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1688(title)
msgid "Galera Monitoring Scripts"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1690(simpara)
msgid "(Coming soon)"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1694(title)
msgid "Other ways to provide a Highly Available database"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1696(simpara)
msgid ""
"MySQL with Galera is by no means the only way to achieve database HA. "
"MariaDB (<link href=\"https://mariadb.org/\">https://mariadb.org/</link>) "
"and Percona (<link "
"href=\"http://www.percona.com/\">http://www.percona.com/</link>) also work "
"with Galera. You also have the option to use Postgres, which has its own "
"replication, or some other database HA option."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1704(title)
msgid "RabbitMQ"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1706(simpara)
msgid ""
"RabbitMQ is the default AMQP server used by many OpenStack services. Making "
"the RabbitMQ service highly available involves the following steps:"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1710(simpara)
#: ./doc/high-availability-guide/bk-ha-guide.xml1727(title)
msgid "Install RabbitMQ"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1715(simpara)
msgid "Configure RabbitMQ for HA queues"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1720(simpara)
msgid "Configure OpenStack services to use Rabbit HA queues"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1729(simpara)
msgid "This setup has been tested with RabbitMQ 2.7.1."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1732(title)
msgid "On Ubuntu / Debian"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1734(simpara)
#: ./doc/high-availability-guide/bk-ha-guide.xml1744(simpara)
msgid "RabbitMQ is packaged on both distros:"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1737(link)
msgid "Official manual for installing RabbitMQ on Ubuntu / Debian"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1742(title)
msgid "On Fedora / RHEL"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1747(link)
msgid "Official manual for installing RabbitMQ on Fedora / RHEL"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1753(title)
msgid "Configure RabbitMQ"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1755(simpara)
msgid ""
"Here we are building a cluster of RabbitMQ nodes to construct a RabbitMQ "
"broker. Mirrored queues in RabbitMQ improve the availability of service "
"since it will be resilient to failures. We have to consider that while "
"exchanges and bindings will survive the loss of individual nodes, queues and"
" their messages will not because a queue and its contents is located on one "
"node. If we lose this node, we also lose the queue."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1760(simpara)
msgid ""
"We consider that we run (at least) two RabbitMQ servers. To build a broker, "
"we need to ensure that all nodes have the same erlang cookie file. To do so,"
" stop RabbitMQ everywhere and copy the cookie from rabbit1 server to other "
"server(s):"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1765(simpara)
msgid ""
"Then, start RabbitMQ on nodes. If RabbitMQ fails to start, you cant "
"continue to the next step."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1767(simpara)
msgid "Now, we are building the HA cluster. From rabbit2, run these commands:"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1771(simpara)
msgid "To verify the cluster status :"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1776(simpara)
msgid ""
"If the cluster is working, you can now proceed to creating users and "
"passwords for queues."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1778(emphasis)
msgid "Note for RabbitMQ version 3"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1780(simpara)
msgid ""
"Queue mirroring is no longer controlled by the <emphasis>x-ha-"
"policy</emphasis> argument when declaring a queue. OpenStack can continue to"
" declare this argument, but it wont cause queues to be mirrored. We need to"
" make sure that all queues (except those with auto-generated names) are "
"mirrored across all running nodes:"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1785(link)
msgid "More information about High availability in RabbitMQ"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1790(title)
msgid "Configure OpenStack Services to use RabbitMQ"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1792(simpara)
msgid ""
"We have to configure the OpenStack components to use at least two RabbitMQ "
"nodes."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1793(simpara)
msgid "Do this configuration on all services using RabbitMQ:"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1794(simpara)
msgid "RabbitMQ HA cluster host:port pairs:"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1796(simpara)
msgid "How frequently to retry connecting with RabbitMQ:"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1798(simpara)
msgid "How long to back-off for between retries when connecting to RabbitMQ:"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1800(simpara)
msgid ""
"Maximum retries with trying to connect to RabbitMQ (infinite by default):"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1802(simpara)
msgid "Use durable queues in RabbitMQ:"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1804(simpara)
msgid "Use H/A queues in RabbitMQ (x-ha-policy: all):"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1806(simpara)
msgid ""
"If you change the configuration from an old setup which did not use HA "
"queues, you should interrupt the service:"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1810(simpara)
msgid ""
"Services currently working with HA queues: OpenStack Compute, OpenStack "
"Block Storage, OpenStack Networking, Telemetry."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1815(title)
msgid "HAproxy Nodes"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1817(simpara)
msgid ""
"HAProxy is a very fast and reliable solution offering high availability, "
"load balancing, and proxying for TCP and HTTP-based applications. It is "
"particularly suited for web sites crawling under very high loads while "
"needing persistence or Layer 7 processing. Supporting tens of thousands of "
"connections is clearly realistic with todays hardware."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1821(simpara)
msgid ""
"For installing HAproxy on your nodes, you should consider its <link "
"href=\"http://haproxy.1wt.eu/#docs\">official documentation</link>. Also, "
"you have to consider that this service should not be a single point of "
"failure, so you need at least two nodes running HAproxy."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1824(simpara)
msgid "Here is an example for HAproxy configuration file:"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1964(simpara)
msgid "After each change of this file, you should restart HAproxy."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1968(title)
msgid "OpenStack Controller Nodes"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1970(simpara)
msgid "OpenStack Controller Nodes contains:"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1973(simpara)
msgid "All OpenStack API services"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1978(simpara)
msgid "All OpenStack schedulers"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1983(simpara)
msgid "Memcached service"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1990(title)
msgid "Running OpenStack API &amp; schedulers"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1994(title)
msgid "API Services"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml1996(simpara)
msgid ""
"All OpenStack projects have an API service for controlling all the resources"
" in the Cloud. In Active / Active mode, the most common setup is to scale-"
"out these services on at least two nodes and use load-balancing and virtual "
"IP (with HAproxy &amp; Keepalived in this setup)."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml2000(emphasis)
msgid "Configuring API OpenStack services"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml2002(simpara)
msgid ""
"To configure our Cloud using Highly available and scalable API services, we "
"need to ensure that:"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml2005(simpara)
msgid "Using Virtual IP when configuring OpenStack Identity Endpoints."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml2010(simpara)
msgid "All OpenStack configuration files should refer to Virtual IP."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml2016(emphasis)
msgid "In case of failure"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml2018(simpara)
msgid ""
"The monitor check is quite simple since it just establishes a TCP connection"
" to the API port. Comparing to the Active / Passive mode using Corosync "
"&amp; Resources Agents, we dont check if the service is actually running). "
"Thats why all OpenStack API should be monitored by another tool (i.e. "
"Nagios) with the goal to detect failures in the Cloud Framework "
"infrastructure."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml2025(title)
msgid "Schedulers"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml2027(simpara)
msgid ""
"OpenStack schedulers are used to determine how to dispatch compute, network "
"and volume requests. The most common setup is to use RabbitMQ as messaging "
"system already documented in this guide. Those services are connected to the"
" messaging backend and can scale-out :"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml2032(simpara)
msgid "nova-scheduler"
msgstr "nova-scheduler"
#: ./doc/high-availability-guide/bk-ha-guide.xml2037(simpara)
msgid "nova-conductor"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml2042(simpara)
msgid "cinder-scheduler"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml2047(simpara)
msgid "neutron-server"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml2052(simpara)
msgid "ceilometer-collector"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml2057(simpara)
msgid "heat-engine"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml2062(simpara)
msgid ""
"Please refer to the RabbitMQ section for configure these services with "
"multiple messaging servers."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml2067(title)
msgid "Memcached"
msgstr "memcached"
#: ./doc/high-availability-guide/bk-ha-guide.xml2069(simpara)
msgid ""
"Most of OpenStack services use an application to offer persistence and store"
" ephemeral datas (like tokens). Memcached is one of them and can scale-out "
"easily without specific trick."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml2071(simpara)
msgid ""
"To install and configure it, you can read the <link "
"href=\"http://code.google.com/p/memcached/wiki/NewStart\">official "
"documentation</link>."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml2072(simpara)
msgid ""
"Memory caching is managed by Oslo-incubator for so the way to use multiple "
"memcached servers is the same for all projects."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml2073(simpara)
msgid "Example with two hosts:"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml2075(simpara)
msgid ""
"By default, controller1 will handle the caching service but if the host goes"
" down, controller2 will do the job. More informations about memcached "
"installation are in the OpenStack Compute Manual."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml2081(title)
msgid "OpenStack Network Nodes"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml2083(simpara)
msgid "OpenStack Network Nodes contains:"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml2086(simpara)
msgid "Neutron DHCP Agent"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml2091(simpara)
msgid "Neutron L2 Agent"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml2096(simpara)
msgid "Neutron L3 Agent"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml2101(simpara)
msgid "Neutron Metadata Agent"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml2106(simpara)
msgid "Neutron LBaaS Agent"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml2112(simpara)
msgid ""
"The Neutron L2 Agent does not need to be highly available. It has to be "
"installed on each Data Forwarding Node and controls the virtual networking "
"drivers as Open-vSwitch or Linux Bridge. One L2 agent runs per node and "
"controls its virtual interfaces. Thats why it cannot be distributed and "
"highly available."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml2120(title)
msgid "Running Neutron DHCP Agent"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml2122(simpara)
msgid ""
"OpenStack Networking service has a scheduler that lets you run multiple "
"agents across nodes. Also, the DHCP agent can be natively highly available. "
"For details, see <link href=\"http://docs.openstack.org/trunk/config-"
"reference/content/app_demo_multi_dhcp_agents.html\">OpenStack Configuration "
"Reference</link>."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml2128(title)
msgid "Running Neutron L3 Agent"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml2130(simpara)
msgid ""
"The Neutron L3 Agent is scalable thanks to the scheduler that allows "
"distribution of virtual routers across multiple nodes. But there is no "
"native feature to make these routers highly available. At this time, the "
"Active / Passive solution exists to run the Neutron L3 agent in failover "
"mode with Pacemaker. See the Active / Passive section of this guide."
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml2139(title)
msgid "Running Neutron Metadata Agent"
msgstr ""
#: ./doc/high-availability-guide/bk-ha-guide.xml2141(simpara)
msgid ""
"There is no native feature to make this service highly available. At this "
"time, the Active / Passive solution exists to run the Neutron Metadata agent"
" in failover mode with Pacemaker. See the Active / Passive section of this "
"guide."
msgstr ""
#. Put one translator per line, in the form of NAME <EMAIL>, YEAR1, YEAR2
#: ./doc/high-availability-guide/bk-ha-guide.xml0(None)
msgid "translator-credits"
msgstr "Sungjin Gang <ujuc@ujuc.kr>, 2012-2013.\nYeonki Choi < >, 2013.\nJay Lee <hyangii@gmail.com>, 2013"