diff --git a/doc/high-availability-guide/locale/high-availability-guide.pot b/doc/high-availability-guide/locale/high-availability-guide.pot index 8ad0dbb827..2cfa68b119 100644 --- a/doc/high-availability-guide/locale/high-availability-guide.pot +++ b/doc/high-availability-guide/locale/high-availability-guide.pot @@ -1,7 +1,7 @@ msgid "" msgstr "" "Project-Id-Version: PACKAGE VERSION\n" -"POT-Creation-Date: 2014-07-08 06:08+0000\n" +"POT-Creation-Date: 2014-07-09 06:06+0000\n" "PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n" "Last-Translator: FULL NAME \n" "Language-Team: LANGUAGE \n" @@ -13,7 +13,7 @@ msgstr "" msgid "Cloud controller cluster stack" msgstr "" -#: ./doc/high-availability-guide/ch_controller.xml:9(simpara) +#: ./doc/high-availability-guide/ch_controller.xml:9(para) msgid "The cloud controller runs on the management network and must talk to all other services." msgstr "" @@ -21,31 +21,31 @@ msgstr "" msgid "OpenStack network nodes" msgstr "" -#: ./doc/high-availability-guide/ch_ha_aa_network.xml:9(simpara) +#: ./doc/high-availability-guide/ch_ha_aa_network.xml:9(para) msgid "OpenStack network nodes contain:" msgstr "" -#: ./doc/high-availability-guide/ch_ha_aa_network.xml:12(simpara) +#: ./doc/high-availability-guide/ch_ha_aa_network.xml:12(para) msgid "neutron DHCP agent" msgstr "" -#: ./doc/high-availability-guide/ch_ha_aa_network.xml:17(simpara) +#: ./doc/high-availability-guide/ch_ha_aa_network.xml:17(para) msgid "neutron L2 agent" msgstr "" -#: ./doc/high-availability-guide/ch_ha_aa_network.xml:22(simpara) +#: ./doc/high-availability-guide/ch_ha_aa_network.xml:22(para) msgid "Neutron L3 agent" msgstr "" -#: ./doc/high-availability-guide/ch_ha_aa_network.xml:27(simpara) +#: ./doc/high-availability-guide/ch_ha_aa_network.xml:27(para) msgid "neutron metadata agent" msgstr "" -#: ./doc/high-availability-guide/ch_ha_aa_network.xml:32(simpara) +#: ./doc/high-availability-guide/ch_ha_aa_network.xml:32(para) msgid "neutron lbaas agent" msgstr "" -#: 
./doc/high-availability-guide/ch_ha_aa_network.xml:38(simpara) +#: ./doc/high-availability-guide/ch_ha_aa_network.xml:38(para) msgid "The neutron L2 agent does not need to be highly available. It has to be installed on each Data Forwarding Node and controls the virtual networking drivers such as Open vSwitch or Linux Bridge. One L2 agent runs per node and controls its virtual interfaces. That’s why it cannot be distributed and made highly available." msgstr "" @@ -53,15 +53,15 @@ msgstr "" msgid "The Pacemaker cluster stack" msgstr "" -#: ./doc/high-availability-guide/ch_pacemaker.xml:9(simpara) +#: ./doc/high-availability-guide/ch_pacemaker.xml:9(para) msgid "OpenStack infrastructure high availability relies on the Pacemaker cluster stack, the state-of-the-art high availability and load balancing stack for the Linux platform. Pacemaker is storage and application-agnostic, and is in no way specific to OpenStack." msgstr "" -#: ./doc/high-availability-guide/ch_pacemaker.xml:14(simpara) +#: ./doc/high-availability-guide/ch_pacemaker.xml:14(para) msgid "Pacemaker relies on the Corosync messaging layer for reliable cluster communications. Corosync implements the Totem single-ring ordering and membership protocol. It also provides UDP and InfiniBand based messaging, quorum, and cluster membership to Pacemaker." msgstr "" -#: ./doc/high-availability-guide/ch_pacemaker.xml:19(simpara) +#: ./doc/high-availability-guide/ch_pacemaker.xml:19(para) msgid "Pacemaker interacts with applications through resource agents (RAs), of which it supports over 70 natively. Pacemaker can also easily use third-party RAs. An OpenStack high-availability configuration uses existing native Pacemaker RAs (such as those managing MySQL databases or virtual IP addresses), existing third-party RAs (such as for RabbitMQ), and native OpenStack RAs (such as those managing the OpenStack Identity and Image Services)."
msgstr "" @@ -69,19 +69,19 @@ msgstr "" msgid "RabbitMQ" msgstr "" -#: ./doc/high-availability-guide/ch_ha_aa_rabbitmq.xml:9(simpara) +#: ./doc/high-availability-guide/ch_ha_aa_rabbitmq.xml:9(para) msgid "RabbitMQ is the default AMQP server used by many OpenStack services. Making the RabbitMQ service highly available involves the following steps:" msgstr "" -#: ./doc/high-availability-guide/ch_ha_aa_rabbitmq.xml:13(simpara) ./doc/high-availability-guide/ha_aa_rabbitmq/section_install_rabbitmq.xml:6(title) +#: ./doc/high-availability-guide/ch_ha_aa_rabbitmq.xml:13(para) ./doc/high-availability-guide/ha_aa_rabbitmq/section_install_rabbitmq.xml:6(title) msgid "Install RabbitMQ" msgstr "" -#: ./doc/high-availability-guide/ch_ha_aa_rabbitmq.xml:18(simpara) +#: ./doc/high-availability-guide/ch_ha_aa_rabbitmq.xml:18(para) msgid "Configure RabbitMQ for HA queues" msgstr "" -#: ./doc/high-availability-guide/ch_ha_aa_rabbitmq.xml:23(simpara) +#: ./doc/high-availability-guide/ch_ha_aa_rabbitmq.xml:23(para) msgid "Configure OpenStack services to use Rabbit HA queues" msgstr "" @@ -89,55 +89,55 @@ msgstr "" msgid "Introduction to OpenStack High Availability" msgstr "" -#: ./doc/high-availability-guide/ch_intro.xml:9(simpara) +#: ./doc/high-availability-guide/ch_intro.xml:9(para) msgid "High Availability systems seek to minimize two things:" msgstr "" -#: ./doc/high-availability-guide/ch_intro.xml:12(simpara) +#: ./doc/high-availability-guide/ch_intro.xml:12(para) msgid "System downtime — occurs when a user-facing service is unavailable beyond a specified maximum amount of time, and" msgstr "" -#: ./doc/high-availability-guide/ch_intro.xml:16(simpara) +#: ./doc/high-availability-guide/ch_intro.xml:16(para) msgid "Data loss — accidental deletion or destruction of data." 
msgstr "" -#: ./doc/high-availability-guide/ch_intro.xml:20(simpara) +#: ./doc/high-availability-guide/ch_intro.xml:20(para) msgid "Most high availability systems guarantee protection against system downtime and data loss only in the event of a single failure. However, they are also expected to protect against cascading failures, where a single failure deteriorates into a series of consequential failures." msgstr "" -#: ./doc/high-availability-guide/ch_intro.xml:21(simpara) +#: ./doc/high-availability-guide/ch_intro.xml:21(para) msgid "A crucial aspect of high availability is the elimination of single points of failure (SPOFs). A SPOF is an individual piece of equipment or software which will cause system downtime or data loss if it fails. In order to eliminate SPOFs, check that mechanisms exist for redundancy of:" msgstr "" -#: ./doc/high-availability-guide/ch_intro.xml:24(simpara) +#: ./doc/high-availability-guide/ch_intro.xml:24(para) msgid "Network components, such as switches and routers" msgstr "" -#: ./doc/high-availability-guide/ch_intro.xml:29(simpara) +#: ./doc/high-availability-guide/ch_intro.xml:29(para) msgid "Applications and automatic service migration" msgstr "" -#: ./doc/high-availability-guide/ch_intro.xml:34(simpara) +#: ./doc/high-availability-guide/ch_intro.xml:34(para) msgid "Storage components" msgstr "" -#: ./doc/high-availability-guide/ch_intro.xml:39(simpara) +#: ./doc/high-availability-guide/ch_intro.xml:39(para) msgid "Facility services such as power, air conditioning, and fire protection" msgstr "" -#: ./doc/high-availability-guide/ch_intro.xml:44(simpara) +#: ./doc/high-availability-guide/ch_intro.xml:44(para) msgid "Most high availability systems will fail in the event of multiple independent (non-consequential) failures. In this case, most systems will favor protecting data over maintaining availability."
msgstr "" -#: ./doc/high-availability-guide/ch_intro.xml:45(simpara) +#: ./doc/high-availability-guide/ch_intro.xml:45(para) msgid "High-availability systems typically achieve uptime of 99.99% or more, which roughly equates to less than an hour of cumulative downtime per year. In order to achieve this, high availability systems should keep recovery times after a failure to about one to two minutes, sometimes significantly less." msgstr "" -#: ./doc/high-availability-guide/ch_intro.xml:46(simpara) +#: ./doc/high-availability-guide/ch_intro.xml:46(para) msgid "OpenStack currently meets such availability requirements for its own infrastructure services, meaning that an uptime of 99.99% is feasible for the OpenStack infrastructure proper. However, OpenStack does not guarantee 99.99% availability for individual guest instances." msgstr "" -#: ./doc/high-availability-guide/ch_intro.xml:47(simpara) +#: ./doc/high-availability-guide/ch_intro.xml:47(para) msgid "Preventing single points of failure can depend on whether or not a service is stateless." msgstr "" @@ -145,15 +145,15 @@ msgstr "" msgid "Stateless vs. Stateful services" msgstr "" -#: ./doc/high-availability-guide/ch_intro.xml:52(simpara) +#: ./doc/high-availability-guide/ch_intro.xml:52(para) msgid "A stateless service is one that provides a response after your request, and then requires no further attention. To make a stateless service highly available, you need to provide redundant instances and load balance them. OpenStack services that are stateless include nova-api, nova-conductor, glance-api, keystone-api, neutron-api and nova-scheduler." msgstr "" -#: ./doc/high-availability-guide/ch_intro.xml:53(simpara) +#: ./doc/high-availability-guide/ch_intro.xml:53(para) msgid "A stateful service is one where subsequent requests to the service depend on the results of the first request.
Stateful services are more difficult to manage because a single action typically involves more than one request, so simply providing additional instances and load balancing will not solve the problem. For example, if the Horizon user interface reset itself every time you went to a new page, it wouldn’t be very useful. OpenStack services that are stateful include the OpenStack database and message queue." msgstr "" -#: ./doc/high-availability-guide/ch_intro.xml:54(simpara) +#: ./doc/high-availability-guide/ch_intro.xml:54(para) msgid "Making stateful services highly available can depend on whether you choose an active/passive or active/active configuration." msgstr "" @@ -161,15 +161,15 @@ msgstr "" msgid "Active/Passive" msgstr "" -#: ./doc/high-availability-guide/ch_intro.xml:60(simpara) +#: ./doc/high-availability-guide/ch_intro.xml:60(para) msgid "In an active/passive configuration, systems are set up to bring additional resources online to replace those that have failed. For example, OpenStack would write to the main database while maintaining a disaster recovery database that can be brought online in the event that the main database fails." msgstr "" -#: ./doc/high-availability-guide/ch_intro.xml:61(simpara) +#: ./doc/high-availability-guide/ch_intro.xml:61(para) msgid "Typically, an active/passive installation for a stateless service would maintain a redundant instance that can be brought online when required. Requests are load balanced using a virtual IP address and a load balancer such as HAProxy." msgstr "" -#: ./doc/high-availability-guide/ch_intro.xml:62(simpara) +#: ./doc/high-availability-guide/ch_intro.xml:62(para) msgid "A typical active/passive installation for a stateful service maintains a replacement resource that can be brought online when required. A separate application (such as Pacemaker or Corosync) monitors these services, bringing the backup online as necessary." 
msgstr "" @@ -177,19 +177,19 @@ msgstr "" msgid "Active/Active" msgstr "" -#: ./doc/high-availability-guide/ch_intro.xml:68(simpara) +#: ./doc/high-availability-guide/ch_intro.xml:68(para) msgid "In an active/active configuration, systems also use a backup but will manage both the main and redundant systems concurrently. This way, if there is a failure the user is unlikely to notice. The backup system is already online, and takes on increased load while the main system is fixed and brought back online." msgstr "" -#: ./doc/high-availability-guide/ch_intro.xml:69(simpara) +#: ./doc/high-availability-guide/ch_intro.xml:69(para) msgid "Typically, an active/active installation for a stateless service would maintain a redundant instance, and requests are load balanced using a virtual IP address and a load balancer such as HAProxy." msgstr "" -#: ./doc/high-availability-guide/ch_intro.xml:70(simpara) +#: ./doc/high-availability-guide/ch_intro.xml:70(para) msgid "A typical active/active installation for a stateful service would include redundant services with all instances having an identical state. For example, updates to one instance of a database would also update all other instances. This way a request to one instance is the same as a request to any other. A load balancer manages the traffic to these systems, ensuring that operational systems always handle the request." msgstr "" -#: ./doc/high-availability-guide/ch_intro.xml:71(simpara) +#: ./doc/high-availability-guide/ch_intro.xml:71(para) msgid "These are some of the more common ways to implement these high availability architectures, but they are by no means the only ways to do it. The important thing is to make sure that your services are redundant, and available; how you achieve that is up to you. This document will cover some of the more common options for highly available systems." 
msgstr "" @@ -197,83 +197,83 @@ msgstr "" msgid "OpenStack High Availability Guide" msgstr "" -#: ./doc/high-availability-guide/bk-ha-guide.xml:11(firstname) +#: ./doc/high-availability-guide/bk-ha-guide.xml:12(firstname) msgid "Florian" msgstr "" -#: ./doc/high-availability-guide/bk-ha-guide.xml:12(surname) +#: ./doc/high-availability-guide/bk-ha-guide.xml:13(surname) msgid "Haas" msgstr "" -#: ./doc/high-availability-guide/bk-ha-guide.xml:14(email) +#: ./doc/high-availability-guide/bk-ha-guide.xml:15(email) msgid "florian@hastexo.com" msgstr "" -#: ./doc/high-availability-guide/bk-ha-guide.xml:16(orgname) +#: ./doc/high-availability-guide/bk-ha-guide.xml:17(orgname) msgid "hastexo" msgstr "" -#: ./doc/high-availability-guide/bk-ha-guide.xml:20(year) +#: ./doc/high-availability-guide/bk-ha-guide.xml:21(year) msgid "2012" msgstr "" -#: ./doc/high-availability-guide/bk-ha-guide.xml:21(year) +#: ./doc/high-availability-guide/bk-ha-guide.xml:22(year) msgid "2013" msgstr "" -#: ./doc/high-availability-guide/bk-ha-guide.xml:22(year) +#: ./doc/high-availability-guide/bk-ha-guide.xml:23(year) msgid "2014" msgstr "" -#: ./doc/high-availability-guide/bk-ha-guide.xml:23(holder) +#: ./doc/high-availability-guide/bk-ha-guide.xml:24(holder) msgid "OpenStack Contributors" msgstr "" -#: ./doc/high-availability-guide/bk-ha-guide.xml:25(releaseinfo) +#: ./doc/high-availability-guide/bk-ha-guide.xml:26(releaseinfo) msgid "current" msgstr "" -#: ./doc/high-availability-guide/bk-ha-guide.xml:26(productname) +#: ./doc/high-availability-guide/bk-ha-guide.xml:27(productname) msgid "OpenStack" msgstr "" -#: ./doc/high-availability-guide/bk-ha-guide.xml:30(remark) +#: ./doc/high-availability-guide/bk-ha-guide.xml:31(remark) msgid "Copyright details are filled in by the template." 
msgstr "" -#: ./doc/high-availability-guide/bk-ha-guide.xml:34(para) +#: ./doc/high-availability-guide/bk-ha-guide.xml:35(para) msgid "This guide describes how to install, configure, and manage OpenStack for high availability." msgstr "" -#: ./doc/high-availability-guide/bk-ha-guide.xml:39(date) +#: ./doc/high-availability-guide/bk-ha-guide.xml:40(date) msgid "2014-05-16" msgstr "" -#: ./doc/high-availability-guide/bk-ha-guide.xml:43(para) +#: ./doc/high-availability-guide/bk-ha-guide.xml:44(para) msgid "Conversion to Docbook." msgstr "" -#: ./doc/high-availability-guide/bk-ha-guide.xml:49(date) +#: ./doc/high-availability-guide/bk-ha-guide.xml:50(date) msgid "2014-04-17" msgstr "" -#: ./doc/high-availability-guide/bk-ha-guide.xml:53(para) +#: ./doc/high-availability-guide/bk-ha-guide.xml:54(para) msgid "Minor cleanup of typos, otherwise no major revisions for Icehouse release." msgstr "" -#: ./doc/high-availability-guide/bk-ha-guide.xml:60(date) +#: ./doc/high-availability-guide/bk-ha-guide.xml:61(date) msgid "2012-01-16" msgstr "" -#: ./doc/high-availability-guide/bk-ha-guide.xml:64(para) +#: ./doc/high-availability-guide/bk-ha-guide.xml:65(para) msgid "Organizes guide based on cloud controller and compute nodes." msgstr "" -#: ./doc/high-availability-guide/bk-ha-guide.xml:70(date) +#: ./doc/high-availability-guide/bk-ha-guide.xml:71(date) msgid "2012-05-24" msgstr "" -#: ./doc/high-availability-guide/bk-ha-guide.xml:74(para) +#: ./doc/high-availability-guide/bk-ha-guide.xml:75(para) msgid "Begin trunk designation." msgstr "" @@ -281,11 +281,11 @@ msgstr "" msgid "Network controller cluster stack" msgstr "" -#: ./doc/high-availability-guide/ch_network.xml:9(simpara) +#: ./doc/high-availability-guide/ch_network.xml:9(para) msgid "The network controller sits on the management and data network, and needs to be connected to the Internet if a VM needs access to it." 
msgstr "" -#: ./doc/high-availability-guide/ch_network.xml:11(simpara) +#: ./doc/high-availability-guide/ch_network.xml:11(para) msgid "Both nodes should have the same hostname since the Networking scheduler will be aware of one node, for example a virtual router attached to a single L3 node." msgstr "" @@ -293,19 +293,19 @@ msgstr "" msgid "HAProxy nodes" msgstr "" -#: ./doc/high-availability-guide/ch_ha_aa_haproxy.xml:9(simpara) +#: ./doc/high-availability-guide/ch_ha_aa_haproxy.xml:9(para) msgid "HAProxy is a very fast and reliable solution offering high availability, load balancing, and proxying for TCP and HTTP-based applications. It is particularly suited for web sites crawling under very high loads while needing persistence or Layer 7 processing. Supporting tens of thousands of connections is clearly realistic with today’s hardware." msgstr "" -#: ./doc/high-availability-guide/ch_ha_aa_haproxy.xml:13(simpara) +#: ./doc/high-availability-guide/ch_ha_aa_haproxy.xml:13(para) msgid "To install HAProxy on your nodes, refer to its official documentation. Also, keep in mind that this service must not itself be a single point of failure, so you need at least two nodes running HAProxy." msgstr "" -#: ./doc/high-availability-guide/ch_ha_aa_haproxy.xml:16(simpara) +#: ./doc/high-availability-guide/ch_ha_aa_haproxy.xml:16(para) msgid "Here is an example HAProxy configuration file:" msgstr "" -#: ./doc/high-availability-guide/ch_ha_aa_haproxy.xml:156(simpara) +#: ./doc/high-availability-guide/ch_ha_aa_haproxy.xml:156(para) msgid "After each change to this file, you should restart HAProxy."
msgstr "" @@ -313,19 +313,19 @@ msgstr "" msgid "OpenStack controller nodes" msgstr "" -#: ./doc/high-availability-guide/ch_ha_aa_controllers.xml:9(simpara) +#: ./doc/high-availability-guide/ch_ha_aa_controllers.xml:9(para) msgid "OpenStack controller nodes contain:" msgstr "" -#: ./doc/high-availability-guide/ch_ha_aa_controllers.xml:12(simpara) +#: ./doc/high-availability-guide/ch_ha_aa_controllers.xml:12(para) msgid "All OpenStack API services" msgstr "" -#: ./doc/high-availability-guide/ch_ha_aa_controllers.xml:17(simpara) +#: ./doc/high-availability-guide/ch_ha_aa_controllers.xml:17(para) msgid "All OpenStack schedulers" msgstr "" -#: ./doc/high-availability-guide/ch_ha_aa_controllers.xml:22(simpara) +#: ./doc/high-availability-guide/ch_ha_aa_controllers.xml:22(para) msgid "Memcached service" msgstr "" @@ -337,7 +337,7 @@ msgstr "" msgid "API node cluster stack" msgstr "" -#: ./doc/high-availability-guide/ch_api.xml:9(simpara) +#: ./doc/high-availability-guide/ch_api.xml:9(para) msgid "The API node exposes OpenStack API endpoints onto external network (Internet). It must talk to the cloud controller on the management network." msgstr "" @@ -345,11 +345,11 @@ msgstr "" msgid "Database" msgstr "" -#: ./doc/high-availability-guide/ch_ha_aa_db.xml:9(simpara) +#: ./doc/high-availability-guide/ch_ha_aa_db.xml:9(para) msgid "The first step is installing the database that sits at the heart of the cluster. When we talk about High Availability, we talk about several databases (for redundancy) and a means to keep them synchronized. In this case, we must choose the MySQL database, along with Galera for synchronous multi-master replication." msgstr "" -#: ./doc/high-availability-guide/ch_ha_aa_db.xml:13(simpara) +#: ./doc/high-availability-guide/ch_ha_aa_db.xml:13(para) msgid "The choice of database isn’t a foregone conclusion; you’re not required to use MySQL. It is, however, a fairly common choice in OpenStack installations, so we’ll cover it here." 
msgstr "" @@ -361,7 +361,7 @@ msgstr "" msgid "Galera monitoring scripts" msgstr "" -#: ./doc/high-availability-guide/ha_aa_db/section_ha_aa_db_galera_monitoring.xml:8(simpara) +#: ./doc/high-availability-guide/ha_aa_db/section_ha_aa_db_galera_monitoring.xml:8(para) msgid "(Coming soon)" msgstr "" @@ -369,11 +369,11 @@ msgstr "" msgid "MySQL with Galera" msgstr "" -#: ./doc/high-availability-guide/ha_aa_db/section_ha_aa_db_mysql_galera.xml:8(simpara) +#: ./doc/high-availability-guide/ha_aa_db/section_ha_aa_db_mysql_galera.xml:8(para) msgid "Rather than starting with a vanilla version of MySQL, and then adding Galera, you will want to install a version of MySQL patched for wsrep (Write Set REPlication) from https://launchpad.net/codership-mysql/0.7. The wsrep API is suitable for configuring MySQL High Availability in OpenStack because it supports synchronous replication." msgstr "" -#: ./doc/high-availability-guide/ha_aa_db/section_ha_aa_db_mysql_galera.xml:13(simpara) +#: ./doc/high-availability-guide/ha_aa_db/section_ha_aa_db_mysql_galera.xml:13(para) msgid "Note that the installation requirements call for careful attention. Read the guide https://launchpadlibrarian.net/66669857/README-wsrep to ensure you follow all the required steps." msgstr "" @@ -381,83 +381,83 @@ msgstr "" msgid "Installing Galera through a MySQL version patched for wsrep:" msgstr "" -#: ./doc/high-availability-guide/ha_aa_db/section_ha_aa_db_mysql_galera.xml:21(simpara) +#: ./doc/high-availability-guide/ha_aa_db/section_ha_aa_db_mysql_galera.xml:21(para) msgid "Download Galera from https://launchpad.net/galera/+download, and install the *.rpms or *.debs, which takes care of any dependencies that your system doesn’t already have installed." 
msgstr "" -#: ./doc/high-availability-guide/ha_aa_db/section_ha_aa_db_mysql_galera.xml:28(simpara) +#: ./doc/high-availability-guide/ha_aa_db/section_ha_aa_db_mysql_galera.xml:28(para) msgid "Adjust the configuration:" msgstr "" -#: ./doc/high-availability-guide/ha_aa_db/section_ha_aa_db_mysql_galera.xml:31(simpara) -msgid "In the system-wide my.conf file, make sure mysqld isn’t bound to 127.0.0.1, and that /etc/mysql/conf.d/ is included. Typically you can find this file at /etc/my.cnf:" +#: ./doc/high-availability-guide/ha_aa_db/section_ha_aa_db_mysql_galera.xml:31(para) +msgid "In the system-wide my.cnf file, make sure mysqld isn’t bound to 127.0.0.1, and that /etc/mysql/conf.d/ is included. Typically you can find this file at /etc/my.cnf:" msgstr "" -#: ./doc/high-availability-guide/ha_aa_db/section_ha_aa_db_mysql_galera.xml:39(simpara) +#: ./doc/high-availability-guide/ha_aa_db/section_ha_aa_db_mysql_galera.xml:39(para) msgid "When adding a new node, you must configure it with a MySQL account that can access the other nodes.
The new node must be able to request a state snapshot from one of the existing nodes:" msgstr "" -#: ./doc/high-availability-guide/ha_aa_db/section_ha_aa_db_mysql_galera.xml:44(simpara) -msgid "Specify your MySQL account information in /etc/mysql/conf.d/wsrep.cnf:" +#: ./doc/high-availability-guide/ha_aa_db/section_ha_aa_db_mysql_galera.xml:44(para) +msgid "Specify your MySQL account information in /etc/mysql/conf.d/wsrep.cnf:" msgstr "" -#: ./doc/high-availability-guide/ha_aa_db/section_ha_aa_db_mysql_galera.xml:50(simpara) +#: ./doc/high-availability-guide/ha_aa_db/section_ha_aa_db_mysql_galera.xml:50(para) msgid "Connect as root and grant privileges to that user:" msgstr "" -#: ./doc/high-availability-guide/ha_aa_db/section_ha_aa_db_mysql_galera.xml:56(simpara) +#: ./doc/high-availability-guide/ha_aa_db/section_ha_aa_db_mysql_galera.xml:56(para) msgid "Remove user accounts with empty usernames because they cause problems:" msgstr "" -#: ./doc/high-availability-guide/ha_aa_db/section_ha_aa_db_mysql_galera.xml:62(simpara) +#: ./doc/high-availability-guide/ha_aa_db/section_ha_aa_db_mysql_galera.xml:62(para) msgid "Set up certain mandatory configuration options within MySQL itself. These include:" msgstr "" -#: ./doc/high-availability-guide/ha_aa_db/section_ha_aa_db_mysql_galera.xml:73(simpara) +#: ./doc/high-availability-guide/ha_aa_db/section_ha_aa_db_mysql_galera.xml:73(para) msgid "Check that the nodes can access each other through the firewall. Depending on your environment, this might mean adjusting iptables, as in:" msgstr "" -#: ./doc/high-availability-guide/ha_aa_db/section_ha_aa_db_mysql_galera.xml:79(simpara) +#: ./doc/high-availability-guide/ha_aa_db/section_ha_aa_db_mysql_galera.xml:83(para) msgid "This might also mean configuring any NAT firewall between nodes to allow direct connections. You might need to disable SELinux, or configure it to allow mysqld to listen to sockets at unprivileged ports." 
msgstr "" -#: ./doc/high-availability-guide/ha_aa_db/section_ha_aa_db_mysql_galera.xml:84(simpara) +#: ./doc/high-availability-guide/ha_aa_db/section_ha_aa_db_mysql_galera.xml:88(para) msgid "Now you’re ready to create the cluster." msgstr "" -#: ./doc/high-availability-guide/ha_aa_db/section_ha_aa_db_mysql_galera.xml:87(title) +#: ./doc/high-availability-guide/ha_aa_db/section_ha_aa_db_mysql_galera.xml:91(title) msgid "Create the cluster" msgstr "" -#: ./doc/high-availability-guide/ha_aa_db/section_ha_aa_db_mysql_galera.xml:89(simpara) +#: ./doc/high-availability-guide/ha_aa_db/section_ha_aa_db_mysql_galera.xml:93(para) msgid "In creating a cluster, you first start a single instance, which creates the cluster. The rest of the MySQL instances then connect to that cluster:" msgstr "" -#: ./doc/high-availability-guide/ha_aa_db/section_ha_aa_db_mysql_galera.xml:94(title) +#: ./doc/high-availability-guide/ha_aa_db/section_ha_aa_db_mysql_galera.xml:98(title) msgid "An example of creating the cluster:" msgstr "" -#: ./doc/high-availability-guide/ha_aa_db/section_ha_aa_db_mysql_galera.xml:97(simpara) +#: ./doc/high-availability-guide/ha_aa_db/section_ha_aa_db_mysql_galera.xml:101(para) msgid "Start on 10.0.0.10 by executing the command:" msgstr "" -#: ./doc/high-availability-guide/ha_aa_db/section_ha_aa_db_mysql_galera.xml:105(simpara) +#: ./doc/high-availability-guide/ha_aa_db/section_ha_aa_db_mysql_galera.xml:109(para) msgid "Connect to that cluster on the rest of the nodes by referencing the address of that node, as in:" msgstr "" -#: ./doc/high-availability-guide/ha_aa_db/section_ha_aa_db_mysql_galera.xml:112(simpara) -msgid "You also have the option to set the wsrep_cluster_address in the /etc/mysql/conf.d/wsrep.cnf file, or within the client itself. 
(In fact, for some systems, such as MariaDB or Percona, this may be your only option.)" +#: ./doc/high-availability-guide/ha_aa_db/section_ha_aa_db_mysql_galera.xml:116(para) +msgid "You also have the option to set the wsrep_cluster_address in the /etc/mysql/conf.d/wsrep.cnf file, or within the client itself. (In fact, for some systems, such as MariaDB or Percona, this may be your only option.)" msgstr "" -#: ./doc/high-availability-guide/ha_aa_db/section_ha_aa_db_mysql_galera.xml:118(title) +#: ./doc/high-availability-guide/ha_aa_db/section_ha_aa_db_mysql_galera.xml:122(title) msgid "An example of checking the status of the cluster." msgstr "" -#: ./doc/high-availability-guide/ha_aa_db/section_ha_aa_db_mysql_galera.xml:121(simpara) +#: ./doc/high-availability-guide/ha_aa_db/section_ha_aa_db_mysql_galera.xml:125(para) msgid "Open the MySQL client and check the status of the various parameters:" msgstr "" -#: ./doc/high-availability-guide/ha_aa_db/section_ha_aa_db_mysql_galera.xml:128(simpara) +#: ./doc/high-availability-guide/ha_aa_db/section_ha_aa_db_mysql_galera.xml:132(para) msgid "You should see a status that looks something like this:" msgstr "" @@ -465,7 +465,7 @@ msgstr "" msgid "Other ways to provide a highly available database" msgstr "" -#: ./doc/high-availability-guide/ha_aa_db/section_other_ways_to_provide_a_highly_available_database.xml:8(simpara) +#: ./doc/high-availability-guide/ha_aa_db/section_other_ways_to_provide_a_highly_available_database.xml:8(para) msgid "MySQL with Galera is by no means the only way to achieve database HA. MariaDB (https://mariadb.org/) and Percona (http://www.percona.com/) also work with Galera. You also have the option to use Postgres, which has its own replication, or another database HA option." 
msgstr "" @@ -473,7 +473,7 @@ msgstr "" msgid "Run neutron L3 agent" msgstr "" -#: ./doc/high-availability-guide/ha_aa_network/section_run_neutron_l3_agent.xml:8(simpara) +#: ./doc/high-availability-guide/ha_aa_network/section_run_neutron_l3_agent.xml:8(para) msgid "The neutron L3 agent is scalable thanks to the scheduler that allows distribution of virtual routers across multiple nodes. But there is no native feature to make these routers highly available. At this time, the Active / Passive solution exists to run the Neutron L3 agent in failover mode with Pacemaker. See the Active / Passive section of this guide." msgstr "" @@ -481,7 +481,7 @@ msgstr "" msgid "Run neutron metadata agent" msgstr "" -#: ./doc/high-availability-guide/ha_aa_network/section_run_neutron_metadata_agent.xml:8(simpara) +#: ./doc/high-availability-guide/ha_aa_network/section_run_neutron_metadata_agent.xml:8(para) msgid "There is no native feature to make this service highly available. At this time, the Active / Passive solution exists to run the neutron metadata agent in failover mode with Pacemaker. See the Active / Passive section of this guide." msgstr "" @@ -489,7 +489,7 @@ msgstr "" msgid "Run neutron DHCP agent" msgstr "" -#: ./doc/high-availability-guide/ha_aa_network/section_run_neutron_dhcp_agent.xml:8(simpara) +#: ./doc/high-availability-guide/ha_aa_network/section_run_neutron_dhcp_agent.xml:8(para) msgid "OpenStack Networking service has a scheduler that lets you run multiple agents across nodes. Also, the DHCP agent can be natively highly available. For details, see OpenStack Configuration Reference." 
msgstr "" @@ -497,23 +497,23 @@ msgstr "" msgid "Highly available Block Storage API" msgstr "" -#: ./doc/high-availability-guide/api/section_cinder_api.xml:8(simpara) +#: ./doc/high-availability-guide/api/section_cinder_api.xml:8(para) msgid "Making the Block Storage (cinder) API service highly available in active / passive mode involves" msgstr "" -#: ./doc/high-availability-guide/api/section_cinder_api.xml:11(simpara) +#: ./doc/high-availability-guide/api/section_cinder_api.xml:11(para) msgid "Configure Block Storage to listen on the VIP address," msgstr "" -#: ./doc/high-availability-guide/api/section_cinder_api.xml:16(simpara) +#: ./doc/high-availability-guide/api/section_cinder_api.xml:16(para) msgid "managing Block Storage API daemon with the Pacemaker cluster manager," msgstr "" -#: ./doc/high-availability-guide/api/section_cinder_api.xml:21(simpara) ./doc/high-availability-guide/api/section_glance_api.xml:22(simpara) ./doc/high-availability-guide/api/section_neutron_server.xml:22(simpara) ./doc/high-availability-guide/api/section_keystone.xml:22(simpara) +#: ./doc/high-availability-guide/api/section_cinder_api.xml:21(para) ./doc/high-availability-guide/api/section_glance_api.xml:22(para) ./doc/high-availability-guide/api/section_neutron_server.xml:22(para) ./doc/high-availability-guide/api/section_keystone.xml:22(para) msgid "Configure OpenStack services to use this IP address." msgstr "" -#: ./doc/high-availability-guide/api/section_cinder_api.xml:27(simpara) +#: ./doc/high-availability-guide/api/section_cinder_api.xml:27(para) msgid "Here is the documentation for installing Block Storage service." 
msgstr "" @@ -521,27 +521,27 @@ msgstr "" msgid "Add Block Storage API resource to Pacemaker" msgstr "" -#: ./doc/high-availability-guide/api/section_cinder_api.xml:35(simpara) ./doc/high-availability-guide/api/section_glance_api.xml:34(simpara) ./doc/high-availability-guide/api/section_neutron_server.xml:34(simpara) ./doc/high-availability-guide/api/section_ceilometer_agent_central.xml:26(simpara) ./doc/high-availability-guide/api/section_keystone.xml:34(simpara) ./doc/high-availability-guide/network/section_highly_available_neutron_metadata_agent.xml:18(simpara) ./doc/high-availability-guide/network/section_highly_available_neutron_l3_agent.xml:18(simpara) ./doc/high-availability-guide/network/section_highly_available_neutron_dhcp_agent.xml:18(simpara) +#: ./doc/high-availability-guide/api/section_cinder_api.xml:35(para) ./doc/high-availability-guide/api/section_glance_api.xml:34(para) ./doc/high-availability-guide/api/section_neutron_server.xml:34(para) ./doc/high-availability-guide/api/section_ceilometer_agent_central.xml:26(para) ./doc/high-availability-guide/api/section_keystone.xml:34(para) ./doc/high-availability-guide/network/section_highly_available_neutron_metadata_agent.xml:14(para) ./doc/high-availability-guide/network/section_highly_available_neutron_l3_agent.xml:18(para) ./doc/high-availability-guide/network/section_highly_available_neutron_dhcp_agent.xml:18(para) msgid "First of all, you need to download the resource agent to your system:" msgstr "" -#: ./doc/high-availability-guide/api/section_cinder_api.xml:39(simpara) +#: ./doc/high-availability-guide/api/section_cinder_api.xml:39(para) msgid "You can now add the Pacemaker configuration for Block Storage API resource. 
Connect to the Pacemaker cluster with crm configure, and add the following cluster resources:" msgstr "" -#: ./doc/high-availability-guide/api/section_cinder_api.xml:46(simpara) ./doc/high-availability-guide/api/section_glance_api.xml:44(simpara) ./doc/high-availability-guide/api/section_ceilometer_agent_central.xml:36(simpara) ./doc/high-availability-guide/controller/section_rabbitmq.xml:190(simpara) ./doc/high-availability-guide/controller/section_mysql.xml:202(simpara) ./doc/high-availability-guide/network/section_highly_available_neutron_metadata_agent.xml:29(simpara) ./doc/high-availability-guide/network/section_highly_available_neutron_l3_agent.xml:29(simpara) ./doc/high-availability-guide/network/section_highly_available_neutron_dhcp_agent.xml:29(simpara) +#: ./doc/high-availability-guide/api/section_cinder_api.xml:46(para) ./doc/high-availability-guide/api/section_glance_api.xml:44(para) ./doc/high-availability-guide/api/section_ceilometer_agent_central.xml:36(para) ./doc/high-availability-guide/controller/section_rabbitmq.xml:190(para) ./doc/high-availability-guide/controller/section_mysql.xml:202(para) ./doc/high-availability-guide/network/section_highly_available_neutron_metadata_agent.xml:25(para) ./doc/high-availability-guide/network/section_highly_available_neutron_l3_agent.xml:29(para) ./doc/high-availability-guide/network/section_highly_available_neutron_dhcp_agent.xml:29(para) msgid "This configuration creates" msgstr "" -#: ./doc/high-availability-guide/api/section_cinder_api.xml:49(simpara) +#: ./doc/high-availability-guide/api/section_cinder_api.xml:49(para) msgid "p_cinder-api, a resource for managing the Block Storage API service" msgstr "" -#: ./doc/high-availability-guide/api/section_cinder_api.xml:53(simpara) +#: ./doc/high-availability-guide/api/section_cinder_api.xml:53(para) msgid "crm configure supports batch input, so you may copy and paste the above into your live pacemaker configuration, and then make changes as required.
For example, you may enter edit p_ip_cinder-api from the crm configure menu and edit the resource to match your preferred virtual IP address." msgstr "" -#: ./doc/high-availability-guide/api/section_cinder_api.xml:58(simpara) +#: ./doc/high-availability-guide/api/section_cinder_api.xml:58(para) msgid "Once completed, commit your configuration changes by entering commit from the crm configure menu. Pacemaker will then start the Block Storage API service, and its dependent resources, on one of your nodes." msgstr "" @@ -549,23 +549,23 @@ msgstr "" msgid "Configure Block Storage API service" msgstr "" -#: ./doc/high-availability-guide/api/section_cinder_api.xml:66(simpara) -msgid "Edit /etc/cinder/cinder.conf:" +#: ./doc/high-availability-guide/api/section_cinder_api.xml:66(para) +msgid "Edit /etc/cinder/cinder.conf:" msgstr "" #: ./doc/high-availability-guide/api/section_cinder_api.xml:79(title) msgid "Configure OpenStack services to use highly available Block Storage API" msgstr "" -#: ./doc/high-availability-guide/api/section_cinder_api.xml:81(simpara) +#: ./doc/high-availability-guide/api/section_cinder_api.xml:81(para) msgid "Your OpenStack services must now point their Block Storage API configuration to the highly available, virtual cluster IP address — rather than a Block Storage API server’s physical IP address as you normally would." msgstr "" -#: ./doc/high-availability-guide/api/section_cinder_api.xml:84(simpara) +#: ./doc/high-availability-guide/api/section_cinder_api.xml:84(para) msgid "You must create the Block Storage API endpoint with this IP." 
msgstr "" -#: ./doc/high-availability-guide/api/section_cinder_api.xml:86(simpara) +#: ./doc/high-availability-guide/api/section_cinder_api.xml:86(para) msgid "If you are using both private and public IP addresses, you should create two Virtual IP addresses and define your endpoint like this:" msgstr "" @@ -573,7 +573,7 @@ msgstr "" msgid "Configure Pacemaker group" msgstr "" -#: ./doc/high-availability-guide/api/section_api_pacemaker.xml:8(simpara) +#: ./doc/high-availability-guide/api/section_api_pacemaker.xml:8(para) msgid "Finally, we need to create a service group to ensure that the virtual IP is linked to the API services resources:" msgstr "" @@ -581,19 +581,19 @@ msgstr "" msgid "Highly available OpenStack Image API" msgstr "" -#: ./doc/high-availability-guide/api/section_glance_api.xml:8(simpara) +#: ./doc/high-availability-guide/api/section_glance_api.xml:8(para) msgid "OpenStack Image Service offers a service for discovering, registering, and retrieving virtual machine images. To make the OpenStack Image API service highly available in active / passive mode, you must:" msgstr "" -#: ./doc/high-availability-guide/api/section_glance_api.xml:12(simpara) +#: ./doc/high-availability-guide/api/section_glance_api.xml:12(para) msgid "Configure OpenStack Image to listen on the VIP address." msgstr "" -#: ./doc/high-availability-guide/api/section_glance_api.xml:17(simpara) +#: ./doc/high-availability-guide/api/section_glance_api.xml:17(para) msgid "Manage the OpenStack Image API daemon with the Pacemaker cluster manager." msgstr "" -#: ./doc/high-availability-guide/api/section_glance_api.xml:28(simpara) +#: ./doc/high-availability-guide/api/section_glance_api.xml:28(para) msgid "Here is the documentation for installing the OpenStack Image API service."
msgstr "" @@ -601,19 +601,19 @@ msgstr "" msgid "Add OpenStack Image API resource to Pacemaker" msgstr "" -#: ./doc/high-availability-guide/api/section_glance_api.xml:38(simpara) +#: ./doc/high-availability-guide/api/section_glance_api.xml:38(para) msgid "You can now add the Pacemaker configuration for OpenStack Image API resource. Connect to the Pacemaker cluster with crm configure, and add the following cluster resources:" msgstr "" -#: ./doc/high-availability-guide/api/section_glance_api.xml:47(simpara) +#: ./doc/high-availability-guide/api/section_glance_api.xml:47(para) msgid "p_glance-api, a resource for managing the OpenStack Image API service" msgstr "" -#: ./doc/high-availability-guide/api/section_glance_api.xml:51(simpara) +#: ./doc/high-availability-guide/api/section_glance_api.xml:51(para) msgid "crm configure supports batch input, so you may copy and paste the above into your live pacemaker configuration, and then make changes as required. For example, you may enter edit p_ip_glance-api from the crm configure menu and edit the resource to match your preferred virtual IP address." msgstr "" -#: ./doc/high-availability-guide/api/section_glance_api.xml:56(simpara) +#: ./doc/high-availability-guide/api/section_glance_api.xml:56(para) msgid "Once completed, commit your configuration changes by entering commit from the crm configure menu. Pacemaker will then start the OpenStack Image API service, and its dependent resources, on one of your nodes."
msgstr "" @@ -621,27 +621,27 @@ msgstr "" msgid "Configure OpenStack Image Service API" msgstr "" -#: ./doc/high-availability-guide/api/section_glance_api.xml:64(simpara) -msgid "Edit /etc/glance/glance-api.conf:" +#: ./doc/high-availability-guide/api/section_glance_api.xml:64(para) +msgid "Edit /etc/glance/glance-api.conf:" msgstr "" #: ./doc/high-availability-guide/api/section_glance_api.xml:80(title) msgid "Configure OpenStack services to use highly available OpenStack Image API" msgstr "" -#: ./doc/high-availability-guide/api/section_glance_api.xml:82(simpara) +#: ./doc/high-availability-guide/api/section_glance_api.xml:82(para) msgid "Your OpenStack services must now point their OpenStack Image API configuration to the highly available, virtual cluster IP address — rather than an OpenStack Image API server’s physical IP address as you normally would." msgstr "" -#: ./doc/high-availability-guide/api/section_glance_api.xml:85(simpara) -msgid "For OpenStack Compute, for example, if your OpenStack Image API service IP address is 192.168.42.104 as in the configuration explained here, you would use the following line in your nova.conf file:" +#: ./doc/high-availability-guide/api/section_glance_api.xml:85(para) +msgid "For OpenStack Compute, for example, if your OpenStack Image API service IP address is 192.168.42.104 as in the configuration explained here, you would use the following line in your nova.conf file:" msgstr "" -#: ./doc/high-availability-guide/api/section_glance_api.xml:89(simpara) +#: ./doc/high-availability-guide/api/section_glance_api.xml:89(para) msgid "You must also create the OpenStack Image API endpoint with this IP."
msgstr "" -#: ./doc/high-availability-guide/api/section_glance_api.xml:91(simpara) ./doc/high-availability-guide/api/section_neutron_server.xml:85(simpara) +#: ./doc/high-availability-guide/api/section_glance_api.xml:91(para) ./doc/high-availability-guide/api/section_neutron_server.xml:85(para) msgid "If you are using both private and public IP addresses, you should create two Virtual IP addresses and define your endpoint like this:" msgstr "" @@ -649,19 +649,19 @@ msgstr "" msgid "Highly available OpenStack Networking server" msgstr "" -#: ./doc/high-availability-guide/api/section_neutron_server.xml:8(simpara) +#: ./doc/high-availability-guide/api/section_neutron_server.xml:8(para) msgid "OpenStack Networking is the network connectivity service in OpenStack. Making the OpenStack Networking Server service highly available in active / passive mode involves" msgstr "" -#: ./doc/high-availability-guide/api/section_neutron_server.xml:12(simpara) +#: ./doc/high-availability-guide/api/section_neutron_server.xml:12(para) msgid "configuring OpenStack Networking to listen on the VIP address," msgstr "" -#: ./doc/high-availability-guide/api/section_neutron_server.xml:17(simpara) +#: ./doc/high-availability-guide/api/section_neutron_server.xml:17(para) msgid "managing the OpenStack Networking API Server daemon with the Pacemaker cluster manager," msgstr "" -#: ./doc/high-availability-guide/api/section_neutron_server.xml:28(simpara) +#: ./doc/high-availability-guide/api/section_neutron_server.xml:28(para) msgid "Here is the documentation for installing the OpenStack Networking service." msgstr "" @@ -669,19 +669,19 @@ msgstr "" msgid "Add OpenStack Networking Server resource to Pacemaker" msgstr "" -#: ./doc/high-availability-guide/api/section_neutron_server.xml:38(simpara) +#: ./doc/high-availability-guide/api/section_neutron_server.xml:38(para) msgid "You can now add the Pacemaker configuration for OpenStack Networking Server resource.
Connect to the Pacemaker cluster with crm configure, and add the following cluster resources:" msgstr "" -#: ./doc/high-availability-guide/api/section_neutron_server.xml:45(simpara) +#: ./doc/high-availability-guide/api/section_neutron_server.xml:45(para) msgid "This configuration creates p_neutron-server, a resource for managing the OpenStack Networking Server service" msgstr "" -#: ./doc/high-availability-guide/api/section_neutron_server.xml:46(simpara) +#: ./doc/high-availability-guide/api/section_neutron_server.xml:46(para) msgid "crm configure supports batch input, so you may copy and paste the above into your live pacemaker configuration, and then make changes as required. For example, you may enter edit p_neutron-server from the crm configure menu and edit the resource to match your preferred configuration." msgstr "" -#: ./doc/high-availability-guide/api/section_neutron_server.xml:51(simpara) +#: ./doc/high-availability-guide/api/section_neutron_server.xml:51(para) msgid "Once completed, commit your configuration changes by entering commit from the crm configure menu. Pacemaker will then start the OpenStack Networking API service, and its dependent resources, on one of your nodes."
msgstr "" @@ -689,23 +689,23 @@ msgstr "" msgid "Configure OpenStack Networking server" msgstr "" -#: ./doc/high-availability-guide/api/section_neutron_server.xml:59(simpara) -msgid "Edit /etc/neutron/neutron.conf :" +#: ./doc/high-availability-guide/api/section_neutron_server.xml:59(para) +msgid "Edit /etc/neutron/neutron.conf:" msgstr "" #: ./doc/high-availability-guide/api/section_neutron_server.xml:76(title) msgid "Configure OpenStack services to use highly available OpenStack Networking server" msgstr "" -#: ./doc/high-availability-guide/api/section_neutron_server.xml:78(simpara) +#: ./doc/high-availability-guide/api/section_neutron_server.xml:78(para) msgid "Your OpenStack services must now point their OpenStack Networking Server configuration to the highly available, virtual cluster IP address — rather than an OpenStack Networking server’s physical IP address as you normally would." msgstr "" -#: ./doc/high-availability-guide/api/section_neutron_server.xml:81(simpara) +#: ./doc/high-availability-guide/api/section_neutron_server.xml:81(para) msgid "For example, to configure OpenStack Compute to use the highly available OpenStack Networking server, edit the nova.conf file:" msgstr "" -#: ./doc/high-availability-guide/api/section_neutron_server.xml:83(simpara) +#: ./doc/high-availability-guide/api/section_neutron_server.xml:83(para) msgid "You need to create the OpenStack Networking server endpoint with this IP." msgstr "" @@ -713,19 +713,19 @@ msgstr "" msgid "Highly available Telemetry central agent" msgstr "" -#: ./doc/high-availability-guide/api/section_ceilometer_agent_central.xml:8(simpara) +#: ./doc/high-availability-guide/api/section_ceilometer_agent_central.xml:8(para) msgid "Telemetry (ceilometer) is the metering and monitoring service in OpenStack. The Central agent polls for resource utilization statistics for resources not tied to instances or compute nodes."
msgstr "" -#: ./doc/high-availability-guide/api/section_ceilometer_agent_central.xml:12(simpara) +#: ./doc/high-availability-guide/api/section_ceilometer_agent_central.xml:12(para) msgid "Due to limitations of the polling model, only a single instance of this agent can poll a given list of meters. In this setup, we also install this service on the API nodes, in active / passive mode." msgstr "" -#: ./doc/high-availability-guide/api/section_ceilometer_agent_central.xml:16(simpara) +#: ./doc/high-availability-guide/api/section_ceilometer_agent_central.xml:16(para) msgid "Making the Telemetry central agent service highly available in active / passive mode involves managing its daemon with the Pacemaker cluster manager." msgstr "" -#: ./doc/high-availability-guide/api/section_ceilometer_agent_central.xml:19(simpara) +#: ./doc/high-availability-guide/api/section_ceilometer_agent_central.xml:19(para) msgid "You will find the process to install the Telemetry central agent at this page." msgstr "" @@ -733,19 +733,19 @@ msgstr "" msgid "Add the Telemetry central agent resource to Pacemaker" msgstr "" -#: ./doc/high-availability-guide/api/section_ceilometer_agent_central.xml:30(simpara) +#: ./doc/high-availability-guide/api/section_ceilometer_agent_central.xml:30(para) msgid "You may then proceed with adding the Pacemaker configuration for the Telemetry central agent resource.
Connect to the Pacemaker cluster with crm configure, and add the following cluster resources:" msgstr "" -#: ./doc/high-availability-guide/api/section_ceilometer_agent_central.xml:39(simpara) +#: ./doc/high-availability-guide/api/section_ceilometer_agent_central.xml:39(para) msgid "p_ceilometer-agent-central, a resource for managing the Ceilometer Central Agent service" msgstr "" -#: ./doc/high-availability-guide/api/section_ceilometer_agent_central.xml:43(simpara) ./doc/high-availability-guide/network/section_highly_available_neutron_metadata_agent.xml:37(simpara) ./doc/high-availability-guide/network/section_highly_available_neutron_l3_agent.xml:36(simpara) ./doc/high-availability-guide/network/section_highly_available_neutron_dhcp_agent.xml:37(simpara) +#: ./doc/high-availability-guide/api/section_ceilometer_agent_central.xml:43(para) ./doc/high-availability-guide/network/section_highly_available_neutron_metadata_agent.xml:33(para) ./doc/high-availability-guide/network/section_highly_available_neutron_l3_agent.xml:36(para) ./doc/high-availability-guide/network/section_highly_available_neutron_dhcp_agent.xml:37(para) msgid "crm configure supports batch input, so you may copy and paste the above into your live pacemaker configuration, and then make changes as required." msgstr "" -#: ./doc/high-availability-guide/api/section_ceilometer_agent_central.xml:46(simpara) +#: ./doc/high-availability-guide/api/section_ceilometer_agent_central.xml:46(para) msgid "Once completed, commit your configuration changes by entering commit from the crm configure menu. Pacemaker will then start the Ceilometer Central Agent service, and its dependent resources, on one of your nodes."
msgstr "" @@ -753,27 +753,27 @@ msgstr "" msgid "Configure Telemetry central agent service" msgstr "" -#: ./doc/high-availability-guide/api/section_ceilometer_agent_central.xml:54(simpara) -msgid "Edit /etc/ceilometer/ceilometer.conf :" +#: ./doc/high-availability-guide/api/section_ceilometer_agent_central.xml:54(para) +msgid "Edit /etc/ceilometer/ceilometer.conf:" msgstr "" #: ./doc/high-availability-guide/api/section_keystone.xml:6(title) msgid "Highly available OpenStack Identity" msgstr "" -#: ./doc/high-availability-guide/api/section_keystone.xml:8(simpara) +#: ./doc/high-availability-guide/api/section_keystone.xml:8(para) msgid "OpenStack Identity is the Identity Service in OpenStack and is used by many services. Making the OpenStack Identity service highly available in active / passive mode involves" msgstr "" -#: ./doc/high-availability-guide/api/section_keystone.xml:12(simpara) +#: ./doc/high-availability-guide/api/section_keystone.xml:12(para) msgid "configuring OpenStack Identity to listen on the VIP address," msgstr "" -#: ./doc/high-availability-guide/api/section_keystone.xml:17(simpara) +#: ./doc/high-availability-guide/api/section_keystone.xml:17(para) msgid "managing the OpenStack Identity daemon with the Pacemaker cluster manager," msgstr "" -#: ./doc/high-availability-guide/api/section_keystone.xml:28(simpara) +#: ./doc/high-availability-guide/api/section_keystone.xml:28(para) msgid "Here is the documentation for installing the OpenStack Identity service." msgstr "" @@ -781,19 +781,19 @@ msgstr "" msgid "Add OpenStack Identity resource to Pacemaker" msgstr "" -#: ./doc/high-availability-guide/api/section_keystone.xml:40(simpara) +#: ./doc/high-availability-guide/api/section_keystone.xml:40(para) msgid "You can now add the Pacemaker configuration for OpenStack Identity resource.
Connect to the Pacemaker cluster with crm configure, and add the following cluster resources:" msgstr "" -#: ./doc/high-availability-guide/api/section_keystone.xml:46(simpara) +#: ./doc/high-availability-guide/api/section_keystone.xml:46(para) msgid "This configuration creates p_keystone, a resource for managing the OpenStack Identity service." msgstr "" -#: ./doc/high-availability-guide/api/section_keystone.xml:47(simpara) +#: ./doc/high-availability-guide/api/section_keystone.xml:47(para) msgid "crm configure supports batch input, so you may copy and paste the above into your live pacemaker configuration, and then make changes as required. For example, you may enter edit p_ip_keystone from the crm configure menu and edit the resource to match your preferred virtual IP address." msgstr "" -#: ./doc/high-availability-guide/api/section_keystone.xml:52(simpara) +#: ./doc/high-availability-guide/api/section_keystone.xml:52(para) msgid "Once completed, commit your configuration changes by entering commit from the crm configure menu. Pacemaker will then start the OpenStack Identity service, and its dependent resources, on one of your nodes." 
msgstr "" @@ -801,19 +801,19 @@ msgstr "" msgid "Configure OpenStack Identity service" msgstr "" -#: ./doc/high-availability-guide/api/section_keystone.xml:60(simpara) -msgid "You need to edit your OpenStack Identity configuration file (keystone.conf) and change the bind parameters:" +#: ./doc/high-availability-guide/api/section_keystone.xml:60(para) +msgid "You need to edit your OpenStack Identity configuration file (keystone.conf) and change the bind parameters:" msgstr "" -#: ./doc/high-availability-guide/api/section_keystone.xml:61(simpara) +#: ./doc/high-availability-guide/api/section_keystone.xml:61(para) msgid "On Havana:" msgstr "" -#: ./doc/high-availability-guide/api/section_keystone.xml:63(simpara) +#: ./doc/high-availability-guide/api/section_keystone.xml:63(para) msgid "On Icehouse, the admin_bind_host option lets you use a private network for the admin access." msgstr "" -#: ./doc/high-availability-guide/api/section_keystone.xml:66(simpara) +#: ./doc/high-availability-guide/api/section_keystone.xml:66(para) msgid "To ensure that all data is highly available, make sure that everything is stored in the MySQL database (which is also highly available):" msgstr "" @@ -821,23 +821,23 @@ msgstr "" msgid "Configure OpenStack services to use the highly available OpenStack Identity" msgstr "" -#: ./doc/high-availability-guide/api/section_keystone.xml:78(simpara) +#: ./doc/high-availability-guide/api/section_keystone.xml:78(para) msgid "Your OpenStack services must now point their OpenStack Identity configuration to the highly available, virtual cluster IP address — rather than an OpenStack Identity server’s physical IP address as you normally would."
msgstr "" -#: ./doc/high-availability-guide/api/section_keystone.xml:81(simpara) +#: ./doc/high-availability-guide/api/section_keystone.xml:81(para) msgid "For example with OpenStack Compute, if your OpenStack Identity service IP address is 192.168.42.103 as in the configuration explained here, you would use the following line in your API configuration file (api-paste.ini):" msgstr "" -#: ./doc/high-availability-guide/api/section_keystone.xml:86(simpara) +#: ./doc/high-availability-guide/api/section_keystone.xml:86(para) msgid "You also need to create the OpenStack Identity Endpoint with this IP." msgstr "" -#: ./doc/high-availability-guide/api/section_keystone.xml:87(simpara) -msgid "NOTE : If you are using both private and public IP addresses, you should create two Virtual IP addresses and define your endpoint like this:" +#: ./doc/high-availability-guide/api/section_keystone.xml:87(para) +msgid "NOTE: If you are using both private and public IP addresses, you should create two Virtual IP addresses and define your endpoint like this:" msgstr "" -#: ./doc/high-availability-guide/api/section_keystone.xml:89(simpara) +#: ./doc/high-availability-guide/api/section_keystone.xml:92(para) msgid "If you are using the horizon dashboard, you should edit the local_settings.py file:" msgstr "" @@ -845,43 +845,43 @@ msgstr "" msgid "Configure the VIP" msgstr "" -#: ./doc/high-availability-guide/api/section_api_vip.xml:8(simpara) +#: ./doc/high-availability-guide/api/section_api_vip.xml:8(para) msgid "First, you must select and assign a virtual IP address (VIP) that can freely float between cluster nodes." 
msgstr "" -#: ./doc/high-availability-guide/api/section_api_vip.xml:9(simpara) -msgid "This configuration creates p_ip_api, a virtual IP address for use by the API node (192.168.42.103) :" +#: ./doc/high-availability-guide/api/section_api_vip.xml:9(para) +msgid "This configuration creates p_ip_api, a virtual IP address for use by the API node (192.168.42.103):" msgstr "" #: ./doc/high-availability-guide/controller/section_rabbitmq.xml:11(title) msgid "Highly available RabbitMQ" msgstr "" -#: ./doc/high-availability-guide/controller/section_rabbitmq.xml:13(simpara) +#: ./doc/high-availability-guide/controller/section_rabbitmq.xml:13(para) msgid "RabbitMQ is the default AMQP server used by many OpenStack services. Making the RabbitMQ service highly available involves:" msgstr "" -#: ./doc/high-availability-guide/controller/section_rabbitmq.xml:17(simpara) +#: ./doc/high-availability-guide/controller/section_rabbitmq.xml:17(para) msgid "configuring a DRBD device for use by RabbitMQ," msgstr "" -#: ./doc/high-availability-guide/controller/section_rabbitmq.xml:22(simpara) +#: ./doc/high-availability-guide/controller/section_rabbitmq.xml:22(para) msgid "configuring RabbitMQ to use a data directory residing on that DRBD device," msgstr "" -#: ./doc/high-availability-guide/controller/section_rabbitmq.xml:28(simpara) ./doc/high-availability-guide/controller/section_mysql.xml:28(simpara) +#: ./doc/high-availability-guide/controller/section_rabbitmq.xml:28(para) ./doc/high-availability-guide/controller/section_mysql.xml:28(para) msgid "selecting and assigning a virtual IP address (VIP) that can freely float between cluster nodes," msgstr "" -#: ./doc/high-availability-guide/controller/section_rabbitmq.xml:34(simpara) +#: ./doc/high-availability-guide/controller/section_rabbitmq.xml:34(para) msgid "configuring RabbitMQ to listen on that IP address," msgstr "" -#: ./doc/high-availability-guide/controller/section_rabbitmq.xml:39(simpara) +#: 
./doc/high-availability-guide/controller/section_rabbitmq.xml:39(para) msgid "managing all resources, including the RabbitMQ daemon itself, with the Pacemaker cluster manager." msgstr "" -#: ./doc/high-availability-guide/controller/section_rabbitmq.xml:46(simpara) +#: ./doc/high-availability-guide/controller/section_rabbitmq.xml:46(para) msgid "There is an alternative method of configuring RabbitMQ for high availability. That approach, known as active-active mirrored queues, happens to be the one preferred by the RabbitMQ developers. However, it has shown less than ideal consistency and reliability in OpenStack clusters. Thus, at the time of writing, the Pacemaker/DRBD based approach remains the recommended one for OpenStack environments, although this may change in the near future as RabbitMQ active-active mirrored queues mature." msgstr "" @@ -889,19 +889,19 @@ msgstr "" msgid "Configure DRBD" msgstr "" -#: ./doc/high-availability-guide/controller/section_rabbitmq.xml:60(simpara) +#: ./doc/high-availability-guide/controller/section_rabbitmq.xml:60(para) msgid "The Pacemaker based RabbitMQ server requires a DRBD resource from which it mounts the /var/lib/rabbitmq directory. In this example, the DRBD resource is simply named rabbitmq:" msgstr "" #: ./doc/high-availability-guide/controller/section_rabbitmq.xml:65(title) -msgid "rabbitmq DRBD resource configuration (/etc/drbd.d/rabbitmq.res)" +msgid "rabbitmq DRBD resource configuration (/etc/drbd.d/rabbitmq.res)" msgstr "" -#: ./doc/high-availability-guide/controller/section_rabbitmq.xml:81(simpara) +#: ./doc/high-availability-guide/controller/section_rabbitmq.xml:81(para) msgid "This resource uses an underlying local disk (in DRBD terminology, a backing device) named /dev/data/rabbitmq on both cluster nodes, node1 and node2. Normally, this would be an LVM Logical Volume specifically set aside for this purpose.
The DRBD meta-disk is internal, meaning DRBD-specific metadata is being stored at the end of the disk device itself. The device is configured to communicate between IPv4 addresses 10.0.42.100 and 10.0.42.254, using TCP port 7701. Once enabled, it will map to a local DRBD block device with the device minor number 1, that is, /dev/drbd1." msgstr "" -#: ./doc/high-availability-guide/controller/section_rabbitmq.xml:90(simpara) ./doc/high-availability-guide/controller/section_mysql.xml:87(simpara) +#: ./doc/high-availability-guide/controller/section_rabbitmq.xml:90(para) ./doc/high-availability-guide/controller/section_mysql.xml:87(para) msgid "Enabling a DRBD resource is explained in detail in the DRBD User’s Guide. In brief, the proper sequence of commands is this:" msgstr "" @@ -921,15 +921,15 @@ msgstr "" msgid "Create a file system" msgstr "" -#: ./doc/high-availability-guide/controller/section_rabbitmq.xml:127(simpara) +#: ./doc/high-availability-guide/controller/section_rabbitmq.xml:127(para) msgid "Once the DRBD resource is running and in the primary role (and potentially still in the process of running the initial device synchronization), you may proceed with creating the filesystem for RabbitMQ data. 
XFS is generally the recommended filesystem:" msgstr "" -#: ./doc/high-availability-guide/controller/section_rabbitmq.xml:132(simpara) ./doc/high-availability-guide/controller/section_mysql.xml:129(simpara) +#: ./doc/high-availability-guide/controller/section_rabbitmq.xml:132(para) ./doc/high-availability-guide/controller/section_mysql.xml:129(para) msgid "You may also use the alternate device path for the DRBD device, which may be easier to remember as it includes the self-explanatory resource name:" msgstr "" -#: ./doc/high-availability-guide/controller/section_rabbitmq.xml:136(simpara) ./doc/high-availability-guide/controller/section_mysql.xml:133(simpara) +#: ./doc/high-availability-guide/controller/section_rabbitmq.xml:136(para) ./doc/high-availability-guide/controller/section_mysql.xml:133(para) msgid "Once completed, you may safely return the device to the secondary role. Any ongoing device synchronization will continue in the background:" msgstr "" @@ -937,7 +937,7 @@ msgstr "" msgid "Prepare RabbitMQ for Pacemaker high availability" msgstr "" -#: ./doc/high-availability-guide/controller/section_rabbitmq.xml:145(simpara) +#: ./doc/high-availability-guide/controller/section_rabbitmq.xml:145(para) msgid "In order for Pacemaker monitoring to function properly, you must ensure that RabbitMQ’s .erlang.cookie files are identical on all nodes, regardless of whether DRBD is mounted there or not. The simplest way of doing so is to take an existing .erlang.cookie from one of your nodes, copying it to the RabbitMQ data directory on the other node, and also copying it to the DRBD-backed filesystem." msgstr "" @@ -945,31 +945,31 @@ msgstr "" msgid "Add RabbitMQ resources to Pacemaker" msgstr "" -#: ./doc/high-availability-guide/controller/section_rabbitmq.xml:160(simpara) +#: ./doc/high-availability-guide/controller/section_rabbitmq.xml:160(para) msgid "You may now proceed with adding the Pacemaker configuration for RabbitMQ resources. 
Connect to the Pacemaker cluster with crm configure, and add the following cluster resources:" msgstr "" -#: ./doc/high-availability-guide/controller/section_rabbitmq.xml:193(simpara) +#: ./doc/high-availability-guide/controller/section_rabbitmq.xml:193(para) msgid "p_ip_rabbitmq, a virtual IP address for use by RabbitMQ (192.168.42.100)," msgstr "" -#: ./doc/high-availability-guide/controller/section_rabbitmq.xml:198(simpara) +#: ./doc/high-availability-guide/controller/section_rabbitmq.xml:198(para) msgid "p_fs_rabbitmq, a Pacemaker managed filesystem mounted to /var/lib/rabbitmq on whatever node currently runs the RabbitMQ service," msgstr "" -#: ./doc/high-availability-guide/controller/section_rabbitmq.xml:204(simpara) +#: ./doc/high-availability-guide/controller/section_rabbitmq.xml:204(para) msgid "ms_drbd_rabbitmq, the master/slave set managing the rabbitmq DRBD resource," msgstr "" -#: ./doc/high-availability-guide/controller/section_rabbitmq.xml:209(simpara) ./doc/high-availability-guide/controller/section_mysql.xml:221(simpara) +#: ./doc/high-availability-guide/controller/section_rabbitmq.xml:209(para) ./doc/high-availability-guide/controller/section_mysql.xml:221(para) msgid "a service group and order and colocation constraints to ensure resources are started on the correct nodes, and in the correct sequence." msgstr "" -#: ./doc/high-availability-guide/controller/section_rabbitmq.xml:215(simpara) +#: ./doc/high-availability-guide/controller/section_rabbitmq.xml:215(para) msgid "crm configure supports batch input, so you may copy and paste the above into your live pacemaker configuration, and then make changes as required. For example, you may enter edit p_ip_rabbitmq from the crm configure menu and edit the resource to match your preferred virtual IP address." 
msgstr "" -#: ./doc/high-availability-guide/controller/section_rabbitmq.xml:220(simpara) +#: ./doc/high-availability-guide/controller/section_rabbitmq.xml:220(para) msgid "Once completed, commit your configuration changes by entering commit from the crm configure menu. Pacemaker will then start the RabbitMQ service, and its dependent resources, on one of your nodes." msgstr "" @@ -977,15 +977,15 @@ msgstr "" msgid "Configure OpenStack services for highly available RabbitMQ" msgstr "" -#: ./doc/high-availability-guide/controller/section_rabbitmq.xml:228(simpara) +#: ./doc/high-availability-guide/controller/section_rabbitmq.xml:228(para) msgid "Your OpenStack services must now point their RabbitMQ configuration to the highly available, virtual cluster IP address, rather than to an individual RabbitMQ server." msgstr "" -#: ./doc/high-availability-guide/controller/section_rabbitmq.xml:231(simpara) -msgid "For OpenStack Image, for example, if your RabbitMQ service IP address is 192.168.42.100 as in the configuration explained here, you would use the following line in your OpenStack Image API configuration file (glance-api.conf):" +#: ./doc/high-availability-guide/controller/section_rabbitmq.xml:231(para) +msgid "For OpenStack Image, for example, if your RabbitMQ service IP address is 192.168.42.100 as in the configuration explained here, you would use the following line in your OpenStack Image API configuration file (glance-api.conf):" msgstr "" -#: ./doc/high-availability-guide/controller/section_rabbitmq.xml:236(simpara) +#: ./doc/high-availability-guide/controller/section_rabbitmq.xml:236(para) msgid "No other changes are necessary to your OpenStack configuration. If the node currently hosting your RabbitMQ experiences a problem necessitating service failover, your OpenStack services may experience a brief RabbitMQ interruption, as they would in the event of a network hiccup, and then continue to run normally."
msgstr "" @@ -993,40 +993,40 @@ msgstr "" msgid "Highly available MySQL" msgstr "" -#: ./doc/high-availability-guide/controller/section_mysql.xml:13(simpara) +#: ./doc/high-availability-guide/controller/section_mysql.xml:13(para) msgid "MySQL is the default database server used by many OpenStack services. Making the MySQL service highly available involves" msgstr "" -#: ./doc/high-availability-guide/controller/section_mysql.xml:17(simpara) +#: ./doc/high-availability-guide/controller/section_mysql.xml:17(para) msgid "configuring a DRBD device for use by MySQL," msgstr "" -#: ./doc/high-availability-guide/controller/section_mysql.xml:22(simpara) +#: ./doc/high-availability-guide/controller/section_mysql.xml:22(para) msgid "configuring MySQL to use a data directory residing on that DRBD device," msgstr "" -#: ./doc/high-availability-guide/controller/section_mysql.xml:34(simpara) +#: ./doc/high-availability-guide/controller/section_mysql.xml:34(para) msgid "configuring MySQL to listen on that IP address," msgstr "" -#: ./doc/high-availability-guide/controller/section_mysql.xml:39(simpara) +#: ./doc/high-availability-guide/controller/section_mysql.xml:39(para) msgid "managing all resources, including the MySQL daemon itself, with the Pacemaker cluster manager." msgstr "" -#: ./doc/high-availability-guide/controller/section_mysql.xml:46(simpara) +#: ./doc/high-availability-guide/controller/section_mysql.xml:46(para) msgid "MySQL/Galera is an alternative method of configuring MySQL for high availability. It is likely to become the preferred method of achieving MySQL high availability once it has sufficiently matured. At the time of writing, however, the Pacemaker/DRBD based approach remains the recommended one for OpenStack environments."
msgstr "" -#: ./doc/high-availability-guide/controller/section_mysql.xml:57(simpara) +#: ./doc/high-availability-guide/controller/section_mysql.xml:57(para) msgid "The Pacemaker based MySQL server requires a DRBD resource from which it mounts the /var/lib/mysql directory. In this example, the DRBD resource is simply named mysql:" msgstr "" #: ./doc/high-availability-guide/controller/section_mysql.xml:62(title) -msgid "mysql DRBD resource configuration (/etc/drbd.d/mysql.res)" +msgid "mysql DRBD resource configuration (/etc/drbd.d/mysql.res)" msgstr "" -#: ./doc/high-availability-guide/controller/section_mysql.xml:78(simpara) -msgid "This resource uses an underlying local disk (in DRBD terminology, a backing device) named /dev/data/mysql on both cluster nodes, node1 and node2. Normally, this would be an LVM Logical Volume specifically set aside for this purpose. The DRBD meta-disk is internal, meaning DRBD-specific metadata is being stored at the end of the disk device itself. The device is configured to communicate between IPv4 addresses 10.0.42.100 and 10.0.42.254, using TCP port 7700. Once enabled, it will map to a local DRBD block device with the device minor number 0, that is, /dev/drbd0." +#: ./doc/high-availability-guide/controller/section_mysql.xml:78(para) +msgid "This resource uses an underlying local disk (in DRBD terminology, a backing device) named /dev/data/mysql on both cluster nodes, node1 and node2. Normally, this would be an LVM Logical Volume specifically set aside for this purpose. The DRBD meta-disk is internal, meaning DRBD-specific metadata is being stored at the end of the disk device itself. The device is configured to communicate between IPv4 addresses 10.0.42.100 and 10.0.42.254, using TCP port 7700. Once enabled, it will map to a local DRBD block device with the device minor number 0, that is, /dev/drbd0." 
msgstr "" #: ./doc/high-availability-guide/controller/section_mysql.xml:95(para) @@ -1041,7 +1041,7 @@ msgstr "" msgid "Creating a file system" msgstr "" -#: ./doc/high-availability-guide/controller/section_mysql.xml:124(simpara) +#: ./doc/high-availability-guide/controller/section_mysql.xml:124(para) msgid "Once the DRBD resource is running and in the primary role (and potentially still in the process of running the initial device synchronization), you may proceed with creating the filesystem for MySQL data. XFS is the generally recommended filesystem:" msgstr "" @@ -1049,19 +1049,19 @@ msgstr "" msgid "Prepare MySQL for Pacemaker high availability" msgstr "" -#: ./doc/high-availability-guide/controller/section_mysql.xml:142(simpara) +#: ./doc/high-availability-guide/controller/section_mysql.xml:142(para) msgid "In order for Pacemaker monitoring to function properly, you must ensure that MySQL’s database files reside on the DRBD device. If you already have an existing MySQL database, the simplest approach is to just move the contents of the existing /var/lib/mysql directory into the newly created filesystem on the DRBD device." msgstr "" -#: ./doc/high-availability-guide/controller/section_mysql.xml:148(simpara) +#: ./doc/high-availability-guide/controller/section_mysql.xml:148(para) msgid "You must complete the next step while the MySQL database server is shut down." msgstr "" -#: ./doc/high-availability-guide/controller/section_mysql.xml:154(simpara) +#: ./doc/high-availability-guide/controller/section_mysql.xml:154(para) msgid "For a new MySQL installation with no existing data, you may also run the mysql_install_db command:" msgstr "" -#: ./doc/high-availability-guide/controller/section_mysql.xml:159(simpara) +#: ./doc/high-availability-guide/controller/section_mysql.xml:159(para) msgid "Regardless of the approach, the steps outlined here must be completed on only one cluster node." 
msgstr "" @@ -1069,27 +1069,27 @@ msgstr "" msgid "Add MySQL resources to Pacemaker" msgstr "" -#: ./doc/high-availability-guide/controller/section_mysql.xml:166(simpara) +#: ./doc/high-availability-guide/controller/section_mysql.xml:166(para) msgid "You can now add the Pacemaker configuration for MySQL resources. Connect to the Pacemaker cluster with crm configure, and add the following cluster resources:" msgstr "" -#: ./doc/high-availability-guide/controller/section_mysql.xml:205(simpara) +#: ./doc/high-availability-guide/controller/section_mysql.xml:205(para) msgid "p_ip_mysql, a virtual IP address for use by MySQL (192.168.42.101)," msgstr "" -#: ./doc/high-availability-guide/controller/section_mysql.xml:210(simpara) +#: ./doc/high-availability-guide/controller/section_mysql.xml:210(para) msgid "p_fs_mysql, a Pacemaker managed filesystem mounted to /var/lib/mysql on whatever node currently runs the MySQL service," msgstr "" -#: ./doc/high-availability-guide/controller/section_mysql.xml:216(simpara) +#: ./doc/high-availability-guide/controller/section_mysql.xml:216(para) msgid "ms_drbd_mysql, the master/slave set managing the mysql DRBD resource," msgstr "" -#: ./doc/high-availability-guide/controller/section_mysql.xml:227(simpara) +#: ./doc/high-availability-guide/controller/section_mysql.xml:227(para) msgid "crm configure supports batch input, so you may copy and paste the above into your live pacemaker configuration, and then make changes as required. For example, you may enter edit p_ip_mysql from the crm configure menu and edit the resource to match your preferred virtual IP address." msgstr "" -#: ./doc/high-availability-guide/controller/section_mysql.xml:232(simpara) +#: ./doc/high-availability-guide/controller/section_mysql.xml:232(para) msgid "Once completed, commit your configuration changes by entering commit from the crm configure menu. Pacemaker will then start the MySQL service, and its dependent resources, on one of your nodes." 
msgstr "" @@ -1097,15 +1097,15 @@ msgstr "" msgid "Configure OpenStack services for highly available MySQL" msgstr "" -#: ./doc/high-availability-guide/controller/section_mysql.xml:240(simpara) +#: ./doc/high-availability-guide/controller/section_mysql.xml:240(para) msgid "Your OpenStack services must now point their MySQL configuration to the highly available, virtual cluster IP address, rather than to an individual MySQL server." msgstr "" -#: ./doc/high-availability-guide/controller/section_mysql.xml:243(simpara) -msgid "For OpenStack Image, for example, if your MySQL service IP address is 192.168.42.101 as in the configuration explained here, you would use the following line in your OpenStack Image registry configuration file (glance-registry.conf):" +#: ./doc/high-availability-guide/controller/section_mysql.xml:243(para) +msgid "For OpenStack Image, for example, if your MySQL service IP address is 192.168.42.101 as in the configuration explained here, you would use the following line in your OpenStack Image registry configuration file (glance-registry.conf):" msgstr "" -#: ./doc/high-availability-guide/controller/section_mysql.xml:248(simpara) +#: ./doc/high-availability-guide/controller/section_mysql.xml:248(para) msgid "No other changes are necessary to your OpenStack configuration. If the node currently hosting your database experiences a problem necessitating service failover, your OpenStack services may experience a brief MySQL interruption, as they would in the event of a network hiccup, and then continue to run normally." msgstr "" @@ -1113,23 +1113,23 @@ msgstr "" msgid "Memcached" msgstr "" -#: ./doc/high-availability-guide/ha_aa_controllers/section_memcached.xml:8(simpara) +#: ./doc/high-availability-guide/ha_aa_controllers/section_memcached.xml:8(para) msgid "Most OpenStack services use an application to provide persistence and store ephemeral data (such as tokens). Memcached is one such application, and it can scale out easily without any special tricks."
msgstr "" -#: ./doc/high-availability-guide/ha_aa_controllers/section_memcached.xml:10(simpara) +#: ./doc/high-availability-guide/ha_aa_controllers/section_memcached.xml:10(para) msgid "To install and configure it, read the official documentation." msgstr "" -#: ./doc/high-availability-guide/ha_aa_controllers/section_memcached.xml:11(simpara) +#: ./doc/high-availability-guide/ha_aa_controllers/section_memcached.xml:11(para) msgid "Memory caching is managed by oslo-incubator, so the way to use multiple memcached servers is the same for all projects." msgstr "" -#: ./doc/high-availability-guide/ha_aa_controllers/section_memcached.xml:12(simpara) +#: ./doc/high-availability-guide/ha_aa_controllers/section_memcached.xml:12(para) msgid "Example with two hosts:" msgstr "" -#: ./doc/high-availability-guide/ha_aa_controllers/section_memcached.xml:14(simpara) +#: ./doc/high-availability-guide/ha_aa_controllers/section_memcached.xml:14(para) msgid "By default, controller1 handles the caching service, but if that host goes down, controller2 takes over. For more information about memcached installation, see the OpenStack Cloud Administrator Guide." msgstr "" @@ -1141,7 +1141,7 @@ msgstr "" msgid "API services" msgstr "" -#: ./doc/high-availability-guide/ha_aa_controllers/section_run_openstack_api_and_schedulers.xml:12(simpara) +#: ./doc/high-availability-guide/ha_aa_controllers/section_run_openstack_api_and_schedulers.xml:12(para) msgid "All OpenStack projects have an API service for controlling all the resources in the Cloud. In Active / Active mode, the most common setup is to scale out these services on at least two nodes and use load balancing and a virtual IP (with HAProxy & Keepalived in this setup)."
msgstr "" @@ -1149,15 +1149,15 @@ msgstr "" msgid "Configure API OpenStack services" msgstr "" -#: ./doc/high-availability-guide/ha_aa_controllers/section_run_openstack_api_and_schedulers.xml:18(simpara) +#: ./doc/high-availability-guide/ha_aa_controllers/section_run_openstack_api_and_schedulers.xml:18(para) msgid "To configure our cloud with highly available and scalable API services, we need to ensure that:" msgstr "" -#: ./doc/high-availability-guide/ha_aa_controllers/section_run_openstack_api_and_schedulers.xml:21(simpara) +#: ./doc/high-availability-guide/ha_aa_controllers/section_run_openstack_api_and_schedulers.xml:21(para) msgid "You use the virtual IP when configuring OpenStack Identity endpoints." msgstr "" -#: ./doc/high-availability-guide/ha_aa_controllers/section_run_openstack_api_and_schedulers.xml:26(simpara) +#: ./doc/high-availability-guide/ha_aa_controllers/section_run_openstack_api_and_schedulers.xml:26(para) msgid "All OpenStack configuration files should refer to the virtual IP." msgstr "" @@ -1165,7 +1165,7 @@ msgstr "" msgid "In case of failure" msgstr "" -#: ./doc/high-availability-guide/ha_aa_controllers/section_run_openstack_api_and_schedulers.xml:34(simpara) +#: ./doc/high-availability-guide/ha_aa_controllers/section_run_openstack_api_and_schedulers.xml:34(para) msgid "The monitor check is quite simple since it just establishes a TCP connection to the API port. Compared to the Active / Passive mode using Corosync & resource agents, we don’t check whether the service is actually running. That’s why all OpenStack API services should be monitored by another tool, for example Nagios, with the goal of detecting failures in the cloud framework infrastructure." msgstr "" @@ -1173,147 +1173,147 @@ msgstr "" msgid "Schedulers" msgstr "" -#: ./doc/high-availability-guide/ha_aa_controllers/section_run_openstack_api_and_schedulers.xml:43(simpara) -msgid "OpenStack schedulers are used to determine how to dispatch compute, network and volume requests.
The most common setup is to use RabbitMQ as messaging system already documented in this guide. Those services are connected to the messaging backend and can scale-out :" msgstr "" +#: ./doc/high-availability-guide/ha_aa_controllers/section_run_openstack_api_and_schedulers.xml:43(para) +msgid "OpenStack schedulers are used to determine how to dispatch compute, network and volume requests. The most common setup is to use RabbitMQ as the messaging system, as already documented in this guide. These services connect to the messaging backend and can scale out:" msgstr "" -#: ./doc/high-availability-guide/ha_aa_controllers/section_run_openstack_api_and_schedulers.xml:48(simpara) +#: ./doc/high-availability-guide/ha_aa_controllers/section_run_openstack_api_and_schedulers.xml:48(para) msgid "nova-scheduler" msgstr "" -#: ./doc/high-availability-guide/ha_aa_controllers/section_run_openstack_api_and_schedulers.xml:53(simpara) +#: ./doc/high-availability-guide/ha_aa_controllers/section_run_openstack_api_and_schedulers.xml:53(para) msgid "nova-conductor" msgstr "" -#: ./doc/high-availability-guide/ha_aa_controllers/section_run_openstack_api_and_schedulers.xml:58(simpara) +#: ./doc/high-availability-guide/ha_aa_controllers/section_run_openstack_api_and_schedulers.xml:58(para) msgid "cinder-scheduler" msgstr "" -#: ./doc/high-availability-guide/ha_aa_controllers/section_run_openstack_api_and_schedulers.xml:63(simpara) +#: ./doc/high-availability-guide/ha_aa_controllers/section_run_openstack_api_and_schedulers.xml:63(para) msgid "neutron-server" msgstr "" -#: ./doc/high-availability-guide/ha_aa_controllers/section_run_openstack_api_and_schedulers.xml:68(simpara) +#: ./doc/high-availability-guide/ha_aa_controllers/section_run_openstack_api_and_schedulers.xml:68(para) msgid "ceilometer-collector" msgstr "" -#: ./doc/high-availability-guide/ha_aa_controllers/section_run_openstack_api_and_schedulers.xml:73(simpara) +#: 
./doc/high-availability-guide/ha_aa_controllers/section_run_openstack_api_and_schedulers.xml:73(para) msgid "heat-engine" msgstr "" -#: ./doc/high-availability-guide/ha_aa_controllers/section_run_openstack_api_and_schedulers.xml:78(simpara) +#: ./doc/high-availability-guide/ha_aa_controllers/section_run_openstack_api_and_schedulers.xml:78(para) msgid "Please refer to the RabbitMQ section for how to configure these services with multiple messaging servers." msgstr "" -#: ./doc/high-availability-guide/pacemaker/section_start_pacemaker.xml:6(title) +#: ./doc/high-availability-guide/pacemaker/section_start_pacemaker.xml:5(title) msgid "Start Pacemaker" msgstr "" -#: ./doc/high-availability-guide/pacemaker/section_start_pacemaker.xml:8(simpara) +#: ./doc/high-availability-guide/pacemaker/section_start_pacemaker.xml:6(para) msgid "Once the Corosync services have been started, and you have established that the cluster is communicating properly, it is safe to start pacemakerd, the Pacemaker master control process:" msgstr "" -#: ./doc/high-availability-guide/pacemaker/section_start_pacemaker.xml:13(simpara) +#: ./doc/high-availability-guide/pacemaker/section_start_pacemaker.xml:11(para) msgid "/etc/init.d/pacemaker start (LSB)" msgstr "" -#: ./doc/high-availability-guide/pacemaker/section_start_pacemaker.xml:17(simpara) +#: ./doc/high-availability-guide/pacemaker/section_start_pacemaker.xml:15(para) msgid "service pacemaker start (LSB, alternate)" msgstr "" -#: ./doc/high-availability-guide/pacemaker/section_start_pacemaker.xml:21(simpara) +#: ./doc/high-availability-guide/pacemaker/section_start_pacemaker.xml:19(para) msgid "start pacemaker (upstart)" msgstr "" -#: ./doc/high-availability-guide/pacemaker/section_start_pacemaker.xml:25(simpara) +#: ./doc/high-availability-guide/pacemaker/section_start_pacemaker.xml:23(para) msgid "systemctl start pacemaker (systemd)" msgstr "" -#: ./doc/high-availability-guide/pacemaker/section_start_pacemaker.xml:29(simpara) +#: 
./doc/high-availability-guide/pacemaker/section_start_pacemaker.xml:27(para) msgid "Once Pacemaker services have started, Pacemaker will create a default empty cluster configuration with no resources. You may observe Pacemaker’s status with the crm_mon utility:" msgstr "" -#: ./doc/high-availability-guide/pacemaker/section_set_up_corosync.xml:6(title) +#: ./doc/high-availability-guide/pacemaker/section_set_up_corosync.xml:5(title) msgid "Set up Corosync" msgstr "" -#: ./doc/high-availability-guide/pacemaker/section_set_up_corosync.xml:8(simpara) -msgid "Besides installing the corosync package, you must also create a configuration file, stored in /etc/corosync/corosync.conf. Most distributions ship an example configuration file (corosync.conf.example) as part of the documentation bundled with the corosync package. An example Corosync configuration file is shown below:" +#: ./doc/high-availability-guide/pacemaker/section_set_up_corosync.xml:6(para) +msgid "Besides installing the corosync package, you must also create a configuration file, stored in /etc/corosync/corosync.conf. Most distributions ship an example configuration file (corosync.conf.example) as part of the documentation bundled with the corosync package. An example Corosync configuration file is shown below:" msgstr "" -#: ./doc/high-availability-guide/pacemaker/section_set_up_corosync.xml:16(title) -msgid "Corosync configuration file (corosync.conf)" +#: ./doc/high-availability-guide/pacemaker/section_set_up_corosync.xml:14(title) +msgid "Corosync configuration file (corosync.conf)" msgstr "" -#: ./doc/high-availability-guide/pacemaker/section_set_up_corosync.xml:90(para) +#: ./doc/high-availability-guide/pacemaker/section_set_up_corosync.xml:88(para) msgid "The token value specifies the time, in milliseconds, during which the Corosync token is expected to be transmitted around the ring. 
When this timeout expires, the token is declared lost, and after token_retransmits_before_loss_const lost tokens the non-responding processor (cluster node) is declared dead. In other words, token × token_retransmits_before_loss_const is the maximum time a node is allowed to not respond to cluster messages before being considered dead. The default for token is 1000 (1 second), with 4 allowed retransmits. These defaults are intended to minimize failover times, but can cause frequent \"false alarms\" and unintended failovers in case of short network interruptions. The values used here are safer, albeit with slightly extended failover times." msgstr "" -#: ./doc/high-availability-guide/pacemaker/section_set_up_corosync.xml:106(para) +#: ./doc/high-availability-guide/pacemaker/section_set_up_corosync.xml:104(para) msgid "With secauth enabled, Corosync nodes mutually authenticate using a 128-byte shared secret stored in /etc/corosync/authkey, which may be generated with the corosync-keygen utility. When using secauth, cluster communications are also encrypted." msgstr "" -#: ./doc/high-availability-guide/pacemaker/section_set_up_corosync.xml:114(para) +#: ./doc/high-availability-guide/pacemaker/section_set_up_corosync.xml:112(para) msgid "In Corosync configurations using redundant networking (with more than one interface), you must select a Redundant Ring Protocol (RRP) mode other than none. active is the recommended RRP mode." msgstr "" -#: ./doc/high-availability-guide/pacemaker/section_set_up_corosync.xml:121(para) +#: ./doc/high-availability-guide/pacemaker/section_set_up_corosync.xml:119(para) msgid "There are several things to note about the recommended interface configuration:" msgstr "" -#: ./doc/high-availability-guide/pacemaker/section_set_up_corosync.xml:127(simpara) +#: ./doc/high-availability-guide/pacemaker/section_set_up_corosync.xml:125(para) msgid "The ringnumber must differ between all configured interfaces, starting with 0." 
msgstr "" -#: ./doc/high-availability-guide/pacemaker/section_set_up_corosync.xml:133(simpara) +#: ./doc/high-availability-guide/pacemaker/section_set_up_corosync.xml:131(para) msgid "The bindnetaddr is the network address of the interfaces to bind to. The example uses two network addresses of /24 IPv4 subnets." msgstr "" -#: ./doc/high-availability-guide/pacemaker/section_set_up_corosync.xml:139(simpara) +#: ./doc/high-availability-guide/pacemaker/section_set_up_corosync.xml:137(para) msgid "Multicast groups (mcastaddr) must not be reused across cluster boundaries. In other words, no two distinct clusters should ever use the same multicast group. Be sure to select multicast addresses compliant with RFC 2365, \"Administratively Scoped IP Multicast\"." msgstr "" -#: ./doc/high-availability-guide/pacemaker/section_set_up_corosync.xml:148(simpara) +#: ./doc/high-availability-guide/pacemaker/section_set_up_corosync.xml:146(para) msgid "For firewall configurations, note that Corosync communicates over UDP only, and uses mcastport (for receives) and mcastport-1 (for sends)." msgstr "" -#: ./doc/high-availability-guide/pacemaker/section_set_up_corosync.xml:157(para) -msgid "The service declaration for the pacemaker service may be placed in the corosync.conf file directly, or in its own separate file, /etc/corosync/service.d/pacemaker." +#: ./doc/high-availability-guide/pacemaker/section_set_up_corosync.xml:155(para) +msgid "The service declaration for the pacemaker service may be placed in the corosync.conf file directly, or in its own separate file, /etc/corosync/service.d/pacemaker." msgstr "" -#: ./doc/high-availability-guide/pacemaker/section_set_up_corosync.xml:164(simpara) -msgid "Once created, the corosync.conf file (and the authkey file if the secauth option is enabled) must be synchronized across all cluster nodes." 
+#: ./doc/high-availability-guide/pacemaker/section_set_up_corosync.xml:162(para) +msgid "Once created, the corosync.conf file (and the authkey file if the secauth option is enabled) must be synchronized across all cluster nodes." msgstr "" -#: ./doc/high-availability-guide/pacemaker/section_install_packages.xml:6(title) +#: ./doc/high-availability-guide/pacemaker/section_install_packages.xml:5(title) msgid "Install packages" msgstr "" -#: ./doc/high-availability-guide/pacemaker/section_install_packages.xml:8(simpara) +#: ./doc/high-availability-guide/pacemaker/section_install_packages.xml:6(para) msgid "On any host that is meant to be part of a Pacemaker cluster, you must first establish cluster communications through the Corosync messaging layer. This involves installing the following packages (and their dependencies, which your package manager will normally install automatically):" msgstr "" -#: ./doc/high-availability-guide/pacemaker/section_install_packages.xml:15(simpara) +#: ./doc/high-availability-guide/pacemaker/section_install_packages.xml:13(para) msgid "pacemaker (note that the crm shell should be downloaded separately)"
msgstr "" -#: ./doc/high-availability-guide/pacemaker/section_install_packages.xml:20(literal) +#: ./doc/high-availability-guide/pacemaker/section_install_packages.xml:18(literal) msgid "crmsh" msgstr "" -#: ./doc/high-availability-guide/pacemaker/section_install_packages.xml:25(literal) +#: ./doc/high-availability-guide/pacemaker/section_install_packages.xml:23(literal) msgid "corosync" msgstr "" -#: ./doc/high-availability-guide/pacemaker/section_install_packages.xml:30(literal) +#: ./doc/high-availability-guide/pacemaker/section_install_packages.xml:28(literal) msgid "cluster-glue" msgstr "" -#: ./doc/high-availability-guide/pacemaker/section_install_packages.xml:34(simpara) +#: ./doc/high-availability-guide/pacemaker/section_install_packages.xml:32(para) msgid "fence-agents (Fedora only; all other distributions use fencing agents from cluster-glue)" msgstr "" -#: ./doc/high-availability-guide/pacemaker/section_install_packages.xml:40(literal) +#: ./doc/high-availability-guide/pacemaker/section_install_packages.xml:38(literal) msgid "resource-agents" msgstr "" @@ -1321,11 +1321,11 @@ msgstr "" msgid "Set basic cluster properties" msgstr "" -#: ./doc/high-availability-guide/pacemaker/section_set_basic_cluster_properties.xml:8(simpara) +#: ./doc/high-availability-guide/pacemaker/section_set_basic_cluster_properties.xml:8(para) msgid "Once your Pacemaker cluster is set up, it is recommended to set a few basic cluster properties. To do so, start the crm shell and change into the configuration menu by entering configure. Alternatively, you may jump straight into the Pacemaker configuration menu by typing crm configure directly from a shell prompt."
msgstr "" -#: ./doc/high-availability-guide/pacemaker/section_set_basic_cluster_properties.xml:14(simpara) +#: ./doc/high-availability-guide/pacemaker/section_set_basic_cluster_properties.xml:14(para) msgid "Then, set the following properties:" msgstr "" @@ -1341,7 +1341,7 @@ msgstr "" msgid "Pacemaker uses an event-driven approach to cluster state processing. However, certain Pacemaker actions occur at a configurable interval, cluster-recheck-interval, which defaults to 15 minutes. It is usually prudent to reduce this to a shorter interval, such as 5 or 3 minutes." msgstr "" -#: ./doc/high-availability-guide/pacemaker/section_set_basic_cluster_properties.xml:52(simpara) +#: ./doc/high-availability-guide/pacemaker/section_set_basic_cluster_properties.xml:52(para) msgid "Once you have made these changes, you may commit the updated configuration." msgstr "" @@ -1349,39 +1349,39 @@ msgstr "" msgid "Starting Corosync" msgstr "" -#: ./doc/high-availability-guide/pacemaker/section_starting_corosync.xml:8(simpara) +#: ./doc/high-availability-guide/pacemaker/section_starting_corosync.xml:8(para) msgid "Corosync is started as a regular system service. Depending on your distribution, it may ship with a LSB (System V style) init script, an upstart job, or a systemd unit file. 
Either way, the service is usually named corosync:" msgstr "" -#: ./doc/high-availability-guide/pacemaker/section_starting_corosync.xml:14(simpara) +#: ./doc/high-availability-guide/pacemaker/section_starting_corosync.xml:14(para) msgid "/etc/init.d/corosync start (LSB)" msgstr "" -#: ./doc/high-availability-guide/pacemaker/section_starting_corosync.xml:18(simpara) +#: ./doc/high-availability-guide/pacemaker/section_starting_corosync.xml:18(para) msgid "service corosync start (LSB, alternate)" msgstr "" -#: ./doc/high-availability-guide/pacemaker/section_starting_corosync.xml:22(simpara) +#: ./doc/high-availability-guide/pacemaker/section_starting_corosync.xml:22(para) msgid "start corosync (upstart)" msgstr "" -#: ./doc/high-availability-guide/pacemaker/section_starting_corosync.xml:26(simpara) +#: ./doc/high-availability-guide/pacemaker/section_starting_corosync.xml:26(para) msgid "systemctl start corosync (systemd)" msgstr "" -#: ./doc/high-availability-guide/pacemaker/section_starting_corosync.xml:30(simpara) +#: ./doc/high-availability-guide/pacemaker/section_starting_corosync.xml:30(para) msgid "You can now check the Corosync connectivity with two tools." 
msgstr "" -#: ./doc/high-availability-guide/pacemaker/section_starting_corosync.xml:31(simpara) +#: ./doc/high-availability-guide/pacemaker/section_starting_corosync.xml:31(para) msgid "The corosync-cfgtool utility, when invoked with the -s option, gives a summary of the health of the communication rings:" msgstr "" -#: ./doc/high-availability-guide/pacemaker/section_starting_corosync.xml:42(simpara) +#: ./doc/high-availability-guide/pacemaker/section_starting_corosync.xml:42(para) msgid "The corosync-objctl utility can be used to dump the Corosync cluster member list:" msgstr "" -#: ./doc/high-availability-guide/pacemaker/section_starting_corosync.xml:51(simpara) +#: ./doc/high-availability-guide/pacemaker/section_starting_corosync.xml:51(para) msgid "You should see a status=joined entry for each of your constituent cluster nodes." msgstr "" @@ -1389,47 +1389,47 @@ msgstr "" msgid "Configure OpenStack services to use RabbitMQ" msgstr "" -#: ./doc/high-availability-guide/ha_aa_rabbitmq/section_configure_openstack_services_to_user_rabbitmq.xml:8(simpara) +#: ./doc/high-availability-guide/ha_aa_rabbitmq/section_configure_openstack_services_to_user_rabbitmq.xml:8(para) msgid "We have to configure the OpenStack components to use at least two RabbitMQ nodes." 
msgstr "" -#: ./doc/high-availability-guide/ha_aa_rabbitmq/section_configure_openstack_services_to_user_rabbitmq.xml:9(simpara) +#: ./doc/high-availability-guide/ha_aa_rabbitmq/section_configure_openstack_services_to_user_rabbitmq.xml:9(para) msgid "Do this configuration on all services using RabbitMQ:" msgstr "" -#: ./doc/high-availability-guide/ha_aa_rabbitmq/section_configure_openstack_services_to_user_rabbitmq.xml:10(simpara) +#: ./doc/high-availability-guide/ha_aa_rabbitmq/section_configure_openstack_services_to_user_rabbitmq.xml:10(para) msgid "RabbitMQ HA cluster host:port pairs:" msgstr "" -#: ./doc/high-availability-guide/ha_aa_rabbitmq/section_configure_openstack_services_to_user_rabbitmq.xml:12(simpara) +#: ./doc/high-availability-guide/ha_aa_rabbitmq/section_configure_openstack_services_to_user_rabbitmq.xml:12(para) msgid "How frequently to retry connecting to RabbitMQ:" msgstr "" -#: ./doc/high-availability-guide/ha_aa_rabbitmq/section_configure_openstack_services_to_user_rabbitmq.xml:14(simpara) +#: ./doc/high-availability-guide/ha_aa_rabbitmq/section_configure_openstack_services_to_user_rabbitmq.xml:14(para) msgid "How long to back off between retries when connecting to RabbitMQ:" msgstr "" -#: ./doc/high-availability-guide/ha_aa_rabbitmq/section_configure_openstack_services_to_user_rabbitmq.xml:16(simpara) +#: ./doc/high-availability-guide/ha_aa_rabbitmq/section_configure_openstack_services_to_user_rabbitmq.xml:16(para) msgid "Maximum number of retries when connecting to RabbitMQ (infinite by default):" msgstr "" -#: ./doc/high-availability-guide/ha_aa_rabbitmq/section_configure_openstack_services_to_user_rabbitmq.xml:18(simpara) +#: ./doc/high-availability-guide/ha_aa_rabbitmq/section_configure_openstack_services_to_user_rabbitmq.xml:18(para) msgid "Use durable queues in RabbitMQ:" msgstr "" -#: ./doc/high-availability-guide/ha_aa_rabbitmq/section_configure_openstack_services_to_user_rabbitmq.xml:20(simpara) +#: 
./doc/high-availability-guide/ha_aa_rabbitmq/section_configure_openstack_services_to_user_rabbitmq.xml:20(para) msgid "Use H/A queues in RabbitMQ (x-ha-policy: all):" msgstr "" -#: ./doc/high-availability-guide/ha_aa_rabbitmq/section_configure_openstack_services_to_user_rabbitmq.xml:22(simpara) +#: ./doc/high-availability-guide/ha_aa_rabbitmq/section_configure_openstack_services_to_user_rabbitmq.xml:22(para) msgid "If you change the configuration from an old setup which did not use HA queues, you should interrupt the service:" msgstr "" -#: ./doc/high-availability-guide/ha_aa_rabbitmq/section_configure_openstack_services_to_user_rabbitmq.xml:26(simpara) +#: ./doc/high-availability-guide/ha_aa_rabbitmq/section_configure_openstack_services_to_user_rabbitmq.xml:26(para) msgid "Services currently working with HA queues: OpenStack Compute, OpenStack Block Storage, OpenStack Networking, Telemetry." msgstr "" -#: ./doc/high-availability-guide/ha_aa_rabbitmq/section_install_rabbitmq.xml:8(simpara) +#: ./doc/high-availability-guide/ha_aa_rabbitmq/section_install_rabbitmq.xml:8(para) msgid "This setup has been tested with RabbitMQ 2.7.1." msgstr "" @@ -1437,7 +1437,7 @@ msgstr "" msgid "On Ubuntu / Debian" msgstr "" -#: ./doc/high-availability-guide/ha_aa_rabbitmq/section_install_rabbitmq.xml:13(simpara) ./doc/high-availability-guide/ha_aa_rabbitmq/section_install_rabbitmq.xml:23(simpara) +#: ./doc/high-availability-guide/ha_aa_rabbitmq/section_install_rabbitmq.xml:13(para) ./doc/high-availability-guide/ha_aa_rabbitmq/section_install_rabbitmq.xml:23(para) msgid "RabbitMQ is packaged on both distros:" msgstr "" @@ -1457,67 +1457,67 @@ msgstr "" msgid "Configure RabbitMQ" msgstr "" -#: ./doc/high-availability-guide/ha_aa_rabbitmq/section_configure_rabbitmq.xml:8(simpara) +#: ./doc/high-availability-guide/ha_aa_rabbitmq/section_configure_rabbitmq.xml:8(para) msgid "Here we are building a cluster of RabbitMQ nodes to construct a RabbitMQ broker. 
Mirrored queues in RabbitMQ improve the availability of service since it will be resilient to failures. We have to consider that while exchanges and bindings will survive the loss of individual nodes, queues and their messages will not because a queue and its contents is located on one node. If we lose this node, we also lose the queue." msgstr "" -#: ./doc/high-availability-guide/ha_aa_rabbitmq/section_configure_rabbitmq.xml:13(simpara) +#: ./doc/high-availability-guide/ha_aa_rabbitmq/section_configure_rabbitmq.xml:13(para) msgid "We consider that we run (at least) two RabbitMQ servers. To build a broker, we need to ensure that all nodes have the same erlang cookie file. To do so, stop RabbitMQ everywhere and copy the cookie from rabbit1 server to other server(s):" msgstr "" -#: ./doc/high-availability-guide/ha_aa_rabbitmq/section_configure_rabbitmq.xml:18(simpara) +#: ./doc/high-availability-guide/ha_aa_rabbitmq/section_configure_rabbitmq.xml:18(para) msgid "Then, start RabbitMQ on nodes. If RabbitMQ fails to start, you can’t continue to the next step." msgstr "" -#: ./doc/high-availability-guide/ha_aa_rabbitmq/section_configure_rabbitmq.xml:20(simpara) +#: ./doc/high-availability-guide/ha_aa_rabbitmq/section_configure_rabbitmq.xml:20(para) msgid "Now, we are building the HA cluster. From rabbit2, run these commands:" msgstr "" -#: ./doc/high-availability-guide/ha_aa_rabbitmq/section_configure_rabbitmq.xml:24(simpara) -msgid "To verify the cluster status :" +#: ./doc/high-availability-guide/ha_aa_rabbitmq/section_configure_rabbitmq.xml:24(para) +msgid "To verify the cluster status:" msgstr "" -#: ./doc/high-availability-guide/ha_aa_rabbitmq/section_configure_rabbitmq.xml:29(simpara) +#: ./doc/high-availability-guide/ha_aa_rabbitmq/section_configure_rabbitmq.xml:30(para) msgid "If the cluster is working, you can now proceed to creating users and passwords for queues." 
msgstr "" -#: ./doc/high-availability-guide/ha_aa_rabbitmq/section_configure_rabbitmq.xml:31(emphasis) +#: ./doc/high-availability-guide/ha_aa_rabbitmq/section_configure_rabbitmq.xml:32(emphasis) msgid "Note for RabbitMQ version 3" msgstr "" -#: ./doc/high-availability-guide/ha_aa_rabbitmq/section_configure_rabbitmq.xml:33(simpara) +#: ./doc/high-availability-guide/ha_aa_rabbitmq/section_configure_rabbitmq.xml:34(para) msgid "Queue mirroring is no longer controlled by the x-ha-policy argument when declaring a queue. OpenStack can continue to declare this argument, but it won’t cause queues to be mirrored. We need to make sure that all queues (except those with auto-generated names) are mirrored across all running nodes:" msgstr "" -#: ./doc/high-availability-guide/ha_aa_rabbitmq/section_configure_rabbitmq.xml:38(link) +#: ./doc/high-availability-guide/ha_aa_rabbitmq/section_configure_rabbitmq.xml:39(link) msgid "More information about High availability in RabbitMQ" msgstr "" -#: ./doc/high-availability-guide/network/section_highly_available_neutron_metadata_agent.xml:6(title) +#: ./doc/high-availability-guide/network/section_highly_available_neutron_metadata_agent.xml:5(title) msgid "Highly available neutron metadata agent" msgstr "" -#: ./doc/high-availability-guide/network/section_highly_available_neutron_metadata_agent.xml:8(simpara) +#: ./doc/high-availability-guide/network/section_highly_available_neutron_metadata_agent.xml:6(para) msgid "Neutron metadata agent allows Compute API metadata to be reachable by VMs on tenant networks. High availability for the metadata agent is achieved by adopting Pacemaker." msgstr "" -#: ./doc/high-availability-guide/network/section_highly_available_neutron_metadata_agent.xml:12(simpara) +#: ./doc/high-availability-guide/network/section_highly_available_neutron_metadata_agent.xml:10(para) msgid "Here is the documentation for installing Neutron Metadata Agent." 
msgstr "" -#: ./doc/high-availability-guide/network/section_highly_available_neutron_metadata_agent.xml:16(title) +#: ./doc/high-availability-guide/network/section_highly_available_neutron_metadata_agent.xml:13(title) msgid "Add neutron metadata agent resource to Pacemaker" msgstr "" -#: ./doc/high-availability-guide/network/section_highly_available_neutron_metadata_agent.xml:22(simpara) +#: ./doc/high-availability-guide/network/section_highly_available_neutron_metadata_agent.xml:18(para) msgid "You may now proceed with adding the Pacemaker configuration for neutron metadata agent resource. Connect to the Pacemaker cluster with crm configure, and add the following cluster resources:" msgstr "" -#: ./doc/high-availability-guide/network/section_highly_available_neutron_metadata_agent.xml:32(simpara) +#: ./doc/high-availability-guide/network/section_highly_available_neutron_metadata_agent.xml:28(para) msgid "p_neutron-metadata-agent, a resource for manage Neutron Metadata Agent service" msgstr "" -#: ./doc/high-availability-guide/network/section_highly_available_neutron_metadata_agent.xml:40(simpara) +#: ./doc/high-availability-guide/network/section_highly_available_neutron_metadata_agent.xml:36(para) msgid "Once completed, commit your configuration changes by entering commit from the crm configure menu. Pacemaker will then start the neutron metadata agent service, and its dependent resources, on one of your nodes." msgstr "" @@ -1525,11 +1525,11 @@ msgstr "" msgid "Highly available neutron L3 agent" msgstr "" -#: ./doc/high-availability-guide/network/section_highly_available_neutron_l3_agent.xml:8(simpara) +#: ./doc/high-availability-guide/network/section_highly_available_neutron_l3_agent.xml:8(para) msgid "The neutron L3 agent provides L3/NAT forwarding to ensure external network access for VMs on tenant networks. High availability for the L3 agent is achieved by adopting Pacemaker." 
msgstr "" -#: ./doc/high-availability-guide/network/section_highly_available_neutron_l3_agent.xml:12(simpara) +#: ./doc/high-availability-guide/network/section_highly_available_neutron_l3_agent.xml:12(para) msgid "Here is the documentation for installing neutron L3 agent." msgstr "" @@ -1537,19 +1537,19 @@ msgstr "" msgid "Add neutron L3 agent resource to Pacemaker" msgstr "" -#: ./doc/high-availability-guide/network/section_highly_available_neutron_l3_agent.xml:22(simpara) +#: ./doc/high-availability-guide/network/section_highly_available_neutron_l3_agent.xml:22(para) msgid "You may now proceed with adding the Pacemaker configuration for neutron L3 agent resource. Connect to the Pacemaker cluster with crm configure, and add the following cluster resources:" msgstr "" -#: ./doc/high-availability-guide/network/section_highly_available_neutron_l3_agent.xml:32(simpara) +#: ./doc/high-availability-guide/network/section_highly_available_neutron_l3_agent.xml:32(para) msgid "p_neutron-l3-agent, a resource for manage Neutron L3 Agent service" msgstr "" -#: ./doc/high-availability-guide/network/section_highly_available_neutron_l3_agent.xml:39(simpara) +#: ./doc/high-availability-guide/network/section_highly_available_neutron_l3_agent.xml:39(para) msgid "Once completed, commit your configuration changes by entering commit from the crm configure menu. Pacemaker will then start the neutron L3 agent service, and its dependent resources, on one of your nodes." msgstr "" -#: ./doc/high-availability-guide/network/section_highly_available_neutron_l3_agent.xml:43(simpara) +#: ./doc/high-availability-guide/network/section_highly_available_neutron_l3_agent.xml:43(para) msgid "This method does not ensure a zero downtime since it has to recreate all the namespaces and virtual routers on the node." 
msgstr "" @@ -1557,7 +1557,7 @@ msgstr "" msgid "Manage network resources" msgstr "" -#: ./doc/high-availability-guide/network/section_manage_network_resources.xml:8(simpara) +#: ./doc/high-availability-guide/network/section_manage_network_resources.xml:8(para) msgid "You can now add the Pacemaker configuration for managing all network resources together with a group. Connect to the Pacemaker cluster with crm configure, and add the following cluster resources:" msgstr "" @@ -1565,11 +1565,11 @@ msgstr "" msgid "Highly available neutron DHCP agent" msgstr "" -#: ./doc/high-availability-guide/network/section_highly_available_neutron_dhcp_agent.xml:8(simpara) +#: ./doc/high-availability-guide/network/section_highly_available_neutron_dhcp_agent.xml:8(para) msgid "Neutron DHCP agent distributes IP addresses to the VMs with dnsmasq (by default). High availability for the DHCP agent is achieved by adopting Pacemaker." msgstr "" -#: ./doc/high-availability-guide/network/section_highly_available_neutron_dhcp_agent.xml:12(simpara) +#: ./doc/high-availability-guide/network/section_highly_available_neutron_dhcp_agent.xml:12(para) msgid "Here is the documentation for installing neutron DHCP agent." msgstr "" @@ -1577,15 +1577,15 @@ msgstr "" msgid "Add neutron DHCP agent resource to Pacemaker" msgstr "" -#: ./doc/high-availability-guide/network/section_highly_available_neutron_dhcp_agent.xml:22(simpara) +#: ./doc/high-availability-guide/network/section_highly_available_neutron_dhcp_agent.xml:22(para) msgid "You may now proceed with adding the Pacemaker configuration for neutron DHCP agent resource. 
Connect to the Pacemaker cluster with crm configure, and add the following cluster resources:" msgstr "" -#: ./doc/high-availability-guide/network/section_highly_available_neutron_dhcp_agent.xml:32(simpara) +#: ./doc/high-availability-guide/network/section_highly_available_neutron_dhcp_agent.xml:32(para) msgid "p_neutron-dhcp-agent, a resource for manage Neutron DHCP Agent service" msgstr "" -#: ./doc/high-availability-guide/network/section_highly_available_neutron_dhcp_agent.xml:40(simpara) +#: ./doc/high-availability-guide/network/section_highly_available_neutron_dhcp_agent.xml:40(para) msgid "Once completed, commit your configuration changes by entering commit from the crm configure menu. Pacemaker will then start the neutron DHCP agent service, and its dependent resources, on one of your nodes." msgstr "" diff --git a/doc/high-availability-guide/locale/ja.po b/doc/high-availability-guide/locale/ja.po index b4478da0f1..6905f9676c 100644 --- a/doc/high-availability-guide/locale/ja.po +++ b/doc/high-availability-guide/locale/ja.po @@ -4,8 +4,8 @@ msgid "" msgstr "" "Project-Id-Version: OpenStack Manuals\n" -"POT-Creation-Date: 2014-07-08 03:10+0000\n" -"PO-Revision-Date: 2014-07-07 15:34+0000\n" +"POT-Creation-Date: 2014-07-09 04:52+0000\n" +"PO-Revision-Date: 2014-07-09 03:03+0000\n" "Last-Translator: openstackjenkins \n" "Language-Team: Japanese (http://www.transifex.com/projects/p/openstack-manuals-i18n/language/ja/)\n" "MIME-Version: 1.0\n" @@ -18,7 +18,7 @@ msgstr "" msgid "Cloud controller cluster stack" msgstr "クラウドコントローラーのクラスタースタック" -#: ./doc/high-availability-guide/ch_controller.xml9(simpara) +#: ./doc/high-availability-guide/ch_controller.xml9(para) msgid "" "The cloud controller runs on the management network and must talk to all " "other services." 
@@ -28,31 +28,31 @@ msgstr "クラウドコントローラーは、管理ネットワークで動作 msgid "OpenStack network nodes" msgstr "OpenStack ネットワークノード" -#: ./doc/high-availability-guide/ch_ha_aa_network.xml9(simpara) +#: ./doc/high-availability-guide/ch_ha_aa_network.xml9(para) msgid "OpenStack network nodes contain:" msgstr "OpenStack ネットワークノードは次のものを含みます。" -#: ./doc/high-availability-guide/ch_ha_aa_network.xml12(simpara) +#: ./doc/high-availability-guide/ch_ha_aa_network.xml12(para) msgid "neutron DHCP agent" msgstr "Neutron DHCP エージェント" -#: ./doc/high-availability-guide/ch_ha_aa_network.xml17(simpara) +#: ./doc/high-availability-guide/ch_ha_aa_network.xml17(para) msgid "neutron L2 agent" msgstr "neutron L2 エージェント" -#: ./doc/high-availability-guide/ch_ha_aa_network.xml22(simpara) +#: ./doc/high-availability-guide/ch_ha_aa_network.xml22(para) msgid "Neutron L3 agent" msgstr "neutron L3 エージェント" -#: ./doc/high-availability-guide/ch_ha_aa_network.xml27(simpara) +#: ./doc/high-availability-guide/ch_ha_aa_network.xml27(para) msgid "neutron metadata agent" msgstr "neutron メタデータエージェント" -#: ./doc/high-availability-guide/ch_ha_aa_network.xml32(simpara) +#: ./doc/high-availability-guide/ch_ha_aa_network.xml32(para) msgid "neutron lbaas agent" msgstr "neutron LBaaS エージェント" -#: ./doc/high-availability-guide/ch_ha_aa_network.xml38(simpara) +#: ./doc/high-availability-guide/ch_ha_aa_network.xml38(para) msgid "" "The neutron L2 agent does not need to be highly available. It has to be " "installed on each Data Forwarding Node and controls the virtual networking " @@ -65,7 +65,7 @@ msgstr "neutron L2 エージェントは高可用性にする必要がありま msgid "The Pacemaker cluster stack" msgstr "Pacemaker クラスタースタック" -#: ./doc/high-availability-guide/ch_pacemaker.xml9(simpara) +#: ./doc/high-availability-guide/ch_pacemaker.xml9(para) msgid "" "OpenStack infrastructure high availability relies on the Pacemaker cluster stack, the " @@ -74,7 +74,7 @@ msgid "" "specific to OpenStack." 
msgstr "OpenStack インフラの高可用性は Pacemaker クラスター・スタックに依存しています。これは Linux プラットフォームにおける最先端の高可用性および負荷分散のスタックです。Pacemaker はストレージとアプリケーションに依存しません。また、OpenStack に固有ではありません。" -#: ./doc/high-availability-guide/ch_pacemaker.xml14(simpara) +#: ./doc/high-availability-guide/ch_pacemaker.xml14(para) msgid "" "Pacemaker relies on the Corosync messaging layer for " @@ -83,7 +83,7 @@ msgid "" "messaging, quorum, and cluster membership to Pacemaker." msgstr "Pacemaker は信頼できるクラスター通信として Corosync メッセージング層に依存しています。Corosync は Totem 単一リング順序制御・メンバー管理プロトコルを実装しています。また、UDP および InfiniBand によるメッセージング、クォーラムおよびクラスターメンバー管理を Pacemaker に提供します。" -#: ./doc/high-availability-guide/ch_pacemaker.xml19(simpara) +#: ./doc/high-availability-guide/ch_pacemaker.xml19(para) msgid "" "Pacemaker interacts with applications through resource " "agents (RAs), of which it supports over 70 natively. Pacemaker " @@ -98,22 +98,22 @@ msgstr "Pacemaker は リソースエージェント (RA) msgid "RabbitMQ" msgstr "RabbitMQ" -#: ./doc/high-availability-guide/ch_ha_aa_rabbitmq.xml9(simpara) +#: ./doc/high-availability-guide/ch_ha_aa_rabbitmq.xml9(para) msgid "" "RabbitMQ is the default AMQP server used by many OpenStack services. 
Making " "the RabbitMQ service highly available involves the following steps:" msgstr "RabbitMQ が多くの OpenStack サービスにより使用される標準の AMQP サーバーです。RabbitMQ サービスを高可用性にすることは、以下の手順が関連します。" -#: ./doc/high-availability-guide/ch_ha_aa_rabbitmq.xml13(simpara) +#: ./doc/high-availability-guide/ch_ha_aa_rabbitmq.xml13(para) #: ./doc/high-availability-guide/ha_aa_rabbitmq/section_install_rabbitmq.xml6(title) msgid "Install RabbitMQ" msgstr "RabbitMQ のインストール" -#: ./doc/high-availability-guide/ch_ha_aa_rabbitmq.xml18(simpara) +#: ./doc/high-availability-guide/ch_ha_aa_rabbitmq.xml18(para) msgid "Configure RabbitMQ for HA queues" msgstr "高可用性 キュー用の RabbitMQ の設定" -#: ./doc/high-availability-guide/ch_ha_aa_rabbitmq.xml23(simpara) +#: ./doc/high-availability-guide/ch_ha_aa_rabbitmq.xml23(para) msgid "Configure OpenStack services to use Rabbit HA queues" msgstr "RabbitMQ HA キューを使用するための OpenStack サービスの設定" @@ -121,24 +121,24 @@ msgstr "RabbitMQ HA キューを使用するための OpenStack サービスの msgid "Introduction to OpenStack High Availability" msgstr "OpenStack 高可用性の概要" -#: ./doc/high-availability-guide/ch_intro.xml9(simpara) +#: ./doc/high-availability-guide/ch_intro.xml9(para) msgid "High Availability systems seek to minimize two things:" msgstr "高可用性システムは以下の二つを最小化しようとします。" -#: ./doc/high-availability-guide/ch_intro.xml12(simpara) +#: ./doc/high-availability-guide/ch_intro.xml12(para) msgid "" "System downtime — occurs when a " "user-facing service is unavailable beyond a specified " "maximum amount of time, and" msgstr "システム停止時間 — 指定された最大時間を超えるユーザーが直面するサービスの利用不可能な時間の発生。" -#: ./doc/high-availability-guide/ch_intro.xml16(simpara) +#: ./doc/high-availability-guide/ch_intro.xml16(para) msgid "" "Data loss — accidental deletion or " "destruction of data." 
msgstr "データ損失 — 不慮の削除、またはデータの破損。" -#: ./doc/high-availability-guide/ch_intro.xml20(simpara) +#: ./doc/high-availability-guide/ch_intro.xml20(para) msgid "" "Most high availability systems guarantee protection against system downtime " "and data loss only in the event of a single failure. However, they are also " @@ -146,7 +146,7 @@ msgid "" "deteriorates into a series of consequential failures." msgstr "多くの高可用性システムは単一障害事象のみで、システム停止やデータ損失に対する保護を保証します。しかしながら、単一障害が一連の障害を悪化させていく、段階的な障害に対しても保護することが期待されます。" -#: ./doc/high-availability-guide/ch_intro.xml21(simpara) +#: ./doc/high-availability-guide/ch_intro.xml21(para) msgid "" "A crucial aspect of high availability is the elimination of single points of" " failure (SPOFs). A SPOF is an individual piece of equipment or software " @@ -154,30 +154,30 @@ msgid "" "eliminate SPOFs, check that mechanisms exist for redundancy of:" msgstr "高可用性の重要な側面は単一障害点 (SPOF) を減らすことです。SPOF は、障害が発生した場合にシステム停止やデータ損失を引き起こす、設備やソフトウェアの個々の部品です。SPOF を削減するために、以下の冗長性に対するメカニズムを確認します。" -#: ./doc/high-availability-guide/ch_intro.xml24(simpara) +#: ./doc/high-availability-guide/ch_intro.xml24(para) msgid "Network components, such as switches and routers" msgstr "スイッチやルーターなどのネットワークの構成要素" -#: ./doc/high-availability-guide/ch_intro.xml29(simpara) +#: ./doc/high-availability-guide/ch_intro.xml29(para) msgid "Applications and automatic service migration" msgstr "アプリケーションおよび自動的なサービスのマイグレーション" -#: ./doc/high-availability-guide/ch_intro.xml34(simpara) +#: ./doc/high-availability-guide/ch_intro.xml34(para) msgid "Storage components" msgstr "ストレージ構成要素" -#: ./doc/high-availability-guide/ch_intro.xml39(simpara) +#: ./doc/high-availability-guide/ch_intro.xml39(para) msgid "Facility services such as power, air conditioning, and fire protection" msgstr "電源、空調、防火などに関する設備" -#: ./doc/high-availability-guide/ch_intro.xml44(simpara) +#: ./doc/high-availability-guide/ch_intro.xml44(para) msgid "" "Most high availability systems will fail in the event of multiple " 
"independent (non-consequential) failures. In this case, most systems will " "protect data over maintaining availability." msgstr "多くの高可用性システムは複数の独立した (不連続な) 障害が発生すると停止します。この場合、多くのシステムは可用性の維持よりデータの保護を優先します。" -#: ./doc/high-availability-guide/ch_intro.xml45(simpara) +#: ./doc/high-availability-guide/ch_intro.xml45(para) msgid "" "High-availability systems typically achieve uptime of 99.99% or more, which " "roughly equates to less than an hour of cumulative downtime per year. In " @@ -185,7 +185,7 @@ msgid "" "after a failure to about one to two minutes, sometimes significantly less." msgstr "高可用性システムは一般的に 99.99% かそれ以上の稼働時間を達成します。これは、年間の累積停止時間として 1 時間以内ほどです。これを達成するために、高可用性システムは、障害発生後の復旧時間を1、2分に、ときどきさらに短時間にする必要があります。" -#: ./doc/high-availability-guide/ch_intro.xml46(simpara) +#: ./doc/high-availability-guide/ch_intro.xml46(para) msgid "" "OpenStack currently meets such availability requirements for its own " "infrastructure services, meaning that an uptime of 99.99% is feasible for " @@ -194,7 +194,7 @@ msgid "" "availability for individual guest instances." msgstr "OpenStack は自身のインフラストラクチャーサービスに対して、現在そのような要件を満たしています。99.99% の稼働時間が OpenStack インフラストラクチャーに対して完全に実現するという意味です。しかしながら、OpenStack は個々のゲストインスタンスに対して 99.99% の可用性を保証していません。" -#: ./doc/high-availability-guide/ch_intro.xml47(simpara) +#: ./doc/high-availability-guide/ch_intro.xml47(para) msgid "" "Preventing single points of failure can depend on whether or not a service " "is stateless." @@ -204,7 +204,7 @@ msgstr "単一障害点を無くすことは、サービスがステートレス msgid "Stateless vs. Stateful services" msgstr "ステートレスサービスとステートフルサービス" -#: ./doc/high-availability-guide/ch_intro.xml52(simpara) +#: ./doc/high-availability-guide/ch_intro.xml52(para) msgid "" "A stateless service is one that provides a response after your request, and " "then requires no further attention. To make a stateless service highly " @@ -213,7 +213,7 @@ msgid "" "glance-api, keystone-api, neutron-api and nova-scheduler." 
msgstr "ステートレスなサービスは、リクエストの後に応答を返すもので、その後の注意をする必要がないものです。ステートレスなサービスを高可用性にするには、冗長なインスタンスとそれらの負荷分散を提供する必要があります。ステートレスな OpenStack サービスは nova-api、nova-conductor、glance-api、keystone-api、neutron-api、nova-scheduler があります。" -#: ./doc/high-availability-guide/ch_intro.xml53(simpara) +#: ./doc/high-availability-guide/ch_intro.xml53(para) msgid "" "A stateful service is one where subsequent requests to the service depend on" " the results of the first request. Stateful services are more difficult to " @@ -224,7 +224,7 @@ msgid "" "are stateful include the OpenStack database and message queue." msgstr "ステートフルサービスは、最初のサービスリクエストの結果に基づいて、後続のものがサービスにリクエストします。ステートフルサービスは管理することがより難しいです。その理由は、単一のアクションが複数のリクエストに関係するため、単に追加インスタンスと負荷分散を提供しても問題が解決しないからです。たとえば、Horizon ユーザーインターフェースが新しいページに移動するたびにリセットすると、かなり使い勝手が悪くなります。ステートフルな OpenStack サービスに OpenStack データベースとメッセージキューがあります。" -#: ./doc/high-availability-guide/ch_intro.xml54(simpara) +#: ./doc/high-availability-guide/ch_intro.xml54(para) msgid "" "Making stateful services highly available can depend on whether you choose " "an active/passive or active/active configuration." @@ -234,7 +234,7 @@ msgstr "ステートフルサービスの高可用性は、アクティブ / パ msgid "Active/Passive" msgstr "アクティブ/パッシブ" -#: ./doc/high-availability-guide/ch_intro.xml60(simpara) +#: ./doc/high-availability-guide/ch_intro.xml60(para) msgid "" "In an active/passive configuration, systems are set up to bring additional " "resources online to replace those that have failed. For example, OpenStack " @@ -243,7 +243,7 @@ msgid "" "fails." msgstr "アクティブ / パッシブの設定の場合、システムは故障したリソースを置き換えるために、オンラインで追加リソースをセットアップします。たとえば、メインのデータベースが故障したときにオンラインになる災害対策データベースを維持する限り、OpenStack はメインのデータベースに書き込みます。" -#: ./doc/high-availability-guide/ch_intro.xml61(simpara) +#: ./doc/high-availability-guide/ch_intro.xml61(para) msgid "" "Typically, an active/passive installation for a stateless service would " "maintain a redundant instance that can be brought online when required. 
" @@ -251,7 +251,7 @@ msgid "" "such as HAProxy." msgstr "一般的にステートレスサービスをアクティブ / パッシブにインストールすると、必要に応じてオンラインにできる冗長なインスタンスを維持することになります。リクエストは HAProxy のような仮想 IP アドレスとロードバランサーを使用して負荷分散されます。" -#: ./doc/high-availability-guide/ch_intro.xml62(simpara) +#: ./doc/high-availability-guide/ch_intro.xml62(para) msgid "" "A typical active/passive installation for a stateful service maintains a " "replacement resource that can be brought online when required. A separate " @@ -263,7 +263,7 @@ msgstr "一般的にステートレスサービスをアクティブ / パッシ msgid "Active/Active" msgstr "アクティブ/アクティブ" -#: ./doc/high-availability-guide/ch_intro.xml68(simpara) +#: ./doc/high-availability-guide/ch_intro.xml68(para) msgid "" "In an active/active configuration, systems also use a backup but will manage" " both the main and redundant systems concurrently. This way, if there is a " @@ -272,14 +272,14 @@ msgid "" " online." msgstr "アクティブ / アクティブの設定の場合、システムはバックアップ側も使用しますが、メインと冗長システムを同時に管理します。このように、ユーザーが気が付かない障害が発生した場合、バックアップシステムはすでにオンラインであり、メインシステムが復旧され、オンラインになるまでの間は負荷が高くなります。" -#: ./doc/high-availability-guide/ch_intro.xml69(simpara) +#: ./doc/high-availability-guide/ch_intro.xml69(para) msgid "" "Typically, an active/active installation for a stateless service would " "maintain a redundant instance, and requests are load balanced using a " "virtual IP address and a load balancer such as HAProxy." msgstr "一般的にステートレスサービスをアクティブ / アクティブにインストールすると、冗長なインスタンスを維持することになります。リクエストは HAProxy のような仮想 IP アドレスとロードバランサーを使用して負荷分散されます。" -#: ./doc/high-availability-guide/ch_intro.xml70(simpara) +#: ./doc/high-availability-guide/ch_intro.xml70(para) msgid "" "A typical active/active installation for a stateful service would include " "redundant services with all instances having an identical state. For " @@ -289,7 +289,7 @@ msgid "" "that operational systems always handle the request." 
msgstr "一般的にステートレスサービスをアクティブ / アクティブにインストールすることは、すべてのインスタンスが同じ状態を持つ冗長なサービスになることを含みます。たとえば、あるインスタンスのデータベースの更新は、他のすべてのインスタンスも更新されます。このように、あるインスタンスへのリクエストは、他へのリクエストと同じです。ロードバランサーがこれらのシステムのトラフィックを管理し、利用可能なシステムが常にリクエストを確実に処理します。" -#: ./doc/high-availability-guide/ch_intro.xml71(simpara) +#: ./doc/high-availability-guide/ch_intro.xml71(para) msgid "" "These are some of the more common ways to implement these high availability " "architectures, but they are by no means the only ways to do it. The " @@ -302,86 +302,86 @@ msgstr "これらの高可用性アーキテクチャーを実現する、より msgid "OpenStack High Availability Guide" msgstr "OpenStack 高可用性ガイド" -#: ./doc/high-availability-guide/bk-ha-guide.xml11(firstname) +#: ./doc/high-availability-guide/bk-ha-guide.xml12(firstname) msgid "Florian" msgstr "Florian" -#: ./doc/high-availability-guide/bk-ha-guide.xml12(surname) +#: ./doc/high-availability-guide/bk-ha-guide.xml13(surname) msgid "Haas" msgstr "Haas" -#: ./doc/high-availability-guide/bk-ha-guide.xml14(email) +#: ./doc/high-availability-guide/bk-ha-guide.xml15(email) msgid "florian@hastexo.com" msgstr "florian@hastexo.com" -#: ./doc/high-availability-guide/bk-ha-guide.xml16(orgname) +#: ./doc/high-availability-guide/bk-ha-guide.xml17(orgname) msgid "hastexo" msgstr "hastexo" -#: ./doc/high-availability-guide/bk-ha-guide.xml20(year) +#: ./doc/high-availability-guide/bk-ha-guide.xml21(year) msgid "2012" msgstr "2012" -#: ./doc/high-availability-guide/bk-ha-guide.xml21(year) +#: ./doc/high-availability-guide/bk-ha-guide.xml22(year) msgid "2013" msgstr "2013" -#: ./doc/high-availability-guide/bk-ha-guide.xml22(year) +#: ./doc/high-availability-guide/bk-ha-guide.xml23(year) msgid "2014" msgstr "2014" -#: ./doc/high-availability-guide/bk-ha-guide.xml23(holder) +#: ./doc/high-availability-guide/bk-ha-guide.xml24(holder) msgid "OpenStack Contributors" msgstr "OpenStack 貢献者" -#: ./doc/high-availability-guide/bk-ha-guide.xml25(releaseinfo) +#: ./doc/high-availability-guide/bk-ha-guide.xml26(releaseinfo) msgid 
"current" msgstr "カレント" -#: ./doc/high-availability-guide/bk-ha-guide.xml26(productname) +#: ./doc/high-availability-guide/bk-ha-guide.xml27(productname) msgid "OpenStack" msgstr "OpenStack" -#: ./doc/high-availability-guide/bk-ha-guide.xml30(remark) +#: ./doc/high-availability-guide/bk-ha-guide.xml31(remark) msgid "Copyright details are filled in by the template." msgstr "Copyright details are filled in by the template." -#: ./doc/high-availability-guide/bk-ha-guide.xml34(para) +#: ./doc/high-availability-guide/bk-ha-guide.xml35(para) msgid "" "This guide describes how to install, configure, and manage OpenStack for " "high availability." msgstr "このガイドは、高可用性 OpenStack をインストール、設定、管理する方法について記載します。" -#: ./doc/high-availability-guide/bk-ha-guide.xml39(date) +#: ./doc/high-availability-guide/bk-ha-guide.xml40(date) msgid "2014-05-16" msgstr "2014-05-16" -#: ./doc/high-availability-guide/bk-ha-guide.xml43(para) +#: ./doc/high-availability-guide/bk-ha-guide.xml44(para) msgid "Conversion to Docbook." msgstr "Docbook 形式への変換。" -#: ./doc/high-availability-guide/bk-ha-guide.xml49(date) +#: ./doc/high-availability-guide/bk-ha-guide.xml50(date) msgid "2014-04-17" msgstr "2014-04-17" -#: ./doc/high-availability-guide/bk-ha-guide.xml53(para) +#: ./doc/high-availability-guide/bk-ha-guide.xml54(para) msgid "" "Minor cleanup of typos, otherwise no major revisions for Icehouse release." msgstr "誤字脱字の軽微な修正。Icehouse リリース向けの大きな改版はありません。" -#: ./doc/high-availability-guide/bk-ha-guide.xml60(date) +#: ./doc/high-availability-guide/bk-ha-guide.xml61(date) msgid "2012-01-16" msgstr "2012-01-16" -#: ./doc/high-availability-guide/bk-ha-guide.xml64(para) +#: ./doc/high-availability-guide/bk-ha-guide.xml65(para) msgid "Organizes guide based on cloud controller and compute nodes." 
msgstr "クラウドコントローラーとコンピュートノードに基づいてガイドを整理しました。" -#: ./doc/high-availability-guide/bk-ha-guide.xml70(date) +#: ./doc/high-availability-guide/bk-ha-guide.xml71(date) msgid "2012-05-24" msgstr "2012-05-24" -#: ./doc/high-availability-guide/bk-ha-guide.xml74(para) +#: ./doc/high-availability-guide/bk-ha-guide.xml75(para) msgid "Begin trunk designation." msgstr "trunk 指定を始めました。" @@ -389,13 +389,13 @@ msgstr "trunk 指定を始めました。" msgid "Network controller cluster stack" msgstr "ネットワークコントローラーのクラスタースタック" -#: ./doc/high-availability-guide/ch_network.xml9(simpara) +#: ./doc/high-availability-guide/ch_network.xml9(para) msgid "" "The network controller sits on the management and data network, and needs to" " be connected to the Internet if a VM needs access to it." msgstr "ネットワークコントローラーは、管理ネットワークとデータネットワークにあり、仮想マシンがインターネットにアクセスする必要がある場合にインターネットに接続する必要があります。" -#: ./doc/high-availability-guide/ch_network.xml11(simpara) +#: ./doc/high-availability-guide/ch_network.xml11(para) msgid "" "Both nodes should have the same hostname since the Networking scheduler will" " be aware of one node, for example a virtual router attached to a single L3 " @@ -406,7 +406,7 @@ msgstr "Networking スケジューラーが 1 つのノードに認識するよ msgid "HAproxy nodes" msgstr "HAproxy ノード" -#: ./doc/high-availability-guide/ch_ha_aa_haproxy.xml9(simpara) +#: ./doc/high-availability-guide/ch_ha_aa_haproxy.xml9(para) msgid "" "HAProxy is a very fast and reliable solution offering high availability, " "load balancing, and proxying for TCP and HTTP-based applications. It is " @@ -415,7 +415,7 @@ msgid "" "connections is clearly realistic with today’s hardware." 
msgstr "HAProxy は高可用性、負荷分散、TCP と HTTP ベースのアプリケーションに対するプロキシを提供する非常に高速かつ信頼性のあるソリューションです。とくに永続性とレイヤー 7 処理を必要とする、非常に負荷の高いウェブサイトに適しています。数千の接続をサポートすることは、今日のハードウェアではかなり現実的です。" -#: ./doc/high-availability-guide/ch_ha_aa_haproxy.xml13(simpara) +#: ./doc/high-availability-guide/ch_ha_aa_haproxy.xml13(para) msgid "" "For installing HAproxy on your nodes, you should consider its official documentation. Also, " @@ -423,11 +423,11 @@ msgid "" "failure, so you need at least two nodes running HAproxy." msgstr "ノードに HAproxy をインストールするために、公式ドキュメントを参照すべきです。また、このサービスが単一障害点にならないように、少なくとも 2 つのノードで HAproxy を実行すべきであることを考慮する必要があります。" -#: ./doc/high-availability-guide/ch_ha_aa_haproxy.xml16(simpara) +#: ./doc/high-availability-guide/ch_ha_aa_haproxy.xml16(para) msgid "Here is an example for HAproxy configuration file:" msgstr "これは HAproxy 設定ファイルの例です。" -#: ./doc/high-availability-guide/ch_ha_aa_haproxy.xml156(simpara) +#: ./doc/high-availability-guide/ch_ha_aa_haproxy.xml156(para) msgid "After each change of this file, you should restart HAproxy." 
msgstr "このファイルをそれぞれ変更した後、HAproxy を再起動しなければいけません。" @@ -435,19 +435,19 @@ msgstr "このファイルをそれぞれ変更した後、HAproxy を再起動 msgid "OpenStack controller nodes" msgstr "OpenStack コントローラーノード" -#: ./doc/high-availability-guide/ch_ha_aa_controllers.xml9(simpara) +#: ./doc/high-availability-guide/ch_ha_aa_controllers.xml9(para) msgid "OpenStack controller nodes contain:" msgstr "OpenStack コントローラーノードは次のものを含みます。" -#: ./doc/high-availability-guide/ch_ha_aa_controllers.xml12(simpara) +#: ./doc/high-availability-guide/ch_ha_aa_controllers.xml12(para) msgid "All OpenStack API services" msgstr "すべての OpenStack API サービス" -#: ./doc/high-availability-guide/ch_ha_aa_controllers.xml17(simpara) +#: ./doc/high-availability-guide/ch_ha_aa_controllers.xml17(para) msgid "All OpenStack schedulers" msgstr "すべての OpenStack スケジューラー" -#: ./doc/high-availability-guide/ch_ha_aa_controllers.xml22(simpara) +#: ./doc/high-availability-guide/ch_ha_aa_controllers.xml22(para) msgid "Memcached service" msgstr "Memcached サービス" @@ -459,7 +459,7 @@ msgstr "アクティブ/パッシブを使用した HA" msgid "API node cluster stack" msgstr "API ノードクラスタースタック" -#: ./doc/high-availability-guide/ch_api.xml9(simpara) +#: ./doc/high-availability-guide/ch_api.xml9(para) msgid "" "The API node exposes OpenStack API endpoints onto external network " "(Internet). It must talk to the cloud controller on the management network." @@ -469,7 +469,7 @@ msgstr "API ノードは外部ネットワーク (インターネット) にあ msgid "Database" msgstr "データベース" -#: ./doc/high-availability-guide/ch_ha_aa_db.xml9(simpara) +#: ./doc/high-availability-guide/ch_ha_aa_db.xml9(para) msgid "" "The first step is installing the database that sits at the heart of the " "cluster. When we talk about High Availability, we talk about several " @@ -478,7 +478,7 @@ msgid "" "multi-master replication." 
msgstr "最初の手順は、クラスターの中心にあるデータベースをインストールすることです。私たちが高可用性について議論するとき、複数の (冗長性のための) データベースとそれらを同期させる方法について議論しています。この場合、MySQL データベースを、マルチマスターレプリケーション同期のために Galera と一緒に使用する必要があります。" -#: ./doc/high-availability-guide/ch_ha_aa_db.xml13(simpara) +#: ./doc/high-availability-guide/ch_ha_aa_db.xml13(para) msgid "" "The choice of database isn’t a foregone conclusion; you’re not required to " "use MySQL. It is, however, a fairly common choice in OpenStack " @@ -493,7 +493,7 @@ msgstr "アクティブ/アクティブを使用した HA" msgid "Galera monitoring scripts" msgstr "Galera 監視スクリプト" -#: ./doc/high-availability-guide/ha_aa_db/section_ha_aa_db_galera_monitoring.xml8(simpara) +#: ./doc/high-availability-guide/ha_aa_db/section_ha_aa_db_galera_monitoring.xml8(para) msgid "(Coming soon)" msgstr "(近日公開)" @@ -501,7 +501,7 @@ msgstr "(近日公開)" msgid "MySQL with Galera" msgstr "Galera を用いた MySQL" -#: ./doc/high-availability-guide/ha_aa_db/section_ha_aa_db_mysql_galera.xml8(simpara) +#: ./doc/high-availability-guide/ha_aa_db/section_ha_aa_db_mysql_galera.xml8(para) msgid "" "Rather than starting with a vanilla version of MySQL, and then adding " "Galera, you will want to install a version of MySQL patched for wsrep (Write" @@ -511,7 +511,7 @@ msgid "" "supports synchronous replication." msgstr "MySQL の派生バージョンから始めるよりは、https://launchpad.net/codership-mysql/0.7 にある wsrep (Write Set REPlication) のパッチを適用したバージョンの MySQL をインストールするとよいでしょう。wsrep API は、同期レプリケーションをサポートするので、OpenStack で高可用性 MySQL を設定するのに適しています。" -#: ./doc/high-availability-guide/ha_aa_db/section_ha_aa_db_mysql_galera.xml13(simpara) +#: ./doc/high-availability-guide/ha_aa_db/section_ha_aa_db_mysql_galera.xml13(para) msgid "" "Note that the installation requirements call for careful attention. Read the" " guide https://launchpad.net/galera/+download," @@ -531,103 +531,103 @@ msgid "" " your system doesn’t already have installed." 
msgstr "Galera を https://launchpad.net/galera/+download からダウンロードします。そして、システムにまだインストールされていない依存パッケージに注意しながら、*.rpms または *.debs をインストールします。" -#: ./doc/high-availability-guide/ha_aa_db/section_ha_aa_db_mysql_galera.xml28(simpara) +#: ./doc/high-availability-guide/ha_aa_db/section_ha_aa_db_mysql_galera.xml28(para) msgid "Adjust the configuration:" msgstr "設定を調整します。" -#: ./doc/high-availability-guide/ha_aa_db/section_ha_aa_db_mysql_galera.xml31(simpara) +#: ./doc/high-availability-guide/ha_aa_db/section_ha_aa_db_mysql_galera.xml31(para) msgid "" -"In the system-wide my.conf file, make sure mysqld isn’t " -"bound to 127.0.0.1, and that /etc/mysql/conf.d/ is " +"In the system-wide my.conf file, make sure mysqld isn’t" +" bound to 127.0.0.1, and that /etc/mysql/conf.d/ is " "included. Typically you can find this file at " -"/etc/my.cnf:" -msgstr "システム全体の my.conf ファイルで、mysqld が 127.0.0.1 に制限されておらず、/etc/mysql/conf.d/ がインクルードされることを確認します。このファイルは一般的に /etc/my.cnf にあります。" +"/etc/my.cnf:" +msgstr "" -#: ./doc/high-availability-guide/ha_aa_db/section_ha_aa_db_mysql_galera.xml39(simpara) +#: ./doc/high-availability-guide/ha_aa_db/section_ha_aa_db_mysql_galera.xml39(para) msgid "" "When adding a new node, you must configure it with a MySQL account that can " "access the other nodes. 
The new node must be able to request a state " "snapshot from one of the existing nodes:" msgstr "新しいノードを追加するとき、他のノードにアクセスできる MySQL アカウントを用いて設定する必要があります。新しいノードは、既存のノードのどれかから状態のスナップショットを要求できる必要があります。" -#: ./doc/high-availability-guide/ha_aa_db/section_ha_aa_db_mysql_galera.xml44(simpara) +#: ./doc/high-availability-guide/ha_aa_db/section_ha_aa_db_mysql_galera.xml44(para) msgid "" "Specify your MySQL account information in " -"/etc/mysql/conf.d/wsrep.cnf:" -msgstr "/etc/mysql/conf.d/wsrep.cnf に MySQL アカウント情報を指定します。" +"/etc/mysql/conf.d/wsrep.cnf:" +msgstr "" -#: ./doc/high-availability-guide/ha_aa_db/section_ha_aa_db_mysql_galera.xml50(simpara) +#: ./doc/high-availability-guide/ha_aa_db/section_ha_aa_db_mysql_galera.xml50(para) msgid "Connect as root and grant privileges to that user:" msgstr "root として接続し、そのユーザーに権限を与えます。" -#: ./doc/high-availability-guide/ha_aa_db/section_ha_aa_db_mysql_galera.xml56(simpara) +#: ./doc/high-availability-guide/ha_aa_db/section_ha_aa_db_mysql_galera.xml56(para) msgid "Remove user accounts with empty usernames because they cause problems:" msgstr "問題を引き起こすため、ユーザー名が空のユーザーアカウントを削除します。" -#: ./doc/high-availability-guide/ha_aa_db/section_ha_aa_db_mysql_galera.xml62(simpara) +#: ./doc/high-availability-guide/ha_aa_db/section_ha_aa_db_mysql_galera.xml62(para) msgid "" "Set up certain mandatory configuration options within MySQL itself. These " "include:" msgstr "MySQL 自身の中で特定の必須設定オプションを設定します。これらには次のものがあります。" -#: ./doc/high-availability-guide/ha_aa_db/section_ha_aa_db_mysql_galera.xml73(simpara) +#: ./doc/high-availability-guide/ha_aa_db/section_ha_aa_db_mysql_galera.xml73(para) msgid "" "Check that the nodes can access each other through the firewall. 
Depending " "on your environment, this might mean adjusting iptables, as in:" msgstr "最後に、ノードがファイアウォール越しにお互いにアクセスできることを確認します。お使いの環境によっては、次のような iptables の調整を意味するかもしれません。" -#: ./doc/high-availability-guide/ha_aa_db/section_ha_aa_db_mysql_galera.xml79(simpara) +#: ./doc/high-availability-guide/ha_aa_db/section_ha_aa_db_mysql_galera.xml83(para) msgid "" "This might also mean configuring any NAT firewall between nodes to allow " "direct connections. You might need to disable SELinux, or configure it to " "allow mysqld to listen to sockets at unprivileged ports." msgstr "直接通信を許可するためにノード間の NAT ファイアウォールを設定することを意味するかもしれません。SELinux を無効化する必要があるかもしれません。または、mysqld が非特権ポートでソケットをリッスンできるよう設定する必要があるかもしれません。" -#: ./doc/high-availability-guide/ha_aa_db/section_ha_aa_db_mysql_galera.xml84(simpara) +#: ./doc/high-availability-guide/ha_aa_db/section_ha_aa_db_mysql_galera.xml88(para) msgid "Now you’re ready to create the cluster." msgstr "これでクラスターを作成する準備ができました。" -#: ./doc/high-availability-guide/ha_aa_db/section_ha_aa_db_mysql_galera.xml87(title) +#: ./doc/high-availability-guide/ha_aa_db/section_ha_aa_db_mysql_galera.xml91(title) msgid "Create the cluster" msgstr "クラスターの作成" -#: ./doc/high-availability-guide/ha_aa_db/section_ha_aa_db_mysql_galera.xml89(simpara) +#: ./doc/high-availability-guide/ha_aa_db/section_ha_aa_db_mysql_galera.xml93(para) msgid "" "In creating a cluster, you first start a single instance, which creates the " "cluster. 
The rest of the MySQL instances then connect to that cluster:" msgstr "クラスターを作成するには、まず単独のインスタンスを起動します。これによりクラスターが作成されます。MySQL インスタンスの残りはそのクラスターに接続します。" -#: ./doc/high-availability-guide/ha_aa_db/section_ha_aa_db_mysql_galera.xml94(title) +#: ./doc/high-availability-guide/ha_aa_db/section_ha_aa_db_mysql_galera.xml98(title) msgid "An example of creating the cluster:" msgstr "クラスター作成の例:" -#: ./doc/high-availability-guide/ha_aa_db/section_ha_aa_db_mysql_galera.xml97(simpara) +#: ./doc/high-availability-guide/ha_aa_db/section_ha_aa_db_mysql_galera.xml101(para) msgid "Start on 10.0.0.10 by executing the command:" msgstr "以下のコマンドを実行して 10.0.0.10 で起動します。" -#: ./doc/high-availability-guide/ha_aa_db/section_ha_aa_db_mysql_galera.xml105(simpara) +#: ./doc/high-availability-guide/ha_aa_db/section_ha_aa_db_mysql_galera.xml109(para) msgid "" "Connect to that cluster on the rest of the nodes by referencing the address " "of that node, as in:" msgstr "次のとおり、そのノードのアドレスを参照することにより、残りのノードからそのクラスターに接続します。" -#: ./doc/high-availability-guide/ha_aa_db/section_ha_aa_db_mysql_galera.xml112(simpara) +#: ./doc/high-availability-guide/ha_aa_db/section_ha_aa_db_mysql_galera.xml116(para) msgid "" "You also have the option to set the wsrep_cluster_address" -" in the /etc/mysql/conf.d/wsrep.cnf file, or within the " -"client itself. (In fact, for some systems, such as MariaDB or Percona, this " -"may be your only option.)" -msgstr "/etc/mysql/conf.d/wsrep.cnf ファイル、またはクライアント自身の中に wsrep_cluster_address を設定できます。(実際、MariaDB や Percona のようないくつかのシステムは、これが唯一の選択肢になるでしょう。) " +" in the /etc/mysql/conf.d/wsrep.cnf file, or within the" +" client itself. (In fact, for some systems, such as MariaDB or Percona, this" +" may be your only option.)" +msgstr "" -#: ./doc/high-availability-guide/ha_aa_db/section_ha_aa_db_mysql_galera.xml118(title) +#: ./doc/high-availability-guide/ha_aa_db/section_ha_aa_db_mysql_galera.xml122(title) msgid "An example of checking the status of the cluster."
msgstr "クラスターの状態の確認例。" -#: ./doc/high-availability-guide/ha_aa_db/section_ha_aa_db_mysql_galera.xml121(simpara) +#: ./doc/high-availability-guide/ha_aa_db/section_ha_aa_db_mysql_galera.xml125(para) msgid "Open the MySQL client and check the status of the various parameters:" msgstr "MySQL クライアントを開き、さまざまなパラメーターの状態を確認します。" -#: ./doc/high-availability-guide/ha_aa_db/section_ha_aa_db_mysql_galera.xml128(simpara) +#: ./doc/high-availability-guide/ha_aa_db/section_ha_aa_db_mysql_galera.xml132(para) msgid "You should see a status that looks something like this:" msgstr "次のような状態が表示されるはずです。" @@ -635,7 +635,7 @@ msgstr "次のような状態が表示されるはずです。" msgid "Other ways to provide a highly available database" msgstr "高可用性データベースを提供する別の方法" -#: ./doc/high-availability-guide/ha_aa_db/section_other_ways_to_provide_a_highly_available_database.xml8(simpara) +#: ./doc/high-availability-guide/ha_aa_db/section_other_ways_to_provide_a_highly_available_database.xml8(para) msgid "" "MySQL with Galera is by no means the only way to achieve database HA. " "MariaDB (https://mariadb.org/) " @@ -649,7 +649,7 @@ msgstr "MySQL と Galera の併用がデータベースを HA 化する唯一の msgid "Run neutron L3 agent" msgstr "Neutron L3 エージェントの実行" -#: ./doc/high-availability-guide/ha_aa_network/section_run_neutron_l3_agent.xml8(simpara) +#: ./doc/high-availability-guide/ha_aa_network/section_run_neutron_l3_agent.xml8(para) msgid "" "The neutron L3 agent is scalable thanks to the scheduler that allows " "distribution of virtual routers across multiple nodes. But there is no " @@ -662,7 +662,7 @@ msgstr "仮想ルーターを複数のノードに渡り配布できるスケジ msgid "Run neutron metadata agent" msgstr "Neutron メタデータエージェントの実行" -#: ./doc/high-availability-guide/ha_aa_network/section_run_neutron_metadata_agent.xml8(simpara) +#: ./doc/high-availability-guide/ha_aa_network/section_run_neutron_metadata_agent.xml8(para) msgid "" "There is no native feature to make this service highly available. 
At this " "time, the Active / Passive solution exists to run the neutron metadata agent" @@ -674,7 +674,7 @@ msgstr "このサービスを高可用性にする独自の機能はありませ msgid "Run neutron DHCP agent" msgstr "Neutron DHCP エージェントの実行" -#: ./doc/high-availability-guide/ha_aa_network/section_run_neutron_dhcp_agent.xml8(simpara) +#: ./doc/high-availability-guide/ha_aa_network/section_run_neutron_dhcp_agent.xml8(para) msgid "" "OpenStack Networking service has a scheduler that lets you run multiple " "agents across nodes. Also, the DHCP agent can be natively highly available. " @@ -687,28 +687,28 @@ msgstr "OpenStack Networking Service はノードをまたがり複数のエー msgid "Highly available Block Storage API" msgstr "高可用性 Block Storage API" -#: ./doc/high-availability-guide/api/section_cinder_api.xml8(simpara) +#: ./doc/high-availability-guide/api/section_cinder_api.xml8(para) msgid "" "Making the Block Storage (cinder) API service highly available in active / " "passive mode involves" msgstr "Block Storage (cinder) API サービスのアクティブ / パッシブモードでの高可用化" -#: ./doc/high-availability-guide/api/section_cinder_api.xml11(simpara) +#: ./doc/high-availability-guide/api/section_cinder_api.xml11(para) msgid "Configure Block Storage to listen on the VIP address," msgstr "Block Storage がその仮想 IP アドレスをリッスンする設定" -#: ./doc/high-availability-guide/api/section_cinder_api.xml16(simpara) +#: ./doc/high-availability-guide/api/section_cinder_api.xml16(para) msgid "managing Block Storage API daemon with the Pacemaker cluster manager," msgstr "Pacemaker クラスターマネージャーを用いた Block Storage API デーモンの管理" -#: ./doc/high-availability-guide/api/section_cinder_api.xml21(simpara) -#: ./doc/high-availability-guide/api/section_glance_api.xml22(simpara) -#: ./doc/high-availability-guide/api/section_neutron_server.xml22(simpara) -#: ./doc/high-availability-guide/api/section_keystone.xml22(simpara) +#: ./doc/high-availability-guide/api/section_cinder_api.xml21(para) +#: ./doc/high-availability-guide/api/section_glance_api.xml22(para) +#: 
./doc/high-availability-guide/api/section_neutron_server.xml22(para) +#: ./doc/high-availability-guide/api/section_keystone.xml22(para) msgid "Configure OpenStack services to use this IP address." msgstr "OpenStack のサービスがこの IP アドレスを使用するよう設定します。" -#: ./doc/high-availability-guide/api/section_cinder_api.xml27(simpara) +#: ./doc/high-availability-guide/api/section_cinder_api.xml27(para) msgid "" "Here is the documentation for " @@ -719,42 +719,42 @@ msgstr "ここに Block Storage のインストールに関するcrm configure, and " "add the following cluster resources:" msgstr "Block Storage API リソース用の Pacemaker 設定を追加できます。crm configure を用いて Pacemaker クラスターに接続し、以下のクラスターリソースを追加します。" -#: ./doc/high-availability-guide/api/section_cinder_api.xml46(simpara) -#: ./doc/high-availability-guide/api/section_glance_api.xml44(simpara) -#: ./doc/high-availability-guide/api/section_ceilometer_agent_central.xml36(simpara) -#: ./doc/high-availability-guide/controller/section_rabbitmq.xml190(simpara) -#: ./doc/high-availability-guide/controller/section_mysql.xml202(simpara) -#: ./doc/high-availability-guide/network/section_highly_available_neutron_metadata_agent.xml29(simpara) -#: ./doc/high-availability-guide/network/section_highly_available_neutron_l3_agent.xml29(simpara) -#: ./doc/high-availability-guide/network/section_highly_available_neutron_dhcp_agent.xml29(simpara) +#: ./doc/high-availability-guide/api/section_cinder_api.xml46(para) +#: ./doc/high-availability-guide/api/section_glance_api.xml44(para) +#: ./doc/high-availability-guide/api/section_ceilometer_agent_central.xml36(para) +#: ./doc/high-availability-guide/controller/section_rabbitmq.xml190(para) +#: ./doc/high-availability-guide/controller/section_mysql.xml202(para) +#: ./doc/high-availability-guide/network/section_highly_available_neutron_metadata_agent.xml25(para) +#: ./doc/high-availability-guide/network/section_highly_available_neutron_l3_agent.xml29(para) +#: 
./doc/high-availability-guide/network/section_highly_available_neutron_dhcp_agent.xml29(para) msgid "This configuration creates" msgstr "この設定により、次のものが作成されます。" -#: ./doc/high-availability-guide/api/section_cinder_api.xml49(simpara) +#: ./doc/high-availability-guide/api/section_cinder_api.xml49(para) msgid "" "p_cinder-api, a resource for manage Block Storage API " "service" msgstr "p_cinder-api, Block Storage API サービスを管理するためのリソース" -#: ./doc/high-availability-guide/api/section_cinder_api.xml53(simpara) +#: ./doc/high-availability-guide/api/section_cinder_api.xml53(para) msgid "" "crm configure supports batch input, so you may copy and " "paste the above into your live pacemaker configuration, and then make " @@ -763,7 +763,7 @@ msgid "" "resource to match your preferred virtual IP address." msgstr "crm configure はバッチ入力をサポートします。そのため、現在の pacemaker 設定の中に上をコピー・ペーストし、適宜変更を反映できます。たとえば、お好みの仮想 IP アドレスに一致させるために、crm configure メニューから edit p_ip_cinder-api と入力し、リソースを編集できます。" -#: ./doc/high-availability-guide/api/section_cinder_api.xml58(simpara) +#: ./doc/high-availability-guide/api/section_cinder_api.xml58(para) msgid "" "Once completed, commit your configuration changes by entering " "commit from the crm configure menu. 
" @@ -775,26 +775,26 @@ msgstr "完了すると、crm configure メニューから /etc/cinder/cinder.conf:" -msgstr "/etc/cinder/cinder.conf を編集します。" +#: ./doc/high-availability-guide/api/section_cinder_api.xml66(para) +msgid "Edit /etc/cinder/cinder.conf:" +msgstr "" #: ./doc/high-availability-guide/api/section_cinder_api.xml79(title) msgid "Configure OpenStack services to use highly available Block Storage API" msgstr "高可用性 Block Storage API を使用するための OpenStack サービスの設定" -#: ./doc/high-availability-guide/api/section_cinder_api.xml81(simpara) +#: ./doc/high-availability-guide/api/section_cinder_api.xml81(para) msgid "" "Your OpenStack services must now point their Block Storage API configuration" " to the highly available, virtual cluster IP address — rather than a Block " "Storage API server’s physical IP address as you normally would." msgstr "OpenStack サービスは、通常どおり Block Storage API サーバーの物理 IP アドレスを指定する代わりに、Block Storage API の設定が高可用性と仮想クラスター IP アドレスを指し示す必要があります。" -#: ./doc/high-availability-guide/api/section_cinder_api.xml84(simpara) +#: ./doc/high-availability-guide/api/section_cinder_api.xml84(para) msgid "You must create the Block Storage API endpoint with this IP." 
msgstr "この IP を用いて Block Storage API エンドポイントを作成する必要があります。" -#: ./doc/high-availability-guide/api/section_cinder_api.xml86(simpara) +#: ./doc/high-availability-guide/api/section_cinder_api.xml86(para) msgid "" "If you are using both private and public IP, you should create two Virtual " "IPs and define your endpoint like this:" @@ -804,7 +804,7 @@ msgstr "プライベート IP とパブリック IP の両方を使用する場 msgid "Configure Pacemaker group" msgstr "Pacemaker グループの設定" -#: ./doc/high-availability-guide/api/section_api_pacemaker.xml8(simpara) +#: ./doc/high-availability-guide/api/section_api_pacemaker.xml8(para) msgid "" "Finally, we need to create a service group to ensure that" " virtual IP is linked to the API services resources:" @@ -814,22 +814,22 @@ msgstr "最終的に、仮想 IP が API サービスリソースとリンクさ msgid "Highly available OpenStack Image API" msgstr "高可用性 OpenStack Image API" -#: ./doc/high-availability-guide/api/section_glance_api.xml8(simpara) +#: ./doc/high-availability-guide/api/section_glance_api.xml8(para) msgid "" "OpenStack Image Service offers a service for discovering, registering, and " "retrieving virtual machine images. To make the OpenStack Image API service " "highly available in active / passive mode, you must:" msgstr "OpenStack Image Service は仮想マシンイメージの検索、登録、取得に関するサービスを提供します。OpenStack Image API サービスをアクティブ / パッシブモードで高可用性にするために、以下を実行する必要があります。" -#: ./doc/high-availability-guide/api/section_glance_api.xml12(simpara) +#: ./doc/high-availability-guide/api/section_glance_api.xml12(para) msgid "Configure OpenStack Image to listen on the VIP address." msgstr "OpenStack Image がその仮想 IP アドレスをリッスンするよう設定します。" -#: ./doc/high-availability-guide/api/section_glance_api.xml17(simpara) +#: ./doc/high-availability-guide/api/section_glance_api.xml17(para) msgid "Manage OpenStack Image API daemon with the Pacemaker cluster manager." 
msgstr "Pacemaker クラスターマネージャーを用いて OpenStack Image API デーモンを管理します。" -#: ./doc/high-availability-guide/api/section_glance_api.xml28(simpara) +#: ./doc/high-availability-guide/api/section_glance_api.xml28(para) msgid "" "Here is the crm " "configure, and add the following cluster resources:" msgstr "OpenStack Image API リソース用の Pacemaker 設定を追加できます。crm configure を用いて Pacemaker クラスターに接続し、以下のクラスターリソースを追加します。" -#: ./doc/high-availability-guide/api/section_glance_api.xml47(simpara) +#: ./doc/high-availability-guide/api/section_glance_api.xml47(para) msgid "" "p_glance-api, a resource for manage OpenStack Image API " "service" msgstr "p_glance-api, OpenStack Image API サービスを管理するためのリソース" -#: ./doc/high-availability-guide/api/section_glance_api.xml51(simpara) +#: ./doc/high-availability-guide/api/section_glance_api.xml51(para) msgid "" "crm configure supports batch input, so you may copy and " "paste the above into your live pacemaker configuration, and then make " @@ -863,7 +863,7 @@ msgid "" "resource to match your preferred virtual IP address." msgstr "crm configure はバッチ入力をサポートします。そのため、現在の pacemaker 設定の中に上をコピー・ペーストし、適宜変更を反映できます。たとえば、お好みの仮想 IP アドレスに一致させるために、crm configure メニューから edit p_ip_glance-api と入力し、リソースを編集できます。" -#: ./doc/high-availability-guide/api/section_glance_api.xml56(simpara) +#: ./doc/high-availability-guide/api/section_glance_api.xml56(para) msgid "" "Once completed, commit your configuration changes by entering " "commit from the crm configure menu. 
" @@ -875,15 +875,15 @@ msgstr "完了すると、crm configure メニューから /etc/glance/glance-api.conf:" -msgstr "/etc/glance/glance-api.conf を編集します。" +#: ./doc/high-availability-guide/api/section_glance_api.xml64(para) +msgid "Edit /etc/glance/glance-api.conf:" +msgstr "" #: ./doc/high-availability-guide/api/section_glance_api.xml80(title) msgid "Configure OpenStack services to use high available OpenStack Image API" msgstr "高可用性 OpenStack Image Service API を使用するための OpenStack サービスの設定" -#: ./doc/high-availability-guide/api/section_glance_api.xml82(simpara) +#: ./doc/high-availability-guide/api/section_glance_api.xml82(para) msgid "" "Your OpenStack services must now point their OpenStack Image API " "configuration to the highly available, virtual cluster IP address — rather " @@ -891,19 +891,19 @@ msgid "" "would." msgstr "OpenStack サービスは、通常どおり OpenStack Image API サーバーの物理 IP アドレスを指定する代わりに、OpenStack Image API の設定が高可用性と仮想クラスター IP アドレスを指し示す必要があります。" -#: ./doc/high-availability-guide/api/section_glance_api.xml85(simpara) +#: ./doc/high-availability-guide/api/section_glance_api.xml85(para) msgid "" "For OpenStack Compute, for example, if your OpenStack Image API service IP " "address is 192.168.42.104 as in the configuration explained here, you would " -"use the following line in your nova.conf file:" -msgstr "OpenStack Compute の例として、OpenStack Image API の IP アドレスがここで説明された設定にあるように 192.168.42.104 ならば、nova.conf ファイルで以下の行を使用する必要があります。" +"use the following line in your nova.conf file:" +msgstr "" -#: ./doc/high-availability-guide/api/section_glance_api.xml89(simpara) +#: ./doc/high-availability-guide/api/section_glance_api.xml89(para) msgid "You must also create the OpenStack Image API endpoint with this IP." 
msgstr "この IP を用いて OpenStack Image API エンドポイントを作成する必要があります。" -#: ./doc/high-availability-guide/api/section_glance_api.xml91(simpara) -#: ./doc/high-availability-guide/api/section_neutron_server.xml85(simpara) +#: ./doc/high-availability-guide/api/section_glance_api.xml91(para) +#: ./doc/high-availability-guide/api/section_neutron_server.xml85(para) msgid "" "If you are using both private and public IP addresses, you should create two" " Virtual IP addresses and define your endpoint like this:" @@ -913,24 +913,24 @@ msgstr "プライベート IP とパブリック IP の両方を使用する場 msgid "Highly available OpenStack Networking server" msgstr "高可用性 OpenStack Networking サーバー" -#: ./doc/high-availability-guide/api/section_neutron_server.xml8(simpara) +#: ./doc/high-availability-guide/api/section_neutron_server.xml8(para) msgid "" "OpenStack Networking is the network connectivity service in OpenStack. " "Making the OpenStack Networking Server service highly available in active / " "passive mode involves" msgstr "OpenStack Networking は OpenStack におけるネットワーク接続性のサービスです。OpenStack Networking サーバーをアクティブ / パッシブモードで高可用性にすることは、次のことが関連します。" -#: ./doc/high-availability-guide/api/section_neutron_server.xml12(simpara) +#: ./doc/high-availability-guide/api/section_neutron_server.xml12(para) msgid "Configure OpenStack Networking to listen on the VIP address," msgstr "OpenStack Networking がその仮想 IP アドレスをリッスンする設定" -#: ./doc/high-availability-guide/api/section_neutron_server.xml17(simpara) +#: ./doc/high-availability-guide/api/section_neutron_server.xml17(para) msgid "" "managing OpenStack Networking API Server daemon with the Pacemaker cluster " "manager," msgstr "Pacemaker クラスターマネージャーを用いた OpenStack Networking API サーバーデーモンの管理" -#: ./doc/high-availability-guide/api/section_neutron_server.xml28(simpara) +#: ./doc/high-availability-guide/api/section_neutron_server.xml28(para) msgid "" "Here is the crm " "configure, and add the following cluster resources:" msgstr "OpenStack Networking Server リソース用の Pacemaker 
設定を追加できます。crm configure を用いて Pacemaker クラスターに接続し、以下のクラスターリソースを追加します。" -#: ./doc/high-availability-guide/api/section_neutron_server.xml45(simpara) +#: ./doc/high-availability-guide/api/section_neutron_server.xml45(para) msgid "" "This configuration creates p_neutron-server, a resource " "for manage OpenStack Networking Server service" msgstr "この設定は OpenStack Networking サーバーサービスを管理するためのリソース p_neutron-server を作成します。" -#: ./doc/high-availability-guide/api/section_neutron_server.xml46(simpara) +#: ./doc/high-availability-guide/api/section_neutron_server.xml46(para) msgid "" "crm configure supports batch input, so you may copy and " "paste the above into your live pacemaker configuration, and then make " @@ -964,7 +964,7 @@ msgid "" " resource to match your preferred virtual IP address." msgstr "crm configure はバッチ入力をサポートします。そのため、現在の pacemaker 設定の中に上をコピー・ペーストし、適宜変更を反映できます。たとえば、お好みの仮想 IP アドレスに一致させるために、crm configure メニューから edit p_neutron-server と入力し、リソースを編集できます。" -#: ./doc/high-availability-guide/api/section_neutron_server.xml51(simpara) +#: ./doc/high-availability-guide/api/section_neutron_server.xml51(para) msgid "" "Once completed, commit your configuration changes by entering " "commit from the crm configure menu. 
" @@ -976,9 +976,9 @@ msgstr "完了すると、crm configure メニューから /etc/neutron/neutron.conf :" -msgstr "/etc/neutron/neutron.conf を編集します。" +#: ./doc/high-availability-guide/api/section_neutron_server.xml59(para) +msgid "Edit /etc/neutron/neutron.conf:" +msgstr "" #: ./doc/high-availability-guide/api/section_neutron_server.xml76(title) msgid "" @@ -986,7 +986,7 @@ msgid "" "server" msgstr "高可用性 OpenStack Networking を使用するための OpenStack サービスの設定" -#: ./doc/high-availability-guide/api/section_neutron_server.xml78(simpara) +#: ./doc/high-availability-guide/api/section_neutron_server.xml78(para) msgid "" "Your OpenStack services must now point their OpenStack Networking Server " "configuration to the highly available, virtual cluster IP address — rather " @@ -994,14 +994,14 @@ msgid "" "would." msgstr "OpenStack サービスは、通常どおり OpenStack Networking サーバーの物理 IP アドレスを指定する代わりに、OpenStack Networking サーバーの設定が高可用性と仮想クラスター IP アドレスを指し示す必要があります。" -#: ./doc/high-availability-guide/api/section_neutron_server.xml81(simpara) +#: ./doc/high-availability-guide/api/section_neutron_server.xml81(para) msgid "" "For example, you should configure OpenStack Compute for using highly " "available OpenStack Networking server in editing " "nova.conf file:" msgstr "たとえば、高可用性 Networking サーバーを使用するために、nova.conf ファイルを編集して OpenStack Compute を設定する必要があります :" -#: ./doc/high-availability-guide/api/section_neutron_server.xml83(simpara) +#: ./doc/high-availability-guide/api/section_neutron_server.xml83(para) msgid "" "You need to create the OpenStack Networking server endpoint with this IP." 
msgstr "この IP を用いて OpenStack Networking Server エンドポイントを作成する必要があります。" @@ -1010,28 +1010,28 @@ msgstr "この IP を用いて OpenStack Networking Server エンドポイント msgid "Highly available Telemetry central agent" msgstr "高可用性 Telemetry 中央エージェント" -#: ./doc/high-availability-guide/api/section_ceilometer_agent_central.xml8(simpara) +#: ./doc/high-availability-guide/api/section_ceilometer_agent_central.xml8(para) msgid "" "Telemetry (ceilometer) is the metering and monitoring service in OpenStack. " "The Central agent polls for resource utilization statistics for resources " "not tied to instances or compute nodes." msgstr "Telemetry (ceilometer) は OpenStack のメータリングとモニタリングのサービスです。中央エージェントは、インスタンスやコンピュートノードに結びつけられていないリソースに対して、リソースの利用状況の統計情報を収集します。" -#: ./doc/high-availability-guide/api/section_ceilometer_agent_central.xml12(simpara) +#: ./doc/high-availability-guide/api/section_ceilometer_agent_central.xml12(para) msgid "" "Due to limitations of a polling model, a single instance of this agent can " "be polling a given list of meters. In this setup, we install this service on" " the API nodes also in the active / passive mode." msgstr "収集モデルの制限により、このエージェントの単一のインスタンスが指定された一覧の利用状況を収集できます。このセットアップの場合、このサービスもアクティブ / パッシブモードで API ノードにインストールします。" -#: ./doc/high-availability-guide/api/section_ceilometer_agent_central.xml16(simpara) +#: ./doc/high-availability-guide/api/section_ceilometer_agent_central.xml16(para) msgid "" "Making the Telemetry central agent service highly available in active / " "passive mode involves managing its daemon with the Pacemaker cluster " "manager." 
msgstr "Telemetry 中央エージェントサービスをアクティブ / パッシブモードで高可用性にすることは、Pacemaker クラスターマネージャーでそのデーモンを管理することが関連します。" -#: ./doc/high-availability-guide/api/section_ceilometer_agent_central.xml19(simpara) +#: ./doc/high-availability-guide/api/section_ceilometer_agent_central.xml19(para) msgid "" "You will find at crm configure, and add the following cluster resources:" msgstr "Telemetry 中央エージェントリソース用の Pacemaker 設定を追加して、次に進むことができます。crm configure を用いて Pacemaker クラスターに接続し、以下のクラスターリソースを追加します。" -#: ./doc/high-availability-guide/api/section_ceilometer_agent_central.xml39(simpara) +#: ./doc/high-availability-guide/api/section_ceilometer_agent_central.xml39(para) msgid "" "p_ceilometer-agent-central, a resource for manage " "Ceilometer Central Agent service" msgstr "p_ceilometer-agent-central, Ceilometer 中央エージェントサービスを管理するためのリソース。" -#: ./doc/high-availability-guide/api/section_ceilometer_agent_central.xml43(simpara) -#: ./doc/high-availability-guide/network/section_highly_available_neutron_metadata_agent.xml37(simpara) -#: ./doc/high-availability-guide/network/section_highly_available_neutron_l3_agent.xml36(simpara) -#: ./doc/high-availability-guide/network/section_highly_available_neutron_dhcp_agent.xml37(simpara) +#: ./doc/high-availability-guide/api/section_ceilometer_agent_central.xml43(para) +#: ./doc/high-availability-guide/network/section_highly_available_neutron_metadata_agent.xml33(para) +#: ./doc/high-availability-guide/network/section_highly_available_neutron_l3_agent.xml36(para) +#: ./doc/high-availability-guide/network/section_highly_available_neutron_dhcp_agent.xml37(para) msgid "" "crm configure supports batch input, so you may copy and " "paste the above into your live pacemaker configuration, and then make " "changes as required." 
msgstr "crm configure はバッチ入力をサポートします。そのため、現在の pacemaker 設定の中に上をコピー・ペーストし、適宜変更を反映できます。" -#: ./doc/high-availability-guide/api/section_ceilometer_agent_central.xml46(simpara) +#: ./doc/high-availability-guide/api/section_ceilometer_agent_central.xml46(para) msgid "" "Once completed, commit your configuration changes by entering " "commit from the crm configure menu. " @@ -1078,30 +1078,30 @@ msgstr "完了すると、crm configure メニューから /etc/ceilometer/ceilometer.conf :" -msgstr "/etc/ceilometer/ceilometer.conf を編集します。" +#: ./doc/high-availability-guide/api/section_ceilometer_agent_central.xml54(para) +msgid "Edit /etc/ceilometer/ceilometer.conf:" +msgstr "" #: ./doc/high-availability-guide/api/section_keystone.xml6(title) msgid "Highly available OpenStack Identity" msgstr "高可用性 OpenStack Identity" -#: ./doc/high-availability-guide/api/section_keystone.xml8(simpara) +#: ./doc/high-availability-guide/api/section_keystone.xml8(para) msgid "" "OpenStack Identity is the Identity Service in OpenStack and used by many " "services. 
Making the OpenStack Identity service highly available in active /" " passive mode involves" msgstr "OpenStack Identity は OpenStack における認証サービスです。OpenStack Identity Service をアクティブ / パッシブモードで高可用性にすることは、次のことが関連します。" -#: ./doc/high-availability-guide/api/section_keystone.xml12(simpara) +#: ./doc/high-availability-guide/api/section_keystone.xml12(para) msgid "Configure OpenStack Identity to listen on the VIP address," msgstr "OpenStack Identity がその仮想 IP アドレスでリッスンするよう設定します。" -#: ./doc/high-availability-guide/api/section_keystone.xml17(simpara) +#: ./doc/high-availability-guide/api/section_keystone.xml17(para) msgid "managing OpenStack Identity daemon with the Pacemaker cluster manager," msgstr "Pacemaker クラスターマネージャーを用いた OpenStack Identity デーモンの管理" -#: ./doc/high-availability-guide/api/section_keystone.xml28(simpara) +#: ./doc/high-availability-guide/api/section_keystone.xml28(para) msgid "" "Here is the crm configure, and" " add the following cluster resources:" msgstr "OpenStack Identity リソース用の Pacemaker 設定を追加できます。crm configure を用いて Pacemaker クラスターに接続し、以下のクラスターリソースを追加します。" -#: ./doc/high-availability-guide/api/section_keystone.xml46(simpara) +#: ./doc/high-availability-guide/api/section_keystone.xml46(para) msgid "" "This configuration creates p_keystone, a resource for " "managing the OpenStack Identity service." msgstr "この設定は OpenStack Identity サービスを管理するためのリソース p_keystone を作成します。" -#: ./doc/high-availability-guide/api/section_keystone.xml47(simpara) +#: ./doc/high-availability-guide/api/section_keystone.xml47(para) msgid "" "crm configure supports batch input, so you may copy and " "paste the above into your live pacemaker configuration, and then make " @@ -1135,7 +1135,7 @@ msgid "" "edit the resource to match your preferred virtual IP address." 
msgstr "crm configure はバッチ入力をサポートします。そのため、現在の pacemaker 設定の中に上をコピー・ペーストし、適宜変更を反映できます。たとえば、お好みの仮想 IP アドレスに一致させるために、crm configure メニューから edit p_ip_keystone と入力し、リソースを編集できます。" -#: ./doc/high-availability-guide/api/section_keystone.xml52(simpara) +#: ./doc/high-availability-guide/api/section_keystone.xml52(para) msgid "" "Once completed, commit your configuration changes by entering " "commit from the crm configure menu. " @@ -1147,23 +1147,23 @@ msgstr "完了すると、crm configure メニューから keystone.conf) and change the bind parameters:" -msgstr "OpenStack Identity の設定ファイル (keystone.conf) を編集し、バインドのパラメーターを変更する必要があります。" +"(keystone.conf) and change the bind parameters:" +msgstr "" -#: ./doc/high-availability-guide/api/section_keystone.xml61(simpara) +#: ./doc/high-availability-guide/api/section_keystone.xml61(para) msgid "On Havana:" msgstr "Havana の場合:" -#: ./doc/high-availability-guide/api/section_keystone.xml63(simpara) +#: ./doc/high-availability-guide/api/section_keystone.xml63(para) msgid "" "On Icehouse, the admin_bind_host option lets you use a " "private network for the admin access." msgstr "Icehouse の場合、admin_bind_host オプションにより管理アクセス用のプライベートネットワークを使用できます。" -#: ./doc/high-availability-guide/api/section_keystone.xml66(simpara) +#: ./doc/high-availability-guide/api/section_keystone.xml66(para) msgid "" "To be sure all data will be highly available, you should be sure that you " "store everything in the MySQL database (which is also highly available):" @@ -1174,7 +1174,7 @@ msgid "" "Configure OpenStack services to use the highly available OpenStack Identity" msgstr "高可用性 OpenStack Identity を使用するための OpenStack サービスの設定" -#: ./doc/high-availability-guide/api/section_keystone.xml78(simpara) +#: ./doc/high-availability-guide/api/section_keystone.xml78(para) msgid "" "Your OpenStack services must now point their OpenStack Identity " "configuration to the highly available, virtual cluster IP address — rather " @@ -1182,7 +1182,7 @@ msgid "" "would." 
msgstr "OpenStack サービスは、通常どおり OpenStack Identity サーバーの物理 IP アドレスを指定する代わりに、OpenStack Identity サーバーの設定が高可用性と仮想クラスター IP アドレスを指し示す必要があります。" -#: ./doc/high-availability-guide/api/section_keystone.xml81(simpara) +#: ./doc/high-availability-guide/api/section_keystone.xml81(para) msgid "" "For example with OpenStack Compute, if your OpenStack Identity service IP " "address is 192.168.42.103 as in the configuration explained here, you would " @@ -1190,17 +1190,17 @@ msgid "" "paste.ini):" msgstr "OpenStack Compute を用いた例として、OpenStack Identity Service の IP アドレスがここで説明された設定にあるように 192.168.42.103 ならば、API 設定ファイル (api-paste.ini) で以下の行を使用する必要があります。" -#: ./doc/high-availability-guide/api/section_keystone.xml86(simpara) +#: ./doc/high-availability-guide/api/section_keystone.xml86(para) msgid "You also need to create the OpenStack Identity Endpoint with this IP." msgstr "この IP を用いて OpenStack Identity エンドポイントを作成する必要があります。" -#: ./doc/high-availability-guide/api/section_keystone.xml87(simpara) +#: ./doc/high-availability-guide/api/section_keystone.xml87(para) msgid "" -"NOTE : If you are using both private and public IP addresses, you should " +"NOTE: If you are using both private and public IP addresses, you should " "create two Virtual IP addresses and define your endpoint like this:" -msgstr "注: プライベート IP とパブリック IP の両方を使用する場合、2 つの仮想 IP アドレスを作成し、次のようにエンドポイントを定義すべきです。" +msgstr "" -#: ./doc/high-availability-guide/api/section_keystone.xml89(simpara) +#: ./doc/high-availability-guide/api/section_keystone.xml92(para) msgid "" "If you are using the horizon dashboard, you should edit the " "local_settings.py file:" @@ -1210,55 +1210,55 @@ msgstr "Dashboard を使用している場合、local_settings.pyp_ip_api, a virtual IP address" -" for use by the API node (192.168.42.103) :" -msgstr "この設定は API ノードにより使用される仮想 IP アドレス (192.168.42.103) p_ip_api を作成します。" +" for use by the API node (192.168.42.103):" +msgstr "" #: ./doc/high-availability-guide/controller/section_rabbitmq.xml11(title) msgid "Highly available 
RabbitMQ" msgstr "高可用性 RabbitMQ" -#: ./doc/high-availability-guide/controller/section_rabbitmq.xml13(simpara) +#: ./doc/high-availability-guide/controller/section_rabbitmq.xml13(para) msgid "" "RabbitMQ is the default AMQP server used by many OpenStack services. Making " "the RabbitMQ service highly available involves:" msgstr "RabbitMQ が多くの OpenStack サービスにより使用される標準の AMQP サーバーです。RabbitMQ サービスを高可用性にすることは、次のことが関連します。" -#: ./doc/high-availability-guide/controller/section_rabbitmq.xml17(simpara) +#: ./doc/high-availability-guide/controller/section_rabbitmq.xml17(para) msgid "configuring a DRBD device for use by RabbitMQ," msgstr "RabbitMQ により使用するための DRBD デバイスを設定します。" -#: ./doc/high-availability-guide/controller/section_rabbitmq.xml22(simpara) +#: ./doc/high-availability-guide/controller/section_rabbitmq.xml22(para) msgid "" "configuring RabbitMQ to use a data directory residing on that DRBD device," msgstr "RabbitMQ が DRBD デバイスにあるデータディレクトリを使用するよう設定します。" -#: ./doc/high-availability-guide/controller/section_rabbitmq.xml28(simpara) -#: ./doc/high-availability-guide/controller/section_mysql.xml28(simpara) +#: ./doc/high-availability-guide/controller/section_rabbitmq.xml28(para) +#: ./doc/high-availability-guide/controller/section_mysql.xml28(para) msgid "" "selecting and assigning a virtual IP address (VIP) that can freely float " "between cluster nodes," msgstr "クラスターノード間で自由に移動できる仮想 IP アドレス (VIP) を選択して割り当てます。" -#: ./doc/high-availability-guide/controller/section_rabbitmq.xml34(simpara) +#: ./doc/high-availability-guide/controller/section_rabbitmq.xml34(para) msgid "configuring RabbitMQ to listen on that IP address," msgstr "RabbitMQ がその IP アドレスでリッスンするよう設定します。" -#: ./doc/high-availability-guide/controller/section_rabbitmq.xml39(simpara) +#: ./doc/high-availability-guide/controller/section_rabbitmq.xml39(para) msgid "" "managing all resources, including the RabbitMQ daemon itself, with the " "Pacemaker cluster manager." 
msgstr "RabbitMQ デーモン自身を含む、すべてのリソースを Pacemaker クラスターマネージャーで管理します。" -#: ./doc/high-availability-guide/controller/section_rabbitmq.xml46(simpara) +#: ./doc/high-availability-guide/controller/section_rabbitmq.xml46(para) msgid "" "There is an alternative method of configuring RabbitMQ for high " "availability. That approach, known as /var/lib/rabbitmq directory. In this example, " @@ -1286,10 +1286,10 @@ msgstr "Pacemaker ベースの RabbitMQ サーバーは /var/lib/rabbit #: ./doc/high-availability-guide/controller/section_rabbitmq.xml65(title) msgid "" "rabbitmq DRBD resource configuration " -"(/etc/drbd.d/rabbitmq.res)" -msgstr "rabbitmq DRBD リソース設定 (/etc/drbd.d/rabbitmq.res)" +"(/etc/drbd.d/rabbitmq.res)" +msgstr "" -#: ./doc/high-availability-guide/controller/section_rabbitmq.xml81(simpara) +#: ./doc/high-availability-guide/controller/section_rabbitmq.xml81(para) msgid "" "This resource uses an underlying local disk (in DRBD terminology, a " "backing device) named " @@ -1304,8 +1304,8 @@ msgid "" "is, /dev/drbd1." 
msgstr "このリソースは、両方のクラスターノード node1node2 において、/dev/data/rabbitmq という名前のバックエンドのローカルディスク (DRBD の用語で backing device) を使用します。通常、これはこの目的のために特別に設定された LVM 論理ボリュームでしょう。DRBD meta-diskinternal です。これは DRBD 固有のメタデータが disk デバイス自身の最後に保存されることを意味します。このデバイスは IPv4 アドレス 10.0.42.100 と 10.0.42.254 の間で TCP ポート 7701 を使用して通信するよう設定されます。一度有効化されると、デバイスマイナー番号 1 を持つローカル DRBD ブロックデバイス、つまり /dev/drbd1 にマップされます。" -#: ./doc/high-availability-guide/controller/section_rabbitmq.xml90(simpara) -#: ./doc/high-availability-guide/controller/section_mysql.xml87(simpara) +#: ./doc/high-availability-guide/controller/section_rabbitmq.xml90(para) +#: ./doc/high-availability-guide/controller/section_mysql.xml87(para) msgid "" "Enabling a DRBD resource is explained in detail in the DRBD " @@ -1342,7 +1342,7 @@ msgstr "初期デバイス同期を開始し、デバイスを プラ msgid "Create a file system" msgstr "ファイルシステムの作成" -#: ./doc/high-availability-guide/controller/section_rabbitmq.xml127(simpara) +#: ./doc/high-availability-guide/controller/section_rabbitmq.xml127(para) msgid "" "Once the DRBD resource is running and in the primary role (and potentially " "still in the process of running the initial device synchronization), you may" @@ -1350,15 +1350,15 @@ msgid "" "the recommended filesystem:" msgstr "DRBD リソースが実行中になり、プライマリロールになると (まだ初期デバイス同期が実行中かもしれません)、RabbitMQ データのファイルシステムの作成を進められます。XFS が一般的に推奨されるファイルシステムです。" -#: ./doc/high-availability-guide/controller/section_rabbitmq.xml132(simpara) -#: ./doc/high-availability-guide/controller/section_mysql.xml129(simpara) +#: ./doc/high-availability-guide/controller/section_rabbitmq.xml132(para) +#: ./doc/high-availability-guide/controller/section_mysql.xml129(para) msgid "" "You may also use the alternate device path for the DRBD device, which may be" " easier to remember as it includes the self-explanatory resource name:" msgstr "DRBD デバイスに対する代替デバイスパスを使用することもできます。これは自己説明的なリソース名を含むため、より覚えやすいでしょう。" -#: ./doc/high-availability-guide/controller/section_rabbitmq.xml136(simpara) -#: 
./doc/high-availability-guide/controller/section_mysql.xml133(simpara) +#: ./doc/high-availability-guide/controller/section_rabbitmq.xml136(para) +#: ./doc/high-availability-guide/controller/section_mysql.xml133(para) msgid "" "Once completed, you may safely return the device to the secondary role. Any " "ongoing device synchronization will continue in the background:" @@ -1368,7 +1368,7 @@ msgstr "一度完了すると、デバイスを安全にセカンダリロール msgid "Prepare RabbitMQ for Pacemaker high availability" msgstr "Pacemaker 高可用性のための RabbitMQ の準備" -#: ./doc/high-availability-guide/controller/section_rabbitmq.xml145(simpara) +#: ./doc/high-availability-guide/controller/section_rabbitmq.xml145(para) msgid "" "In order for Pacemaker monitoring to function properly, you must ensure that" " RabbitMQ’s .erlang.cookie files are identical on all " @@ -1382,41 +1382,41 @@ msgstr "Pacemaker の監視を正しく機能させるために、DRBD がマウ msgid "Add RabbitMQ resources to Pacemaker" msgstr "RabbitMQ リソースの Pacemaker への追加" -#: ./doc/high-availability-guide/controller/section_rabbitmq.xml160(simpara) +#: ./doc/high-availability-guide/controller/section_rabbitmq.xml160(para) msgid "" "You may now proceed with adding the Pacemaker configuration for RabbitMQ " "resources. 
Connect to the Pacemaker cluster with crm " "configure, and add the following cluster resources:" msgstr "OpenStack RabbitMQ リソース用の Pacemaker 設定を追加して、次に進むことができます。crm configure を用いて Pacemaker クラスターに接続し、以下のクラスターリソースを追加します。" -#: ./doc/high-availability-guide/controller/section_rabbitmq.xml193(simpara) +#: ./doc/high-availability-guide/controller/section_rabbitmq.xml193(para) msgid "" "p_ip_rabbitmq, a virtual IP address for use by RabbitMQ " "(192.168.42.100)," msgstr "p_ip_rabbitmq, RabbitMQ により使用される仮想 IP アドレス (192.168.42.100)," -#: ./doc/high-availability-guide/controller/section_rabbitmq.xml198(simpara) +#: ./doc/high-availability-guide/controller/section_rabbitmq.xml198(para) msgid "" "p_fs_rabbitmq, a Pacemaker managed filesystem mounted to " "/var/lib/rabbitmq on whatever node currently runs the " "RabbitMQ service," msgstr "p_fs_rabbitmq, 現在 RabbitMQ サービスを実行している、すべてのノードにおいて /var/lib/rabbitmq にマウントされている、Pacemaker が管理しているファイルシステム。" -#: ./doc/high-availability-guide/controller/section_rabbitmq.xml204(simpara) +#: ./doc/high-availability-guide/controller/section_rabbitmq.xml204(para) msgid "" "ms_drbd_rabbitmq, the master/slave " "set managing the rabbitmq DRBD resource," msgstr "ms_drbd_rabbitmq, rabbitmq DRBD リソースを管理しているマスター/スレーブの組。" -#: ./doc/high-availability-guide/controller/section_rabbitmq.xml209(simpara) -#: ./doc/high-availability-guide/controller/section_mysql.xml221(simpara) +#: ./doc/high-availability-guide/controller/section_rabbitmq.xml209(para) +#: ./doc/high-availability-guide/controller/section_mysql.xml221(para) msgid "" "a service group and order and " "colocation constraints to ensure resources are started on" " the correct nodes, and in the correct sequence." 
msgstr "リソースが適切なノードにおいて、適切な順序で起動されることを確実にする、サービスの group, order および colocation 制約。" -#: ./doc/high-availability-guide/controller/section_rabbitmq.xml215(simpara) +#: ./doc/high-availability-guide/controller/section_rabbitmq.xml215(para) msgid "" "crm configure supports batch input, so you may copy and " "paste the above into your live pacemaker configuration, and then make " @@ -1425,7 +1425,7 @@ msgid "" "edit the resource to match your preferred virtual IP address." msgstr "crm configure はバッチ入力をサポートします。そのため、現在の pacemaker 設定の中に上をコピー・ペーストし、適宜変更を反映できます。たとえば、お好みの仮想 IP アドレスに一致させるために、crm configure メニューから edit p_ip_rabbitmq と入力し、リソースを編集できます。" -#: ./doc/high-availability-guide/controller/section_rabbitmq.xml220(simpara) +#: ./doc/high-availability-guide/controller/section_rabbitmq.xml220(para) msgid "" "Once completed, commit your configuration changes by entering " "commit from the crm configure menu. " @@ -1437,21 +1437,21 @@ msgstr "完了すると、crm configure メニューから glance-api.conf):" -msgstr "OpenStack Image の例として、RabbitMQ サーバーの IP アドレスがここで説明された設定にあるように 192.168.42.100 ならば、OpenStack Image API 定ファイル (glance-api.conf) で以下の行を使用する必要があります。" +"following line in your OpenStack Image API configuration file (glance-api.conf):" +msgstr "" -#: ./doc/high-availability-guide/controller/section_rabbitmq.xml236(simpara) +#: ./doc/high-availability-guide/controller/section_rabbitmq.xml236(para) msgid "" "No other changes are necessary to your OpenStack configuration. If the node " "currently hosting your RabbitMQ experiences a problem necessitating service " @@ -1464,31 +1464,31 @@ msgstr "OpenStack の設定に他の変更は必要ありません。現在 Rabb msgid "Highly available MySQL" msgstr "高可用性 MySQL" -#: ./doc/high-availability-guide/controller/section_mysql.xml13(simpara) +#: ./doc/high-availability-guide/controller/section_mysql.xml13(para) msgid "" "MySQL is the default database server used by many OpenStack services. 
Making" " the MySQL service highly available involves" msgstr "MySQL は多くの OpenStack サービスにより使用される標準のデータベースサーバーです。MySQL サービスを高可用性にするには、次のことが関係します。" -#: ./doc/high-availability-guide/controller/section_mysql.xml17(simpara) +#: ./doc/high-availability-guide/controller/section_mysql.xml17(para) msgid "Configure a DRBD device for use by MySQL," msgstr "MySQL により使用するための DRBD デバイスの設定," -#: ./doc/high-availability-guide/controller/section_mysql.xml22(simpara) +#: ./doc/high-availability-guide/controller/section_mysql.xml22(para) msgid "Configure MySQL to use a data directory residing on that DRBD device," msgstr "MySQL がこの DRBD デバイスにあるデータディレクトリを使用するよう設定します。" -#: ./doc/high-availability-guide/controller/section_mysql.xml34(simpara) +#: ./doc/high-availability-guide/controller/section_mysql.xml34(para) msgid "Configure MySQL to listen on that IP address," msgstr "その IP アドレスをリッスンするよう MySQL を設定します。" -#: ./doc/high-availability-guide/controller/section_mysql.xml39(simpara) +#: ./doc/high-availability-guide/controller/section_mysql.xml39(para) msgid "" "managing all resources, including the MySQL daemon itself, with the " "Pacemaker cluster manager." msgstr "Pacemaker クラスターマネージャーを用いて、MySQL デーモン自身を含め、すべてのリソースを管理します。" -#: ./doc/high-availability-guide/controller/section_mysql.xml46(simpara) +#: ./doc/high-availability-guide/controller/section_mysql.xml46(para) msgid "" "MySQL/Galera is " @@ -1499,7 +1499,7 @@ msgid "" "environments." msgstr "MySQL/Galera は MySQL を高可用性に設定するもう一つの方法です。十分に成熟すれば、MySQL の高可用性を達成する好ましい方法になるでしょう。しかしながら、執筆時点では、Pacemaker/DRBD による方法が OpenStack 環境に対する推奨のものです。" -#: ./doc/high-availability-guide/controller/section_mysql.xml57(simpara) +#: ./doc/high-availability-guide/controller/section_mysql.xml57(para) msgid "" "The Pacemaker based MySQL server requires a DRBD resource from which it " "mounts the /var/lib/mysql directory. 
In this example, the" @@ -1509,10 +1509,10 @@ msgstr "Pacemaker ベースの MySQL サーバーは /var/lib/mysqlmysql DRBD resource configuration " -"(/etc/drbd.d/mysql.res)" -msgstr "mysql DRBD リソース設定 (/etc/drbd.d/mysql.res)" +"(/etc/drbd.d/mysql.res)" +msgstr "" -#: ./doc/high-availability-guide/controller/section_mysql.xml78(simpara) +#: ./doc/high-availability-guide/controller/section_mysql.xml78(para) msgid "" "This resource uses an underlying local disk (in DRBD terminology, a " "backing device) named " @@ -1524,8 +1524,8 @@ msgid "" "device itself. The device is configured to communicate between IPv4 " "addresses 10.0.42.100 and 10.0.42.254, using TCP port 7700. Once enabled, it" " will map to a local DRBD block device with the device minor number 0, that " -"is, /dev/drbd0." -msgstr "このリソースは、両方のクラスターノード node1node2 において、/dev/data/mysql という名前のバックエンドのローカルディスク (DRBD の用語で backing device) を使用します。通常、これはこの目的のために特別に設定された LVM 論理ボリュームでしょう。DRBD meta-diskinternal です。これは DRBD 固有のメタデータが disk デバイス自身の最後に保存されることを意味します。このデバイスは IPv4 アドレス 10.0.42.100 と 10.0.42.254 の間で TCP ポート 7700 を使用して通信するよう設定されます。一度有効化されると、デバイスマイナー番号 0 を持つローカル DRBD ブロックデバイス、つまり /dev/drbd0 にマップされます。" +"is, /dev/drbd0." 
+msgstr "" #: ./doc/high-availability-guide/controller/section_mysql.xml95(para) msgid "" @@ -1545,7 +1545,7 @@ msgstr "/dev/drbd0 デバイスノードを作成し、DRBD msgid "Creating a file system" msgstr "ファイルシステムの作成" -#: ./doc/high-availability-guide/controller/section_mysql.xml124(simpara) +#: ./doc/high-availability-guide/controller/section_mysql.xml124(para) msgid "" "Once the DRBD resource is running and in the primary role (and potentially " "still in the process of running the initial device synchronization), you may" @@ -1557,7 +1557,7 @@ msgstr "DRBD リソースが実行中になり、プライマリロールにな msgid "Prepare MySQL for Pacemaker high availability" msgstr "Pacemaker 高可用性のための MySQL の準備" -#: ./doc/high-availability-guide/controller/section_mysql.xml142(simpara) +#: ./doc/high-availability-guide/controller/section_mysql.xml142(para) msgid "" "In order for Pacemaker monitoring to function properly, you must ensure that" " MySQL’s database files reside on the DRBD device. If you already have an " @@ -1566,19 +1566,19 @@ msgid "" "created filesystem on the DRBD device." msgstr "Pacemaker の監視を正しく機能させるために、MySQL のデータベースファイルを必ず DRBD デバイスに置く必要があります。既存の MySQL データベースがあれば、最も簡単な方法は、既存の /var/lib/mysql ディレクトリの中身を、DRBD デバイスに新しく作成したファイルシステムにそのまま移動することです。" -#: ./doc/high-availability-guide/controller/section_mysql.xml148(simpara) +#: ./doc/high-availability-guide/controller/section_mysql.xml148(para) msgid "" "You must complete the next step while the MySQL database server is shut " "down." 
msgstr "MySQL データベースサーバーがシャットダウンしている間に、次の手順を完了する必要があります。" -#: ./doc/high-availability-guide/controller/section_mysql.xml154(simpara) +#: ./doc/high-availability-guide/controller/section_mysql.xml154(para) msgid "" "For a new MySQL installation with no existing data, you may also run the " "mysql_install_db command:" msgstr "既存のデータが無い新しい MySQL インストール環境の場合、mysql_install_db コマンドを実行する必要があるかもしれません。" -#: ./doc/high-availability-guide/controller/section_mysql.xml159(simpara) +#: ./doc/high-availability-guide/controller/section_mysql.xml159(para) msgid "" "Regardless of the approach, the steps outlined here must be completed on " "only one cluster node." @@ -1588,33 +1588,33 @@ msgstr "その方法に関わらず、ここに概要が示された手順は一 msgid "Add MySQL resources to Pacemaker" msgstr "MySQL リソースの Pacemaker への追加" -#: ./doc/high-availability-guide/controller/section_mysql.xml166(simpara) +#: ./doc/high-availability-guide/controller/section_mysql.xml166(para) msgid "" "You can now add the Pacemaker configuration for MySQL resources. 
Connect to " "the Pacemaker cluster with crm configure, and add the " "following cluster resources:" msgstr "これで MySQL リソース用の Pacemaker 設定を追加できます。crm configure を用いて Pacemaker クラスターに接続し、以下のクラスターリソースを追加します。" -#: ./doc/high-availability-guide/controller/section_mysql.xml205(simpara) +#: ./doc/high-availability-guide/controller/section_mysql.xml205(para) msgid "" "p_ip_mysql, a virtual IP address for use by MySQL " "(192.168.42.101)," msgstr "p_ip_mysql, MySQL により使用される仮想 IP アドレス (192.168.42.101)," -#: ./doc/high-availability-guide/controller/section_mysql.xml210(simpara) +#: ./doc/high-availability-guide/controller/section_mysql.xml210(para) msgid "" "p_fs_mysql, a Pacemaker managed filesystem mounted to " "/var/lib/mysql on whatever node currently runs the MySQL " "service," msgstr "p_fs_mysql, 現在 MySQL サービスを実行している、すべてのノードにおいて /var/lib/mysql にマウントされている、Pacemaker が管理しているファイルシステム。" -#: ./doc/high-availability-guide/controller/section_mysql.xml216(simpara) +#: ./doc/high-availability-guide/controller/section_mysql.xml216(para) msgid "" "ms_drbd_mysql, the master/slave set " "managing the mysql DRBD resource," msgstr "ms_drbd_mysql, mysql DRBD リソースを管理しているマスター/スレーブの組。" -#: ./doc/high-availability-guide/controller/section_mysql.xml227(simpara) +#: ./doc/high-availability-guide/controller/section_mysql.xml227(para) msgid "" "crm configure supports batch input, so you may copy and " "paste the above into your live pacemaker configuration, and then make " @@ -1623,7 +1623,7 @@ msgid "" " the resource to match your preferred virtual IP address." msgstr "crm configure はバッチ入力をサポートします。そのため、現在の pacemaker 設定の中に上をコピー・ペーストし、適宜変更を反映できます。たとえば、お好みの仮想 IP アドレスに一致させるために、crm configure メニューから edit p_ip_mysql と入力し、リソースを編集できます。" -#: ./doc/high-availability-guide/controller/section_mysql.xml232(simpara) +#: ./doc/high-availability-guide/controller/section_mysql.xml232(para) msgid "" "Once completed, commit your configuration changes by entering " "commit from the crm configure menu. 
" @@ -1635,21 +1635,21 @@ msgstr "完了すると、crm configure メニューから glance-registry.conf):" -msgstr "OpenStack Image の例として、MySQL サーバーの IP アドレスがここで説明された設定にあるように 192.168.42.101 ならば、OpenStack Image レジストリ設定ファイル (glance-registry.conf) で以下の行を使用する必要があります。" +"following line in your OpenStack Image registry configuration file " +"(glance-registry.conf):" +msgstr "" -#: ./doc/high-availability-guide/controller/section_mysql.xml248(simpara) +#: ./doc/high-availability-guide/controller/section_mysql.xml248(para) msgid "" "No other changes are necessary to your OpenStack configuration. If the node " "currently hosting your database experiences a problem necessitating service " @@ -1662,31 +1662,31 @@ msgstr "OpenStack の設定に他の変更は必要ありません。現在デ msgid "Memcached" msgstr "Memcached" -#: ./doc/high-availability-guide/ha_aa_controllers/section_memcached.xml8(simpara) +#: ./doc/high-availability-guide/ha_aa_controllers/section_memcached.xml8(para) msgid "" "Most of OpenStack services use an application to offer persistence and store" " ephemeral data (like tokens). Memcached is one of them and can scale-out " "easily without specific trick." msgstr "ほとんどの OpenStack サービスは、永続性を提供し、(トークンのような) 一時的なデータを保存するために、アプリケーションを使用します。memcached は、それらの一つで、特別な工夫をすることなく簡単にスケールアウト可能にできます。" -#: ./doc/high-availability-guide/ha_aa_controllers/section_memcached.xml10(simpara) +#: ./doc/high-availability-guide/ha_aa_controllers/section_memcached.xml10(para) msgid "" "To install and configure it, read the official " "documentation." msgstr "インストールおよび設定するために、公式ドキュメントを参照します。" -#: ./doc/high-availability-guide/ha_aa_controllers/section_memcached.xml11(simpara) +#: ./doc/high-availability-guide/ha_aa_controllers/section_memcached.xml11(para) msgid "" "Memory caching is managed by oslo-incubator so the way to use multiple " "memcached servers is the same for all projects." 
msgstr "メモリキャッシュは oslo-incubator により管理されているため、複数の memcached サーバーを使用する方法はすべてのプロジェクトで同じです。" -#: ./doc/high-availability-guide/ha_aa_controllers/section_memcached.xml12(simpara) +#: ./doc/high-availability-guide/ha_aa_controllers/section_memcached.xml12(para) msgid "Example with two hosts:" msgstr "2 つのホストを用いた例:" -#: ./doc/high-availability-guide/ha_aa_controllers/section_memcached.xml14(simpara) +#: ./doc/high-availability-guide/ha_aa_controllers/section_memcached.xml14(para) msgid "" "By default, controller1 handles the caching service but if the host goes " "down, controller2 does the job. For more information about memcached " @@ -1701,7 +1701,7 @@ msgstr "OpenStack API & スケジューラーの実行" msgid "API services" msgstr "API サービス" -#: ./doc/high-availability-guide/ha_aa_controllers/section_run_openstack_api_and_schedulers.xml12(simpara) +#: ./doc/high-availability-guide/ha_aa_controllers/section_run_openstack_api_and_schedulers.xml12(para) msgid "" "All OpenStack projects have an API service for controlling all the resources" " in the Cloud. In Active / Active mode, the most common setup is to scale-" @@ -1713,17 +1713,17 @@ msgstr "すべての OpenStack プロジェクトは、クラウドにあるす msgid "Configure API OpenStack services" msgstr "OpenStack サービスの API の設定" -#: ./doc/high-availability-guide/ha_aa_controllers/section_run_openstack_api_and_schedulers.xml18(simpara) +#: ./doc/high-availability-guide/ha_aa_controllers/section_run_openstack_api_and_schedulers.xml18(para) msgid "" "To configure our Cloud using Highly available and scalable API services, we " "need to ensure that:" msgstr "私たちのクラウド環境が高可用性かつスケーラブルな API サービスを使用するよう設定するために、以下の事項を確認する必要があります。" -#: ./doc/high-availability-guide/ha_aa_controllers/section_run_openstack_api_and_schedulers.xml21(simpara) +#: ./doc/high-availability-guide/ha_aa_controllers/section_run_openstack_api_and_schedulers.xml21(para) msgid "You use Virtual IP when configuring OpenStack Identity endpoints."
msgstr "OpenStack Identity エンドポイントを設定するときに仮想 IP を使用します。" -#: ./doc/high-availability-guide/ha_aa_controllers/section_run_openstack_api_and_schedulers.xml26(simpara) +#: ./doc/high-availability-guide/ha_aa_controllers/section_run_openstack_api_and_schedulers.xml26(para) msgid "All OpenStack configuration files should refer to Virtual IP." msgstr "すべての OpenStack 設定ファイルは仮想 IP を参照すべきです。" @@ -1731,7 +1731,7 @@ msgstr "すべての OpenStack 設定ファイルは仮想 IP を参照すべき msgid "In case of failure" msgstr "失敗の場合" -#: ./doc/high-availability-guide/ha_aa_controllers/section_run_openstack_api_and_schedulers.xml34(simpara) +#: ./doc/high-availability-guide/ha_aa_controllers/section_run_openstack_api_and_schedulers.xml34(para) msgid "" "The monitor check is quite simple since it just establishes a TCP connection" " to the API port. Comparing to the Active / Passive mode using Corosync " @@ -1745,97 +1745,97 @@ msgstr "" msgid "Schedulers" msgstr "スケジューラー" -#: ./doc/high-availability-guide/ha_aa_controllers/section_run_openstack_api_and_schedulers.xml43(simpara) +#: ./doc/high-availability-guide/ha_aa_controllers/section_run_openstack_api_and_schedulers.xml43(para) msgid "" "OpenStack schedulers are used to determine how to dispatch compute, network " "and volume requests. The most common setup is to use RabbitMQ as messaging " "system already documented in this guide. 
Those services are connected to the" -" messaging backend and can scale-out :" -msgstr "OpenStack スケジューラーは、コンピュート、ネットワーク、ボリュームのリクエストをどのようにディスパッチするかを決めるために使用されます。最も一般的なセットアップ環境は、このガイドにドキュメント化されたメッセージングシステムとして RabbitMQ を使用することです。これらのサービスはメッセージングのバックエンドに接続され、スケールアウトできます。" +" messaging backend and can scale-out:" +msgstr "" -#: ./doc/high-availability-guide/ha_aa_controllers/section_run_openstack_api_and_schedulers.xml48(simpara) +#: ./doc/high-availability-guide/ha_aa_controllers/section_run_openstack_api_and_schedulers.xml48(para) msgid "nova-scheduler" msgstr "nova-scheduler" -#: ./doc/high-availability-guide/ha_aa_controllers/section_run_openstack_api_and_schedulers.xml53(simpara) +#: ./doc/high-availability-guide/ha_aa_controllers/section_run_openstack_api_and_schedulers.xml53(para) msgid "nova-conductor" msgstr "nova-conductor" -#: ./doc/high-availability-guide/ha_aa_controllers/section_run_openstack_api_and_schedulers.xml58(simpara) +#: ./doc/high-availability-guide/ha_aa_controllers/section_run_openstack_api_and_schedulers.xml58(para) msgid "cinder-scheduler" msgstr "cinder-scheduler" -#: ./doc/high-availability-guide/ha_aa_controllers/section_run_openstack_api_and_schedulers.xml63(simpara) +#: ./doc/high-availability-guide/ha_aa_controllers/section_run_openstack_api_and_schedulers.xml63(para) msgid "neutron-server" msgstr "neutron-server" -#: ./doc/high-availability-guide/ha_aa_controllers/section_run_openstack_api_and_schedulers.xml68(simpara) +#: ./doc/high-availability-guide/ha_aa_controllers/section_run_openstack_api_and_schedulers.xml68(para) msgid "ceilometer-collector" msgstr "ceilometer-collector" -#: ./doc/high-availability-guide/ha_aa_controllers/section_run_openstack_api_and_schedulers.xml73(simpara) +#: ./doc/high-availability-guide/ha_aa_controllers/section_run_openstack_api_and_schedulers.xml73(para) msgid "heat-engine" msgstr "heat-engine" -#: ./doc/high-availability-guide/ha_aa_controllers/section_run_openstack_api_and_schedulers.xml78(simpara) 
+#: ./doc/high-availability-guide/ha_aa_controllers/section_run_openstack_api_and_schedulers.xml78(para) msgid "" "Please refer to the RabbitMQ section for configure these services with " "multiple messaging servers." msgstr "これらのサービスを複数のメッセージングサービスを用いて設定する方法は、RabbitMQ のセクションを参照してください。" -#: ./doc/high-availability-guide/pacemaker/section_start_pacemaker.xml6(title) +#: ./doc/high-availability-guide/pacemaker/section_start_pacemaker.xml5(title) msgid "Start Pacemaker" msgstr "Pacemaker の開始" -#: ./doc/high-availability-guide/pacemaker/section_start_pacemaker.xml8(simpara) +#: ./doc/high-availability-guide/pacemaker/section_start_pacemaker.xml6(para) msgid "" "Once the Corosync services have been started, and you have established that " "the cluster is communicating properly, it is safe to start " "pacemakerd, the Pacemaker master control process:" msgstr "Corosync サービスが開始され、クラスターが適切に通信していることを確認できたら、Pacemaker のマスター制御プロセスである pacemakerd を安全に開始できます:" -#: ./doc/high-availability-guide/pacemaker/section_start_pacemaker.xml13(simpara) +#: ./doc/high-availability-guide/pacemaker/section_start_pacemaker.xml11(para) msgid "/etc/init.d/pacemaker start (LSB)" msgstr "/etc/init.d/pacemaker start (LSB)" -#: ./doc/high-availability-guide/pacemaker/section_start_pacemaker.xml17(simpara) +#: ./doc/high-availability-guide/pacemaker/section_start_pacemaker.xml15(para) msgid "service pacemaker start (LSB, alternate)" msgstr "service pacemaker start (LSB, 代替)" -#: ./doc/high-availability-guide/pacemaker/section_start_pacemaker.xml21(simpara) +#: ./doc/high-availability-guide/pacemaker/section_start_pacemaker.xml19(para) msgid "start pacemaker (upstart)" msgstr "start pacemaker (upstart)" -#: ./doc/high-availability-guide/pacemaker/section_start_pacemaker.xml25(simpara) +#: ./doc/high-availability-guide/pacemaker/section_start_pacemaker.xml23(para) msgid "systemctl start pacemaker (systemd)" msgstr "systemctl start pacemaker (systemd)" -#:
./doc/high-availability-guide/pacemaker/section_start_pacemaker.xml29(simpara) +#: ./doc/high-availability-guide/pacemaker/section_start_pacemaker.xml27(para) msgid "" "Once Pacemaker services have started, Pacemaker will create a default empty " "cluster configuration with no resources. You may observe Pacemaker’s status " "with the crm_mon utility:" msgstr "Pacemaker サービスが開始されると、Pacemaker はリソースを持たない空の標準クラスター設定を作成します。crm_mon ユーティリティを用いて Pacemaker の状態を確認できます。" -#: ./doc/high-availability-guide/pacemaker/section_set_up_corosync.xml6(title) +#: ./doc/high-availability-guide/pacemaker/section_set_up_corosync.xml5(title) msgid "Set up Corosync" msgstr "Corosync のセットアップ" -#: ./doc/high-availability-guide/pacemaker/section_set_up_corosync.xml8(simpara) +#: ./doc/high-availability-guide/pacemaker/section_set_up_corosync.xml6(para) msgid "" "Besides installing the corosync package, you must also " "create a configuration file, stored in " -"/etc/corosync/corosync.conf. Most distributions ship an " -"example configuration file (corosync.conf.example) as " +"/etc/corosync/corosync.conf. Most distributions ship an" +" example configuration file (corosync.conf.example) as " "part of the documentation bundled with the corosync " "package. 
An example Corosync configuration file is shown below:" -msgstr "corosync パッケージのインストールに関連して、/etc/corosync/corosync.conf に保存する、設定ファイルを作成する必要があります。多くのディストリビューションは、corosync パッケージに同梱されているドキュメントの一部として、サンプル設定ファイル (corosync.conf.example) が同梱されています。サンプルの Corosync 設定ファイルを以下に示します:" +msgstr "" -#: ./doc/high-availability-guide/pacemaker/section_set_up_corosync.xml16(title) -msgid "Corosync configuration file (corosync.conf)" -msgstr "Corosync 設定ファイル (corosync.conf)" +#: ./doc/high-availability-guide/pacemaker/section_set_up_corosync.xml14(title) +msgid "Corosync configuration file (corosync.conf)" +msgstr "" -#: ./doc/high-availability-guide/pacemaker/section_set_up_corosync.xml90(para) +#: ./doc/high-availability-guide/pacemaker/section_set_up_corosync.xml88(para) msgid "" "The token value specifies the time, in milliseconds, " "during which the Corosync token is expected to be transmitted around the " @@ -1852,7 +1852,7 @@ msgid "" "slightly extended failover times." msgstr "token の値は、Corosync トークンがリングを 1 周するまでの時間をミリ秒単位で指定します。このタイムアウトを経過した場合、トークンが消失したと判定します。token_retransmits_before_loss_const はトークンを失った後、応答のない processor (クラスターノード) が障害と判定されます。言い換えると、token × token_retransmits_before_loss_const が、ノードが障害と判定されるまでに、クラスターメッセージに応答しなくてもよい最大時間です。 token の規定値は 1000 (1 秒) で、再送 が 4 回まで許可されます。これらの規定値は、フェイルオーバー時間を最小化する狙いですが、頻繁に「誤報」を起こし、短時間のネットワーク障害により意図せずフェイルオーバーする可能性があります。ここで使用されている値は、フェイルオーバー時間をわずかに延ばしたとしても、より安全なものです。" -#: ./doc/high-availability-guide/pacemaker/section_set_up_corosync.xml106(para) +#: ./doc/high-availability-guide/pacemaker/section_set_up_corosync.xml104(para) msgid "" "With secauth enabled, Corosync nodes mutually " "authenticate using a 128-byte shared secret stored in " @@ -1861,7 +1861,7 @@ msgid "" "secauth, cluster communications are also encrypted." 
msgstr "secauth を有効化している場合、Corosync ノードは /etc/corosync/authkey に保存されている 128 バイトの共有シークレットを使用して相互に認証します。このシークレットは corosync-keygen ユーティリティを用いて生成できます。secauth を使用するとき、クラスター通信も暗号化されます。" -#: ./doc/high-availability-guide/pacemaker/section_set_up_corosync.xml114(para) +#: ./doc/high-availability-guide/pacemaker/section_set_up_corosync.xml112(para) msgid "" "In Corosync configurations using redundant networking (with more than one " "interface), you must select a Redundant Ring Protocol " @@ -1869,26 +1869,26 @@ msgid "" "the recommended RRP mode." msgstr "冗長ネットワーク (複数の interface) を使用した Corosync 設定の場合、Redundant Ring Protocol (RRP) モードを none 以外で選択する必要があります。active が推奨 RRP モードです。" -#: ./doc/high-availability-guide/pacemaker/section_set_up_corosync.xml121(para) +#: ./doc/high-availability-guide/pacemaker/section_set_up_corosync.xml119(para) msgid "" "There are several things to note about the recommended interface " "configuration:" msgstr "インターフェースの推奨設定に関する注意事項がいくつかあります。" -#: ./doc/high-availability-guide/pacemaker/section_set_up_corosync.xml127(simpara) +#: ./doc/high-availability-guide/pacemaker/section_set_up_corosync.xml125(para) msgid "" "The ringnumber must differ between all configured " "interfaces, starting with 0." msgstr "ringnumber はすべての設定済みインターフェースで異なる必要があります。0 から始まります。" -#: ./doc/high-availability-guide/pacemaker/section_set_up_corosync.xml133(simpara) +#: ./doc/high-availability-guide/pacemaker/section_set_up_corosync.xml131(para) msgid "" "The bindnetaddr is the network " "address of the interfaces to bind to. The example uses two network addresses" " of /24 IPv4 subnets." msgstr "bindnetaddr はバインドするインターフェースのネットワークアドレスです。この例は二つの /24 IPv4 サブネットのネットワークアドレスを使用します。" -#: ./doc/high-availability-guide/pacemaker/section_set_up_corosync.xml139(simpara) +#: ./doc/high-availability-guide/pacemaker/section_set_up_corosync.xml137(para) msgid "" "Multicast groups (mcastaddr) must " "not be reused across cluster boundaries. 
In other words, no two " @@ -1898,33 +1898,33 @@ msgid "" "clusters on the same network may use the same multicast group. Be sure to " "select multicast addresses compliant with RFC 2365, \"Administratively " "Scoped IP Multicast\"." msgstr "マルチキャストグループ (mcastaddr) は、クラスターの境界をまたがって再利用できません。違う言い方をすると、2 つの別のクラスターは同じマルチキャストグループを使用できません。RFC 2365, \"Administratively Scoped IP Multicast\" に適合しているマルチキャストアドレスを必ず選択してください。" -#: ./doc/high-availability-guide/pacemaker/section_set_up_corosync.xml148(simpara) +#: ./doc/high-availability-guide/pacemaker/section_set_up_corosync.xml146(para) msgid "" "For firewall configurations, note that Corosync communicates over UDP only, " "and uses mcastport (for receives) and " "mcastport-1 (for sends)." msgstr "ファイアウォール設定に関して、Corosync は UDP のみを使用して通信し、mcastport (受信用) と mcastport-1 (送信用) を使用することに注意してください。" -#: ./doc/high-availability-guide/pacemaker/section_set_up_corosync.xml157(para) +#: ./doc/high-availability-guide/pacemaker/section_set_up_corosync.xml155(para) msgid "" "The service declaration for the " "pacemaker service may be placed in the " -"corosync.conf file directly, or in its own separate file," -" /etc/corosync/service.d/pacemaker." -msgstr "pacemaker サービスに関する service 宣言は、corosync.conf ファイルに直接置かれるか、別のファイル /etc/corosync/service.d/pacemaker に置かれるかもしれません。" +"corosync.conf file directly, or in its own separate " +"file, /etc/corosync/service.d/pacemaker." +msgstr "" -#: ./doc/high-availability-guide/pacemaker/section_set_up_corosync.xml164(simpara) +#: ./doc/high-availability-guide/pacemaker/section_set_up_corosync.xml162(para) msgid "" -"Once created, the corosync.conf file (and the " -"authkey file if the secauth option is " -"enabled) must be synchronized across all cluster nodes." -msgstr "一度作成すると、corosync.conf ファイル (および、secauth オプションが有効化されている場合は authkey ファイル) はすべてのクラスターノードで同期する必要があります。" +"Once created, the corosync.conf file (and the " +"authkey file if the secauth option " +"is enabled) must be synchronized across all cluster nodes." 
+msgstr "" -#: ./doc/high-availability-guide/pacemaker/section_install_packages.xml6(title) +#: ./doc/high-availability-guide/pacemaker/section_install_packages.xml5(title) msgid "Install packages" msgstr "パッケージのインストール" -#: ./doc/high-availability-guide/pacemaker/section_install_packages.xml8(simpara) +#: ./doc/high-availability-guide/pacemaker/section_install_packages.xml6(para) msgid "" "On any host that is meant to be part of a Pacemaker cluster, you must first " "establish cluster communications through the Corosync messaging layer. This " @@ -1932,31 +1932,31 @@ msgid "" "package manager will normally install automatically):" msgstr "Pacemaker クラスターに参加させるすべてのホストにおいて、まず Corosync メッセージング層によるクラスター通信を確立する必要があります。これは、以下のパッケージをインストールする必要があります (また通常、パッケージ管理ソフトウェアが自動的にそれらに依存するものをインストールします)。" -#: ./doc/high-availability-guide/pacemaker/section_install_packages.xml15(simpara) +#: ./doc/high-availability-guide/pacemaker/section_install_packages.xml13(para) msgid "" "pacemaker Note that the crm shell should be downloaded " "separately." 
msgstr "pacemaker は crm シェルを別途ダウンロードする必要があることに注意してください。" -#: ./doc/high-availability-guide/pacemaker/section_install_packages.xml20(literal) +#: ./doc/high-availability-guide/pacemaker/section_install_packages.xml18(literal) msgid "crmsh" msgstr "crmsh" -#: ./doc/high-availability-guide/pacemaker/section_install_packages.xml25(literal) +#: ./doc/high-availability-guide/pacemaker/section_install_packages.xml23(literal) msgid "corosync" msgstr "corosync" -#: ./doc/high-availability-guide/pacemaker/section_install_packages.xml30(literal) +#: ./doc/high-availability-guide/pacemaker/section_install_packages.xml28(literal) msgid "cluster-glue" msgstr "cluster-glue" -#: ./doc/high-availability-guide/pacemaker/section_install_packages.xml34(simpara) +#: ./doc/high-availability-guide/pacemaker/section_install_packages.xml32(para) msgid "" "fence-agents (Fedora only; all other distributions use " "fencing agents from cluster-glue)" msgstr "fence-agents (Fedora のみ、他のすべてのディストリビューションは cluster-glue からフェンシング・エージェントを使用します)" -#: ./doc/high-availability-guide/pacemaker/section_install_packages.xml40(literal) +#: ./doc/high-availability-guide/pacemaker/section_install_packages.xml38(literal) msgid "resource-agents" msgstr "resource-agents" @@ -1964,7 +1964,7 @@ msgstr "resource-agents" msgid "Set basic cluster properties" msgstr "基本的なクラスターのプロパティの設定" -#: ./doc/high-availability-guide/pacemaker/section_set_basic_cluster_properties.xml8(simpara) +#: ./doc/high-availability-guide/pacemaker/section_set_basic_cluster_properties.xml8(para) msgid "" "Once your Pacemaker cluster is set up, it is recommended to set a few basic " "cluster properties. To do so, start the crm shell and " @@ -1973,7 +1973,7 @@ msgid "" "by typing crm configure directly from a shell prompt." 
msgstr "Pacemaker クラスターを構築すると、いくつかの基本的なクラスタープロパティを設定することを推奨します。そうするために、crm シェルを開始し、configure と入力して設定メニューに変更します。代わりに、シェルプロンプトから直接 crm configure と入力して、Pacemaker 設定メニューに入ることもできます。" -#: ./doc/high-availability-guide/pacemaker/section_set_basic_cluster_properties.xml14(simpara) +#: ./doc/high-availability-guide/pacemaker/section_set_basic_cluster_properties.xml14(para) msgid "Then, set the following properties:" msgstr "そして、以下のプロパティを設定します。" @@ -2007,7 +2007,7 @@ msgid "" "minutes." msgstr "Pacemaker はクラスター状態の処理にイベント駆動の方法を使用します。しかしながら、特定の Pacemaker アクションは、設定可能な間隔 cluster-recheck-interval で発生します。このデフォルトは 15 分です。一般的に 5 分や 3 分のようにより短い間隔にこれを慎重に減らします。" -#: ./doc/high-availability-guide/pacemaker/section_set_basic_cluster_properties.xml52(simpara) +#: ./doc/high-availability-guide/pacemaker/section_set_basic_cluster_properties.xml52(para) msgid "" "Once you have made these changes, you may commit the " "updated configuration." @@ -2017,7 +2017,7 @@ msgstr "これらの変更を実行すると、更新した設定を co msgid "Starting Corosync" msgstr "Corosync の開始" -#: ./doc/high-availability-guide/pacemaker/section_starting_corosync.xml8(simpara) +#: ./doc/high-availability-guide/pacemaker/section_starting_corosync.xml8(para) msgid "" "Corosync is started as a regular system service. 
Depending on your " "distribution, it may ship with a LSB (System V style) init script, an " @@ -2025,40 +2025,40 @@ msgid "" "named corosync:" msgstr "Corosync は通常のシステムサービスとして開始されます。お使いのディストリビューションにより、LSB (System V 形式) init スクリプト、upstart ジョブ、または systemd ユニットファイルが同梱されています。どの方法においても、サービスは通常 corosync という名前です:" -#: ./doc/high-availability-guide/pacemaker/section_starting_corosync.xml14(simpara) +#: ./doc/high-availability-guide/pacemaker/section_starting_corosync.xml14(para) msgid "/etc/init.d/corosync start (LSB)" msgstr "/etc/init.d/corosync start (LSB)" -#: ./doc/high-availability-guide/pacemaker/section_starting_corosync.xml18(simpara) +#: ./doc/high-availability-guide/pacemaker/section_starting_corosync.xml18(para) msgid "service corosync start (LSB, alternate)" msgstr "service corosync start (LSB, 代替)" -#: ./doc/high-availability-guide/pacemaker/section_starting_corosync.xml22(simpara) +#: ./doc/high-availability-guide/pacemaker/section_starting_corosync.xml22(para) msgid "start corosync (upstart)" msgstr "start corosync (upstart)" -#: ./doc/high-availability-guide/pacemaker/section_starting_corosync.xml26(simpara) +#: ./doc/high-availability-guide/pacemaker/section_starting_corosync.xml26(para) msgid "systemctl start corosync (systemd)" msgstr "systemctl start corosync (systemd)" -#: ./doc/high-availability-guide/pacemaker/section_starting_corosync.xml30(simpara) +#: ./doc/high-availability-guide/pacemaker/section_starting_corosync.xml30(para) msgid "You can now check the Corosync connectivity with two tools." 
msgstr "2 つのツールを用いて Corosync 接続性を確認できます。" -#: ./doc/high-availability-guide/pacemaker/section_starting_corosync.xml31(simpara) +#: ./doc/high-availability-guide/pacemaker/section_starting_corosync.xml31(para) msgid "" "The corosync-cfgtool utility, when invoked with the " "-s option, gives a summary of the health of the " "communication rings:" msgstr "corosync-cfgtool ユーティリティは、-s オプションを用いるとき、通信リングの健全性の概要を表示します:" -#: ./doc/high-availability-guide/pacemaker/section_starting_corosync.xml42(simpara) +#: ./doc/high-availability-guide/pacemaker/section_starting_corosync.xml42(para) msgid "" "The corosync-objctl utility can be used to dump the " "Corosync cluster member list:" msgstr "corosync-objctl ユーティリティは Corosync クラスターのメンバー一覧をダンプするために使用できます:" -#: ./doc/high-availability-guide/pacemaker/section_starting_corosync.xml51(simpara) +#: ./doc/high-availability-guide/pacemaker/section_starting_corosync.xml51(para) msgid "" "You should see a status=joined entry for each of your " "constituent cluster nodes." @@ -2068,54 +2068,54 @@ msgstr "組み込まれているそれぞれのクラスターノードの項目 msgid "Configure OpenStack services to use RabbitMQ" msgstr "RabbitMQ を使用するための OpenStack サービスの設定" -#: ./doc/high-availability-guide/ha_aa_rabbitmq/section_configure_openstack_services_to_user_rabbitmq.xml8(simpara) +#: ./doc/high-availability-guide/ha_aa_rabbitmq/section_configure_openstack_services_to_user_rabbitmq.xml8(para) msgid "" "We have to configure the OpenStack components to use at least two RabbitMQ " "nodes." 
msgstr "2 つ以上の RabbitMQ ノードを使用するよう、OpenStack のコンポーネントを設定する必要があります。" -#: ./doc/high-availability-guide/ha_aa_rabbitmq/section_configure_openstack_services_to_user_rabbitmq.xml9(simpara) +#: ./doc/high-availability-guide/ha_aa_rabbitmq/section_configure_openstack_services_to_user_rabbitmq.xml9(para) msgid "Do this configuration on all services using RabbitMQ:" msgstr "RabbitMQ を使用するすべてのサービスでこの設定を行います。" -#: ./doc/high-availability-guide/ha_aa_rabbitmq/section_configure_openstack_services_to_user_rabbitmq.xml10(simpara) +#: ./doc/high-availability-guide/ha_aa_rabbitmq/section_configure_openstack_services_to_user_rabbitmq.xml10(para) msgid "RabbitMQ HA cluster host:port pairs:" msgstr "RabbitMQ HA クラスターの host:port の組:" -#: ./doc/high-availability-guide/ha_aa_rabbitmq/section_configure_openstack_services_to_user_rabbitmq.xml12(simpara) +#: ./doc/high-availability-guide/ha_aa_rabbitmq/section_configure_openstack_services_to_user_rabbitmq.xml12(para) msgid "How frequently to retry connecting with RabbitMQ:" msgstr "RabbitMQ と再接続する頻度:" -#: ./doc/high-availability-guide/ha_aa_rabbitmq/section_configure_openstack_services_to_user_rabbitmq.xml14(simpara) +#: ./doc/high-availability-guide/ha_aa_rabbitmq/section_configure_openstack_services_to_user_rabbitmq.xml14(para) msgid "How long to back-off for between retries when connecting to RabbitMQ:" msgstr "RabbitMQ に接続するとき再試行するまでにバックオフする間隔:" -#: ./doc/high-availability-guide/ha_aa_rabbitmq/section_configure_openstack_services_to_user_rabbitmq.xml16(simpara) +#: ./doc/high-availability-guide/ha_aa_rabbitmq/section_configure_openstack_services_to_user_rabbitmq.xml16(para) msgid "" "Maximum retries with trying to connect to RabbitMQ (infinite by default):" msgstr "RabbitMQ に接続を試行する最大回数 (デフォルトで無制限):" -#: ./doc/high-availability-guide/ha_aa_rabbitmq/section_configure_openstack_services_to_user_rabbitmq.xml18(simpara) +#: ./doc/high-availability-guide/ha_aa_rabbitmq/section_configure_openstack_services_to_user_rabbitmq.xml18(para) msgid 
"Use durable queues in RabbitMQ:" msgstr "RabbitMQ での永続キューの使用:" -#: ./doc/high-availability-guide/ha_aa_rabbitmq/section_configure_openstack_services_to_user_rabbitmq.xml20(simpara) +#: ./doc/high-availability-guide/ha_aa_rabbitmq/section_configure_openstack_services_to_user_rabbitmq.xml20(para) msgid "Use H/A queues in RabbitMQ (x-ha-policy: all):" msgstr "RabbitMQ での H/A キューの使用 (x-ha-policy: all):" -#: ./doc/high-availability-guide/ha_aa_rabbitmq/section_configure_openstack_services_to_user_rabbitmq.xml22(simpara) +#: ./doc/high-availability-guide/ha_aa_rabbitmq/section_configure_openstack_services_to_user_rabbitmq.xml22(para) msgid "" "If you change the configuration from an old setup which did not use HA " "queues, you should interrupt the service:" msgstr "HA キューを使用していない古いセットアップから設定を変更した場合、サービスを中断しなければいけません。" -#: ./doc/high-availability-guide/ha_aa_rabbitmq/section_configure_openstack_services_to_user_rabbitmq.xml26(simpara) +#: ./doc/high-availability-guide/ha_aa_rabbitmq/section_configure_openstack_services_to_user_rabbitmq.xml26(para) msgid "" "Services currently working with HA queues: OpenStack Compute, OpenStack " "Block Storage, OpenStack Networking, Telemetry." msgstr "次のサービスは現在 HA キューで動作しています。OpenStack Compute、OpenStack Block Storage、OpenStack Networking、Telemetry。" -#: ./doc/high-availability-guide/ha_aa_rabbitmq/section_install_rabbitmq.xml8(simpara) +#: ./doc/high-availability-guide/ha_aa_rabbitmq/section_install_rabbitmq.xml8(para) msgid "This setup has been tested with RabbitMQ 2.7.1." 
msgstr "このセットアップは RabbitMQ 2.7.1 を用いてテストしました。" @@ -2123,8 +2123,8 @@ msgstr "このセットアップは RabbitMQ 2.7.1 を用いてテストしま msgid "On Ubuntu / Debian" msgstr "Ubuntu / Debian の場合" -#: ./doc/high-availability-guide/ha_aa_rabbitmq/section_install_rabbitmq.xml13(simpara) -#: ./doc/high-availability-guide/ha_aa_rabbitmq/section_install_rabbitmq.xml23(simpara) +#: ./doc/high-availability-guide/ha_aa_rabbitmq/section_install_rabbitmq.xml13(para) +#: ./doc/high-availability-guide/ha_aa_rabbitmq/section_install_rabbitmq.xml23(para) msgid "RabbitMQ is packaged on both distros:" msgstr "RabbitMQ はどちらのディストリビューションでもパッケージ化されています。" @@ -2144,7 +2144,7 @@ msgstr "Fedora / RHEL に RabbitMQ をインストールする方法の公式マ msgid "Configure RabbitMQ" msgstr "RabbitMQ の設定" -#: ./doc/high-availability-guide/ha_aa_rabbitmq/section_configure_rabbitmq.xml8(simpara) +#: ./doc/high-availability-guide/ha_aa_rabbitmq/section_configure_rabbitmq.xml8(para) msgid "" "Here we are building a cluster of RabbitMQ nodes to construct a RabbitMQ " "broker. Mirrored queues in RabbitMQ improve the availability of service " @@ -2154,7 +2154,7 @@ msgid "" "node. If we lose this node, we also lose the queue." msgstr "これから RabbitMQ ブローカーを構成するために RabbitMQ ノードのクラスターを構築します。RabbitMQ のミラーキューは障害への耐性を持つため、サービスの可用性を高めます。ただし、メッセージ交換 (exchange) とバインドは個々のノードが失われても存続しますが、キューとその内容は 1 つのノードに置かれているため失われることに注意してください。このノードが失われた場合、キューも失われます。" -#: ./doc/high-availability-guide/ha_aa_rabbitmq/section_configure_rabbitmq.xml13(simpara) +#: ./doc/high-availability-guide/ha_aa_rabbitmq/section_configure_rabbitmq.xml13(para) msgid "" "We consider that we run (at least) two RabbitMQ servers. To build a broker, " "we need to ensure that all nodes have the same erlang cookie file. 
To do so," @@ -2162,31 +2162,31 @@ msgid "" "stop RabbitMQ everywhere and copy the cookie from rabbit1 server to other " "server(s):" msgstr "(少なくとも) 2 つの RabbitMQ サーバーを実行することを考えます。ブローカーを構築するために、すべてのノードが必ず同じ erlang クッキーファイルを持つ必要があります。そうするために、すべての RabbitMQ を停止し、クッキーを rabbit1 サーバーから他のサーバーにコピーします。" -#: ./doc/high-availability-guide/ha_aa_rabbitmq/section_configure_rabbitmq.xml18(simpara) +#: ./doc/high-availability-guide/ha_aa_rabbitmq/section_configure_rabbitmq.xml18(para) msgid "" "Then, start RabbitMQ on nodes. If RabbitMQ fails to start, you can’t " "continue to the next step." msgstr "そして、ノードで RabbitMQ を起動します。RabbitMQ の起動に失敗した場合、次の手順に進めません。" -#: ./doc/high-availability-guide/ha_aa_rabbitmq/section_configure_rabbitmq.xml20(simpara) +#: ./doc/high-availability-guide/ha_aa_rabbitmq/section_configure_rabbitmq.xml20(para) msgid "Now, we are building the HA cluster. From rabbit2, run these commands:" msgstr "これから HA クラスターを構築します。rabbit2 からこれらのコマンドを実行します。" -#: ./doc/high-availability-guide/ha_aa_rabbitmq/section_configure_rabbitmq.xml24(simpara) -msgid "To verify the cluster status :" -msgstr "クラスターの状態を確認する方法:" +#: ./doc/high-availability-guide/ha_aa_rabbitmq/section_configure_rabbitmq.xml24(para) +msgid "To verify the cluster status:" +msgstr "" -#: ./doc/high-availability-guide/ha_aa_rabbitmq/section_configure_rabbitmq.xml29(simpara) +#: ./doc/high-availability-guide/ha_aa_rabbitmq/section_configure_rabbitmq.xml30(para) msgid "" "If the cluster is working, you can now proceed to creating users and " "passwords for queues." 
msgstr "クラスターが動作していれば、キュー用のユーザーとパスワードを作成する手順に進めます。" -#: ./doc/high-availability-guide/ha_aa_rabbitmq/section_configure_rabbitmq.xml31(emphasis) +#: ./doc/high-availability-guide/ha_aa_rabbitmq/section_configure_rabbitmq.xml32(emphasis) msgid "Note for RabbitMQ version 3" msgstr "RabbitMQ バージョン 3 の注意事項" -#: ./doc/high-availability-guide/ha_aa_rabbitmq/section_configure_rabbitmq.xml33(simpara) +#: ./doc/high-availability-guide/ha_aa_rabbitmq/section_configure_rabbitmq.xml34(para) msgid "" "Queue mirroring is no longer controlled by the x-ha-" "policy argument when declaring a queue. OpenStack can continue to" @@ -2195,46 +2195,46 @@ msgid "" "mirrored across all running nodes:" msgstr "キューの宣言時、キューミラーが x-ha-policy 引数により制御されなくなりました。OpenStack がこの引数を宣言し続けますが、キューはミラーされません。すべてのキュー (自動生成された名前を除いて) が実行中の全ノードにわたり確実にミラーされる必要があります。" -#: ./doc/high-availability-guide/ha_aa_rabbitmq/section_configure_rabbitmq.xml38(link) +#: ./doc/high-availability-guide/ha_aa_rabbitmq/section_configure_rabbitmq.xml39(link) msgid "More information about High availability in RabbitMQ" msgstr "RabbitMQ の高可用性に関する詳細" -#: ./doc/high-availability-guide/network/section_highly_available_neutron_metadata_agent.xml6(title) +#: ./doc/high-availability-guide/network/section_highly_available_neutron_metadata_agent.xml5(title) msgid "Highly available neutron metadata agent" msgstr "高可用性 Neutron Metadata Agent" -#: ./doc/high-availability-guide/network/section_highly_available_neutron_metadata_agent.xml8(simpara) +#: ./doc/high-availability-guide/network/section_highly_available_neutron_metadata_agent.xml6(para) msgid "" "Neutron metadata agent allows Compute API metadata to be reachable by VMs on" " tenant networks. High availability for the metadata agent is achieved by " "adopting Pacemaker." 
msgstr "Neutron Metadata エージェントにより Nova API Metadata がプロジェクトのネットワークにある仮想マシンによりアクセスできるようになります。Metadata エージェントの高可用性は Pacemaker の適用により実現されます。" -#: ./doc/high-availability-guide/network/section_highly_available_neutron_metadata_agent.xml12(simpara) +#: ./doc/high-availability-guide/network/section_highly_available_neutron_metadata_agent.xml10(para) msgid "" "Here is the documentation " "for installing Neutron Metadata Agent." msgstr "ここに Neutron Metadata エージェントをインストールするためのドキュメントがあります。" -#: ./doc/high-availability-guide/network/section_highly_available_neutron_metadata_agent.xml16(title) +#: ./doc/high-availability-guide/network/section_highly_available_neutron_metadata_agent.xml13(title) msgid "Add neutron metadata agent resource to Pacemaker" msgstr "Neutron Metadata Agent リソースの Pacemaker への追加" -#: ./doc/high-availability-guide/network/section_highly_available_neutron_metadata_agent.xml22(simpara) +#: ./doc/high-availability-guide/network/section_highly_available_neutron_metadata_agent.xml18(para) msgid "" "You may now proceed with adding the Pacemaker configuration for neutron " "metadata agent resource. 
Connect to the Pacemaker cluster with crm " "configure, and add the following cluster resources:" msgstr "Neutron Metadata Agent リソース用の Pacemaker 設定を追加して、次に進むことができます。crm configure を用いて Pacemaker クラスターに接続し、以下のクラスターリソースを追加します。" -#: ./doc/high-availability-guide/network/section_highly_available_neutron_metadata_agent.xml32(simpara) +#: ./doc/high-availability-guide/network/section_highly_available_neutron_metadata_agent.xml28(para) msgid "" "p_neutron-metadata-agent, a resource for manage Neutron " "Metadata Agent service" msgstr "p_neutron-metadata-agent, Neutron Metadata Agent サービスを管理するためのリソース。" -#: ./doc/high-availability-guide/network/section_highly_available_neutron_metadata_agent.xml40(simpara) +#: ./doc/high-availability-guide/network/section_highly_available_neutron_metadata_agent.xml36(para) msgid "" "Once completed, commit your configuration changes by entering " "commit from the crm configure menu. " @@ -2246,14 +2246,14 @@ msgstr "完了すると、crm configure メニューから documentation for " @@ -2264,20 +2264,20 @@ msgstr "ここに Neutron L3 エージェントをインストールするため msgid "Add neutron L3 agent resource to Pacemaker" msgstr "Neutron L3 Agent リソースの Pacemaker への追加" -#: ./doc/high-availability-guide/network/section_highly_available_neutron_l3_agent.xml22(simpara) +#: ./doc/high-availability-guide/network/section_highly_available_neutron_l3_agent.xml22(para) msgid "" "You may now proceed with adding the Pacemaker configuration for neutron L3 " "agent resource. 
Connect to the Pacemaker cluster with crm " "configure, and add the following cluster resources:" msgstr "Neutron L3 Agent リソース用の Pacemaker 設定を追加して、次に進むことができます。crm configure を用いて Pacemaker クラスターに接続し、以下のクラスターリソースを追加します。" -#: ./doc/high-availability-guide/network/section_highly_available_neutron_l3_agent.xml32(simpara) +#: ./doc/high-availability-guide/network/section_highly_available_neutron_l3_agent.xml32(para) msgid "" "p_neutron-l3-agent, a resource for manage Neutron L3 " "Agent service" msgstr "p_neutron-l3-agent, Neutron L3 Agent サービスを管理するためのリソース。" -#: ./doc/high-availability-guide/network/section_highly_available_neutron_l3_agent.xml39(simpara) +#: ./doc/high-availability-guide/network/section_highly_available_neutron_l3_agent.xml39(para) msgid "" "Once completed, commit your configuration changes by entering " "commit from the crm configure menu. " @@ -2285,7 +2285,7 @@ msgid "" "resources, on one of your nodes." msgstr "完了すると、crm configure メニューから commit と入力し、設定の変更をコミットします。Pacemaker は Neutron L3 Agent サービスおよび依存するリソースを同じノードに起動します。" -#: ./doc/high-availability-guide/network/section_highly_available_neutron_l3_agent.xml43(simpara) +#: ./doc/high-availability-guide/network/section_highly_available_neutron_l3_agent.xml43(para) msgid "" "This method does not ensure a zero downtime since it has to recreate all the" " namespaces and virtual routers on the node." @@ -2295,7 +2295,7 @@ msgstr "この方法は、ノードですべての名前空間と仮想ルータ msgid "Manage network resources" msgstr "ネットワークリソースの管理" -#: ./doc/high-availability-guide/network/section_manage_network_resources.xml8(simpara) +#: ./doc/high-availability-guide/network/section_manage_network_resources.xml8(para) msgid "" "You can now add the Pacemaker configuration for managing all network " "resources together with a group. 
Connect to the Pacemaker cluster with " @@ -2306,14 +2306,14 @@ msgstr "すべてのネットワークリソースをグループと一緒に管 msgid "Highly available neutron DHCP agent" msgstr "高可用性 Neutron DHCP Agent" -#: ./doc/high-availability-guide/network/section_highly_available_neutron_dhcp_agent.xml8(simpara) +#: ./doc/high-availability-guide/network/section_highly_available_neutron_dhcp_agent.xml8(para) msgid "" "Neutron DHCP agent distributes IP addresses to the VMs with dnsmasq (by " "default). High availability for the DHCP agent is achieved by adopting " "Pacemaker." msgstr "Neutron DHCP エージェントは (デフォルトで) dnsmasq を用いて仮想マシンに IP アドレスを配布します。DHCP エージェントの高可用性は Pacemaker の適用により実現されます。" -#: ./doc/high-availability-guide/network/section_highly_available_neutron_dhcp_agent.xml12(simpara) +#: ./doc/high-availability-guide/network/section_highly_available_neutron_dhcp_agent.xml12(para) msgid "" "Here is the documentation for" @@ -2324,20 +2324,20 @@ msgstr "ここに Neutron DHCP エージェントをインストールするた msgid "Add neutron DHCP agent resource to Pacemaker" msgstr "Neutron DHCP Agent リソースの Pacemaker への追加" -#: ./doc/high-availability-guide/network/section_highly_available_neutron_dhcp_agent.xml22(simpara) +#: ./doc/high-availability-guide/network/section_highly_available_neutron_dhcp_agent.xml22(para) msgid "" "You may now proceed with adding the Pacemaker configuration for neutron DHCP" " agent resource. 
Connect to the Pacemaker cluster with crm " "configure, and add the following cluster resources:" msgstr "Neutron DHCP Agent リソース用の Pacemaker 設定を追加して、次に進むことができます。crm configure を用いて Pacemaker クラスターに接続し、以下のクラスターリソースを追加します。" -#: ./doc/high-availability-guide/network/section_highly_available_neutron_dhcp_agent.xml32(simpara) +#: ./doc/high-availability-guide/network/section_highly_available_neutron_dhcp_agent.xml32(para) msgid "" "p_neutron-dhcp-agent, a resource for manage Neutron DHCP " "Agent service" msgstr "p_neutron-dhcp-agent, Neutron DHCP Agent サービスを管理するためのリソース。" -#: ./doc/high-availability-guide/network/section_highly_available_neutron_dhcp_agent.xml40(simpara) +#: ./doc/high-availability-guide/network/section_highly_available_neutron_dhcp_agent.xml40(para) msgid "" "Once completed, commit your configuration changes by entering " "commit from the crm configure menu. " diff --git a/doc/install-guide/locale/install-guide.pot b/doc/install-guide/locale/install-guide.pot index 69543bd210..8e52a7f162 100644 --- a/doc/install-guide/locale/install-guide.pot +++ b/doc/install-guide/locale/install-guide.pot @@ -1,7 +1,7 @@ msgid "" msgstr "" "Project-Id-Version: PACKAGE VERSION\n" -"POT-Creation-Date: 2014-07-08 06:08+0000\n" +"POT-Creation-Date: 2014-07-09 06:06+0000\n" "PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n" "Last-Translator: FULL NAME \n" "Language-Team: LANGUAGE \n" @@ -563,7 +563,7 @@ msgstr "" msgid "Create the admin tenant:" msgstr "" -#: ./doc/install-guide/section_keystone-users.xml:59(para) ./doc/install-guide/section_keystone-services.xml:37(para) +#: ./doc/install-guide/section_keystone-users.xml:59(para) ./doc/install-guide/section_keystone-verify.xml:48(para) ./doc/install-guide/section_keystone-services.xml:37(para) msgid "Because OpenStack generates IDs dynamically, you will see different values from this example command output." 
msgstr "" @@ -571,7 +571,7 @@ msgstr "" msgid "Create the admin user:" msgstr "" -#: ./doc/install-guide/section_keystone-users.xml:65(replaceable) ./doc/install-guide/section_keystone-verify.xml:25(replaceable) ./doc/install-guide/section_keystone-verify.xml:35(replaceable) ./doc/install-guide/section_keystone-verify.xml:49(replaceable) ./doc/install-guide/section_trove-install.xml:126(replaceable) ./doc/install-guide/section_trove-install.xml:155(replaceable) ./doc/install-guide/section_trove-install.xml:206(replaceable) +#: ./doc/install-guide/section_keystone-users.xml:65(replaceable) ./doc/install-guide/section_keystone-verify.xml:19(replaceable) ./doc/install-guide/section_keystone-verify.xml:38(replaceable) ./doc/install-guide/section_keystone-verify.xml:60(replaceable) ./doc/install-guide/section_keystone-verify.xml:77(replaceable) ./doc/install-guide/section_trove-install.xml:126(replaceable) ./doc/install-guide/section_trove-install.xml:155(replaceable) ./doc/install-guide/section_trove-install.xml:206(replaceable) msgid "ADMIN_PASS" msgstr "" @@ -623,7 +623,7 @@ msgstr "" msgid "Create the demo user:" msgstr "" -#: ./doc/install-guide/section_keystone-users.xml:143(replaceable) ./doc/install-guide/ch_clients.xml:40(replaceable) +#: ./doc/install-guide/section_keystone-users.xml:143(replaceable) ./doc/install-guide/section_keystone-verify.xml:89(replaceable) ./doc/install-guide/section_keystone-verify.xml:101(replaceable) ./doc/install-guide/ch_clients.xml:40(replaceable) msgid "DEMO_PASS" msgstr "" @@ -747,7 +747,7 @@ msgstr "" msgid "On the volume node:" msgstr "" -#: ./doc/install-guide/section_glance-verify.xml:5(title) ./doc/install-guide/section_heat-verify.xml:5(title) ./doc/install-guide/section_basics-ntp.xml:94(title) ./doc/install-guide/section_nova-verify.xml:6(title) +#: ./doc/install-guide/section_glance-verify.xml:5(title) ./doc/install-guide/section_keystone-verify.xml:7(title) ./doc/install-guide/section_heat-verify.xml:5(title) 
./doc/install-guide/section_basics-ntp.xml:94(title) ./doc/install-guide/section_nova-verify.xml:6(title) msgid "Verify operation" msgstr "" @@ -1149,64 +1149,68 @@ msgstr "" msgid "Your OpenStack environment now includes the dashboard. You can launch an instance or add more services to your environment in the following chapters." msgstr "" -#: ./doc/install-guide/section_keystone-verify.xml:6(title) -msgid "Verify the Identity Service installation" +#: ./doc/install-guide/section_keystone-verify.xml:8(para) +msgid "This section describes how to verify operation of the Identity service." msgstr "" -#: ./doc/install-guide/section_keystone-verify.xml:9(para) -msgid "To verify that the Identity Service is installed and configured correctly, clear the values in the OS_SERVICE_TOKEN and OS_SERVICE_ENDPOINT environment variables:" +#: ./doc/install-guide/section_keystone-verify.xml:12(para) +msgid "Unset the temporary OS_SERVICE_TOKEN and OS_SERVICE_ENDPOINT environment variables:" msgstr "" -#: ./doc/install-guide/section_keystone-verify.xml:15(para) -msgid "These variables, which were used to bootstrap the administrative user and register the Identity Service, are no longer needed." +#: ./doc/install-guide/section_keystone-verify.xml:17(para) +msgid "As the admin tenant and user, request an authentication token:" msgstr "" -#: ./doc/install-guide/section_keystone-verify.xml:20(para) -msgid "You can now use regular user name-based authentication." +#: ./doc/install-guide/section_keystone-verify.xml:21(para) +msgid "Replace ADMIN_PASS with the password you chose for the admin user in the Identity service. You might need to use single quotes (') around your password if it includes special characters." 
msgstr "" -#: ./doc/install-guide/section_keystone-verify.xml:22(para) -msgid "Request a authentication token by using the admin user and the password you chose for that user:" +#: ./doc/install-guide/section_keystone-verify.xml:25(para) +msgid "Lengthy output that includes a token value verifies operation for the admin tenant and user." msgstr "" -#: ./doc/install-guide/section_keystone-verify.xml:27(para) -msgid "In response, you receive a token paired with your user ID. This verifies that the Identity Service is running on the expected endpoint and that your user account is established with the expected credentials." +#: ./doc/install-guide/section_keystone-verify.xml:29(para) +msgid "As the admin tenant and user, list tenants to verify that the admin tenant and user can execute admin-only CLI commands and that the Identity service contains the tenants that you created in :" msgstr "" -#: ./doc/install-guide/section_keystone-verify.xml:33(para) -msgid "Verify that authorization behaves as expected. To do so, request authorization on a tenant:" +#: ./doc/install-guide/section_keystone-verify.xml:34(para) +msgid "As the admin tenant and user, list tenants to verify that the admin tenant and user can execute admin-only CLI commands and that the Identity service contains the tenants created by the configuration tool:" msgstr "" -#: ./doc/install-guide/section_keystone-verify.xml:38(para) -msgid "In response, you receive a token that includes the ID of the tenant that you specified. This verifies that your user account has an explicitly defined role on the specified tenant and the tenant exists as expected." +#: ./doc/install-guide/section_keystone-verify.xml:53(para) +msgid "As the admin tenant and user, list users to verify that the Identity service contains the users that you created in :" msgstr "" -#: ./doc/install-guide/section_keystone-verify.xml:44(para) -msgid "You can also set your --os-* variables in your environment to simplify command-line usage. 
Set up a admin-openrc.sh file with the admin credentials and admin endpoint:" +#: ./doc/install-guide/section_keystone-verify.xml:57(para) +msgid "As the admin tenant and user, list users to verify that the Identity service contains the users created by the configuration tool:" msgstr "" -#: ./doc/install-guide/section_keystone-verify.xml:54(para) -msgid "Source this file to read in the environment variables:" +#: ./doc/install-guide/section_keystone-verify.xml:70(para) +msgid "As the admin tenant and user, list roles to verify that the Identity service contains the role that you created in :" msgstr "" -#: ./doc/install-guide/section_keystone-verify.xml:59(para) -msgid "Verify that your admin-openrc.sh file is configured correctly. Run the same command without the --os-* arguments:" +#: ./doc/install-guide/section_keystone-verify.xml:74(para) +msgid "As the admin tenant and user, list roles to verify that the Identity service contains the role created by the configuration tool:" msgstr "" -#: ./doc/install-guide/section_keystone-verify.xml:63(para) -msgid "The command returns a token and the ID of the specified tenant. This verifies that you have configured your environment variables correctly." +#: ./doc/install-guide/section_keystone-verify.xml:87(para) +msgid "As the demo tenant and user, request an authentication token:" msgstr "" -#: ./doc/install-guide/section_keystone-verify.xml:68(para) -msgid "Verify that your admin account has authorization to perform administrative commands:" +#: ./doc/install-guide/section_keystone-verify.xml:91(para) +msgid "Replace DEMO_PASS with the password you chose for the demo user in the Identity service." 
msgstr "" -#: ./doc/install-guide/section_keystone-verify.xml:85(para) -msgid "Seeing that the id in the output from the command matches the user_id in the command, and that the admin role is listed for that user, for the related tenant, this verifies that your user account has the admin role, which matches the role used in the Identity Service policy.json file." +#: ./doc/install-guide/section_keystone-verify.xml:94(para) +msgid "Lengthy output that includes a token value verifies operation for the demo tenant and user." msgstr "" -#: ./doc/install-guide/section_keystone-verify.xml:95(para) -msgid "As long as you define your credentials and the Identity Service endpoint through the command line or environment variables, you can run all OpenStack client commands from any machine. For details, see ." +#: ./doc/install-guide/section_keystone-verify.xml:98(para) +msgid "As the demo tenant and user, attempt to list users to verify that you cannot execute admin-only CLI commands:" +msgstr "" + +#: ./doc/install-guide/section_keystone-verify.xml:105(para) +msgid "Each OpenStack service references a policy.json file to determine the operations available to a particular tenant, user, or role. For more information, see the Operations Guide - Managing Projects and Users." 
msgstr "" #: ./doc/install-guide/section_basics-packages.xml:7(title) diff --git a/doc/install-guide/locale/ja.po b/doc/install-guide/locale/ja.po index ded9244953..1b297b3135 100644 --- a/doc/install-guide/locale/ja.po +++ b/doc/install-guide/locale/ja.po @@ -5,8 +5,8 @@ msgid "" msgstr "" "Project-Id-Version: OpenStack Manuals\n" -"POT-Creation-Date: 2014-07-08 03:10+0000\n" -"PO-Revision-Date: 2014-07-07 20:32+0000\n" +"POT-Creation-Date: 2014-07-09 04:52+0000\n" +"PO-Revision-Date: 2014-07-09 03:30+0000\n" "Last-Translator: openstackjenkins \n" "Language-Team: Japanese (http://www.transifex.com/projects/p/openstack-manuals-i18n/language/ja/)\n" "MIME-Version: 1.0\n" @@ -1092,6 +1092,7 @@ msgid "Create the admin tenant:" msgstr "admin プロジェクトを作成します。" #: ./doc/install-guide/section_keystone-users.xml59(para) +#: ./doc/install-guide/section_keystone-verify.xml48(para) #: ./doc/install-guide/section_keystone-services.xml37(para) msgid "" "Because OpenStack generates IDs dynamically, you will see different values " @@ -1103,9 +1104,10 @@ msgid "Create the admin user:" msgstr "admin ユーザーを作成します。" #: ./doc/install-guide/section_keystone-users.xml65(replaceable) -#: ./doc/install-guide/section_keystone-verify.xml25(replaceable) -#: ./doc/install-guide/section_keystone-verify.xml35(replaceable) -#: ./doc/install-guide/section_keystone-verify.xml49(replaceable) +#: ./doc/install-guide/section_keystone-verify.xml19(replaceable) +#: ./doc/install-guide/section_keystone-verify.xml38(replaceable) +#: ./doc/install-guide/section_keystone-verify.xml60(replaceable) +#: ./doc/install-guide/section_keystone-verify.xml77(replaceable) #: ./doc/install-guide/section_trove-install.xml126(replaceable) #: ./doc/install-guide/section_trove-install.xml155(replaceable) #: ./doc/install-guide/section_trove-install.xml206(replaceable) @@ -1191,6 +1193,8 @@ msgid "Create the demo user:" msgstr "demo ユーザーを作成します。" #: ./doc/install-guide/section_keystone-users.xml143(replaceable) +#: 
./doc/install-guide/section_keystone-verify.xml89(replaceable) +#: ./doc/install-guide/section_keystone-verify.xml101(replaceable) #: ./doc/install-guide/ch_clients.xml40(replaceable) msgid "DEMO_PASS" msgstr "DEMO_PASS" @@ -1366,6 +1370,7 @@ msgid "On the volume node:" msgstr "ボリュームノードで:" #: ./doc/install-guide/section_glance-verify.xml5(title) +#: ./doc/install-guide/section_keystone-verify.xml7(title) #: ./doc/install-guide/section_heat-verify.xml5(title) #: ./doc/install-guide/section_basics-ntp.xml94(title) #: ./doc/install-guide/section_nova-verify.xml6(title) @@ -2087,100 +2092,111 @@ msgid "" "environment in the following chapters." msgstr "OpenStack 環境にダッシュボードが追加されました。インスタンスの起動、以降の章に記載されているサービスの環境への追加を実行できます。" -#: ./doc/install-guide/section_keystone-verify.xml6(title) -msgid "Verify the Identity Service installation" -msgstr "Identity Service のインストールの検証" - -#: ./doc/install-guide/section_keystone-verify.xml9(para) +#: ./doc/install-guide/section_keystone-verify.xml8(para) msgid "" -"To verify that the Identity Service is installed and configured correctly, " -"clear the values in the OS_SERVICE_TOKEN and " +"This section describes how to verify operation of the Identity service." +msgstr "" + +#: ./doc/install-guide/section_keystone-verify.xml12(para) +msgid "" +"Unset the temporary OS_SERVICE_TOKEN and " "OS_SERVICE_ENDPOINT environment variables:" -msgstr "Identity Service が正しくインストールされ、設定されていることを確認するためには、OS_SERVICE_TOKEN 環境変数と OS_SERVICE_ENDPOINT 環境変数にある値を削除します。" +msgstr "" -#: ./doc/install-guide/section_keystone-verify.xml15(para) +#: ./doc/install-guide/section_keystone-verify.xml17(para) msgid "" -"These variables, which were used to bootstrap the administrative user and " -"register the Identity Service, are no longer needed." 
-msgstr "管理ユーザーをブートストラップし、Identity Service に登録するために使用された、これらの変数はもはや必要ありません。" +"As the admin tenant and user, request an authentication " +"token:" +msgstr "" -#: ./doc/install-guide/section_keystone-verify.xml20(para) -msgid "You can now use regular user name-based authentication." -msgstr "これで通常のユーザー名による認証を使用できます。" - -#: ./doc/install-guide/section_keystone-verify.xml22(para) +#: ./doc/install-guide/section_keystone-verify.xml21(para) msgid "" -"Request a authentication token by using the admin user " -"and the password you chose for that user:" -msgstr "admin ユーザーと、そのユーザー用に選択したパスワードを使用して認証トークンを要求します。" +"Replace ADMIN_PASS with the password you chose " +"for the admin user in the Identity service. You might " +"need to use single quotes (') around your password if it includes special " +"characters." +msgstr "" -#: ./doc/install-guide/section_keystone-verify.xml27(para) +#: ./doc/install-guide/section_keystone-verify.xml25(para) msgid "" -"In response, you receive a token paired with your user ID. This verifies " -"that the Identity Service is running on the expected endpoint and that your " -"user account is established with the expected credentials." -msgstr "応答で、ユーザー ID とペアになったトークンを受け取ります。これにより、Identity Service が期待したエンドポイントで実行されていて、ユーザーアカウントが期待したクレデンシャルで確立されていることを検証できます。" +"Lengthy output that includes a token value verifies operation for the " +"admin tenant and user." +msgstr "" -#: ./doc/install-guide/section_keystone-verify.xml33(para) +#: ./doc/install-guide/section_keystone-verify.xml29(para) msgid "" -"Verify that authorization behaves as expected. 
To do so, request " -"authorization on a tenant:" -msgstr "認可が期待したとおり動作することを検証します。そうするために、プロジェクトで認可を要求します。" +"As the admin tenant and user, list tenants to verify that" +" the admin tenant and user can execute admin-only CLI " +"commands and that the Identity service contains the tenants that you created" +" in :" +msgstr "" -#: ./doc/install-guide/section_keystone-verify.xml38(para) +#: ./doc/install-guide/section_keystone-verify.xml34(para) msgid "" -"In response, you receive a token that includes the ID of the tenant that you" -" specified. This verifies that your user account has an explicitly defined " -"role on the specified tenant and the tenant exists as expected." -msgstr "応答で、指定したプロジェクトの ID を含むトークンを受け取ります。これにより、ユーザーアカウントが指定したプロジェクトで明示的に定義したロールを持ち、プロジェクトが期待したとおりに存在することを検証します。" +"As the admin tenant and user, list tenants to verify that" +" the admin tenant and user can execute admin-only CLI " +"commands and that the Identity service contains the tenants created by the " +"configuration tool:" +msgstr "" -#: ./doc/install-guide/section_keystone-verify.xml44(para) +#: ./doc/install-guide/section_keystone-verify.xml53(para) msgid "" -"You can also set your --os-* variables in your " -"environment to simplify command-line usage. Set up a admin-" -"openrc.sh file with the admin credentials and admin endpoint:" -msgstr "コマンドラインの使用を簡単にするために、お使いの環境で --os-* 変数を設定することもできます。admin クレデンシャルと admin エンドポイントを用いて admin-openrc.sh ファイルをセットアップします。" +"As the admin tenant and user, list users to verify that " +"the Identity service contains the users that you created in :" +msgstr "" -#: ./doc/install-guide/section_keystone-verify.xml54(para) -msgid "Source this file to read in the environment variables:" -msgstr "環境変数を読み込むために、このファイルを source します。" - -#: ./doc/install-guide/section_keystone-verify.xml59(para) +#: ./doc/install-guide/section_keystone-verify.xml57(para) msgid "" -"Verify that your admin-openrc.sh file is configured " -"correctly. 
Run the same command without the --os-* " -"arguments:" -msgstr "admin-openrc.sh ファイルが正しく設定されていることを検証します。同じコマンドを --os-* 引数なしで実行します。" +"As the admin tenant and user, list users to verify that " +"the Identity service contains the users created by the configuration tool:" +msgstr "" -#: ./doc/install-guide/section_keystone-verify.xml63(para) +#: ./doc/install-guide/section_keystone-verify.xml70(para) msgid "" -"The command returns a token and the ID of the specified tenant. This " -"verifies that you have configured your environment variables correctly." -msgstr "コマンドはトークンと指定されたプロジェクトの ID を返します。これにより、環境変数が正しく設定されていることを確認します。" +"As the admin tenant and user, list roles to verify that " +"the Identity service contains the role that you created in :" +msgstr "" -#: ./doc/install-guide/section_keystone-verify.xml68(para) +#: ./doc/install-guide/section_keystone-verify.xml74(para) msgid "" -"Verify that your admin account has authorization to perform administrative " -"commands:" -msgstr "admin アカウントが管理コマンドを実行する権限があることを検証します。" +"As the admin tenant and user, list roles to verify that " +"the Identity service contains the role created by the configuration tool:" +msgstr "" -#: ./doc/install-guide/section_keystone-verify.xml85(para) +#: ./doc/install-guide/section_keystone-verify.xml87(para) msgid "" -"Seeing that the id in the output from the " -" command matches the user_id in the " -" command, and that the admin role is listed for that user, " -"for the related tenant, this verifies that your user account has the " -"admin role, which matches the role used in the Identity " -"Service policy.json file." 
-msgstr " コマンドの出力の id コマンドの user_id と同じであること、admin ロールがそのユーザー、その関連プロジェクトに対して表示されることを確認します。これにより、お使いのユーザーアカウントが admin ロールを持つことを確認します。このロールは Identity Service の policy.json ファイルで使用されるロールと対応します。" +"As the demo tenant and user, request an authentication " +"token:" +msgstr "" -#: ./doc/install-guide/section_keystone-verify.xml95(para) +#: ./doc/install-guide/section_keystone-verify.xml91(para) msgid "" -"As long as you define your credentials and the Identity Service endpoint " -"through the command line or environment variables, you can run all OpenStack" -" client commands from any machine. For details, see ." -msgstr "コマンドラインや環境変数経由でクレデンシャルと Identity Service エンドポイントを定義する限り、すべてのマシンからすべての OpenStack クライアントコマンドを実行できます。詳細は を参照してください。" +"Replace DEMO_PASS with the password you chose for" +" the demo user in the Identity service." +msgstr "" + +#: ./doc/install-guide/section_keystone-verify.xml94(para) +msgid "" +"Lengthy output that includes a token value verifies operation for the " +"demo tenant and user." +msgstr "" + +#: ./doc/install-guide/section_keystone-verify.xml98(para) +msgid "" +"As the demo tenant and user, attempt to list users to " +"verify that you cannot execute admin-only CLI commands:" +msgstr "" + +#: ./doc/install-guide/section_keystone-verify.xml105(para) +msgid "" +"Each OpenStack service references a policy.json file to" +" determine the operations available to a particular tenant, user, or role. " +"For more information, see the Operations Guide - Managing " +"Projects and Users." +msgstr "" #: ./doc/install-guide/section_basics-packages.xml7(title) msgid "OpenStack packages"