Imported Translations from Transifex
Change-Id: Ic50283ff6c23a46375fda48bae161898bdeb126b
@@ -1,7 +1,7 @@
|
||||
msgid ""
|
||||
msgstr ""
|
||||
"Project-Id-Version: PACKAGE VERSION\n"
|
||||
"POT-Creation-Date: 2014-04-06 06:25+0000\n"
|
||||
"POT-Creation-Date: 2014-04-07 06:26+0000\n"
|
||||
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
|
||||
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
|
||||
"Language-Team: LANGUAGE <LL@li.org>\n"
|
||||
@@ -2774,6 +2774,82 @@ msgstr ""
|
||||
msgid "Admin users can specify an exact compute node to run on using the command <placeholder-1/>"
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-configure-service-groups.xml:15(title)
|
||||
msgid "Configure Compute service groups"
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-configure-service-groups.xml:16(para)
|
||||
msgid "To effectively manage and utilize compute nodes, the Compute service must know their statuses. For example, when a user launches a new VM, the Compute scheduler sends the request to a live node; the Compute service queries the ServiceGroup API to get information about whether a node is alive."
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-configure-service-groups.xml:20(para)
|
||||
msgid "When a compute worker (running the <systemitem class=\"service\">nova-compute</systemitem> daemon) starts, it calls the <systemitem>join</systemitem> API to join the compute group. Any interested service (for example, the scheduler) can query the group's membership and the status of its nodes. Internally, the <systemitem>ServiceGroup</systemitem> client driver automatically updates the compute worker status."
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-configure-service-groups.xml:25(para)
|
||||
msgid "The database, ZooKeeper, and Memcache drivers are available."
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-configure-service-groups.xml:27(title)
|
||||
msgid "Database ServiceGroup driver"
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-configure-service-groups.xml:28(para)
|
||||
msgid "By default, Compute uses the database driver to track node liveness. In a compute worker, this driver periodically sends a <placeholder-1/> command to the database, saying <quote>I'm OK</quote> with a timestamp. Compute uses a pre-defined timeout (<literal>service_down_time</literal>) to determine whether a node is dead."
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-configure-service-groups.xml:32(para)
|
||||
msgid "The driver has limitations, which can be an issue depending on your setup. The more compute worker nodes that you have, the more pressure you put on the database. By default, the timeout is 60 seconds so it might take some time to detect node failures. You could reduce the timeout value, but you must also make the database update more frequently, which again increases the database workload."
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-configure-service-groups.xml:37(para)
|
||||
msgid "The database contains data that is both transient (whether the node is alive) and persistent (for example, entries for VM owners). With the ServiceGroup abstraction, Compute can treat each type separately."
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-configure-service-groups.xml:42(title)
|
||||
msgid "ZooKeeper ServiceGroup driver"
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-configure-service-groups.xml:43(para)
|
||||
msgid "The ZooKeeper ServiceGroup driver works by using ZooKeeper ephemeral nodes. ZooKeeper, in contrast to databases, is a distributed system. Its load is divided among several servers. At a compute worker node, after establishing a ZooKeeper session, the driver creates an ephemeral znode in the group directory. Ephemeral znodes have the same lifespan as the session. If the worker node or the <systemitem class=\"service\">nova-compute</systemitem> daemon crashes, or a network partition is in place between the worker and the ZooKeeper server quorums, the ephemeral znodes are removed automatically. The driver gets the group membership by running the <placeholder-1/> command in the group directory."
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-configure-service-groups.xml:53(para)
|
||||
msgid "To use the ZooKeeper driver, you must install ZooKeeper servers and client libraries. Setting up ZooKeeper servers is outside the scope of this guide (for more information, see <link href=\"http://zookeeper.apache.org/\">Apache Zookeeper</link>)."
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-configure-service-groups.xml:57(para)
|
||||
msgid "To use ZooKeeper, you must install client-side Python libraries on every nova node: <literal>python-zookeeper</literal> – the official Zookeeper Python binding and <literal>evzookeeper</literal> – the library to make the binding work with the eventlet threading model."
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-configure-service-groups.xml:61(para)
|
||||
msgid "The following example assumes the ZooKeeper server addresses and ports are <literal>192.168.2.1:2181</literal>, <literal>192.168.2.2:2181</literal>, and <literal>192.168.2.3:2181</literal>."
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-configure-service-groups.xml:64(para)
|
||||
msgid "The following values in the <filename>/etc/nova/nova.conf</filename> file (on every node) are required for the <systemitem>ZooKeeper</systemitem> driver:"
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-configure-service-groups.xml:71(para)
|
||||
msgid "To customize the Compute Service groups, use the following configuration option settings:"
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-configure-service-groups.xml:76(title)
|
||||
msgid "Memcache ServiceGroup driver"
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-configure-service-groups.xml:77(para)
|
||||
msgid "The <systemitem>memcache</systemitem> ServiceGroup driver uses memcached, which is a distributed memory object caching system that is often used to increase site performance. For more details, see <link href=\"http://memcached.org/\">memcached.org</link>."
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-configure-service-groups.xml:81(para)
|
||||
msgid "To use the <systemitem>memcache</systemitem> driver, you must install <systemitem>memcached</systemitem>. However, because <systemitem>memcached</systemitem> is often used for both OpenStack Object Storage and OpenStack dashboard, it might already be installed. If <systemitem>memcached</systemitem> is not installed, refer to the <link href=\"http://docs.openstack.org/havana/install-guide/contents\"><citetitle>OpenStack Installation Guide</citetitle></link> for more information."
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-configure-service-groups.xml:89(para)
|
||||
msgid "The following values in the <filename>/etc/nova/nova.conf</filename> file (on every node) are required for the <systemitem>memcache</systemitem> driver:"
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-instance-mgt-tools.xml:7(title)
|
||||
msgid "Instance management tools"
|
||||
msgstr ""
|
||||
@@ -3190,363 +3266,363 @@ msgstr ""
|
||||
msgid "Ensure instances are migrated successfully with <placeholder-1/>. If instances are still running on HostB, check log files (src/dest <systemitem class=\"service\">nova-compute</systemitem> and <systemitem class=\"service\">nova-scheduler</systemitem>) to determine why. <placeholder-2/>"
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:502(title)
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:504(title)
|
||||
msgid "Recover from a failed compute node"
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:503(para)
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:505(para)
|
||||
msgid "If you have deployed Compute with a shared file system, you can quickly recover from a failed compute node. Of the two methods covered in these sections, the evacuate API is the preferred method even in the absence of shared storage. The evacuate API provides many benefits over manual recovery, such as re-attachment of volumes and floating IPs."
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:512(title)
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:514(title)
|
||||
msgid "Manual recovery"
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:513(para)
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:515(para)
|
||||
msgid "For KVM/libvirt compute node recovery, see the previous section. Use the following procedure for all other hypervisors."
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:516(title)
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:518(title)
|
||||
msgid "To work with host information"
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:518(para)
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:520(para)
|
||||
msgid "Identify the VMs on the affected hosts, using tools such as a combination of <literal>nova list</literal> and <literal>nova show</literal> or <literal>euca-describe-instances</literal>. Here's an example using the EC2 API - instance i-000015b9 that is running on node np-rcc54:"
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:525(para)
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:527(para)
|
||||
msgid "You can review the status of the host by using the Compute database. Some of the important information is highlighted below. This example converts an EC2 API instance ID into an OpenStack ID; if you used the <literal>nova</literal> commands, you can substitute the ID directly. You can find the credentials for your database in <filename>/etc/nova.conf</filename>."
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:552(title)
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:554(title)
|
||||
msgid "To recover the VM"
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:554(para)
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:556(para)
|
||||
msgid "When you know the status of the VM on the failed host, determine to which compute host the affected VM should be moved. For example, run the following database command to move the VM to np-rcc46:"
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:560(para)
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:562(para)
|
||||
msgid "If using a hypervisor that relies on libvirt (such as KVM), it is a good idea to update the <literal>libvirt.xml</literal> file (found in <literal>/var/lib/nova/instances/[instance ID]</literal>). The important changes to make are:"
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:567(para)
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:569(para)
|
||||
msgid "Change the <literal>DHCPSERVER</literal> value to the host IP address of the compute host that is now the VM's new home."
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:572(para)
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:574(para)
|
||||
msgid "Update the VNC IP if it isn't already to: <literal>0.0.0.0</literal>."
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:579(para)
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:581(para)
|
||||
msgid "Reboot the VM:"
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:583(para)
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:585(para)
|
||||
msgid "In theory, the above database update and <literal>nova reboot</literal> command are all that is required to recover a VM from a failed host. However, if further problems occur, consider looking at recreating the network filter configuration using <literal>virsh</literal>, restarting the Compute services or updating the <literal>vm_state</literal> and <literal>power_state</literal> in the Compute database."
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:592(title)
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:594(title)
|
||||
msgid "Recover from a UID/GID mismatch"
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:593(para)
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:595(para)
|
||||
msgid "When running OpenStack compute, using a shared file system or an automated configuration tool, you could encounter a situation where some files on your compute node are using the wrong UID or GID. This causes a raft of errors, such as being unable to live migrate, or start virtual machines."
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:599(para)
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:601(para)
|
||||
msgid "The following procedure runs on <systemitem class=\"service\">nova-compute</systemitem> hosts, based on the KVM hypervisor, and could help to restore the situation:"
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:603(title)
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:605(title)
|
||||
msgid "To recover from a UID/GID mismatch"
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:605(para)
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:607(para)
|
||||
msgid "Ensure you don't use numbers that are already used for some other user/group."
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:609(para)
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:611(para)
|
||||
msgid "Set the nova uid in <filename>/etc/passwd</filename> to the same number in all hosts (for example, 112)."
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:613(para)
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:615(para)
|
||||
msgid "Set the libvirt-qemu uid in <filename>/etc/passwd</filename> to the same number in all hosts (for example, 119)."
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:619(para)
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:621(para)
|
||||
msgid "Set the nova group in <filename>/etc/group</filename> file to the same number in all hosts (for example, 120)."
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:625(para)
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:627(para)
|
||||
msgid "Set the libvirtd group in <filename>/etc/group</filename> file to the same number in all hosts (for example, 119)."
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:631(para)
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:633(para)
|
||||
msgid "Stop the services on the compute node."
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:635(para)
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:637(para)
|
||||
msgid "Change all the files owned by user nova or by group nova. For example:"
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:641(para)
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:643(para)
|
||||
msgid "Repeat the steps for the libvirt-qemu owned files if those needed to change."
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:645(para)
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:647(para)
|
||||
msgid "Restart the services."
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:648(para)
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:650(para)
|
||||
msgid "Now you can run the <placeholder-1/> command to verify that all files using the correct identifiers."
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:655(title)
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:657(title)
|
||||
msgid "Compute disaster recovery process"
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:656(para)
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:658(para)
|
||||
msgid "Use the following procedures to manage your cloud after a disaster, and to easily back up its persistent storage volumes. Backups <emphasis role=\"bold\">are</emphasis> mandatory, even outside of disaster scenarios."
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:659(para)
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:661(para)
|
||||
msgid "For a DRP definition, see <link href=\"http://en.wikipedia.org/wiki/Disaster_Recovery_Plan\">http://en.wikipedia.org/wiki/Disaster_Recovery_Plan</link>."
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:663(title)
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:665(title)
|
||||
msgid "A- The disaster recovery process presentation"
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:665(para)
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:667(para)
|
||||
msgid "A disaster could happen to several components of your architecture: a disk crash, a network loss, a power cut, and so on. In this example, assume the following set up:"
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:671(para)
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:673(para)
|
||||
msgid "A cloud controller (<systemitem>nova-api</systemitem>, <systemitem>nova-objecstore</systemitem>, <systemitem>nova-network</systemitem>)"
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:676(para)
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:678(para)
|
||||
msgid "A compute node (<systemitem class=\"service\">nova-compute</systemitem>)"
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:681(para)
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:683(para)
|
||||
msgid "A Storage Area Network used by <systemitem class=\"service\">cinder-volumes</systemitem> (aka SAN)"
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:687(para)
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:689(para)
|
||||
msgid "The disaster example is the worst one: a power loss. That power loss applies to the three components. <emphasis role=\"italic\">Let's see what runs and how it runs before the crash</emphasis>:"
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:694(para)
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:696(para)
|
||||
msgid "From the SAN to the cloud controller, we have an active iscsi session (used for the \"cinder-volumes\" LVM's VG)."
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:699(para)
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:701(para)
|
||||
msgid "From the cloud controller to the compute node, we also have active iscsi sessions (managed by <systemitem class=\"service\">cinder-volume</systemitem>)."
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:704(para)
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:706(para)
|
||||
msgid "For every volume, an iscsi session is made (so 14 ebs volumes equals 14 sessions)."
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:708(para)
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:710(para)
|
||||
msgid "From the cloud controller to the compute node, we also have iptables/ ebtables rules which allow access from the cloud controller to the running instance."
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:713(para)
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:715(para)
|
||||
msgid "And at least, from the cloud controller to the compute node; saved into database, the current state of the instances (in that case \"running\" ), and their volumes attachment (mount point, volume ID, volume status, and so on.)"
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:719(para)
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:721(para)
|
||||
msgid "Now, after the power loss occurs and all hardware components restart, the situation is as follows:"
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:724(para)
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:726(para)
|
||||
msgid "From the SAN to the cloud, the ISCSI session no longer exists."
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:728(para)
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:730(para)
|
||||
msgid "From the cloud controller to the compute node, the ISCSI sessions no longer exist."
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:733(para)
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:735(para)
|
||||
msgid "From the cloud controller to the compute node, the iptables and ebtables are recreated, since, at boot, <systemitem>nova-network</systemitem> reapplies the configurations."
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:739(para)
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:741(para)
|
||||
msgid "From the cloud controller, instances are in a shutdown state (because they are no longer running)"
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:743(para)
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:745(para)
|
||||
msgid "In the database, data was not updated at all, since Compute could not have anticipated the crash."
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:747(para)
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:749(para)
|
||||
msgid "Before going further, and to prevent the administrator from making fatal mistakes,<emphasis role=\"bold\"> the instances won't be lost</emphasis>, because no \"<placeholder-1/>\" or \"<placeholder-2/>\" command was invoked, so the files for the instances remain on the compute node."
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:752(para)
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:754(para)
|
||||
msgid "Perform these tasks in this exact order. <emphasis role=\"underline\">Any extra step would be dangerous at this stage</emphasis> :"
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:757(para)
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:759(para)
|
||||
msgid "Get the current relation from a volume to its instance, so that you can recreate the attachment."
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:762(para)
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:764(para)
|
||||
msgid "Update the database to clean the stalled state. (After that, you cannot perform the first step)."
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:767(para)
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:769(para)
|
||||
msgid "Restart the instances. In other words, go from a shutdown to running state."
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:772(para)
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:774(para)
|
||||
msgid "After the restart, reattach the volumes to their respective instances (optional)."
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:776(para)
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:778(para)
|
||||
msgid "SSH into the instances to reboot them."
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:782(title)
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:784(title)
|
||||
msgid "B - Disaster recovery"
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:784(title)
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:786(title)
|
||||
msgid "To perform disaster recovery"
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:786(title)
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:788(title)
|
||||
msgid "Get the instance-to-volume relationship"
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:788(para)
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:790(para)
|
||||
msgid "You must get the current relationship from a volume to its instance, because you will re-create the attachment."
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:790(para)
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:792(para)
|
||||
msgid "You can find this relationship by running <placeholder-1/>. Note that the <placeholder-2/> client includes the ability to get volume information from Block Storage."
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:795(title)
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:797(title)
|
||||
msgid "Update the database"
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:796(para)
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:798(para)
|
||||
msgid "Update the database to clean the stalled state. You must restore for every volume, using these queries to clean up the database:"
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:803(para)
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:805(para)
|
||||
msgid "Then, when you run <placeholder-1/> commands, all volumes appear in the listing."
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:807(title)
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:809(title)
|
||||
msgid "Restart instances"
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:809(replaceable)
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:811(replaceable)
|
||||
msgid "$instance"
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:808(para)
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:810(para)
|
||||
msgid "Restart the instances using the <placeholder-1/> command."
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:810(para)
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:812(para)
|
||||
msgid "At this stage, depending on your image, some instances completely reboot and become reachable, while others stop on the \"plymouth\" stage."
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:815(title)
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:817(title)
|
||||
msgid "DO NOT reboot a second time"
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:816(para)
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:818(para)
|
||||
msgid "Do not reboot instances that are stopped at this point. Instance state depends on whether you added an <filename>/etc/fstab</filename> entry for that volume. Images built with the <package>cloud-init</package> package remain in a pending state, while others skip the missing volume and start. The idea of that stage is only to ask nova to reboot every instance, so the stored state is preserved. For more information about <package>cloud-init</package>, see <link href=\"https://help.ubuntu.com/community/CloudInit\">help.ubuntu.com/community/CloudInit</link>."
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:827(title)
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:829(title)
|
||||
msgid "Reattach volumes"
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:828(para)
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:830(para)
|
||||
msgid "After the restart, you can reattach the volumes to their respective instances. Now that <placeholder-1/> has restored the right status, it is time to perform the attachments through a <placeholder-2/>"
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:832(para)
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:834(para)
|
||||
msgid "This simple snippet uses the created file:"
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:844(para)
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:846(para)
|
||||
msgid "At that stage, instances that were pending on the boot sequence (<emphasis role=\"italic\">plymouth</emphasis>) automatically continue their boot, and restart normally, while the ones that booted see the volume."
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:852(title)
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:854(title)
|
||||
msgid "SSH into instances"
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:853(para)
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:855(para)
|
||||
msgid "If some services depend on the volume, or if a volume has an entry into <systemitem>fstab</systemitem>, it could be good to simply restart the instance. This restart needs to be made from the instance itself, not through <placeholder-1/>. So, we SSH into the instance and perform a reboot:"
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:861(para)
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:863(para)
|
||||
msgid "By completing this procedure, you can successfully recover your cloud."
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:864(para)
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:866(para)
|
||||
msgid "Follow these guidelines:"
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:867(para)
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:869(para)
|
||||
msgid "Use the <parameter> errors=remount</parameter> parameter in the <filename>fstab</filename> file, which prevents data corruption."
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:870(para)
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:872(para)
|
||||
msgid "The system locks any write to the disk if it detects an I/O error. This configuration option should be added into the <systemitem class=\"service\">cinder-volume</systemitem> server (the one which performs the ISCSI connection to the SAN), but also into the instances' <filename>fstab</filename> file."
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:877(para)
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:879(para)
|
||||
msgid "Do not add the entry for the SAN's disks to the <systemitem class=\"service\">cinder-volume</systemitem>'s <filename>fstab</filename> file."
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:880(para)
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:882(para)
|
||||
msgid "Some systems hang on that step, which means you could lose access to your cloud-controller. To re-run the session manually, you would run the following command before performing the mount: <placeholder-1/>"
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:886(para)
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:888(para)
|
||||
msgid "For your instances, if you have the whole <filename>/home/</filename> directory on the disk, instead of emptying the <filename>/home</filename> directory and map the disk on it, leave a user's directory with the user's bash files and the <filename>authorized_keys</filename> file."
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:891(para)
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:893(para)
|
||||
msgid "This enables you to connect to the instance, even without the volume attached, if you allow only connections through public keys."
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:898(title)
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:900(title)
|
||||
msgid "C - Scripted DRP"
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:900(title)
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:902(title)
|
||||
msgid "To use scripted DRP"
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:901(para)
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:903(para)
|
||||
msgid "You can download from <link href=\"https://github.com/Razique/BashStuff/blob/master/SYSTEMS/OpenStack/SCR_5006_V00_NUAC-OPENSTACK-DRP-OpenStack.sh\">here</link> a bash script which performs these steps:"
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:906(para)
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:908(para)
|
||||
msgid "The \"test mode\" allows you to perform that whole sequence for only one instance."
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:911(para)
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:913(para)
|
||||
msgid "To reproduce the power loss, connect to the compute node which runs that same instance and close the iscsi session. <emphasis role=\"underline\">Do not detach the volume through <placeholder-1/></emphasis>, but instead manually close the iscsi session."
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:922(para)
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:924(para)
|
||||
msgid "In this example, the iscsi session is number 15 for that instance:"
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:927(para)
|
||||
#: ./doc/admin-guide-cloud/compute/section_compute-system-admin.xml:929(para)
|
||||
msgid "Do not forget the <literal>-r</literal> flag. Otherwise, you close ALL sessions."
|
||||
msgstr ""
|
||||
|
||||
|
@@ -1,7 +1,7 @@
|
||||
msgid ""
|
||||
msgstr ""
|
||||
"Project-Id-Version: PACKAGE VERSION\n"
|
||||
"POT-Creation-Date: 2014-04-06 06:25+0000\n"
|
||||
"POT-Creation-Date: 2014-04-07 06:26+0000\n"
|
||||
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
|
||||
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
|
||||
"Language-Team: LANGUAGE <LL@li.org>\n"
|
||||
|
@@ -1,7 +1,7 @@
|
||||
msgid ""
|
||||
msgstr ""
|
||||
"Project-Id-Version: PACKAGE VERSION\n"
|
||||
"POT-Creation-Date: 2014-04-06 06:26+0000\n"
|
||||
"POT-Creation-Date: 2014-04-07 06:27+0000\n"
|
||||
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
|
||||
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
|
||||
"Language-Team: LANGUAGE <LL@li.org>\n"
|
||||
@@ -744,15 +744,11 @@ msgid "The proxy initiates the connection to VNC server and continues to proxy u
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/common/section_compute-configure-vnc.xml:43(para)
|
||||
msgid "The proxy also tunnels the VNC protocol over WebSockets so that the noVNC client can talk VNC."
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/common/section_compute-configure-vnc.xml:45(para)
|
||||
msgid "In general, the VNC proxy:"
|
||||
msgid "The proxy also tunnels the VNC protocol over WebSockets so that the <systemitem>noVNC</systemitem> client can talk to VNC servers. In general, the VNC proxy:"
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/common/section_compute-configure-vnc.xml:48(para)
|
||||
msgid "Bridges between the public network where the clients live and the private network where vncservers live."
|
||||
msgid "Bridges between the public network where the clients live and the private network where VNC servers live."
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/common/section_compute-configure-vnc.xml:52(para)
|
||||
@@ -808,111 +804,107 @@ msgid "VNC configuration options"
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/common/section_compute-configure-vnc.xml:121(para)
|
||||
msgid "To customize the VNC console, use the configuration option settings documented in <xref linkend=\"config_table_nova_vnc\"/>."
|
||||
msgid "To customize the VNC console, use the following configuration options:"
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/common/section_compute-configure-vnc.xml:124(para)
|
||||
msgid "To support <link href=\"http://docs.openstack.org/trunk/config-reference/content/configuring-openstack-compute-basics.html#section_configuring-compute-migrations\">live migration</link>, you cannot specify a specific IP address for <literal>vncserver_listen</literal>, because that IP address does not exist on the destination host."
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/common/section_compute-configure-vnc.xml:131(para)
|
||||
msgid "The <literal>vncserver_proxyclient_address</literal> defaults to <literal>127.0.0.1</literal>, which is the address of the compute host that nova instructs proxies to use when connecting to instance servers."
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/common/section_compute-configure-vnc.xml:135(para)
|
||||
msgid "For all-in-one XenServer domU deployments, set this to 169.254.0.1."
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/common/section_compute-configure-vnc.xml:137(para)
|
||||
msgid "For multi-host XenServer domU deployments, set to a dom0 management IP on the same network as the proxies."
|
||||
#: ./doc/common/section_compute-configure-vnc.xml:134(para)
|
||||
msgid "The <literal>vncserver_proxyclient_address</literal> defaults to <literal>127.0.0.1</literal>, which is the address of the compute host that Compute instructs proxies to use when connecting to instance servers."
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/common/section_compute-configure-vnc.xml:139(para)
|
||||
msgid "For all-in-one XenServer domU deployments, set this to 169.254.0.1."
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/common/section_compute-configure-vnc.xml:140(para)
|
||||
msgid "For multi-host XenServer domU deployments, set to a dom0 management IP on the same network as the proxies."
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/common/section_compute-configure-vnc.xml:142(para)
|
||||
msgid "For multi-host libvirt deployments, set to a host management IP on the same network as the proxies."
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/common/section_compute-configure-vnc.xml:145(title)
|
||||
#: ./doc/common/section_compute-configure-vnc.xml:150(title)
|
||||
msgid "nova-novncproxy (noVNC)"
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/common/section_compute-configure-vnc.xml:147(para)
|
||||
msgid "You must install the noVNC package, which contains the <systemitem class=\"service\">nova-novncproxy</systemitem> service."
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/common/section_compute-configure-vnc.xml:150(para)
|
||||
msgid "As root, run the following command:"
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/common/section_compute-configure-vnc.xml:152(para)
|
||||
msgid "You must install the <package>noVNC</package> package, which contains the <systemitem class=\"service\">nova-novncproxy</systemitem> service. As root, run the following command:"
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/common/section_compute-configure-vnc.xml:156(para)
|
||||
msgid "The service starts automatically on installation."
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/common/section_compute-configure-vnc.xml:153(para)
|
||||
msgid "To restart it, run the following command:"
|
||||
#: ./doc/common/section_compute-configure-vnc.xml:157(para)
|
||||
msgid "To restart the service, run:"
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/common/section_compute-configure-vnc.xml:155(para)
|
||||
#: ./doc/common/section_compute-configure-vnc.xml:159(para)
|
||||
msgid "The configuration option parameter should point to your <filename>nova.conf</filename> file, which includes the message queue server address and credentials."
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/common/section_compute-configure-vnc.xml:158(para)
|
||||
#: ./doc/common/section_compute-configure-vnc.xml:162(para)
|
||||
msgid "By default, <systemitem class=\"service\">nova-novncproxy</systemitem> binds on <literal>0.0.0.0:6080</literal>."
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/common/section_compute-configure-vnc.xml:161(para)
|
||||
msgid "To connect the service to your nova deployment, add the following configuration options to your <filename>nova.conf</filename> file:"
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/common/section_compute-configure-vnc.xml:166(para)
|
||||
msgid "<literal>vncserver_listen</literal>=<replaceable>0.0.0.0</replaceable>"
|
||||
#: ./doc/common/section_compute-configure-vnc.xml:165(para)
|
||||
msgid "To connect the service to your Compute deployment, add the following configuration options to your <filename>nova.conf</filename> file:"
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/common/section_compute-configure-vnc.xml:169(para)
|
||||
msgid "<literal>vncserver_listen</literal>=<replaceable>0.0.0.0</replaceable>"
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/common/section_compute-configure-vnc.xml:172(para)
|
||||
msgid "Specifies the address on which the VNC service should bind. Make sure it is assigned one of the compute node interfaces. This address is the one used by your domain file."
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/common/section_compute-configure-vnc.xml:175(para)
|
||||
#: ./doc/common/section_compute-configure-vnc.xml:178(para)
|
||||
msgid "To use live migration, use the <replaceable>0.0.0.0</replaceable> address."
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/common/section_compute-configure-vnc.xml:180(para)
|
||||
#: ./doc/common/section_compute-configure-vnc.xml:183(para)
|
||||
msgid "<literal>vncserver_ proxyclient_ address </literal>=<replaceable>127.0.0.1</replaceable>"
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/common/section_compute-configure-vnc.xml:184(para)
|
||||
msgid "The address of the compute host that nova instructs proxies to use when connecting to instance <literal>vncservers</literal>."
|
||||
#: ./doc/common/section_compute-configure-vnc.xml:187(para)
|
||||
msgid "The address of the compute host that Compute instructs proxies to use when connecting to instance <literal>vncservers</literal>."
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/common/section_compute-configure-vnc.xml:192(title)
|
||||
#: ./doc/common/section_compute-configure-vnc.xml:194(title)
|
||||
msgid "Frequently asked questions about VNC access to virtual machines"
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/common/section_compute-configure-vnc.xml:198(literal)
|
||||
#: ./doc/common/section_compute-configure-vnc.xml:200(literal)
|
||||
msgid "nova-xvpvncproxy"
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/common/section_compute-configure-vnc.xml:199(systemitem)
|
||||
#: ./doc/common/section_compute-configure-vnc.xml:201(systemitem)
|
||||
msgid "nova-novncproxy"
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/common/section_compute-configure-vnc.xml:197(emphasis)
|
||||
#: ./doc/common/section_compute-configure-vnc.xml:199(emphasis)
|
||||
msgid "Q: What is the difference between <placeholder-1/> and <placeholder-2/>?"
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/common/section_compute-configure-vnc.xml:201(para)
|
||||
msgid "A: <literal>nova-xvpvncproxy</literal>, which ships with nova, is a proxy that supports a simple Java client. <systemitem class=\"service\">nova-novncproxy</systemitem> uses noVNC to provide VNC support through a web browser."
|
||||
#: ./doc/common/section_compute-configure-vnc.xml:203(para)
|
||||
msgid "A: <literal>nova-xvpvncproxy</literal>, which ships with OpenStack Compute, is a proxy that supports a simple Java client. <systemitem class=\"service\">nova-novncproxy</systemitem> uses noVNC to provide VNC support through a web browser."
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/common/section_compute-configure-vnc.xml:208(emphasis)
|
||||
msgid "Q: I want VNC support in the Dashboard. What services do I need?"
|
||||
#: ./doc/common/section_compute-configure-vnc.xml:209(emphasis)
|
||||
msgid "Q: I want VNC support in the OpenStack dashboard. What services do I need?"
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/common/section_compute-configure-vnc.xml:210(para)
|
||||
#: ./doc/common/section_compute-configure-vnc.xml:211(para)
|
||||
msgid "A: You need <systemitem class=\"service\">nova-novncproxy</systemitem>, <systemitem class=\"service\">nova-consoleauth</systemitem>, and correctly configured compute hosts."
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/common/section_compute-configure-vnc.xml:216(emphasis)
|
||||
msgid "Q: When I use <placeholder-1/> or click on the VNC tab of the Dashboard, it hangs. Why?"
|
||||
#: ./doc/common/section_compute-configure-vnc.xml:217(emphasis)
|
||||
msgid "Q: When I use <placeholder-1/> or click on the VNC tab of the OpenStack dashboard, it hangs. Why?"
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/common/section_compute-configure-vnc.xml:219(para)
|
||||
@@ -11984,7 +11976,7 @@ msgid "VNC must be explicitly disabled to get access to the SPICE console. Set t
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/common/section_compute-configure-spice.xml:22(para)
|
||||
msgid "<xref linkend=\"config_table_nova_spice\"/> documents the options to configure SPICE as the console for OpenStack Compute."
|
||||
msgid "Use the following options to configure SPICE as the console for OpenStack Compute:"
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/common/section_dashboard-configure.xml:6(title)
|
||||
@@ -17864,7 +17856,7 @@ msgstr ""
|
||||
msgid "rabbit_ha_queues = False"
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/common/tables/nova-rpc_all.xml:103(td) ./doc/common/tables/cinder-rpc.xml:135(td) ./doc/common/tables/glance-rabbitmq.xml:51(td) ./doc/common/tables/ceilometer-rabbitmq.xml:43(td) ./doc/common/tables/nova-rabbitmq.xml:23(td)
|
||||
#: ./doc/common/tables/nova-rpc_all.xml:103(td) ./doc/common/tables/cinder-rpc.xml:135(td) ./doc/common/tables/glance-rabbitmq.xml:51(td) ./doc/common/tables/neutron-rabbitmq.xml:23(td) ./doc/common/tables/ceilometer-rabbitmq.xml:43(td) ./doc/common/tables/nova-rabbitmq.xml:23(td)
|
||||
msgid "(BoolOpt) Use HA queues in RabbitMQ (x-ha-policy: all). If you change this option, you must wipe the RabbitMQ database."
|
||||
msgstr ""
|
||||
|
||||
@@ -18453,7 +18445,7 @@ msgid "auto_sync_on_failure = True"
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/common/tables/neutron-ml2_bigswitch.xml:86(td) ./doc/common/tables/neutron-bigswitch.xml:86(td)
|
||||
msgid "(BoolOpt) If neutron fails to create a resource because the backend controller doesn't know of a dependency, automatically trigger a full data synchronization to the controller."
|
||||
msgid "(BoolOpt) If neutron fails to create a resource because the backend controller doesn't know of a dependency, the plugin automatically triggers a full data synchronization to the controller."
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/common/tables/neutron-ml2_bigswitch.xml:89(td) ./doc/common/tables/neutron-bigswitch.xml:89(td)
|
||||
@@ -18493,7 +18485,7 @@ msgid "server_auth = None"
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/common/tables/neutron-ml2_bigswitch.xml:106(td) ./doc/common/tables/neutron-bigswitch.xml:106(td)
|
||||
msgid "(StrOpt) The username and password for authenticating against the BigSwitch or Floodlight controller."
|
||||
msgid "(StrOpt) The username and password for authenticating against the Big Switch or Floodlight controller."
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/common/tables/neutron-ml2_bigswitch.xml:109(td) ./doc/common/tables/neutron-bigswitch.xml:109(td)
|
||||
@@ -18501,7 +18493,7 @@ msgid "server_ssl = True"
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/common/tables/neutron-ml2_bigswitch.xml:110(td) ./doc/common/tables/neutron-bigswitch.xml:110(td)
|
||||
msgid "(BoolOpt) If True, Use SSL when connecting to the BigSwitch or Floodlight controller."
|
||||
msgid "(BoolOpt) If True, Use SSL when connecting to the Big Switch or Floodlight controller."
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/common/tables/neutron-ml2_bigswitch.xml:113(td) ./doc/common/tables/neutron-bigswitch.xml:113(td)
|
||||
@@ -18517,7 +18509,7 @@ msgid "servers = localhost:8800"
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/common/tables/neutron-ml2_bigswitch.xml:118(td) ./doc/common/tables/neutron-bigswitch.xml:118(td)
|
||||
msgid "(ListOpt) A comma separated list of BigSwitch or Floodlight servers and port numbers. The plugin proxies the requests to the BigSwitch/Floodlight server, which performs the networking configuration. Only oneserver is needed per deployment, but you may wish todeploy multiple servers to support failover."
|
||||
msgid "(ListOpt) A comma separated list of Big Switch or Floodlight servers and port numbers. The plugin proxies the requests to the Big Switch/Floodlight server, which performs the networking configuration. Only oneserver is needed per deployment, but you may wish todeploy multiple servers to support failover."
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/common/tables/neutron-ml2_bigswitch.xml:121(td) ./doc/common/tables/neutron-bigswitch.xml:121(td)
|
||||
@@ -19056,11 +19048,11 @@ msgstr ""
|
||||
msgid "(StrOpt) DEPRECATED. A logging.Formatter log message format string which may use any of the available logging.LogRecord attributes. This option is deprecated. Please use logging_context_format_string and logging_default_format_string instead."
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/common/tables/trove-logging.xml:54(td) ./doc/common/tables/glance-logging.xml:62(td) ./doc/common/tables/ceilometer-logging.xml:62(td) ./doc/common/tables/neutron-logging.xml:62(td) ./doc/common/tables/heat-logging.xml:54(td) ./doc/common/tables/cinder-common.xml:94(td)
|
||||
#: ./doc/common/tables/trove-logging.xml:54(td) ./doc/common/tables/glance-logging.xml:62(td) ./doc/common/tables/ceilometer-logging.xml:62(td) ./doc/common/tables/neutron-logging.xml:62(td) ./doc/common/tables/heat-logging.xml:54(td) ./doc/common/tables/nova-logging.xml:66(td) ./doc/common/tables/cinder-common.xml:94(td)
|
||||
msgid "logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s"
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/common/tables/trove-logging.xml:55(td) ./doc/common/tables/neutron-logging.xml:63(td) ./doc/common/tables/heat-logging.xml:55(td) ./doc/common/tables/nova-logging.xml:67(td)
|
||||
#: ./doc/common/tables/trove-logging.xml:55(td) ./doc/common/tables/neutron-logging.xml:63(td) ./doc/common/tables/heat-logging.xml:55(td)
|
||||
msgid "(StrOpt) format string to use for log messages with context"
|
||||
msgstr ""
|
||||
|
||||
@@ -19068,7 +19060,7 @@ msgstr ""
|
||||
msgid "logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d"
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/common/tables/trove-logging.xml:59(td) ./doc/common/tables/neutron-logging.xml:67(td) ./doc/common/tables/heat-logging.xml:59(td) ./doc/common/tables/nova-logging.xml:71(td)
|
||||
#: ./doc/common/tables/trove-logging.xml:59(td) ./doc/common/tables/neutron-logging.xml:67(td) ./doc/common/tables/heat-logging.xml:59(td)
|
||||
msgid "(StrOpt) data to append to log format when level is DEBUG"
|
||||
msgstr ""
|
||||
|
||||
@@ -19076,7 +19068,7 @@ msgstr ""
|
||||
msgid "logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s"
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/common/tables/trove-logging.xml:63(td) ./doc/common/tables/neutron-logging.xml:71(td) ./doc/common/tables/heat-logging.xml:63(td) ./doc/common/tables/nova-logging.xml:75(td)
|
||||
#: ./doc/common/tables/trove-logging.xml:63(td) ./doc/common/tables/neutron-logging.xml:71(td) ./doc/common/tables/heat-logging.xml:63(td)
|
||||
msgid "(StrOpt) format string to use for log messages without context"
|
||||
msgstr ""
|
||||
|
||||
@@ -19084,7 +19076,7 @@ msgstr ""
|
||||
msgid "logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d TRACE %(name)s %(instance)s"
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/common/tables/trove-logging.xml:67(td) ./doc/common/tables/neutron-logging.xml:75(td) ./doc/common/tables/heat-logging.xml:67(td) ./doc/common/tables/nova-logging.xml:79(td)
|
||||
#: ./doc/common/tables/trove-logging.xml:67(td) ./doc/common/tables/neutron-logging.xml:75(td) ./doc/common/tables/heat-logging.xml:67(td)
|
||||
msgid "(StrOpt) prefix each line of exception output with this format"
|
||||
msgstr ""
|
||||
|
||||
@@ -19096,7 +19088,7 @@ msgstr ""
|
||||
msgid "publish_errors = False"
|
||||
msgstr ""
|
||||
|
||||
#: ./doc/common/tables/trove-logging.xml:75(td) ./doc/common/tables/heat-notification.xml:27(td) ./doc/common/tables/neutron-logging.xml:79(td) ./doc/common/tables/nova-logging.xml:83(td)
|
||||
#: ./doc/common/tables/trove-logging.xml:75(td) ./doc/common/tables/heat-notification.xml:27(td) ./doc/common/tables/neutron-logging.xml:79(td)
|
||||
msgid "(BoolOpt) publish error events"
|
||||
msgstr ""
|
||||
|
||||
@@ -19104,7 +19096,7 @@ msgstr ""
msgid "syslog-log-facility = LOG_USER"
msgstr ""
#: ./doc/common/tables/trove-logging.xml:79(td) ./doc/common/tables/neutron-logging.xml:83(td) ./doc/common/tables/heat-logging.xml:71(td) ./doc/common/tables/nova-logging.xml:87(td)
#: ./doc/common/tables/trove-logging.xml:79(td) ./doc/common/tables/neutron-logging.xml:83(td) ./doc/common/tables/heat-logging.xml:71(td)
msgid "(StrOpt) syslog facility to receive log lines"
msgstr ""
@@ -19516,7 +19508,7 @@ msgstr ""
msgid "(StrOpt) Certificate file to use when starting the server securely."
msgstr ""
#: ./doc/common/tables/nova-ca.xml:73(td) ./doc/common/tables/heat-clients_nova.xml:38(td) ./doc/common/tables/heat-cfn_api.xml:49(td) ./doc/common/tables/heat-cfn_api.xml:72(td) ./doc/common/tables/heat-clients.xml:45(td) ./doc/common/tables/heat-clients_heat.xml:38(td) ./doc/common/tables/heat-clients_swift.xml:38(td) ./doc/common/tables/heat-cloudwatch_api.xml:45(td) ./doc/common/tables/heat-cloudwatch_api.xml:68(td) ./doc/common/tables/heat-clients_ceilometer.xml:38(td) ./doc/common/tables/heat-clients_trove.xml:38(td) ./doc/common/tables/heat-clients_cinder.xml:38(td) ./doc/common/tables/cinder-ssl.xml:30(td) ./doc/common/tables/heat-api.xml:115(td) ./doc/common/tables/heat-api.xml:149(td) ./doc/common/tables/neutron-ssl.xml:45(td) ./doc/common/tables/heat-clients_neutron.xml:38(td) ./doc/common/tables/trove-ssl.xml:30(td) ./doc/common/tables/glance-ssl.xml:30(td) ./doc/common/tables/heat-clients_keystone.xml:38(td) ./doc/common/tables/neutron-nec.xml:42(td) ./doc/common/tables/ceilometer-ssl.xml:30(td)
#: ./doc/common/tables/nova-ca.xml:73(td) ./doc/common/tables/heat-clients_nova.xml:38(td) ./doc/common/tables/heat-cfn_api.xml:49(td) ./doc/common/tables/heat-cfn_api.xml:72(td) ./doc/common/tables/heat-clients.xml:45(td) ./doc/common/tables/heat-clients_heat.xml:38(td) ./doc/common/tables/heat-clients_swift.xml:38(td) ./doc/common/tables/heat-cloudwatch_api.xml:45(td) ./doc/common/tables/heat-cloudwatch_api.xml:68(td) ./doc/common/tables/heat-clients_ceilometer.xml:38(td) ./doc/common/tables/heat-clients_trove.xml:38(td) ./doc/common/tables/heat-clients_cinder.xml:38(td) ./doc/common/tables/cinder-ssl.xml:30(td) ./doc/common/tables/heat-api.xml:115(td) ./doc/common/tables/heat-api.xml:149(td) ./doc/common/tables/neutron-ssl.xml:45(td) ./doc/common/tables/heat-clients_neutron.xml:38(td) ./doc/common/tables/trove-ssl.xml:30(td) ./doc/common/tables/glance-ssl.xml:30(td) ./doc/common/tables/heat-clients_keystone.xml:38(td) ./doc/common/tables/neutron-nec.xml:46(td) ./doc/common/tables/ceilometer-ssl.xml:30(td)
msgid "key_file = None"
msgstr ""
@@ -20244,8 +20236,8 @@ msgstr ""
msgid "(StrOpt) SSL key file (valid only if SSL enabled)"
msgstr ""
#: ./doc/common/tables/neutron-kombu.xml:35(td) ./doc/common/tables/heat-rabbitmq.xml:35(td) ./doc/common/tables/trove-amqp.xml:55(td)
msgid "(StrOpt) SSL version to use (valid only if SSL enabled). valid values are TLSv1, SSLv23 and SSLv3. SSLv2 may be available on some distributions"
#: ./doc/common/tables/neutron-kombu.xml:35(td) ./doc/common/tables/ceilometer-rabbitmq.xml:39(td)
msgid "(StrOpt) If SSL is enabled, the SSL version to use. Valid values are TLSv1, SSLv23 and SSLv3. SSLv2 might be available on some distributions."
msgstr ""
#: ./doc/common/tables/trove-redis.xml:38(td) ./doc/common/tables/heat-rpc.xml:62(td) ./doc/common/tables/ceilometer-redis.xml:53(td) ./doc/common/tables/neutron-rpc.xml:77(td)
@@ -21676,7 +21668,7 @@ msgstr ""
msgid "Description of configuration options for rabbitmq"
msgstr ""
#: ./doc/common/tables/trove-rabbitmq.xml:23(td) ./doc/common/tables/heat-rabbitmq.xml:39(td) ./doc/common/tables/neutron-rabbitmq.xml:23(td)
#: ./doc/common/tables/trove-rabbitmq.xml:23(td) ./doc/common/tables/heat-rabbitmq.xml:39(td)
msgid "(BoolOpt) use H/A queues in RabbitMQ (x-ha-policy: all).You need to wipe RabbitMQ database when changing this option."
msgstr ""
@@ -21688,11 +21680,11 @@ msgstr ""
msgid "(ListOpt) RabbitMQ HA cluster host:port pairs"
msgstr ""
#: ./doc/common/tables/trove-rabbitmq.xml:35(td) ./doc/common/tables/heat-rabbitmq.xml:51(td) ./doc/common/tables/neutron-rabbitmq.xml:35(td)
#: ./doc/common/tables/trove-rabbitmq.xml:35(td) ./doc/common/tables/heat-rabbitmq.xml:51(td)
msgid "(IntOpt) maximum retries with trying to connect to RabbitMQ (the default of 0 implies an infinite retry count)"
msgstr ""
#: ./doc/common/tables/trove-rabbitmq.xml:39(td) ./doc/common/tables/heat-rabbitmq.xml:55(td) ./doc/common/tables/neutron-rabbitmq.xml:39(td)
#: ./doc/common/tables/trove-rabbitmq.xml:39(td) ./doc/common/tables/heat-rabbitmq.xml:55(td)
msgid "(StrOpt) the RabbitMQ password"
msgstr ""
@@ -21700,23 +21692,23 @@ msgstr ""
msgid "(IntOpt) The RabbitMQ broker port where a single node is used"
msgstr ""
#: ./doc/common/tables/trove-rabbitmq.xml:47(td) ./doc/common/tables/heat-rabbitmq.xml:63(td) ./doc/common/tables/neutron-rabbitmq.xml:47(td)
#: ./doc/common/tables/trove-rabbitmq.xml:47(td) ./doc/common/tables/heat-rabbitmq.xml:63(td)
msgid "(IntOpt) how long to backoff for between retries when connecting to RabbitMQ"
msgstr ""
#: ./doc/common/tables/trove-rabbitmq.xml:51(td) ./doc/common/tables/heat-rabbitmq.xml:67(td) ./doc/common/tables/neutron-rabbitmq.xml:51(td)
#: ./doc/common/tables/trove-rabbitmq.xml:51(td) ./doc/common/tables/heat-rabbitmq.xml:67(td)
msgid "(IntOpt) how frequently to retry connecting with RabbitMQ"
msgstr ""
#: ./doc/common/tables/trove-rabbitmq.xml:55(td) ./doc/common/tables/heat-rabbitmq.xml:71(td) ./doc/common/tables/neutron-rabbitmq.xml:55(td)
#: ./doc/common/tables/trove-rabbitmq.xml:55(td) ./doc/common/tables/heat-rabbitmq.xml:71(td)
msgid "(BoolOpt) connect over SSL for RabbitMQ"
msgstr ""
#: ./doc/common/tables/trove-rabbitmq.xml:59(td) ./doc/common/tables/heat-rabbitmq.xml:75(td) ./doc/common/tables/neutron-rabbitmq.xml:59(td)
#: ./doc/common/tables/trove-rabbitmq.xml:59(td) ./doc/common/tables/heat-rabbitmq.xml:75(td)
msgid "(StrOpt) the RabbitMQ userid"
msgstr ""
#: ./doc/common/tables/trove-rabbitmq.xml:63(td) ./doc/common/tables/heat-rabbitmq.xml:79(td) ./doc/common/tables/neutron-rabbitmq.xml:63(td)
#: ./doc/common/tables/trove-rabbitmq.xml:63(td) ./doc/common/tables/heat-rabbitmq.xml:79(td)
msgid "(StrOpt) the RabbitMQ virtual host"
msgstr ""
@@ -22844,8 +22836,8 @@ msgstr ""
msgid "backdoor_port = None"
msgstr ""
#: ./doc/common/tables/nova-testing.xml:27(td)
msgid "(StrOpt) Enable eventlet backdoor. Acceptable values are 0, <port> and <start>:<end>, where 0 results in listening on a random tcp port number, <port> results in listening on the specified port number and not enabling backdoorif it is in use and <start>:<end> results in listening on the smallest unused port number within the specified range of port numbers. The chosen port is displayed in the service's log file."
#: ./doc/common/tables/nova-testing.xml:27(td) ./doc/common/tables/trove-debug.xml:23(td) ./doc/common/tables/cinder-api.xml:31(td) ./doc/common/tables/heat-debug.xml:23(td) ./doc/common/tables/ceilometer-common.xml:23(td) ./doc/common/tables/neutron-testing.xml:23(td)
msgid "(StrOpt) Enable eventlet backdoor. Acceptable values are 0, <port>, and <start>:<end>, where 0 results in listening on a random tcp port number; <port> results in listening on the specified port number (and not enabling backdoor if that port is in use); and <start>:<end> results in listening on the smallest unused port number within the specified range of port numbers. The chosen port is displayed in the service's log file."
msgstr ""
#: ./doc/common/tables/nova-testing.xml:30(td)
@@ -22976,10 +22968,6 @@ msgstr ""
msgid "Description of configuration options for debug"
msgstr ""
#: ./doc/common/tables/trove-debug.xml:23(td) ./doc/common/tables/cinder-api.xml:31(td) ./doc/common/tables/heat-debug.xml:23(td) ./doc/common/tables/ceilometer-common.xml:23(td) ./doc/common/tables/neutron-testing.xml:23(td)
msgid "(StrOpt) Enable eventlet backdoor. Acceptable values are 0, <port>, and <start>:<end>, where 0 results in listening on a random tcp port number; <port> results in listening on the specified port number (and not enabling backdoor if that port is in use); and <start>:<end> results in listening on the smallest unused port number within the specified range of port numbers. The chosen port is displayed in the service's log file."
msgstr ""
#: ./doc/common/tables/trove-debug.xml:27(td) ./doc/common/tables/neutron-wsgi.xml:23(td) ./doc/common/tables/cinder-log.xml:23(td)
msgid "(IntOpt) Number of backlog requests to configure the socket with"
msgstr ""
@@ -22996,7 +22984,7 @@ msgstr ""
msgid "fatal_deprecations = False"
msgstr ""
#: ./doc/common/tables/trove-debug.xml:39(td) ./doc/common/tables/heat-debug.xml:35(td) ./doc/common/tables/neutron-logging.xml:31(td) ./doc/common/tables/nova-logging.xml:31(td)
#: ./doc/common/tables/trove-debug.xml:39(td) ./doc/common/tables/heat-debug.xml:35(td) ./doc/common/tables/neutron-logging.xml:31(td)
msgid "(BoolOpt) make deprecations fatal"
msgstr ""
@@ -23220,7 +23208,7 @@ msgstr ""
msgid "(StrOpt) Password for Redis server (optional)."
msgstr ""
#: ./doc/common/tables/cinder-rpc.xml:91(td) ./doc/common/tables/glance-logging.xml:79(td) ./doc/common/tables/ceilometer-logging.xml:79(td)
#: ./doc/common/tables/cinder-rpc.xml:91(td) ./doc/common/tables/glance-logging.xml:79(td) ./doc/common/tables/ceilometer-logging.xml:79(td) ./doc/common/tables/nova-logging.xml:83(td)
msgid "(BoolOpt) Publish error events"
msgstr ""
@@ -23804,11 +23792,11 @@ msgstr ""
msgid "(IntOpt) Minimum number of SQL connections to keep open in a pool"
msgstr ""
#: ./doc/common/tables/nova-db.xml:88(td) ./doc/common/tables/glance-db.xml:74(td)
#: ./doc/common/tables/nova-db.xml:88(td)
msgid "mysql_sql_mode = None"
msgstr ""
#: ./doc/common/tables/nova-db.xml:89(td) ./doc/common/tables/glance-db.xml:75(td)
#: ./doc/common/tables/nova-db.xml:89(td)
msgid "(StrOpt) The SQL mode to be used for MySQL sessions (default is empty, meaning do not override any server-side SQL mode setting)"
msgstr ""
@@ -23888,31 +23876,31 @@ msgstr ""
msgid "default_log_levels = amqp=WARN, amqplib=WARN, boto=WARN, qpid=WARN, sqlalchemy=WARN, suds=INFO, iso8601=WARN, requests.packages.urllib3.connectionpool=WARN"
msgstr ""
#: ./doc/common/tables/glance-logging.xml:27(td) ./doc/common/tables/ceilometer-logging.xml:27(td) ./doc/common/tables/cinder-common.xml:35(td)
#: ./doc/common/tables/glance-logging.xml:27(td) ./doc/common/tables/ceilometer-logging.xml:27(td) ./doc/common/tables/nova-logging.xml:27(td) ./doc/common/tables/cinder-common.xml:35(td)
msgid "(ListOpt) List of logger=LEVEL pairs"
msgstr ""
#: ./doc/common/tables/glance-logging.xml:31(td) ./doc/common/tables/cinder-common.xml:51(td) ./doc/common/tables/ceilometer-common.xml:31(td)
#: ./doc/common/tables/glance-logging.xml:31(td) ./doc/common/tables/nova-logging.xml:31(td) ./doc/common/tables/cinder-common.xml:51(td) ./doc/common/tables/ceilometer-common.xml:31(td)
msgid "(BoolOpt) Make deprecations fatal"
msgstr ""
#: ./doc/common/tables/glance-logging.xml:63(td) ./doc/common/tables/ceilometer-logging.xml:63(td) ./doc/common/tables/cinder-common.xml:95(td)
#: ./doc/common/tables/glance-logging.xml:63(td) ./doc/common/tables/ceilometer-logging.xml:63(td) ./doc/common/tables/nova-logging.xml:67(td) ./doc/common/tables/cinder-common.xml:95(td)
msgid "(StrOpt) Format string to use for log messages with context"
msgstr ""
#: ./doc/common/tables/glance-logging.xml:67(td) ./doc/common/tables/ceilometer-logging.xml:67(td) ./doc/common/tables/cinder-common.xml:99(td)
#: ./doc/common/tables/glance-logging.xml:67(td) ./doc/common/tables/ceilometer-logging.xml:67(td) ./doc/common/tables/nova-logging.xml:71(td) ./doc/common/tables/cinder-common.xml:99(td)
msgid "(StrOpt) Data to append to log format when level is DEBUG"
msgstr ""
#: ./doc/common/tables/glance-logging.xml:71(td) ./doc/common/tables/ceilometer-logging.xml:71(td) ./doc/common/tables/cinder-common.xml:103(td)
#: ./doc/common/tables/glance-logging.xml:71(td) ./doc/common/tables/ceilometer-logging.xml:71(td) ./doc/common/tables/nova-logging.xml:75(td) ./doc/common/tables/cinder-common.xml:103(td)
msgid "(StrOpt) Format string to use for log messages without context"
msgstr ""
#: ./doc/common/tables/glance-logging.xml:75(td) ./doc/common/tables/ceilometer-logging.xml:75(td) ./doc/common/tables/cinder-common.xml:107(td)
#: ./doc/common/tables/glance-logging.xml:75(td) ./doc/common/tables/ceilometer-logging.xml:75(td) ./doc/common/tables/nova-logging.xml:79(td) ./doc/common/tables/cinder-common.xml:107(td)
msgid "(StrOpt) Prefix each line of exception output with this format"
msgstr ""
#: ./doc/common/tables/glance-logging.xml:83(td) ./doc/common/tables/ceilometer-logging.xml:83(td) ./doc/common/tables/cinder-common.xml:195(td)
#: ./doc/common/tables/glance-logging.xml:83(td) ./doc/common/tables/ceilometer-logging.xml:83(td) ./doc/common/tables/nova-logging.xml:87(td) ./doc/common/tables/cinder-common.xml:195(td)
msgid "(StrOpt) Syslog facility to receive log lines"
msgstr ""
@@ -24264,6 +24252,10 @@ msgstr ""
msgid "(StrOpt) Optional heat url in format like http://0.0.0.0:8004/v1/%(tenant_id)s."
msgstr ""
#: ./doc/common/tables/heat-rabbitmq.xml:35(td) ./doc/common/tables/trove-amqp.xml:55(td)
msgid "(StrOpt) SSL version to use (valid only if SSL enabled). valid values are TLSv1, SSLv23 and SSLv3. SSLv2 may be available on some distributions"
msgstr ""
#: ./doc/common/tables/ceilometer-logging.xml:30(td) ./doc/common/tables/neutron-notifier.xml:22(td) ./doc/common/tables/heat-amqp.xml:34(td) ./doc/common/tables/trove-common.xml:42(td) ./doc/common/tables/nova-compute.xml:54(td)
msgid "default_notification_level = INFO"
msgstr ""
@@ -25096,11 +25088,11 @@ msgstr ""
msgid "default_log_levels = amqp=WARN, amqplib=WARN, boto=WARN, qpid=WARN, sqlalchemy=WARN, suds=INFO, iso8601=WARN"
msgstr ""
#: ./doc/common/tables/neutron-logging.xml:27(td) ./doc/common/tables/heat-logging.xml:23(td) ./doc/common/tables/nova-logging.xml:27(td) ./doc/common/tables/trove-common.xml:35(td)
#: ./doc/common/tables/neutron-logging.xml:27(td) ./doc/common/tables/heat-logging.xml:23(td) ./doc/common/tables/trove-common.xml:35(td)
msgid "(ListOpt) list of logger=LEVEL pairs"
msgstr ""
#: ./doc/common/tables/neutron-logging.xml:86(td) ./doc/common/tables/neutron-nec.xml:54(td)
#: ./doc/common/tables/neutron-logging.xml:86(td) ./doc/common/tables/neutron-nec.xml:58(td)
msgid "use_ssl = False"
msgstr ""
@@ -25565,7 +25557,7 @@ msgid "vif_driver = nova.virt.libvirt.vif.LibvirtGenericVIFDriver"
msgstr ""
#: ./doc/common/tables/nova-libvirt.xml:78(td)
msgid "(StrOpt) The libvirt VIF driver to configure the VIFs."
msgid "(StrOpt) DEPRECATED. The libvirt VIF driver to configure the VIFs.This option is deprecated and will be removed in the Juno release."
msgstr ""
#: ./doc/common/tables/nova-libvirt.xml:81(td)
@@ -27301,7 +27293,7 @@ msgid "(StrOpt) Optional VIM Service WSDL Location e.g http://<server>/vim
msgstr ""
#: ./doc/common/tables/nova-logging.xml:26(td)
msgid "default_log_levels = amqp=WARN, amqplib=WARN, boto=WARN, qpid=WARN, sqlalchemy=WARN, suds=INFO, oslo.messaging=INFO, iso8601=WARN"
msgid "default_log_levels = amqp=WARN, amqplib=WARN, boto=WARN, qpid=WARN, sqlalchemy=WARN, suds=INFO, oslo.messaging=INFO, iso8601=WARN, requests.packages.urllib3.connectionpool=WARN"
msgstr ""
#: ./doc/common/tables/nova-logging.xml:34(td) ./doc/common/tables/cinder-common.xml:54(td)
@@ -27312,10 +27304,6 @@ msgstr ""
msgid "(BoolOpt) Make exception message format errors fatal"
msgstr ""
#: ./doc/common/tables/nova-logging.xml:66(td)
msgid "logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user)s %(tenant)s] %(instance)s%(message)s"
msgstr ""
#: ./doc/common/tables/nova-xen.xml:8(caption)
msgid "Description of configuration options for xen"
msgstr ""
@@ -29720,7 +29708,7 @@ msgstr ""
msgid "Description of configuration options for fwaas"
msgstr ""
#: ./doc/common/tables/neutron-fwaas.xml:19(th) ./doc/common/tables/neutron-nec.xml:69(th)
#: ./doc/common/tables/neutron-fwaas.xml:19(th) ./doc/common/tables/neutron-nec.xml:73(th)
msgid "[fwaas]"
msgstr ""
@@ -31640,6 +31628,34 @@ msgstr ""
msgid "(StrOpt) configuration file for HDS cinder plugin for HUS"
msgstr ""
#: ./doc/common/tables/neutron-rabbitmq.xml:35(td) ./doc/common/tables/ceilometer-rabbitmq.xml:55(td)
msgid "(IntOpt) Maximum number of RabbitMQ connection retries. Default is 0 (infinite retry count)"
msgstr ""
#: ./doc/common/tables/neutron-rabbitmq.xml:39(td) ./doc/common/tables/ceilometer-rabbitmq.xml:59(td)
msgid "(StrOpt) The RabbitMQ password"
msgstr ""
#: ./doc/common/tables/neutron-rabbitmq.xml:47(td) ./doc/common/tables/ceilometer-rabbitmq.xml:67(td)
msgid "(IntOpt) How long to backoff for between retries when connecting to RabbitMQ"
msgstr ""
#: ./doc/common/tables/neutron-rabbitmq.xml:51(td) ./doc/common/tables/ceilometer-rabbitmq.xml:71(td)
msgid "(IntOpt) How frequently to retry connecting with RabbitMQ"
msgstr ""
#: ./doc/common/tables/neutron-rabbitmq.xml:55(td) ./doc/common/tables/ceilometer-rabbitmq.xml:75(td)
msgid "(BoolOpt) Connect over SSL for RabbitMQ"
msgstr ""
#: ./doc/common/tables/neutron-rabbitmq.xml:59(td) ./doc/common/tables/ceilometer-rabbitmq.xml:79(td)
msgid "(StrOpt) The RabbitMQ userid"
msgstr ""
#: ./doc/common/tables/neutron-rabbitmq.xml:63(td) ./doc/common/tables/ceilometer-rabbitmq.xml:83(td)
msgid "(StrOpt) The RabbitMQ virtual host"
msgstr ""
#: ./doc/common/tables/swift-proxy-server-filter-container-quotas.xml:7(literal)
msgid "[filter:container-quotas]"
msgstr ""
@@ -32276,6 +32292,14 @@ msgstr ""
msgid "(ListOpt) If the list is not empty then a v3 API extension will only be loaded if it exists in this list. Specify the extension aliases here."
msgstr ""
#: ./doc/common/tables/glance-db.xml:74(td)
msgid "mysql_sql_mode = TRADITIONAL"
msgstr ""
#: ./doc/common/tables/glance-db.xml:75(td)
msgid "(StrOpt) The SQL mode to be used for MySQL sessions. This option, including the default, overrides any server-set SQL mode. To use whatever SQL mode is set by the server configuration, set this to no value. Example: mysql_sql_mode="
msgstr ""
#: ./doc/common/tables/glance-db.xml:86(td)
msgid "sqlite_db = glance.sqlite"
msgstr ""
@@ -33512,90 +33536,66 @@ msgstr ""
msgid "(StrOpt) Host to connect to"
msgstr ""
#: ./doc/common/tables/neutron-nec.xml:43(td)
msgid "(StrOpt) Key file"
#: ./doc/common/tables/neutron-nec.xml:42(td)
msgid "insecure_ssl = False"
msgstr ""
#: ./doc/common/tables/neutron-nec.xml:46(td)
msgid "path_prefix ="
#: ./doc/common/tables/neutron-nec.xml:43(td)
msgid "(BoolOpt) Disable SSL certificate verification"
msgstr ""
#: ./doc/common/tables/neutron-nec.xml:47(td)
msgid "(StrOpt) Base URL of OFC REST API. It is prepended to each API request."
msgid "(StrOpt) Key file"
msgstr ""
#: ./doc/common/tables/neutron-nec.xml:50(td)
msgid "port = 8888"
msgid "path_prefix ="
msgstr ""
#: ./doc/common/tables/neutron-nec.xml:51(td)
msgid "(StrOpt) Port to connect to"
msgid "(StrOpt) Base URL of OFC REST API. It is prepended to each API request."
msgstr ""
#: ./doc/common/tables/neutron-nec.xml:54(td)
msgid "port = 8888"
msgstr ""
#: ./doc/common/tables/neutron-nec.xml:55(td)
msgid "(StrOpt) Port to connect to"
msgstr ""
#: ./doc/common/tables/neutron-nec.xml:59(td)
msgid "(BoolOpt) Use SSL to connect"
msgstr ""
#: ./doc/common/tables/neutron-nec.xml:58(th)
#: ./doc/common/tables/neutron-nec.xml:62(th)
msgid "[PROVIDER]"
msgstr ""
#: ./doc/common/tables/neutron-nec.xml:61(td)
#: ./doc/common/tables/neutron-nec.xml:65(td)
msgid "default_router_provider = l3-agent"
msgstr ""
#: ./doc/common/tables/neutron-nec.xml:62(td)
#: ./doc/common/tables/neutron-nec.xml:66(td)
msgid "(StrOpt) Default router provider to use."
msgstr ""
#: ./doc/common/tables/neutron-nec.xml:65(td)
#: ./doc/common/tables/neutron-nec.xml:69(td)
msgid "router_providers = l3-agent, openflow"
msgstr ""
#: ./doc/common/tables/neutron-nec.xml:66(td)
#: ./doc/common/tables/neutron-nec.xml:70(td)
msgid "(ListOpt) List of enabled router providers."
msgstr ""
#: ./doc/common/tables/neutron-nec.xml:72(td)
#: ./doc/common/tables/neutron-nec.xml:76(td)
msgid "driver ="
msgstr ""
#: ./doc/common/tables/neutron-nec.xml:73(td)
#: ./doc/common/tables/neutron-nec.xml:77(td)
msgid "(StrOpt) Name of the FWaaS Driver"
msgstr ""
#: ./doc/common/tables/ceilometer-rabbitmq.xml:39(td)
msgid "(StrOpt) If SSL is enabled, the SSL version to use. Valid values are TLSv1, SSLv23 and SSLv3. SSLv2 might be available on some distributions."
msgstr ""
#: ./doc/common/tables/ceilometer-rabbitmq.xml:55(td)
msgid "(IntOpt) Maximum number of RabbitMQ connection retries. Default is 0 (infinite retry count)"
msgstr ""
#: ./doc/common/tables/ceilometer-rabbitmq.xml:59(td)
msgid "(StrOpt) The RabbitMQ password"
msgstr ""
#: ./doc/common/tables/ceilometer-rabbitmq.xml:67(td)
msgid "(IntOpt) How long to backoff for between retries when connecting to RabbitMQ"
msgstr ""
#: ./doc/common/tables/ceilometer-rabbitmq.xml:71(td)
msgid "(IntOpt) How frequently to retry connecting with RabbitMQ"
msgstr ""
#: ./doc/common/tables/ceilometer-rabbitmq.xml:75(td)
msgid "(BoolOpt) Connect over SSL for RabbitMQ"
msgstr ""
#: ./doc/common/tables/ceilometer-rabbitmq.xml:79(td)
msgid "(StrOpt) The RabbitMQ userid"
msgstr ""
#: ./doc/common/tables/ceilometer-rabbitmq.xml:83(td)
msgid "(StrOpt) The RabbitMQ virtual host"
msgstr ""
#: ./doc/common/tables/swift-object-expirer-object-expirer.xml:7(literal)
msgid "[object-expirer]"
msgstr ""
@@ -1,7 +1,7 @@
msgid ""
msgstr ""
"Project-Id-Version: PACKAGE VERSION\n"
"POT-Creation-Date: 2014-04-06 06:26+0000\n"
"POT-Creation-Date: 2014-04-07 06:27+0000\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"
@@ -1340,62 +1340,6 @@ msgstr ""
msgid "If it is not set to <systemitem>kvm</systemitem>, run:"
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-service-groups.xml:15(title)
msgid "Configuring Compute service groups"
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-service-groups.xml:16(para)
msgid "To effectively manage and utilize compute nodes, the Compute service must know their statuses. For example, when a user launches a new VM, the Compute scheduler should send the request to a live node (with enough capacity too, of course). From the Grizzly release and later, the Compute service queries the ServiceGroup API to get the node liveness information."
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-service-groups.xml:21(para)
msgid "When a compute worker (running the <systemitem class=\"service\">nova-compute</systemitem> daemon) starts, it calls the join API to join the compute group, so that every service that is interested in the information (for example, the scheduler) can query the group membership or the status of a particular node. Internally, the ServiceGroup client driver automatically updates the compute worker status."
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-service-groups.xml:27(para)
msgid "The following drivers are implemented: database and ZooKeeper. Further drivers are in review or development, such as memcache."
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-service-groups.xml:31(title)
msgid "Database ServiceGroup driver"
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-service-groups.xml:32(para)
msgid "Compute uses the database driver, which is the default driver, to track node liveness. In a compute worker, this driver periodically sends a <placeholder-1/> command to the database, saying <quote>I'm OK</quote> with a timestamp. A pre-defined timeout (<literal>service_down_time</literal>) determines if a node is dead."
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-service-groups.xml:38(para)
msgid "The driver has limitations, which may or may not be an issue for you, depending on your setup. The more compute worker nodes that you have, the more pressure you put on the database. By default, the timeout is 60 seconds so it might take some time to detect node failures. You could reduce the timeout value, but you must also make the DB update more frequently, which again increases the DB workload."
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-service-groups.xml:44(para)
msgid "Fundamentally, the data that describes whether the node is alive is \"transient\" — After a few seconds, this data is obsolete. Other data in the database is persistent, such as the entries that describe who owns which VMs. However, because this data is stored in the same database, is treated the same way. The ServiceGroup abstraction aims to treat them separately."
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-service-groups.xml:53(title)
msgid "ZooKeeper ServiceGroup driver"
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-service-groups.xml:54(para)
msgid "The ZooKeeper ServiceGroup driver works by using ZooKeeper ephemeral nodes. ZooKeeper, in contrast to databases, is a distributed system. Its load is divided among several servers. At a compute worker node, after establishing a ZooKeeper session, it creates an ephemeral znode in the group directory. Ephemeral znodes have the same lifespan as the session. If the worker node or the <systemitem class=\"service\">nova-compute</systemitem> daemon crashes, or a network partition is in place between the worker and the ZooKeeper server quorums, the ephemeral znodes are removed automatically. The driver gets the group membership by running the <placeholder-1/> command in the group directory."
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-service-groups.xml:64(para)
msgid "To use the ZooKeeper driver, you must install ZooKeeper servers and client libraries. Setting up ZooKeeper servers is outside the scope of this article. For the rest of the article, assume these servers are installed, and their addresses and ports are <literal>192.168.2.1:2181</literal>, <literal>192.168.2.2:2181</literal>, <literal>192.168.2.3:2181</literal>."
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-service-groups.xml:71(para)
msgid "To use ZooKeeper, you must install client-side Python libraries on every nova node: <literal>python-zookeeper</literal> – the official Zookeeper Python binding and <literal>evzookeeper</literal> – the library to make the binding work with the eventlet threading model."
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-service-groups.xml:77(para)
msgid "The relevant configuration snippet in the <filename>/etc/nova/nova.conf</filename> file on every node is:"
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-service-groups.xml:82(para)
msgid "To customize the Compute Service groups, use the configuration option settings documented in <xref linkend=\"config_table_nova_zookeeper\"/>."
msgstr ""
#: ./doc/config-reference/compute/section_compute-conductor.xml:7(title)
msgid "Conductor"
msgstr ""
File diff suppressed because it is too large
@@ -1,7 +1,7 @@
msgid ""
msgstr ""
"Project-Id-Version: PACKAGE VERSION\n"
"POT-Creation-Date: 2014-04-06 06:26+0000\n"
"POT-Creation-Date: 2014-04-07 06:27+0000\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"
@@ -1,7 +1,7 @@
msgid ""
msgstr ""
"Project-Id-Version: PACKAGE VERSION\n"
"POT-Creation-Date: 2014-04-06 06:26+0000\n"
"POT-Creation-Date: 2014-04-07 06:27+0000\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"
File diff suppressed because one or more lines are too long
@@ -1,7 +1,7 @@
msgid ""
msgstr ""
"Project-Id-Version: PACKAGE VERSION\n"
"POT-Creation-Date: 2014-04-06 06:26+0000\n"
"POT-Creation-Date: 2014-04-07 06:27+0000\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"
@@ -1,7 +1,7 @@
msgid ""
msgstr ""
"Project-Id-Version: PACKAGE VERSION\n"
"POT-Creation-Date: 2014-04-06 06:27+0000\n"
"POT-Creation-Date: 2014-04-07 06:27+0000\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"
@@ -1,7 +1,7 @@
msgid ""
msgstr ""
"Project-Id-Version: PACKAGE VERSION\n"
"POT-Creation-Date: 2014-04-06 06:27+0000\n"
"POT-Creation-Date: 2014-04-07 06:27+0000\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"
@@ -1,7 +1,7 @@
msgid ""
msgstr ""
"Project-Id-Version: PACKAGE VERSION\n"
"POT-Creation-Date: 2014-04-06 06:27+0000\n"
"POT-Creation-Date: 2014-04-07 06:27+0000\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"