openstack-manuals/doc/config-reference/locale/sw_KE.po

#
# Translators:
msgid ""
msgstr ""
"Project-Id-Version: OpenStack Manuals\n"
"POT-Creation-Date: 2014-01-17 07:14+0000\n"
"PO-Revision-Date: 2014-01-16 20:25+0000\n"
"Last-Translator: openstackjenkins <jenkins@openstack.org>\n"
"Language-Team: Swahili (Kenya) (http://www.transifex.com/projects/p/openstack/language/sw_KE/)\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Language: sw_KE\n"
"Plural-Forms: nplurals=2; plural=(n != 1);\n"
#: ./doc/config-reference/ch_imageservice.xml10(title)
msgid "Image Service"
msgstr ""
#: ./doc/config-reference/ch_imageservice.xml11(para)
msgid ""
"Compute relies on an external image service to store virtual machine images "
"and maintain a catalog of available images. By default, Compute is "
"configured to use the OpenStack Image Service (Glance), which is currently "
"the only supported image service."
msgstr ""
#: ./doc/config-reference/ch_imageservice.xml19(para)
msgid ""
"If your installation requires euca2ools to register new images, you must run"
" the <systemitem class=\"service\">nova-objectstore</systemitem> service. "
"This service provides an Amazon S3 front-end for Glance, which is required "
"by euca2ools."
msgstr ""
#: ./doc/config-reference/ch_imageservice.xml26(para)
msgid ""
"You can modify many of the OpenStack Image Catalogue and Delivery Service. "
"The following tables provide a comprehensive list."
msgstr ""
#: ./doc/config-reference/ch_config-overview.xml6(title)
msgid "OpenStack configuration overview"
msgstr ""
#: ./doc/config-reference/ch_config-overview.xml7(para)
msgid ""
"OpenStack is a collection of open source project components that enable "
"setting up cloud services. Each component uses similar configuration "
"techniques and a common framework for INI file options."
msgstr ""
#: ./doc/config-reference/ch_config-overview.xml12(para)
msgid ""
"This guide pulls together multiple references and configuration options for "
"the following OpenStack components:"
msgstr ""
#: ./doc/config-reference/ch_config-overview.xml16(para)
msgid "OpenStack Identity"
msgstr ""
#: ./doc/config-reference/ch_config-overview.xml17(para)
msgid "OpenStack Compute"
msgstr ""
#: ./doc/config-reference/ch_config-overview.xml18(para)
msgid "OpenStack Image Service"
msgstr ""
#: ./doc/config-reference/ch_config-overview.xml19(para)
msgid "OpenStack Networking"
msgstr ""
#: ./doc/config-reference/ch_config-overview.xml20(para)
msgid "OpenStack Dashboard"
msgstr ""
#: ./doc/config-reference/ch_config-overview.xml21(para)
msgid "OpenStack Object Storage"
msgstr ""
#: ./doc/config-reference/ch_config-overview.xml22(para)
msgid "OpenStack Block Storage"
msgstr ""
#: ./doc/config-reference/ch_computeconfigure.xml6(title)
msgid "Compute"
msgstr ""
#: ./doc/config-reference/ch_computeconfigure.xml7(para)
msgid ""
"The OpenStack Compute service is a cloud computing fabric controller, which "
"is the main part of an IaaS system. You can use OpenStack Compute to host "
"and manage cloud computing systems. This section describes the OpenStack "
"Compute configuration options."
msgstr ""
#: ./doc/config-reference/ch_computeconfigure.xml12(para)
msgid ""
"To configure your Compute installation, you must define configuration "
"options in these files:"
msgstr ""
#: ./doc/config-reference/ch_computeconfigure.xml16(para)
msgid ""
"<filename>nova.conf</filename>. Contains most of the Compute configuration "
"options. Resides in the <filename>/etc/nova</filename> directory."
msgstr ""
#: ./doc/config-reference/ch_computeconfigure.xml21(para)
msgid ""
"<filename>api-paste.ini</filename>. Defines Compute limits. Resides in the "
"<filename>/etc/nova</filename> directory."
msgstr ""
#: ./doc/config-reference/ch_computeconfigure.xml26(para)
msgid ""
"Related Image Service and Identity Service management configuration files."
msgstr ""
#: ./doc/config-reference/ch_computeconfigure.xml32(title)
msgid "Configure logging"
msgstr ""
#: ./doc/config-reference/ch_computeconfigure.xml33(para)
msgid ""
"You can use <filename>nova.conf</filename> file to configure where Compute "
"logs events, the level of logging, and log formats."
msgstr ""
#: ./doc/config-reference/ch_computeconfigure.xml36(para)
msgid ""
"To customize log formats for OpenStack Compute, use these configuration "
"option settings."
msgstr ""
#: ./doc/config-reference/ch_computeconfigure.xml41(title)
msgid "Configure authentication and authorization"
msgstr ""
#: ./doc/config-reference/ch_computeconfigure.xml42(para)
msgid ""
"There are different methods of authentication for the OpenStack Compute "
"project, including no authentication. The preferred system is the OpenStack "
"Identity Service, code-named Keystone."
msgstr ""
#: ./doc/config-reference/ch_computeconfigure.xml46(para)
msgid ""
"To customize authorization settings for Compute, see these configuration "
"settings in <filename>nova.conf</filename>."
msgstr ""
#: ./doc/config-reference/ch_computeconfigure.xml49(para)
msgid ""
"To customize certificate authority settings for Compute, see these "
"configuration settings in <filename>nova.conf</filename>."
msgstr ""
#: ./doc/config-reference/ch_computeconfigure.xml53(para)
msgid ""
"To customize Compute and the Identity service to use LDAP as a backend, "
"refer to these configuration settings in <filename>nova.conf</filename>."
msgstr ""
#: ./doc/config-reference/ch_computeconfigure.xml62(title)
msgid "Configure resize"
msgstr ""
#: ./doc/config-reference/ch_computeconfigure.xml63(para)
msgid ""
"Resize (or Server resize) is the ability to change the flavor of a server, "
"thus allowing it to upscale or downscale according to user needs. For this "
"feature to work properly, you might need to configure some underlying virt "
"layers."
msgstr ""
#: ./doc/config-reference/ch_computeconfigure.xml68(title)
#: ./doc/config-reference/compute/section_hypervisor_kvm.xml5(title)
msgid "KVM"
msgstr ""
#: ./doc/config-reference/ch_computeconfigure.xml69(para)
msgid ""
"Resize on KVM is implemented currently by transferring the images between "
"compute nodes over ssh. For KVM you need hostnames to resolve properly and "
"passwordless ssh access between your compute hosts. Direct access from one "
"compute host to another is needed to copy the VM file across."
msgstr ""
#: ./doc/config-reference/ch_computeconfigure.xml74(para)
msgid ""
"Cloud end users can find out how to resize a server by reading the <link "
"href=\"http://docs.openstack.org/user-"
"guide/content/nova_cli_resize.html\">OpenStack End User Guide</link>."
msgstr ""
#: ./doc/config-reference/ch_computeconfigure.xml80(title)
#: ./doc/config-reference/compute/section_compute-configure-migrations.xml289(title)
msgid "XenServer"
msgstr ""
#: ./doc/config-reference/ch_computeconfigure.xml81(para)
msgid ""
"To get resize to work with XenServer (and XCP), you need to establish a root"
" trust between all hypervisor nodes and provide an /image mount point to "
"your hypervisors dom0."
msgstr ""
#: ./doc/config-reference/bk-config-ref.xml6(title)
#: ./doc/config-reference/bk-config-ref.xml8(titleabbrev)
msgid "OpenStack Configuration Reference"
msgstr ""
#: ./doc/config-reference/bk-config-ref.xml16(orgname)
#: ./doc/config-reference/bk-config-ref.xml21(holder)
msgid "OpenStack Foundation"
msgstr ""
#: ./doc/config-reference/bk-config-ref.xml20(year)
msgid "2013"
msgstr ""
#: ./doc/config-reference/bk-config-ref.xml23(productname)
msgid "OpenStack"
msgstr ""
#: ./doc/config-reference/bk-config-ref.xml24(releaseinfo)
msgid "havana"
msgstr ""
#: ./doc/config-reference/bk-config-ref.xml28(remark)
msgid "Copyright details are filled in by the template."
msgstr ""
#: ./doc/config-reference/bk-config-ref.xml33(para)
msgid ""
"This document is for system administrators who want to look up configuration"
" options. It contains lists of configuration options available with "
"OpenStack and uses auto-generation to generate options and the descriptions "
"from the code for each project. It includes sample configuration files."
msgstr ""
#: ./doc/config-reference/bk-config-ref.xml43(date)
msgid "2014-01-09"
msgstr ""
#: ./doc/config-reference/bk-config-ref.xml47(para)
msgid ""
"Removes content addressed in installation, merges duplicated content, and "
"revises legacy references."
msgstr ""
#: ./doc/config-reference/bk-config-ref.xml55(date)
msgid "2013-10-17"
msgstr ""
#: ./doc/config-reference/bk-config-ref.xml59(para)
msgid "Havana release."
msgstr ""
#: ./doc/config-reference/bk-config-ref.xml65(date)
msgid "2013-08-16"
msgstr ""
#: ./doc/config-reference/bk-config-ref.xml69(para)
msgid ""
"Moves Block Storage driver configuration information from the "
"<citetitle>Block Storage Administration Guide</citetitle> to this reference."
msgstr ""
#: ./doc/config-reference/bk-config-ref.xml77(date)
msgid "2013-06-10"
msgstr ""
#: ./doc/config-reference/bk-config-ref.xml81(para)
msgid "Initial creation of Configuration Reference."
msgstr ""
#: ./doc/config-reference/ch_blockstorageconfigure.xml10(title)
msgid "Block Storage"
msgstr ""
#: ./doc/config-reference/ch_blockstorageconfigure.xml11(para)
msgid ""
"The OpenStack Block Storage Service works with many different storage "
"drivers that you can configure by using these instructions."
msgstr ""
#: ./doc/config-reference/ch_blockstorageconfigure.xml15(title)
#: ./doc/config-reference/block-storage/drivers/emc-volume-driver.xml192(title)
msgid "<filename>cinder.conf</filename> configuration file"
msgstr ""
#: ./doc/config-reference/ch_blockstorageconfigure.xml16(para)
msgid ""
"The <filename>cinder.conf</filename> file is installed in "
"<filename>/etc/cinder</filename> by default. When you manually install the "
"Block Storage Service, the options in the <filename>cinder.conf</filename> "
"file are set to default values."
msgstr ""
#: ./doc/config-reference/ch_blockstorageconfigure.xml20(para)
msgid "This example shows a typical <filename>cinder.conf</filename> file:"
msgstr ""
#: ./doc/config-reference/ch_identityconfigure.xml10(title)
msgid "Identity Service"
msgstr ""
#: ./doc/config-reference/ch_identityconfigure.xml11(para)
msgid ""
"This chapter details the OpenStack Identity Service configuration options. "
"For installation prerequisites and step-by-step walkthroughs, see the "
"<citetitle>OpenStack Installation Guide</citetitle> for your distribution "
"(<link href=\"docs.openstack.org\">docs.openstack.org</link>) and "
"<citetitle><link href=\"http://docs.openstack.org/admin-guide-"
"cloud/content/\">Cloud Administrator Guide</link></citetitle>."
msgstr ""
#: ./doc/config-reference/ch_objectstorageconfigure.xml6(title)
msgid "Object Storage"
msgstr ""
#: ./doc/config-reference/ch_objectstorageconfigure.xml7(para)
msgid ""
"OpenStack Object Storage uses multiple configuration files for multiple "
"services and background daemons, and <placeholder-1/> to manage server "
"configurations. Default configuration options appear in the "
"<code>[DEFAULT]</code> section. You can override the default values by "
"setting values in the other sections."
msgstr ""
#: ./doc/config-reference/ch_objectstorageconfigure.xml17(title)
msgid "Object server configuration"
msgstr ""
#: ./doc/config-reference/ch_objectstorageconfigure.xml18(para)
msgid ""
"Find an example object server configuration at <filename>etc/object-server"
".conf-sample</filename> in the source code repository."
msgstr ""
#: ./doc/config-reference/ch_objectstorageconfigure.xml21(para)
#: ./doc/config-reference/ch_objectstorageconfigure.xml48(para)
#: ./doc/config-reference/ch_objectstorageconfigure.xml78(para)
#: ./doc/config-reference/ch_objectstorageconfigure.xml105(para)
msgid "The available configuration options are:"
msgstr ""
#: ./doc/config-reference/ch_objectstorageconfigure.xml39(title)
msgid "Sample object server configuration file"
msgstr ""
#: ./doc/config-reference/ch_objectstorageconfigure.xml44(title)
msgid "Container server configuration"
msgstr ""
#: ./doc/config-reference/ch_objectstorageconfigure.xml45(para)
msgid ""
"Find an example container server configuration at <filename>etc/container-"
"server.conf-sample</filename> in the source code repository."
msgstr ""
#: ./doc/config-reference/ch_objectstorageconfigure.xml68(title)
msgid "Sample container server configuration file"
msgstr ""
#: ./doc/config-reference/ch_objectstorageconfigure.xml74(title)
msgid "Account server configuration"
msgstr ""
#: ./doc/config-reference/ch_objectstorageconfigure.xml75(para)
msgid ""
"Find an example account server configuration at <filename>etc/account-server"
".conf-sample</filename> in the source code repository."
msgstr ""
#: ./doc/config-reference/ch_objectstorageconfigure.xml96(title)
msgid "Sample account server configuration file"
msgstr ""
#: ./doc/config-reference/ch_objectstorageconfigure.xml101(title)
msgid "Proxy server configuration"
msgstr ""
#: ./doc/config-reference/ch_objectstorageconfigure.xml102(para)
msgid ""
"Find an example proxy server configuration at <filename>etc/proxy-server"
".conf-sample</filename> in the source code repository."
msgstr ""
#: ./doc/config-reference/ch_objectstorageconfigure.xml131(title)
msgid "Sample proxy server configuration file"
msgstr ""
#: ./doc/config-reference/ch_dashboardconfigure.xml6(title)
msgid "Dashboard"
msgstr ""
#: ./doc/config-reference/ch_dashboardconfigure.xml7(para)
msgid ""
"This chapter describes how to configure the OpenStack dashboard with Apache "
"web server."
msgstr ""
#: ./doc/config-reference/ch_networkingconfigure.xml10(title)
msgid "Networking"
msgstr ""
#: ./doc/config-reference/ch_networkingconfigure.xml11(para)
msgid ""
"This chapter explains the OpenStack Networking configuration options. For "
"installation prerequisites, steps, and use cases, see the "
"<citetitle>OpenStack Installation Guide</citetitle> for your distribution "
"(<link href=\"docs.openstack.org\">docs.openstack.org</link>) and "
"<citetitle><link href=\"http://docs.openstack.org/admin-guide-"
"cloud/content/\">Cloud Administrator Guide</link></citetitle>."
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-xen.xml10(title)
#: ./doc/config-reference/compute/section_compute-configure-xen.xml75(title)
msgid "Xen configuration reference"
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-xen.xml11(para)
msgid ""
"The following section discusses some commonly changed options in XenServer. "
"The table below provides a complete reference of all configuration options "
"available for configuring Xen with OpenStack."
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-xen.xml15(para)
msgid ""
"The recommended way to use Xen with OpenStack is through the XenAPI driver. "
"To enable the XenAPI driver, add the following configuration options "
"<filename>/etc/nova/nova.conf</filename> and restart the <systemitem "
"class=\"service\">nova-compute</systemitem> service:"
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-xen.xml23(para)
msgid ""
"These connection details are used by the OpenStack Compute service to "
"contact your hypervisor and are the same details you use to connect "
"XenCenter, the XenServer management console, to your XenServer or XCP box."
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-xen.xml27(para)
msgid ""
"The <literal>xenapi_connection_url</literal> is generally the management "
"network IP address of the XenServer. Though it is possible to use the "
"internal network IP Address (169.250.0.1) to contact XenAPI, this does not "
"allow live migration between hosts, and other functionalities like host "
"aggregates do not work."
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-xen.xml34(para)
msgid ""
"It is possible to manage Xen using libvirt, though this is not well-tested "
"or supported. To experiment using Xen through libvirt add the following "
"configuration options <filename>/etc/nova/nova.conf</filename>: "
"<placeholder-1/>"
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-xen.xml42(title)
#: ./doc/config-reference/networking/section_networking-options-reference.xml19(title)
msgid "Agent"
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-xen.xml43(para)
msgid ""
"If you don't have the guest agent on your VMs, it takes a long time for nova"
" to decide the VM has successfully started. Generally a large timeout is "
"required for Windows instances, bug you may want to tweak "
"<literal>agent_version_timeout</literal>"
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-xen.xml48(title)
msgid "Firewall"
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-xen.xml49(para)
msgid ""
"If using nova-network, IPTables is supported: <placeholder-1/> Alternately, "
"doing the isolation in Dom0: <placeholder-2/>"
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-xen.xml56(title)
msgid "VNC proxy address"
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-xen.xml57(para)
msgid ""
"Assuming you are talking to XenAPI through the host local management "
"network, and XenServer is on the address: 169.254.0.1, you can use the "
"following: <literal>vncserver_proxyclient_address=169.254.0.1</literal>"
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-xen.xml63(title)
msgid "Storage"
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-xen.xml71(para)
msgid ""
"To use a XenServer pool, you must create the pool by using the Host "
"Aggregates feature."
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-xen.xml64(para)
msgid ""
"You can specify which Storage Repository to use with nova by looking at the "
"following flag. The default is to use the local-storage setup by the default"
" installer: <placeholder-1/> Another good alternative is to use the "
"\"default\" storage (for example if you have attached NFS or any other "
"shared storage): <placeholder-2/><placeholder-3/>"
msgstr ""
#. When image changes, this message will be marked fuzzy or untranslated for
#. you.
#. It doesn't matter what you translate it to: it's not used at all.
#: ./doc/config-reference/compute/section_introduction-to-xen.xml127(None)
msgid ""
"@@image: '../../common/figures/xenserver_architecture.png'; "
"md5=8eb25be1693aa7865967ac7b07d3e563"
msgstr ""
#: ./doc/config-reference/compute/section_introduction-to-xen.xml7(title)
msgid "Xen, XenAPI, XenServer, and XCP"
msgstr ""
#: ./doc/config-reference/compute/section_introduction-to-xen.xml9(para)
msgid ""
"This section describes Xen, XenAPI, XenServer, and XCP, their differences, "
"and how to use them with OpenStack. After you understand how the Xen and KVM"
" architectures differ, you can determine when to use each architecture in "
"your OpenStack cloud."
msgstr ""
#: ./doc/config-reference/compute/section_introduction-to-xen.xml15(title)
msgid "Xen terminology"
msgstr ""
#: ./doc/config-reference/compute/section_introduction-to-xen.xml16(para)
msgid ""
"<emphasis role=\"bold\">Xen</emphasis>. A hypervisor that provides the "
"fundamental isolation between virtual machines. Xen is open source (GPLv2) "
"and is managed by Xen.org, an cross-industry organization."
msgstr ""
#: ./doc/config-reference/compute/section_introduction-to-xen.xml20(para)
msgid ""
"Xen is a component of many different products and projects. The hypervisor "
"itself is very similar across all these projects, but the way that it is "
"managed can be different, which can cause confusion if you're not clear "
"which tool stack you are using. Make sure you know what tool stack you want "
"before you get started."
msgstr ""
#: ./doc/config-reference/compute/section_introduction-to-xen.xml26(para)
msgid ""
"<emphasis role=\"bold\">Xen Cloud Platform (XCP)</emphasis>. An open source "
"(GPLv2) tool stack for Xen. It is designed specifically as a platform for "
"enterprise and cloud computing, and is well integrated with OpenStack. XCP "
"is available both as a binary distribution, installed from an iso, and from "
"Linux distributions, such as <link href=\"http://packages.ubuntu.com/precise"
"/xcp-xapi\">xcp-xapi</link> in Ubuntu. The current versions of XCP available"
" in Linux distributions do not yet include all the features available in the"
" binary distribution of XCP."
msgstr ""
#: ./doc/config-reference/compute/section_introduction-to-xen.xml38(para)
msgid ""
"<emphasis role=\"bold\">Citrix XenServer</emphasis>. A commercial product. "
"It is based on XCP, and exposes the same tool stack and management API. As "
"an analogy, think of XenServer being based on XCP in the way that Red Hat "
"Enterprise Linux is based on Fedora. XenServer has a free version (which is "
"very similar to XCP) and paid-for versions with additional features enabled."
" Citrix provides support for XenServer, but as of July 2012, they do not "
"provide any support for XCP. For a comparison between these products see the"
" <link href=\"http://wiki.xen.org/wiki/XCP/XenServer_Feature_Matrix\"> XCP "
"Feature Matrix</link>."
msgstr ""
#: ./doc/config-reference/compute/section_introduction-to-xen.xml50(para)
msgid ""
"Both XenServer and XCP include Xen, Linux, and the primary control daemon "
"known as <emphasis role=\"bold\">xapi</emphasis>."
msgstr ""
#: ./doc/config-reference/compute/section_introduction-to-xen.xml53(para)
msgid ""
"The API shared between XCP and XenServer is called <emphasis "
"role=\"bold\">XenAPI</emphasis>. OpenStack usually refers to XenAPI, to "
"indicate that the integration works equally well on XCP and XenServer. "
"Sometimes, a careless person will refer to XenServer specifically, but you "
"can be reasonably confident that anything that works on XenServer will also "
"work on the latest version of XCP. Read the <link "
"href=\"http://docs.vmd.citrix.com/XenServer/6.0.0/1.0/en_gb/sdk.html#object_model_overview\">"
" XenAPI Object Model Overview</link> for definitions of XenAPI specific "
"terms such as SR, VDI, VIF and PIF."
msgstr ""
#: ./doc/config-reference/compute/section_introduction-to-xen.xml66(title)
msgid "Privileged and unprivileged domains"
msgstr ""
#: ./doc/config-reference/compute/section_introduction-to-xen.xml67(para)
msgid ""
"A Xen host runs a number of virtual machines, VMs, or domains (the terms are"
" synonymous on Xen). One of these is in charge of running the rest of the "
"system, and is known as \"domain 0,\" or \"dom0.\" It is the first domain to"
" boot after Xen, and owns the storage and networking hardware, the device "
"drivers, and the primary control software. Any other VM is unprivileged, and"
" are known as a \"domU\" or \"guest\". All customer VMs are unprivileged of "
"course, but you should note that on Xen the OpenStack control software "
"(<systemitem class=\"service\">nova-compute</systemitem>) also runs in a "
"domU. This gives a level of security isolation between the privileged system"
" software and the OpenStack software (much of which is customer-facing). "
"This architecture is described in more detail later."
msgstr ""
#: ./doc/config-reference/compute/section_introduction-to-xen.xml83(para)
msgid ""
"There is an ongoing project to split domain 0 into multiple privileged "
"domains known as <emphasis role=\"bold\">driver domains</emphasis> and "
"<emphasis role=\"bold\">stub domains</emphasis>. This would give even better"
" separation between critical components. This technology is what powers "
"Citrix XenClient RT, and is likely to be added into XCP in the next few "
"years. However, the current architecture just has three levels of "
"separation: dom0, the OpenStack domU, and the completely unprivileged "
"customer VMs."
msgstr ""
#: ./doc/config-reference/compute/section_introduction-to-xen.xml96(title)
msgid "Paravirtualized versus hardware virtualized domains"
msgstr ""
#: ./doc/config-reference/compute/section_introduction-to-xen.xml98(para)
msgid ""
"A Xen virtual machine can be <emphasis role=\"bold\">paravirtualized "
"(PV)</emphasis> or <emphasis role=\"bold\">hardware virtualized "
"(HVM)</emphasis>. This refers to the interaction between Xen, domain 0, and "
"the guest VM's kernel. PV guests are aware of the fact that they are "
"virtualized and will co-operate with Xen and domain 0; this gives them "
"better performance characteristics. HVM guests are not aware of their "
"environment, and the hardware has to pretend that they are running on an "
"unvirtualized machine. HVM guests do not need to modify the guest operating "
"system, which is essential when running Windows."
msgstr ""
#: ./doc/config-reference/compute/section_introduction-to-xen.xml111(para)
msgid ""
"In OpenStack, customer VMs may run in either PV or HVM mode. However, the "
"OpenStack domU (that's the one running <systemitem class=\"service\">nova-"
"compute</systemitem>) <emphasis role=\"bold\">must</emphasis> be running in "
"PV mode."
msgstr ""
#: ./doc/config-reference/compute/section_introduction-to-xen.xml119(title)
msgid "XenAPI Deployment Architecture"
msgstr ""
#: ./doc/config-reference/compute/section_introduction-to-xen.xml121(para)
msgid ""
"When you deploy OpenStack on XCP or XenServer, you get something similar to "
"this: <placeholder-1/>"
msgstr ""
#: ./doc/config-reference/compute/section_introduction-to-xen.xml133(para)
msgid "The hypervisor: Xen"
msgstr ""
#: ./doc/config-reference/compute/section_introduction-to-xen.xml136(para)
msgid ""
"Domain 0: runs xapi and some small pieces from OpenStack (some xapi plug-ins"
" and network isolation rules). The majority of this is provided by XenServer"
" or XCP (or yourself using Kronos)."
msgstr ""
#: ./doc/config-reference/compute/section_introduction-to-xen.xml143(para)
msgid ""
"OpenStack VM: The <systemitem class=\"service\">nova-compute</systemitem> "
"code runs in a paravirtualized virtual machine, running on the host under "
"management. Each host runs a local instance of <systemitem class=\"service"
"\">nova-compute</systemitem>. It will often also be running nova-network "
"(depending on your network mode). In this case, nova-network is managing the"
" addresses given to the tenant VMs through DHCP."
msgstr ""
#: ./doc/config-reference/compute/section_introduction-to-xen.xml155(para)
msgid ""
"Nova uses the XenAPI Python library to talk to xapi, and it uses the "
"Management Network to reach from the domU to dom0 without leaving the host."
msgstr ""
#: ./doc/config-reference/compute/section_introduction-to-xen.xml131(para)
msgid "Key things to note: <placeholder-1/>"
msgstr ""
#: ./doc/config-reference/compute/section_introduction-to-xen.xml163(para)
msgid "The above diagram assumes FlatDHCP networking (the DevStack default)."
msgstr ""
#: ./doc/config-reference/compute/section_introduction-to-xen.xml169(para)
msgid ""
"Management network - RabbitMQ, MySQL, etc. Please note that the VM images "
"are downloaded by the XenAPI plug-ins, so make sure that the images can be "
"downloaded through the management network. It usually means binding those "
"services to the management interface."
msgstr ""
#: ./doc/config-reference/compute/section_introduction-to-xen.xml179(para)
msgid ""
"Tenant network - controlled by nova-network. The parameters of this network "
"depend on the networking model selected (Flat, Flat DHCP, VLAN)."
msgstr ""
#: ./doc/config-reference/compute/section_introduction-to-xen.xml186(para)
msgid "Public network - floating IPs, public API endpoints."
msgstr ""
#: ./doc/config-reference/compute/section_introduction-to-xen.xml167(para)
msgid "There are three main OpenStack Networks:<placeholder-1/>"
msgstr ""
#: ./doc/config-reference/compute/section_introduction-to-xen.xml192(para)
msgid ""
"The networks shown here must be connected to the corresponding physical "
"networks within the data center. In the simplest case, three individual "
"physical network cards could be used. It is also possible to use VLANs to "
"separate these networks. Please note, that the selected configuration must "
"be in line with the networking model selected for the cloud. (In case of "
"VLAN networking, the physical channels have to be able to forward the tagged"
" traffic.)"
msgstr ""
#: ./doc/config-reference/compute/section_introduction-to-xen.xml161(para)
msgid "Some notes on the networking: <placeholder-1/>"
msgstr ""
#: ./doc/config-reference/compute/section_introduction-to-xen.xml207(title)
msgid "XenAPI pools"
msgstr ""
#: ./doc/config-reference/compute/section_introduction-to-xen.xml208(para)
msgid ""
"The host-aggregates feature enables you to create pools of XenServer hosts "
"to enable live migration when using shared storage. However, you cannot "
"configure shared storage."
msgstr ""
#: ./doc/config-reference/compute/section_introduction-to-xen.xml214(title)
msgid "Further reading"
msgstr ""
#: ./doc/config-reference/compute/section_introduction-to-xen.xml218(para)
msgid ""
"Citrix XenServer official documentation:<link "
"href=\"http://docs.vmd.citrix.com/XenServer/6.0.0/1.0/en_gb/\"> "
"http://docs.vmd.citrix.com/XenServer</link>."
msgstr ""
#: ./doc/config-reference/compute/section_introduction-to-xen.xml225(para)
msgid ""
"What is Xen? by Xen.org: <link "
"href=\"http://xen.org/files/Marketing/WhatisXen.pdf\"> "
"http://xen.org/files/Marketing/WhatisXen.pdf</link>."
msgstr ""
#: ./doc/config-reference/compute/section_introduction-to-xen.xml231(para)
msgid ""
"Xen Hypervisor project: <link href=\"http://xen.org/products/xenhyp.html\"> "
"http://xen.org/products/xenhyp.html</link>."
msgstr ""
#: ./doc/config-reference/compute/section_introduction-to-xen.xml237(para)
msgid ""
"XCP project: <link href=\"http://xen.org/products/cloudxen.html\"> "
"http://xen.org/products/cloudxen.html</link>."
msgstr ""
#: ./doc/config-reference/compute/section_introduction-to-xen.xml243(para)
msgid ""
"Further XenServer and OpenStack information: <link "
"href=\"http://wiki.openstack.org/XenServer\"> "
"http://wiki.openstack.org/XenServer</link>."
msgstr ""
#: ./doc/config-reference/compute/section_introduction-to-xen.xml215(para)
msgid ""
"Here are some of the resources available to learn more about Xen: "
"<placeholder-1/>"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_baremetal.xml6(title)
msgid "Bare metal driver"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_baremetal.xml7(para)
msgid ""
"The baremetal driver is a hypervisor driver for OpenStack Nova Compute. "
"Within the OpenStack framework, it has the same role as the drivers for "
"other hypervisors (libvirt, xen, etc), and yet it is presently unique in "
"that the hardware is not virtualized - there is no hypervisor between the "
"tenants and the physical hardware. It exposes hardware through the OpenStack"
" APIs, using pluggable sub-drivers to deliver machine imaging (PXE) and "
"power control (IPMI). With this, provisioning and management of physical "
"hardware is accomplished by using common cloud APIs and tools, such as "
"OpenStack Orchestration or salt-cloud. However, due to this unique "
"situation, using the baremetal driver requires some additional preparation "
"of its environment, the details of which are beyond the scope of this guide."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_baremetal.xml20(para)
msgid ""
"Some OpenStack Compute features are not implemented by the baremetal "
"hypervisor driver. See the <link "
"href=\"http://wiki.openstack.org/HypervisorSupportMatrix\"> hypervisor "
"support matrix</link> for details."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_baremetal.xml24(para)
msgid ""
"For the Baremetal driver to be loaded and function properly, ensure that the"
" following options are set in <filename>/etc/nova/nova.conf</filename> on "
"your <systemitem class=\"service\">nova-compute</systemitem> hosts."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_baremetal.xml34(para)
msgid ""
"Many configuration options are specific to the Baremetal driver. Also, some "
"additional steps are required, such as building the baremetal deploy "
"ramdisk. See the <link "
"href=\"https://wiki.openstack.org/wiki/Baremetal\">main wiki page</link> for"
" details and implementation suggestions."
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-backing-storage.xml10(title)
msgid "Configure Compute backing storage"
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-backing-storage.xml11(para)
msgid ""
"Backing Storage is the storage used to provide the expanded operating system"
" image, and any ephemeral storage. Inside the virtual machine, this is "
"normally presented as two virtual hard disks (for example, /dev/vda and "
"/dev/vdb respectively). However, inside OpenStack, this can be derived from "
"one of three methods: LVM, QCOW or RAW, chosen using the "
"<literal>libvirt_images_type</literal> option in "
"<filename>nova.conf</filename> on the compute node."
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-backing-storage.xml19(para)
msgid ""
"QCOW is the default backing store. It uses a copy-on-write philosophy to "
"delay allocation of storage until it is actually needed. This means that the"
" space required for the backing of an image can be significantly less on the"
" real disk than what seems available in the virtual machine operating "
"system."
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-backing-storage.xml24(para)
msgid ""
"RAW creates files without any sort of file formatting, effectively creating "
"files with the plain binary one would normally see on a real disk. This can "
"increase performance, but means that the entire size of the virtual disk is "
"reserved on the physical disk."
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-backing-storage.xml29(para)
msgid ""
"Local <link "
"href=\"http://http//en.wikipedia.org/wiki/Logical_Volume_Manager_(Linux)\">LVM"
" volumes</link> can also be used. Set "
"<literal>libvirt_images_volume_group=nova_local</literal> where "
"<literal>nova_local</literal> is the name of the LVM group you have created."
msgstr ""
#: ./doc/config-reference/compute/section_compute-hypervisors.xml10(title)
msgid "Hypervisors"
msgstr ""
#: ./doc/config-reference/compute/section_compute-hypervisors.xml11(para)
msgid ""
"OpenStack Compute supports many hypervisors, which might make it difficult "
"for you to choose one. Most installations use only one hypervisor. However "
"you can use <xref linkend=\"computefilter\"/> and <xref "
"linkend=\"imagepropertiesfilter\"/> to schedule to different hypervisors "
"within the same installation. The following links help you choose a "
"hypervisor. See <link "
"href=\"http://wiki.openstack.org/HypervisorSupportMatrix\">http://wiki.openstack.org/HypervisorSupportMatrix</link>"
" for a detailed list of features and support across the hypervisors."
msgstr ""
#: ./doc/config-reference/compute/section_compute-hypervisors.xml22(para)
msgid "The following hypervisors are supported:"
msgstr ""
#: ./doc/config-reference/compute/section_compute-hypervisors.xml25(para)
msgid ""
"<link href=\"http://www.linux-kvm.org/page/Main_Page\">KVM</link> - Kernel-"
"based Virtual Machine. The virtual disk formats that it supports is "
"inherited from QEMU since it uses a modified QEMU program to launch the "
"virtual machine. The supported formats include raw images, the qcow2, and "
"VMware formats."
msgstr ""
#: ./doc/config-reference/compute/section_compute-hypervisors.xml34(para)
msgid ""
"<link href=\"http://lxc.sourceforge.net/\">LXC</link> - Linux Containers "
"(through libvirt), use to run Linux-based virtual machines."
msgstr ""
#: ./doc/config-reference/compute/section_compute-hypervisors.xml39(para)
msgid ""
"<link href=\"http://wiki.qemu.org/Manual\">QEMU</link> - Quick EMUlator, "
"generally only used for development purposes."
msgstr ""
#: ./doc/config-reference/compute/section_compute-hypervisors.xml44(para)
msgid ""
"<link href=\"http://user-mode-linux.sourceforge.net/\">UML</link> - User "
"Mode Linux, generally only used for development purposes."
msgstr ""
#: ./doc/config-reference/compute/section_compute-hypervisors.xml50(para)
msgid ""
"<link href=\"http://www.vmware.com/products/vsphere-"
"hypervisor/support.html\">VMWare vSphere</link> 4.1 update 1 and newer, runs"
" VMWare-based Linux and Windows images through a connection with a vCenter "
"server or directly with an ESXi host."
msgstr ""
#: ./doc/config-reference/compute/section_compute-hypervisors.xml58(para)
msgid ""
"<link href=\"http://www.xen.org\">Xen</link> - XenServer, Xen Cloud Platform"
" (XCP), use to run Linux or Windows virtual machines. You must install the "
"<systemitem class=\"service\">nova-compute</systemitem> service in a para-"
"virtualized VM."
msgstr ""
#: ./doc/config-reference/compute/section_compute-hypervisors.xml66(para)
msgid ""
"<link href=\"http://www.microsoft.com/en-us/server-cloud/windows-server"
"/server-virtualization-features.aspx\"> Hyper-V</link> - Server "
"virtualization with Microsoft's Hyper-V, use to run Windows, Linux, and "
"FreeBSD virtual machines. Runs <systemitem class=\"service\">nova-"
"compute</systemitem> natively on the Windows virtualization platform."
msgstr ""
#: ./doc/config-reference/compute/section_compute-hypervisors.xml75(para)
msgid ""
"<link href=\"https://wiki.openstack.org/wiki/Baremetal\"> Bare Metal</link> "
"- Not a hypervisor in the traditional sense, this driver provisions physical"
" hardware through pluggable sub-drivers (for example, PXE for image "
"deployment, and IPMI for power management)."
msgstr ""
#: ./doc/config-reference/compute/section_compute-hypervisors.xml83(para)
msgid ""
"<link href=\"http://www.docker.io/\">Docker</link>is an open-source engine "
"which automates the deployment of &gt;applications as highly portable, self-"
"sufficient containers which are &gt;independent of hardware, language, "
"framework, packaging system and hosting &gt;provider."
msgstr ""
#: ./doc/config-reference/compute/section_compute-hypervisors.xml92(title)
msgid "Hypervisor configuration basics"
msgstr ""
#: ./doc/config-reference/compute/section_compute-hypervisors.xml93(para)
msgid ""
"The node where the <systemitem class=\"service\">nova-compute</systemitem> "
"service is installed and running is the machine that runs all the virtual "
"machines, referred to as the compute node in this guide."
msgstr ""
#: ./doc/config-reference/compute/section_compute-hypervisors.xml97(para)
msgid ""
"By default, the selected hypervisor is KVM. To change to another hypervisor,"
" change the <literal>libvirt_type</literal> option in "
"<filename>nova.conf</filename> and restart the <systemitem class=\"service"
"\">nova-compute</systemitem> service."
msgstr ""
#: ./doc/config-reference/compute/section_compute-hypervisors.xml103(para)
msgid ""
"Here are the general <filename>nova.conf</filename> options that are used to"
" configure the compute node's hypervisor. Specific options for particular "
"hypervisors can be found in following sections."
msgstr ""
#. When image changes, this message will be marked fuzzy or untranslated for
#. you.
#. It doesn't matter what you translate it to: it's not used at all.
#: ./doc/config-reference/compute/section_compute-config-samples.xml44(None)
msgid ""
"@@image: '../../common/figures/SCH_5004_V00_NUAC-"
"Network_mode_KVM_Flat_OpenStack.png'; md5=1e883ef27e5912b5c516d153b8844a28"
msgstr ""
#. When image changes, this message will be marked fuzzy or untranslated for
#. you.
#. It doesn't matter what you translate it to: it's not used at all.
#: ./doc/config-reference/compute/section_compute-config-samples.xml83(None)
msgid ""
"@@image: '../../common/figures/SCH_5005_V00_NUAC-"
"Network_mode_XEN_Flat_OpenStack.png'; md5=3b151435a0fda3702d4fac5a964fac83"
msgstr ""
#: ./doc/config-reference/compute/section_compute-config-samples.xml9(title)
msgid "Example <filename>nova.conf</filename> configuration files"
msgstr ""
#: ./doc/config-reference/compute/section_compute-config-samples.xml11(para)
msgid ""
"The following sections describe the configuration options in the "
"<filename>nova.conf</filename> file. You must copy the "
"<filename>nova.conf</filename> file to each compute node. The sample "
"<filename>nova.conf</filename> files show examples of specific "
"configurations."
msgstr ""
#: ./doc/config-reference/compute/section_compute-config-samples.xml17(title)
msgid "Small, private cloud"
msgstr ""
#: ./doc/config-reference/compute/section_compute-config-samples.xml18(para)
msgid ""
"This example <filename>nova.conf</filename> file configures a small private "
"cloud with cloud controller services, database server, and messaging server "
"on the same server. In this case, CONTROLLER_IP represents the IP address of"
" a central server, BRIDGE_INTERFACE represents the bridge such as br100, the"
" NETWORK_INTERFACE represents an interface to your VLAN setup, and passwords"
" are represented as DB_PASSWORD_COMPUTE for your Compute (nova) database "
"password, and RABBIT PASSWORD represents the password to your message queue "
"installation."
msgstr ""
#: ./doc/config-reference/compute/section_compute-config-samples.xml31(title)
#: ./doc/config-reference/compute/section_compute-config-samples.xml38(title)
#: ./doc/config-reference/compute/section_compute-config-samples.xml77(title)
msgid "KVM, Flat, MySQL, and Glance, OpenStack or EC2 API"
msgstr ""
#: ./doc/config-reference/compute/section_compute-config-samples.xml33(para)
msgid ""
"This example <filename>nova.conf</filename> file, from an internal Rackspace"
" test system, is used for demonstrations."
msgstr ""
#: ./doc/config-reference/compute/section_compute-config-samples.xml50(title)
msgid "XenServer, Flat networking, MySQL, and Glance, OpenStack API"
msgstr ""
#: ./doc/config-reference/compute/section_compute-config-samples.xml52(para)
msgid ""
"This example <filename>nova.conf</filename> file is from an internal "
"Rackspace test system."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_lxc.xml6(title)
msgid "LXC (Linux containers)"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_lxc.xml7(para)
msgid ""
"LXC (also known as Linux containers) is a virtualization technology that "
"works at the operating system level. This is different from hardware "
"virtualization, the approach used by other hypervisors such as KVM, Xen, and"
" VMWare. LXC (as currently implemented using libvirt in the nova project) is"
" not a secure virtualization technology for multi-tenant environments "
"(specifically, containers may affect resource quotas for other containers "
"hosted on the same machine). Additional containment technologies, such as "
"AppArmor, may be used to provide better isolation between containers, "
"although this is not the case by default. For all these reasons, the choice "
"of this virtualization technology is not recommended in production."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_lxc.xml19(para)
msgid ""
"If your compute hosts do not have hardware support for virtualization, LXC "
"will likely provide better performance than QEMU. In addition, if your "
"guests must access specialized hardware, such as GPUs, this might be easier "
"to achieve with LXC than other hypervisors."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_lxc.xml22(para)
msgid ""
"Some OpenStack Compute features might be missing when running with LXC as "
"the hypervisor. See the <link "
"href=\"http://wiki.openstack.org/HypervisorSupportMatrix\">hypervisor "
"support matrix</link> for details."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_lxc.xml25(para)
msgid ""
"To enable LXC, ensure the following options are set in "
"<filename>/etc/nova/nova.conf</filename> on all hosts running the "
"<systemitem class=\"service\">nova-compute</systemitem> "
"service.<placeholder-1/>"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_lxc.xml30(para)
msgid ""
"On Ubuntu 12.04, enable LXC support in OpenStack by installing the <literal"
">nova-compute-lxc</literal> package."
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-db.xml9(title)
msgid "Database configuration"
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-db.xml10(para)
msgid ""
"You can configure OpenStack Compute to use any SQLAlchemy-compatible "
"database. The database name is <literal>nova</literal>. The <systemitem "
"class=\"service\">nova-conductor</systemitem> service is the only service "
"that writes to the database. The other Compute services access the database "
"through the <systemitem class=\"service\">nova-conductor</systemitem> "
"service."
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-db.xml17(para)
msgid ""
"To ensure that the database schema is current, run the following command:"
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-db.xml19(para)
msgid ""
"If <systemitem class=\"service\">nova-conductor</systemitem> is not used, "
"entries to the database are mostly written by the <systemitem "
"class=\"service\">nova-scheduler</systemitem> service, although all services"
" must be able to update entries in the database."
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-db.xml24(para)
msgid ""
"In either case, use these settings to configure the connection string for "
"the nova database."
msgstr ""
#: ./doc/config-reference/compute/section_compute-options-reference.xml6(title)
#: ./doc/config-reference/block-storage/drivers/xenapi-nfs.xml86(title)
#: ./doc/config-reference/block-storage/drivers/hds-volume-driver.xml309(caption)
msgid "Configuration options"
msgstr ""
#: ./doc/config-reference/compute/section_compute-options-reference.xml7(para)
msgid ""
"For a complete list of all available configuration options for each "
"OpenStack Compute service, run bin/nova-&lt;servicename&gt; --help."
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-ipv6.xml9(title)
msgid "Configure Compute to use IPv6 addresses"
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-ipv6.xml10(para)
msgid ""
"You can configure Compute to use both IPv4 and IPv6 addresses for "
"communication by putting it into a IPv4/IPv6 dual stack mode. In IPv4/IPv6 "
"dual stack mode, instances can acquire their IPv6 global unicast address by "
"stateless address auto configuration mechanism [RFC 4862/2462]. IPv4/IPv6 "
"dual stack mode works with <literal>VlanManager</literal> and "
"<literal>FlatDHCPManager</literal> networking modes. In "
"<literal>VlanManager</literal>, different 64bit global routing prefix is "
"used for each project. In <literal>FlatDHCPManager</literal>, one 64bit "
"global routing prefix is used for all instances."
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-ipv6.xml18(para)
msgid ""
"This configuration has been tested with VM images that have IPv6 stateless "
"address auto configuration capability (must use EUI-64 address for stateless"
" address auto configuration), a requirement for any VM you want to run with "
"an IPv6 address. Each node that executes a <literal>nova-*</literal> service"
" must have <literal>python-netaddr</literal> and <literal>radvd</literal> "
"installed."
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-ipv6.xml24(para)
msgid "On all nova-nodes, install python-netaddr:"
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-ipv6.xml26(para)
msgid ""
"On all <literal>nova-network</literal> nodes install "
"<literal>radvd</literal> and configure IPv6 networking:"
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-ipv6.xml31(para)
msgid ""
"Edit the <filename>nova.conf</filename> file on all nodes to set the "
"use_ipv6 configuration option to True. Restart all nova- services."
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-ipv6.xml34(para)
msgid ""
"When using the command <placeholder-1/> you can add a fixed range for IPv6 "
"addresses. You must specify public or private after the create parameter."
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-ipv6.xml36(replaceable)
msgid "fixed_range_v4"
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-ipv6.xml36(replaceable)
msgid "vlan_id"
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-ipv6.xml36(replaceable)
msgid "vpn_start"
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-ipv6.xml36(replaceable)
msgid "fixed_range_v6"
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-ipv6.xml37(para)
msgid ""
"You can set IPv6 global routing prefix by using the "
"<literal>--fixed_range_v6</literal> parameter. The default is: "
"<literal>fd00::/48</literal>. When you use "
"<literal>FlatDHCPManager</literal>, the command uses the original value of "
"<literal>--fixed_range_v6</literal>. When you use "
"<literal>VlanManager</literal>, the command creates prefixes of subnet by "
"incrementing subnet id. Guest VMs uses this prefix for generating their IPv6"
" global unicast address."
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-ipv6.xml43(para)
msgid "Here is a usage example for <literal>VlanManager</literal>:"
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-ipv6.xml45(para)
msgid "Here is a usage example for <literal>FlatDHCPManager</literal>:"
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-console.xml7(title)
msgid "Configure remote console access"
msgstr ""
#. <?dbhtml stop-chunking?>
#: ./doc/config-reference/compute/section_compute-configure-console.xml9(para)
msgid ""
"OpenStack has two main methods for providing a remote console or remote "
"desktop access to guest Virtual Machines. They are VNC, and SPICE HTML5 and "
"can be used either through the OpenStack dashboard and the command line. "
"Best practice is to select one or the other to run."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_kvm.xml6(para)
msgid "KVM is configured as the default hypervisor for Compute."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_kvm.xml8(para)
msgid ""
"This document contains several sections about hypervisor selection. If you "
"are reading this document linearly, you do not want to load the KVM module "
"before you install <systemitem class=\"service\">nova-compute</systemitem>. "
"The <systemitem class=\"service\">nova-compute</systemitem> service depends "
"on qemu-kvm, which installs <filename>/lib/udev/rules.d/45-qemu-"
"kvm.rules</filename>, which sets the correct permissions on the /dev/kvm "
"device node."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_kvm.xml15(para)
msgid ""
"To enable KVM explicitly, add the following configuration options to the "
"<filename>/etc/nova/nova.conf</filename> file:"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_kvm.xml19(para)
msgid ""
"The KVM hypervisor supports the following virtual machine image formats:"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_kvm.xml22(para)
#: ./doc/config-reference/compute/section_hypervisor_qemu.xml43(para)
msgid "Raw"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_kvm.xml25(para)
#: ./doc/config-reference/compute/section_hypervisor_qemu.xml46(para)
msgid "QEMU Copy-on-write (qcow2)"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_kvm.xml28(para)
msgid "QED Qemu Enhanced Disk"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_kvm.xml31(para)
#: ./doc/config-reference/compute/section_hypervisor_qemu.xml49(para)
msgid "VMWare virtual machine disk format (vmdk)"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_kvm.xml34(para)
msgid ""
"This section describes how to enable KVM on your system. For more "
"information, see the following distribution-specific documentation:"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_kvm.xml38(para)
msgid ""
"<link "
"href=\"http://fedoraproject.org/wiki/Getting_started_with_virtualization\">Fedora:"
" Getting started with virtualization</link> from the Fedora project wiki."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_kvm.xml44(para)
msgid ""
"<link href=\"https://help.ubuntu.com/community/KVM/Installation\">Ubuntu: "
"KVM/Installation</link> from the Community Ubuntu documentation."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_kvm.xml48(para)
msgid ""
"<link href=\"http://static.debian-"
"handbook.info/browse/stable/sect.virtualization.html#idp11279352\">Debian: "
"Virtualization with KVM</link> from the Debian handbook."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_kvm.xml53(para)
msgid ""
"<link href=\"http://docs.redhat.com/docs/en-"
"US/Red_Hat_Enterprise_Linux/6/html/Virtualization_Host_Configuration_and_Guest_Installation_Guide"
"/sect-Virtualization_Host_Configuration_and_Guest_Installation_Guide-"
"Host_Installation-"
"Installing_KVM_packages_on_an_existing_Red_Hat_Enterprise_Linux_system.html\">RHEL:"
" Installing virtualization packages on an existing Red Hat Enterprise Linux "
"system</link> from the <citetitle>Red Hat Enterprise Linux Virtualization "
"Host Configuration and Guest Installation Guide</citetitle>."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_kvm.xml60(para)
msgid ""
"<link href=\"http://doc.opensuse.org/documentation/html/openSUSE/opensuse-"
"kvm/cha.kvm.requires.html#sec.kvm.requires.install\">openSUSE: Installing "
"KVM</link> from the openSUSE Virtualization with KVM manual."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_kvm.xml66(para)
msgid ""
"<link href=\"http://doc.opensuse.org/products/draft/SLES/SLES-"
"kvm_sd_draft/cha.kvm.requires.html#sec.kvm.requires.install\">SLES: "
"Installing KVM</link> from the SUSE Linux Enterprise Server Virtualization "
"with KVM manual."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_kvm.xml76(title)
msgid "Specify the CPU model of KVM guests"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_kvm.xml77(para)
msgid ""
"The Compute service enables you to control the guest CPU model that is "
"exposed to KVM virtual machines. Use cases include:"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_kvm.xml81(para)
msgid ""
"To maximize performance of virtual machines by exposing new host CPU "
"features to the guest"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_kvm.xml85(para)
msgid ""
"To ensure a consistent default CPU across all machines, removing reliance of"
" variable QEMU defaults"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_kvm.xml89(para)
msgid ""
"In libvirt, the CPU is specified by providing a base CPU model name (which "
"is a shorthand for a set of feature flags), a set of additional feature "
"flags, and the topology (sockets/cores/threads). The libvirt KVM driver "
"provides a number of standard CPU model names. These models are defined in "
"the <filename>/usr/share/libvirt/cpu_map.xml</filename> file. Check this "
"file to determine which models are supported by your local installation."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_kvm.xml95(para)
msgid ""
"Two Compute configuration options define which type of CPU model is exposed "
"to the hypervisor when using KVM: <literal>libvirt_cpu_mode</literal> and "
"<literal>libvirt_cpu_model</literal>."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_kvm.xml98(para)
msgid ""
"The <literal>libvirt_cpu_mode</literal> option can take one of the following"
" values: <literal>none</literal>, <literal>host-passthrough</literal>, "
"<literal>host-model</literal>, and <literal>custom</literal>."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_kvm.xml102(title)
msgid "Host model (default for KVM &amp; QEMU)"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_kvm.xml103(para)
msgid ""
"If your <filename>nova.conf</filename> file contains "
"<literal>libvirt_cpu_mode=host-model</literal>, libvirt identifies the CPU "
"model in the <filename>/usr/share/libvirt/cpu_map.xml</filename> file that "
"most "
"closely matches the host, and requests additional CPU flags to complete the "
"match. This configuration provides the maximum functionality and performance"
" and maintains good reliability and compatibility if the guest is migrated "
"to another host with slightly different host CPUs."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_kvm.xml112(title)
msgid "Host pass through"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_kvm.xml113(para)
msgid ""
"If your <filename>nova.conf</filename> file contains "
"<literal>libvirt_cpu_mode=host-passthrough</literal>, libvirt tells KVM to "
"pass through the host CPU with no modifications. Unlike host-model, which "
"matches only feature flags, every last detail of the host CPU is matched. "
"This gives the best possible performance, and can be important to "
"applications that check low-level CPU details, but it comes at a cost with "
"respect to migration: the guest can only be migrated to a host with an "
"exactly matching CPU."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_kvm.xml122(title)
msgid "Custom"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_kvm.xml123(para)
msgid ""
"If your <filename>nova.conf</filename> file contains "
"<literal>libvirt_cpu_mode=custom</literal>, you can explicitly specify one "
"of the supported named models using the "
"<literal>libvirt_cpu_model</literal> configuration option. For example, to "
"configure the KVM guests to expose Nehalem CPUs, "
"your <filename>nova.conf</filename> file should contain:"
msgstr ""
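# A minimal sketch of the nova.conf snippet the entry above refers to; the
# option names come from this section, and the Nehalem model is assumed to be
# present in your local cpu_map.xml:
#   [DEFAULT]
#   libvirt_cpu_mode=custom
#   libvirt_cpu_model=Nehalem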
#: ./doc/config-reference/compute/section_hypervisor_kvm.xml132(title)
msgid ""
"None (default for all libvirt-driven hypervisors other than KVM &amp; QEMU)"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_kvm.xml134(para)
msgid ""
"If your <filename>nova.conf</filename> file contains "
"<literal>libvirt_cpu_mode=none</literal>, libvirt does not specify a CPU "
"model. Instead, the hypervisor chooses the default model. This setting is "
"equivalent to the Compute service behavior prior to the Folsom release."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_kvm.xml141(title)
msgid "Guest agent support"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_kvm.xml142(para)
msgid ""
"With the Havana release, support for guest agents was added, allowing "
"optional access between compute nodes and guests through a socket, using "
"the QMP protocol."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_kvm.xml144(para)
msgid ""
"To enable this feature, you must set "
"<literal>hw_qemu_guest_agent=yes</literal> as a metadata parameter on the "
"image you wish to use to create guest-agent-capable instances from. You can "
"explicitly disable the feature by setting "
"<literal>hw_qemu_guest_agent=no</literal> in the image metadata."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_kvm.xml150(title)
msgid "KVM performance tweaks"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_kvm.xml151(para)
msgid ""
"The <link href=\"http://www.linux-kvm.org/page/VhostNet\">VHostNet</link> "
"kernel module improves network performance. To load the kernel module, run "
"the following command as root:"
msgstr ""
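# A minimal sketch of the command referred to above; the module name
# vhost_net is documented on the linked VhostNet page:
#   modprobe vhost_net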
#: ./doc/config-reference/compute/section_hypervisor_kvm.xml157(title)
msgid "Troubleshoot KVM"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_kvm.xml158(para)
msgid ""
"Trying to launch a new virtual machine instance fails with the "
"<literal>ERROR</literal> state, and the following error appears in the "
"<filename>/var/log/nova/nova-compute.log</filename> file:"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_kvm.xml162(para)
msgid "This message indicates that the KVM kernel modules were not loaded."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_kvm.xml163(para)
msgid ""
"If you cannot start VMs after installation without rebooting, the "
"permissions might not be correct. This can happen if you load the KVM module"
" before you install <systemitem class=\"service\">nova-compute</systemitem>."
" To check whether the group is set to kvm, run:"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_kvm.xml168(para)
msgid "If it is not set to kvm, run:"
msgstr ""
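# Hedged example of the check described above, plus one possible fix
# (re-applying the udev rules); treat both lines as a sketch, not the only way:
#   ls -l /dev/kvm        # the group column should read "kvm"
#   sudo udevadm trigger  # re-apply udev rules so /dev/kvm gets the kvm group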
#: ./doc/config-reference/compute/section_hypervisor_docker.xml6(title)
msgid "Docker driver"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_docker.xml7(para)
msgid ""
"The Docker driver is a hypervisor driver for OpenStack Compute, introduced "
"with the Havana release. Docker is an open-source engine which automates the"
" deployment of applications as highly portable, self-sufficient containers "
"which are independent of hardware, language, framework, packaging system and"
" hosting provider. Docker extends LXC with a high level API providing a "
"lightweight virtualization solution that runs processes in isolation. It "
"provides a way to automate software deployment in a secure and repeatable "
"environment. A standard container in Docker contains a software component "
"along with all of its dependencies - binaries, libraries, configuration "
"files, scripts, virtualenvs, jars, gems and tarballs. Docker can be run on "
"any x86_64 Linux kernel that supports cgroups and aufs. Docker is a way of "
"managing LXC containers on a single machine. Used behind OpenStack Compute, "
"however, Docker becomes much more powerful, because it is then possible to "
"manage several hosts that each run hundreds of containers. The current "
"Docker project aims for full OpenStack compatibility. Containers are not "
"intended to replace VMs; they are complementary and better suited to "
"specific use cases. Compute's support for VMs is currently mature thanks to "
"the variety of hypervisors that run VMs, but the same is not yet true for "
"containers, even though libvirt/LXC is a good starting point. Docker aims "
"to provide that deeper level of integration."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_docker.xml27(para)
msgid ""
"Some OpenStack Compute features are not implemented by the docker driver. "
"See the <link href=\"http://wiki.openstack.org/HypervisorSupportMatrix\"> "
"hypervisor support matrix</link> for details."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_docker.xml33(para)
msgid ""
"To enable Docker, ensure the following options are set in "
"<filename>/etc/nova/nova-compute.conf</filename> on all hosts running the "
"<systemitem class=\"service\">nova-compute</systemitem> service. "
"<placeholder-1/>"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_docker.xml37(para)
msgid ""
"Glance also needs to be configured to support the Docker container format, "
"in <filename>/etc/glance-api.conf</filename>: <placeholder-1/>"
msgstr ""
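# Sketch of the settings the two entries above describe; the exact values
# follow the Havana-era Docker driver instructions and are assumptions here:
#   nova-compute.conf:
#     [DEFAULT]
#     compute_driver=docker.DockerDriver
#   glance-api.conf:
#     container_formats=ami,ari,aki,bare,ovf,docker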
#: ./doc/config-reference/compute/section_compute-configure-service-groups.xml15(title)
msgid "Configuring Compute Service Groups"
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-service-groups.xml16(para)
msgid ""
"To effectively manage and utilize compute nodes, the Compute service must "
"know their statuses. For example, when a user launches a new VM, the Compute"
" scheduler should send the request to a live node (with enough capacity too,"
" of course). From the Grizzly release and later, the Compute service queries"
" the ServiceGroup API to get the node liveness information."
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-service-groups.xml21(para)
msgid ""
"When a compute worker (running the <systemitem class=\"service\">nova-"
"compute</systemitem> daemon) starts, it calls the join API to join the "
"compute group, so that every service that is interested in the information "
"(for example, the scheduler) can query the group membership or the status of"
" a particular node. Internally, the ServiceGroup client driver automatically"
" updates the compute worker status."
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-service-groups.xml27(para)
msgid ""
"The following drivers are implemented: database and ZooKeeper. Further "
"drivers are in review or development, such as memcache."
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-service-groups.xml31(title)
msgid "Database ServiceGroup driver"
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-service-groups.xml32(para)
msgid ""
"Compute uses the database driver, which is the default driver, to track node"
" liveness. In a compute worker, this driver periodically sends a "
"<placeholder-1/> command to the database, saying <quote>I'm OK</quote> with "
"a timestamp. A pre-defined timeout (<literal>service_down_time</literal>) "
"determines if a node is dead."
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-service-groups.xml38(para)
msgid ""
"The driver has limitations, which may or may not be an issue for you, "
"depending on your setup. The more compute worker nodes that you have, the "
"more pressure you put on the database. By default, the timeout is 60 seconds"
" so it might take some time to detect node failures. You could reduce the "
"timeout value, but you must also make the DB update more frequently, which "
"again increases the DB workload."
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-service-groups.xml44(para)
msgid ""
"Fundamentally, the data that describes whether the node is alive is "
"\"transient\": after a few seconds, this data is obsolete. Other data in "
"the database is persistent, such as the entries that describe who owns "
"which VMs. However, because this data is stored in the same database, it is "
"treated the same way. The ServiceGroup abstraction aims to treat them "
"separately."
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-service-groups.xml53(title)
msgid "ZooKeeper ServiceGroup driver"
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-service-groups.xml54(para)
msgid ""
"The ZooKeeper ServiceGroup driver works by using ZooKeeper ephemeral nodes. "
"ZooKeeper, in contrast to databases, is a distributed system. Its load is "
"divided among several servers. When a compute worker node establishes a "
"ZooKeeper session, it creates an ephemeral znode in the group directory. "
"Ephemeral znodes have the same lifespan as the session. If the worker node "
"or the <systemitem class=\"service\">nova-compute</systemitem> daemon "
"crashes, or a network partition is in place between the worker and the "
"ZooKeeper server quorums, the ephemeral znodes are removed automatically. "
"The driver gets the group membership by running the <placeholder-1/> command"
" in the group directory."
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-service-groups.xml64(para)
msgid ""
"To use the ZooKeeper driver, you must install ZooKeeper servers and client "
"libraries. Setting up ZooKeeper servers is outside the scope of this "
"article. For the rest of the article, assume these servers are installed, "
"and their addresses and ports are <literal>192.168.2.1:2181</literal>, "
"<literal>192.168.2.2:2181</literal>, <literal>192.168.2.3:2181</literal>."
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-service-groups.xml71(para)
msgid ""
"To use ZooKeeper, you must install client-side Python libraries on every "
"nova node: <literal>python-zookeeper</literal>, the official ZooKeeper "
"Python binding, and <literal>evzookeeper</literal>, the library that makes "
"the binding work with the eventlet threading model."
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-service-groups.xml77(para)
msgid ""
"The relevant configuration snippet in the "
"<filename>/etc/nova/nova.conf</filename> file on every node is:"
msgstr ""
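# A sketch of the nova.conf snippet referenced above, using the example
# ZooKeeper addresses from this section; the option names follow the
# Grizzly/Havana ZooKeeper servicegroup driver and are an assumption here:
#   servicegroup_driver="zk"
#
#   [zookeeper]
#   address="192.168.2.1:2181,192.168.2.2:2181,192.168.2.3:2181"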
#: ./doc/config-reference/compute/section_compute-conductor.xml7(title)
msgid "Conductor"
msgstr ""
#: ./doc/config-reference/compute/section_compute-conductor.xml8(para)
msgid ""
"The <systemitem class=\"service\">nova-conductor</systemitem> service "
"enables OpenStack to function without compute nodes accessing the database. "
"Conceptually, it implements a new layer on top of <systemitem "
"class=\"service\">nova-compute</systemitem>. It should not be deployed on "
"compute nodes, or else the security benefits of removing database access "
"from <systemitem class=\"service\">nova-compute</systemitem> are negated. "
"Just like other nova services such as <systemitem class=\"service\">nova-"
"api</systemitem> or nova-scheduler, it can be scaled horizontally. You can "
"run multiple instances of <systemitem class=\"service\">nova-"
"conductor</systemitem> on different machines as needed for scaling purposes."
msgstr ""
#: ./doc/config-reference/compute/section_compute-conductor.xml21(para)
msgid ""
"In the Grizzly release, the methods exposed by <systemitem class=\"service"
"\">nova-conductor</systemitem> are relatively simple methods used by "
"<systemitem class=\"service\">nova-compute</systemitem> to offload its "
"database operations. Places where <systemitem class=\"service\">nova-"
"compute</systemitem> previously performed database access are now talking to"
" <systemitem class=\"service\">nova-conductor</systemitem>. However, we have"
" plans in the medium to long term to move more and more of what is currently"
" in <systemitem class=\"service\">nova-compute</systemitem> up to the "
"<systemitem class=\"service\">nova-conductor</systemitem> layer. The compute"
" service will start to look like a less intelligent slave service to "
"<systemitem class=\"service\">nova-conductor</systemitem>. The conductor "
"service will implement long running complex operations, ensuring forward "
"progress and graceful error handling. This will be especially beneficial for"
" operations that cross multiple compute nodes, such as migrations or "
"resizes."
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml6(title)
msgid "Overview of nova.conf"
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml7(para)
msgid ""
"The <filename>nova.conf</filename> configuration file is an <link "
"href=\"https://en.wikipedia.org/wiki/INI_file\">INI file format</link> file "
"that specifies options as <literal>key=value</literal> pairs, which are "
"grouped into sections. The <literal>DEFAULT</literal> section contains most "
"of the configuration options. For example:"
msgstr ""
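# Minimal illustration of the key=value INI layout described above; the
# option values are only examples:
#   [DEFAULT]
#   debug=false
#   verbose=true
#   my_ip=10.0.0.10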
#: ./doc/config-reference/compute/section_nova-conf.xml19(para)
msgid ""
"You can load a particular configuration file by passing the <parameter"
">--config-file <replaceable>/path/to/nova.conf</replaceable></parameter> "
"parameter when you run one of the <literal>nova-*</literal> services. This "
"parameter inserts configuration option definitions from the specified "
"configuration file, which might be useful for debugging or performance "
"tuning."
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml25(para)
msgid ""
"To place comments in the <filename>nova.conf</filename> file, start a new "
"line that begins with the pound (<literal>#</literal>) character. For a list"
" of configuration options, see the tables in this guide."
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml29(para)
msgid ""
"To learn more about the <filename>nova.conf</filename> configuration file, "
"review these general purpose configuration options."
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml34(title)
msgid "Types of configuration options"
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml35(para)
msgid ""
"Each configuration option has an associated data type. The supported data "
"types for configuration options are:"
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml40(term)
msgid "BoolOpt"
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml42(para)
msgid ""
"Boolean option. Value must be either <literal>true</literal> or "
"<literal>false</literal>. Example:<placeholder-1/>"
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml49(term)
msgid "StrOpt"
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml51(para)
msgid "String option. Value is an arbitrary string. Example:<placeholder-1/>"
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml56(term)
msgid "IntOpt"
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml58(para)
msgid "Integer option. Value must be an integer. Example: <placeholder-1/>"
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml64(term)
msgid "MultiStrOpt"
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml66(para)
msgid ""
"String option. Same as StrOpt, except that it can be declared multiple times"
" to indicate multiple values. Example:"
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml74(term)
msgid "ListOpt"
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml76(para)
msgid ""
"List option. Value is a list of arbitrary strings separated by commas. "
"Example:"
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml82(term)
msgid "FloatOpt"
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml84(para)
msgid "Floating-point option. Value must be a floating-point number. Example:"
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml91(para)
msgid "Do not specify quotes around Nova options."
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml95(title)
msgid "Sections"
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml99(literal)
msgid "[DEFAULT]"
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml101(para)
msgid ""
"Contains most configuration options. If the documentation for a "
"configuration option does not specify its section, assume that it appears in"
" this section."
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml109(literal)
msgid "[cells]"
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml111(para)
msgid ""
"Configures cells functionality. For details, see the Cells section (<link "
"href=\"../config-reference/content/section_compute-cells.html\"/>)."
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml119(literal)
msgid "[baremetal]"
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml121(para)
msgid "Configures the baremetal hypervisor driver."
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml126(literal)
msgid "[conductor]"
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml128(para)
msgid ""
"Configures the <systemitem class=\"service\">nova-conductor</systemitem> "
"service."
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml135(literal)
msgid "[trusted_computing]"
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml137(para)
msgid ""
"Configures the trusted computing pools functionality and how to connect to a"
" remote attestation service."
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml96(para)
msgid ""
"Configuration options are grouped by section. The Compute configuration file"
" supports the following sections:<placeholder-1/>"
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml146(title)
msgid "Variable substitution"
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml147(para)
msgid ""
"The configuration file supports variable substitution. After you set a "
"configuration option, it can be referenced in later configuration values "
"when you precede it with <literal>$</literal>. This example defines "
"<literal>my_ip</literal> and then uses <literal>$my_ip</literal> as a "
"variable:<placeholder-1/>"
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml156(para)
msgid ""
"If a value must contain the <literal>$</literal> character, escape it with "
"<literal>$$</literal>. For example, if your LDAP DNS password is "
"<literal>$xkj432</literal>, specify it, as follows:<placeholder-1/>"
msgstr ""
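# Combined sketch of variable substitution and escaping as described in the
# two entries above; vncserver_listen and ldap_dns_password are real nova
# options used here only for illustration:
#   my_ip=10.2.3.4
#   vncserver_listen=$my_ip
#   ldap_dns_password=$$xkj432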
#: ./doc/config-reference/compute/section_nova-conf.xml161(para)
msgid ""
"The Compute code uses the Python "
"<literal>string.Template.safe_substitute()</literal> method to implement "
"variable substitution. For more details on how variable substitution is "
"resolved, see <link href=\"http://docs.python.org/2/library/string.html"
"#template-strings\">http://docs.python.org/2/library/string.html#template-"
"strings</link> and <link "
"href=\"http://www.python.org/dev/peps/pep-0292/\">http://www.python.org/dev/peps/pep-0292/</link>."
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml173(title)
msgid "Whitespace"
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml174(para)
msgid ""
"To include whitespace in a configuration value, use a quoted string. For "
"example:"
msgstr ""
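# Example of a quoted value containing whitespace; the option name and value
# are illustrative:
#   ldap_dns_password='a password with spaces'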
#: ./doc/config-reference/compute/section_nova-conf.xml179(title)
msgid "Define an alternate location for nova.conf"
msgstr ""
#: ./doc/config-reference/compute/section_nova-conf.xml180(para)
msgid ""
"All <systemitem class=\"service\">nova-*</systemitem> services and the "
"<placeholder-1/> command-line client load the configuration file. To define "
"an alternate location for the configuration file, pass the <parameter"
">--config-file <replaceable>/path/to/nova.conf</replaceable></parameter> "
"parameter when you start a <systemitem class=\"service\">nova-*</systemitem>"
" service or call a <placeholder-2/> command."
msgstr ""
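# Example invocation using the parameter named above:
#   nova-compute --config-file /path/to/nova.conf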
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml6(title)
msgid "Hyper-V virtualization platform"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml15(emphasis)
msgid "Windows Server 2008r2"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml16(para)
msgid ""
"Both Server and Server Core with the Hyper-V role enabled (Shared Nothing "
"Live migration is not supported using 2008r2)"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml20(emphasis)
msgid "Windows Server 2012"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml21(para)
msgid "Server and Core (with the Hyper-V role enabled), and Hyper-V Server"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml7(para)
msgid ""
"It is possible to use Hyper-V as a compute node within an OpenStack "
"Deployment. The <systemitem class=\"service\">nova-compute</systemitem> "
"service runs as \"openstack-compute,\" a 32-bit service directly upon the "
"Windows platform with the Hyper-V role enabled. The necessary Python "
"components as well as the <systemitem class=\"service\">nova-"
"compute</systemitem> service are installed directly onto the Windows "
"platform. Windows Clustering Services are not needed for functionality "
"within the OpenStack infrastructure. The use of the Windows Server 2012 "
"platform is recommended for the best experience and is the platform for active"
" development. The following Windows platforms have been tested as compute "
"nodes:<placeholder-1/>"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml26(title)
msgid "Hyper-V configuration"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml27(para)
msgid ""
"The following sections discuss how to prepare the Windows Hyper-V node for "
"operation as an OpenStack Compute node. Unless stated otherwise, any "
"configuration information should work for both the Windows 2008r2 and 2012 "
"platforms."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml30(emphasis)
msgid "Local Storage Considerations"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml31(para)
msgid ""
"The Hyper-V compute node needs to have ample storage for storing the virtual"
" machine images running on the compute nodes. You may use a single volume "
"for everything, or partition it into an OS volume and a VM volume; the "
"choice is left to the deployer."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml37(title)
msgid "Configure NTP"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml38(para)
msgid ""
"Network time services must be configured to ensure proper operation of the "
"Hyper-V compute node. To set network time on your Hyper-V host you must run "
"the following commands:"
msgstr ""
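# Sketch of the commands referred to above, assuming pool.ntp.org as the
# time source:
#   net stop w32time
#   w32tm /config /manualpeerlist:pool.ntp.org,0x8 /syncfromflags:MANUAL
#   net start w32time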
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml52(title)
msgid "Configure Hyper-V virtual switching"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml53(para)
msgid ""
"Information regarding the Hyper-V virtual Switch can be located here: <link "
"href=\"http://technet.microsoft.com/en-"
"us/library/hh831823.aspx\">http://technet.microsoft.com/en-"
"us/library/hh831823.aspx</link>"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml57(para)
msgid ""
"To quickly enable an interface to be used as a virtual interface, the "
"following PowerShell may be used:"
msgstr ""
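# PowerShell sketch for creating the virtual switch; "192*" is assumed to
# match your data interface and "yourbridgename" is a placeholder:
#   $if = Get-NetIPAddress -IPAddress 192* | Get-NetIPInterface
#   New-VMSwitch -NetAdapterName $if.ifAlias -Name yourbridgename -AllowManagementOS $false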
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml67(title)
msgid "Enable iSCSI initiator service"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml68(para)
msgid ""
"To prepare the Hyper-V node to be able to attach to volumes provided by "
"cinder, you must first make sure the Windows iSCSI initiator service is "
"running and started automatically."
msgstr ""
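# PowerShell sketch for starting the iSCSI initiator service and making it
# start automatically:
#   Set-Service -Name MSiSCSI -StartupType Automatic
#   Start-Service MSiSCSI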
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml79(title)
msgid "Configure shared nothing live migration"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml80(para)
msgid ""
"Detailed information on the configuration of live migration can be found "
"here: <link href=\"http://technet.microsoft.com/en-"
"us/library/jj134199.aspx\">http://technet.microsoft.com/en-"
"us/library/jj134199.aspx</link>"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml83(para)
msgid "The following outlines the steps of shared nothing live migration."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml86(para)
msgid ""
"The target host ensures that live migration is enabled and properly "
"configured in Hyper-V."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml90(para)
msgid ""
"The target host checks whether the image to be migrated requires a base VHD and "
"pulls it from Glance if not already available on the target host."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml94(para)
msgid ""
"The source host ensures that live migration is enabled and properly "
"configured in Hyper-V."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml98(para)
msgid "The source host initiates a Hyper-V live migration."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml101(para)
msgid ""
"The source host communicates the outcome of the operation to the manager."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml109(literal)
msgid "instances_shared_storage=False"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml110(para)
msgid ""
"This is needed to support \"shared nothing\" Hyper-V live migrations. It is "
"used in nova/compute/manager.py"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml114(literal)
msgid "limit_cpu_features=True"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml115(para)
msgid ""
"This flag is needed to support live migration to hosts with different CPU "
"features. This flag is checked during instance creation in order to limit "
"the CPU features used by the VM."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml121(literal)
msgid "instances_path=DRIVELETTER:\\PATH\\TO\\YOUR\\INSTANCES"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml105(para)
msgid ""
"The following configuration options/flags are needed in order to support "
"Hyper-V live migration and must be added to your "
"<filename>nova.conf</filename> on the Hyper-V compute node:<placeholder-1/>"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml125(para)
msgid "Additional Requirements:"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml128(para)
msgid "Hyper-V 2012 RC or Windows Server 2012 RC with Hyper-V role enabled"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml131(para)
msgid ""
"A Windows domain controller with the Hyper-V compute nodes as domain members"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml135(para)
msgid ""
"The instances_path command line option/flag needs to be the same on all "
"hosts"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml139(para)
msgid ""
"The openstack-compute service deployed with the setup must run with domain "
"credentials. You can set the service credentials with:"
msgstr ""
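# Hedged example of setting domain credentials on the service; the domain,
# user name, and password are placeholders:
#   sc config openstack-compute obj= "DOMAIN\username" password= "password"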
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml145(emphasis)
msgid "How to setup live migration on Hyper-V"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml146(para)
msgid ""
"To enable shared nothing live migration run the 3 PowerShell instructions "
"below on each Hyper-V host:"
msgstr ""
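# The three PowerShell instructions referred to above, as a sketch; replace
# IP_ADDRESS as described in the next entry:
#   Enable-VMMigration
#   Set-VMMigrationNetwork IP_ADDRESS
#   Set-VMHost -VirtualMachineMigrationAuthenticationType Kerberos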
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml152(replaceable)
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml337(replaceable)
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml338(replaceable)
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml345(replaceable)
msgid "IP_ADDRESS"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml158(para)
msgid ""
"Replace IP_ADDRESS with the address of the interface that will provide the "
"virtual switching for nova-network."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml161(emphasis)
msgid "Additional Reading"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml162(para)
msgid ""
"Here's an article that clarifies the various live migration options in "
"Hyper-V:"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml166(link)
msgid ""
"http://ariessysadmin.blogspot.ro/2012/04/hyper-v-live-migration-of-"
"windows.html"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml170(title)
msgid "Python Requirements"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml171(emphasis)
msgid "Python"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml177(link)
msgid "http://www.python.org/ftp/python/2.7.3/python-2.7.3.msi"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml181(para)
msgid "Install the MSI accepting the default options."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml184(para)
msgid "The installation puts Python in C:\\python27."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml172(para)
msgid ""
"Python 2.7.3 must be installed prior to installing the OpenStack Compute "
"Driver on the Hyper-V server. Download and then install the MSI for windows "
"here:<placeholder-1/>"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml188(emphasis)
msgid "Setuptools"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml189(para)
msgid ""
"You will require pip to install the necessary Python module dependencies. "
"The installer installs under the C:\\python27 directory structure. "
"Setuptools for Python 2.7 for Windows can be downloaded from here:"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml194(link)
msgid ""
"http://pypi.python.org/packages/2.7/s/setuptools/setuptools-0.6c11.win32-py2.7.exe"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml197(emphasis)
msgid "Python Dependencies"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml198(para)
msgid ""
"You must download and manually install the following packages on the Compute"
" node:"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml202(emphasis)
msgid "MySQL-python"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml205(link)
msgid "http://codegood.com/download/10/"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml209(emphasis)
msgid "pywin32"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml210(para)
msgid "Download and run the installer from the following location"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml214(link)
msgid ""
"http://sourceforge.net/projects/pywin32/files/pywin32/Build%20217/pywin32-217.win32-py2.7.exe"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml218(emphasis)
msgid "greenlet"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml219(para)
msgid "Select the link below:"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml221(link)
msgid "http://www.lfd.uci.edu/~gohlke/pythonlibs/"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml222(para)
msgid ""
"You must scroll to the greenlet section for the following file: "
"greenlet-0.4.0.win32-py2.7.exe"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml224(para)
msgid ""
"Click on the file, to initiate the download. Once the download is complete, "
"run the installer."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml228(para)
msgid ""
"You must install the following Python packages through <placeholder-1/> or "
"<placeholder-2/>. Run the following command, replacing PACKAGE_NAME with "
"each of the following packages:"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml231(replaceable)
msgid "PACKAGE_NAME"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml235(para)
msgid "amqplib"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml238(para)
msgid "anyjson"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml241(para)
msgid "distribute"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml244(para)
msgid "eventlet"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml247(para)
msgid "httplib2"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml250(para)
msgid "iso8601"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml253(para)
msgid "jsonschema"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml256(para)
msgid "kombu"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml259(para)
msgid "netaddr"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml262(para)
msgid "paste"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml265(para)
msgid "paste-deploy"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml268(para)
msgid "prettytable"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml271(para)
msgid "python-cinderclient"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml274(para)
msgid "python-glanceclient"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml277(para)
msgid "python-keystoneclient"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml280(para)
msgid "repoze.lru"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml283(para)
msgid "routes"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml286(para)
msgid "sqlalchemy"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml289(para)
msgid "simplejson"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml292(para)
msgid "warlock"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml295(para)
msgid "webob"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml298(para)
msgid "wmi"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml303(title)
msgid "Install Nova-compute"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml304(emphasis)
msgid "Using git on Windows to retrieve source"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml305(para)
msgid ""
"Git can be used to download the necessary source code. The installer to run Git "
"on Windows can be downloaded here:"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml310(link)
msgid ""
"http://code.google.com/p/msysgit/downloads/list?q=full+installer+official+git"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml312(para)
msgid ""
"Download the latest installer. Once the download is complete, double-click "
"the installer and follow the prompts in the installation wizard. The "
"defaults should be acceptable for the needs of this document."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml315(para)
msgid "Once Git is installed, you can run the following command to clone the Nova code."
msgstr ""
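# Example clone command, assuming the upstream GitHub repository:
#   git.exe clone https://github.com/openstack/nova.git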
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml321(title)
msgid "Configure Nova.conf"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml322(para)
msgid ""
"The <filename>nova.conf</filename> file must be placed in "
"<literal>C:\\etc\\nova</literal> for running OpenStack on Hyper-V. Below is "
"a sample <filename>nova.conf</filename> for Windows:"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml346(para)
msgid "The following table contains a reference of all configuration options for Hyper-V."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml350(title)
msgid "Prepare images for use with Hyper-V"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml351(para)
msgid ""
"Hyper-V currently supports only the VHD file format for virtual machine "
"instances. Detailed instructions for installing virtual machines on Hyper-V "
"can be found here:"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml355(link)
msgid "http://technet.microsoft.com/en-us/library/cc772480.aspx"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml356(para)
msgid ""
"Once you have successfully created a virtual machine, you can then upload "
"the image to glance using the native glance-client:"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml359(replaceable)
msgid "VM_IMAGE_NAME"
msgstr ""
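# Hedged example of uploading a VHD image with the glance client; the flags
# and file name are illustrative:
#   glance image-create --name "VM_IMAGE_NAME" --is-public true --container-format bare --disk-format vhd < VM_IMAGE_NAME.vhd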
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml363(title)
msgid "Run Compute with Hyper-V"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml364(para)
msgid ""
"To start the <systemitem class=\"service\">nova-compute</systemitem> "
"service, run this command from a console in the Windows server:"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml371(title)
msgid "Troubleshoot Hyper-V configuration"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml374(para)
msgid ""
"I ran the <literal>nova-manage service list</literal> command from my "
"controller; however, I'm not seeing smiley faces for Hyper-V compute nodes. "
"What do I do?"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml379(link)
msgid "here"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_hyper-v.xml377(emphasis)
msgid ""
"Verify that you are synchronized with a network time source. Instructions "
"for configuring NTP on your Hyper-V compute node are located "
"<placeholder-1/>"
msgstr ""
#. When image changes, this message will be marked fuzzy or untranslated for
#. you.
#. It doesn't matter what you translate it to: it's not used at all.
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml39(None)
msgid ""
"@@image: '../../common/figures/vmware-nova-driver-architecture.jpg'; "
"md5=d95084ce963cffbe3e86307c87d804c1"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml6(title)
msgid "VMware vSphere"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml9(title)
msgid "Introduction"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml10(para)
msgid ""
"OpenStack Compute supports the VMware vSphere product family and enables "
"access to advanced features such as vMotion, High Availability, and Dynamic "
"Resource Scheduling (DRS). This section describes how to configure VMware-"
"based virtual machine images for launch. vSphere versions 4.1 and newer are "
"supported."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml16(para)
msgid ""
"The VMware vCenter driver enables the <systemitem class=\"service\">nova-"
"compute</systemitem> service to communicate with a VMware vCenter server "
"that manages one or more ESX host clusters. The driver aggregates the ESX "
"hosts in each cluster to present one large hypervisor entity for each "
"cluster to the Compute scheduler. Because individual ESX hosts are not "
"exposed to the scheduler, Compute schedules to the granularity of clusters "
"and vCenter uses DRS to select the actual ESX host within the cluster. When "
"a virtual machine makes its way into a vCenter cluster, it can use all "
"vSphere features."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml26(para)
msgid ""
"The following sections describe how to configure the VMware vCenter driver."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml30(title)
msgid "High-level architecture"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml31(para)
msgid ""
"The following diagram shows a high-level view of the VMware driver "
"architecture:"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml34(title)
msgid "VMware driver architecture"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml43(para)
msgid ""
"As the figure shows, the OpenStack Compute Scheduler sees three hypervisors "
"that each correspond to a cluster in vCenter. <systemitem class=\"service"
"\">Nova-compute</systemitem> contains the VMware driver. You can run with "
"multiple <systemitem class=\"service\">nova-compute</systemitem> services. "
"While Compute schedules at the granularity of a cluster, the VMware driver "
"inside <systemitem class=\"service\">nova-compute</systemitem> interacts "
"with the vCenter APIs to select an appropriate ESX host within the cluster. "
"Internally, vCenter uses DRS for placement."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml53(para)
msgid ""
"The VMware vCenter driver also interacts with the OpenStack Image Service to"
" copy VMDK images from the Image Service back end store. The dotted line in "
"the figure represents VMDK images being copied from the OpenStack Image "
"Service to the vSphere data store. VMDK images are cached in the data store "
"so the copy operation is only required the first time that the VMDK image is"
" used."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml60(para)
msgid ""
"After OpenStack boots a VM into a vSphere cluster, the VM becomes visible in"
" vCenter and can access vSphere advanced features. At the same time, the VM "
"is visible in the OpenStack dashboard and you can manage it as you would any"
" other OpenStack VM. You can perform advanced vSphere operations in vCenter "
"while you configure OpenStack resources such as VMs through the OpenStack "
"dashboard."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml67(para)
msgid ""
"The figure does not show how networking fits into the architecture. Both "
"<systemitem class=\"service\">nova-network</systemitem> and the OpenStack "
"Networking Service are supported. For details, see <xref "
"linkend=\"VMWare_networking\"/>."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml74(title)
msgid "Configuration overview"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml75(para)
msgid ""
"To get started with the VMware vCenter driver, complete the following high-"
"level steps:"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml79(para)
msgid "Configure vCenter correctly. See <xref linkend=\"vmware-prereqs\"/>."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml83(para)
msgid ""
"Configure <filename>nova.conf</filename> for the VMware vCenter driver. See "
"<xref linkend=\"VMWareVCDriver_details\"/>."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml88(para)
msgid ""
"Load desired VMDK images into the OpenStack Image Service. See <xref "
"linkend=\"VMWare_images\"/>."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml92(para)
msgid ""
"Configure networking with either <systemitem class=\"service\">nova-"
"network</systemitem> or the OpenStack Networking Service. See <xref "
"linkend=\"VMWare_networking\"/>."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml100(title)
msgid "Prerequisites and limitations"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml101(para)
msgid ""
"Use the following list to prepare a vSphere environment that runs with the "
"VMware vCenter driver:"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml105(para)
msgid ""
"<emphasis role=\"bold\">vCenter inventory</emphasis>. Make sure that any "
"vCenter used by OpenStack contains a single data center. A future Havana "
"stable release will address this temporary limitation."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml111(para)
msgid ""
"<emphasis role=\"bold\">DRS</emphasis>. For any cluster that contains "
"multiple ESX hosts, enable DRS and enable fully automated placement."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml116(para)
msgid ""
"<emphasis role=\"bold\">Shared storage</emphasis>. Only shared storage is "
"supported and data stores must be shared among all hosts in a cluster. It is"
" recommended to remove data stores not intended for OpenStack from clusters "
"being configured for OpenStack. Currently, a single data store can be used "
"per cluster. A future Havana stable release will address this temporary "
"limitation."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml125(para)
msgid ""
"<emphasis role=\"bold\">Clusters and data stores</emphasis>. Do not use "
"OpenStack clusters and data stores for other purposes. If you do, OpenStack "
"displays incorrect usage information."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml131(para)
msgid ""
"<emphasis role=\"bold\">Networking</emphasis>. The networking configuration "
"depends on the desired networking model. See <xref "
"linkend=\"VMWare_networking\"/>."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml136(para)
msgid ""
"<emphasis role=\"bold\">Security groups</emphasis>. If you use the VMware "
"driver with OpenStack Networking and the NSX plug-in, security groups are "
"supported. If you use <systemitem class=\"service\">nova-"
"network</systemitem>, security groups are not supported."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml141(para)
msgid "The NSX plug-in is the only plug-in that is validated for vSphere."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml145(para)
msgid ""
"<emphasis role=\"bold\">VNC</emphasis>. The port range 5900 - 6105 "
"(inclusive) is automatically enabled for VNC connections on every ESX host "
"in all clusters under OpenStack control. For more information about using a "
"VNC client to connect to virtual machine, see <link "
"href=\"http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;cmd=displayKC&amp;externalId=1246\">http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;cmd=displayKC&amp;externalId=1246</link>."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml152(para)
msgid ""
"In addition to the default VNC port numbers (5900 to 6000) specified in the "
"above document, the following ports are also used: 6101, 6102, and 6105."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml155(para)
msgid ""
"You must modify the ESXi firewall configuration to allow the VNC ports. "
"Additionally, for the firewall modifications to persist after a reboot, you "
"must create a custom vSphere Installation Bundle (VIB) which is then "
"installed onto the running ESXi host or added to a custom image profile used"
" to install ESXi hosts. For details about how to create a VIB for persisting"
" the firewall configuration modifications, see <link "
"href=\"http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;cmd=displayKC&amp;externalId=2007381\">"
" "
"http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;cmd=displayKC&amp;externalId=2007381</link>."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml168(para)
msgid ""
"<emphasis role=\"bold\">Ephemeral Disks</emphasis>. Ephemeral disks are not "
"supported. A future stable release will address this temporary limitation."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml175(title)
msgid "VMware vCenter driver"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml176(para)
msgid ""
"Use the VMware vCenter driver (VMwareVCDriver) to connect OpenStack Compute "
"with vCenter. This recommended configuration enables access through vCenter "
"to advanced vSphere features like vMotion, High Availability, and Dynamic "
"Resource Scheduling (DRS)."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml182(title)
msgid "VMwareVCDriver configuration options"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml183(para)
msgid ""
"When you use the VMwareVCDriver (vCenter) with OpenStack Compute, add the "
"following VMware-specific configuration options to the "
"<filename>nova.conf</filename> file:"
msgstr ""
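# Sketch of the VMware-specific nova.conf options; the Havana-era [vmware]
# group layout is assumed, and angle-bracket values are placeholders:
#   [DEFAULT]
#   compute_driver=vmwareapi.VMwareVCDriver
#
#   [vmware]
#   host_ip=<vCenter hostname or IP address>
#   host_username=<vCenter username>
#   host_password=<vCenter password>
#   cluster_name=<vCenter cluster name>
#   datastore_regex=<optional datastore regex>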
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml200(para)
msgid ""
"Clusters: The vCenter driver can support multiple clusters. To use more than"
" one cluster, simply add multiple <code>cluster_name</code> lines in "
"<filename>nova.conf</filename> with the appropriate cluster name. Clusters "
"and data stores used by the vCenter driver should not contain any VMs other "
"than those created by the driver."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml209(para)
msgid ""
"Data stores: The <code>datastore_regex</code> field specifies the data "
"stores to use with Compute. For example, "
"<code>datastore_regex=\"nas.*\"</code> selects all the data stores that have"
" a name starting with \"nas\". If this line is omitted, Compute uses the "
"first data store returned by the vSphere API. It is recommended not to use "
"this field and instead remove data stores that are not intended for "
"OpenStack."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml220(para)
msgid ""
"A <systemitem class=\"service\">nova-compute</systemitem> service can "
"control one or more clusters containing multiple ESX hosts, making "
"<systemitem class=\"service\">nova-compute</systemitem> a critical service "
"from a high availability perspective. Because the host that runs <systemitem"
" class=\"service\">nova-compute</systemitem> can fail while the vCenter and "
"ESX still run, you must protect the <systemitem class=\"service\">nova-"
"compute</systemitem> service against host failures."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml230(para)
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml634(para)
msgid ""
"Many <filename>nova.conf</filename> options are relevant to libvirt but do "
"not apply to this driver."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml233(para)
msgid ""
"You must complete additional configuration for environments that use vSphere"
" 5.0 and earlier. See <xref linkend=\"VMWare_additional_config\"/>."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml239(title)
msgid "Images with VMware vSphere"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml240(para)
msgid ""
"The vCenter driver supports images in the VMDK format. Disks in this format "
"can be obtained from VMware Fusion or from an ESX environment. It is also "
"possible to convert other formats, such as qcow2, to the VMDK format using "
"the <code>qemu-img</code> utility. After a VMDK disk is available, load it "
"into the OpenStack Image Service. Then, you can use it with the VMware "
"vCenter driver. The following sections provide additional details on the "
"supported disks and the commands used for conversion and upload."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml250(title)
msgid "Supported image types"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml251(para)
msgid ""
"Upload images to the OpenStack Image Service in VMDK format. The following "
"VMDK disk types are supported:"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml255(para)
msgid ""
"<emphasis role=\"italic\">VMFS Flat Disks</emphasis> (includes thin, thick, "
"zeroedthick, and eagerzeroedthick). Note that once a VMFS thin disk is "
"exported from VMFS to a non-VMFS location, like the OpenStack Image Service,"
" it becomes a preallocated flat disk. This impacts the transfer time from "
"the OpenStack Image Service to the data store when the full preallocated "
"flat disk, rather than the thin disk, must be transferred."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml265(para)
msgid ""
"<emphasis role=\"italic\">Monolithic Sparse disks</emphasis>. Sparse disks "
"get imported from the OpenStack Image Service into ESX as thin provisioned "
"disks. Monolithic Sparse disks can be obtained from VMware Fusion or can be "
"created by converting from other virtual disk formats using the <code>qemu-"
"img</code> utility."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml274(para)
msgid ""
"The following table shows the <code>vmware_disktype</code> property that "
"applies to each of the supported VMDK disk types:"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml278(caption)
msgid "OpenStack Image Service disk type settings"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml281(th)
msgid "vmware_disktype property"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml282(th)
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml31(title)
msgid "VMDK disk type"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml287(td)
msgid "sparse"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml289(para)
msgid "Monolithic Sparse"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml293(td)
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml51(td)
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml53(td)
msgid "thin"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml295(para)
msgid "VMFS flat, thin provisioned"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml299(td)
msgid "preallocated (default)"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml301(para)
msgid "VMFS flat, thick/zeroedthick/eagerzeroedthick"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml308(para)
msgid ""
"The <code>vmware_disktype</code> property is set when an image is loaded "
"into the OpenStack Image Service. For example, the following command creates"
" a Monolithic Sparse image by setting <code>vmware_disktype</code> to "
"<literal>sparse</literal>:"
msgstr ""
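# Hedged example of the upload command described above; the image and file
# names are illustrative:
#   glance image-create --name precise-sparse --disk-format vmdk --container-format bare --property vmware_disktype="sparse" < precise.vmdk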
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml317(para)
msgid ""
"Note that specifying <literal>thin</literal> does not provide any advantage "
"over <literal>preallocated</literal> with the current version of the driver."
" Future versions might restore the thin properties of the disk after it is "
"downloaded to a vSphere data store."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml324(title)
msgid "Convert and load images"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml325(para)
msgid ""
"Using the <code>qemu-img</code> utility, disk images in several formats "
"(such as, qcow2) can be converted to the VMDK format."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml328(para)
msgid ""
"For example, the following command can be used to convert a <link "
"href=\"http://cloud-images.ubuntu.com/precise/current/precise-server-"
"cloudimg-amd64-disk1.img\">qcow2 Ubuntu Precise cloud image</link>:"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml334(para)
msgid ""
"VMDK disks converted through <code>qemu-img</code> are <emphasis "
"role=\"italic\">always</emphasis> monolithic sparse VMDK disks with an IDE "
"adapter type. Using the previous example of the Precise Ubuntu image after "
"the <code>qemu-img</code> conversion, the command to upload the VMDK disk "
"should be something like:"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml345(para)
msgid ""
"Note that the <code>vmware_disktype</code> is set to <emphasis "
"role=\"italic\">sparse</emphasis> and the <code>vmware_adaptertype</code> is"
" set to <emphasis role=\"italic\">ide</emphasis> in the previous command."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml349(para)
msgid ""
"If the image did not come from the <code>qemu-img</code> utility, the "
"<code>vmware_disktype</code> and <code>vmware_adaptertype</code> might be "
"different. To determine the image adapter type from an image file, use the "
"following command and look for the <code>ddb.adapterType=</code> line:"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml356(para)
msgid ""
"Assuming a preallocated disk type and an iSCSI lsiLogic adapter type, the "
"following command uploads the VMDK disk:"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml364(para)
msgid ""
"Currently, OS boot VMDK disks with an IDE adapter type cannot be attached to"
" a virtual SCSI controller and likewise disks with one of the SCSI adapter "
"types (such as, busLogic, lsiLogic) cannot be attached to the IDE "
"controller. Therefore, as the previous examples show, it is important to set"
" the <code>vmware_adaptertype</code> property correctly. The default adapter"
" type is lsiLogic, which is SCSI, so you can omit the "
"<parameter>vmware_adaptertype</parameter> property if you are certain that "
"the image adapter type is lsiLogic."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml376(title)
msgid "Tag VMware images"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml377(para)
msgid ""
"In a mixed hypervisor environment, OpenStack Compute uses the "
"<code>hypervisor_type</code> tag to match images to the correct hypervisor "
"type. For VMware images, set the hypervisor type to "
"<literal>vmware</literal>. Other valid hypervisor types include: xen, qemu, "
"kvm, lxc, uml, and hyperv."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml390(title)
msgid "Optimize images"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml391(para)
msgid ""
"Monolithic Sparse disks are considerably faster to download but have the "
"overhead of an additional conversion step. When imported into ESX, sparse "
"disks get converted to VMFS flat thin provisioned disks. The download and "
"conversion steps only affect the first launched instance that uses the "
"sparse disk image. The converted disk image is cached, so subsequent "
"instances that use this disk image can simply use the cached version."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml399(para)
msgid ""
"To avoid the conversion step (at the cost of longer download times) consider"
" converting sparse disks to thin provisioned or preallocated disks before "
"loading them into the OpenStack Image Service. Below are some tools that can"
" be used to pre-convert sparse disks."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml406(emphasis)
msgid "Using vSphere CLI (or sometimes called the remote CLI or rCLI) tools"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml408(para)
msgid ""
"Assuming that the sparse disk is made available on a data store accessible "
"by an ESX host, the following command converts it to preallocated format:"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml412(para)
msgid ""
"(Note that the vifs tool from the same CLI package can be used to upload the"
" disk to be converted. The vifs tool can also be used to download the "
"converted disk if necessary.)"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml418(emphasis)
msgid "Using vmkfstools directly on the ESX host"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml420(para)
msgid ""
"If the SSH service is enabled on an ESX host, the sparse disk can be "
"uploaded to the ESX data store via scp and the vmkfstools local to the ESX "
"host can use used to perform the conversion: (After logging in to the host "
"via ssh)"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml429(emphasis)
msgid "vmware-vdiskmanager"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml430(para)
msgid ""
"<code>vmware-vdiskmanager</code> is a utility that comes bundled with VMware"
" Fusion and VMware Workstation. Below is an example of converting a sparse "
"disk to preallocated format:"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml435(para)
msgid ""
"In all of the above cases, the converted vmdk is actually a pair of files: "
"the descriptor file <emphasis role=\"italic\">converted.vmdk</emphasis> and "
"the actual virtual disk data file <emphasis role=\"italic\">converted-"
"flat.vmdk</emphasis>. The file to be uploaded to the OpenStack Image Service"
" is <emphasis role=\"italic\">converted-flat.vmdk</emphasis>."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml446(title)
msgid "Image handling"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml447(para)
msgid ""
"The ESX hypervisor requires a copy of the VMDK file in order to boot up a "
"virtual machine. As a result, the vCenter OpenStack Compute driver must "
"download the VMDK via HTTP from the OpenStack Image Service to a data store "
"that is visible to the hypervisor. To optimize this process, the first time "
"a VMDK file is used, it gets cached in the data store. Subsequent virtual "
"machines that need the VMDK use the cached version and don't have to copy "
"the file again from the OpenStack Image Service."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml456(para)
msgid ""
"Even with a cached VMDK, there is still a copy operation from the cache "
"location to the hypervisor file directory in the shared data store. To avoid"
" this copy, boot the image in linked_clone mode. To learn how to enable this"
" mode, see <xref linkend=\"VMWare_config\"/>. Note also that it is possible "
"to override the linked_clone mode on a per-image basis by using the "
"<code>vmware_linked_clone</code> property in the OpenStack Image Service."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml467(title)
msgid "Networking with VMware vSphere"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml468(para)
msgid ""
"The VMware driver supports networking with the <systemitem class=\"service"
"\">nova-network</systemitem> service or the OpenStack Networking Service. "
"Depending on your installation, complete these configuration steps before "
"you provision VMs:"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml475(para)
msgid ""
"<emphasis role=\"bold\">The <systemitem class=\"service\">nova-"
"network</systemitem> service with the FlatManager or "
"FlatDHCPManager</emphasis>. Create a port group with the same name as the "
"<literal>flat_network_bridge</literal> value in the "
"<filename>nova.conf</filename> file. The default value is "
"<literal>br100</literal>."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml481(para)
msgid "All VM NICs are attached to this port group."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml482(para)
msgid ""
"Ensure that the flat interface of the node that runs the <systemitem "
"class=\"service\">nova-network</systemitem> service has a path to this "
"network."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml487(para)
msgid ""
"<emphasis role=\"bold\">The <systemitem class=\"service\">nova-"
"network</systemitem> service with the VlanManager</emphasis>. Set the "
"<literal>vlan_interface</literal> configuration option to match the ESX host"
" interface that handles VLAN-tagged VM traffic."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml493(para)
msgid "OpenStack Compute automatically creates the corresponding port groups."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml497(para)
msgid ""
"<emphasis role=\"bold\">The OpenStack Networking Service</emphasis>. If you "
"use <acronym>OVS</acronym> as the l2 agent, create a port group with the "
"same name as the <literal>DEFAULT.neutron_ovs_bridge</literal> value in the "
"<filename>nova.conf</filename> file. Otherwise, create a port group with the"
" same name as the <literal>vmware.integration_bridge</literal> value in the "
"<filename>nova.conf</filename> file. In both cases, the default value is "
"<literal>br-int</literal>."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml506(para)
msgid ""
"All VM NICs are attached to this port group for management by the OpenStack "
"Networking plug-in."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml512(title)
msgid "Volumes with VMware vSphere"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml513(para)
msgid ""
"The VMware driver supports attaching volumes from the OpenStack Block "
"Storage service. The VMware VMDK driver for OpenStack Block Storage is "
"recommended and should be used for managing volumes based on vSphere data "
"stores. More information about the VMware VMDK driver can be found at: <link"
" href=\"http://docs.openstack.org/trunk/config-reference/content/vmware-"
"vmdk-driver.html\">VMware VMDK Driver</link>. Also an iscsi volume driver "
"provides limited support and can be used only for attachments."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml524(title)
msgid "vSphere 5.0 and earlier additional set up"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml525(para)
msgid ""
"Users of vSphere 5.0 or earlier must host their WSDL files locally. These "
"steps are applicable for vCenter 5.0 or ESXi 5.0 and you can either mirror "
"the WSDL from the vCenter or ESXi server that you intend to use or you can "
"download the SDK directly from VMware. These workaround steps fix a <link "
"href=\"http://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKC&amp;externalId=2010507\">known"
" issue</link> with the WSDL that was resolved in later versions."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml534(title)
msgid "Mirror WSDL from vCenter (or ESXi)"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml536(para)
msgid ""
"Set the <code>VMWAREAPI_IP</code> shell variable to the IP address for your "
"vCenter or ESXi host from where you plan to mirror files. For example:"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml542(para)
msgid "Create a local file system directory to hold the WSDL files:"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml547(para)
msgid "Change into the new directory. <placeholder-1/>"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml551(para)
msgid ""
"Use your OS-specific tools to install a command-line tool that can download "
"files like <placeholder-1/>."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml556(para)
msgid "Download the files to the local file cache:"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml566(para)
msgid ""
"Because the <filename>reflect-types.xsd</filename> and <filename>reflect-"
"messagetypes.xsd</filename> files do not fetch properly, you must stub out "
"these files. Use the following XML listing to replace the missing file "
"content. The XML parser underneath Python can be very particular and if you "
"put a space in the wrong place, it can break the parser. Copy the following "
"contents and formatting carefully."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml583(para)
msgid ""
"Now that the files are locally present, tell the driver to look for the SOAP"
" service WSDLs in the local file system and not on the remote vSphere "
"server. Add the following setting to the <filename>nova.conf</filename> file"
" for your <systemitem class=\"service\">nova-compute</systemitem> node:"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml593(para)
msgid ""
"Alternatively, download the version appropriate SDK from <link "
"href=\"http://www.vmware.com/support/developer/vc-"
"sdk/\">http://www.vmware.com/support/developer/vc-sdk/</link> and copy it to"
" the <filename>/opt/stack/vmware</filename> file. Make sure that the WSDL is"
" available, in for example "
"<filename>/opt/stack/vmware/SDK/wsdl/vim25/vimService.wsdl</filename>. You "
"must point <filename>nova.conf</filename> to fetch this WSDL file from the "
"local file system by using a URL."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml602(para)
msgid ""
"When using the VMwareVCDriver (vCenter) with OpenStack Compute with vSphere "
"version 5.0 or earlier, <filename>nova.conf</filename> must include the "
"following extra config option:"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml610(title)
msgid "VMware ESX driver"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml611(para)
msgid ""
"This section covers details of using the VMwareESXDriver. The ESX Driver has"
" not been extensively tested and is not recommended. To configure the VMware"
" vCenter driver instead, see <xref linkend=\"VMWareVCDriver_details\"/>."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml616(title)
msgid "VMwareESXDriver configuration options"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml617(para)
msgid ""
"When you use the VMwareESXDriver (no vCenter) with OpenStack Compute, add "
"the following VMware-specific configuration options to the "
"<filename>nova.conf</filename> file:"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml629(para)
msgid ""
"Remember that you will have one <systemitem class=\"service\">nova-"
"compute</systemitem> service for each ESXi host. It is recommended that this"
" host run as a VM on the same ESXi host that it manages."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml639(title)
msgid "Requirements and limitations"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml640(para)
msgid ""
"The ESXDriver cannot use many of the vSphere platform advanced capabilities,"
" namely vMotion, high availability, and DRS."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_vmware.xml646(title)
#: ./doc/config-reference/compute/section_compute-scheduler.xml527(title)
msgid "Configuration reference"
msgstr ""
#. When image changes, this message will be marked fuzzy or untranslated for
#. you.
#. It doesn't matter what you translate it to: it's not used at all.
#: ./doc/config-reference/compute/section_compute-scheduler.xml75(None)
msgid ""
"@@image: '../../common/figures/filteringWorkflow1.png'; "
"md5=c144af5cbdee1bd17a7bde0bea5b5fe7"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml10(title)
msgid "Scheduling"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml11(para)
msgid ""
"Compute uses the <systemitem class=\"service\">nova-scheduler</systemitem> "
"service to determine how to dispatch compute and volume requests. For "
"example, the <systemitem class=\"service\">nova-scheduler</systemitem> "
"service determines which host a VM should launch on. The term "
"<firstterm>host</firstterm> in the context of filters means a physical node "
"that has a <systemitem class=\"service\">nova-compute</systemitem> service "
"running on it. You can configure the scheduler through a variety of options."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml17(para)
msgid "Compute is configured with the following default scheduler options:"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml24(para)
msgid ""
"By default, the compute scheduler is configured as a filter scheduler, as "
"described in the next section. In the default configuration, this scheduler "
"considers hosts that meet all the following criteria:"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml30(para)
msgid ""
"Are in the requested availability zone "
"(<literal>AvailabilityZoneFilter</literal>)."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml34(para)
msgid "Have sufficient RAM available (<literal>RamFilter</literal>)."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml38(para)
msgid ""
"Are capable of servicing the request (<literal>ComputeFilter</literal>)."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml42(para)
msgid ""
"For information on the volume scheduler, refer the Block Storage section of "
"<link href=\"http://docs.openstack.org/admin-guide-cloud/content/managing-"
"volumes.html\"><citetitle>OpenStack Cloud Administrator "
"Guide</citetitle></link> for information."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml46(title)
msgid "Filter scheduler"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml47(para)
msgid ""
"The Filter Scheduler "
"(<literal>nova.scheduler.filter_scheduler.FilterScheduler</literal>) is the "
"default scheduler for scheduling virtual machine instances. It supports "
"filtering and weighting to make informed decisions on where a new instance "
"should be created. You can use this scheduler to schedule compute requests "
"but not volume requests. For example, you can use it with only the "
"<literal>compute_scheduler_driver</literal> configuration option."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml60(title)
msgid "Filters"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml70(title)
msgid "Filtering"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml61(para)
msgid ""
"When the Filter Scheduler receives a request for a resource, it first "
"applies filters to determine which hosts are eligible for consideration when"
" dispatching a resource. Filters are binary: either a host is accepted by "
"the filter, or it is rejected. Hosts that are accepted by the filter are "
"then processed by a different algorithm to decide which hosts to use for "
"that request, described in the <link linkend=\"weights\">Weights</link> "
"section. <placeholder-1/>"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml80(para)
msgid ""
"The <literal>scheduler_available_filters</literal> configuration option in "
"<filename>nova.conf</filename> provides the Compute service with the list of"
" the filters that are used by the scheduler. The default setting specifies "
"all of the filter that are included with the Compute service:"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml87(para)
msgid ""
"This configuration option can be specified multiple times. For example, if "
"you implemented your own custom filter in Python called "
"<literal>myfilter.MyFilter</literal> and you wanted to use both the built-in"
" filters and your custom filter, your <filename>nova.conf</filename> file "
"would contain:"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml96(para)
msgid ""
"The <literal>scheduler_default_filters</literal> configuration option in "
"<filename>nova.conf</filename> defines the list of filters that are applied "
"by the <systemitem class=\"service\">nova-scheduler</systemitem> service. As"
" mentioned, the default filters are:"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml102(para)
msgid "The following sections describe the available filters."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml105(title)
msgid "AggregateCoreFilter"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml106(para)
msgid ""
"Implements blueprint per-aggregate-resource-ratio. AggregateCoreFilter "
"supports per-aggregate <literal>cpu_allocation_ratio</literal>. If the per-"
"aggregate value is not found, the value falls back to the global setting."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml113(title)
msgid "AggregateInstanceExtraSpecsFilter"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml114(para)
msgid ""
"Matches properties defined in an instance type's extra specs against admin-"
"defined properties on a host aggregate. Works with specifications that are "
"unscoped, or are scoped with "
"<literal>aggregate_instance_extra_specs</literal>. See the <link linkend"
"=\"host-aggregates\">host aggregates</link> section for documentation on how"
" to use this filter."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml123(title)
msgid "AggregateMultiTenancyIsolation"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml124(para)
msgid ""
"Isolates tenants to specific <link linkend=\"host-aggregates\">host "
"aggregates</link>. If a host is in an aggregate that has the metadata key "
"<literal>filter_tenant_id</literal> it only creates instances from that "
"tenant (or list of tenants). A host can be in different aggregates. If a "
"host does not belong to an aggregate with the metadata key, it can create "
"instances from all tenants."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml134(title)
msgid "AggregateRamFilter"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml135(para)
msgid ""
"Implements blueprint <literal>per-aggregate-resource-ratio</literal>. "
"Supports per-aggregate <literal>ram_allocation_ratio</literal>. If per-"
"aggregate value is not found, it falls back to the default setting."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml143(title)
msgid "AllHostsFilter"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml144(para)
msgid ""
"This is a no-op filter, it does not eliminate any of the available hosts."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml148(title)
msgid "AvailabilityZoneFilter"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml149(para)
msgid ""
"Filters hosts by availability zone. This filter must be enabled for the "
"scheduler to respect availability zones in requests."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml154(title)
msgid "ComputeCapabilitiesFilter"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml155(para)
msgid ""
"Matches properties defined in an instance type's extra specs against compute"
" capabilities."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml157(para)
msgid ""
"If an extra specs key contains a colon \":\", anything before the colon is "
"treated as a namespace, and anything after the colon is treated as the key "
"to be matched. If a namespace is present and is not 'capabilities', it is "
"ignored by this filter."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml163(para)
msgid ""
"Disable the ComputeCapabilitiesFilter when using a Bare Metal configuration,"
" due to <link href=\"https://bugs.launchpad.net/nova/+bug/1129485\">bug "
"1129485</link>"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml170(title)
msgid "ComputeFilter"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml171(para)
msgid "Passes all hosts that are operational and enabled."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml173(para)
msgid "In general, this filter should always be enabled."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml177(title)
msgid "CoreFilter"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml178(para)
msgid ""
"Only schedule instances on hosts if there are sufficient CPU cores "
"available. If this filter is not set, the scheduler may over provision a "
"host based on cores (for example, the virtual cores running on an instance "
"may exceed the physical cores)."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml183(para)
msgid ""
"This filter can be configured to allow a fixed amount of vCPU overcommitment"
" by using the <literal>cpu_allocation_ratio</literal> Configuration option "
"in <filename>nova.conf</filename>. The default setting is:"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml190(para)
msgid ""
"With this setting, if 8 vCPUs are on a node, the scheduler allows instances "
"up to 128 vCPU to be run on that node."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml193(para)
msgid "To disallow vCPU overcommitment set:"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml197(title)
msgid "DifferentHostFilter"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml198(para)
msgid ""
"Schedule the instance on a different host from a set of instances. To take "
"advantage of this filter, the requester must pass a scheduler hint, using "
"<literal>different_host</literal> as the key and a list of instance uuids as"
" the value. This filter is the opposite of the "
"<literal>SameHostFilter</literal>. Using the <placeholder-1/> command-line "
"tool, use the <literal>--hint</literal> flag. For example:"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml210(para)
msgid ""
"With the API, use the <literal>os:scheduler_hints</literal> key. For "
"example:"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml227(title)
msgid "DiskFilter"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml228(para)
msgid ""
"Only schedule instances on hosts if there is sufficient disk space available"
" for root and ephemeral storage."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml231(para)
msgid ""
"This filter can be configured to allow a fixed amount of disk overcommitment"
" by using the <literal>disk_allocation_ratio</literal> Configuration option "
"in <filename>nova.conf</filename>. The default setting is:"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml238(para)
msgid ""
"Adjusting this value to greater than 1.0 enables scheduling instances while "
"over committing disk resources on the node. This might be desirable if you "
"use an image format that is sparse or copy on write such that each virtual "
"instance does not require a 1:1 allocation of virtual disk to physical "
"storage."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml245(title)
msgid "GroupAffinityFilter"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml246(para)
msgid ""
"The GroupAffinityFilter ensures that an instance is scheduled on to a host "
"from a set of group hosts. To take advantage of this filter, the requester "
"must pass a scheduler hint, using <literal>group</literal> as the key and an"
" arbitrary name as the value. Using the <placeholder-1/> command-line tool, "
"use the <literal>--hint</literal> flag. For example:"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml258(title)
msgid "GroupAntiAffinityFilter"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml259(para)
msgid ""
"The GroupAntiAffinityFilter ensures that each instance in a group is on a "
"different host. To take advantage of this filter, the requester must pass a "
"scheduler hint, using <literal>group</literal> as the key and an arbitrary "
"name as the value. Using the <placeholder-1/> command-line tool, use the "
"<literal>--hint</literal> flag. For example:"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml271(title)
msgid "ImagePropertiesFilter"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml272(para)
msgid ""
"Filters hosts based on properties defined on the instance's image. It passes"
" hosts that can support the specified image properties contained in the "
"instance. Properties include the architecture, hypervisor type, and virtual "
"machine mode. for example, an instance might require a host that runs an "
"ARM-based processor and QEMU as the hypervisor. An image can be decorated "
"with these properties by using:"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml281(para)
msgid "The image properties that the filter checks for are:"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml285(para)
msgid ""
"<literal>architecture</literal>: Architecture describes the machine "
"architecture required by the image. Examples are i686, x86_64, arm, and "
"ppc64."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml291(para)
msgid ""
"<literal>hypervisor_type</literal>: Hypervisor type describes the hypervisor"
" required by the image. Examples are xen, kvm, qemu, and xenapi."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml297(para)
msgid ""
"<literal>vm_mode</literal>: Virtual machine mode describes the hypervisor "
"application binary interface (ABI) required by the image. Examples are 'xen'"
" for Xen 3.0 paravirtual ABI, 'hvm' for native ABI, 'uml' for User Mode "
"Linux paravirtual ABI, exe for container virt executable ABI."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml308(title)
msgid "IsolatedHostsFilter"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml309(para)
msgid ""
"Allows the admin to define a special (isolated) set of images and a special "
"(isolated) set of hosts, such that the isolated images can only run on the "
"isolated hosts, and the isolated hosts can only run isolated images. The "
"flag <literal>restrict_isolated_hosts_to_isolated_images</literal> can be "
"used to force isolated hosts to only run isolated images."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml315(para)
msgid ""
"The admin must specify the isolated set of images and hosts in the "
"<filename>nova.conf</filename> file using the "
"<literal>isolated_hosts</literal> and <literal>isolated_images</literal> "
"configuration options. For example: <placeholder-1/>"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml325(title)
msgid "JsonFilter"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml330(para)
msgid "="
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml333(para)
msgid "&lt;"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml336(para)
msgid "&gt;"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml339(para)
msgid "in"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml342(para)
msgid "&lt;="
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml345(para)
msgid "&gt;="
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml348(para)
msgid "not"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml351(para)
msgid "or"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml354(para)
msgid "and"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml358(para)
msgid "$free_ram_mb"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml361(para)
msgid "$free_disk_mb"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml364(para)
msgid "$total_usable_ram_mb"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml367(para)
msgid "$vcpus_total"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml370(para)
msgid "$vcpus_used"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml326(para)
msgid ""
"The JsonFilter allows a user to construct a custom filter by passing a "
"scheduler hint in JSON format. The following operators are "
"supported:<placeholder-1/>The filter supports the following "
"variables:<placeholder-2/>Using the <placeholder-3/> command-line tool, use "
"the <literal>--hint</literal> flag:"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml377(para)
#: ./doc/config-reference/compute/section_compute-scheduler.xml436(para)
#: ./doc/config-reference/compute/section_compute-scheduler.xml482(para)
msgid "With the API, use the <literal>os:scheduler_hints</literal> key:"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml393(title)
msgid "RamFilter"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml394(para)
msgid ""
"Only schedule instances on hosts that have sufficient RAM available. If this"
" filter is not set, the scheduler may over provision a host based on RAM "
"(for example, the RAM allocated by virtual machine instances may exceed the "
"physical RAM)."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml399(para)
msgid ""
"This filter can be configured to allow a fixed amount of RAM overcommitment "
"by using the <literal>ram_allocation_ratio</literal> configuration option in"
" <filename>nova.conf</filename>. The default setting is:"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml406(para)
msgid ""
"With this setting, if there is 1GB of free RAM, the scheduler allows "
"instances up to size 1.5GB to be run on that instance."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml411(title)
msgid "RetryFilter"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml412(para)
msgid ""
"Filter out hosts that have already been attempted for scheduling purposes. "
"If the scheduler selects a host to respond to a service request, and the "
"host fails to respond to the request, this filter prevents the scheduler "
"from retrying that host for the service request."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml417(para)
msgid ""
"This filter is only useful if the <literal>scheduler_max_attempts</literal> "
"configuration option is set to a value greater than zero."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml423(title)
msgid "SameHostFilter"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml424(para)
msgid ""
"Schedule the instance on the same host as another instance in a set of "
"instances. To take advantage of this filter, the requester must pass a "
"scheduler hint, using <literal>same_host</literal> as the key and a list of "
"instance uuids as the value. This filter is the opposite of the "
"<literal>DifferentHostFilter</literal>. Using the <placeholder-1/> command-"
"line tool, use the <literal>--hint</literal> flag:"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml453(title)
msgid "SimpleCIDRAffinityFilter"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml454(para)
msgid ""
"Schedule the instance based on host IP subnet range. To take advantage of "
"this filter, the requester must specify a range of valid IP address in CIDR "
"format, by passing two scheduler hints:"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml460(literal)
msgid "build_near_host_ip"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml462(para)
msgid ""
"The first IP address in the subnet (for example, "
"<literal>192.168.1.1</literal>)"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml468(literal)
msgid "cidr"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml470(para)
msgid ""
"The CIDR that corresponds to the subnet (for example, "
"<literal>/24</literal>)"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml476(para)
msgid ""
"Using the <placeholder-1/> command-line tool, use the "
"<literal>--hint</literal> flag. For example, to specify the IP subnet "
"<literal>192.168.1.1/24</literal>"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml499(title)
msgid "Weights"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml501(para)
msgid ""
"The Filter Scheduler weighs hosts based on the config option "
"<literal>scheduler_weight_classes</literal>, this defaults to "
"<literal>nova.scheduler.weights.all_weighers</literal>, which selects the "
"only weigher available -- the RamWeigher. Hosts are then weighed and sorted "
"with the largest weight winning."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml510(para)
msgid ""
"The default is to spread instances across all hosts evenly. Set the "
"<literal>ram_weight_multiplier</literal> option to a negative number if you "
"prefer stacking instead of spreading."
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml516(title)
msgid "Chance scheduler"
msgstr ""
#: ./doc/config-reference/compute/section_compute-scheduler.xml518(para)
msgid ""
"As an administrator, you work with the Filter Scheduler. However, the "
"Compute service also uses the Chance Scheduler, "
"<literal>nova.scheduler.chance.ChanceScheduler</literal>, which randomly "
"selects from lists of filtered hosts. It is the default volume scheduler."
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml10(title)
msgid "Cells"
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml12(para)
msgid ""
"<emphasis role=\"italic\">Cells</emphasis> functionality allows you to scale"
" an OpenStack Compute cloud in a more distributed fashion without having to "
"use complicated technologies like database and message queue clustering. It "
"is intended to support very large deployments."
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml16(para)
msgid ""
"When this functionality is enabled, the hosts in an OpenStack Compute cloud "
"are partitioned into groups called cells. Cells are configured as a tree. "
"The top-level cell should have a host that runs a <systemitem "
"class=\"service\">nova-api</systemitem> service, but no <systemitem "
"class=\"service\">nova-compute</systemitem> services. Each child cell should"
" run all of the typical <systemitem class=\"service\">nova-*</systemitem> "
"services in a regular Compute cloud except for <systemitem class=\"service"
"\">nova-api</systemitem>. You can think of cells as a normal Compute "
"deployment in that each cell has its own database server and message queue "
"broker."
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml24(para)
msgid ""
"The <systemitem class=\"service\">nova-cells</systemitem> service handles "
"communication between cells and selects cells for new instances. This "
"service is required for every cell. Communication between cells is "
"pluggable, and currently the only option is communication through RPC."
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml28(para)
msgid ""
"Cells scheduling is separate from host scheduling. <systemitem "
"class=\"service\">nova-cells</systemitem> first picks a cell (now randomly, "
"but future releases plan to add filtering/weighing functionality, and "
"decisions will be based on broadcasts of capacity/capabilities). Once a cell"
" is selected and the new build request reaches its <systemitem "
"class=\"service\">nova-cells</systemitem> service, it is sent over to the "
"host scheduler in that cell and the build proceeds as it would have without "
"cells."
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml35(para)
msgid "Cell functionality is currently considered experimental."
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml38(title)
msgid "Cell configuration options"
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml43(literal)
msgid "enable"
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml45(para)
msgid ""
"Set this is <literal>True</literal> to turn on cell functionality, which is "
"off by default."
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml50(literal)
msgid "name"
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml52(para)
msgid "Name of the current cell. This must be unique for each cell."
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml56(literal)
msgid "capabilities"
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml58(para)
msgid ""
"List of arbitrary "
"<literal><replaceable>key</replaceable>=<replaceable>value</replaceable></literal>"
" pairs defining capabilities of the current cell. Values include "
"<literal>hypervisor=xenserver;kvm,os=linux;windows</literal>."
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml66(literal)
msgid "call_timeout"
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml68(para)
msgid "How long in seconds to wait for replies from calls between cells."
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml73(term)
msgid "scheduler_filter_classes"
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml75(para)
msgid ""
"Filter classes that the cells scheduler should use. By default, uses "
"\"<literal>nova.cells.filters.all_filters</literal>\" to map to all cells "
"filters included with Compute."
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml81(term)
msgid "scheduler_weight_classes"
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml82(para)
msgid ""
"Weight classes the cells scheduler should use. By default, uses "
"\"<literal>nova.cells.weights.all_weighers</literal>\" to map to all cells "
"weight algorithms (weighers) included with Compute."
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml88(term)
msgid "ram_weight_multiplier"
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml90(para)
msgid ""
"Multiplier used for weighing ram. Negative numbers mean you want Compute to "
"stack VMs on one host instead of spreading out new VMs to more hosts in the "
"cell. Default value is 10.0."
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml39(para)
msgid ""
"Cells are disabled by default. All cell-related configuration options go "
"under a <literal>[cells]</literal> section in "
"<filename>nova.conf</filename>. The following cell-related options are "
"currently supported:<placeholder-1/>"
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml99(title)
msgid "Configure the API (top-level) cell"
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml100(para)
msgid ""
"The compute API class must be changed in the API cell so that requests can "
"be proxied through nova-cells down to the correct cell properly. Add the "
"following to <filename>nova.conf</filename> in the API cell:<placeholder-1/>"
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml112(title)
msgid "Configure the child cells"
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml121(replaceable)
msgid "cell1"
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml113(para)
msgid ""
"Add the following to <filename>nova.conf</filename> in the child cells, "
"replacing <replaceable>cell1</replaceable> with the name of each "
"cell:<placeholder-1/>"
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml124(title)
msgid "Configure the database in each cell"
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml125(para)
msgid ""
"Before bringing the services online, the database in each cell needs to be "
"configured with information about related cells. In particular, the API cell"
" needs to know about its immediate children, and the child cells must know "
"about their immediate agents. The information needed is the "
"<application>RabbitMQ</application> server credentials for the particular "
"cell."
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml130(para)
msgid ""
"Use the <placeholder-1/> command to add this information to the database in "
"each cell:<placeholder-2/>"
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml152(para)
msgid ""
"As an example, assume we have an API cell named <literal>api</literal> and a"
" child cell named <literal>cell1</literal>. Within the api cell, we have the"
" following RabbitMQ server info:<placeholder-1/>"
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml160(para)
msgid ""
"And in the child cell named <literal>cell1</literal> we have the following "
"RabbitMQ server info:<placeholder-1/>"
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml167(para)
msgid "We would run this in the API cell, as root.<placeholder-1/>"
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml169(para)
msgid "Repeat the above for all child cells."
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml170(para)
msgid ""
"In the child cell, we would run the following, as root:<placeholder-1/>"
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml176(title)
msgid "Cell scheduling configuration"
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml177(para)
msgid ""
"To determine the best cell for launching a new instance, Compute uses a set "
"of filters and weights configured in "
"<filename>/etc/nova/nova.conf</filename>. The following options are "
"available to prioritize cells for scheduling:"
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml183(para)
msgid ""
"<code>scheduler_filter_classes</code> - Specifies the list of filter "
"classes. By default <code>nova.cells.weights.all_filters</code> is "
"specified, which maps to all cells filters included with Compute (see <xref "
"linkend=\"scheduler-filters\"/>."
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml189(para)
msgid ""
"<code>scheduler_weight_classes</code> - Specifies the list of weight "
"classes. By default <code>nova.cells.weights.all_weighers</code> is "
"specified, which maps to all cell weight algorithms (weighers) included with"
" Compute. The following modules are available:"
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml194(para)
msgid ""
"<code>mute_child</code>: Downgrades the likelihood of child cells being "
"chosen for scheduling requests, which haven't sent capacity or capability "
"updates in a while. Options include <code>mute_weight_multiplier</code> "
"(multiplier for mute children; value should be negative) and "
"<code>mute_weight_value</code> (assigned to mute children; should be a "
"positive value)."
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml203(para)
msgid ""
"<code>ram_by_instance_type</code>: Select cells with the most RAM capacity "
"for the instance type being requested. Because higher weights win, Compute "
"returns the number of available units for the instance type requested. The "
"<code>ram_weight_multiplier</code> option defaults to 10.0 that adds to the "
"weight by a factor of 10. Use a negative number to stack VMs on one host "
"instead of spreading out new VMs to more hosts in the cell."
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml211(para)
msgid ""
"<code>weight_offset</code>: Allows modifying the database to weight a "
"particular cell. You can use this when you want to disable a cell (for "
"example, '0'), or to set a default cell by making its weight_offset very "
"high (for example, '999999999999999'). The highest weight will be the first "
"cell to be scheduled for launching an instance."
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml219(para)
msgid ""
"Additionally, the following options are available for the cell scheduler:"
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml223(para)
msgid ""
"<code>scheduler_retries</code> - Specifies how many times the scheduler "
"tries to launch a new instance when no cells are available (default=10)."
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml227(para)
msgid ""
"<code>scheduler_retry_delay</code> - Specifies the delay (in seconds) "
"between retries (default=2)."
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml232(para)
msgid ""
"As an admin user, you can also add a filter that directs builds to a "
"particular cell. The <filename>policy.json</filename> file must have a line "
"with <literal>\"cells_scheduler_filter:TargetCellFilter\" : "
"\"is_admin:True\"</literal> to let an admin user specify a scheduler hint to"
" direct a build to a particular cell."
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml239(title)
msgid "Optional cell configuration"
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml240(para)
msgid ""
"Cells currently keeps all inter-cell communication data, including user "
"names and passwords, in the database. This is undesirable and unnecessary "
"since cells data isn't updated very frequently. Instead, create a JSON file "
"to input cells data specified via a <code>[cells]cells_config</code> option."
" When specified, the database is no longer consulted when reloading cells "
"data. The file will need the columns present in the Cell model (excluding "
"common database fields and the <code>id</code> column). The queue connection"
" information must be specified through a <code>transport_url</code> field, "
"instead of <code>username</code>, <code>password</code>, and so on. The "
"<code>transport_url</code> has the following form:"
msgstr ""
#: ./doc/config-reference/compute/section_compute-cells.xml255(para)
msgid ""
"The scheme can be either <literal>qpid</literal> or "
"<literal>rabbit</literal>, as shown previously. The following sample shows "
"this optional configuration:"
msgstr ""
#: ./doc/config-reference/compute/section_compute-security.xml10(title)
msgid "Security hardening"
msgstr ""
#: ./doc/config-reference/compute/section_compute-security.xml11(para)
msgid ""
"OpenStack Compute can be integrated with various third-party technologies to"
" increase security. For more information, see the <link "
"href=\"http://docs.openstack.org/sec/\"><citetitle>OpenStack Security "
"Guide</citetitle></link>."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_qemu.xml7(title)
msgid "QEMU"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_qemu.xml8(para)
msgid ""
"From the perspective of the Compute service, the QEMU hypervisor is very "
"similar to the KVM hypervisor. Both are controlled through libvirt, both "
"support the same feature set, and all virtual machine images that are "
"compatible with KVM are also compatible with QEMU. The main difference is "
"that QEMU does not support native virtualization. Consequently, QEMU has "
"worse performance than KVM and is a poor choice for a production deployment."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_qemu.xml15(para)
msgid "Running on older hardware that lacks virtualization support."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_qemu.xml19(para)
msgid ""
"Running the Compute service inside of a virtual machine for development or "
"testing purposes, where the hypervisor does not support native "
"virtualization for guests."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_qemu.xml13(para)
msgid "The typical uses cases for QEMU are<placeholder-1/>"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_qemu.xml25(para)
msgid ""
"To enable QEMU, add these settings to "
"<filename>nova.conf</filename>:<placeholder-1/>"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_qemu.xml29(para)
msgid ""
"For some operations you may also have to install the <placeholder-1/> "
"utility:"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_qemu.xml31(para)
msgid "On Ubuntu: <placeholder-1/>"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_qemu.xml34(para)
msgid "On RHEL, Fedora or CentOS: <placeholder-1/>"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_qemu.xml37(para)
msgid "On openSUSE: <placeholder-1/>"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_qemu.xml40(para)
msgid ""
"The QEMU hypervisor supports the following virtual machine image formats:"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_qemu.xml53(title)
msgid "Tips and fixes for QEMU on RHEL"
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_qemu.xml54(para)
msgid ""
"If you are testing OpenStack in a virtual machine, you need to configure "
"nova to use qemu without KVM and hardware virtualization. The second command"
" relaxes SELinux rules to allow this mode of operation (<link "
"href=\"https://bugzilla.redhat.com/show_bug.cgi?id=753589\"> "
"https://bugzilla.redhat.com/show_bug.cgi?id=753589</link>). The last two "
"commands here work around a libvirt issue fixed in RHEL 6.4. Note nested "
"virtualization will be the much slower TCG variety, and you should provide "
"lots of memory to the top level guest, as the OpenStack-created guests "
"default to 2GM RAM with no overcommit."
msgstr ""
#: ./doc/config-reference/compute/section_hypervisor_qemu.xml65(para)
msgid "The second command, <placeholder-1/>, may take a while."
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-migrations.xml10(title)
msgid "Configure migrations"
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-migrations.xml12(para)
msgid ""
"Only cloud administrators can perform live migrations. If your cloud is "
"configured to use cells, you can perform live migration within but not "
"between cells."
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-migrations.xml16(para)
msgid ""
"Migration enables an administrator to move a virtual machine instance from "
"one compute host to another. This feature is useful when a compute host "
"requires maintenance. Migration can also be useful to redistribute the load "
"when many VM instances are running on a specific physical machine."
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-migrations.xml21(para)
msgid "The migration types are:"
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-migrations.xml24(para)
msgid ""
"<emphasis role=\"bold\">Migration</emphasis> (or non-live migration). The "
"instance is shut down (and the instance knows that it was rebooted) for a "
"period of time to be moved to another hypervisor."
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-migrations.xml30(para)
msgid ""
"<emphasis role=\"bold\">Live migration</emphasis> (or true live migration). "
"Almost no instance downtime. Useful when the instances must be kept running "
"during the migration."
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-migrations.xml35(para)
msgid "The types of <firstterm>live migration</firstterm> are:"
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-migrations.xml38(para)
msgid ""
"<emphasis role=\"bold\">Shared storage-based live migration</emphasis>. Both"
" hypervisors have access to shared storage."
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-migrations.xml43(para)
msgid ""
"<emphasis role=\"bold\">Block live migration</emphasis>. No shared storage "
"is required."
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-migrations.xml47(para)
msgid ""
"<emphasis role=\"bold\">Volume-backed live migration</emphasis>. When "
"instances are backed by volumes rather than ephemeral disk, no shared "
"storage is required, and migration is supported (currently only in libvirt-"
"based hypervisors)."
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-migrations.xml54(para)
msgid ""
"The following sections describe how to configure your hosts and compute "
"nodes for migrations by using the KVM and XenServer hypervisors."
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-migrations.xml58(title)
msgid "KVM-Libvirt"
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-migrations.xml60(title)
#: ./doc/config-reference/compute/section_compute-configure-migrations.xml293(title)
#: ./doc/config-reference/compute/section_compute-configure-migrations.xml368(title)
#: ./doc/config-reference/block-storage/drivers/xen-sm-driver.xml117(title)
msgid "Prerequisites"
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-migrations.xml62(para)
msgid "<emphasis role=\"bold\">Hypervisor:</emphasis> KVM with libvirt"
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-migrations.xml66(para)
msgid ""
"<emphasis role=\"bold\">Shared storage:</emphasis><filename><replaceable"
">NOVA-INST-DIR</replaceable>/instances/</filename> (for example, "
"<filename>/var/lib/nova/instances</filename>) has to be mounted by shared "
"storage. This guide uses NFS but other options, including the <link "
"href=\"http://gluster.org/community/documentation//index.php/OSConnect\">OpenStack"
" Gluster Connector</link> are available."
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-migrations.xml75(para)
msgid ""
"<emphasis role=\"bold\">Instances:</emphasis> Instance can be migrated with "
"iSCSI based volumes"
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-migrations.xml80(title)
#: ./doc/config-reference/compute/section_compute-configure-migrations.xml377(title)
msgid "Notes"
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-migrations.xml83(para)
msgid ""
"Because the Compute service does not use the libvirt live migration "
"functionality by default, guests are suspended before migration and might "
"experience several minutes of downtime. For details, see <xref linkend"
"=\"true-live-migration-kvm-libvirt\"/>."
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-migrations.xml90(para)
msgid ""
"This guide assumes the default value for <option>instances_path</option> in "
"your <filename>nova.conf</filename> file (<filename><replaceable>NOVA-INST-"
"DIR</replaceable>/instances</filename>). If you have changed the "
"<literal>state_path</literal> or <literal>instances_path</literal> "
"variables, modify accordingly."
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-migrations.xml99(para)
msgid ""
"You must specify <literal>vncserver_listen=0.0.0.0</literal> or live "
"migration does not work correctly."
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-migrations.xml106(title)
msgid "Example Compute installation environment"
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-migrations.xml108(para)
msgid ""
"Prepare at least three servers; for example, <literal>HostA</literal>, "
"<literal>HostB</literal>, and <literal>HostC</literal>."
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-migrations.xml113(para)
msgid ""
"<literal>HostA</literal> is the <firstterm baseform=\"cloud "
"controller\">Cloud Controller</firstterm>, and should run these services: "
"<systemitem class=\"service\">nova-api</systemitem>, <systemitem "
"class=\"service\">nova-scheduler</systemitem>, <literal>nova-"
"network</literal>, <systemitem class=\"service\">cinder-volume</systemitem>,"
" and <literal>nova-objectstore</literal>."
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-migrations.xml123(para)
msgid ""
"<literal>HostB</literal> and <literal>HostC</literal> are the <firstterm "
"baseform=\"compute node\">compute nodes</firstterm> that run <systemitem "
"class=\"service\">nova-compute</systemitem>."
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-migrations.xml129(para)
msgid ""
"Ensure that <literal><replaceable>NOVA-INST-DIR</replaceable></literal> (set"
" with <literal>state_path</literal> in the <filename>nova.conf</filename> "
"file) is the same on all hosts."
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-migrations.xml136(para)
msgid ""
"In this example, <literal>HostA</literal> is the NFSv4 server that exports "
"<filename><replaceable>NOVA-INST-DIR</replaceable>/instances</filename>, and"
" <literal>HostB</literal> and <literal>HostC</literal> mount it."
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-migrations.xml144(title)
msgid "To configure your system"
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-migrations.xml146(para)
msgid ""
"Configure your DNS or <filename>/etc/hosts</filename> and ensure it is "
"consistent across all hosts. Make sure that the three hosts can perform name"
" resolution with each other. As a test, use the <placeholder-1/> command to "
"ping each host from one another."
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-migrations.xml156(para)
msgid ""
"Ensure that the UID and GID of your nova and libvirt users are identical "
"between each of your servers. This ensures that the permissions on the NFS "
"mount works correctly."
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-migrations.xml162(para)
msgid ""
"Follow the instructions at <link "
"href=\"https://help.ubuntu.com/community/SettingUpNFSHowTo\">the Ubuntu NFS "
"HowTo to setup an NFS server on <literal>HostA</literal>, and NFS Clients on"
" <literal>HostB</literal> and <literal>HostC</literal>.</link>"
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-migrations.xml168(para)
msgid ""
"The aim is to export <filename><replaceable>NOVA-INST-"
"DIR</replaceable>/instances</filename> from <literal>HostA</literal>, and "
"have it readable and writable by the nova user on <literal>HostB</literal> "
"and <literal>HostC</literal>."
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-migrations.xml175(para)
msgid ""
"Using your knowledge from the Ubuntu documentation, configure the NFS server"
" at <literal>HostA</literal> by adding this line to the "
"<filename>/etc/exports</filename> file:"
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-migrations.xml179(replaceable)
#: ./doc/config-reference/compute/section_compute-configure-migrations.xml194(replaceable)
#: ./doc/config-reference/compute/section_compute-configure-migrations.xml199(replaceable)
#: ./doc/config-reference/compute/section_compute-configure-migrations.xml206(replaceable)
#: ./doc/config-reference/compute/section_compute-configure-migrations.xml211(replaceable)
msgid "NOVA-INST-DIR"
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-migrations.xml180(para)
msgid ""
"Change the subnet mask (<literal>255.255.0.0</literal>) to the appropriate "
"value to include the IP addresses of <literal>HostB</literal> and "
"<literal>HostC</literal>. Then restart the NFS server:"
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-migrations.xml188(para)
msgid "Set the 'execute/search' bit on your shared directory."
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-migrations.xml190(para)
msgid ""
"On both compute nodes, make sure to enable the 'execute/search' bit to allow"
" qemu to use the images within the directories. On all hosts, run"
" the following command:"
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-migrations.xml197(para)
msgid ""
"Configure NFS at HostB and HostC by adding this line to the "
"<filename>/etc/fstab</filename> file:"
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-migrations.xml200(para)
msgid "Make sure that you can mount the exported directory can be mounted:"
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-migrations.xml203(para)
msgid ""
"Check that HostA can see the \"<filename><replaceable>NOVA-INST-"
"DIR</replaceable>/instances/</filename>\" directory:"
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-migrations.xml206(filename)
#: ./doc/config-reference/compute/section_compute-configure-migrations.xml211(filename)
msgid "<placeholder-1/>/instances/"
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-migrations.xml208(para)
msgid ""
"Perform the same check at HostB and HostC, paying special attention to the "
"permissions (nova should be able to write):"
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-migrations.xml224(para)
msgid ""
"Update the libvirt configurations. Modify the "
"<filename>/etc/libvirt/libvirtd.conf</filename> file:"
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-migrations.xml234(para)
msgid "Modify the <filename>/etc/libvirt/qemu.conf</filename> file:"
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-migrations.xml238(para)
msgid "Modify the <filename>/etc/init/libvirt-bin.conf</filename> file:"
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-migrations.xml243(para)
msgid "Modify the <filename>/etc/default/libvirt-bin</filename> file:"
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-migrations.xml247(para)
msgid ""
"Restart libvirt. After you run the command, ensure that libvirt is "
"successfully restarted:"
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-migrations.xml254(para)
msgid "Configure your firewall to allow libvirt to communicate between nodes."
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-migrations.xml256(para)
msgid ""
"For information about ports that are used with libvirt, see <link "
"href=\"http://libvirt.org/remote.html#Remote_libvirtd_configuration\">the "
"libvirt documentation</link> By default, libvirt listens on TCP port 16509 "
"and an ephemeral TCP range from 49152 to 49261 is used for the KVM "
"communications. As this guide has disabled libvirt auth, you should take "
"good care that these ports are only open to hosts within your installation."
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-migrations.xml267(para)
msgid ""
"You can now configure options for live migration. In most cases, you do not "
"need to configure any options. The following chart is for advanced usage "
"only."
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-migrations.xml274(title)
msgid "Enable true live migration"
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-migrations.xml275(para)
msgid ""
"By default, the Compute service does not use the libvirt live migration "
"functionality. To enable this functionality, add the following line to the "
"<filename>nova.conf</filename> file:"
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-migrations.xml280(para)
msgid ""
"The Compute service does not use libvirt's live migration by default because"
" there is a risk that the migration process never ends. This can happen if "
"the guest operating system dirties blocks on the disk faster than they can "
"migrated."
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-migrations.xml291(title)
msgid "Shared storage"
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-migrations.xml295(para)
msgid ""
"<emphasis role=\"bold\">Compatible XenServer hypervisors</emphasis>. For "
"more information, see the <link "
"href=\"http://docs.vmd.citrix.com/XenServer/6.0.0/1.0/en_gb/reference.html#pooling_homogeneity_requirements\">Requirements"
" for Creating Resource Pools</link> section of the <citetitle>XenServer "
"Administrator's Guide</citetitle>."
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-migrations.xml304(para)
msgid ""
"<emphasis role=\"bold\">Shared storage</emphasis>. An NFS export, visible to"
" all XenServer hosts."
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-migrations.xml307(para)
msgid ""
"For the supported NFS versions, see the <link "
"href=\"http://docs.vmd.citrix.com/XenServer/6.0.0/1.0/en_gb/reference.html#id1002701\">NFS"
" VHD</link> section of the <citetitle>XenServer Administrator's "
"Guide</citetitle>."
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-migrations.xml314(para)
msgid ""
"To use shared storage live migration with XenServer hypervisors, the hosts "
"must be joined to a XenServer pool. To create that pool, a host aggregate "
"must be created with special metadata. This metadata is used by the XAPI "
"plug-ins to establish the pool."
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-migrations.xml320(title)
msgid "To use shared storage live migration with XenServer hypervisors"
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-migrations.xml323(para)
msgid ""
"Add an NFS VHD storage to your master XenServer, and set it as default SR. "
"For more information, please refer to the <link "
"href=\"http://docs.vmd.citrix.com/XenServer/6.0.0/1.0/en_gb/reference.html#id1002701\">NFS"
" VHD</link> section in the <citetitle>XenServer Administrator's "
"Guide</citetitle>."
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-migrations.xml331(para)
msgid ""
"Configure all the compute nodes to use the default sr for pool operations. "
"Add this line to your <filename>nova.conf</filename> configuration files "
"across your compute nodes:<placeholder-1/>"
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-migrations.xml338(para)
msgid "Create a host aggregate:"
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-migrations.xml340(para)
msgid ""
"The command displays a table that contains the ID of the newly created "
"aggregate."
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-migrations.xml342(para)
msgid ""
"Now add special metadata to the aggregate, to mark it as a hypervisor pool:"
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-migrations.xml346(para)
msgid "Make the first compute node part of that aggregate:"
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-migrations.xml349(para)
msgid "At this point, the host is part of a XenServer pool."
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-migrations.xml353(para)
msgid "Add additional hosts to the pool:"
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-migrations.xml356(para)
msgid ""
"At this point, the added compute node and the host are shut down, to join "
"the host to the XenServer pool. The operation fails, if any server other "
"than the compute node is running/suspended on your host."
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-migrations.xml366(title)
msgid "Block migration"
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-migrations.xml370(para)
msgid ""
"<emphasis role=\"bold\">Compatible XenServer hypervisors</emphasis>. The "
"hypervisors must support the Storage XenMotion feature. See your XenServer "
"manual to make sure your edition has this feature."
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-migrations.xml380(para)
msgid ""
"To use block migration, you must use the <parameter>--block-"
"migrate</parameter> parameter with the live migration command."
msgstr ""
#: ./doc/config-reference/compute/section_compute-configure-migrations.xml385(para)
msgid ""
"Block migration works only with EXT local storage SRs, and the server must "
"not have any volumes attached."
msgstr ""
#: ./doc/config-reference/networking/section_networking-plugins.xml10(title)
msgid "Networking plug-ins"
msgstr ""
#: ./doc/config-reference/networking/section_networking-plugins.xml11(para)
msgid ""
"OpenStack Networking introduces the concept of a plug-in, which is a back-"
"end implementation of the OpenStack Networking API. A plug-in can use a "
"variety of technologies to implement the logical API requests. Some "
"OpenStack Networking plug-ins might use basic Linux VLANs and IP tables, "
"while others might use more advanced technologies, such as L2-in-L3 "
"tunneling or OpenFlow. These sections detail the configuration options for "
"the various plug-ins."
msgstr ""
#: ./doc/config-reference/networking/section_networking-plugins.xml22(title)
msgid "BigSwitch configuration options"
msgstr ""
#: ./doc/config-reference/networking/section_networking-plugins.xml28(title)
msgid "Brocade configuration options"
msgstr ""
#: ./doc/config-reference/networking/section_networking-plugins.xml34(title)
msgid "CISCO configuration options"
msgstr ""
#: ./doc/config-reference/networking/section_networking-plugins.xml40(title)
msgid "CloudBase Hyper-V plug-in configuration options (deprecated)"
msgstr ""
#: ./doc/config-reference/networking/section_networking-plugins.xml47(title)
msgid "CloudBase Hyper-V Agent configuration options"
msgstr ""
#: ./doc/config-reference/networking/section_networking-plugins.xml54(title)
msgid "Linux bridge plug-in configuration options (deprecated)"
msgstr ""
#: ./doc/config-reference/networking/section_networking-plugins.xml61(title)
msgid "Linux bridge Agent configuration options"
msgstr ""
#: ./doc/config-reference/networking/section_networking-plugins.xml68(title)
msgid "Mellanox configuration options"
msgstr ""
#: ./doc/config-reference/networking/section_networking-plugins.xml74(title)
msgid "Meta Plug-in configuration options"
msgstr ""
#: ./doc/config-reference/networking/section_networking-plugins.xml75(para)
msgid "The Meta Plug-in allows you to use multiple plug-ins at the same time."
msgstr ""
#: ./doc/config-reference/networking/section_networking-plugins.xml83(title)
msgid "MidoNet configuration options"
msgstr ""
#: ./doc/config-reference/networking/section_networking-plugins.xml89(title)
msgid "NEC configuration options"
msgstr ""
#: ./doc/config-reference/networking/section_networking-plugins.xml95(title)
msgid "Nicira NVP configuration options"
msgstr ""
#: ./doc/config-reference/networking/section_networking-plugins.xml101(title)
msgid "Open vSwitch plug-in configuration options (deprecated)"
msgstr ""
#: ./doc/config-reference/networking/section_networking-plugins.xml108(title)
msgid "Open vSwitch Agent configuration options"
msgstr ""
#: ./doc/config-reference/networking/section_networking-plugins.xml115(title)
msgid "PLUMgrid configuration options"
msgstr ""
#: ./doc/config-reference/networking/section_networking-plugins.xml121(title)
msgid "Ryu configuration options"
msgstr ""
#: ./doc/config-reference/networking/section_networking-plugins-ml2.xml10(title)
msgid "Modular Layer 2 (ml2) configuration options"
msgstr ""
#: ./doc/config-reference/networking/section_networking-plugins-ml2.xml11(para)
msgid ""
"The Modular Layer 2 (ml2) plug-in has two components: network types and "
"mechanisms. You can configure these components separately. This section "
"describes these configuration options."
msgstr ""
#: ./doc/config-reference/networking/section_networking-plugins-ml2.xml16(title)
msgid "MTU bug with VXLAN tunnelling"
msgstr ""
#: ./doc/config-reference/networking/section_networking-plugins-ml2.xml17(para)
msgid ""
"Due to a bug in Linux Bridge software maximum transmission unit (MTU) "
"handling, using VXLAN tunnels does not work by default."
msgstr ""
#: ./doc/config-reference/networking/section_networking-plugins-ml2.xml22(para)
msgid ""
"A simple workaround is to increase the MTU value of the physical interface "
"and physical switch fabric by at least 50 bytes. For example, increase the "
"MTU value to 1550. This value enables an automatic 50-byte MTU difference "
"between the physical interface (1500) and the VXLAN interface (automatically"
" 1500-50 = 1450). An MTU value of 1450 causes issues when virtual machine "
"taps are configured at an MTU value of 1500."
msgstr ""
#: ./doc/config-reference/networking/section_networking-plugins-ml2.xml33(para)
msgid ""
"Another workaround is to decrease the virtual ethernet devices' MTU. Set the"
" <option>network_device_mtu</option> option to 1450 in the "
"<filename>neutron.conf</filename> file, and set all guest virtual machines' "
"MTU to the same value by using a DHCP option. For information about how to "
"use this option, see <link href=\"http://docs.openstack.org/admin-guide-"
"cloud/content/ch_networking.html#openvswitch_plugin\">Configure OVS plug-"
"in</link>."
msgstr ""
#: ./doc/config-reference/networking/section_networking-plugins-ml2.xml48(title)
msgid "Modular Layer 2 (ml2) Flat Type configuration options"
msgstr ""
#: ./doc/config-reference/networking/section_networking-plugins-ml2.xml53(title)
msgid "Modular Layer 2 (ml2) VXLAN Type configuration options"
msgstr ""
#: ./doc/config-reference/networking/section_networking-plugins-ml2.xml58(title)
msgid "Modular Layer 2 (ml2) Arista Mechanism configuration options"
msgstr ""
#: ./doc/config-reference/networking/section_networking-plugins-ml2.xml64(title)
msgid "Modular Layer 2 (ml2) Cisco Mechanism configuration options"
msgstr ""
#: ./doc/config-reference/networking/section_networking-plugins-ml2.xml69(title)
msgid "Modular Layer 2 (ml2) L2 Population Mechanism configuration options"
msgstr ""
#: ./doc/config-reference/networking/section_networking-plugins-ml2.xml74(title)
msgid "Modular Layer 2 (ml2) Tail-f NCS Mechanism configuration options"
msgstr ""
#: ./doc/config-reference/networking/section_networking-options-reference.xml6(title)
msgid "Networking configuration options"
msgstr ""
#: ./doc/config-reference/networking/section_networking-options-reference.xml7(para)
msgid ""
"The options and descriptions listed in this introduction are auto generated "
"from the code in the Networking service project, which provides software-"
"defined networking between VMs run in Compute. The list contains common "
"options, while the subsections list the options for the various networking "
"plug-ins."
msgstr ""
#: ./doc/config-reference/networking/section_networking-options-reference.xml20(para)
msgid "Use the following options to alter agent-related settings."
msgstr ""
#: ./doc/config-reference/networking/section_networking-options-reference.xml25(title)
msgid "API"
msgstr ""
#: ./doc/config-reference/networking/section_networking-options-reference.xml26(para)
msgid "Use the following options to alter API-related settings."
msgstr ""
#: ./doc/config-reference/networking/section_networking-options-reference.xml31(title)
msgid "Database"
msgstr ""
#: ./doc/config-reference/networking/section_networking-options-reference.xml32(para)
msgid "Use the following options to alter Database-related settings."
msgstr ""
#: ./doc/config-reference/networking/section_networking-options-reference.xml37(title)
msgid "Logging"
msgstr ""
#: ./doc/config-reference/networking/section_networking-options-reference.xml38(para)
msgid "Use the following options to alter logging settings."
msgstr ""
#: ./doc/config-reference/networking/section_networking-options-reference.xml43(title)
msgid "Metadata Agent"
msgstr ""
#: ./doc/config-reference/networking/section_networking-options-reference.xml44(para)
msgid ""
"Use the following options in the <filename>metadata_agent.ini</filename> "
"file for the Metadata agent."
msgstr ""
#: ./doc/config-reference/networking/section_networking-options-reference.xml50(title)
msgid "Policy"
msgstr ""
#: ./doc/config-reference/networking/section_networking-options-reference.xml51(para)
msgid ""
"Use the following options in the <filename>neutron.conf</filename> file to "
"change policy settings."
msgstr ""
#: ./doc/config-reference/networking/section_networking-options-reference.xml57(title)
msgid "Quotas"
msgstr ""
#: ./doc/config-reference/networking/section_networking-options-reference.xml58(para)
msgid ""
"Use the following options in the <filename>neutron.conf</filename> file for "
"the quota system."
msgstr ""
#: ./doc/config-reference/networking/section_networking-options-reference.xml64(title)
msgid "Scheduler"
msgstr ""
#: ./doc/config-reference/networking/section_networking-options-reference.xml65(para)
msgid ""
"Use the following options in the <filename>neutron.conf</filename> file to "
"change scheduler settings."
msgstr ""
#: ./doc/config-reference/networking/section_networking-options-reference.xml71(title)
msgid "Security Groups"
msgstr ""
#: ./doc/config-reference/networking/section_networking-options-reference.xml72(para)
msgid ""
"Use the following options in the configuration file for your driver to "
"change security group settings."
msgstr ""
#: ./doc/config-reference/networking/section_networking-options-reference.xml78(title)
msgid "SSL"
msgstr ""
#: ./doc/config-reference/networking/section_networking-options-reference.xml79(para)
msgid ""
"Use the following options in the <filename>neutron.conf</filename> file to "
"enable SSL."
msgstr ""
#: ./doc/config-reference/networking/section_networking-options-reference.xml85(title)
msgid "Testing"
msgstr ""
#: ./doc/config-reference/networking/section_networking-options-reference.xml86(para)
msgid "Use the following options to alter testing-related features."
msgstr ""
#: ./doc/config-reference/networking/section_networking-options-reference.xml91(title)
msgid "WSGI"
msgstr ""
#: ./doc/config-reference/networking/section_networking-options-reference.xml92(para)
msgid ""
"Use the following options in the <filename>neutron.conf</filename> file to "
"configure the WSGI layer."
msgstr ""
#: ./doc/config-reference/block-storage/section_backup-drivers.xml6(title)
msgid "Backup drivers"
msgstr ""
#: ./doc/config-reference/block-storage/section_backup-drivers.xml7(para)
msgid ""
"This section describes how to configure the <systemitem class=\"service"
"\">cinder-backup</systemitem> service and its drivers."
msgstr ""
#: ./doc/config-reference/block-storage/section_backup-drivers.xml10(para)
msgid ""
"The volume drivers are included with the Block Storage repository (<link "
"href=\"https://github.com/openstack/cinder\">https://github.com/openstack/cinder</link>)."
" To set a backup driver, use the <literal>backup_driver</literal> flag. By "
"default there is no backup driver enabled."
msgstr ""
#: ./doc/config-reference/block-storage/section_volume-drivers.xml6(title)
msgid "Volume drivers"
msgstr ""
#: ./doc/config-reference/block-storage/section_volume-drivers.xml7(para)
msgid ""
"To use different volume drivers for the <systemitem class=\"service"
"\">cinder-volume</systemitem> service, use the parameters described in these"
" sections."
msgstr ""
#: ./doc/config-reference/block-storage/section_volume-drivers.xml10(para)
msgid ""
"The volume drivers are included in the Block Storage repository (<link "
"href=\"https://github.com/openstack/cinder\">https://github.com/openstack/cinder</link>)."
" To set a volume driver, use the <literal>volume_driver</literal> flag. The "
"default is:"
msgstr ""
#: ./doc/config-reference/block-storage/section_block-storage-overview.xml6(title)
msgid "Introduction to the Block Storage Service"
msgstr ""
#: ./doc/config-reference/block-storage/section_block-storage-overview.xml7(para)
msgid ""
"The Openstack Block Storage Service provides persistent block storage "
"resources that OpenStack Compute instances can consume. This includes "
"secondary attached storage similar to the Amazon Elastic Block Storage (EBS)"
" offering. In addition, you can write images to a Block Storage device for "
"Compute to use as a bootable persistent instance."
msgstr ""
#: ./doc/config-reference/block-storage/section_block-storage-overview.xml14(para)
msgid ""
"The Block Storage Service differs slightly from the Amazon EBS offering. The"
" Block Storage Service does not provide a shared storage solution like NFS. "
"With the Block Storage Service, you can attach a device to only one "
"instance."
msgstr ""
#: ./doc/config-reference/block-storage/section_block-storage-overview.xml19(para)
msgid "The Block Storage Service provides:"
msgstr ""
#: ./doc/config-reference/block-storage/section_block-storage-overview.xml22(para)
msgid ""
"<systemitem class=\"service\">cinder-api</systemitem>. A WSGI app that "
"authenticates and routes requests throughout the Block Storage Service. It "
"supports the OpenStack APIs only, although there is a translation that can "
"be done through Compute's EC2 interface, which calls in to the cinderclient."
msgstr ""
#: ./doc/config-reference/block-storage/section_block-storage-overview.xml30(para)
msgid ""
"<systemitem class=\"service\">cinder-scheduler</systemitem>. Schedules and "
"routes requests to the appropriate volume service. As of Grizzly; depending "
"upon your configuration this may be simple round-robin scheduling to the "
"running volume services, or it can be more sophisticated through the use of "
"the Filter Scheduler. The Filter Scheduler is the default in Grizzly and "
"enables filters on things like Capacity, Availability Zone, Volume Types, "
"and Capabilities as well as custom filters."
msgstr ""
#: ./doc/config-reference/block-storage/section_block-storage-overview.xml38(para)
msgid ""
"<systemitem class=\"service\">cinder-volume</systemitem>. Manages Block "
"Storage devices, specifically the back-end devices themselves."
msgstr ""
#: ./doc/config-reference/block-storage/section_block-storage-overview.xml43(para)
msgid ""
"<systemitem class=\"service\">cinder-backup</systemitem> Provides a means to"
" back up a Block Storage Volume to OpenStack Object Store (SWIFT)."
msgstr ""
#: ./doc/config-reference/block-storage/section_block-storage-overview.xml48(para)
msgid "The Block Storage Service contains the following components:"
msgstr ""
#: ./doc/config-reference/block-storage/section_block-storage-overview.xml52(para)
msgid ""
"<emphasis role=\"bold\">Back-end Storage Devices</emphasis>. The Block "
"Storage Service requires some form of back-end storage that the service is "
"built on. The default implementation is to use LVM on a local volume group "
"named \"cinder-volumes.\" In addition to the base driver implementation, the"
" Block Storage Service also provides the means to add support for other "
"storage devices to be utilized such as external Raid Arrays or other storage"
" appliances. These back-end storage devices may have custom block sizes when"
" using KVM or QEMU as the hypervisor."
msgstr ""
#: ./doc/config-reference/block-storage/section_block-storage-overview.xml65(para)
msgid ""
"<emphasis role=\"bold\">Users and Tenants (Projects)</emphasis>. The Block "
"Storage Service is designed to be used by many different cloud computing "
"consumers or customers, basically tenants on a shared system, using role-"
"based access assignments. Roles control the actions that a user is allowed "
"to perform. In the default configuration, most actions do not require a "
"particular role, but this is configurable by the system administrator "
"editing the appropriate <filename>policy.json</filename> file that maintains"
" the rules. A user's access to particular volumes is limited by tenant, but "
"the username and password are assigned per user. Key pairs granting access "
"to a volume are enabled per user, but quotas to control resource consumption"
" across available hardware resources are per tenant."
msgstr ""
#: ./doc/config-reference/block-storage/section_block-storage-overview.xml81(para)
msgid "For tenants, quota controls are available to limit:"
msgstr ""
#: ./doc/config-reference/block-storage/section_block-storage-overview.xml85(para)
msgid "The number of volumes that can be created"
msgstr ""
#: ./doc/config-reference/block-storage/section_block-storage-overview.xml89(para)
msgid "The number of snapshots that can be created"
msgstr ""
#: ./doc/config-reference/block-storage/section_block-storage-overview.xml93(para)
msgid ""
"The total number of GBs allowed per tenant (shared between snapshots and "
"volumes)"
msgstr ""
#: ./doc/config-reference/block-storage/section_block-storage-overview.xml97(para)
msgid ""
"You can revise the default quota values with the cinder CLI, so the limits "
"placed by quotas are editable by admin users."
msgstr ""
#: ./doc/config-reference/block-storage/section_block-storage-overview.xml100(para)
msgid ""
"<emphasis role=\"bold\">Volumes, Snapshots, and Backups</emphasis>. The "
"basic resources offered by the Block Storage Service are volumes and "
"snapshots, which are derived from volumes, and backups:"
msgstr ""
#: ./doc/config-reference/block-storage/section_block-storage-overview.xml107(para)
msgid ""
"<emphasis role=\"bold\">Volumes</emphasis>. Allocated block storage "
"resources that can be attached to instances as secondary storage or they can"
" be used as the root store to boot instances. Volumes are persistent R/W "
"block storage devices most commonly attached to the Compute node through "
"iSCSI."
msgstr ""
#: ./doc/config-reference/block-storage/section_block-storage-overview.xml116(para)
msgid ""
"<emphasis role=\"bold\">Snapshots</emphasis>. A read-only point in time copy"
" of a volume. The snapshot can be created from a volume that is currently in"
" use (through the use of '--force True') or in an available state. The "
"snapshot can then be used to create a new volume through create from "
"snapshot."
msgstr ""
#: ./doc/config-reference/block-storage/section_block-storage-overview.xml125(para)
msgid ""
"<emphasis role=\"bold\">Backups</emphasis>. An archived copy of a volume "
"currently stored in OpenStack Object Storage (Swift)."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml6(title)
msgid "VMware VMDK driver"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml7(para)
msgid ""
"Use the VMware VMDK driver to enable management of the OpenStack Block "
"Storage volumes on vCenter-managed data stores. Volumes are backed by VMDK "
"files on data stores using any VMware-compatible storage technology such as "
"NFS, iSCSI, FiberChannel, and vSAN."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml13(title)
#: ./doc/config-reference/block-storage/drivers/xen-sm-driver.xml134(title)
#: ./doc/config-reference/block-storage/drivers/hds-volume-driver.xml72(title)
msgid "Configuration"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml14(para)
msgid ""
"The recommended volume driver for OpenStack Block Storage is the VMware "
"vCenter VMDK driver. When you configure the driver, you must match it with "
"the appropriate OpenStack Compute driver from VMware and both drivers must "
"point to the same server."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml19(para)
msgid ""
"For example, in the <filename>nova.conf</filename> file, use this option to "
"define the Compute driver:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml22(para)
msgid ""
"In the <filename>cinder.conf</filename> file, use this option to define the "
"volume driver:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml25(para)
msgid ""
"The following table lists various options that the drivers support for the "
"OpenStack Block Storage configuration (<filename>cinder.conf</filename>):"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml32(para)
msgid ""
"The VMware VMDK drivers support the creation of VMDK disk files of type "
"<literal>thin</literal>, <literal>thick</literal>, or "
"<literal>eagerZeroedThick</literal>. Use the <code>vmware:vmdk_type</code> "
"extra spec key with the appropriate value to specify the VMDK disk file "
"type. The following table captures the mapping between the extra spec entry "
"and the VMDK disk file type:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml40(caption)
msgid "Extra spec entry to VMDK disk file type mapping"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml44(td)
msgid "Disk file type"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml45(td)
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml94(td)
msgid "Extra spec key"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml46(td)
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml95(td)
msgid "Extra spec value"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml52(td)
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml57(td)
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml62(td)
msgid "vmware:vmdk_type"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml56(td)
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml58(td)
msgid "thick"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml61(td)
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml63(td)
msgid "eagerZeroedThick"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml67(para)
msgid ""
"If no <code>vmdk_type</code> extra spec entry is specified, the default disk"
" file type is <literal>thin</literal>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml70(para)
msgid ""
"The example below shows how to create a <code>thick</code> VMDK volume using"
" the appropriate <code>vmdk_type</code>:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml80(title)
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml93(td)
msgid "Clone type"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml81(para)
msgid ""
"With the VMware VMDK drivers, you can create a volume from another source "
"volume or from a snapshot point. The VMware vCenter VMDK driver supports "
"clone types <literal>full</literal> and <literal>linked/fast</literal>. The "
"clone type is specified using the <code>vmware:clone_type</code> extra spec "
"key with the appropriate value. The following table captures the mapping for"
" clone types:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml90(caption)
msgid "Extra spec entry to clone type mapping"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml100(td)
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml102(td)
msgid "full"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml101(td)
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml106(td)
msgid "vmware:clone_type"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml105(td)
msgid "linked/fast"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml107(td)
msgid "linked"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml111(para)
msgid "If not specified, the default clone type is <literal>full</literal>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml113(para)
msgid ""
"The following is an example of linked cloning from another source volume:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml120(para)
msgid ""
"Note: The VMware ESX VMDK driver ignores the extra spec entry and always "
"creates a <literal>full</literal> clone."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml125(title)
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml13(title)
#: ./doc/config-reference/block-storage/drivers/emc-volume-driver.xml31(title)
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml42(title)
msgid "Supported operations"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml126(para)
msgid ""
"The following operations are supported by the VMware vCenter and ESX VMDK "
"drivers:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml130(para)
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml17(para)
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml48(para)
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml74(para)
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml93(para)
#: ./doc/config-reference/block-storage/drivers/emc-volume-driver.xml36(para)
#: ./doc/config-reference/block-storage/drivers/hds-volume-driver.xml30(para)
msgid "Create volume"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml133(para)
msgid ""
"Create volume from another source volume. (Supported only if source volume "
"is not attached to an instance.)"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml138(para)
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml35(para)
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml111(para)
#: ./doc/config-reference/block-storage/drivers/emc-volume-driver.xml66(para)
#: ./doc/config-reference/block-storage/drivers/hds-volume-driver.xml60(para)
msgid "Create volume from snapshot"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml141(para)
msgid "Create volume from glance image"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml144(para)
msgid ""
"Attach volume (When a volume is attached to an instance, a reconfigure "
"operation is performed on the instance to add the volume's VMDK to it. The "
"user must manually rescan and mount the device from within the guest "
"operating system.)"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml151(para)
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml26(para)
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml57(para)
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml83(para)
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml102(para)
#: ./doc/config-reference/block-storage/drivers/emc-volume-driver.xml45(para)
#: ./doc/config-reference/block-storage/drivers/hds-volume-driver.xml39(para)
msgid "Detach volume"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml154(para)
msgid ""
"Create snapshot (Allowed only if volume is not attached to an instance.)"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml158(para)
msgid ""
"Delete snapshot (Allowed only if volume is not attached to an instance.)"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml162(para)
msgid ""
"Upload as image to glance (Allowed only if volume is not attached to an "
"instance.)"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml167(para)
msgid ""
"Although the VMware ESX VMDK driver supports these operations, it has not "
"been extensively tested."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml172(title)
msgid "Data store selection"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml173(para)
msgid ""
"When creating a volume, the driver chooses a data store that has sufficient "
"free space and has the highest <literal>freespace/totalspace</literal> "
"metric value."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml177(para)
msgid ""
"When a volume is attached to an instance, the driver attempts to place the "
"volume under the instance's ESX host on a data store that is selected using "
"the strategy above."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nexenta-volume-driver.xml4(title)
msgid "Nexenta drivers"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nexenta-volume-driver.xml5(para)
msgid ""
"NexentaStor Appliance is NAS/SAN software platform designed for building "
"reliable and fast network storage arrays. The Nexenta Storage Appliance uses"
" ZFS as a disk management system. NexentaStor can serve as a storage node "
"for the OpenStack and for the virtual servers through iSCSI and NFS "
"protocols."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nexenta-volume-driver.xml11(para)
msgid ""
"With the NFS option, every Compute volume is represented by a directory "
"designated to be its own file system in the ZFS file system. These file "
"systems are exported using NFS."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nexenta-volume-driver.xml14(para)
msgid ""
"With either option some minimal setup is required to tell OpenStack which "
"NexentaStor servers are being used, whether they are supporting iSCSI and/or"
" NFS and how to access each of the servers."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nexenta-volume-driver.xml18(para)
msgid ""
"Typically the only operation required on the NexentaStor servers is to "
"create the containing directory for the iSCSI or NFS exports. For NFS this "
"containing directory must be explicitly exported via NFS. There is no "
"software that must be installed on the NexentaStor servers; they are "
"controlled using existing management plane interfaces."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nexenta-volume-driver.xml26(title)
msgid "Nexenta iSCSI driver"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nexenta-volume-driver.xml27(para)
msgid ""
"The Nexenta iSCSI driver allows you to use NexentaStor appliance to store "
"Compute volumes. Every Compute volume is represented by a single zvol in a "
"predefined Nexenta namespace. For every new volume the driver creates a "
"iSCSI target and iSCSI target group that are used to access it from compute "
"hosts."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nexenta-volume-driver.xml33(para)
msgid ""
"The Nexenta iSCSI volume driver should work with all versions of "
"NexentaStor. The NexentaStor appliance must be installed and configured "
"according to the relevant Nexenta documentation. A pool and an enclosing "
"namespace must be created for all iSCSI volumes to be accessed through the "
"volume driver. This should be done as specified in the release specific "
"NexentaStor documentation."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nexenta-volume-driver.xml40(para)
msgid ""
"The NexentaStor Appliance iSCSI driver is selected using the normal "
"procedures for one or multiple back-end volume drivers. You must configure "
"these items for each NexentaStor appliance that the iSCSI volume driver "
"controls:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nexenta-volume-driver.xml46(title)
msgid "Enable the Nexenta iSCSI driver and related options"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nexenta-volume-driver.xml48(para)
msgid "This table contains the options supported by the Nexenta iSCSI driver."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nexenta-volume-driver.xml52(para)
msgid ""
"To use Compute with the Nexenta iSCSI driver, first set the "
"<code>volume_driver</code>:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nexenta-volume-driver.xml56(para)
msgid ""
"Then, set the <code>nexenta_host</code> parameter and other parameters from "
"the table, if needed."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nexenta-volume-driver.xml63(title)
msgid "Nexenta NFS driver"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nexenta-volume-driver.xml64(para)
msgid ""
"The Nexenta NFS driver allows you to use NexentaStor appliance to store "
"Compute volumes via NFS. Every Compute volume is represented by a single NFS"
" file within a shared directory."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nexenta-volume-driver.xml68(para)
msgid ""
"While the NFS protocols standardize file access for users, they do not "
"standardize administrative actions such as taking snapshots or replicating "
"file systems. The Openstack Volume Drivers bring a common interface to these"
" operations. The Nexenta NFS driver implements these standard actions using "
"the ZFS management plane that already is deployed on NexentaStor appliances."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nexenta-volume-driver.xml75(para)
msgid ""
"The Nexenta NFS volume driver should work with all versions of NexentaStor. "
"The NexentaStor appliance must be installed and configured according to the "
"relevant Nexenta documentation. A single parent file system must be created "
"for all virtual disk directories supported for OpenStack. This directory "
"must be created and exported on each NexentaStor appliance. This should be "
"done as specified in the release specific NexentaStor documentation."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nexenta-volume-driver.xml84(title)
msgid "Enable the Nexenta NFS driver and related options"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nexenta-volume-driver.xml86(para)
msgid ""
"To use Compute with the Nexenta NFS driver, first set the "
"<code>volume_driver</code>:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nexenta-volume-driver.xml91(para)
msgid ""
"The following table contains the options supported by the Nexenta NFS "
"driver."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nexenta-volume-driver.xml95(para)
msgid ""
"Add your list of Nexenta NFS servers to the file you specified with the "
"<code>nexenta_shares_config</code> option. For example, if the value of this"
" option was set to <filename>/etc/cinder/nfs_shares</filename>, then:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nexenta-volume-driver.xml105(para)
msgid "Comments are allowed in this file. They begin with a <code>#</code>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nexenta-volume-driver.xml107(para)
msgid ""
"Each line in this file represents a NFS share. The first part of the line is"
" the NFS share URL, the second is the connection URL to the NexentaStor "
"Appliance."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/zadara-volume-driver.xml6(title)
msgid "Zadara"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/zadara-volume-driver.xml7(para)
msgid ""
"There is a volume back-end for Zadara. Set the following in your "
"<filename>cinder.conf</filename>, and use the following options to configure"
" it."
msgstr ""
#. When image changes, this message will be marked fuzzy or untranslated for
#. you.
#. It doesn't matter what you translate it to: it's not used at all.
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml286(None)
msgid ""
"@@image: "
"'../../../common/figures/coraid/Repository_Creation_Plan_screen.png'; "
"md5=83038804978648c2db4001a46c11f8ba"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml6(title)
msgid "Coraid AoE driver configuration"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml7(para)
msgid ""
"Coraid storage appliances can provide block-level storage to OpenStack "
"instances. Coraid storage appliances use the low-latency ATA-over-Ethernet "
"(ATA) protocol to provide high-bandwidth data transfer between hosts and "
"data on the network."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml11(para)
msgid "Once configured for OpenStack, you can:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml14(para)
msgid "Create, delete, attach, and detach block storage volumes."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml18(para)
msgid "Create, list, and delete volume snapshots."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml21(para)
msgid ""
"Create a volume from a snapshot, copy an image to a volume, copy a volume to"
" an image, clone a volume, and get volume statistics."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml26(para)
msgid ""
"This document describes how to configure the OpenStack Block Storage Service"
" for use with Coraid storage appliances."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml29(title)
msgid "Terminology"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml30(para)
msgid "These terms are used in this section:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml34(th)
msgid "Term"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml35(th)
msgid "Definition"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml40(td)
msgid "AoE"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml41(td)
msgid "ATA-over-Ethernet protocol"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml44(td)
msgid "EtherCloud Storage Manager (ESM)"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml45(td)
msgid ""
"ESM provides live monitoring and management of EtherDrive appliances that "
"use the AoE protocol, such as the SRX and VSX."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml50(td)
msgid "Fully-Qualified Repository Name (FQRN)"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml53(replaceable)
msgid "performance_class"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml53(replaceable)
msgid "availability_class"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml53(replaceable)
msgid "profile_name"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml53(replaceable)
msgid "repository_name"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml51(td)
msgid ""
"The FQRN is the full identifier of a storage profile. FQRN syntax is: "
"<placeholder-1/><placeholder-2/><placeholder-3/><placeholder-4/><placeholder-5/><placeholder-6/><placeholder-7/>"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml56(td)
msgid "SAN"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml57(td)
msgid "Storage Area Network"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml60(td)
msgid "SRX"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml61(td)
msgid "Coraid EtherDrive SRX block storage appliance"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml64(td)
msgid "VSX"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml65(td)
msgid "Coraid EtherDrive VSX storage virtualization appliance"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml72(title)
#: ./doc/config-reference/block-storage/drivers/xenapi-nfs.xml14(title)
msgid "Requirements"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml73(para)
msgid ""
"To support the OpenStack Block Storage Service, your SAN must include an SRX"
" for physical storage, a VSX running at least CorOS v2.0.6 for snapshot "
"support, and an ESM running at least v2.1.1 for storage repository "
"orchestration. Ensure that all storage appliances are installed and "
"connected to your network before you configure OpenStack volumes."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml79(para)
msgid ""
"So that the node can communicate with the SAN, you must install the Coraid "
"AoE Linux driver on each compute node on the network that runs an OpenStack "
"instance."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml84(title)
msgid "Overview"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml85(para)
msgid ""
"To configure the OpenStack Block Storage for use with Coraid storage "
"appliances, perform the following procedures:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml89(para)
msgid ""
"<link linkend=\"coraid_installing_aoe_driver\">Download and install the "
"Coraid Linux AoE driver</link>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml93(para)
msgid ""
"<link linkend=\"coraid_creating_storage_profile\">Create a storage profile "
"by using the Coraid ESM GUI</link>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml97(para)
msgid ""
"<link linkend=\"coraid_creating_storage_repository\">Create a storage "
"repository by using the ESM GUI and record the FQRN</link>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml102(para)
msgid ""
"<link linkend=\"coraid_configuring_cinder.conf\">Configure the "
"<filename>cinder.conf</filename> file</link>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml106(para)
msgid ""
"<link linkend=\"coraid_creating_associating_volume_type\">Create and "
"associate a block storage volume type</link>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml113(title)
msgid "Install the Coraid AoE driver"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml114(para)
msgid ""
"Install the Coraid AoE driver on every compute node that will require access"
" to block storage."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml116(para)
msgid ""
"The latest AoE drivers will always be located at <link "
"href=\"http://support.coraid.com/support/linux/\">http://support.coraid.com/support/linux/</link>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml119(para)
msgid ""
"To download and install the AoE driver, follow the instructions below, "
"replacing “aoeXXX” with the AoE driver file name:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml124(para)
msgid "Download the latest Coraid AoE driver."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml130(para)
msgid "Unpack the AoE driver."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml133(para)
msgid "Install the AoE driver."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml139(para)
msgid "Initialize the AoE driver."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml145(para)
msgid ""
"Optionally, specify the Ethernet interfaces that the node can use to "
"communicate with the SAN."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml147(para)
msgid ""
"The AoE driver may use every Ethernet interface available to the node unless"
" limited with the <literal>aoe_iflist</literal> parameter. For more "
"information about the <literal>aoe_iflist</literal> parameter, see the "
"<filename>aoe readme</filename> file included with the AoE driver."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml154(replaceable)
msgid "eth1 eth2 ..."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml160(title)
msgid "Create a storage profile"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml161(para)
msgid "To create a storage profile using the ESM GUI:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml164(para)
msgid "Log in to the ESM."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml167(para)
msgid ""
"Click <guibutton>Storage Profiles</guibutton> in the <guilabel>SAN "
"Domain</guilabel> pane."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml171(para)
msgid ""
"Choose <guimenuitem>Menu &gt; Create Storage Profile</guimenuitem>. If the "
"option is unavailable, you might not have appropriate permissions. Make sure"
" you are logged in to the ESM as the SAN administrator."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml177(para)
msgid "Use the storage class selector to select a storage class."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml179(para)
msgid ""
"Each storage class includes performance and availability criteria (see the "
"Storage Classes topic in the ESM Online Help for information on the "
"different options)."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml184(para)
msgid ""
"Select a RAID type (if more than one is available) for the selected profile "
"type."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml188(para)
msgid "Type a <guilabel>Storage Profile</guilabel> name."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml189(para)
msgid ""
"The name is restricted to alphanumeric characters, underscore (_), and "
"hyphen (-), and cannot exceed 32 characters."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml194(para)
msgid "Select the drive size from the drop-down menu."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml197(para)
msgid ""
"Select the number of drives to be initialized for each RAID (LUN) from the "
"drop-down menu (if the selected RAID type requires multiple drives)."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml202(para)
msgid ""
"Type the number of RAID sets (LUNs) you want to create in the repository by "
"using this profile."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml206(para)
msgid "Click <guibutton>Next</guibutton>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml211(title)
msgid "Create a storage repository and get the FQRN"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml212(para)
msgid ""
"Create a storage repository and get its fully qualified repository name "
"(FQRN):"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml216(para)
msgid "Access the <guilabel>Create Storage Repository</guilabel> dialog box."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml220(para)
msgid "Type a Storage Repository name."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml221(para)
msgid ""
"The name is restricted to alphanumeric characters, underscore (_), hyphen "
"(-), and cannot exceed 32 characters."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml226(para)
msgid ""
"Click <guibutton>Limited</guibutton> or <guibutton>Unlimited</guibutton> to "
"indicate the maximum repository size."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml229(para)
msgid ""
"<guibutton>Limited</guibutton> sets the amount of space that can be "
"allocated to the repository. Specify the size in TB, GB, or MB."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml232(para)
msgid ""
"When the difference between the reserved space and the space already "
"allocated to LUNs is less than is required by a LUN allocation request, the "
"reserved space is increased until the repository limit is reached."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml237(para)
msgid ""
"The reserved space does not include space used for parity or space used for "
"mirrors. If parity and/or mirrors are required, the actual space allocated "
"to the repository from the SAN is greater than that specified in reserved "
"space."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml243(para)
msgid ""
"<emphasis role=\"bold\">Unlimited</emphasis>—Unlimited means that the amount"
" of space allocated to the repository is unlimited and additional space is "
"allocated to the repository automatically when space is required and "
"available."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml249(para)
msgid ""
"Drives specified in the associated Storage Profile must be available on the "
"SAN in order to allocate additional resources."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml255(para)
msgid "Check the <guibutton>Resizeable LUN</guibutton> box."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml257(para)
msgid "This is required for OpenStack volumes."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml259(para)
msgid ""
"If the Storage Profile associated with the repository has platinum "
"availability, the Resizeable LUN box is automatically checked."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml265(para)
msgid ""
"Check the <guibutton>Show Allocation Plan API calls</guibutton> box. Click "
"<guibutton>Next</guibutton>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml270(para)
msgid "Record the FQRN and click <guibutton>Finish</guibutton>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml272(para)
msgid ""
"The FQRN is located in the first line of output following the "
"<literal>Plan</literal> keyword in the <guilabel>Repository Creation "
"Plan</guilabel> window. The FQRN syntax is "
"<replaceable>performance_class</replaceable><placeholder-1/><replaceable>availability_class</replaceable><placeholder-2/><replaceable>profile_name</replaceable><placeholder-3/><replaceable>repository_name</replaceable>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml277(para)
msgid ""
"In this example, the FQRN is <literal>Bronze-"
"Platinum:BP1000:OSTest</literal>, and is highlighted."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml281(title)
msgid "Repository Creation Plan screen"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml290(para)
msgid ""
"Record the FQRN; it is a required parameter later in the configuration "
"procedure."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml296(title)
msgid "Configure options in the cinder.conf file"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml297(para)
msgid ""
"Edit or add the following lines to the file<filename> "
"/etc/cinder/cinder.conf</filename>:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml300(replaceable)
msgid "ESM_IP_address"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml301(replaceable)
#: ./doc/config-reference/block-storage/drivers/hds-volume-driver.xml340(option)
msgid "username"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml302(replaceable)
msgid "Access_Control_Group_name"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml303(replaceable)
#: ./doc/config-reference/block-storage/drivers/hds-volume-driver.xml350(option)
msgid "password"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml304(replaceable)
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml352(replaceable)
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml370(replaceable)
msgid "coraid_repository_key"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml306(para)
msgid ""
"Access to storage devices and storage repositories can be controlled using "
"Access Control Groups configured in ESM. Configuring "
"<filename>cinder.conf</filename> to log on to ESM as the SAN administrator "
"(user name <literal>admin</literal>), will grant full access to the devices "
"and repositories configured in ESM."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml312(para)
msgid ""
"Optionally, you can configure an ESM Access Control Group and user. Then, "
"use the <filename>cinder.conf</filename> file to configure access to the ESM"
" through that group, and user limits access from the OpenStack instance to "
"devices and storage repositories that are defined in the group."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml317(para)
msgid ""
"To manage access to the SAN by using Access Control Groups, you must enable "
"the Use Access Control setting in the <emphasis role=\"bold\">ESM System "
"Setup</emphasis> &gt;<emphasis role=\"bold\"> Security</emphasis> screen."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml321(para)
msgid "For more information, see the ESM Online Help."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml324(title)
msgid "Create and associate a volume type"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml325(para)
msgid "Create and associate a volume with the ESM storage repository."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml329(para)
msgid "Restart Cinder."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml335(para)
msgid "Create a volume."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml336(replaceable)
msgid "volume_type_name"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml337(para)
msgid ""
"where <replaceable>volume_type_name</replaceable> is the name you assign the"
" volume. You will see output similar to the following:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml345(para)
msgid "Record the value in the ID field; you use this value in the next step."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml349(para)
msgid "Associate the volume type with the Storage Repository."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml352(replaceable)
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml363(replaceable)
msgid "UUID"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml352(replaceable)
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml378(replaceable)
msgid "FQRN"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml357(th)
msgid "Variable"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml358(th)
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml311(td)
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml202(td)
#: ./doc/config-reference/block-storage/drivers/hds-volume-driver.xml319(td)
#: ./doc/config-reference/block-storage/drivers/ibm-gpfs-volume-driver.xml80(th)
msgid "Description"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml364(td)
msgid ""
"The ID returned from the <placeholder-1/> command. You can use the "
"<placeholder-2/> command to recover the ID."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml373(filename)
msgid "cinder.conf"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml375(literal)
msgid "coraid_repository"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml371(td)
msgid ""
"The key name used to associate the Cinder volume type with the ESM in the "
"<placeholder-1/> file. If no key name was defined, this is default value for"
" <placeholder-2/>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/coraid-driver.xml379(td)
msgid "The FQRN recorded during the Create Storage Repository process."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/solidfire-volume-driver.xml5(title)
msgid "SolidFire"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/solidfire-volume-driver.xml6(para)
msgid ""
"The SolidFire Cluster is a high performance all SSD iSCSI storage device "
"that provides massive scale out capability and extreme fault tolerance. A "
"key feature of the SolidFire cluster is the ability to set and modify during"
" operation specific QoS levels on a volume for volume basis. The SolidFire "
"cluster offers this along with de-duplication, compression, and an "
"architecture that takes full advantage of SSDs."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/solidfire-volume-driver.xml14(para)
msgid ""
"To configure the use of a SolidFire cluster with Block Storage, modify your "
"<filename>cinder.conf</filename> file as follows:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/solidfire-volume-driver.xml23(para)
msgid ""
"The SolidFire driver creates a unique account prefixed with <literal"
">$cinder-volume-service-hostname-$tenant-id</literal> on the SolidFire "
"cluster for each tenant that accesses the cluster through the Volume API. "
"Unfortunately, this account formation results in issues for High "
"Availability (HA) installations and installations where the <systemitem "
"class=\"service\">cinder-volume</systemitem> service can move to a new node."
" HA installations can return an <errortext>Account Not Found</errortext> "
"error because the call to the SolidFire cluster is not always going to be "
"sent from the same node. In installations where the <systemitem "
"class=\"service\">cinder-volume</systemitem> service moves to a new node, "
"the same issue can occur when you perform operations on existing volumes, "
"such as clone, extend, delete, and so on."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/solidfire-volume-driver.xml41(para)
msgid ""
"Set the <literal>sf_account_prefix</literal> option to an empty string ('') "
"in the <filename>cinder.conf</filename> file. This setting results in unique"
" accounts being created on the SolidFire cluster, but the accounts are "
"prefixed with the tenant-id or any unique identifier that you choose and are"
" independent of the host where the <systemitem class=\"service\">cinder-"
"volume</systemitem> service resides."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xen-sm-driver.xml4(title)
msgid "XenAPI Storage Manager volume driver"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xen-sm-driver.xml5(para)
msgid ""
"The Xen Storage Manager volume driver (xensm) is a XenAPI hypervisor "
"specific volume driver, and can be used to provide basic storage "
"functionality, including volume creation and destruction, on a number of "
"different storage back-ends. It also enables the capability of using more "
"sophisticated storage back-ends for operations like cloning/snapshots, and "
"so on. Some of the storage plug-ins that are already supported in Citrix "
"XenServer and Xen Cloud Platform (XCP) are:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xen-sm-driver.xml15(para)
msgid ""
"NFS VHD: Storage repository (SR) plug-in that stores disks as Virtual Hard "
"Disk (VHD) files on a remote Network File System (NFS)."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xen-sm-driver.xml20(para)
msgid ""
"Local VHD on LVM: SR plug-in that represents disks as VHD disks on Logical "
"Volumes (LVM) within a locally-attached Volume Group."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xen-sm-driver.xml25(para)
msgid ""
"HBA LUN-per-VDI driver: SR plug-in that represents Logical Units (LUs) as "
"Virtual Disk Images (VDIs) sourced by host bus adapters (HBAs). For example,"
" hardware-based iSCSI or FC support."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xen-sm-driver.xml31(para)
msgid ""
"NetApp: SR driver for mapping of LUNs to VDIs on a NETAPP server, providing "
"use of fast snapshot and clone features on the filer."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xen-sm-driver.xml36(para)
msgid ""
"LVHD over FC: SR plug-in that represents disks as VHDs on Logical Volumes "
"within a Volume Group created on an HBA LUN. For example, hardware-based "
"iSCSI or FC support."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xen-sm-driver.xml42(para)
msgid ""
"iSCSI: Base ISCSI SR driver, provides a LUN-per-VDI. Does not support "
"creation of VDIs but accesses existing LUNs on a target."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xen-sm-driver.xml47(para)
msgid ""
"LVHD over iSCSI: SR plug-in that represents disks as Logical Volumes within "
"a Volume Group created on an iSCSI LUN."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xen-sm-driver.xml52(para)
msgid ""
"EqualLogic: SR driver for mapping of LUNs to VDIs on a EQUALLOGIC array "
"group, providing use of fast snapshot and clone features on the array."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xen-sm-driver.xml58(title)
msgid "Design and operation"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xen-sm-driver.xml60(title)
msgid "Definitions"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xen-sm-driver.xml63(para)
msgid ""
"<emphasis role=\"bold\">Back-end:</emphasis> A term for a particular storage"
" back-end. This could be iSCSI, NFS, NetApp, and so on."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xen-sm-driver.xml68(para)
msgid ""
"<emphasis role=\"bold\">Back-end-config:</emphasis> All the parameters "
"required to connect to a specific back-end. For example, for NFS, this would"
" be the server, path, and so on."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xen-sm-driver.xml75(para)
msgid ""
"<emphasis role=\"bold\">Flavor:</emphasis> This term is equivalent to volume"
" \"types\". A user friendly term to specify some notion of quality of "
"service. For example, \"gold\" might mean that the volumes use a back-end "
"where backups are possible. A flavor can be associated with multiple back-"
"ends. The volume scheduler, with the help of the driver, decides which back-"
"end is used to create a volume of a particular flavor. Currently, the driver"
" uses a simple \"first-fit\" policy, where the first back-end that can "
"successfully create this volume is the one that is used."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xen-sm-driver.xml93(title)
msgid "Operation"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xen-sm-driver.xml94(para)
msgid ""
"The admin uses the nova-manage command detailed below to add flavors and "
"back-ends."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xen-sm-driver.xml96(para)
msgid ""
"One or more <systemitem class=\"service\">cinder-volume</systemitem> service"
" instances are deployed for each availability zone. When an instance is "
"started, it creates storage repositories (SRs) to connect to the back-ends "
"available within that zone. All <systemitem class=\"service\">cinder-"
"volume</systemitem> instances within a zone can see all the available back-"
"ends. These instances are completely symmetric and hence should be able to "
"service any <literal>create_volume</literal> request within the zone."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xen-sm-driver.xml108(title)
msgid "On XenServer, PV guests required"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xen-sm-driver.xml109(para)
msgid ""
"Note that when using XenServer you can only attach a volume to a PV guest."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xen-sm-driver.xml115(title)
msgid "Configure XenAPI Storage Manager"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xen-sm-driver.xml120(para)
msgid ""
"xensm requires that you use either Citrix XenServer or XCP as the "
"hypervisor. The NetApp and EqualLogic back-ends are not supported on XCP."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xen-sm-driver.xml126(para)
msgid ""
"Ensure all <emphasis role=\"bold\">hosts</emphasis> running volume and "
"compute services have connectivity to the storage system."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xen-sm-driver.xml141(systemitem)
#: ./doc/config-reference/block-storage/drivers/xen-sm-driver.xml181(systemitem)
msgid "nova-compute"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xen-sm-driver.xml138(emphasis)
msgid ""
"Set the following configuration options for the nova volume service: "
"(<placeholder-1/> also requires the volume_driver configuration option.)"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xen-sm-driver.xml152(emphasis)
msgid ""
"You must create the back-end configurations that the volume driver uses "
"before you start the volume service."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xen-sm-driver.xml161(para)
msgid ""
"SR type and configuration connection parameters are in keeping with the "
"<link href=\"http://support.citrix.com/article/CTX124887\">XenAPI Command "
"Line Interface</link>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xen-sm-driver.xml168(para)
msgid "Example: For the NFS storage manager plug-in, run these commands:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xen-sm-driver.xml179(systemitem)
msgid "cinder-volume"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xen-sm-driver.xml177(emphasis)
msgid ""
"Start <placeholder-1/> and <placeholder-2/> with the new configuration "
"options."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xen-sm-driver.xml188(title)
msgid "Create and access the volumes from VMs"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xen-sm-driver.xml189(para)
msgid ""
"Currently, the flavors have not been tied to the volume types API. As a "
"result, we simply end up creating volumes in a \"first fit\" order on the "
"given back-ends."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xen-sm-driver.xml193(para)
msgid ""
"Use the standard <placeholder-1/> or OpenStack API commands (such as volume "
"extensions) to create, destroy, attach, or detach volumes."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-xiv-volume-driver.xml5(title)
msgid "IBM XIV/DS8K volume driver"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-xiv-volume-driver.xml6(para)
msgid ""
"There is a unified volume back-end for IBM XIV and DS8K storage. Set the "
"following in your <filename>cinder.conf</filename>, and use the following "
"options to configure it."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml5(title)
msgid "NetApp unified driver"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml6(para)
msgid ""
"The NetApp® unified driver is a block storage driver that supports multiple "
"storage families and protocols. A storage family corresponds to storage "
"systems built on different NetApp technologies such as clustered Data ONTAP®"
" and Data ONTAP operating in 7-Mode. The storage protocol refers to the "
"protocol used to initiate data storage and access operations on those "
"storage systems like iSCSI and NFS. The NetApp unified driver can be "
"configured to provision and manage OpenStack volumes on a given storage "
"family using a specified storage protocol. The OpenStack volumes can then be"
" used for accessing and storing data using the storage protocol on the "
"storage family system. The NetApp unified driver is an extensible interface "
"that can support new storage families and protocols."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml21(title)
msgid "NetApp clustered Data ONTAP storage family"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml22(para)
msgid ""
"The NetApp clustered Data ONTAP storage family represents a configuration "
"group which provides OpenStack compute instances access to clustered Data "
"ONTAP storage systems. At present it can be configured in Cinder to work "
"with iSCSI and NFS storage protocols."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml28(title)
msgid "NetApp iSCSI configuration for clustered Data ONTAP"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml30(para)
msgid ""
"The NetApp iSCSI configuration for clustered Data ONTAP is an interface from"
" OpenStack to clustered Data ONTAP storage systems for provisioning and "
"managing the SAN block storage entity; that is, a NetApp LUN which can be "
"accessed using the iSCSI protocol."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml35(para)
msgid ""
"The iSCSI configuration for clustered Data ONTAP is a direct interface from "
"Cinder to the clustered Data ONTAP instance and as such does not require "
"additional management software to achieve the desired functionality. It uses"
" NetApp APIs to interact with the clustered Data ONTAP instance."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml42(title)
msgid ""
"Configuration options for clustered Data ONTAP family with iSCSI protocol"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml44(para)
msgid ""
"Configure the volume driver, storage family and storage protocol to the "
"NetApp unified driver, clustered Data ONTAP, and iSCSI respectively by "
"setting the <literal>volume_driver</literal>, "
"<literal>netapp_storage_family</literal> and "
"<literal>netapp_storage_protocol</literal> options in "
"<filename>cinder.conf</filename> as follows:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml62(para)
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml194(para)
msgid ""
"You must override the default value of "
"<literal>netapp_storage_protocol</literal> with <literal>iscsi</literal> in "
"order to utilize the iSCSI protocol."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml67(para)
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml115(para)
msgid ""
"If you specify an account in the <literal>netapp_login</literal> that only "
"has virtual storage server (Vserver) administration privileges (rather than "
"cluster-wide administration privileges), some advanced features of the "
"NetApp unified driver will not work and you may see warnings in the Cinder "
"logs."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml74(para)
msgid ""
"For more information on these options and other deployment and operational "
"scenarios, visit the <link "
"href=\"https://communities.netapp.com/groups/openstack\"> OpenStack NetApp "
"community.</link>"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml81(title)
msgid "NetApp NFS configuration for clustered Data ONTAP"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml83(para)
msgid ""
"The NetApp NFS configuration for clustered Data ONTAP is an interface from "
"OpenStack to a clustered Data ONTAP system for provisioning and managing "
"OpenStack volumes on NFS exports provided by the clustered Data ONTAP system"
" that are accessed using the NFS protocol."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml88(para)
msgid ""
"The NFS configuration for clustered Data ONTAP is a direct interface from "
"Cinder to the clustered Data ONTAP instance and as such does not require any"
" additional management software to achieve the desired functionality. It "
"uses NetApp APIs to interact with the clustered Data ONTAP instance."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml94(title)
msgid ""
"Configuration options for the clustered Data ONTAP family with NFS protocol"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml96(para)
msgid ""
"Configure the volume driver, storage family and storage protocol to NetApp "
"unified driver, clustered Data ONTAP, and NFS respectively by setting the "
"<literal>volume_driver</literal>, <literal>netapp_storage_family</literal> "
"and <literal>netapp_storage_protocol</literal> options in "
"<filename>cinder.conf</filename> as follows:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml122(para)
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml199(para)
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml239(para)
msgid ""
"For more information on these options and other deployment and operational "
"scenarios, visit the <link "
"href=\"https://communities.netapp.com/groups/openstack\">OpenStack NetApp "
"community.</link>"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml126(title)
msgid "NetApp-supported extra specs for clustered Data ONTAP"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml128(para)
msgid ""
"Extra specs allow individual vendors to specify additional filter criteria "
"which the Cinder scheduler can use when evaluating which volume node should "
"fulfill a volume provisioning request. When using the NetApp unified driver "
"with a clustered Data ONTAP storage system, you can leverage extra specs "
"with Cinder volume types to ensure that Cinder volumes are created on "
"storage backends that have certain properties (e.g. QoS, mirroring, "
"compression) configured."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml135(para)
msgid ""
"Extra specs are associated with Cinder volume types, so that when users "
"request volumes of a particular volume type, they will be created on storage"
" backends that meet the list of requirements (e.g. available space, extra "
"specs, etc). You can use the specs in the table later in this section when "
"defining Cinder volume types using the <literal>cinder type-key</literal> "
"command."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml142(para)
msgid ""
"It is recommended to only set the value of extra specs to "
"<literal>True</literal> when combining multiple specs to enforce a certain "
"logic set. If you desire to remove volumes with a certain feature enabled "
"from consideration from the Cinder volume scheduler, be sure to use the "
"negated spec name with a value of <literal> True</literal> rather than "
"setting the positive spec to a value of <literal>False</literal>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml153(title)
msgid "NetApp Data ONTAP operating in 7-Mode storage family"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml154(para)
msgid ""
"The NetApp Data ONTAP operating in 7-Mode storage family represents a "
"configuration group which provides OpenStack compute instances access to "
"7-Mode storage systems. At present it can be configured in Cinder to work "
"with iSCSI and NFS storage protocols."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml160(title)
msgid "NetApp iSCSI configuration for Data ONTAP operating in 7-Mode"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml162(para)
msgid ""
"The NetApp iSCSI configuration for Data ONTAP operating in 7-Mode is an "
"interface from OpenStack to Data ONTAP operating in 7-Mode storage systems "
"for provisioning and managing the SAN block storage entity, that is, a LUN "
"which can be accessed using iSCSI protocol."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml167(para)
msgid ""
"The iSCSI configuration for Data ONTAP operating in 7-Mode is a direct "
"interface from OpenStack to Data ONTAP operating in 7-Mode storage system "
"and it does not require additional management software to achieve the "
"desired functionality. It uses NetApp ONTAPI to interact with the Data ONTAP"
" operating in 7-Mode storage system."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml174(title)
msgid ""
"Configuration options for the Data ONTAP operating in 7-Mode storage family "
"with iSCSI protocol"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml176(para)
msgid ""
"Configure the volume driver, storage family and storage protocol to the "
"NetApp unified driver, Data ONTAP operating in 7-Mode, and iSCSI "
"respectively by setting the <literal>volume_driver</literal>, "
"<literal>netapp_storage_family</literal> and "
"<literal>netapp_storage_protocol</literal> options in "
"<filename>cinder.conf</filename> as follows:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml203(title)
msgid "NetApp NFS configuration for Data ONTAP operating in 7-Mode"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml205(para)
msgid ""
"The NetApp NFS configuration for Data ONTAP operating in 7-Mode is an "
"interface from OpenStack to Data ONTAP operating in 7-Mode storage system "
"for provisioning and managing OpenStack volumes on NFS exports provided by "
"the Data ONTAP operating in 7-Mode storage system which can then be accessed"
" using NFS protocol."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml211(para)
msgid ""
"The NFS configuration for Data ONTAP operating in 7-Mode is a direct "
"interface from Cinder to the Data ONTAP operating in 7-Mode instance and as "
"such does not require any additional management software to achieve the "
"desired functionality. It uses NetApp ONTAPI to interact with the Data ONTAP"
" operating in 7-Mode storage system."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml218(title)
msgid ""
"Configuration options for the Data ONTAP operating in 7-Mode family with NFS"
" protocol"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml220(para)
msgid ""
"Configure the volume driver, storage family and storage protocol to the "
"NetApp unified driver, Data ONTAP operating in 7-Mode, and NFS respectively "
"by setting the <literal>volume_driver</literal>, "
"<literal>netapp_storage_family</literal> and "
"<literal>netapp_storage_protocol</literal> options in "
"<filename>cinder.conf</filename> as follows:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml244(title)
msgid "Upgrading prior NetApp drivers to the NetApp unified driver"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml245(para)
msgid ""
"NetApp introduced a new unified block storage driver in Havana for "
"configuring different storage families and storage protocols. This requires "
"defining upgrade path for NetApp drivers which existed in releases prior to "
"Havana. This section covers the upgrade configuration for NetApp drivers to "
"the new unified configuration and a list of deprecated NetApp drivers."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml253(title)
msgid "Upgraded NetApp drivers"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml254(para)
msgid ""
"This section describes how to update Cinder configuration from a pre-Havana "
"release to the new unified driver format."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml258(title)
msgid "Driver upgrade configuration"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml261(para)
msgid ""
"NetApp iSCSI direct driver for Clustered Data ONTAP in Grizzly (or earlier)"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml266(para)
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml280(para)
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml295(para)
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml310(para)
msgid "NetApp Unified Driver configuration"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml275(para)
msgid ""
"NetApp NFS direct driver for Clustered Data ONTAP in Grizzly (or earlier)"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml289(para)
msgid ""
"NetApp iSCSI direct driver for Data ONTAP operating in 7-Mode storage "
"controller in Grizzly (or earlier)"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml304(para)
msgid ""
"NetApp NFS direct driver for Data ONTAP operating in 7-Mode storage "
"controller in Grizzly (or earlier)"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml323(title)
msgid "Deprecated NetApp drivers"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml324(para)
msgid ""
"This section lists the NetApp drivers in previous releases that are "
"deprecated in Havana."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml328(para)
msgid "NetApp iSCSI driver for clustered Data ONTAP."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml335(para)
msgid "NetApp NFS driver for clustered Data ONTAP."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml342(para)
msgid ""
"NetApp iSCSI driver for Data ONTAP operating in 7-Mode storage controller."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml349(para)
msgid ""
"NetApp NFS driver for Data ONTAP operating in 7-Mode storage controller."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/netapp-volume-driver.xml357(para)
msgid ""
"See the <link "
"href=\"https://communities.netapp.com/groups/openstack\">OpenStack NetApp "
"community</link> for support information on deprecated NetApp drivers in the"
" Havana release."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml6(title)
msgid "Huawei storage driver"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml7(para)
msgid ""
"Huawei driver supports the iSCSI and Fibre Channel connections and enables "
"OceanStor T series unified storage, OceanStor Dorado high-performance "
"storage, and OceanStor HVS high-end storage to provide block storage "
"services for OpenStack."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml20(para)
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml51(para)
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml77(para)
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml96(para)
#: ./doc/config-reference/block-storage/drivers/emc-volume-driver.xml39(para)
#: ./doc/config-reference/block-storage/drivers/hds-volume-driver.xml33(para)
msgid "Delete volume"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml23(para)
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml54(para)
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml80(para)
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml99(para)
#: ./doc/config-reference/block-storage/drivers/emc-volume-driver.xml42(para)
#: ./doc/config-reference/block-storage/drivers/hds-volume-driver.xml36(para)
msgid "Attach volume"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml29(para)
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml60(para)
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml105(para)
#: ./doc/config-reference/block-storage/drivers/emc-volume-driver.xml48(para)
#: ./doc/config-reference/block-storage/drivers/hds-volume-driver.xml48(para)
msgid "Create snapshot"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml32(para)
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml63(para)
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml108(para)
#: ./doc/config-reference/block-storage/drivers/emc-volume-driver.xml51(para)
#: ./doc/config-reference/block-storage/drivers/hds-volume-driver.xml51(para)
msgid "Delete snapshot"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml38(para)
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml114(para)
msgid "Create clone volume"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml41(para)
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml66(para)
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml86(para)
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml117(para)
#: ./doc/config-reference/block-storage/drivers/emc-volume-driver.xml57(para)
#: ./doc/config-reference/block-storage/drivers/hds-volume-driver.xml54(para)
msgid "Copy image to volume"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml44(para)
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml69(para)
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml89(para)
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml120(para)
#: ./doc/config-reference/block-storage/drivers/emc-volume-driver.xml60(para)
#: ./doc/config-reference/block-storage/drivers/hds-volume-driver.xml57(para)
msgid "Copy volume to image"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml14(para)
msgid ""
"OceanStor T series unified storage supports the following "
"operations:<placeholder-1/>OceanStor Dorado5100 supports the following "
"operations:<placeholder-2/>OceanStor Dorado2100 G2 supports the following "
"operations:<placeholder-3/>OceanStor HVS supports the following "
"operations:<placeholder-4/>"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml125(title)
msgid "Configure Cinder nodes"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml126(para)
msgid ""
"In <filename>/etc/cinder</filename>, create the driver configuration file "
"named <filename>cinder_huawei_conf.xml</filename>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml129(para)
msgid ""
"You must configure <option>Product</option> and <option>Protocol</option> to"
" specify a storage system and link type. The following uses the iSCSI driver"
" as an example. The driver configuration file of OceanStor T series unified "
"storage is shown as follows:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml160(para)
msgid ""
"The driver configuration file of OceanStor Dorado5100 is shown as follows:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml186(para)
msgid ""
"The driver configuration file of OceanStor Dorado2100 G2 is shown as "
"follows:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml210(para)
msgid "The driver configuration file of OceanStor HVS is shown as follows:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml236(para)
msgid ""
"You do not need to configure the iSCSI target IP address for the Fibre "
"Channel driver. In the prior example, delete the iSCSI configuration:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml246(para)
msgid ""
"To add <option>volume_driver</option> and "
"<option>cinder_huawei_conf_file</option> items, you can modify the "
"<filename>cinder.conf</filename> configuration file as follows:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml252(para)
msgid "You can configure multiple Huawei back-end storages as follows:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml263(para)
msgid ""
"OceanStor HVS storage system supports the QoS function. You must create a "
"QoS policy for the HVS storage system and create the volume type to enable "
"QoS as follows:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml272(para)
msgid ""
"<option>OpenStack_QoS_high</option> is a QoS policy created by a user for "
"the HVS storage system. <option>QoS_high</option> is the self-defined volume"
" type. Set the <option>io_priority</option> option to "
"<literal>high</literal>, <literal>normal</literal>, or "
"<literal>low</literal>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml280(para)
msgid ""
"OceanStor HVS storage system supports the SmartTier function. SmartTier has "
"three tiers. You can create the volume type to enable SmartTier as follows:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml288(para)
msgid ""
"<option>distribute_policy</option> and <option>transfer_strategy</option> "
"can only be set to <literal>high</literal>, <literal>normal</literal>, or "
"<literal>low</literal>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml295(title)
msgid "Configuration file details"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml296(para)
msgid "This table describes the Huawei storage driver configuration options:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml299(caption)
msgid "Huawei storage driver configuration options"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml308(td)
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml199(td)
msgid "Flag name"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml309(td)
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml200(td)
#: ./doc/config-reference/block-storage/drivers/hds-volume-driver.xml317(td)
msgid "Type"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml310(td)
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml201(td)
#: ./doc/config-reference/block-storage/drivers/hds-volume-driver.xml318(td)
msgid "Default"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml317(option)
msgid "Product"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml320(para)
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml334(td)
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml345(td)
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml357(para)
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml369(td)
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml381(para)
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml396(para)
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml501(td)
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml209(para)
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml224(para)
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml262(para)
#: ./doc/config-reference/block-storage/drivers/hds-volume-driver.xml326(para)
#: ./doc/config-reference/block-storage/drivers/hds-volume-driver.xml334(para)
#: ./doc/config-reference/block-storage/drivers/hds-volume-driver.xml379(para)
#: ./doc/config-reference/block-storage/drivers/hds-volume-driver.xml391(para)
#: ./doc/config-reference/block-storage/drivers/hds-volume-driver.xml408(para)
#: ./doc/config-reference/block-storage/drivers/hds-volume-driver.xml419(para)
msgid "Required"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml326(para)
msgid ""
"Type of a storage product. Valid values are <literal>T</literal>, "
"<literal>Dorado</literal>, or <literal>HVS</literal>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml333(option)
msgid "Protocol"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml339(literal)
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml412(para)
msgid "iSCSI"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml340(literal)
msgid "FC"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml338(td)
msgid ""
"Type of a protocol. Valid values are <placeholder-1/> or <placeholder-2/>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml344(option)
msgid "ControllerIP0"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml349(td)
msgid "IP address of the primary controller (not required for the HVS)"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml354(option)
msgid "ControllerIP1"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml363(para)
msgid "IP address of the secondary controller (not required for the HVS)"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml368(option)
msgid "HVSURL"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml373(td)
msgid "Access address of the Rest port (required only for the HVS)"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml378(option)
msgid "UserName"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml387(para)
msgid "User name of an administrator"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml392(option)
msgid "UserPassword"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml402(para)
msgid "Password of an administrator"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml407(option)
msgid "LUNType"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml410(para)
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml427(para)
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml445(para)
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml463(para)
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml476(td)
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml491(td)
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml513(td)
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml524(td)
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml534(td)
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml545(td)
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml553(td)
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml217(para)
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml271(para)
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml295(para)
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml306(para)
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml330(para)
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml341(para)
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml362(para)
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml380(para)
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml390(para)
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml411(para)
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml422(para)
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml432(para)
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml448(para)
#: ./doc/config-reference/block-storage/drivers/hds-volume-driver.xml342(para)
#: ./doc/config-reference/block-storage/drivers/hds-volume-driver.xml352(para)
#: ./doc/config-reference/block-storage/drivers/hds-volume-driver.xml364(para)
#: ./doc/config-reference/block-storage/drivers/hds-volume-driver.xml430(para)
#: ./doc/config-reference/block-storage/drivers/hds-volume-driver.xml441(para)
msgid "Optional"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml413(para)
msgid "Thin"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml416(para)
msgid ""
"Type of a created LUN. Valid values are <literal>Thick</literal> or "
"<literal>Thin</literal>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml423(option)
msgid "StripUnitSize"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml430(para)
msgid "64"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml433(para)
msgid "Stripe depth of a created LUN. The value is expressed in KB."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml435(para)
msgid "This flag is not valid for a thin LUN."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml441(option)
msgid "WriteType"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml448(para)
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml466(para)
msgid "1"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml451(para)
msgid ""
"Cache write method. The method can be write back, write through, or Required"
" write back. The default value is <literal>1</literal>, indicating write "
"back."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml460(option)
msgid "MirrorSwitch"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml469(para)
msgid ""
"Cache mirroring policy. The default value is <literal>1</literal>, "
"indicating that a mirroring policy is used."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml475(option)
msgid "Prefetch Type"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml478(para)
msgid "3"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml481(para)
msgid ""
"Cache prefetch strategy. The strategy can be constant prefetch, variable "
"prefetch, or intelligent prefetch. Default value is <literal>3</literal>, "
"which indicates intelligent prefetch and is not required for the HVS."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml490(option)
msgid "Prefetch Value"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml493(para)
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml381(para)
#: ./doc/config-reference/block-storage/drivers/hds-volume-driver.xml431(para)
msgid "0"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml496(para)
msgid "Cache prefetch value."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml500(option)
msgid "StoragePool"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml506(para)
msgid ""
"Name of a storage pool that you want to use. Not required for the Dorado2100"
" G2."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml512(option)
msgid "DefaultTargetIP"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml518(para)
msgid "Default IP address of the iSCSI port provided for compute nodes."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml523(option)
msgid "Initiator Name"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml529(para)
msgid "Name of a compute node initiator."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml533(option)
msgid "Initiator TargetIP"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml539(para)
msgid "IP address of the iSCSI port provided for Compute nodes."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml544(option)
msgid "OSType"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml547(para)
msgid "Linux"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml549(td)
msgid "The OS type for a Compute node."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml552(option)
msgid "HostIP"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml557(td)
msgid "The IPs for Compute nodes."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml564(para)
msgid ""
"You can configure one iSCSI target port for each or all Compute nodes. The "
"driver checks whether a target port IP address is configured for the current"
" Compute node. If not, select <option>DefaultTargetIP</option>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml571(para)
msgid ""
"You can configure multiple storage pools in one configuration file, which "
"supports the use of multiple storage pools in a storage system. (HVS allows "
"configuration of only one storage pool.)"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml578(para)
msgid ""
"For details about LUN configuration information, see the <placeholder-1/> "
"command in the command-line interface (CLI) documentation or run the "
"<placeholder-2/> on the storage system CLI."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/huawei-storage-driver.xml587(para)
msgid ""
"After the driver is loaded, the storage system obtains any modification of "
"the driver configuration file in real time and you do not need to restart "
"the <systemitem class=\"service\">cinder-volume</systemitem> service."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/glusterfs-driver.xml6(title)
msgid "GlusterFS driver"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/glusterfs-driver.xml7(para)
msgid ""
"GlusterFS is an open-source scalable distributed file system that is able to"
" grow to petabytes and beyond in size. More information can be found on "
"<link href=\"http://www.gluster.org/\">Gluster's homepage</link>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/glusterfs-driver.xml12(para)
msgid ""
"This driver enables use of GlusterFS in a similar fashion as the NFS driver."
" It supports basic volume operations, and like NFS, does not support "
"snapshot/clone."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/glusterfs-driver.xml16(para)
msgid ""
"You must use a Linux kernel of version 3.4 or greater (or version 2.6.32 or "
"greater in RHEL/CentOS 6.3+) when working with Gluster-based volumes. See "
"<link href=\"https://bugs.launchpad.net/nova/+bug/1177103\">Bug "
"1177103</link> for more information."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/glusterfs-driver.xml22(para)
msgid ""
"To use Cinder with GlusterFS, first set the <literal>volume_driver</literal>"
" in <filename>cinder.conf</filename>:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/glusterfs-driver.xml26(para)
msgid ""
"The following table contains the configuration options supported by the "
"GlusterFS driver."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-volume-driver.xml5(title)
msgid "EMC SMI-S iSCSI driver"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-volume-driver.xml6(para)
msgid ""
"The EMC SMI-S iSCSI driver, which is based on the iSCSI driver, can create, "
"delete, attach, and detach volumes, create and delete snapshots, and so on."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-volume-driver.xml9(para)
msgid ""
"The EMC SMI-S iSCSI driver runs volume operations by communicating with the "
"back-end EMC storage. It uses a CIM client in Python called PyWBEM to "
"perform CIM operations over HTTP."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-volume-driver.xml13(para)
msgid ""
"The EMC CIM Object Manager (ECOM) is packaged with the EMC SMI-S provider. "
"It is a CIM server that enables CIM clients to perform CIM operations over "
"HTTP by using SMI-S in the back-end for EMC storage operations."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-volume-driver.xml17(para)
msgid ""
"The EMC SMI-S Provider supports the SNIA Storage Management Initiative "
"(SMI), an ANSI standard for storage management. It supports VMAX and VNX "
"storage systems."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-volume-driver.xml21(title)
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml16(title)
#: ./doc/config-reference/block-storage/drivers/hds-volume-driver.xml17(title)
msgid "System requirements"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-volume-driver.xml22(para)
msgid ""
"EMC SMI-S Provider V4.5.1 and higher is required. You can download SMI-S "
"from the <link href=\"http://powerlink.emc.com\">EMC Powerlink</link> web "
"site. See the EMC SMI-S Provider release notes for installation "
"instructions."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-volume-driver.xml27(para)
msgid "EMC storage VMAX Family and VNX Series are supported."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-volume-driver.xml32(para)
msgid "VMAX and VNX arrays support these operations:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-volume-driver.xml54(para)
msgid "Create cloned volume"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-volume-driver.xml63(para)
msgid "Only VNX supports these operations:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-volume-driver.xml69(para)
msgid "Only thin provisioning is supported."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-volume-driver.xml72(title)
msgid "Task flow"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-volume-driver.xml74(title)
msgid "To set up the EMC SMI-S iSCSI driver"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-volume-driver.xml76(para)
msgid ""
"Install the <package>python-pywbem</package> package for your distribution. "
"See <xref linkend=\"install-pywbem\"/>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-volume-driver.xml81(para)
msgid ""
"Download SMI-S from PowerLink and install it. Add your VNX/VMAX arrays to "
"SMI-S."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-volume-driver.xml83(para)
msgid ""
"For information, see <xref linkend=\"setup-smi-s\"/> and the SMI-S release "
"notes."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-volume-driver.xml87(para)
msgid "Register with VNX. See <xref linkend=\"register-emc\"/>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-volume-driver.xml91(para)
msgid "Create a masking view on VMAX. See <xref linkend=\"create-masking\"/>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-volume-driver.xml96(title)
msgid "Install the <package>python-pywbem</package> package"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-volume-driver.xml99(para)
msgid ""
"Install the <package>python-pywbem</package> package for your distribution:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-volume-driver.xml103(para)
msgid "On Ubuntu:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-volume-driver.xml107(para)
msgid "On openSUSE:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-volume-driver.xml111(para)
msgid "On Fedora:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-volume-driver.xml119(title)
msgid "Set up SMI-S"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-volume-driver.xml120(para)
msgid ""
"You can install SMI-S on a non-OpenStack host. Supported platforms include "
"different flavors of Windows, Red Hat, and SUSE Linux. The host can be "
"either a physical server or VM hosted by an ESX server. See the EMC SMI-S "
"Provider release notes for supported platforms and installation "
"instructions."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-volume-driver.xml127(para)
msgid ""
"You must discover storage arrays on the SMI-S server before you can use the "
"Cinder driver. Follow instructions in the SMI-S release notes."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-volume-driver.xml131(para)
msgid ""
"SMI-S is usually installed at <filename>/opt/emc/ECIM/ECOM/bin</filename> on"
" Linux and <filename>C:\\Program Files\\EMC\\ECIM\\ECOM\\bin</filename> on "
"Windows. After you install and configure SMI-S, go to that directory and "
"type <placeholder-1/>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-volume-driver.xml138(para)
msgid ""
"Use <placeholder-1/> in <placeholder-2/> to add an array. Use "
"<placeholder-3/> and examine the output after the array is added. Make sure "
"that the arrays are recognized by the SMI-S server before using the EMC "
"Cinder driver."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-volume-driver.xml146(title)
msgid "Register with VNX"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-volume-driver.xml147(para)
msgid ""
"To export a VNX volume to a Compute node, you must register the node with "
"VNX."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-volume-driver.xml149(para)
msgid ""
"On the Compute node <literal>1.1.1.1</literal>, run these commands (assume "
"<literal>10.10.61.35</literal> is the iscsi target):"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-volume-driver.xml152(literal)
#: ./doc/config-reference/block-storage/drivers/emc-volume-driver.xml158(literal)
msgid "10.10.61.35"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-volume-driver.xml156(para)
msgid ""
"Log in to VNX from the Compute node by using the target corresponding to the"
" SPA port:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-volume-driver.xml158(literal)
msgid "iqn.1992-04.com.emc:cx.apm01234567890.a0"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-volume-driver.xml159(para)
msgid ""
"Assume <literal>iqn.1993-08.org.debian:01:1a2b3c4d5f6g</literal> is the "
"initiator name of the Compute node. Log in to Unisphere, go to "
"<literal>VNX00000</literal>-&gt;Hosts-&gt;Initiators, refresh and wait until"
" initiator <literal>iqn.1993-08.org.debian:01:1a2b3c4d5f6g</literal> with SP"
" Port <literal>A-8v0</literal> appears."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-volume-driver.xml167(para)
msgid ""
"Click <guibutton>Register</guibutton>, select "
"<guilabel>CLARiiON/VNX</guilabel>, and enter the <literal>myhost1</literal> "
"host name and <literal>myhost1</literal> IP address. Click "
"<guibutton>Register</guibutton>. Now the <literal>1.1.1.1</literal> host "
"appears under <guimenu>Hosts</guimenu><guimenuitem>Host List</guimenuitem> "
"as well."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-volume-driver.xml172(para)
msgid "Log out of VNX on the Compute node:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-volume-driver.xml174(para)
msgid ""
"Log in to VNX from the Compute node using the target corresponding to the "
"SPB port:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-volume-driver.xml177(para)
msgid "In Unisphere, register the initiator with the SPB port."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-volume-driver.xml179(para)
msgid "Log out:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-volume-driver.xml183(title)
msgid "Create a masking view on VMAX"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-volume-driver.xml184(para)
msgid ""
"For VMAX, you must set up the Unisphere for VMAX server. On the Unisphere "
"for VMAX server, create initiator group, storage group, and port group and "
"put them in a masking view. Initiator group contains the initiator names of "
"the OpenStack hosts. Storage group must have at least six gatekeepers."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-volume-driver.xml194(para)
msgid ""
"Make the following changes in <filename>/etc/cinder/cinder.conf</filename>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-volume-driver.xml196(para)
msgid ""
"For VMAX, add the following entries, where <literal>10.10.61.45</literal> is"
" the IP address of the VMAX iscsi target:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-volume-driver.xml203(para)
msgid ""
"For VNX, add the following entries, where <literal>10.10.61.35</literal> is "
"the IP address of the VNX iscsi target:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-volume-driver.xml210(para)
msgid ""
"Restart the <systemitem class=\"service\">cinder-volume</systemitem> "
"service."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-volume-driver.xml214(title)
msgid "<filename>cinder_emc_config.xml</filename> configuration file"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-volume-driver.xml216(para)
msgid ""
"Create the file <filename>/etc/cinder/cinder_emc_config.xml</filename>. You "
"do not need to restart the service for this change."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-volume-driver.xml220(para)
msgid "For VMAX, add the following lines to the XML file:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-volume-driver.xml231(para)
msgid "For VNX, add the following lines to the XML file:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-volume-driver.xml241(para)
msgid ""
"To attach VMAX volumes to an OpenStack VM, you must create a Masking View by"
" using Unisphere for VMAX. The Masking View must have an Initiator Group "
"that contains the initiator of the OpenStack compute node that hosts the VM."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-volume-driver.xml246(para)
msgid ""
"StorageType is the thin pool where user wants to create the volume from. "
"Only thin LUNs are supported by the plug-in. Thin pools can be created using"
" Unisphere for VMAX and VNX."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/emc-volume-driver.xml250(para)
msgid ""
"EcomServerIp and EcomServerPort are the IP address and port number of the "
"ECOM server which is packaged with SMI-S. EcomUserName and EcomPassword are "
"credentials for the ECOM server."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml4(title)
msgid "HP 3PAR Fibre Channel and iSCSI drivers"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml5(para)
msgid ""
"The <filename>HP3PARFCDriver</filename> and "
"<filename>HP3PARISCSIDriver</filename> drivers, which are based on the Block"
" Storage Service (Cinder) plug-in architecture, run volume operations by "
"communicating with the HP 3PAR storage system over HTTP, HTTPS, and SSH "
"connections. The HTTP and HTTPS communications use "
"<package>hp3parclient</package>, which is part of the Python standard "
"library."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml13(para)
msgid ""
"For information about how to manage HP 3PAR storage systems, see the HP 3PAR"
" user documentation."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml17(para)
msgid ""
"To use the HP 3PAR drivers, install the following software and components on"
" the HP 3PAR storage system:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml22(para)
msgid "HP 3PAR Operating System software version 3.1.2 (MU2) or higher"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml26(para)
msgid "HP 3PAR Web Services API Server must be enabled and running"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml30(para)
msgid "One Common Provisioning Group (CPG)"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml33(para)
msgid ""
"Additionally, you must install the <package>hp3parclient</package> version "
"2.0 or newer from the Python standard library on the system with the enabled"
" Block Storage Service volume drivers."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml45(para)
msgid "Create volumes."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml48(para)
msgid "Delete volumes."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml51(para)
msgid "Extend volumes."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml54(para)
msgid "Attach volumes."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml57(para)
msgid "Detach volumes."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml60(para)
msgid "Create snapshots."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml63(para)
msgid "Delete snapshots."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml66(para)
msgid "Create volumes from snapshots."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml69(para)
msgid "Create cloned volumes."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml72(para)
msgid "Copy images to volumes."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml75(para)
msgid "Copy volumes to images."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml78(para)
msgid ""
"Volume type support for both HP 3PAR drivers includes the ability to set the"
" following capabilities in the OpenStack Cinder API "
"<filename>cinder.api.contrib.types_extra_specs</filename> volume type extra "
"specs extension module:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml85(literal)
msgid "hp3par:cpg"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml88(literal)
msgid "hp3par:snap_cpg"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml91(literal)
msgid "hp3par:provisioning"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml94(literal)
msgid "hp3par:persona"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml97(literal)
msgid "hp3par:vvs"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml100(literal)
msgid "qos:maxBWS"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml103(literal)
msgid "qos:maxIOPS"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml106(para)
msgid ""
"To work with the default filter scheduler, the key values are case sensitive"
" and scoped with <literal>hp3par:</literal> or <literal>qos:</literal>. For "
"information about how to set the key-value pairs and associate them with a "
"volume type, run the following command: <placeholder-1/>"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml116(para)
msgid ""
"Volumes that are cloned only support extra specs keys cpg, snap_cpg, "
"provisioning and vvs. The others are ignored. In addition the comments "
"section of the cloned volume in the HP 3PAR StoreServ storage array is not "
"populated."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml122(para)
msgid ""
"The following keys require that the HP 3PAR StoreServ storage array has a "
"Priority Optimization license installed."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml127(para)
msgid ""
"<literal>hp3par:vvs</literal> - The virtual volume set name that has been "
"predefined by the Administrator with Quality of Service (QoS) rules "
"associated to it. If you specify <literal>hp3par:vvs</literal>, the "
"<literal>qos:maxIOPS</literal> and <literal>qos:maxBWS</literal> settings "
"are ignored."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml137(para)
msgid ""
"<literal>qos:maxBWS</literal> - The QoS I/O issue count rate limit in MBs. "
"If not set, the I/O issue bandwidth rate has no limit."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml142(para)
msgid ""
"<literal>qos:maxIOPS</literal> - The QoS I/O issue count rate limit. If not "
"set, the I/O issue count rate has no limit."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml147(para)
msgid ""
"If volume types are not used or a particular key is not set for a volume "
"type, the following defaults are used."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml152(para)
msgid ""
"<literal>hp3par:cpg</literal> - Defaults to the "
"<literal>hp3par_cpg</literal> setting in the "
"<filename>cinder.conf</filename> file."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml157(para)
msgid ""
"<literal>hp3par:snap_cpg</literal> - Defaults to the "
"<literal>hp3par_snap</literal> setting in the "
"<filename>cinder.conf</filename> file. If <literal>hp3par_snap</literal> is "
"not set, it defaults to the <literal>hp3par_cpg</literal> setting."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml165(para)
msgid ""
"<literal>hp3par:provisioning</literal> - Defaults to thin provisioning, the "
"valid values are <literal>thin</literal> and <literal>full</literal>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml171(para)
msgid ""
"<literal>hp3par:persona</literal> - Defaults to the <literal>1 "
"Generic</literal> persona. The valid values are, <literal>1 "
"Generic</literal>, <literal>2 - Generic-ALUA</literal>, <literal>6 - "
"Generic-legacy</literal>, <literal>7 - HPUX-legacy</literal>, <literal>8 - "
"AIX-legacy</literal>, <literal>9 EGENERA</literal>, <literal>10 - ONTAP-"
"legacy</literal>, <literal>11 VMware</literal>, and <literal>12 - "
"OpenVMS</literal>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml186(title)
msgid "Enable the HP 3PAR Fibre Channel and iSCSI drivers"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml188(para)
msgid ""
"The <filename>HP3PARFCDriver</filename> and "
"<filename>HP3PARISCSIDriver</filename> are installed with the OpenStack "
"software."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml193(para)
msgid ""
"Install the <filename>hp3parclient</filename> Python package on the "
"OpenStack Block Storage system. <placeholder-1/>"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml200(para)
msgid ""
"Verify that the HP 3PAR Web Services API server is enabled and running on "
"the HP 3PAR storage system."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml205(para)
msgid ""
"Log onto the HP 3PAR storage system with administrator "
"access.<placeholder-1/>"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml210(para)
msgid ""
"View the current state of the Web Services API Server. "
"<placeholder-1/><placeholder-2/><placeholder-3/>"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml217(para)
msgid "If the Web Services API Server is disabled, start it.<placeholder-1/>"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml227(para)
msgid ""
"To stop the Web Services API Server, use the stopwsapi command. For other "
"options run the <placeholder-1/> command."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml224(para)
msgid ""
"If the HTTP or HTTPS state is disabled, enable one of them.<placeholder-1/> "
"or <placeholder-2/><placeholder-3/>"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml234(para)
msgid ""
"If you are not using an existing CPG, create a CPG on the HP 3PAR storage "
"system to be used as the default location for creating volumes."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml239(para)
msgid ""
"Make the following changes in the "
"<filename>/etc/cinder/cinder.conf</filename> file."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml243(emphasis)
msgid "## REQUIRED SETTINGS"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml277(emphasis)
msgid "## OPTIONAL SETTINGS"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml291(para)
msgid ""
"You can enable only one driver on each cinder instance unless you enable "
"multiple back-end support. See the Cinder multiple back-end support "
"instructions to enable this feature."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml298(para)
msgid ""
"You can configure one or more iSCSI addresses by using the "
"<option>hp3par_iscsi_ips</option> option. When you configure multiple "
"addresses, the driver selects the iSCSI port with the fewest active volumes "
"at attach time. The IP address might include an IP port by using a colon "
"(<literal>:</literal>) to separate the address from port. If you do not "
"define an IP port, the default port 3260 is used. Separate IP addresses with"
" a comma (<literal>,</literal>). The "
"<option>iscsi_ip_address</option>/<option>iscsi_port</option> options might "
"be used as an alternative to <option>hp3par_iscsi_ips</option> for single "
"port iSCSI configuration."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml317(para)
msgid ""
"Save the changes to the <filename>cinder.conf</filename> file and restart "
"the <systemitem class=\"service\">cinder-volume</systemitem> service."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-3par-driver.xml323(para)
msgid ""
"The HP 3PAR Fibre Channel and iSCSI drivers are now enabled on your "
"OpenStack system. If you experience problems, review the Block Storage "
"Service log files for errors."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/dell-equallogic-driver.xml6(title)
msgid "Dell EqualLogic volume driver"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/dell-equallogic-driver.xml7(para)
msgid ""
"The Dell EqualLogic volume driver interacts with configured EqualLogic "
"arrays and supports various operations, such as volume creation and "
"deletion, volume attachment and detachment, snapshot creation and deletion, "
"and clone creation."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/dell-equallogic-driver.xml12(para)
msgid ""
"To configure and use a Dell EqualLogic array with Block Storage, modify your"
" <filename>cinder.conf</filename> as follows."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/dell-equallogic-driver.xml15(para)
msgid ""
"Set the <option>volume_driver</option> option to the Dell EqualLogic volume "
"driver:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/dell-equallogic-driver.xml18(para)
msgid ""
"Set the <option>san_ip</option> option to the IP address to reach the "
"EqualLogic Group through SSH:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/dell-equallogic-driver.xml21(para)
msgid ""
"Set the <option>san_login</option> option to the user name to login to the "
"Group manager:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/dell-equallogic-driver.xml24(para)
msgid ""
"Set the <option>san_password</option> option to the password to login the "
"Group manager with:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/dell-equallogic-driver.xml27(para)
msgid ""
"Optionally set the <option>san_thin_provision</option> option to false to "
"disable creation of thin-provisioned volumes:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/dell-equallogic-driver.xml31(para)
msgid ""
"The following table describes additional options that the driver supports:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml5(title)
msgid "IBM Storwize family and SVC volume driver"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml6(para)
msgid ""
"The volume management driver for Storwize family and SAN Volume Controller "
"(SVC) provides OpenStack Compute instances with access to IBM Storwize "
"family or SVC storage systems."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml11(title)
msgid "Configure the Storwize family and SVC system"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml13(title)
msgid "Network configuration"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml14(para)
msgid ""
"The Storwize family or SVC system must be configured for iSCSI, Fibre "
"Channel, or both."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml16(para)
msgid ""
"If using iSCSI, each Storwize family or SVC node should have at least one "
"iSCSI IP address. The IBM Storwize/SVC driver uses an iSCSI IP address "
"associated with the volume's preferred node (if available) to attach the "
"volume to the instance, otherwise it uses the first available iSCSI IP "
"address of the system. The driver obtains the iSCSI IP address directly from"
" the storage system; you do not need to provide these iSCSI IP addresses "
"directly to the driver."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml27(para)
msgid ""
"If using iSCSI, ensure that the compute nodes have iSCSI network access to "
"the Storwize family or SVC system."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml32(para)
msgid ""
"OpenStack Nova's Grizzly version supports iSCSI multipath. Once this is "
"configured on the Nova host (outside the scope of this documentation), "
"multipath is enabled."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml37(para)
msgid ""
"If using Fibre Channel (FC), each Storwize family or SVC node should have at"
" least one WWPN port configured. If the "
"<literal>storwize_svc_multipath_enabled</literal> flag is set to True in the"
" Cinder configuration file, the driver uses all available WWPNs to attach "
"the volume to the instance (details about the configuration flags appear in "
"the <link linkend=\"ibm-storwize-svc-driver2\"> next section</link>). If the"
" flag is not set, the driver uses the WWPN associated with the volume's "
"preferred node (if available), otherwise it uses the first available WWPN of"
" the system. The driver obtains the WWPNs directly from the storage system; "
"you do not need to provide these WWPNs directly to the driver."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml53(para)
msgid ""
"If using FC, ensure that the compute nodes have FC connectivity to the "
"Storwize family or SVC system."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml59(title)
msgid "iSCSI CHAP authentication"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml67(para)
msgid ""
"CHAP secrets are added to existing hosts as well as newly-created ones. If "
"the CHAP option is enabled, hosts will not be able to access the storage "
"without the generated secrets."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml74(para)
msgid ""
"Not all OpenStack Compute drivers support CHAP authentication. Please check "
"compatibility before using."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml79(para)
msgid ""
"CHAP secrets are passed from OpenStack Block Storage to Compute in clear "
"text. This communication should be secured to ensure that CHAP secrets are "
"not discovered."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml60(para)
msgid ""
"If using iSCSI for data access and the "
"<literal>storwize_svc_iscsi_chap_enabled</literal> is set to "
"<literal>True</literal>, the driver will associate randomly-generated CHAP "
"secrets with all hosts on the Storwize family system. OpenStack compute "
"nodes use these secrets when creating iSCSI connections. "
"<placeholder-1/><placeholder-2/><placeholder-3/>"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml86(title)
msgid "Configure storage pools"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml87(para)
msgid ""
"Each instance of the IBM Storwize/SVC driver allocates all volumes in a "
"single pool. The pool should be created in advance and be provided to the "
"driver using the <literal>storwize_svc_volpool_name</literal> configuration "
"flag. Details about the configuration flags and how to provide the flags to "
"the driver appear in the <link linkend=\"ibm-storwize-svc-driver2\"> next "
"section</link>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml98(title)
msgid "Configure user authentication for the driver"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml100(para)
msgid ""
"The driver requires access to the Storwize family or SVC system management "
"interface. The driver communicates with the management using SSH. The driver"
" should be provided with the Storwize family or SVC management IP using the "
"<literal>san_ip</literal> flag, and the management port should be provided "
"by the <literal>san_ssh_port</literal> flag. By default, the port value is "
"configured to be port 22 (SSH)."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml110(para)
msgid ""
"Make sure the compute node running the nova-volume management driver has SSH"
" network access to the storage system."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml114(para)
msgid ""
"To allow the driver to communicate with the Storwize family or SVC system, "
"you must provide the driver with a user on the storage system. The driver "
"has two authentication methods: password-based authentication and SSH key "
"pair authentication. The user should have an Administrator role. It is "
"suggested to create a new user for the management driver. Please consult "
"with your storage and security administrator regarding the preferred "
"authentication method and how passwords or SSH keys should be stored in a "
"secure manner."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml125(para)
msgid ""
"When creating a new user on the Storwize or SVC system, make sure the user "
"belongs to the Administrator group or to another group that has an "
"Administrator role."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml130(para)
msgid ""
"If using password authentication, assign a password to the user on the "
"Storwize or SVC system. The driver configuration flags for the user and "
"password are <literal>san_login</literal> and "
"<literal>san_password</literal>, respectively."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml136(para)
msgid ""
"If you are using the SSH key pair authentication, create SSH private and "
"public keys using the instructions below or by any other method. Associate "
"the public key with the user by uploading the public key: select the "
"\"choose file\" option in the Storwize family or SVC management GUI under "
"\"SSH public key\". Alternatively, you may associate the SSH public key "
"using the command line interface; details can be found in the Storwize and "
"SVC documentation. The private key should be provided to the driver using "
"the <literal>san_private_key</literal> configuration flag."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml150(title)
msgid "Create a SSH key pair with OpenSSH"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml151(para)
msgid "You can create an SSH key pair using OpenSSH, by running:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml154(para)
msgid ""
"The command prompts for a file to save the key pair. For example, if you "
"select 'key' as the filename, two files are created: <literal>key</literal> "
"and <literal>key.pub</literal>. The <literal>key</literal> file holds the "
"private SSH key and <literal>key.pub</literal> holds the public SSH key."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml161(para)
msgid "The command also prompts for a pass phrase, which should be empty."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml163(para)
msgid ""
"The private key file should be provided to the driver using the "
"<literal>san_private_key</literal> configuration flag. The public key should"
" be uploaded to the Storwize family or SVC system using the storage "
"management GUI or command line interface."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml169(para)
msgid "Ensure that Cinder has read permissions on the private key file."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml175(title)
msgid "Configure the Storwize family and SVC driver"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml177(title)
msgid "Enable the Storwize family and SVC driver"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml178(para)
msgid ""
"Set the volume driver to the Storwize family and SVC driver by setting the "
"<literal>volume_driver</literal> option in <filename>cinder.conf</filename> "
"as follows:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml185(title)
msgid "Storwize family and SVC driver options in cinder.conf"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml187(para)
msgid ""
"The following options specify default values for all volumes. Some can be "
"over-ridden using volume types, which are described below."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml191(caption)
msgid "List of configuration flags for Storwize storage and SVC driver"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml207(literal)
msgid "san_ip"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml211(para)
msgid "Management IP or host name"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml215(literal)
msgid "san_ssh_port"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml218(para)
msgid "22"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml219(para)
msgid "Management port"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml222(literal)
msgid "san_login"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml226(para)
msgid "Management login username"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml230(literal)
msgid "san_password"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml234(para)
msgid ""
"The authentication requires either a password "
"(<literal>san_password</literal>) or SSH private key "
"(<literal>san_private_key</literal>). One must be specified. If both are "
"specified, the driver uses only the SSH private key."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml232(para)
msgid "Required <placeholder-1/>"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml244(para)
msgid "Management login password"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml248(literal)
msgid "san_private_key"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml250(para)
msgid "Required <footnoteref linkend=\"storwize-svc-fn1\"/>"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml254(para)
msgid "Management login SSH private key"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml260(literal)
msgid "storwize_svc_volpool_name"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml264(para)
msgid "Default pool name for volumes"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml269(literal)
msgid "storwize_svc_vol_rsize"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml272(para)
msgid "2"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml276(para)
msgid ""
"The driver creates thin-provisioned volumes by default. The "
"<literal>storwize_svc_vol_rsize</literal> flag defines the initial physical "
"allocation percentage for thin-provisioned volumes, or if set to "
"<literal>-1</literal>, the driver creates full allocated volumes. More "
"details about the available options are available in the Storwize family and"
" SVC documentation."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml273(para)
msgid "Initial physical allocation (percentage) <placeholder-1/>"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml293(literal)
msgid "storwize_svc_vol_warning"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml296(para)
msgid "0 (disabled)"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml297(para)
msgid ""
"Space allocation warning threshold (percentage) <footnoteref linkend"
"=\"storwize-svc-fn3\"/>"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml304(literal)
msgid "storwize_svc_vol_autoexpand"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml307(para)
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml363(para)
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml423(para)
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml449(para)
msgid "True"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml310(para)
msgid ""
"Defines whether thin-provisioned volumes can be auto expanded by the storage"
" system, a value of <literal>True</literal> means that auto expansion is "
"enabled, a value of <literal>False</literal> disables auto expansion. "
"Details about this option can be found in the <literal>autoexpand</literal>"
" flag of the Storwize family and SVC command line interface "
"<literal>mkvdisk</literal> command."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml308(para)
msgid "Enable or disable volume auto expand <placeholder-1/>"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml328(literal)
msgid "storwize_svc_vol_grainsize"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml331(para)
msgid "256"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml332(para)
msgid "Volume grain size <footnoteref linkend=\"storwize-svc-fn3\"/> in KB"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml338(literal)
msgid "storwize_svc_vol_compression"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml342(para)
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml433(para)
msgid "False"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml346(para)
msgid ""
"Defines whether Real-time Compression is used for the volumes created with "
"OpenStack. Details on Real-time Compression can be found in the Storwize "
"family and SVC documentation. The Storwize or SVC system must have "
"compression enabled for this feature to work."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml343(para)
msgid "Enable or disable Real-time Compression <placeholder-1/>"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml360(literal)
msgid "storwize_svc_vol_easytier"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml366(para)
msgid ""
"Defines whether Easy Tier is used for the volumes created with OpenStack. "
"Details on EasyTier can be found in the Storwize family and SVC "
"documentation. The Storwize or SVC system must have Easy Tier enabled for "
"this feature to work."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml364(para)
msgid "Enable or disable Easy Tier <placeholder-1/>"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml378(literal)
msgid "storwize_svc_vol_iogrp"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml382(para)
msgid "The I/O group in which to allocate vdisks"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml387(literal)
msgid "storwize_svc_flashcopy_timeout"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml391(para)
msgid "120"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml394(para)
msgid ""
"The driver wait timeout threshold when creating an OpenStack snapshot. This "
"is actually the maximum amount of time that the driver waits for the "
"Storwize family or SVC system to prepare a new FlashCopy mapping. The driver"
" accepts a maximum wait time of 600 seconds (10 minutes)."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml392(para)
msgid "FlashCopy timeout threshold <placeholder-1/> (seconds)"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml408(literal)
msgid "storwize_svc_connection_protocol"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml413(para)
msgid "Connection protocol to use (currently supports 'iSCSI' or 'FC')"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml419(literal)
msgid "storwize_svc_iscsi_chap_enabled"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml424(para)
msgid "Configure CHAP authentication for iSCSI connections"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml429(literal)
msgid "storwize_svc_multipath_enabled"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml436(para)
msgid ""
"Multipath for iSCSI connections requires no storage-side configuration and "
"is enabled if the compute host has multipath configured."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml434(para)
msgid "Enable multipath for FC connections <placeholder-1/>"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml445(literal)
msgid "storwize_svc_multihost_enabled"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml453(para)
msgid ""
"This option allows the driver to map a vdisk to more than one host at a "
"time. This scenario occurs during migration of a virtual machine with an "
"attached volume; the volume is simultaneously mapped to both the source and "
"destination compute hosts. If your deployment does not require attaching "
"vdisks to multiple hosts, setting this flag to False will provide added "
"safety."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml450(para)
msgid "Enable mapping vdisks to multiple hosts <placeholder-1/>"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml474(title)
msgid "Placement with volume types"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml475(para)
msgid ""
"The IBM Storwize/SVC driver exposes capabilities that can be added to the "
"<literal>extra specs</literal> of volume types, and used by the filter "
"scheduler to determine placement of new volumes. Make sure to prefix these "
"keys with <literal>capabilities:</literal> to indicate that the scheduler "
"should use them. The following <literal>extra specs</literal> are supported:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml486(para)
msgid ""
"capabilities:volume_back-end_name - Specify a specific back-end where the "
"volume should be created. The back-end name is a concatenation of the name "
"of the IBM Storwize/SVC storage system as shown in "
"<literal>lssystem</literal>, an underscore, and the name of the pool (mdisk "
"group). For example: <placeholder-1/>"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml497(para)
msgid ""
"capabilities:compression_support - Specify a back-end according to "
"compression support. A value of <literal>True</literal> should be used to "
"request a back-end that supports compression, and a value of "
"<literal>False</literal> will request a back-end that does not support "
"compression. If you do not have constraints on compression support, do not "
"set this key. Note that specifying <literal>True</literal> does not enable "
"compression; it only requests that the volume be placed on a back-end that "
"supports compression. Example syntax: <placeholder-1/>"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml513(para)
msgid ""
"capabilities:easytier_support - Similar semantics as the "
"<literal>compression_support</literal> key, but for specifying according to "
"support of the Easy Tier feature. Example syntax: <placeholder-1/>"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml521(para)
msgid ""
"capabilities:storage_protocol - Specifies the connection protocol used to "
"attach volumes of this type to instances. Legal values are "
"<literal>iSCSI</literal> and <literal>FC</literal>. This <literal>extra "
"specs</literal> value is used for both placement and setting the protocol "
"used for this volume. In the example syntax, note &lt;in&gt; is used as "
"opposed to &lt;is&gt; used in the previous examples. <placeholder-1/>"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml536(title)
msgid "Configure per-volume creation options"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml537(para)
msgid ""
"Volume types can also be used to pass options to the IBM Storwize/SVC "
"driver, which over-ride the default values set in the configuration file. "
"Contrary to the previous examples where the \"capabilities\" scope was used "
"to pass parameters to the Cinder scheduler, options can be passed to the IBM"
" Storwize/SVC driver with the \"drivers\" scope."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml547(para)
msgid "rsize"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml550(para)
msgid "warning"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml553(para)
msgid "autoexpand"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml556(para)
msgid "grainsize"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml559(para)
msgid "compression"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml562(para)
msgid "easytier"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml565(para)
msgid "multipath"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml568(para)
msgid "iogrp"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml544(para)
msgid ""
"The following <literal>extra specs</literal> keys are supported by the IBM "
"Storwize/SVC driver: <placeholder-1/> These keys have the same semantics as "
"their counterparts in the configuration file. They are set similarly; for "
"example, <literal>rsize=2</literal> or <literal>compression=False</literal>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml576(title)
msgid "Example: Volume types"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml577(para)
msgid ""
"In the following example, we create a volume type to specify a controller "
"that supports iSCSI and compression, to use iSCSI when attaching the volume,"
" and to enable compression:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml583(para)
msgid "We can then create a 50GB volume using this type:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml587(para)
msgid "Volume types can be used, for example, to provide users with different"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml591(para)
msgid ""
"performance levels (such as, allocating entirely on an HDD tier, using Easy "
"Tier for an HDD-SDD mix, or allocating entirely on an SSD tier)"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml597(para)
msgid ""
"resiliency levels (such as, allocating volumes in pools with different RAID "
"levels)"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml602(para)
msgid "features (such as, enabling/disabling Real-time Compression)"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml609(title)
msgid "Operational notes for the Storwize family and SVC driver"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml612(title)
msgid "Migrate volumes"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml613(para)
msgid ""
"In the context of OpenStack Block Storage's volume migration feature, the "
"IBM Storwize/SVC driver enables the storage's virtualization technology. "
"When migrating a volume from one pool to another, the volume will appear in "
"the destination pool almost immediately, while the storage moves the data in"
" the background."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml621(para)
msgid ""
"To enable this feature, both pools involved in a given volume migration must"
" have the same values for <literal>extent_size</literal>. If the pools have "
"different values for <literal>extent_size</literal>, the data will still be "
"moved directly between the pools (not host-side copy), but the operation "
"will be synchronous."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml632(title)
msgid "Extend volumes"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml633(para)
msgid ""
"The IBM Storwize/SVC driver allows for extending a volume's size, but only "
"for volumes without snapshots."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml638(title)
#: ./doc/config-reference/block-storage/drivers/ibm-gpfs-volume-driver.xml173(title)
msgid "Snapshots and clones"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-storwize-svc-driver.xml639(para)
msgid ""
"Snapshots are implemented using FlashCopy with no background copy (space-"
"efficient). Volume clones (volumes created from existing volumes) are "
"implemented with FlashCopy, but with background copy enabled. This means "
"that volume clones are independent, full copies. While this background copy "
"is taking place, attempting to delete or extend the source volume will "
"result in that operation waiting for the copy to complete."
msgstr ""
#. When image changes, this message will be marked fuzzy or untranslated for
#. you.
#. It doesn't matter what you translate it to: it's not used at all.
#: ./doc/config-reference/block-storage/drivers/xenapi-nfs.xml62(None)
msgid ""
"@@image: '../../../common/figures/xenapinfs/local_config.png'; "
"md5=16a3864b0ec636518335246360438fd1"
msgstr ""
#. When image changes, this message will be marked fuzzy or untranslated for
#. you.
#. It doesn't matter what you translate it to: it's not used at all.
#: ./doc/config-reference/block-storage/drivers/xenapi-nfs.xml78(None)
msgid ""
"@@image: '../../../common/figures/xenapinfs/remote_config.png'; "
"md5=eab22f6aa5413c2043936872ea44e459"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xenapi-nfs.xml4(title)
msgid "XenAPINFS"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xenapi-nfs.xml5(para)
msgid ""
"XenAPINFS is a Block Storage (Cinder) driver that uses an NFS share through "
"the XenAPI Storage Manager to store virtual disk images and expose those "
"virtual disks as volumes."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xenapi-nfs.xml8(para)
msgid ""
"This driver does not access the NFS share directly. It accesses the share "
"only through XenAPI Storage Manager. Consider this driver as a reference "
"implementation for use of the XenAPI Storage Manager in OpenStack (present "
"in XenServer and XCP)."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xenapi-nfs.xml17(para)
msgid ""
"A XenServer/XCP installation that acts as Storage Controller. This "
"hypervisor is known as the storage controller."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xenapi-nfs.xml22(para)
msgid "Use XenServer/XCP as your hypervisor for Compute nodes."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xenapi-nfs.xml26(para)
msgid ""
"An NFS share that is configured for XenServer/XCP. For specific requirements"
" and export options, see the administration guide for your specific "
"XenServer version. The NFS share must be accessible by all XenServers "
"components within your cloud."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xenapi-nfs.xml34(para)
msgid ""
"To create volumes from XenServer type images (vhd tgz files), XenServer Nova"
" plug-ins are also required on the storage controller."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xenapi-nfs.xml40(para)
msgid ""
"You can use a XenServer as a storage controller and Compute node at the same"
" time. This minimal configuration consists of a XenServer/XCP box and an NFS"
" share."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xenapi-nfs.xml47(title)
msgid "Configuration patterns"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xenapi-nfs.xml50(para)
msgid ""
"Local configuration (Recommended): The driver runs in a virtual machine on "
"top of the storage controller. With this configuration, you can create "
"volumes from <literal>qemu-img</literal>-supported formats."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xenapi-nfs.xml57(title)
msgid "Local configuration"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xenapi-nfs.xml68(para)
msgid ""
"Remote configuration: The driver is not a guest VM of the storage "
"controller. With this configuration, you can only use XenServer vhd-type "
"images to create volumes."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xenapi-nfs.xml73(title)
msgid "Remote configuration"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xenapi-nfs.xml87(para)
msgid "Assuming the following setup:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xenapi-nfs.xml90(para)
msgid "XenServer box at <literal>10.2.2.1</literal>"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xenapi-nfs.xml94(para)
msgid "XenServer password is <literal>r00tme</literal>"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xenapi-nfs.xml98(para)
msgid "NFS server is <literal>nfs.example.com</literal>"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xenapi-nfs.xml102(para)
msgid "NFS export is at <literal>/volumes</literal>"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xenapi-nfs.xml106(para)
msgid ""
"To use XenAPINFS as your cinder driver, set these configuration options in "
"the <filename>cinder.conf</filename> file:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/xenapi-nfs.xml115(para)
msgid ""
"The following table shows the configuration options that the XenAPINFS "
"driver supports:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-volume-driver.xml11(title)
msgid "HDS iSCSI volume driver"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-volume-driver.xml12(para)
msgid ""
"This Cinder volume driver provides iSCSI support for <link "
"href=\"http://www.hds.com/products/storage-systems/hitachi-unified-"
"storage-100-family.html\">HUS (Hitachi Unified Storage) </link> arrays such "
"as, HUS-110, HUS-130, and HUS-150."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-volume-driver.xml18(para)
msgid ""
"Use the HDS <placeholder-1/> command to communicate with an HUS array. You "
"can download this utility package from the HDS support site (<link "
"href=\"https://HDSSupport.hds.com\">https://HDSSupport.hds.com</link>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-volume-driver.xml23(para)
msgid "Platform: Ubuntu 12.04LTS or newer."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-volume-driver.xml26(title)
msgid "Supported Cinder operations"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-volume-driver.xml27(para)
msgid "These operations are supported:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-volume-driver.xml42(para)
msgid "Clone volume"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-volume-driver.xml45(para)
msgid "Extend volume"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-volume-driver.xml63(para)
msgid "get_volume_stats"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-volume-driver.xml66(para)
msgid ""
"Thin provisioning, also known as Hitachi Dynamic Pool (HDP), is supported "
"for volume or snapshot creation. Cinder volumes and snapshots do not have to"
" reside in the same pool."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-volume-driver.xml78(para)
msgid "Do not confuse differentiated services with the Cinder volume service."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-volume-driver.xml73(para)
msgid ""
"The HDS driver supports the concept of differentiated services, where volume"
" type can be associated with the fine-tuned performance characteristics of "
"HDP—the the dynamic pool where volumes shall be created<placeholder-1/>. For"
" instance, an HDP can consist of fast SSDs to provide speed. HDP can provide"
" a certain reliability based on things like its RAID level characteristics. "
"HDS driver maps volume type to the <option>volume_type</option> option in "
"its configuration file."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-volume-driver.xml86(para)
msgid ""
"Configuration is read from an XML-format file. Examples are shown for single"
" and multi back-end cases."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-volume-driver.xml91(para)
msgid ""
"Configuration is read from an XML file. This example shows the configuration"
" for single back-end and for multi-back-end cases."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-volume-driver.xml100(para)
msgid ""
"It is okay to manage multiple HUS arrays by using multiple Cinder instances "
"(or servers)."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-volume-driver.xml96(para)
msgid ""
"It is not recommended to manage a HUS array simultaneously from multiple "
"Cinder instances or servers. <placeholder-1/>"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-volume-driver.xml109(title)
msgid "Single back-end"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-volume-driver.xml110(para)
msgid ""
"In a single back-end deployment, only one Cinder instance runs on the Cinder"
" server and controls one HUS array: this setup requires these configuration "
"files:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-volume-driver.xml123(para)
msgid "The configuration file location is not fixed."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-volume-driver.xml116(para)
msgid ""
"Set the <option>hds_cinder_config_file</option> option in the "
"<filename>/etc/cinder/cinder.conf</filename> file to use the HDS volume "
"driver. This option points to a configuration file.<placeholder-1/>"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-volume-driver.xml130(para)
msgid ""
"Configure <option>hds_cinder_config_file</option> at the location specified "
"previously. For example, "
"<filename>/opt/hds/hus/cinder_hds_conf.xml</filename>:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-volume-driver.xml160(title)
msgid "Multi back-end"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-volume-driver.xml161(para)
msgid ""
"In a multi back-end deployment, more than one Cinder instance runs on the "
"same server. In this example, two HUS arrays are used, possibly providing "
"different storage performance:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-volume-driver.xml167(para)
msgid ""
"Configure <filename>/etc/cinder/cinder.conf</filename>: the "
"<literal>hus1</literal><option>hus2</option> configuration blocks are "
"created. Set the <option>hds_cinder_config_file</option> option to point to "
"an unique configuration file for each block. Set the "
"<option>volume_driver</option> option for each back-end to "
"<literal>cinder.volume.drivers.hds.hds.HUSDriver</literal>"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-volume-driver.xml189(para)
msgid "Configure <filename>/opt/hds/hus/cinder_hus1_conf.xml</filename>:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-volume-driver.xml214(para)
msgid ""
"Configure the <filename>/opt/hds/hus/cinder_hus2_conf.xml</filename> file:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-volume-driver.xml242(title)
msgid "Type extra specs: <option>volume_backend</option> and volume type"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-volume-driver.xml244(para)
msgid ""
"If you use volume types, you must configure them in the configuration file "
"and set the <option>volume_backend_name</option> option to the appropriate "
"back-end. In the previous multi back-end example, the "
"<literal>platinum</literal> volume type is served by hus-2, and the "
"<literal>regular</literal> volume type is served by hus-1."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-volume-driver.xml255(title)
msgid "Non differentiated deployment of HUS arrays"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-volume-driver.xml256(para)
msgid ""
"You can deploy multiple Cinder instances that each control a separate HUS "
"array. Each instance has no volume type associated with it. The Cinder "
"filtering algorithm selects the HUS array with the largest available free "
"space. In each configuration file, you must define the "
"<literal>default</literal><option>volume_type</option> in the service "
"labels."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-volume-driver.xml267(title)
msgid "HDS iSCSI volume driver configuration options"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-volume-driver.xml274(para)
msgid "There is no relative precedence or weight among these four labels."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-volume-driver.xml268(para)
msgid ""
"These details apply to the XML format configuration file that is read by HDS"
" volume driver. These differentiated service labels are predefined: "
"<literal>svc_0</literal>, <literal>svc_1</literal>, "
"<literal>svc_2</literal>, and <literal>svc_3</literal><placeholder-1/>. Each"
" respective service label associates with these parameters and tags:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-volume-driver.xml280(para)
msgid ""
"<option>volume-types</option>: A create_volume call with a certain volume "
"type shall be matched up with this tag. <literal>default</literal> is "
"special in that any service associated with this type is used to create "
"volume when no other labels match. Other labels are case sensitive and "
"should exactly match. If no configured volume_types match the incoming "
"requested type, an error occurs in volume creation."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-volume-driver.xml291(para)
msgid "<option>HDP</option>, the pool ID associated with the service."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-volume-driver.xml295(para)
msgid "An iSCSI port dedicated to the service."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-volume-driver.xml304(para)
msgid ""
"get_volume_stats() always provides the available capacity based on the "
"combined sum of all the HDPs that are used in these services labels."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-volume-driver.xml298(para)
msgid ""
"Typically a Cinder volume instance has only one such service label. For "
"example, any <literal>svc_0</literal>, <literal>svc_1</literal>, "
"<literal>svc_2</literal>, or <literal>svc_3</literal> can be associated with"
" it. But any mix of these service labels can be used in the same instance "
"<placeholder-1/>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-volume-driver.xml316(td)
msgid "Option"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-volume-driver.xml324(option)
msgid "mgmt_ip0"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-volume-driver.xml328(para)
msgid "Management Port 0 IP address"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-volume-driver.xml332(option)
msgid "mgmt_ip1"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-volume-driver.xml336(para)
msgid "Management Port 1 IP address"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-volume-driver.xml345(para)
msgid "Username is required only if secure mode is used"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-volume-driver.xml355(para)
msgid "Password is required only if secure mode is used"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-volume-driver.xml361(option)
msgid "svc_0, svc_1, svc_2, svc_3"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-volume-driver.xml365(para)
msgid "(at least one label has to be defined)"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-volume-driver.xml368(para)
msgid ""
"Service labels: these four predefined names help four different sets of "
"configuration options -- each can specify iSCSI port address, HDP and an "
"unique volume type."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-volume-driver.xml377(option)
msgid "snapshot"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-volume-driver.xml382(para)
msgid ""
"A service label which helps specify configuration for snapshots, such as, "
"HDP."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-volume-driver.xml389(option)
msgid "volume_type"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-volume-driver.xml394(para)
msgid ""
"<option>volume_type</option> tag is used to match volume type. "
"<literal>Default</literal> meets any type of <option>volume_type</option>, "
"or if it is not specified. Any other volume_type is selected if exactly "
"matched during <literal>create_volume</literal>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-volume-driver.xml406(option)
msgid "iscsi_ip"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-volume-driver.xml411(para)
msgid "iSCSI port IP address where volume attaches for this volume type."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-volume-driver.xml417(option)
msgid "hdp"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-volume-driver.xml422(para)
msgid "HDP, the pool number where volume, or snapshot should be created."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-volume-driver.xml428(option)
msgid "lun_start"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-volume-driver.xml433(para)
msgid "LUN allocation starts at this number."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-volume-driver.xml439(option)
msgid "lun_end"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-volume-driver.xml442(para)
msgid "4096"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hds-volume-driver.xml444(para)
msgid "LUN allocation is up to, but not including, this number."
msgstr ""
#. When image changes, this message will be marked fuzzy or untranslated for
#. you.
#. It doesn't matter what you translate it to: it's not used at all.
#: ./doc/config-reference/block-storage/drivers/ceph-rbd-volume-driver.xml26(None)
msgid ""
"@@image: '../../../common/figures/ceph/ceph-architecture.png'; "
"md5=f7e854c9dbfb64534c47c3583e774c81"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ceph-rbd-volume-driver.xml4(title)
msgid "Ceph RADOS Block Device (RBD)"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ceph-rbd-volume-driver.xml5(para)
msgid ""
"By Sebastien Han from <link href=\"http://www.sebastien-"
"han.fr/blog/2012/06/10/introducing-ceph-to-openstack/\"/>"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ceph-rbd-volume-driver.xml8(para)
msgid ""
"If you use KVM or QEMU as your hypervisor, you can configure the Compute "
"service to use <link href=\"http://ceph.com/ceph-storage/block-storage/\"> "
"Ceph RADOS block devices (RBD)</link> for volumes."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ceph-rbd-volume-driver.xml21(title)
msgid "Ceph architecture"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ceph-rbd-volume-driver.xml12(para)
msgid ""
"Ceph is a massively scalable, open source, distributed storage system. It is"
" comprised of an object store, block store, and a POSIX-compliant "
"distributed file system. The platform can auto-scale to the exabyte level "
"and beyond. It runs on commodity hardware, is self-healing and self-"
"managing, and has no single point of failure. Ceph is in the Linux kernel "
"and is integrated with the OpenStack cloud operating system. Due to its open"
" source nature, you can install and use this portable storage platform in "
"public or private clouds. <placeholder-1/>"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ceph-rbd-volume-driver.xml32(title)
msgid "RADOS?"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ceph-rbd-volume-driver.xml33(para)
msgid "You can easily get confused by the naming: Ceph? RADOS?"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ceph-rbd-volume-driver.xml35(para)
msgid ""
"<emphasis>RADOS: Reliable Autonomic Distributed Object Store</emphasis> is "
"an object store. RADOS distributes objects across the storage cluster and "
"replicates objects for fault tolerance. RADOS contains the following major "
"components:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ceph-rbd-volume-driver.xml42(para)
msgid ""
"<emphasis>Object Storage Device (ODS)</emphasis>. The storage daemon - RADOS"
" service, the location of your data. You must run this daemon on each server"
" in your cluster. For each OSD, you can have an associated hard drive disks."
" For performance purposes, pool your hard drive disk with raid arrays, "
"logical volume management (LVM) or B-tree file system (Btrfs) pooling. By "
"default, the following pools are created: data, metadata, and RBD."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ceph-rbd-volume-driver.xml54(para)
msgid ""
"<emphasis>Meta-Data Server (MDS)</emphasis>. Stores metadata. MDSs build a "
"POSIX file system on top of objects for Ceph clients. However, if you do not"
" use the Ceph file system, you do not need a metadata server."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ceph-rbd-volume-driver.xml61(para)
msgid ""
"<emphasis>Monitor (MON)</emphasis>. This lightweight daemon handles all "
"communications with external applications and clients. It also provides a "
"consensus for distributed decision making in a Ceph/RADOS cluster. For "
"instance, when you mount a Ceph shared on a client, you point to the address"
" of a MON server. It checks the state and the consistency of the data. In an"
" ideal setup, you must run at least three <code>ceph-mon</code> daemons on "
"separate servers."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ceph-rbd-volume-driver.xml74(para)
msgid ""
"Ceph developers recommend that you use Btrfs as a file system for storage. "
"XFS might be a better alternative for production environments. Neither Ceph "
"nor Btrfs is ready for production and it could be risky to use them in "
"combination. XFS is an excellent alternative to Btrfs. The ext4 file system "
"is also compatible but does not exploit the power of Ceph."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ceph-rbd-volume-driver.xml82(para)
msgid ""
"Currently, configure Ceph to use the XFS file system. Use Btrfs when it is "
"stable enough for production."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ceph-rbd-volume-driver.xml86(para)
msgid ""
"See <link "
"href=\"http://ceph.com/docs/master/rec/filesystem/\">ceph.com/docs/master/rec/file"
" system/</link> for more information about usable file systems."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ceph-rbd-volume-driver.xml92(title)
msgid "Ways to store, use, and expose data"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ceph-rbd-volume-driver.xml93(para)
msgid ""
"To store and access your data, you can use the following storage systems:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ceph-rbd-volume-driver.xml97(para)
msgid ""
"<emphasis>RADOS</emphasis>. Use as an object, default storage mechanism."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ceph-rbd-volume-driver.xml101(para)
msgid ""
"<emphasis>RBD</emphasis>. Use as a block device. The Linux kernel RBD (rados"
" block device) driver allows striping a Linux block device over multiple "
"distributed object store data objects. It is compatible with the kvm RBD "
"image."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ceph-rbd-volume-driver.xml108(para)
msgid ""
"<emphasis>CephFS</emphasis>. Use as a file, POSIX-compliant file system."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ceph-rbd-volume-driver.xml112(para)
msgid ""
"Ceph exposes its distributed object store (RADOS). You can access it through"
" the following interfaces:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ceph-rbd-volume-driver.xml116(para)
msgid ""
"<emphasis>RADOS Gateway</emphasis>. Swift and Amazon-S3 compatible RESTful "
"interface. See <link "
"href=\"http://ceph.com/wiki/RADOS_Gateway\">RADOS_Gateway</link> for more "
"information."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ceph-rbd-volume-driver.xml123(para)
msgid "<emphasis>librados</emphasis>, and the related C/C++ bindings."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ceph-rbd-volume-driver.xml127(para)
msgid ""
"<emphasis>rbd and QEMU-RBD</emphasis>. Linux kernel and QEMU block devices "
"that stripe data across multiple objects."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ceph-rbd-volume-driver.xml132(para)
msgid ""
"For detailed installation instructions and benchmarking information, see "
"<link href=\"http://www.sebastien-han.fr/blog/2012/06/10/introducing-ceph-"
"to-openstack/\">http://www.sebastien-han.fr/blog/2012/06/10/introducing-"
"ceph-to-openstack/</link>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ceph-rbd-volume-driver.xml138(title)
msgid "Driver options"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ceph-rbd-volume-driver.xml139(para)
msgid ""
"The following table contains the configuration options supported by the Ceph"
" RADOS Block Device driver."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml4(title)
msgid "HP / LeftHand SAN"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml5(para)
msgid ""
"HP/LeftHand SANs are optimized for virtualized environments with VMware ESX "
"&amp; Microsoft Hyper-V, though the OpenStack integration provides "
"additional support to various other virtualized environments, such as Xen, "
"KVM, and OpenVZ, by exposing the volumes through ISCSI to connect to the "
"instances."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml11(para)
msgid ""
"The HpSanISCSIDriver enables you to use a HP/Lefthand SAN that supports the "
"Cliq interface. Every supported volume operation translates into a cliq call"
" in the back-end."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml14(para)
msgid ""
"To use Cinder with HP/Lefthand SAN, you must set the following parameters in"
" the <filename>cinder.conf</filename> file:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml19(para)
msgid ""
"Set "
"<parameter>volume_driver=cinder.volume.drivers.san.HpSanISCSIDriver</parameter>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml23(para)
msgid ""
"Set <parameter>san_ip</parameter> flag to the hostname or VIP of your "
"Virtual Storage Appliance (VSA)."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml28(para)
msgid ""
"Set <parameter>san_login</parameter> and <parameter>san_password</parameter>"
" to the user name and password of the ssh user with all necessary privileges"
" on the appliance."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml34(para)
msgid ""
"Set <code>san_ssh_port=16022</code>. The default is 22. However, the default"
" for the VSA is usually 16022."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml39(para)
msgid ""
"Set <code>san_clustername</code> to the name of the cluster where the "
"associated volumes are created."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml44(para)
msgid "The following optional parameters have the following default values:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml48(para)
msgid ""
"<code>san_thin_provision=True</code>. To disable thin provisioning, set to "
"<literal>False</literal>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml53(para)
msgid ""
"<code>san_is_local=False</code>. Typically, this parameter is set to "
"<literal>False</literal> for this driver. To configure the cliq commands to "
"run locally instead of over ssh, set this parameter to "
"<literal>True</literal>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml60(para)
msgid ""
"In addition to configuring the <systemitem class=\"service\">cinder-"
"volume</systemitem> service, you must configure the VSA to function in an "
"OpenStack environment."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml64(title)
msgid "To configure the VSA"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml66(para)
msgid ""
"Configure Chap on each of the <systemitem class=\"service\">nova-"
"compute</systemitem> nodes."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml71(para)
msgid ""
"Add Server associations on the VSA with the associated Chap and initiator "
"information. The name should correspond to the <emphasis "
"role=\"italic\">'hostname'</emphasis> of the <systemitem class=\"service"
"\">nova-compute</systemitem> node. For Xen, this is the hypervisor host "
"name. To do this, use either Cliq or the Centralized Management Console."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/windows-volume-driver.xml6(title)
msgid "Windows"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/windows-volume-driver.xml7(para)
msgid ""
"There is a volume back-end for Windows. Set the following in your "
"<filename>cinder.conf</filename>, and use the options below to configure it."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/lvm-volume-driver.xml6(title)
msgid "LVM"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/lvm-volume-driver.xml7(para)
msgid "The default volume back-end uses local volumes managed by LVM."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/lvm-volume-driver.xml8(para)
msgid ""
"This driver supports different transport protocols to attach volumes, "
"currently ISCSI and ISER."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/lvm-volume-driver.xml10(para)
msgid ""
"Set the following in your <filename>cinder.conf</filename>, and use the "
"following options to configure for ISCSI transport:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/lvm-volume-driver.xml16(para)
msgid "and for the ISER transport:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nfs-volume-driver.xml4(title)
msgid "NFS driver"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nfs-volume-driver.xml5(para)
msgid ""
"The Network File System (NFS) is a distributed file system protocol "
"originally developed by Sun Microsystems in 1984. An NFS server "
"<emphasis>exports</emphasis> one or more of its file systems, known as "
"<emphasis>shares</emphasis>. An NFS client can mount these exported shares "
"on its own file system. You can perform file actions on this mounted remote "
"file system as if the file system were local."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nfs-volume-driver.xml13(title)
msgid "How the NFS driver works"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nfs-volume-driver.xml14(para)
msgid ""
"The NFS driver, and other drivers based off of it, work quite differently "
"than a traditional block storage driver."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nfs-volume-driver.xml17(para)
msgid ""
"The NFS driver does not actually allow an instance to access a storage "
"device at the block level. Instead, files are created on an NFS share and "
"mapped to instances, which emulates a block device. This works in a similar "
"way to QEMU, which stores instances in the "
"<filename>/var/lib/nova/instances</filename> directory."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nfs-volume-driver.xml26(title)
msgid "Enable the NFS driver and related options"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nfs-volume-driver.xml27(para)
msgid ""
"To use Cinder with the NFS driver, first set the "
"<literal>volume_driver</literal> in <filename>cinder.conf</filename>:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nfs-volume-driver.xml31(para)
msgid "The following table contains the options supported by the NFS driver."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nfs-volume-driver.xml37(title)
msgid "How to use the NFS driver"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nfs-volume-driver.xml40(para)
msgid ""
"Access to one or more NFS servers. Creating an NFS server is outside the "
"scope of this document. This example assumes access to the following NFS "
"servers and mount points:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nfs-volume-driver.xml46(literal)
msgid "192.168.1.200:/storage"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nfs-volume-driver.xml49(literal)
msgid "192.168.1.201:/storage"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nfs-volume-driver.xml52(literal)
msgid "192.168.1.202:/storage"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nfs-volume-driver.xml55(para)
msgid ""
"This example demonstrates the use of with this driver with multiple NFS "
"servers. Multiple servers are not required. One is usually enough."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nfs-volume-driver.xml60(para)
msgid ""
"Add your list of NFS servers to the file you specified with the "
"<literal>nfs_shares_config</literal> option. For example, if the value of "
"this option was set to <literal>/etc/cinder/shares.txt</literal>, then:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nfs-volume-driver.xml70(para)
msgid ""
"Comments are allowed in this file. They begin with a <literal>#</literal>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nfs-volume-driver.xml74(para)
msgid ""
"Configure the <literal>nfs_mount_point_base</literal> option. This is a "
"directory where <systemitem class=\"service\">cinder-volume</systemitem> "
"mounts all NFS shares stored in <literal>shares.txt</literal>. For this "
"example, <literal>/var/lib/cinder/nfs</literal> is used. You can, of course,"
" use the default value of <literal>$state_path/mnt</literal>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nfs-volume-driver.xml85(para)
msgid ""
"Start the <systemitem class=\"service\">cinder-volume</systemitem> service. "
"<literal>/var/lib/cinder/nfs</literal> should now contain a directory for "
"each NFS share specified in <literal>shares.txt</literal>. The name of each "
"directory is a hashed name:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nfs-volume-driver.xml97(para)
msgid "You can now create volumes as you normally would:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nfs-volume-driver.xml102(para)
msgid ""
"This volume can also be attached and deleted just like other volumes. "
"However, snapshotting is <emphasis>not</emphasis> supported."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nfs-volume-driver.xml109(title)
msgid "NFS driver notes"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nfs-volume-driver.xml112(para)
msgid ""
"<systemitem class=\"service\">cinder-volume</systemitem> manages the "
"mounting of the NFS shares as well as volume creation on the shares. Keep "
"this in mind when planning your OpenStack architecture. If you have one "
"master NFS server, it might make sense to only have one <systemitem "
"class=\"service\">cinder-volume</systemitem> service to handle all requests "
"to that NFS server. However, if that single server is unable to handle all "
"requests, more than one <systemitem class=\"service\">cinder-"
"volume</systemitem> service is needed as well as potentially more than one "
"NFS server."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nfs-volume-driver.xml128(para)
msgid ""
"Because data is stored in a file and not actually on a block storage device,"
" you might not see the same IO performance as you would with a traditional "
"block storage driver. Please test accordingly."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nfs-volume-driver.xml135(para)
msgid ""
"Despite possible IO performance loss, having volume data stored in a file "
"might be beneficial. For example, backing up volumes can be as easy as "
"copying the volume files."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/nfs-volume-driver.xml140(para)
msgid "Regular IO flushing and syncing still stands."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-gpfs-volume-driver.xml4(title)
msgid "IBM GPFS volume driver"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-gpfs-volume-driver.xml5(para)
msgid ""
"IBM General Parallel File System (GPFS) is a cluster file system that "
"provides concurrent access to file systems from multiple nodes. The storage "
"provided by these nodes can be direct attached, network attached, SAN "
"attached, or a combination of these methods. GPFS provides many features "
"beyond common data access, including data replication, policy based storage "
"management, and space efficient file snapshot and clone operations."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-gpfs-volume-driver.xml14(title)
msgid "How the GPFS driver works"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-gpfs-volume-driver.xml15(para)
msgid ""
"The GPFS driver enables the use of GPFS in a fashion similar to that of the "
"NFS driver. With the GPFS driver, instances do not actually access a storage"
" device at the block level. Instead, volume backing files are created in a "
"GPFS file system and mapped to instances, which emulate a block device."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-gpfs-volume-driver.xml23(para)
msgid ""
"GPFS software must be installed and running on nodes where Block Storage and"
" Compute services run in the OpenStack environment. A GPFS file system must "
"also be created and mounted on these nodes before starting the <literal"
">cinder-volume</literal> service. The details of these GPFS specific steps "
"are covered in <citetitle>GPFS: Concepts, Planning, and Installation "
"Guide</citetitle> and <citetitle>GPFS: Administration and Programming "
"Reference</citetitle>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-gpfs-volume-driver.xml35(para)
msgid ""
"Optionally, the Image Service can be configured to store images on a GPFS "
"file system. When a Block Storage volume is created from an image, if both "
"image data and volume data reside in the same GPFS file system, the data "
"from image file is moved efficiently to the volume file using copy-on-write "
"optimization strategy."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-gpfs-volume-driver.xml43(title)
msgid "Enable the GPFS driver"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-gpfs-volume-driver.xml44(para)
msgid ""
"To use the Block Storage Service with the GPFS driver, first set the "
"<literal>volume_driver</literal> in <filename>cinder.conf</filename>:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-gpfs-volume-driver.xml48(para)
msgid ""
"The following table contains the configuration options supported by the GPFS"
" driver."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-gpfs-volume-driver.xml53(para)
msgid ""
"The <literal>gpfs_images_share_mode</literal> flag is only valid if the "
"Image Service is configured to use GPFS with the "
"<literal>gpfs_images_dir</literal> flag. When the value of this flag is "
"<literal>copy_on_write</literal>, the paths specified by the "
"<literal>gpfs_mount_point_base</literal> and "
"<literal>gpfs_images_dir</literal> flags must both reside in the same GPFS "
"file system and in the same GPFS file set."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-gpfs-volume-driver.xml66(title)
msgid "Volume creation options"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-gpfs-volume-driver.xml67(para)
msgid ""
"It is possible to specify additional volume configuration options on a per-"
"volume basis by specifying volume metadata. The volume is created using the "
"specified options. Changing the metadata after the volume is created has no "
"effect. The following table lists the volume creation options supported by "
"the GPFS volume driver."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-gpfs-volume-driver.xml75(caption)
msgid "Volume Create Options for GPFS Volume Drive"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-gpfs-volume-driver.xml79(th)
msgid "Metadata Item Name"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-gpfs-volume-driver.xml85(literal)
#: ./doc/config-reference/block-storage/drivers/ibm-gpfs-volume-driver.xml98(literal)
#: ./doc/config-reference/block-storage/drivers/ibm-gpfs-volume-driver.xml99(literal)
msgid "fstype"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-gpfs-volume-driver.xml88(literal)
msgid "fstype=swap"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-gpfs-volume-driver.xml86(td)
msgid ""
"Specifies whether to create a file system or a swap area on the new volume. "
"If <placeholder-1/> is specified, the mkswap command is used to create a "
"swap area. Otherwise the mkfs command is passed the specified file system "
"type, for example ext3, ext4 or ntfs."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-gpfs-volume-driver.xml96(literal)
msgid "fslabel"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-gpfs-volume-driver.xml97(td)
msgid ""
"Sets the file system label for the file system specified by <placeholder-1/>"
" option. This value is only used if <placeholder-2/> is specified."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-gpfs-volume-driver.xml103(literal)
msgid "data_pool_name"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-gpfs-volume-driver.xml105(para)
msgid ""
"Specifies the GPFS storage pool to which the volume is to be assigned. Note:"
" The GPFS storage pool must already have been created."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-gpfs-volume-driver.xml111(literal)
msgid "replicas"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-gpfs-volume-driver.xml113(para)
msgid ""
"Specifies how many copies of the volume file to create. Valid values are 1, "
"2, and, for GPFS V3.5.0.7 and later, 3. This value cannot be greater than "
"the value of the <literal>MaxDataReplicas</literal> attribute of the file "
"system."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-gpfs-volume-driver.xml122(literal)
msgid "dio"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-gpfs-volume-driver.xml124(para)
msgid ""
"Enables or disables the Direct I/O caching policy for the volume file. Valid"
" values are <literal>yes</literal> and <literal>no</literal>."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-gpfs-volume-driver.xml130(literal)
msgid "write_affinity_depth"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-gpfs-volume-driver.xml132(para)
msgid ""
"Specifies the allocation policy to be used for the volume file. Note: This "
"option only works if <literal>allow-write-affinity</literal> is set for the "
"GPFS data pool."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-gpfs-volume-driver.xml139(literal)
msgid "block_group_factor"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-gpfs-volume-driver.xml141(para)
msgid ""
"Specifies how many blocks are laid out sequentially in the volume file to "
"behave as a single large block. Note: This option only works if <literal"
">allow-write-affinity</literal> is set for the GPFS data pool."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-gpfs-volume-driver.xml149(literal)
msgid "write_affinity_failure_group"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-gpfs-volume-driver.xml151(para)
msgid ""
"Specifies the range of nodes (in GPFS shared nothing architecture) where "
"replicas of blocks in the volume file are to be written. See "
"<citetitle>GPFS: Administration and Programming Reference</citetitle> for "
"more details on this option."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-gpfs-volume-driver.xml162(title)
msgid "Example: Volume creation options"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-gpfs-volume-driver.xml163(para)
msgid ""
"This example shows the creation of a 50GB volume with an ext4 file system "
"labeled <literal>newfs</literal>and direct IO enabled:"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-gpfs-volume-driver.xml171(title)
msgid "Operational notes for GPFS driver"
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-gpfs-volume-driver.xml174(para)
msgid ""
"Volume snapshots are implemented using the GPFS file clone feature. Whenever"
" a new snapshot is created, the snapshot file is efficiently created as a "
"read-only clone parent of the volume, and the volume file uses copy-on-write"
" optimization strategy to minimize data movement."
msgstr ""
#: ./doc/config-reference/block-storage/drivers/ibm-gpfs-volume-driver.xml180(para)
msgid ""
"Similarly when a new volume is created from a snapshot or from an existing "
"volume, the same approach is taken. The same approach is also used when a "
"new volume is created from a Glance image, if the source image is in raw "
"format, and <literal>gpfs_images_share_mode</literal> is set to "
"<literal>copy_on_write</literal>."
msgstr ""
#: ./doc/config-reference/block-storage/backup/tsm-backup-driver.xml6(title)
msgid "IBM Tivoli Storage Manager backup driver"
msgstr ""
#: ./doc/config-reference/block-storage/backup/tsm-backup-driver.xml7(para)
msgid ""
"The IBM Tivoli Storage Manager (TSM) backup driver enables performing volume"
" backups to a TSM server."
msgstr ""
#: ./doc/config-reference/block-storage/backup/tsm-backup-driver.xml10(para)
msgid ""
"The TSM client should be installed and configured on the machine running the"
" <systemitem class=\"service\">cinder-backup </systemitem> service. See the "
"<citetitle>IBM Tivoli Storage Manager Backup-Archive Client Installation and"
" User's Guide</citetitle> for details on installing the TSM client."
msgstr ""
#: ./doc/config-reference/block-storage/backup/tsm-backup-driver.xml17(para)
msgid ""
"To enable the IBM TSM backup driver, include the following option in "
"<filename>cinder.conf</filename>:"
msgstr ""
#: ./doc/config-reference/block-storage/backup/tsm-backup-driver.xml20(para)
msgid ""
"The following configuration options are available for the TSM backup driver."
msgstr ""
#: ./doc/config-reference/block-storage/backup/tsm-backup-driver.xml23(para)
msgid "This example shows the default options for the TSM backup driver."
msgstr ""
#: ./doc/config-reference/block-storage/backup/ceph-backup-driver.xml11(title)
msgid "Ceph backup driver"
msgstr ""
#: ./doc/config-reference/block-storage/backup/ceph-backup-driver.xml12(para)
msgid ""
"The Ceph backup driver backs up volumes of any type to a Ceph back-end "
"store. The driver can also detect whether the volume to be backed up is a "
"Ceph RBD volume, and if so, it tries to perform incremental and differential"
" backups."
msgstr ""
#: ./doc/config-reference/block-storage/backup/ceph-backup-driver.xml17(para)
msgid ""
"For source Ceph RBD volumes, you can perform backups within the same Ceph "
"pool (not recommended) and backups between different Ceph pools and between "
"different Ceph clusters."
msgstr ""
#: ./doc/config-reference/block-storage/backup/ceph-backup-driver.xml21(para)
msgid ""
"At the time of writing, differential backup support in Ceph/librbd was quite"
" new. This driver attempts a differential backup in the first instance. If "
"the differential backup fails, the driver falls back to full backup/copy."
msgstr ""
#: ./doc/config-reference/block-storage/backup/ceph-backup-driver.xml26(para)
msgid ""
"If incremental backups are used, multiple backups of the same volume are "
"stored as snapshots so that minimal space is consumed in the backup store. "
"It takes far less time to restore a volume than to take a full copy."
msgstr ""
#: ./doc/config-reference/block-storage/backup/ceph-backup-driver.xml32(para)
msgid "Block Storage Service enables you to:"
msgstr ""
#: ./doc/config-reference/block-storage/backup/ceph-backup-driver.xml35(para)
msgid "Restore to a new volume, which is the default and recommended action."
msgstr ""
#: ./doc/config-reference/block-storage/backup/ceph-backup-driver.xml40(para)
msgid ""
"Restore to the original volume from which the backup was taken. The restore "
"action takes a full copy because this is the safest action."
msgstr ""
#: ./doc/config-reference/block-storage/backup/ceph-backup-driver.xml48(para)
msgid ""
"To enable the Ceph backup driver, include the following option in the "
"<filename>cinder.conf</filename> file:"
msgstr ""
#: ./doc/config-reference/block-storage/backup/ceph-backup-driver.xml52(para)
msgid ""
"The following configuration options are available for the Ceph backup "
"driver."
msgstr ""
#: ./doc/config-reference/block-storage/backup/ceph-backup-driver.xml56(para)
msgid "This example shows the default options for the Ceph backup driver."
msgstr ""
#: ./doc/config-reference/block-storage/backup/swift-backup-driver.xml5(title)
msgid "Swift backup driver"
msgstr ""
#: ./doc/config-reference/block-storage/backup/swift-backup-driver.xml6(para)
msgid ""
"The backup driver for Swift back-end performs a volume backup to a Swift "
"object storage system."
msgstr ""
#: ./doc/config-reference/block-storage/backup/swift-backup-driver.xml8(para)
msgid ""
"To enable the Swift backup driver, include the following option in the "
"<filename>cinder.conf</filename> file:"
msgstr ""
#: ./doc/config-reference/block-storage/backup/swift-backup-driver.xml12(para)
msgid ""
"The following configuration options are available for the Swift back-end "
"backup driver."
msgstr ""
#: ./doc/config-reference/block-storage/backup/swift-backup-driver.xml16(para)
msgid ""
"This example shows the default options for the Swift back-end backup driver."
msgstr ""
#: ./doc/config-reference/image/section_glance-property-protection.xml6(title)
msgid "Image property protection"
msgstr ""
#: ./doc/config-reference/image/section_glance-property-protection.xml7(para)
msgid ""
"There are currently two types of properties in the Image Service: \"core "
"properties,\" which are defined by the system, and \"additional "
"properties,\" which are arbitrary key/value pairs that can be set on an "
"image."
msgstr ""
#: ./doc/config-reference/image/section_glance-property-protection.xml11(para)
msgid ""
"With the Havana release, any such property can be protected through "
"configuration. When you put protections on a property, it limits the users "
"who can perform CRUD operations on the property based on their user role. "
"The use case is to enable the cloud provider to maintain extra properties on"
" images so typically this would be an administrator who has access to "
"protected properties, managed with <filename>policy.json</filename>. The "
"extra property could be licensing information or billing information, for "
"example."
msgstr ""
#: ./doc/config-reference/image/section_glance-property-protection.xml20(para)
msgid ""
"Properties that don't have protections defined for them will act as they do "
"now: the administrator can control core properties, with the image owner "
"having control over additional properties."
msgstr ""
#: ./doc/config-reference/image/section_glance-property-protection.xml23(para)
msgid ""
"Property protection can be set in <filename>/etc/glance/property-"
"protections.conf</filename>, using roles found in "
"<filename>policy.json</filename>."
msgstr ""
#: ./doc/config-reference/object-storage/section_configure_s3.xml6(title)
msgid "Configure Object Storage with the S3 API"
msgstr ""
#: ./doc/config-reference/object-storage/section_configure_s3.xml7(para)
msgid ""
"The Swift3 middleware emulates the S3 REST API on top of Object Storage."
msgstr ""
#: ./doc/config-reference/object-storage/section_configure_s3.xml9(para)
msgid "The following operations are currently supported:"
msgstr ""
#: ./doc/config-reference/object-storage/section_configure_s3.xml12(para)
msgid "GET Service"
msgstr ""
#: ./doc/config-reference/object-storage/section_configure_s3.xml15(para)
msgid "DELETE Bucket"
msgstr ""
#: ./doc/config-reference/object-storage/section_configure_s3.xml18(para)
msgid "GET Bucket (List Objects)"
msgstr ""
#: ./doc/config-reference/object-storage/section_configure_s3.xml21(para)
msgid "PUT Bucket"
msgstr ""
#: ./doc/config-reference/object-storage/section_configure_s3.xml24(para)
msgid "DELETE Object"
msgstr ""
#: ./doc/config-reference/object-storage/section_configure_s3.xml27(para)
msgid "GET Object"
msgstr ""
#: ./doc/config-reference/object-storage/section_configure_s3.xml30(para)
msgid "HEAD Object"
msgstr ""
#: ./doc/config-reference/object-storage/section_configure_s3.xml33(para)
msgid "PUT Object"
msgstr ""
#: ./doc/config-reference/object-storage/section_configure_s3.xml36(para)
msgid "PUT Object (Copy)"
msgstr ""
#: ./doc/config-reference/object-storage/section_configure_s3.xml39(para)
msgid ""
"To use this middleware, first download the latest version from its "
"repository to your proxy server(s)."
msgstr ""
#: ./doc/config-reference/object-storage/section_configure_s3.xml43(para)
msgid ""
"Optional: To use this middleware with Swift 1.7.0 and previous versions, you"
" must use the v1.7 tag of the fujita/swift3 repository. Clone the "
"repository, as shown previously, and run this command:"
msgstr ""
#: ./doc/config-reference/object-storage/section_configure_s3.xml48(para)
msgid "Then, install it using standard python mechanisms, such as:"
msgstr ""
#: ./doc/config-reference/object-storage/section_configure_s3.xml51(para)
msgid ""
"Alternatively, if you have configured the Ubuntu Cloud Archive, you may use:"
" <placeholder-1/>"
msgstr ""
#: ./doc/config-reference/object-storage/section_configure_s3.xml54(para)
msgid ""
"To add this middleware to your configuration, add the swift3 middleware in "
"front of the auth middleware, and before any other middleware that look at "
"swift requests (like rate limiting)."
msgstr ""
#: ./doc/config-reference/object-storage/section_configure_s3.xml58(para)
msgid ""
"Ensure that your proxy-server.conf file contains swift3 in the pipeline and "
"the <code>[filter:swift3]</code> section, as shown below:"
msgstr ""
#: ./doc/config-reference/object-storage/section_configure_s3.xml68(para)
msgid ""
"Next, configure the tool that you use to connect to the S3 API. For S3curl, "
"for example, you must add your host IP information by adding your host IP to"
" the @endpoints array (line 33 in s3curl.pl):"
msgstr ""
#: ./doc/config-reference/object-storage/section_configure_s3.xml73(para)
msgid "Now you can send commands to the endpoint, such as:"
msgstr ""
#: ./doc/config-reference/object-storage/section_configure_s3.xml77(para)
msgid ""
"To set up your client, the access key will be the concatenation of the "
"account and user strings that should look like test:tester, and the secret "
"access key is the account password. The host should also point to the Swift "
"storage node's hostname. It also will have to use the old-style calling "
"format, and not the hostname-based container format. Here is an example "
"client setup using the Python boto library on a locally installed all-in-one"
" Swift installation."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml6(title)
msgid "Configure Object Storage features"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml8(title)
msgid "Object Storage zones"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml9(para)
msgid ""
"In OpenStack Object Storage, data is placed across different tiers of "
"failure domains. First, data is spread across regions, then zones, then "
"servers, and finally across drives. Data is placed to get the highest "
"failure domain isolation. If you deploy multiple regions, the Object Storage"
" service places the data across the regions. Within a region, each replica "
"of the data should be stored in unique zones, if possible. If there is only "
"one zone, data should be placed on different servers. And if there is only "
"one server, data should be placed on different drives."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml20(para)
msgid ""
"Regions are widely separated installations with a high-latency or otherwise "
"constrained network link between them. Zones are arbitrarily assigned, and "
"it is up to the administrator of the Object Storage cluster to choose an "
"isolation level and attempt to maintain the isolation level through "
"appropriate zone assignment. For example, a zone may be defined as a rack "
"with a single power source. Or a zone may be a DC room with a common utility"
" provider. Servers are identified by a unique IP/port. Drives are locally "
"attached storage volumes identified by mount point."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml31(para)
msgid ""
"In small clusters (five nodes or fewer), everything is normally in a single "
"zone. Larger Object Storage deployments may assign zone designations "
"differently; for example, an entire cabinet or rack of servers may be "
"designated as a single zone to maintain replica availability if the cabinet "
"becomes unavailable (for example, due to failure of the top of rack switches"
" or a dedicated circuit). In very large deployments, such as service "
"provider level deployments, each zone might have an entirely autonomous "
"switching and power infrastructure, so that even the loss of an electrical "
"circuit or switching aggregator would result in the loss of a single replica"
" at most."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml45(title)
msgid "Rackspace zone recommendations"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml46(para)
msgid ""
"For ease of maintenance on OpenStack Object Storage, Rackspace recommends "
"that you set up at least five nodes. Each node is assigned its own zone (for"
" a total of five zones), which gives you host level redundancy. This enables"
" you to take down a single zone for maintenance and still guarantee object "
"availability in the event that another zone fails during your maintenance."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml54(para)
msgid ""
"You could keep each server in its own cabinet to achieve cabinet level "
"isolation, but you may wish to wait until your swift service is better "
"established before developing cabinet-level isolation. OpenStack Object "
"Storage is flexible; if you later decide to change the isolation level, you "
"can take down one zone at a time and move them to appropriate new homes."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml65(title)
msgid "RAID controller configuration"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml66(para)
msgid ""
"OpenStack Object Storage does not require RAID. In fact, most RAID "
"configurations cause significant performance degradation. The main reason "
"for using a RAID controller is the battery-backed cache. It is very "
"important for data integrity reasons that when the operating system confirms"
" a write has been committed that the write has actually been committed to a "
"persistent location. Most disks lie about hardware commits by default, "
"instead writing to a faster write cache for performance reasons. In most "
"cases, that write cache exists only in non-persistent memory. In the case of"
" a loss of power, this data may never actually get committed to disk, "
"resulting in discrepancies that the underlying file system must handle."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml79(para)
msgid ""
"OpenStack Object Storage works best on the XFS file system, and this "
"document assumes that the hardware being used is configured appropriately to"
" be mounted with the <placeholder-1/> option.   For more information, refer "
"to the XFS FAQ: <link "
"href=\"http://xfs.org/index.php/XFS_FAQ\">http://xfs.org/index.php/XFS_FAQ</link>"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml87(para)
msgid ""
"To get the most out of your hardware, it is essential that every disk used "
"in OpenStack Object Storage is configured as a standalone, individual RAID 0"
" disk; in the case of 6 disks, you would have six RAID 0s or one JBOD. Some "
"RAID controllers do not support JBOD or do not support battery backed cache "
"with JBOD. To ensure the integrity of your data, you must ensure that the "
"individual drive caches are disabled and the battery backed cache in your "
"RAID card is configured and used. Failure to configure the controller "
"properly in this case puts data at risk in the case of sudden loss of power."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml99(para)
msgid ""
"You can also use hybrid drives or similar options for battery backed up "
"cache configurations without a RAID controller."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml105(title)
msgid "Throttle resources through rate limits"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml106(para)
msgid ""
"Rate limiting in OpenStack Object Storage is implemented as a pluggable "
"middleware that you configure on the proxy server. Rate limiting is "
"performed on requests that result in database writes to the account and "
"container SQLite databases. It uses memcached and is dependent on the proxy "
"servers having highly synchronized time. The rate limits are limited by the "
"accuracy of the proxy server clocks."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml115(title)
msgid "Configure rate limiting"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml116(para)
msgid ""
"All configuration is optional. If no account or container limits are "
"provided, no rate limiting occurs. Available configuration options include:"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml122(para)
msgid ""
"The container rate limits are linearly interpolated from the values given. A"
" sample container rate limiting could be:"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml125(para)
msgid "container_ratelimit_100 = 100"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml126(para)
msgid "container_ratelimit_200 = 50"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml127(para)
msgid "container_ratelimit_500 = 20"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml128(para)
msgid "This would result in:"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml130(caption)
msgid "Values for Rate Limiting with Sample Configuration Settings"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml134(td)
msgid "Container Size"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml135(td)
msgid "Rate Limit"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml138(td)
msgid "0-99"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml139(td)
msgid "No limiting"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml142(td)
#: ./doc/config-reference/object-storage/section_object-storage-features.xml143(td)
msgid "100"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml147(td)
msgid "150"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml148(td)
msgid "75"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml151(td)
msgid "500"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml152(td)
#: ./doc/config-reference/object-storage/section_object-storage-features.xml156(td)
msgid "20"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml155(td)
msgid "1000"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml163(title)
msgid "Health check"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml164(para)
msgid ""
"Provides an easy way to monitor whether the swift proxy server is alive. If "
"you access the proxy with the path <filename>/healthcheck</filename>, it "
"respond <literal>OK</literal> in the response body, which monitoring tools "
"can use."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml174(title)
msgid "Domain remap"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml175(para)
msgid ""
"Middleware that translates container and account parts of a domain to path "
"parameters that the proxy server understands."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml183(title)
msgid "CNAME lookup"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml184(para)
msgid ""
"Middleware that translates an unknown domain in the host header to something"
" that ends with the configured storage_domain by looking up the given "
"domain's CNAME record in DNS."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml194(title)
msgid "Temporary URL"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml195(para)
msgid ""
"Allows the creation of URLs to provide temporary access to objects. For "
"example, a website may wish to provide a link to download a large object in "
"Swift, but the Swift account has no public access. The website can generate "
"a URL that provides GET access for a limited time to the resource. When the "
"web browser user clicks on the link, the browser downloads the object "
"directly from Swift, eliminating the need for the website to act as a proxy "
"for the request. If the user shares the link with all his friends, or "
"accidentally posts it on a forum, the direct access is limited to the "
"expiration time set when the website created the link."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml210(literal)
msgid "temp_url_sig"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml212(para)
msgid "A cryptographic signature"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml216(literal)
msgid "temp_url_expires"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml218(para)
msgid "An expiration date, in Unix time."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml207(para)
msgid ""
"A temporary URL is the typical URL associated with an object, with two "
"additional query parameters:<placeholder-1/>"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml222(para)
msgid "An example of a temporary URL:<placeholder-1/>"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml228(para)
msgid ""
"To create temporary URLs, first set the <literal>X-Account-Meta-Temp-URL-"
"Key</literal> header on your Swift account to an arbitrary string. This "
"string serves as a secret key. For example, to set a key of "
"<literal>b3968d0207b54ece87cccc06515a89d4</literal> using the "
"<placeholder-1/> command-line tool:"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml235(replaceable)
msgid "b3968d0207b54ece87cccc06515a89d4"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml236(para)
msgid "Next, generate an HMAC-SHA1 (RFC 2104) signature to specify:"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml240(para)
msgid ""
"Which HTTP method to allow (typically <literal>GET</literal> or "
"<literal>PUT</literal>)"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml245(para)
msgid "The expiry date as a Unix timestamp"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml248(para)
msgid "the full path to the object"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml251(para)
msgid ""
"The secret key set as the <literal>X-Account-Meta-Temp-URL-Key</literal>"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml255(para)
msgid ""
"Here is code generating the signature for a GET for 24 hours on "
"<code>/v1/AUTH_account/container/object</code>:"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml279(para)
msgid ""
"Changing the <literal>X-Account-Meta-Temp-URL-Key</literal> invalidates any "
"previously generated temporary URLs within 60 seconds (the memcache time for"
" the key). Swift supports up to two keys, specified by <literal>X-Account-"
"Meta-Temp-URL-Key</literal> and <literal>X-Account-Meta-Temp-URL-"
"Key-2</literal>. Signatures are checked against both keys, if present. This "
"is to allow for key rotation without invalidating all existing temporary "
"URLs."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml270(para)
msgid ""
"Any alteration of the resource path or query arguments results in a "
"<errorcode>401</errorcode><errortext>Unauthorized</errortext> error. "
"Similarly, a PUT where GET was the allowed method returns a "
"<errorcode>401</errorcode>. HEAD is allowed if GET or PUT is allowed. Using "
"this in combination with browser form post translation middleware could also"
" allow direct-from-browser uploads to specific locations in Swift. Note that"
" <placeholder-1/>"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml291(para)
msgid ""
"Swift includes a script called <placeholder-1/> that generates the query "
"parameters automatically:"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml298(para)
msgid ""
"Because this command only returns the path, you must prefix the Swift "
"storage host name (for example, <literal>https://swift-"
"cluster.example.com</literal>)."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml301(para)
msgid ""
"With GET Temporary URLs, a <literal>Content-Disposition</literal> header is "
"set on the response so that browsers interpret this as a file attachment to "
"be saved. The file name chosen is based on the object name, but you can "
"override this with a <literal>filename</literal> query parameter. The "
"following example specifies a filename of <filename>My Test "
"File.pdf</filename>:"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml324(emphasis)
msgid "tempurl"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml313(para)
msgid ""
"To enable Temporary URL functionality, edit <filename>/etc/swift/proxy-"
"server.conf</filename> to add <literal>tempurl</literal> to the "
"<literal>pipeline</literal> variable defined in the "
"<literal>[pipeline:main]</literal> section. The <literal>tempurl</literal> "
"entry should appear immediately before the authentication filters in the "
"pipeline, such as <literal>authtoken</literal>, <literal>tempauth</literal> "
"or <literal>keystoneauth</literal>. For example:<placeholder-1/>"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml331(title)
msgid "Name check filter"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml332(para)
msgid ""
"Name Check is a filter that disallows any paths that contain defined "
"forbidden characters or that exceed a defined length."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml340(title)
msgid "Constraints"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml341(para)
msgid ""
"To change the OpenStack Object Storage internal limits, update the values in"
" the <literal>swift-constraints</literal> section in the "
"<filename>swift.conf</filename> file. Use caution when you update these "
"values because they affect the performance in the entire cluster."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml352(title)
msgid "Cluster health"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml353(para)
msgid ""
"Use the <placeholder-1/> tool to measure overall cluster health. This tool "
"checks if a set of deliberately distributed containers and objects are "
"currently in their proper places within the cluster. For instance, a common "
"deployment has three replicas of each object. The health of that object can "
"be measured by checking if each replica is in its proper place. If only 2 of"
" the 3 is in place the objects health can be said to be at 66.66%, where "
"100% would be perfect. A single objects health, especially an older object,"
" usually reflects the health of that entire partition the object is in. If "
"you make enough objects on a distinct percentage of the partitions in the "
"cluster,you get a good estimate of the overall cluster health. In practice, "
"about 1% partition coverage seems to balance well between accuracy and the "
"amount of time it takes to gather results. The first thing that needs to be "
"done to provide this health value is create a new account solely for this "
"usage. Next, you need to place the containers and objects throughout the "
"system so that they are on distinct partitions. The swift-dispersion-"
"populate tool does this by making up random container and object names until"
" they fall on distinct partitions. Last, and repeatedly for the life of the "
"cluster, you must run the <placeholder-2/> tool to check the health of each "
"of these containers and objects. These tools need direct access to the "
"entire cluster and to the ring files (installing them on a proxy server "
"suffices). The <placeholder-3/> and <placeholder-4/> commands both use the "
"same configuration file, <filename>/etc/swift/dispersion.conf</filename>. "
"Example <filename>dispersion.conf</filename> file:"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml393(para)
msgid ""
"There are also configuration options for specifying the dispersion coverage,"
" which defaults to 1%, retries, concurrency, and so on. However, the "
"defaults are usually fine. Once the configuration is in place, run "
"<placeholder-1/> to populate the containers and objects throughout the "
"cluster. Now that those containers and objects are in place, you can run "
"<placeholder-2/> to get a dispersion report, or the overall health of the "
"cluster. Here is an example of a cluster in perfect health:"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml413(para)
msgid ""
"Now, deliberately double the weight of a device in the object ring (with "
"replication turned off) and re-run the dispersion report to show what impact"
" that has:"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml429(para)
msgid ""
"You can see the health of the objects in the cluster has gone down "
"significantly. Of course, this test environment has just four devices, in a "
"production environment with many devices the impact of one device change is "
"much less. Next, run the replicators to get everything put back into place "
"and then rerun the dispersion report:"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml446(para)
msgid ""
"Alternatively, the dispersion report can also be output in json format. This"
" allows it to be more easily consumed by third party utilities:"
msgstr ""
#. Usage documented in
#. http://docs.openstack.org/developer/swift/overview_large_objects.html
#: ./doc/config-reference/object-storage/section_object-storage-features.xml460(title)
msgid "Static Large Object (SLO) support"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml461(para)
msgid ""
"This feature is very similar to Dynamic Large Object (DLO) support in that "
"it enables the user to upload many objects concurrently and afterwards "
"download them as a single object. It is different in that it does not rely "
"on eventually consistent container listings to do so. Instead, a user "
"defined manifest of the object segments is used."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml473(title)
msgid "Container quotas"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml474(para)
msgid ""
"The container_quotas middleware implements simple quotas that can be imposed"
" on swift containers by a user with the ability to set container metadata, "
"most likely the account administrator. This can be useful for limiting the "
"scope of containers that are delegated to non-admin users, exposed to "
"formpost uploads, or just as a self-imposed sanity check."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml481(para)
msgid ""
"Any object PUT operations that exceed these quotas return a 413 response "
"(request entity too large) with a descriptive body."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml484(para)
msgid ""
"Quotas are subject to several limitations: eventual consistency, the "
"timeliness of the cached container_info (60 second ttl by default), and it "
"is unable to reject chunked transfer uploads that exceed the quota (though "
"once the quota is exceeded, new chunked transfers are refused)."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml490(para)
msgid ""
"Set quotas by adding meta values to the container. These values are "
"validated when you set them:"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml494(para)
msgid "X-Container-Meta-Quota-Bytes: Maximum size of the container, in bytes."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml498(para)
msgid "X-Container-Meta-Quota-Count: Maximum object count of the container."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml507(title)
msgid "Account quotas"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml508(para)
msgid ""
"The <parameter>x-account-meta-quota-bytes</parameter> metadata entry must be"
" requests (PUT, POST) if a given account quota (in bytes) is exceeded while "
"DELETE requests are still allowed."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml512(para)
msgid ""
"The x-account-meta-quota-bytes metadata entry must be set to store and "
"enable the quota. Write requests to this metadata entry are only permitted "
"for resellers. There is no account quota limitation on a reseller account "
"even if x-account-meta-quota-bytes is set."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml517(para)
msgid ""
"Any object PUT operations that exceed the quota return a 413 response "
"(request entity too large) with a descriptive body."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml520(para)
msgid ""
"The following command uses an admin account that own the Reseller role to "
"set a quota on the test account:"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml524(para)
msgid "Here is the stat listing of an account where quota has been set:"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml534(para)
msgid "This command removes the account quota:"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml538(title)
msgid "Bulk delete"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml539(para)
msgid ""
"Use bulk-delete to delete multiple files from an account with a single "
"request. Responds to DELETE requests with a header 'X-Bulk-Delete: "
"true_value'. The body of the DELETE request is a new line separated list of "
"files to delete. The files listed must be URL encoded and in the form:"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml548(para)
msgid ""
"If all files are successfully deleted (or did not exist), the operation "
"returns HTTPOk. If any files failed to delete, the operation returns "
"HTTPBadGateway. In both cases the response body is a JSON dictionary that "
"shows the number of files that were successfully deleted or not found. The "
"files that failed are listed."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml560(title)
msgid "Drive audit"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml561(para)
msgid ""
"The <option>swift-drive-audit</option> configuration items reference a "
"script that can be run by using <placeholder-1/> to watch for bad drives. If"
" errors are detected, it unmounts the bad drive, so that OpenStack Object "
"Storage can work around it. It takes the following options:"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml572(title)
msgid "Form post"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml573(para)
msgid ""
"Middleware that provides the ability to upload objects to a cluster using an"
" HTML form POST. The format of the form is:"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml588(para)
msgid ""
"The <literal>swift-url</literal> is the URL to the Swift destination, such "
"as: <uri>https://swift-"
"cluster.example.com/v1/AUTH_account/container/object_prefix</uri> The name "
"of each file uploaded is appended to the specified <literal>swift-"
"url</literal>. So, you can upload directly to the root of container with a "
"url like: <uri>https://swift-"
"cluster.example.com/v1/AUTH_account/container/</uri> Optionally, you can "
"include an object prefix to better separate different users uploads, such "
"as: <uri>https://swift-"
"cluster.example.com/v1/AUTH_account/container/object_prefix</uri>"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml600(para)
msgid ""
"The form method must be POST and the enctype must be set as "
"<literal>multipart/form-data</literal>."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml603(para)
msgid ""
"The redirect attribute is the URL to redirect the browser to after the "
"upload completes. The URL has status and message query parameters added to "
"it, indicating the HTTP status code for the upload (2xx is success) and a "
"possible message for further information if there was an error (such as "
"<literal>“max_file_size exceeded”</literal>)."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml610(para)
msgid ""
"The <literal>max_file_size</literal> attribute must be included and "
"indicates the largest single file upload that can be done, in bytes."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml613(para)
msgid ""
"The <literal>max_file_count</literal> attribute must be included and "
"indicates the maximum number of files that can be uploaded with the form. "
"Include additional <code>&lt;![CDATA[&lt;input type=\"file\" "
"name=\"filexx\"/&gt;]]&gt;</code> attributes if desired."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml619(para)
msgid ""
"The expires attribute is the Unix timestamp before which the form must be "
"submitted before it is invalidated."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml622(para)
msgid ""
"The signature attribute is the HMAC-SHA1 signature of the form. This sample "
"Python code shows how to compute the signature:"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml639(para)
msgid ""
"The key is the value of the <literal>X-Account-Meta-Temp-URL-Key</literal> "
"header on the account."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml642(para)
msgid ""
"Be certain to use the full path, from the <literal>/v1/</literal> onward."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml644(para)
msgid ""
"The command line tool <placeholder-1/> may be used (mostly just when "
"testing) to compute expires and signature."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml648(para)
msgid ""
"The file attributes must appear after the other attributes to be processed "
"correctly. If attributes come after the file, they are not sent with the "
"sub-request because on the server side, all attributes in the file cannot be"
" parsed unless the whole file is read into memory and the server does not "
"have enough memory to service these requests. So, attributes that follow the"
" file are ignored."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml661(title)
msgid "Static web sites"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-features.xml662(para)
msgid ""
"When configured, this middleware serves container data as a static web site "
"with index file and error file resolution and optional file listings. This "
"mode is normally only active for anonymous requests."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-cors.xml6(title)
msgid "Cross-origin resource sharing"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-cors.xml7(para)
msgid ""
"Cross-Origin Resource Sharing (CORS) is a mechanisim to allow code running "
"in a browser (Javascript for example) to make requests to a domain other "
"then the one from where it originated. Swift supports CORS requests to "
"containers and objects within the containers using metadata held on the "
"container."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-cors.xml13(para)
msgid ""
"In addition to the metadata on containers, the "
"<literal>cors_allow_origin</literal> flag in <filename>proxy-"
"server.conf</filename> may be used to set a list of hosts that are included "
"with any CORS request by default."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-general-service-conf.xml6(title)
msgid "Object Storage general service configuration"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-general-service-conf.xml7(para)
msgid ""
"Most Object Storage services fall into two categories, Object Storage's wsgi"
" servers and background daemons."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-general-service-conf.xml11(para)
msgid ""
"Object Storage uses paste.deploy to manage server configurations. Read more "
"at <link "
"href=\"http://pythonpaste.org/deploy/\">http://pythonpaste.org/deploy/</link>."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-general-service-conf.xml14(para)
msgid ""
"Default configuration options are set in the `[DEFAULT]` section, and any "
"options specified there can be overridden in any of the other sections when "
"the syntax <literal>set option_name = value</literal> is in place."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-general-service-conf.xml19(para)
msgid ""
"Configuration for servers and daemons can be expressed together in the same "
"file for each type of server, or separately. If a required section for the "
"service trying to start is missing there will be an error. The sections not "
"used by the service are ignored."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-general-service-conf.xml25(para)
msgid ""
"Consider the example of an object storage node. By convention configuration "
"for the object-server, object-updater, object-replicator, and object-auditor"
" exist in a single file <filename>/etc/swift/object-server.conf</filename>:"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-general-service-conf.xml47(para)
msgid ""
"Object Storage services expect a configuration path as the first argument:"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-general-service-conf.xml55(para)
msgid ""
"If you omit the object-auditor section this file can not be used as the "
"configuration path when starting the <placeholder-1/> daemon:"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-general-service-conf.xml63(para)
msgid ""
"If the configuration path is a directory instead of a file all of the files "
"in the directory with the file extension \".conf\" will be combined to "
"generate the configuration object which is delivered to the Object Storage "
"service. This is referred to generally as \"directory based configuration\"."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-general-service-conf.xml70(para)
msgid ""
"Directory based configuration leverages ConfigParser's native multi-file "
"support. Files ending in \".conf\" in the given directory are parsed in "
"lexicographical order. File names starting with '.' are ignored. A mixture "
"of file and directory configuration paths is not supported - if the "
"configuration path is a file, only that file will be parsed."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-general-service-conf.xml78(para)
msgid ""
"The Object Storage service management tool <filename>swift-init</filename> "
"has adopted the convention of looking for "
"<filename>/etc/swift/{type}-server.conf.d/</filename> if the file "
"<filename>/etc/swift/{type}-server.conf</filename> file does not exist."
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-general-service-conf.xml86(para)
msgid ""
"When using directory based configuration, if the same option under the same "
"section appears more than once in different files, the last value parsed is "
"said to override previous occurrences. You can ensure proper override "
"precedence by prefixing the files in the configuration directory with "
"numerical values, as in the following example file layout:"
msgstr ""
#: ./doc/config-reference/object-storage/section_object-storage-general-service-conf.xml105(para)
msgid ""
"You can inspect the resulting combined configuration object using the "
"<placeholder-1/> command line tool."
msgstr ""
#: ./doc/config-reference/identity/section_keystone-token-binding.xml6(title)
msgid "Configure the Identity Service for token binding"
msgstr ""
#: ./doc/config-reference/identity/section_keystone-token-binding.xml7(para)
msgid ""
"Token binding refers to the practice of embedding information from external "
"authentication providers (like a company's Kerberos server) inside the token"
" such that a client may enforce that the token only be used in conjunction "
"with that specified authentication. This is an additional security mechanism"
" as it means that if a token is stolen it will not be usable without also "
"providing the external authentication."
msgstr ""
#: ./doc/config-reference/identity/section_keystone-token-binding.xml14(para)
msgid ""
"To activate token binding you must specify the types of authentication that "
"token binding should be used for in <filename>keystone.conf</filename>: "
"<placeholder-1/> Currently only <literal>kerberos</literal> is supported."
msgstr ""
#: ./doc/config-reference/identity/section_keystone-token-binding.xml25(para)
msgid "<literal>disabled</literal> disable token bind checking"
msgstr ""
#: ./doc/config-reference/identity/section_keystone-token-binding.xml29(para)
msgid ""
"<literal>permissive</literal> enable bind checking, if a token is bound to a"
" mechanism that is unknown to the server then ignore it. This is the "
"default."
msgstr ""
#: ./doc/config-reference/identity/section_keystone-token-binding.xml34(para)
msgid ""
"<literal>strict</literal> enable bind checking, if a token is bound to a "
"mechanism that is unknown to the server then this token should be rejected."
msgstr ""
#: ./doc/config-reference/identity/section_keystone-token-binding.xml39(para)
msgid ""
"<literal>required</literal> enable bind checking and require that at least 1"
" bind mechanism is used for tokens."
msgstr ""
#: ./doc/config-reference/identity/section_keystone-token-binding.xml44(para)
msgid ""
"<literal>named</literal> enable bind checking and require that the specified"
" authentication mechanism is used: <placeholder-1/>"
msgstr ""
#: ./doc/config-reference/identity/section_keystone-token-binding.xml51(para)
msgid ""
"Do not set <literal>enforce_token_bind = named</literal> as there is not an "
"authentication mechanism called <literal>named</literal>."
msgstr ""
#: ./doc/config-reference/identity/section_keystone-token-binding.xml20(para)
msgid ""
"To enforce checking of token binding the "
"<literal>enforce_token_bind</literal> parameter should be set to one of the "
"following modes: <placeholder-1/><placeholder-2/>"
msgstr ""
#. Put one translator per line, in the form of NAME <EMAIL>, YEAR1, YEAR2
#: ./doc/config-reference/identity/section_keystone-token-binding.xml0(None)
msgid "translator-credits"
msgstr ""