openstack-manuals/doc/admin-guide-cloud/source/locale/admin-guide-cloud.pot
OpenStack Proposal Bot 492fecc62e Imported Translations from Zanata
For more information about this automatic import see:
https://wiki.openstack.org/wiki/Translations/Infrastructure

Change-Id: I2e4b8636c944ed7e4f822497c85a7edc81b73e0c
2015-12-04 06:13:25 +00:00

28976 lines
938 KiB
Plaintext
Raw Blame History

This file contains invisible Unicode characters

This file contains invisible Unicode characters that are indistinguishable to humans but may be processed differently by a computer. If you think that this is intentional, you can safely ignore this warning. Use the Escape button to reveal them.

This file contains Unicode characters that might be confused with other characters. If you think that this is intentional, you can safely ignore this warning. Use the Escape button to reveal them.

# SOME DESCRIPTIVE TITLE.
# Copyright (C) 2015, OpenStack contributors
# This file is distributed under the same license as the Cloud Administrator Guide package.
# FIRST AUTHOR <EMAIL@ADDRESS>, YEAR.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: Cloud Administrator Guide 0.9\n"
"Report-Msgid-Bugs-To: \n"
"POT-Creation-Date: 2015-12-04 06:09+0000\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
#: ../baremetal.rst:5
msgid "Bare Metal"
msgstr ""
#: ../baremetal.rst:7
msgid "The Bare Metal service provides physical hardware management features."
msgstr ""
# #-#-#-#-# baremetal.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# database.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# orchestration-introduction.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# shared_file_systems_intro.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../baremetal.rst:10 ../database.rst:10 ../orchestration-introduction.rst:3
#: ../shared_file_systems_intro.rst:5
msgid "Introduction"
msgstr ""
#: ../baremetal.rst:12
msgid ""
"The Bare Metal service provides physical hardware as opposed to virtual "
"machines and provides several reference drivers which leverage common "
"technologies like PXE and IPMI, to cover a wide range of hardware. The "
"pluggable driver architecture also allows vendor-specific drivers to be "
"added for improved performance or functionality not provided by reference "
"drivers. The Bare Metal service makes physical servers as easy to provision "
"as virtual machines in a cloud, which in turn will open up new avenues for "
"enterprises and service providers."
msgstr ""
# #-#-#-#-# baremetal.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# compute_arch.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# telemetry-system-architecture.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../baremetal.rst:23 ../compute_arch.rst:3
#: ../telemetry-system-architecture.rst:5
msgid "System architecture"
msgstr ""
#: ../baremetal.rst:25
msgid "The Bare Metal service is composed of the following components:"
msgstr ""
#: ../baremetal.rst:27
msgid ""
"An admin-only RESTful API service, by which privileged users, such as cloud "
"operators and other services within the cloud control plane, may interact "
"with the managed bare-metal servers."
msgstr ""
#: ../baremetal.rst:31
msgid ""
"A conductor service, which conducts all activity related to bare-metal "
"deployments. Functionality is exposed via the API service. The Bare Metal "
"service conductor and API service communicate via RPC."
msgstr ""
#: ../baremetal.rst:36
msgid ""
"Various drivers that support heterogeneous hardware, which enable features "
"specific to unique hardware platforms and leverage divergent capabilities "
"via a common API."
msgstr ""
#: ../baremetal.rst:40
msgid ""
"A message queue, which is a central hub for passing messages, such as "
"RabbitMQ. It should use the same implementation as that of the Compute "
"service."
msgstr ""
#: ../baremetal.rst:44
msgid ""
"A database for storing information about the resources. Among other things, "
"this includes the state of the conductors, nodes (physical servers), and "
"drivers."
msgstr ""
#: ../baremetal.rst:48
msgid ""
"When a user requests to boot an instance, the request is passed to the "
"Compute service via the Compute service API and scheduler. The Compute "
"service hands over this request to the Bare Metal service, where the request "
"passes from the Bare Metal service API, to the conductor which will invoke a "
"driver to successfully provision a physical server for the user."
msgstr ""
#: ../baremetal.rst:56
msgid "Bare Metal deployment"
msgstr ""
#: ../baremetal.rst:58
msgid "PXE deploy process"
msgstr ""
#: ../baremetal.rst:60
msgid "Agent deploy process"
msgstr ""
#: ../baremetal.rst:65
msgid "Use Bare Metal"
msgstr ""
#: ../baremetal.rst:67
msgid "Install the Bare Metal service."
msgstr ""
#: ../baremetal.rst:69
msgid "Setup the Bare Metal driver in the compute node's :file:`nova.conf`."
msgstr ""
#: ../baremetal.rst:71
msgid "Setup TFTP folder and prepare PXE boot loader file."
msgstr ""
#: ../baremetal.rst:73
msgid "Prepare the bare metal flavor."
msgstr ""
#: ../baremetal.rst:75
msgid "Register the nodes with correct drivers."
msgstr ""
#: ../baremetal.rst:77
msgid "Configure the driver information."
msgstr ""
#: ../baremetal.rst:79
msgid "Register the ports information."
msgstr ""
#: ../baremetal.rst:81
msgid "Use nova boot to kick off the bare metal provision."
msgstr ""
#: ../baremetal.rst:83
msgid "Check nodes' provision state and power state."
msgstr ""
# #-#-#-#-# baremetal.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# cross_project_cors.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../baremetal.rst:88 ../cross_project_cors.rst:109
msgid "Troubleshooting"
msgstr ""
#: ../baremetal.rst:91
msgid "No valid host found error"
msgstr ""
#: ../baremetal.rst:93
msgid ""
"Sometimes ``/var/log/nova/nova-conductor.log`` contains the following error:"
msgstr ""
#: ../baremetal.rst:99
msgid ""
"The message \"No valid host was found\" means that the Compute service "
"scheduler could not find a bare metal node suitable for booting the new "
"instance."
msgstr ""
#: ../baremetal.rst:102
msgid ""
"This means there will be some mismatch between resources that the Compute "
"service expects to find and resources that Bare Metal service advertised to "
"the Compute service."
msgstr ""
#: ../baremetal.rst:106
msgid "If this is true, check the following:"
msgstr ""
#: ../baremetal.rst:108
msgid ""
"Introspection should have succeeded for you before, or you should have "
"entered the required bare-metal node properties manually. For each node in "
"``ironic node-list`` use:"
msgstr ""
#: ../baremetal.rst:116
msgid ""
"and make sure that ``properties`` JSON field has valid values for keys "
"``cpus``, ``cpu_arch``, ``memory_mb`` and ``local_gb``."
msgstr ""
#: ../baremetal.rst:119
msgid ""
"The flavor in the Compute service that you are using does not exceed the "
"bare-metal node properties above for a required number of nodes. Use:"
msgstr ""
#: ../baremetal.rst:126
msgid ""
"Make sure that enough nodes are in ``available`` state according to ``ironic "
"node-list``. Nodes in ``manageable`` state usually mean they have failed "
"introspection."
msgstr ""
#: ../baremetal.rst:130
msgid ""
"Make sure nodes you are going to deploy to are not in maintenance mode. Use "
"``ironic node-list`` to check. A node automatically going to maintenance "
"mode usually means the incorrect credentials for this node. Check them and "
"then remove maintenance mode:"
msgstr ""
#: ../baremetal.rst:139
msgid ""
"It takes some time for nodes information to propagate from the Bare Metal "
"service to the Compute service after introspection. Our tooling usually "
"accounts for it, but if you did some steps manually, there may be a period "
"of time when nodes are not available to the Compute service yet. Check that "
"``nova hypervisor-stats`` correctly shows total amount of resources in your "
"system."
msgstr ""
#: ../blockstorage-api-throughput.rst:3
msgid "Increase Block Storage API service throughput"
msgstr ""
#: ../blockstorage-api-throughput.rst:5
msgid ""
"By default, the Block Storage API service runs in one process. This limits "
"the number of API requests that the Block Storage service can process at any "
"given time. In a production environment, you should increase the Block "
"Storage API throughput by allowing the Block Storage API service to run in "
"as many processes as the machine capacity allows."
msgstr ""
#: ../blockstorage-api-throughput.rst:13
msgid ""
"The Block Storage API service is named ``openstack-cinder-api`` on the "
"following distributions: CentOS, Fedora, openSUSE, Red Hat Enterprise Linux, "
"and SUSE Linux Enterprise. In Ubuntu and Debian distributions, the Block "
"Storage API service is named ``cinder-api``."
msgstr ""
#: ../blockstorage-api-throughput.rst:18
msgid ""
"To do so, use the Block Storage API service option ``osapi_volume_workers``. "
"This option allows you to specify the number of API service workers (or OS "
"processes) to launch for the Block Storage API service."
msgstr ""
#: ../blockstorage-api-throughput.rst:22
msgid ""
"To configure this option, open the :file:`/etc/cinder/cinder.conf` "
"configuration file and set the ``osapi_volume_workers`` configuration key to "
"the number of CPU cores/threads on a machine."
msgstr ""
# #-#-#-#-# blockstorage-api-throughput.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# blockstorage_glusterfs_backend.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# blockstorage_nfs_backend.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../blockstorage-api-throughput.rst:26
#: ../blockstorage_glusterfs_backend.rst:153
#: ../blockstorage_nfs_backend.rst:65 ../blockstorage_nfs_backend.rst:91
#: ../blockstorage_nfs_backend.rst:107 ../blockstorage_nfs_backend.rst:139
msgid ""
"On distributions that include ``openstack-config``, you can configure this "
"by running the following command instead::"
msgstr ""
#: ../blockstorage-api-throughput.rst:32
msgid "Replace ``CORES`` with the number of CPU cores/threads on a machine."
msgstr ""
#: ../blockstorage-boot-from-volume.rst:3
msgid "Boot from volume"
msgstr ""
#: ../blockstorage-boot-from-volume.rst:5
msgid ""
"In some cases, you can store and run instances from inside volumes. For "
"information, see the `Launch an instance from a volume`_ section in the "
"`OpenStack End User Guide`_."
msgstr ""
# #-#-#-#-# blockstorage-consistency-groups.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# shared_file_systems_cgroups.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../blockstorage-consistency-groups.rst:3
#: ../shared_file_systems_cgroups.rst:5 ../shared_file_systems_cgroups.rst:26
msgid "Consistency groups"
msgstr ""
#: ../blockstorage-consistency-groups.rst:5
msgid ""
"Consistency group support is available in OpenStack Block Storage. The "
"support is added for creating snapshots of consistency groups. This feature "
"leverages the storage level consistency technology. It allows snapshots of "
"multiple volumes in the same consistency group to be taken at the same point-"
"in-time to ensure data consistency. The consistency group operations can be "
"performed using the Block Storage command line."
msgstr ""
#: ../blockstorage-consistency-groups.rst:14
msgid ""
"Only Block Storage V2 API supports consistency groups. You can specify ``--"
"os-volume-api-version 2`` when using Block Storage command line for "
"consistency group operations."
msgstr ""
#: ../blockstorage-consistency-groups.rst:18
msgid ""
"Before using consistency groups, make sure the Block Storage driver that you "
"are running has consistency group support by reading the Block Storage "
"manual or consulting the driver maintainer. There are a small number of "
"drivers that have implemented this feature. The default LVM driver does not "
"support consistency groups yet because the consistency technology is not "
"available at the storage level."
msgstr ""
#: ../blockstorage-consistency-groups.rst:25
msgid ""
"Before using consistency groups, you must change policies for the "
"consistency group APIs in the :file:`/etc/cinder/policy.json` file. By "
"default, the consistency group APIs are disabled. Enable them before running "
"consistency group operations."
msgstr ""
#: ../blockstorage-consistency-groups.rst:30
msgid "Here are existing policy entries for consistency groups::"
msgstr ""
#: ../blockstorage-consistency-groups.rst:42
msgid "Remove ``group:nobody`` to enable these APIs::"
msgstr ""
#: ../blockstorage-consistency-groups.rst:54
msgid "Restart Block Storage API service after changing policies."
msgstr ""
#: ../blockstorage-consistency-groups.rst:56
msgid "The following consistency group operations are supported:"
msgstr ""
#: ../blockstorage-consistency-groups.rst:58
msgid "Create a consistency group, given volume types."
msgstr ""
#: ../blockstorage-consistency-groups.rst:62
msgid ""
"A consistency group can support more than one volume type. The scheduler is "
"responsible for finding a back end that can support all given volume types."
msgstr ""
#: ../blockstorage-consistency-groups.rst:66
msgid ""
"A consistency group can only contain volumes hosted by the same back end."
msgstr ""
#: ../blockstorage-consistency-groups.rst:69
msgid ""
"A consistency group is empty upon its creation. Volumes need to be created "
"and added to it later."
msgstr ""
#: ../blockstorage-consistency-groups.rst:72
msgid "Show a consistency group."
msgstr ""
#: ../blockstorage-consistency-groups.rst:74
msgid "List consistency groups."
msgstr ""
#: ../blockstorage-consistency-groups.rst:76
msgid ""
"Create a volume and add it to a consistency group, given volume type and "
"consistency group id."
msgstr ""
#: ../blockstorage-consistency-groups.rst:79
msgid "Create a snapshot for a consistency group."
msgstr ""
#: ../blockstorage-consistency-groups.rst:81
msgid "Show a snapshot of a consistency group."
msgstr ""
#: ../blockstorage-consistency-groups.rst:83
msgid "List consistency group snapshots."
msgstr ""
#: ../blockstorage-consistency-groups.rst:85
msgid "Delete a snapshot of a consistency group."
msgstr ""
#: ../blockstorage-consistency-groups.rst:87
msgid "Delete a consistency group."
msgstr ""
#: ../blockstorage-consistency-groups.rst:89
msgid "Modify a consistency group."
msgstr ""
#: ../blockstorage-consistency-groups.rst:91
msgid ""
"Create a consistency group from the snapshot of another consistency group."
msgstr ""
#: ../blockstorage-consistency-groups.rst:94
msgid "Create a consistency group from a source consistency group."
msgstr ""
#: ../blockstorage-consistency-groups.rst:96
msgid ""
"The following operations are not allowed if a volume is in a consistency "
"group:"
msgstr ""
#: ../blockstorage-consistency-groups.rst:99
msgid "Volume migration."
msgstr ""
#: ../blockstorage-consistency-groups.rst:101
msgid "Volume retype."
msgstr ""
#: ../blockstorage-consistency-groups.rst:103
msgid "Volume deletion."
msgstr ""
#: ../blockstorage-consistency-groups.rst:107
msgid "A consistency group has to be deleted as a whole with all the volumes."
msgstr ""
#: ../blockstorage-consistency-groups.rst:110
msgid ""
"The following operations are not allowed if a volume snapshot is in a "
"consistency group snapshot:"
msgstr ""
#: ../blockstorage-consistency-groups.rst:113
msgid "Volume snapshot deletion."
msgstr ""
#: ../blockstorage-consistency-groups.rst:117
msgid ""
"A consistency group snapshot has to be deleted as a whole with all the "
"volume snapshots."
msgstr ""
#: ../blockstorage-consistency-groups.rst:120
msgid "The details of consistency group operations are shown in the following."
msgstr ""
#: ../blockstorage-consistency-groups.rst:122
msgid "**Create a consistency group**::"
msgstr ""
#: ../blockstorage-consistency-groups.rst:132
msgid ""
"The parameter ``volume-types`` is required. It can be a list of names or "
"UUIDs of volume types separated by commas without spaces in between. For "
"example, ``volumetype1,volumetype2,volumetype3.``."
msgstr ""
#: ../blockstorage-consistency-groups.rst:151
msgid "**Show a consistency group**::"
msgstr ""
#: ../blockstorage-consistency-groups.rst:166
msgid "**List consistency groups**::"
msgstr ""
#: ../blockstorage-consistency-groups.rst:177
msgid "**Create a volume and add it to a consistency group**:"
msgstr ""
#: ../blockstorage-consistency-groups.rst:181
msgid ""
"When creating a volume and adding it to a consistency group, a volume type "
"and a consistency group id must be provided. This is because a consistency "
"group can support more than one volume type."
msgstr ""
#: ../blockstorage-consistency-groups.rst:218
msgid "**Create a snapshot for a consistency group**::"
msgstr ""
#: ../blockstorage-consistency-groups.rst:233
msgid "**Show a snapshot of a consistency group**::"
msgstr ""
#: ../blockstorage-consistency-groups.rst:237
msgid "**List consistency group snapshots**::"
msgstr ""
#: ../blockstorage-consistency-groups.rst:250
msgid "**Delete a snapshot of a consistency group**::"
msgstr ""
#: ../blockstorage-consistency-groups.rst:254
msgid "**Delete a consistency group**:"
msgstr ""
#: ../blockstorage-consistency-groups.rst:258
msgid ""
"The force flag is needed when there are volumes in the consistency group::"
msgstr ""
#: ../blockstorage-consistency-groups.rst:263
msgid "**Modify a consistency group**::"
msgstr ""
#: ../blockstorage-consistency-groups.rst:272
msgid ""
"The parameter ``CG`` is required. It can be a name or UUID of a consistency "
"group. UUID1,UUID2,...... are UUIDs of one or more volumes to be added to "
"the consistency group, separated by commas. Default is None. UUID3,"
"UUId4,...... are UUIDs of one or more volumes to be removed from the "
"consistency group, separated by commas. Default is None."
msgstr ""
#: ../blockstorage-consistency-groups.rst:285
msgid ""
"**Create a consistency group from the snapshot of another consistency "
"group**::"
msgstr ""
#: ../blockstorage-consistency-groups.rst:293
msgid ""
"The parameter ``CGSNAPSHOT`` is a name or UUID of a snapshot of a "
"consistency group::"
msgstr ""
#: ../blockstorage-consistency-groups.rst:299
msgid "**Create a consistency group from a source consistency group**::"
msgstr ""
#: ../blockstorage-consistency-groups.rst:306
msgid ""
"The parameter ``SOURCECG`` is a name or UUID of a source consistency group::"
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:3
msgid "Configure and use driver filter and weighing for scheduler"
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:5
msgid ""
"OpenStack Block Storage enables you to choose a volume back end based on "
"back-end specific properties by using the DriverFilter and GoodnessWeigher "
"for the scheduler. The driver filter and weigher scheduling can help ensure "
"that the scheduler chooses the best back end based on requested volume "
"properties as well as various back-end specific properties."
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:13
msgid "What is driver filter and weigher and when to use it"
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:15
msgid ""
"The driver filter and weigher gives you the ability to more finely control "
"how the OpenStack Block Storage scheduler chooses the best back end to use "
"when handling a volume request. One example scenario where using the driver "
"filter and weigher can be if a back end that utilizes thin-provisioning is "
"used. The default filters use the ``free capacity`` property to determine "
"the best back end, but that is not always perfect. If a back end has the "
"ability to provide a more accurate back-end specific value you can use that "
"as part of the weighing. Another example of when the driver filter and "
"weigher can prove useful is if a back end exists where there is a hard limit "
"of 1000 volumes. The maximum volume size is 500 GB. Once 75% of the total "
"space is occupied the performance of the back end degrades. The driver "
"filter and weigher can provide a way for these limits to be checked for."
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:30
msgid "Enable driver filter and weighing"
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:32
msgid ""
"To enable the driver filter, set the ``scheduler_default_filters`` option in "
"the :file:`cinder.conf` file to ``DriverFilter`` or add it to the list if "
"other filters are already present."
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:36
msgid ""
"To enable the goodness filter as a weigher, set the "
"``scheduler_default_weighers`` option in the :file:`cinder.conf` file to "
"``GoodnessWeigher`` or add it to the list if other weighers are already "
"present."
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:41
msgid ""
"You can choose to use the ``DriverFilter`` without the ``GoodnessWeigher`` "
"or vice-versa. The filter and weigher working together, however, create the "
"most benefits when helping the scheduler choose an ideal back end."
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:48
msgid ""
"The support for the ``DriverFilter`` and ``GoodnessWeigher`` is optional for "
"back ends. If you are using a back end that does not support the filter and "
"weigher functionality you may not get the full benefit."
msgstr ""
# #-#-#-#-# blockstorage-driver-filter-weighing.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# blockstorage_image_volume_cache.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../blockstorage-driver-filter-weighing.rst:53
#: ../blockstorage_image_volume_cache.rst:36
msgid "Example :file:`cinder.conf` configuration file::"
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:60
msgid ""
"It is useful to use the other filters and weighers available in OpenStack in "
"combination with these custom ones. For example, the ``CapacityFilter`` and "
"``CapacityWeigher`` can be combined with these."
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:66
msgid "Defining your own filter and goodness functions"
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:68
msgid ""
"You can define your own filter and goodness functions through the use of "
"various properties that OpenStack Block Storage has exposed. Properties "
"exposed include information about the volume request being made, "
"``volume_type`` settings, and back-end specific information about drivers. "
"All of these allow for a lot of control over how the ideal back end for a "
"volume request will be decided."
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:75
msgid ""
"The ``filter_function`` option is a string defining an equation that will "
"determine whether a back end should be considered as a potential candidate "
"in the scheduler."
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:79
msgid ""
"The ``goodness_function`` option is a string defining an equation that will "
"rate the quality of the potential host (0 to 100, 0 lowest, 100 highest)."
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:85
msgid ""
"Default values for the filter and goodness functions will be used for each "
"back end if you do not define them yourself. If complete control is desired "
"then a filter and goodness function should be defined for each of the back "
"ends in the :file:`cinder.conf` file."
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:92
msgid "Supported operations in filter and goodness functions"
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:94
msgid ""
"Below is a table of all the operations currently usable in custom filter and "
"goodness functions created by you:"
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:98
msgid "Operations"
msgstr ""
# #-#-#-#-# blockstorage-driver-filter-weighing.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# compute-manage-logs.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# networking_adv-features.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# telemetry-measurements.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../blockstorage-driver-filter-weighing.rst:98
#: ../compute-manage-logs.rst:203 ../networking_adv-features.rst:128
#: ../telemetry-measurements.rst:31 ../telemetry-measurements.rst:97
#: ../telemetry-measurements.rst:430 ../telemetry-measurements.rst:484
#: ../telemetry-measurements.rst:528 ../telemetry-measurements.rst:589
#: ../telemetry-measurements.rst:664 ../telemetry-measurements.rst:697
#: ../telemetry-measurements.rst:762 ../telemetry-measurements.rst:812
#: ../telemetry-measurements.rst:849 ../telemetry-measurements.rst:920
#: ../telemetry-measurements.rst:983 ../telemetry-measurements.rst:1069
#: ../telemetry-measurements.rst:1147 ../telemetry-measurements.rst:1212
#: ../telemetry-measurements.rst:1263 ../telemetry-measurements.rst:1289
#: ../telemetry-measurements.rst:1312 ../telemetry-measurements.rst:1332
msgid "Type"
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:100
msgid "+, -, \\*, /, ^"
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:100
msgid "standard math"
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:102
msgid "logic"
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:102
msgid "not, and, or, &, \\|, !"
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:104
msgid ">, >=, <, <=, ==, <>, !="
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:104
msgid "equality"
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:106
msgid "+, -"
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:106
msgid "sign"
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:108
msgid "ternary"
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:108
msgid "x ? a : b"
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:110
msgid "abs(x), max(x, y), min(x, y)"
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:110
msgid "math helper functions"
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:115
msgid ""
"Syntax errors in filter or goodness strings defined by you will cause errors "
"to be thrown at volume request time."
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:119
msgid "Available properties when creating custom functions"
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:121
msgid ""
"There are various properties that can be used in either the "
"``filter_function`` or the ``goodness_function`` strings. The properties "
"allow access to volume info, qos settings, extra specs, and so on."
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:125
msgid ""
"The following properties and their sub-properties are currently available "
"for use:"
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:129
msgid "Host stats for a back end"
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:131
msgid "The host's name"
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:131
msgid "host"
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:134
msgid "The volume back end name"
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:134
msgid "volume\\_backend\\_name"
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:137
msgid "The vendor name"
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:137
msgid "vendor\\_name"
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:140
msgid "The driver version"
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:140
msgid "driver\\_version"
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:143
msgid "The storage protocol"
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:143
msgid "storage\\_protocol"
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:146
msgid "Boolean signifying whether QoS is supported"
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:146
msgid "QoS\\_support"
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:149
msgid "The total capacity in GB"
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:149
msgid "total\\_capacity\\_gb"
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:152
msgid "The allocated capacity in GB"
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:152
msgid "allocated\\_capacity\\_gb"
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:155
msgid "The reserved storage percentage"
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:155
msgid "reserved\\_percentage"
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:158
msgid "Capabilities specific to a back end"
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:160
msgid ""
"These properties are determined by the specific back end you are creating "
"filter and goodness functions for. Some back ends may not have any "
"properties available here."
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:165
msgid "Requested volume properties"
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:168
msgid "Status for the requested volume"
msgstr ""
# #-#-#-#-# blockstorage-driver-filter-weighing.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# compute-live-migration-usage.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../blockstorage-driver-filter-weighing.rst:168
#: ../compute-live-migration-usage.rst:75
msgid "status"
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:171
msgid "The volume type ID"
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:171
msgid "volume\\_type\\_id"
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:174
msgid "The display name of the volume"
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:174
msgid "display\\_name"
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:177
msgid "Any metadata the volume has"
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:177
msgid "volume\\_metadata"
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:180
msgid "Any reservations the volume has"
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:180
msgid "reservations"
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:183
msgid "The volume's user ID"
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:183
msgid "user\\_id"
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:186
msgid "The attach status for the volume"
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:186
msgid "attach\\_status"
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:189
msgid "The volume's display description"
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:189
msgid "display\\_description"
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:192
msgid "The volume's ID"
msgstr ""
# #-#-#-#-# blockstorage-driver-filter-weighing.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# compute-live-migration-usage.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../blockstorage-driver-filter-weighing.rst:192
#: ../compute-live-migration-usage.rst:68
msgid "id"
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:195
msgid "The volume's replication status"
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:195
msgid "replication\\_status"
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:198
msgid "The volume's snapshot ID"
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:198
msgid "snapshot\\_id"
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:201
msgid "The volume's encryption key ID"
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:201
msgid "encryption\\_key\\_id"
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:204
msgid "The source volume ID"
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:204
msgid "source\\_volid"
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:207
msgid "Any admin metadata for this volume"
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:207
msgid "volume\\_admin\\_metadata"
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:210
msgid "The source replication ID"
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:210
msgid "source\\_replicaid"
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:213
msgid "The consistency group ID"
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:213
msgid "consistencygroup\\_id"
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:216
msgid "The size of the volume in GB"
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:216
msgid "size"
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:219
msgid "General metadata"
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:219
msgid "metadata"
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:221
msgid ""
"The property most used from here will most likely be the ``size`` sub-"
"property."
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:224
msgid "Extra specs for the requested volume type"
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:226
#: ../blockstorage-driver-filter-weighing.rst:233
msgid "View the available properties for volume types by running::"
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:231
msgid "Current QoS specs for the requested volume type"
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:237
msgid ""
"In order to access these properties in a custom string use the following "
"format:"
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:240
msgid "``<property>.<sub_property>``"
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:243
msgid "Driver filter and weigher usage examples"
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:245
msgid ""
"Below are examples for using the filter and weigher separately, together, "
"and using driver-specific properties."
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:248
msgid ""
"Example :file:`cinder.conf` file configuration for customizing the filter "
"function::"
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:265
msgid ""
"The above example will filter volumes to different back ends depending on "
"the size of the requested volume. Default OpenStack Block Storage scheduler "
"weighing is done. Volumes with a size less than 10 GB are sent to lvm-1 and "
"volumes with a size greater than or equal to 10 GB are sent to lvm-2."
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:271
msgid ""
"Example :file:`cinder.conf` file configuration for customizing the goodness "
"function::"
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:288
msgid ""
"The above example will determine the goodness rating of a back end based off "
"of the requested volume's size. Default OpenStack Block Storage scheduler "
"filtering is done. The example shows how the ternary if statement can be "
"used in a filter or goodness function. If a requested volume is of size "
"10 GB then lvm-1 is rated as 50 and lvm-2 is rated as 100. In this case "
"lvm-2 wins. If a requested volume is of size 3 GB then lvm-1 is rated 100 "
"and lvm-2 is rated 25. In this case lvm-1 would win."
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:296
msgid ""
"Example :file:`cinder.conf` file configuration for customizing both the "
"filter and goodness functions::"
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:316
msgid ""
"The above example combines the techniques from the first two examples. The "
"best back end is now decided based off of the total capacity of the back end "
"and the requested volume's size."
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:320
msgid ""
"Example :file:`cinder.conf` file configuration for accessing driver specific "
"properties::"
msgstr ""
#: ../blockstorage-driver-filter-weighing.rst:348
msgid ""
"The above is an example of how back-end specific properties can be used in "
"the filter and goodness functions. In this example the LVM driver's "
"``total_volumes`` capability is being used to determine which host gets used "
"during a volume request. In the above example, lvm-1 and lvm-2 will handle "
"volume requests for all volumes with a size less than 5 GB. The lvm-1 host "
"will have priority until it contains three or more volumes. After than lvm-2 "
"will have priority until it contains eight or more volumes. The lvm-3 will "
"collect all volumes greater or equal to 5 GB as well as all volumes once "
"lvm-1 and lvm-2 lose priority."
msgstr ""
#: ../blockstorage-lio-iscsi-support.rst:3
msgid "Use LIO iSCSI support"
msgstr ""
#: ../blockstorage-lio-iscsi-support.rst:5
msgid ""
"The default mode for the ``iscsi_helper`` tool is ``tgtadm``. To use LIO "
"iSCSI, install the ``python-rtslib`` package, and set "
"``iscsi_helper=lioadm`` in the :file:`cinder.conf` file."
msgstr ""
#: ../blockstorage-lio-iscsi-support.rst:9
msgid ""
"Once configured, you can use the :command:`cinder-rtstool` command to manage "
"the volumes. This command enables you to create, delete, and verify volumes "
"and determine targets and add iSCSI initiators to the system."
msgstr ""
# #-#-#-#-# blockstorage-manage-volumes.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# compute-manage-volumes.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../blockstorage-manage-volumes.rst:3 ../compute-manage-volumes.rst:3
msgid "Manage volumes"
msgstr ""
#: ../blockstorage-manage-volumes.rst:5
msgid ""
"The default OpenStack Block Storage service implementation is an iSCSI "
"solution that uses Logical Volume Manager (LVM) for Linux."
msgstr ""
#: ../blockstorage-manage-volumes.rst:10
msgid ""
"The OpenStack Block Storage service is not a shared storage solution like a "
"Network Attached Storage (NAS) of NFS volumes, where you can attach a volume "
"to multiple servers. With the OpenStack Block Storage service, you can "
"attach a volume to only one instance at a time."
msgstr ""
#: ../blockstorage-manage-volumes.rst:16
msgid ""
"The OpenStack Block Storage service also provides drivers that enable you to "
"use several vendors' back-end storage devices, in addition to or instead of "
"the base LVM implementation."
msgstr ""
#: ../blockstorage-manage-volumes.rst:20
msgid ""
"This high-level procedure shows you how to create and attach a volume to a "
"server instance."
msgstr ""
#: ../blockstorage-manage-volumes.rst:23
msgid "**To create and attach a volume to an instance**"
msgstr ""
#: ../blockstorage-manage-volumes.rst:25
msgid ""
"Configure the OpenStack Compute and the OpenStack Block Storage services "
"through the :file:`cinder.conf` file."
msgstr ""
#: ../blockstorage-manage-volumes.rst:27
msgid ""
"Use the :command:`cinder create` command to create a volume. This command "
"creates an LV into the volume group (VG) ``cinder-volumes``."
msgstr ""
#: ../blockstorage-manage-volumes.rst:29
msgid ""
"Use the nova :command:`volume-attach` command to attach the volume to an "
"instance. This command creates a unique :term:`IQN` that is exposed to the "
"compute node."
msgstr ""
#: ../blockstorage-manage-volumes.rst:33
msgid ""
"The compute node, which runs the instance, now has an active iSCSI session "
"and new local storage (usually a :file:`/dev/sdX` disk)."
msgstr ""
#: ../blockstorage-manage-volumes.rst:36
msgid ""
"Libvirt uses that local storage as storage for the instance. The instance "
"gets a new disk (usually a :file:`/dev/vdX` disk)."
msgstr ""
#: ../blockstorage-manage-volumes.rst:39
msgid ""
"For this particular walk through, one cloud controller runs ``nova-api``, "
"``nova-scheduler``, ``nova-objectstore``, ``nova-network`` and ``cinder-*`` "
"services. Two additional compute nodes run ``nova-compute``. The walk "
"through uses a custom partitioning scheme that carves out 60 GB of space and "
"labels it as LVM. The network uses the ``FlatManager`` and "
"``NetworkManager`` settings for OpenStack Compute."
msgstr ""
#: ../blockstorage-manage-volumes.rst:47
msgid ""
"The network mode does not interfere with OpenStack Block Storage operations, "
"but you must set up networking for Block Storage to work. For details, see :"
"ref:`networking`."
msgstr ""
#: ../blockstorage-manage-volumes.rst:51
msgid ""
"To set up Compute to use volumes, ensure that Block Storage is installed "
"along with ``lvm2``. This guide describes how to troubleshoot your "
"installation and back up your Compute volumes."
msgstr ""
#: ../blockstorage-troubleshoot.rst:3
msgid "Troubleshoot your installation"
msgstr ""
#: ../blockstorage-troubleshoot.rst:5
msgid ""
"This section provides useful tips to help you troubleshoot your Block "
"Storage installation."
msgstr ""
# #-#-#-#-# blockstorage.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# compute-images-instances.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../blockstorage.rst:5 ../compute-images-instances.rst:130
msgid "Block Storage"
msgstr ""
#: ../blockstorage.rst:7
msgid ""
"The OpenStack Block Storage service works through the interaction of a "
"series of daemon processes named ``cinder-*`` that reside persistently on "
"the host machine or machines. The binaries can all be run from a single "
"node, or spread across multiple nodes. They can also be run on the same node "
"as other OpenStack services."
msgstr ""
#: ../blockstorage.rst:13
msgid ""
"To administer the OpenStack Block Storage service, it is helpful to "
"understand a number of concepts. You must make certain choices when you "
"configure the Block Storage service in OpenStack. The bulk of the options "
"come down to two choices, single node or multi-node install. You can read a "
"longer discussion about `Storage Decisions`_ in the `OpenStack Operations "
"Guide`_."
msgstr ""
#: ../blockstorage.rst:20
msgid ""
"OpenStack Block Storage enables you to add extra block-level storage to your "
"OpenStack Compute instances. This service is similar to the Amazon EC2 "
"Elastic Block Storage (EBS) offering."
msgstr ""
#: ../blockstorage_backup_disks.rst:3
msgid "Back up Block Storage service disks"
msgstr ""
#: ../blockstorage_backup_disks.rst:5
msgid ""
"While you can use the LVM snapshot to create snapshots, you can also use it "
"to back up your volumes. By using LVM snapshot, you reduce the size of the "
"backup; only existing data is backed up instead of the entire volume."
msgstr ""
#: ../blockstorage_backup_disks.rst:10
msgid ""
"To back up a volume, you must create a snapshot of it. An LVM snapshot is "
"the exact copy of a logical volume, which contains data in a frozen state. "
"This prevents data corruption, because data cannot be manipulated during the "
"volume creation process. Remember that the volumes created through a :"
"command:`nova volume-create` command exist in an LVM logical volume."
msgstr ""
#: ../blockstorage_backup_disks.rst:17
msgid ""
"You must also make sure that the operating system is not using the volume, "
"and that all data has been flushed on the guest file systems. This usually "
"means that those file systems have to be unmounted during the snapshot "
"creation. They can be mounted again as soon as the logical volume snapshot "
"has been created."
msgstr ""
#: ../blockstorage_backup_disks.rst:23
msgid ""
"Before you create the snapshot, you must have enough space to save it. As a "
"precaution, you should have at least twice as much space as the potential "
"snapshot size. If insufficient space is available, the snapshot might become "
"corrupted."
msgstr ""
#: ../blockstorage_backup_disks.rst:28
msgid ""
"For this example, assume that a 100 GB volume named ``volume-00000001`` was "
"created for an instance while only 4 GB are used. This example uses these "
"commands to back up only those 4 GB:"
msgstr ""
#: ../blockstorage_backup_disks.rst:32
msgid ":command:`lvm2` command. Directly manipulates the volumes."
msgstr ""
#: ../blockstorage_backup_disks.rst:34
msgid ""
":command:`kpartx` command. Discovers the partition table created inside the "
"instance."
msgstr ""
#: ../blockstorage_backup_disks.rst:37
msgid ":command:`tar` command. Creates a minimum-sized backup."
msgstr ""
#: ../blockstorage_backup_disks.rst:39
msgid ""
":command:`sha1sum` command. Calculates the backup checksum to check its "
"consistency."
msgstr ""
#: ../blockstorage_backup_disks.rst:42
msgid "You can apply this process to volumes of any size."
msgstr ""
#: ../blockstorage_backup_disks.rst:44
msgid "**To back up Block Storage service disks**"
msgstr ""
#: ../blockstorage_backup_disks.rst:46
msgid "Create a snapshot of a used volume"
msgstr ""
#: ../blockstorage_backup_disks.rst:48
msgid "Use this command to list all volumes"
msgstr ""
#: ../blockstorage_backup_disks.rst:54
msgid ""
"Create the snapshot; you can do this while the volume is attached to an "
"instance::"
msgstr ""
#: ../blockstorage_backup_disks.rst:60
msgid ""
"Use the :option:`--snapshot` configuration option to tell LVM that you want "
"a snapshot of an already existing volume. The command includes the size of "
"the space reserved for the snapshot volume, the name of the snapshot, and "
"the path of an already existing volume. Generally, this path is :file:`/dev/"
"cinder-volumes/VOLUME_NAME`."
msgstr ""
#: ../blockstorage_backup_disks.rst:66
msgid ""
"The size does not have to be the same as the volume of the snapshot. The :"
"option:`--size` parameter defines the space that LVM reserves for the "
"snapshot volume. As a precaution, the size should be the same as that of the "
"original volume, even if the whole space is not currently used by the "
"snapshot."
msgstr ""
#: ../blockstorage_backup_disks.rst:72
msgid "Run the :command:`lvdisplay` command again to verify the snapshot::"
msgstr ""
#: ../blockstorage_backup_disks.rst:111
msgid "Partition table discovery"
msgstr ""
#: ../blockstorage_backup_disks.rst:113
msgid ""
"To exploit the snapshot with the :command:`tar` command, mount your "
"partition on the Block Storage service server."
msgstr ""
#: ../blockstorage_backup_disks.rst:116
msgid ""
"The :command:`kpartx` utility discovers and maps table partitions. You can "
"use it to view partitions that are created inside the instance. Without "
"using the partitions created inside instances, you cannot see its content "
"and create efficient backups."
msgstr ""
#: ../blockstorage_backup_disks.rst:127
msgid ""
"On a Debian-based distribution, you can use the :command:`apt-get install "
"kpartx` command to install :command:`kpartx`."
msgstr ""
#: ../blockstorage_backup_disks.rst:131
msgid ""
"If the tools successfully find and map the partition table, no errors are "
"returned."
msgstr ""
#: ../blockstorage_backup_disks.rst:134
msgid "To check the partition table map, run this command::"
msgstr ""
#: ../blockstorage_backup_disks.rst:138
msgid ""
"You can see the ``cinder--volumes-volume--00000001--snapshot1`` partition."
msgstr ""
#: ../blockstorage_backup_disks.rst:141
msgid ""
"If you created more than one partition on that volume, you see several "
"partitions; for example: ``cinder--volumes-volume--00000001--snapshot2``, "
"``cinder--volumes-volume--00000001--snapshot3``, and so on."
msgstr ""
#: ../blockstorage_backup_disks.rst:146
msgid "Mount your partition"
msgstr ""
#: ../blockstorage_backup_disks.rst:152
msgid "If the partition mounts successfully, no errors are returned."
msgstr ""
#: ../blockstorage_backup_disks.rst:154
msgid ""
"You can directly access the data inside the instance. If a message prompts "
"you for a partition or you cannot mount it, determine whether enough space "
"was allocated for the snapshot or the :command:`kpartx` command failed to "
"discover the partition table."
msgstr ""
#: ../blockstorage_backup_disks.rst:159
msgid "Allocate more space to the snapshot and try the process again."
msgstr ""
#: ../blockstorage_backup_disks.rst:161
msgid "Use the :command:`tar` command to create archives"
msgstr ""
#: ../blockstorage_backup_disks.rst:163
msgid "Create a backup of the volume::"
msgstr ""
#: ../blockstorage_backup_disks.rst:168
msgid ""
"This command creates a :file:`tar.gz` file that contains the data, *and data "
"only*. This ensures that you do not waste space by backing up empty sectors."
msgstr ""
#: ../blockstorage_backup_disks.rst:172
msgid "Checksum calculation I"
msgstr ""
#: ../blockstorage_backup_disks.rst:174
msgid ""
"You should always have the checksum for your backup files. When you transfer "
"the same file over the network, you can run a checksum calculation to ensure "
"that your file was not corrupted during its transfer. The checksum is a "
"unique ID for a file. If the checksums are different, the file is corrupted."
msgstr ""
#: ../blockstorage_backup_disks.rst:180
msgid ""
"Run this command to run a checksum for your file and save the result to a "
"file::"
msgstr ""
#: ../blockstorage_backup_disks.rst:187
msgid ""
"Use the :command:`sha1sum` command carefully because the time it takes to "
"complete the calculation is directly proportional to the size of the file."
msgstr ""
#: ../blockstorage_backup_disks.rst:191
msgid ""
"For files larger than around 4 to 6 GB, and depending on your CPU, the "
"process might take a long time."
msgstr ""
#: ../blockstorage_backup_disks.rst:194
msgid "After work cleaning"
msgstr ""
#: ../blockstorage_backup_disks.rst:196
msgid ""
"Now that you have an efficient and consistent backup, use this command to "
"clean up the file system:"
msgstr ""
#: ../blockstorage_backup_disks.rst:199
msgid "Unmount the volume::"
msgstr ""
#: ../blockstorage_backup_disks.rst:203
msgid "Delete the partition table::"
msgstr ""
#: ../blockstorage_backup_disks.rst:207
msgid "Remove the snapshot::"
msgstr ""
#: ../blockstorage_backup_disks.rst:211
msgid "Repeat these steps for all your volumes."
msgstr ""
#: ../blockstorage_backup_disks.rst:213
msgid "Automate your backups"
msgstr ""
#: ../blockstorage_backup_disks.rst:215
msgid ""
"Because more and more volumes might be allocated to your Block Storage "
"service, you might want to automate your backups. The `SCR_5005_V01_NUAC-"
"OPENSTACK-EBS-volumes-backup.sh`_ script assists you with this task. The "
"script performs the operations from the previous example, but also provides "
"a mail report and runs the backup based on the ``backups_retention_days`` "
"setting."
msgstr ""
#: ../blockstorage_backup_disks.rst:222
msgid "Launch this script from the server that runs the Block Storage service."
msgstr ""
#: ../blockstorage_backup_disks.rst:224
msgid "This example shows a mail report::"
msgstr ""
#: ../blockstorage_backup_disks.rst:240
msgid ""
"The script also enables you to SSH to your instances and run a :command:"
"`mysqldump` command into them. To make this work, enable the connection to "
"the Compute project keys. If you do not want to run the :command:`mysqldump` "
"command, you can add ``enable_mysql_dump=0`` to the script to turn off this "
"functionality."
msgstr ""
#: ../blockstorage_get_capabilities.rst:6
msgid "Get capabilities"
msgstr ""
#: ../blockstorage_get_capabilities.rst:8
msgid ""
"When an administrator configures *volume type* and *extra specs* of storage "
"on the back end, the administrator has to read the right documentation that "
"corresponds to the version of the storage back end. Deep knowledge of "
"storage is also required."
msgstr ""
#: ../blockstorage_get_capabilities.rst:13
msgid ""
"OpenStack Block Storage enables administrators to configure *volume type* "
"and *extra specs* without specific knowledge of the storage back end."
msgstr ""
#: ../blockstorage_get_capabilities.rst:17
msgid "*Volume Type:* A group of volume policies."
msgstr ""
#: ../blockstorage_get_capabilities.rst:18
msgid ""
"*Extra Specs:* The definition of a volume type. This is a group of policies. "
"For example, provision type, QOS that will be used to define a volume at "
"creation time."
msgstr ""
#: ../blockstorage_get_capabilities.rst:21
msgid ""
"*Capabilities:* What the current deployed back end in Cinder is able to do. "
"These correspond to extra specs."
msgstr ""
#: ../blockstorage_get_capabilities.rst:25
msgid "Usage of cinder client"
msgstr ""
#: ../blockstorage_get_capabilities.rst:27
msgid ""
"When an administrator wants to define new volume types for their OpenStack "
"cloud, the administrator would fetch a list of ``capabilities`` for a "
"particular back end using the cinder client."
msgstr ""
#: ../blockstorage_get_capabilities.rst:31
msgid "First, get a list of the services::"
msgstr ""
#: ../blockstorage_get_capabilities.rst:41
msgid ""
"With one of the listed hosts, pass that to ``get-capabilities``, then the "
"administrator can obtain volume stats and also back end ``capabilities`` as "
"listed below."
msgstr ""
#: ../blockstorage_get_capabilities.rst:75
msgid "Usage of REST API"
msgstr ""
#: ../blockstorage_get_capabilities.rst:76
msgid ""
"New endpoint to ``get capabilities`` list for specific storage back end is "
"also available. For more details, refer to the Block Storage API reference."
msgstr ""
#: ../blockstorage_get_capabilities.rst:79
msgid "API request::"
msgstr ""
#: ../blockstorage_get_capabilities.rst:83
msgid "Example of return value::"
msgstr ""
#: ../blockstorage_get_capabilities.rst:142
msgid "Usage of volume type access extension"
msgstr ""
#: ../blockstorage_get_capabilities.rst:143
msgid ""
"Some volume types should be restricted only. For example, test volume types "
"where you are testing a new technology or ultra high performance volumes "
"(for special cases) where you do not want most users to be able to select "
"these volumes. An administrator/operator can then define private volume "
"types using cinder client. Volume type access extension adds the ability to "
"manage volume type access. Volume types are public by default. Private "
"volume types can be created by setting the 'is_public' Boolean field to "
"'False' at creation time. Access to a private volume type can be controlled "
"by adding or removing a project from it. Private volume types without "
"projects are only visible by users with the admin role/context."
msgstr ""
#: ../blockstorage_get_capabilities.rst:155
msgid "Create a public volume type by setting 'is_public' field to 'True'::"
msgstr ""
#: ../blockstorage_get_capabilities.rst:164
msgid "Create a private volume type by setting 'is_public' field to 'False'::"
msgstr ""
#: ../blockstorage_get_capabilities.rst:173
msgid "Get a list of the volume types::"
msgstr ""
#: ../blockstorage_get_capabilities.rst:184
msgid "Get a list of the projects::"
msgstr ""
#: ../blockstorage_get_capabilities.rst:197
msgid ""
"Add volume type access for the given demo project, using its project-id::"
msgstr ""
#: ../blockstorage_get_capabilities.rst:201
msgid "List the access information about the given volume type::"
msgstr ""
#: ../blockstorage_get_capabilities.rst:210
msgid "Remove volume type access for the given project::"
msgstr ""
#: ../blockstorage_glusterfs_backend.rst:3
msgid "Configure a GlusterFS back end"
msgstr ""
#: ../blockstorage_glusterfs_backend.rst:5
msgid ""
"This section explains how to configure OpenStack Block Storage to use "
"GlusterFS as a back end. You must be able to access the GlusterFS shares "
"from the server that hosts the ``cinder`` volume service."
msgstr ""
#: ../blockstorage_glusterfs_backend.rst:11
msgid ""
"The cinder volume service is named ``openstack-cinder-volume`` on the "
"following distributions:"
msgstr ""
# #-#-#-#-# blockstorage_glusterfs_backend.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# blockstorage_nfs_backend.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../blockstorage_glusterfs_backend.rst:14
#: ../blockstorage_glusterfs_backend.rst:135
#: ../blockstorage_nfs_backend.rst:14 ../blockstorage_nfs_backend.rst:73
msgid "CentOS"
msgstr ""
# #-#-#-#-# blockstorage_glusterfs_backend.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# blockstorage_nfs_backend.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../blockstorage_glusterfs_backend.rst:16
#: ../blockstorage_glusterfs_backend.rst:137
#: ../blockstorage_nfs_backend.rst:16 ../blockstorage_nfs_backend.rst:75
msgid "Fedora"
msgstr ""
# #-#-#-#-# blockstorage_glusterfs_backend.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# blockstorage_nfs_backend.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../blockstorage_glusterfs_backend.rst:18
#: ../blockstorage_glusterfs_backend.rst:139
#: ../blockstorage_nfs_backend.rst:18 ../blockstorage_nfs_backend.rst:77
msgid "openSUSE"
msgstr ""
# #-#-#-#-# blockstorage_glusterfs_backend.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# blockstorage_nfs_backend.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../blockstorage_glusterfs_backend.rst:20
#: ../blockstorage_glusterfs_backend.rst:141
#: ../blockstorage_nfs_backend.rst:20 ../blockstorage_nfs_backend.rst:79
msgid "Red Hat Enterprise Linux"
msgstr ""
# #-#-#-#-# blockstorage_glusterfs_backend.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# blockstorage_nfs_backend.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../blockstorage_glusterfs_backend.rst:22
#: ../blockstorage_glusterfs_backend.rst:143
#: ../blockstorage_nfs_backend.rst:22 ../blockstorage_nfs_backend.rst:81
msgid "SUSE Linux Enterprise"
msgstr ""
# #-#-#-#-# blockstorage_glusterfs_backend.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# blockstorage_nfs_backend.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../blockstorage_glusterfs_backend.rst:24 ../blockstorage_nfs_backend.rst:24
msgid ""
"In Ubuntu and Debian distributions, the ``cinder`` volume service is named "
"``cinder-volume``."
msgstr ""
#: ../blockstorage_glusterfs_backend.rst:27
msgid ""
"Mounting GlusterFS volumes requires utilities and libraries from the "
"``glusterfs-fuse`` package. This package must be installed on all systems "
"that will access volumes backed by GlusterFS."
msgstr ""
#: ../blockstorage_glusterfs_backend.rst:33
msgid ""
"The utilities and libraries required for mounting GlusterFS volumes on "
"Ubuntu and Debian distributions are available from the ``glusterfs-client`` "
"package instead."
msgstr ""
#: ../blockstorage_glusterfs_backend.rst:37
msgid ""
"For information on how to install and configure GlusterFS, refer to the "
"`GlusterDocumentation`_ page."
msgstr ""
#: ../blockstorage_glusterfs_backend.rst:40
msgid "**Configure GlusterFS for OpenStack Block Storage**"
msgstr ""
#: ../blockstorage_glusterfs_backend.rst:42
msgid ""
"The GlusterFS server must also be configured accordingly in order to allow "
"OpenStack Block Storage to use GlusterFS shares:"
msgstr ""
#: ../blockstorage_glusterfs_backend.rst:45
msgid "Log in as ``root`` to the GlusterFS server."
msgstr ""
#: ../blockstorage_glusterfs_backend.rst:47
msgid ""
"Set each Gluster volume to use the same UID and GID as the ``cinder`` user::"
msgstr ""
# #-#-#-#-# blockstorage_glusterfs_backend.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# blockstorage_nfs_backend.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# compute-flavors.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# identity_service_api_protection.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# keystone_secure_identity_to_ldap_backend.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../blockstorage_glusterfs_backend.rst:52
#: ../blockstorage_glusterfs_backend.rst:96 ../blockstorage_nfs_backend.rst:41
#: ../compute-flavors.rst:329 ../compute-flavors.rst:350
#: ../identity_service_api_protection.rst:19
#: ../keystone_secure_identity_to_ldap_backend.rst:62
msgid "Where:"
msgstr ""
#: ../blockstorage_glusterfs_backend.rst:54
msgid "VOL_NAME is the Gluster volume name."
msgstr ""
#: ../blockstorage_glusterfs_backend.rst:56
msgid "CINDER_UID is the UID of the ``cinder`` user."
msgstr ""
#: ../blockstorage_glusterfs_backend.rst:58
msgid "CINDER_GID is the GID of the ``cinder`` user."
msgstr ""
#: ../blockstorage_glusterfs_backend.rst:62
msgid ""
"The default UID and GID of the ``cinder`` user is 165 on most distributions."
msgstr ""
#: ../blockstorage_glusterfs_backend.rst:65
msgid ""
"Configure each Gluster volume to accept ``libgfapi`` connections. To do "
"this, set each Gluster volume to allow insecure ports::"
msgstr ""
#: ../blockstorage_glusterfs_backend.rst:70
msgid ""
"Enable client connections from unprivileged ports. To do this, add the "
"following line to :file:`/etc/glusterfs/glusterd.vol`::"
msgstr ""
#: ../blockstorage_glusterfs_backend.rst:75
msgid "Restart the ``glusterd`` service::"
msgstr ""
#: ../blockstorage_glusterfs_backend.rst:81
msgid "**Configure Block Storage to use a GlusterFS back end**"
msgstr ""
#: ../blockstorage_glusterfs_backend.rst:83
msgid "After you configure the GlusterFS service, complete these steps:"
msgstr ""
#: ../blockstorage_glusterfs_backend.rst:85
msgid "Log in as ``root`` to the system hosting the Block Storage service."
msgstr ""
#: ../blockstorage_glusterfs_backend.rst:87
msgid "Create a text file named :file:`glusterfs` in :file:`/etc/cinder/`."
msgstr ""
#: ../blockstorage_glusterfs_backend.rst:89
msgid ""
"Add an entry to :file:`/etc/cinder/glusterfs` for each GlusterFS share that "
"OpenStack Block Storage should use for back end storage. Each entry should "
"be a separate line, and should use the following format::"
msgstr ""
#: ../blockstorage_glusterfs_backend.rst:98
msgid "HOST is the IP address or host name of the Red Hat Storage server."
msgstr ""
#: ../blockstorage_glusterfs_backend.rst:100
msgid ""
"VOL_NAME is the name of an existing and accessible volume on the GlusterFS "
"server."
msgstr ""
#: ../blockstorage_glusterfs_backend.rst:105
msgid ""
"Optionally, if your environment requires additional mount options for a "
"share, you can add them to the share's entry::"
msgstr ""
#: ../blockstorage_glusterfs_backend.rst:110
msgid "Replace OPTIONS with a comma-separated list of mount options."
msgstr ""
#: ../blockstorage_glusterfs_backend.rst:112
msgid ""
"Set :file:`/etc/cinder/glusterfs` to be owned by the root user and the "
"``cinder`` group::"
msgstr ""
#: ../blockstorage_glusterfs_backend.rst:117
msgid ""
"Set :file:`/etc/cinder/glusterfs` to be readable by members of the "
"``cinder`` group::"
msgstr ""
#: ../blockstorage_glusterfs_backend.rst:122
msgid ""
"Configure OpenStack Block Storage to use the :file:`/etc/cinder/glusterfs` "
"file created earlier. To do so, open the :file:`/etc/cinder/cinder.conf` "
"configuration file and set the ``glusterfs_shares_config`` configuration key "
"to :file:`/etc/cinder/glusterfs`."
msgstr ""
#: ../blockstorage_glusterfs_backend.rst:127
msgid ""
"On distributions that include openstack-config, you can configure this by "
"running the following command instead::"
msgstr ""
#: ../blockstorage_glusterfs_backend.rst:133
msgid "The following distributions include ``openstack-config``:"
msgstr ""
#: ../blockstorage_glusterfs_backend.rst:147
msgid ""
"Configure OpenStack Block Storage to use the correct volume driver, namely "
"``cinder.volume.drivers.glusterfs.GlusterfsDriver``. To do so, open the :"
"file:`/etc/cinder/cinder.conf` configuration file and set the "
"``volume_driver`` configuration key to ``cinder.volume.drivers.glusterfs."
"GlusterfsDriver``."
msgstr ""
# #-#-#-#-# blockstorage_glusterfs_backend.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# blockstorage_nfs_backend.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../blockstorage_glusterfs_backend.rst:159
#: ../blockstorage_nfs_backend.rst:113
msgid "You can now restart the service to apply the configuration."
msgstr ""
# #-#-#-#-# blockstorage_glusterfs_backend.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# blockstorage_nfs_backend.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../blockstorage_glusterfs_backend.rst:161
#: ../blockstorage_nfs_backend.rst:115
msgid ""
"To restart the ``cinder`` volume service on CentOS, Fedora, openSUSE, Red "
"Hat Enterprise Linux, or SUSE Linux Enterprise, run::"
msgstr ""
# #-#-#-#-# blockstorage_glusterfs_backend.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# blockstorage_nfs_backend.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../blockstorage_glusterfs_backend.rst:166
#: ../blockstorage_nfs_backend.rst:121
msgid "To restart the ``cinder`` volume service on Ubuntu or Debian, run::"
msgstr ""
#: ../blockstorage_glusterfs_backend.rst:170
msgid "OpenStack Block Storage is now configured to use a GlusterFS back end."
msgstr ""
#: ../blockstorage_glusterfs_backend.rst:174
msgid ""
"If a client host has SELinux enabled, the ``virt_use_fusefs`` boolean should "
"also be enabled if the host requires access to GlusterFS volumes on an "
"instance. To enable this Boolean, run the following command as the ``root`` "
"user::"
msgstr ""
#: ../blockstorage_glusterfs_backend.rst:181
msgid ""
"This command also makes the Boolean persistent across reboots. Run this "
"command on all client hosts that require access to GlusterFS volumes on an "
"instance. This includes all compute nodes."
msgstr ""
#: ../blockstorage_glusterfs_removal.rst:5
msgid "Gracefully remove a GlusterFS volume from usage"
msgstr ""
#: ../blockstorage_glusterfs_removal.rst:7
msgid ""
"Configuring the ``cinder`` volume service to use GlusterFS involves creating "
"a shares file (for example, :file:`/etc/cinder/glusterfs`). This shares file "
"lists each GlusterFS volume (with its corresponding storage server) that the "
"``cinder`` volume service can use for back end storage."
msgstr ""
#: ../blockstorage_glusterfs_removal.rst:12
msgid ""
"To remove a GlusterFS volume from usage as a back end, delete the volume's "
"corresponding entry from the shares file. After doing so, restart the Block "
"Storage services."
msgstr ""
#: ../blockstorage_glusterfs_removal.rst:16
msgid ""
"To restart the Block Storage services on CentOS, Fedora, openSUSE, Red Hat "
"Enterprise Linux, or SUSE Linux Enterprise, run::"
msgstr ""
#: ../blockstorage_glusterfs_removal.rst:21
msgid "To restart the Block Storage services on Ubuntu or Debian, run::"
msgstr ""
#: ../blockstorage_glusterfs_removal.rst:25
msgid ""
"Restarting the Block Storage services will prevent the ``cinder`` volume "
"service from exporting the deleted GlusterFS volume. This will prevent any "
"instances from mounting the volume from that point onwards."
msgstr ""
#: ../blockstorage_glusterfs_removal.rst:29
msgid ""
"However, the removed GlusterFS volume might still be mounted on an instance "
"at this point. Typically, this is the case when the volume was already "
"mounted while its entry was deleted from the shares file. Whenever this "
"occurs, you will have to unmount the volume as normal after the Block "
"Storage services are restarted."
msgstr ""
#: ../blockstorage_image_volume_cache.rst:6
msgid "Image-Volume cache"
msgstr ""
#: ../blockstorage_image_volume_cache.rst:8
msgid ""
"OpenStack Block Storage has an optional Image cache which can dramatically "
"improve the performance of creating a volume from an image. The improvement "
"depends on many factors, primarily how quickly the configured back end can "
"clone a volume."
msgstr ""
#: ../blockstorage_image_volume_cache.rst:13
msgid ""
"When a volume is first created from an image, a new cached image-volume will "
"be created that is owned by the Block Storage Internal Tenant. Subsequent "
"requests to create volumes from that image will clone the cached version "
"instead of downloading the image contents and copying data to the volume."
msgstr ""
#: ../blockstorage_image_volume_cache.rst:18
msgid ""
"The cache itself is configurable per back end and will contain the most "
"recently used images."
msgstr ""
#: ../blockstorage_image_volume_cache.rst:22
msgid "Configure the Internal Tenant"
msgstr ""
#: ../blockstorage_image_volume_cache.rst:24
msgid ""
"The Image-Volume cache requires that the Internal Tenant be configured for "
"the Block Storage services. This tenant will own the cached image-volumes so "
"they can be managed like normal users including tools like volume quotas. "
"This protects normal users from having to see the cached image-volumes, but "
"does not make them globally hidden."
msgstr ""
#: ../blockstorage_image_volume_cache.rst:30
msgid ""
"To enable the Block Storage services to have access to an Internal Tenant, "
"set the following options in the :file:`cinder.conf` file::"
msgstr ""
#: ../blockstorage_image_volume_cache.rst:43
msgid ""
"The actual user and project that are configured for the Internal Tenant do "
"not require any special privileges. They can be the Block Storage service "
"tenant or can be any normal project and user."
msgstr ""
#: ../blockstorage_image_volume_cache.rst:48
msgid "Configure the Image-Volume cache"
msgstr ""
#: ../blockstorage_image_volume_cache.rst:50
msgid ""
"To enable the Image-Volume cache, set the following configuration option in :"
"file:`cinder.conf`::"
msgstr ""
#: ../blockstorage_image_volume_cache.rst:55
msgid "This can be scoped per back end definition or in the default options."
msgstr ""
#: ../blockstorage_image_volume_cache.rst:57
msgid ""
"There are optional configuration settings that can limit the size of the "
"cache. These can also be scoped per back end or in the default options in :"
"file:`cinder.conf`::"
msgstr ""
#: ../blockstorage_image_volume_cache.rst:64
msgid "By default they will be set to 0, which means unlimited."
msgstr ""
#: ../blockstorage_image_volume_cache.rst:66
msgid ""
"For example, a configuration which would limit the max size to 200 GB and 50 "
"cache entries will be configured as::"
msgstr ""
# #-#-#-#-# blockstorage_image_volume_cache.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# networking_adv-operational-features.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# telemetry-data-collection.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../blockstorage_image_volume_cache.rst:73
#: ../networking_adv-operational-features.rst:47
#: ../telemetry-data-collection.rst:24 ../telemetry-data-collection.rst:35
msgid "Notifications"
msgstr ""
#: ../blockstorage_image_volume_cache.rst:75
msgid ""
"Cache actions will trigger Telemetry messages. There are several that will "
"be sent."
msgstr ""
#: ../blockstorage_image_volume_cache.rst:78
msgid ""
"``image_volume_cache.miss`` - A volume is being created from an image which "
"was not found in the cache. Typically this will mean a new cache entry would "
"be created for it."
msgstr ""
#: ../blockstorage_image_volume_cache.rst:82
msgid ""
"``image_volume_cache.hit`` - A volume is being created from an image which "
"was found in the cache and the fast path can be taken."
msgstr ""
#: ../blockstorage_image_volume_cache.rst:85
msgid ""
"``image_volume_cache.evict`` - A cached image-volume has been deleted from "
"the cache."
msgstr ""
#: ../blockstorage_image_volume_cache.rst:90
msgid "Managing cached Image-Volumes"
msgstr ""
#: ../blockstorage_image_volume_cache.rst:92
msgid ""
"In normal usage there should be no need for manual intervention with the "
"cache. The entries and their backing Image-Volumes are managed automatically."
msgstr ""
#: ../blockstorage_image_volume_cache.rst:95
msgid ""
"If needed, you can delete these volumes manually to clear the cache. By "
"using the standard volume deletion APIs, the Block Storage service will "
"clean up correctly."
msgstr ""
# #-#-#-#-# blockstorage_multi_backend.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# blockstorage_volume_number_weigher.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../blockstorage_multi_backend.rst:5
#: ../blockstorage_volume_number_weigher.rst:22
msgid "Configure multiple-storage back ends"
msgstr ""
#: ../blockstorage_multi_backend.rst:7
msgid ""
"When you configure multiple-storage back ends, you can create several back-"
"end storage solutions that serve the same OpenStack Compute configuration "
"and one ``cinder-volume`` is launched for each back-end storage or back-end "
"storage pool."
msgstr ""
#: ../blockstorage_multi_backend.rst:12
msgid ""
"In a multiple-storage back-end configuration, each back end has a name "
"(``volume_backend_name``). Several back ends can have the same name. In that "
"case, the scheduler properly decides which back end the volume has to be "
"created in."
msgstr ""
#: ../blockstorage_multi_backend.rst:17
msgid ""
"The name of the back end is declared as an extra-specification of a volume "
"type (such as, ``volume_backend_name=LVM``). When a volume is created, the "
"scheduler chooses an appropriate back end to handle the request, according "
"to the volume type specified by the user."
msgstr ""
#: ../blockstorage_multi_backend.rst:23
msgid "Enable multiple-storage back ends"
msgstr ""
#: ../blockstorage_multi_backend.rst:25
msgid ""
"To enable a multiple-storage back ends, you must set the `enabled_backends` "
"flag in the :file:`cinder.conf` file. This flag defines the names (separated "
"by a comma) of the configuration groups for the different back ends: one "
"name is associated to one configuration group for a back end (such as, "
"``[lvmdriver-1]``)."
msgstr ""
#: ../blockstorage_multi_backend.rst:33
msgid ""
"The configuration group name is not related to the ``volume_backend_name``."
msgstr ""
#: ../blockstorage_multi_backend.rst:37
msgid ""
"After setting the `enabled_backends` flag on an existing cinder service, and "
"restarting the Block Storage services, the original ``host`` service is "
"replaced with a new host service. The new service appears with a name like "
"``host@backend``. Use::"
msgstr ""
#: ../blockstorage_multi_backend.rst:44
msgid "to convert current block devices to the new hostname."
msgstr ""
#: ../blockstorage_multi_backend.rst:46
msgid ""
"The options for a configuration group must be defined in the group (or "
"default options are used). All the standard Block Storage configuration "
"options (``volume_group``, ``volume_driver``, and so on) might be used in a "
"configuration group. Configuration values in the ``[DEFAULT]`` configuration "
"group are not used."
msgstr ""
#: ../blockstorage_multi_backend.rst:52
msgid "These examples show three back ends:"
msgstr ""
#: ../blockstorage_multi_backend.rst:70
msgid ""
"In this configuration, ``lvmdriver-1`` and ``lvmdriver-2`` have the same "
"``volume_backend_name``. If a volume creation requests the ``LVM`` back end "
"name, the scheduler uses the capacity filter scheduler to choose the most "
"suitable driver, which is either ``lvmdriver-1`` or ``lvmdriver-2``. The "
"capacity filter scheduler is enabled by default. The next section provides "
"more information. In addition, this example presents a ``lvmdriver-3`` back "
"end."
msgstr ""
#: ../blockstorage_multi_backend.rst:80
msgid ""
"For Fiber Channel drivers that support multipath, the configuration group "
"requires the ``use_multipath_for_image_xfer=true`` option. In the example "
"below, you can see details for HPE 3PAR and EMC Fiber Channel drivers."
msgstr ""
#: ../blockstorage_multi_backend.rst:98
msgid "Configure Block Storage scheduler multi back end"
msgstr ""
#: ../blockstorage_multi_backend.rst:100
msgid ""
"You must enable the `filter_scheduler` option to use multiple-storage back "
"ends. The filter scheduler:"
msgstr ""
#: ../blockstorage_multi_backend.rst:103
msgid ""
"Filters the available back ends. By default, ``AvailabilityZoneFilter``, "
"``CapacityFilter`` and ``CapabilitiesFilter`` are enabled."
msgstr ""
#: ../blockstorage_multi_backend.rst:106
msgid ""
"Weights the previously filtered back ends. By default, the `CapacityWeigher` "
"option is enabled. When this option is enabled, the filter scheduler assigns "
"the highest weight to back ends with the most available capacity."
msgstr ""
#: ../blockstorage_multi_backend.rst:111
msgid ""
"The scheduler uses filters and weights to pick the best back end to handle "
"the request. The scheduler uses volume types to explicitly create volumes on "
"specific back ends."
msgstr ""
# #-#-#-#-# blockstorage_multi_backend.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# blockstorage_volume_number_weigher.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../blockstorage_multi_backend.rst:119
#: ../blockstorage_volume_number_weigher.rst:47
msgid "Volume type"
msgstr ""
#: ../blockstorage_multi_backend.rst:121
msgid ""
"Before using it, a volume type has to be declared to Block Storage. This can "
"be done by the following command::"
msgstr ""
#: ../blockstorage_multi_backend.rst:126
msgid ""
"Then, an extra-specification has to be created to link the volume type to a "
"back end name. Run this command::"
msgstr ""
#: ../blockstorage_multi_backend.rst:132
msgid ""
"This example creates a ``lvm`` volume type with "
"``volume_backend_name=LVM_iSCSI`` as extra-specifications."
msgstr ""
#: ../blockstorage_multi_backend.rst:135
msgid "Create another volume type::"
msgstr ""
#: ../blockstorage_multi_backend.rst:142
msgid ""
"This second volume type is named ``lvm_gold`` and has ``LVM_iSCSI_b`` as "
"back end name."
msgstr ""
#: ../blockstorage_multi_backend.rst:147
msgid "To list the extra-specifications, use this command::"
msgstr ""
#: ../blockstorage_multi_backend.rst:153
msgid ""
"If a volume type points to a ``volume_backend_name`` that does not exist in "
"the Block Storage configuration, the ``filter_scheduler`` returns an error "
"that it cannot find a valid host with the suitable back end."
msgstr ""
# #-#-#-#-# blockstorage_multi_backend.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# blockstorage_volume_number_weigher.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../blockstorage_multi_backend.rst:159
#: ../blockstorage_volume_number_weigher.rst:61
msgid "Usage"
msgstr ""
#: ../blockstorage_multi_backend.rst:161
msgid ""
"When you create a volume, you must specify the volume type. The extra-"
"specifications of the volume type are used to determine which back end has "
"to be used."
msgstr ""
#: ../blockstorage_multi_backend.rst:169
msgid ""
"Considering the :file:`cinder.conf` described previously, the scheduler "
"creates this volume on ``lvmdriver-1`` or ``lvmdriver-2``."
msgstr ""
#: ../blockstorage_multi_backend.rst:176
msgid "This second volume is created on ``lvmdriver-3``."
msgstr ""
#: ../blockstorage_nfs_backend.rst:3
msgid "Configure an NFS storage back end"
msgstr ""
#: ../blockstorage_nfs_backend.rst:5
msgid ""
"This section explains how to configure OpenStack Block Storage to use NFS "
"storage. You must be able to access the NFS shares from the server that "
"hosts the ``cinder`` volume service."
msgstr ""
#: ../blockstorage_nfs_backend.rst:11
msgid ""
"The ``cinder`` volume service is named ``openstack-cinder-volume`` on the "
"following distributions:"
msgstr ""
#: ../blockstorage_nfs_backend.rst:27
msgid "**Configure Block Storage to use an NFS storage back end**"
msgstr ""
#: ../blockstorage_nfs_backend.rst:29
msgid "Log in as ``root`` to the system hosting the ``cinder`` volume service."
msgstr ""
#: ../blockstorage_nfs_backend.rst:32
msgid "Create a text file named :file:`nfsshares` in :file:`/etc/cinder/`."
msgstr ""
#: ../blockstorage_nfs_backend.rst:34
msgid ""
"Add an entry to :file:`/etc/cinder/nfsshares` for each NFS share that the "
"``cinder`` volume service should use for back end storage. Each entry should "
"be a separate line, and should use the following format:"
msgstr ""
#: ../blockstorage_nfs_backend.rst:39
msgid "``HOST:SHARE``"
msgstr ""
#: ../blockstorage_nfs_backend.rst:43
msgid "HOST is the IP address or host name of the NFS server."
msgstr ""
#: ../blockstorage_nfs_backend.rst:45
msgid "SHARE is the absolute path to an existing and accessible NFS share."
msgstr ""
#: ../blockstorage_nfs_backend.rst:49
msgid ""
"Set :file:`/etc/cinder/nfsshares` to be owned by the ``root`` user and the "
"``cinder`` group::"
msgstr ""
#: ../blockstorage_nfs_backend.rst:54
msgid ""
"Set :file:`/etc/cinder/nfsshares` to be readable by members of the cinder "
"group::"
msgstr ""
#: ../blockstorage_nfs_backend.rst:59
msgid ""
"Configure the cinder volume service to use the :file:`/etc/cinder/nfsshares` "
"file created earlier. To do so, open the :file:`/etc/cinder/cinder.conf` "
"configuration file and set the ``nfs_shares_config`` configuration key to :"
"file:`/etc/cinder/nfsshares`."
msgstr ""
#: ../blockstorage_nfs_backend.rst:71
msgid "The following distributions include openstack-config:"
msgstr ""
#: ../blockstorage_nfs_backend.rst:85
msgid ""
"Optionally, provide any additional NFS mount options required in your "
"environment in the ``nfs_mount_options`` configuration key of :file:`/etc/"
"cinder/cinder.conf`. If your NFS shares do not require any additional mount "
"options (or if you are unsure), skip this step."
msgstr ""
#: ../blockstorage_nfs_backend.rst:97
msgid ""
"Replace OPTIONS with the mount options to be used when accessing NFS shares. "
"See the manual page for NFS for more information on available mount options "
"(:command:`man nfs`)."
msgstr ""
#: ../blockstorage_nfs_backend.rst:101
msgid ""
"Configure the ``cinder`` volume service to use the correct volume driver, "
"namely cinder.volume.drivers.nfs.NfsDriver. To do so, open the :file:`/etc/"
"cinder/cinder.conf` configuration file and set the volume_driver "
"configuration key to ``cinder.volume.drivers.nfs.NfsDriver``."
msgstr ""
#: ../blockstorage_nfs_backend.rst:127
msgid ""
"The ``nfs_sparsed_volumes`` configuration key determines whether volumes are "
"created as sparse files and grown as needed or fully allocated up front. The "
"default and recommended value is ``true``, which ensures volumes are "
"initially created as sparse files."
msgstr ""
#: ../blockstorage_nfs_backend.rst:132
msgid ""
"Setting ``nfs_sparsed_volumes`` to ``false`` will result in volumes being "
"fully allocated at the time of creation. This leads to increased delays in "
"volume creation."
msgstr ""
#: ../blockstorage_nfs_backend.rst:136
msgid ""
"However, should you choose to set ``nfs_sparsed_volumes`` to false, you can "
"do so directly in :file:`/etc/cinder/cinder.conf`."
msgstr ""
#: ../blockstorage_nfs_backend.rst:147
msgid ""
"If a client host has SELinux enabled, the ``virt_use_nfs`` boolean should "
"also be enabled if the host requires access to NFS volumes on an instance. "
"To enable this boolean, run the following command as the ``root`` user::"
msgstr ""
#: ../blockstorage_nfs_backend.rst:154
msgid ""
"This command also makes the boolean persistent across reboots. Run this "
"command on all client hosts that require access to NFS volumes on an "
"instance. This includes all compute nodes."
msgstr ""
#: ../blockstorage_over_subscription.rst:5
msgid "Oversubscription in thin provisioning"
msgstr ""
#: ../blockstorage_over_subscription.rst:7
msgid ""
"OpenStack Block Storage enables you to choose a volume back end based on "
"virtual capacities for thin provisioning using the oversubscription ratio."
msgstr ""
#: ../blockstorage_over_subscription.rst:10
msgid ""
"A reference implementation is provided for the default LVM driver. The "
"illustration below uses the LVM driver as an example."
msgstr ""
#: ../blockstorage_over_subscription.rst:14
msgid "Configure oversubscription settings"
msgstr ""
#: ../blockstorage_over_subscription.rst:16
msgid ""
"To support oversubscription in thin provisioning, a flag "
"``max_over_subscription_ratio`` is introduced into :file:`cinder.conf`. This "
"is a float representation of the oversubscription ratio when thin "
"provisioning is involved. Default ratio is 20.0, meaning provisioned "
"capacity can be 20 times of the total physical capacity. A ratio of 10.5 "
"means provisioned capacity can be 10.5 times of the total physical capacity. "
"A ratio of 1.0 means provisioned capacity cannot exceed the total physical "
"capacity. A ratio lower than 1.0 is ignored and the default value is used "
"instead."
msgstr ""
#: ../blockstorage_over_subscription.rst:28
msgid ""
"``max_over_subscription_ratio`` can be configured for each back end when "
"multiple-storage back ends are enabled. It is provided as a reference "
"implementation and is used by the LVM driver. However, it is not a "
"requirement for a driver to use this option from :file:`cinder.conf`."
msgstr ""
#: ../blockstorage_over_subscription.rst:33
msgid ""
"``max_over_subscription_ratio`` is for configuring a back end. For a driver "
"that supports multiple pools per back end, it can report this ratio for each "
"pool. The LVM driver does not support multiple pools."
msgstr ""
#: ../blockstorage_over_subscription.rst:37
msgid ""
"The existing ``reserved_percentage`` flag is used to prevent over "
"provisioning. This flag represents the percentage of the back-end capacity "
"that is reserved."
msgstr ""
#: ../blockstorage_over_subscription.rst:42
msgid ""
"There is a change on how ``reserved_percentage`` is used. It was measured "
"against the free capacity in the past. Now it is measured against the total "
"capacity."
msgstr ""
#: ../blockstorage_over_subscription.rst:47
msgid "Capabilities"
msgstr ""
#: ../blockstorage_over_subscription.rst:49
msgid "Drivers can report the following capabilities for a back end or a pool:"
msgstr ""
#: ../blockstorage_over_subscription.rst:58
msgid ""
"Where ``PROVISIONED_CAPACITY`` is the apparent allocated space indicating "
"how much capacity has been provisioned and ``MAX_RATIO`` is the maximum "
"oversubscription ratio. For the LVM driver, it is "
"``max_over_subscription_ratio`` in :file:`cinder.conf`."
msgstr ""
#: ../blockstorage_over_subscription.rst:63
msgid ""
"Two capabilities are added here to allow a back end or pool to claim support "
"for thin provisioning, or thick provisioning, or both."
msgstr ""
#: ../blockstorage_over_subscription.rst:66
msgid ""
"The LVM driver reports ``thin_provisioning_support=True`` and "
"``thick_provisioning_support=False`` if the ``lvm_type`` flag in :file:"
"`cinder.conf` is ``thin``. Otherwise it reports "
"``thin_provisioning_support=False`` and ``thick_provisioning_support=True``."
msgstr ""
#: ../blockstorage_over_subscription.rst:72
msgid "Volume type extra specs"
msgstr ""
#: ../blockstorage_over_subscription.rst:74
msgid ""
"If volume type is provided as part of the volume creation request, it can "
"have the following extra specs defined:"
msgstr ""
#: ../blockstorage_over_subscription.rst:84
msgid ""
"``capabilities`` scope key before ``thin_provisioning_support`` and "
"``thick_provisioning_support`` is not required. So the following works too:"
msgstr ""
#: ../blockstorage_over_subscription.rst:92
msgid ""
"The above extra specs are used by the scheduler to find a back end that "
"supports thin provisioning, thick provisioning, or both to match the needs "
"of a specific volume type."
msgstr ""
#: ../blockstorage_over_subscription.rst:97
msgid "Volume replication extra specs"
msgstr ""
#: ../blockstorage_over_subscription.rst:99
msgid ""
"OpenStack Block Storage has the ability to create volume replicas. Cloud "
"administrators can define a storage policy that includes replication by "
"adjusting the cinder volume driver. Volume replication for OpenStack Block "
"Storage helps safeguard OpenStack environments from data loss during "
"disaster recovery."
msgstr ""
#: ../blockstorage_over_subscription.rst:105
msgid ""
"To enable replication when creating volume types, configure the cinder "
"volume with ``capabilities:replication=\"<is> True\"``."
msgstr ""
#: ../blockstorage_over_subscription.rst:108
msgid ""
"Each volume created with the replication capability set to `True` generates "
"a copy of the volume on a storage back end."
msgstr ""
#: ../blockstorage_over_subscription.rst:111
msgid ""
"One use case for replication involves an OpenStack cloud environment "
"installed across two data centers located nearby each other. The distance "
"between the two data centers in this use case is the length of a city."
msgstr ""
#: ../blockstorage_over_subscription.rst:116
msgid ""
"At each data center, a cinder host supports the Block Storage service. Both "
"data centers include storage back ends."
msgstr ""
#: ../blockstorage_over_subscription.rst:119
msgid ""
"Depending on the storage requirements, there can be one or two cinder hosts. "
"The cloud administrator accesses the :file:`/etc/cinder/cinder.conf` "
"configuration file and sets ``capabilities:replication=\"<is> True\"``."
msgstr ""
#: ../blockstorage_over_subscription.rst:124
msgid ""
"If one data center experiences a service failure, cloud administrators can "
"redeploy the VM. The VM will run using a replicated, backed up volume on a "
"host in the second data center."
msgstr ""
#: ../blockstorage_over_subscription.rst:129
msgid "Capacity filter"
msgstr ""
#: ../blockstorage_over_subscription.rst:131
msgid ""
"In the capacity filter, ``max_over_subscription_ratio`` is used when "
"choosing a back end if ``thin_provisioning_support`` is True and "
"``max_over_subscription_ratio`` is greater than 1.0."
msgstr ""
#: ../blockstorage_over_subscription.rst:136
msgid "Capacity weigher"
msgstr ""
#: ../blockstorage_over_subscription.rst:138
msgid ""
"In the capacity weigher, virtual free capacity is used for ranking if "
"``thin_provisioning_support`` is True. Otherwise, real free capacity will be "
"used as before."
msgstr ""
#: ../blockstorage_ratelimit_volume_copy_bandwidth.rst:5
msgid "Rate-limit volume copy bandwidth"
msgstr ""
#: ../blockstorage_ratelimit_volume_copy_bandwidth.rst:7
msgid ""
"When you create a new volume from an image or an existing volume, or when "
"you upload a volume image to the Image service, large data copy may stress "
"disk and network bandwidth. To mitigate slow down of data access from the "
"instances, OpenStack Block Storage supports rate-limiting of volume data "
"copy bandwidth."
msgstr ""
#: ../blockstorage_ratelimit_volume_copy_bandwidth.rst:14
msgid "Configure volume copy bandwidth limit"
msgstr ""
#: ../blockstorage_ratelimit_volume_copy_bandwidth.rst:16
msgid ""
"To configure the volume copy bandwidth limit, set the "
"``volume_copy_bps_limit`` option in the configuration groups for each back "
"end in the :file:`cinder.conf` file. This option takes the integer of "
"maximum bandwidth allowed for volume data copy in byte per second. If this "
"option is set to ``0``, the rate-limit is disabled."
msgstr ""
#: ../blockstorage_ratelimit_volume_copy_bandwidth.rst:22
msgid ""
"While multiple volume data copy operations are running in the same back end, "
"the specified bandwidth is divided to each copy."
msgstr ""
#: ../blockstorage_ratelimit_volume_copy_bandwidth.rst:25
msgid ""
"Example :file:`cinder.conf` configuration file to limit volume copy "
"bandwidth of ``lvmdriver-1`` up to 100 MiB/s:"
msgstr ""
#: ../blockstorage_ratelimit_volume_copy_bandwidth.rst:38
msgid ""
"This feature requires libcgroup to set up blkio cgroup for disk I/O "
"bandwidth limit. The libcgroup is provided by the cgroup-bin package in "
"Debian and Ubuntu, or by the libcgroup-tools package in Fedora, Red Hat "
"Enterprise Linux, CentOS, openSUSE, and SUSE Linux Enterprise."
msgstr ""
#: ../blockstorage_ratelimit_volume_copy_bandwidth.rst:45
msgid ""
"Some back ends which use remote file systems such as NFS are not supported "
"by this feature."
msgstr ""
#: ../blockstorage_volume_backed_image.rst:6
msgid "Volume-backed image"
msgstr ""
#: ../blockstorage_volume_backed_image.rst:8
msgid ""
"OpenStack Block Storage can quickly create a volume from an image that "
"refers to a volume storing image data (Image-Volume). Compared to the other "
"stores such as file and swift, creating a volume from a Volume-backed image "
"performs better when the block storage driver supports efficient volume "
"cloning."
msgstr ""
#: ../blockstorage_volume_backed_image.rst:13
msgid ""
"If the image is set to public in the Image service, the volume data can be "
"shared among tenants."
msgstr ""
#: ../blockstorage_volume_backed_image.rst:17
msgid "Configure the Volume-backed image"
msgstr ""
#: ../blockstorage_volume_backed_image.rst:19
msgid ""
"Volume-backed image feature requires locations information from the cinder "
"store of the Image service. To enable the Image service to use the cinder "
"store, add ``cinder`` to the ``stores`` option in the ``glance_store`` "
"section of the :file:`glance-api.conf` file::"
msgstr ""
#: ../blockstorage_volume_backed_image.rst:26
msgid ""
"To expose locations information, set the following options in the "
"``DEFAULT`` section of the :file:`glance-api.conf` file::"
msgstr ""
#: ../blockstorage_volume_backed_image.rst:31
msgid ""
"To enable the Block Storage services to create a new volume by cloning "
"Image- Volume, set the following options in the ``DEFAULT`` section of the :"
"file:`cinder.conf` file. For example::"
msgstr ""
#: ../blockstorage_volume_backed_image.rst:38
msgid ""
"To enable the :command:`cinder upload-to-image` command to create an image "
"that refers an Image-Volume, set the following options in each back-end "
"section of the :file:`cinder.conf` file::"
msgstr ""
#: ../blockstorage_volume_backed_image.rst:44
msgid ""
"By default, the :command:`upload-to-image` command creates the Image-Volume "
"in the current tenant. To store the Image-Volume into the internal tenant, "
"set the following options in each back-end section of the :file:`cinder."
"conf` file::"
msgstr ""
#: ../blockstorage_volume_backed_image.rst:52
msgid "Creating a Volume-backed image"
msgstr ""
#: ../blockstorage_volume_backed_image.rst:54
msgid ""
"To register an existing volume as a new Volume-backed image, use the "
"following commands::"
msgstr ""
#: ../blockstorage_volume_backed_image.rst:61
msgid ""
"If the ``image_upload_use_cinder_backend`` option is enabled, the following "
"command creates a new Image-Volume by cloning the specified volume and then "
"registers its location to a new image. The disk format and the container "
"format must be raw and bare (default). Otherwise, the image is uploaded to "
"the default store of the Image service.::"
msgstr ""
#: ../blockstorage_volume_backed_image.rst:70
msgid ""
"Currently, the cinder store of the Image services does not support uploading "
"and downloading of image data. By this limitation, Volume-backed images can "
"only be used to create a new volume."
msgstr ""
#: ../blockstorage_volume_backups.rst:5
msgid "Back up and restore volumes"
msgstr ""
#: ../blockstorage_volume_backups.rst:7
msgid ""
"The **cinder** command-line interface provides the tools for creating a "
"volume backup. You can restore a volume from a backup as long as the "
"backup's associated database information (or backup metadata) is intact in "
"the Block Storage database."
msgstr ""
#: ../blockstorage_volume_backups.rst:12
msgid "Run this command to create a backup of a volume::"
msgstr ""
#: ../blockstorage_volume_backups.rst:16
msgid ""
"Where *VOLUME* is the name or ID of the volume, ``incremental`` is a flag "
"that indicates whether an incremental backup should be performed, and "
"``force`` is a flag that allows or disallows backup of a volume when the "
"volume is attached to an instance."
msgstr ""
#: ../blockstorage_volume_backups.rst:21
msgid ""
"Without the ``incremental`` flag, a full backup is created by default. With "
"the ``incremental`` flag, an incremental backup is created."
msgstr ""
#: ../blockstorage_volume_backups.rst:24
msgid ""
"Without the ``force`` flag, the volume will be backed up only if its status "
"is ``available``. With the ``force`` flag, the volume will be backed up "
"whether its status is ``available`` or ``in-use``. A volume is ``in-use`` "
"when it is attached to an instance. The backup of an ``in-use`` volume means "
"your data is crash consistent. The ``force`` flag is False by default."
msgstr ""
#: ../blockstorage_volume_backups.rst:33
msgid ""
"The ``incremental`` and ``force`` flags are only available for block storage "
"API v2. You have to specify [--os-volume-api-version 2] in the **cinder** "
"command-line interface to use this parameter."
msgstr ""
#: ../blockstorage_volume_backups.rst:38
msgid "The ``force`` flag is new in OpenStack Liberty."
msgstr ""
#: ../blockstorage_volume_backups.rst:40
msgid ""
"The incremental backup is based on a parent backup which is an existing "
"backup with the latest timestamp. The parent backup can be a full backup or "
"an incremental backup depending on the timestamp."
msgstr ""
#: ../blockstorage_volume_backups.rst:47
msgid ""
"The first backup of a volume has to be a full backup. Attempting to do an "
"incremental backup without any existing backups will fail. There is an "
"``is_incremental`` flag that indicates whether a backup is incremental when "
"showing details on the backup. Another flag, ``has_dependent_backups``, "
"returned when showing backup details, will indicate whether the backup has "
"dependent backups. If it is true, attempting to delete this backup will fail."
msgstr ""
#: ../blockstorage_volume_backups.rst:55
msgid ""
"A new configure option ``backup_swift_block_size`` is introduced into :file:"
"`cinder.conf` for the default Swift backup driver. This is the size in bytes "
"that changes are tracked for incremental backups. The existing "
"``backup_swift_object_size`` option, the size in bytes of Swift backup "
"objects, has to be a multiple of ``backup_swift_block_size``. The default is "
"32768 for ``backup_swift_block_size``, and the default is 52428800 for "
"``backup_swift_object_size``."
msgstr ""
#: ../blockstorage_volume_backups.rst:63
msgid ""
"The configuration option ``backup_swift_enable_progress_timer`` in ``cinder."
"conf`` is used when backing up the volume to Object Storage back end. This "
"option enables or disables the timer. It is enabled by default to send the "
"periodic progress notifications to the Telemetry service."
msgstr ""
#: ../blockstorage_volume_backups.rst:68
msgid ""
"This command also returns a backup ID. Use this backup ID when restoring the "
"volume::"
msgstr ""
#: ../blockstorage_volume_backups.rst:73
msgid "When restoring from a full backup, it is a full restore."
msgstr ""
#: ../blockstorage_volume_backups.rst:75
msgid ""
"When restoring from an incremental backup, a list of backups is built based "
"on the IDs of the parent backups. A full restore is performed based on the "
"full backup first, then restore is done based on the incremental backup, "
"laying on top of it in order."
msgstr ""
#: ../blockstorage_volume_backups.rst:80
msgid ""
"You can view a backup list with the :command:`cinder backup-list` command. "
"Optional arguments to clarify the status of your backups include: running "
"``--name``, ``--status``, and ``--volume-id`` to filter through backups by "
"the specified name, status, or volume-id. Search with ``--all-tenants`` for "
"details of the tenants associated with the listed backups."
msgstr ""
#: ../blockstorage_volume_backups.rst:87
msgid ""
"Because volume backups are dependent on the Block Storage database, you must "
"also back up your Block Storage database regularly to ensure data recovery."
msgstr ""
#: ../blockstorage_volume_backups.rst:92
msgid ""
"Alternatively, you can export and save the metadata of selected volume "
"backups. Doing so precludes the need to back up the entire Block Storage "
"database. This is useful if you need only a small subset of volumes to "
"survive a catastrophic database failure."
msgstr ""
#: ../blockstorage_volume_backups.rst:97
msgid ""
"If you specify a UUID encryption key when setting up the volume "
"specifications, the backup metadata ensures that the key will remain valid "
"when you back up and restore the volume."
msgstr ""
#: ../blockstorage_volume_backups.rst:101
msgid ""
"For more information about how to export and import volume backup metadata, "
"see the section called :ref:`volume_backups_export_import`."
msgstr ""
#: ../blockstorage_volume_backups.rst:104
msgid "By default, the swift object store is used for the backup repository."
msgstr ""
#: ../blockstorage_volume_backups.rst:106
msgid ""
"If instead you want to use an NFS export as the backup repository, add the "
"following configuration options to the ``[DEFAULT]`` section of the :file:"
"`cinder.conf` file and restart the Block Storage services:"
msgstr ""
#: ../blockstorage_volume_backups.rst:115
msgid ""
"For the ``backup_share`` option, replace *HOST* with the DNS resolvable host "
"name or the IP address of the storage server for the NFS share, and "
"*EXPORT_PATH* with the path to that share. If your environment requires that "
"non-default mount options be specified for the share, set these as follows:"
msgstr ""
#: ../blockstorage_volume_backups.rst:125
msgid ""
"*MOUNT_OPTIONS* is a comma-separated string of NFS mount options as detailed "
"in the NFS man page."
msgstr ""
#: ../blockstorage_volume_backups.rst:128
msgid ""
"There are several other options whose default values may be overridden as "
"appropriate for your environment:"
msgstr ""
#: ../blockstorage_volume_backups.rst:137
msgid ""
"The option ``backup_compression_algorithm`` can be set to ``bz2`` or "
"``None``. The latter can be a useful setting when the server providing the "
"share for the backup repository itself performs deduplication or compression "
"on the backup data."
msgstr ""
#: ../blockstorage_volume_backups.rst:142
msgid ""
"The option ``backup_file_size`` must be a multiple of "
"``backup_sha_block_size_bytes``. It is effectively the maximum file size to "
"be used, given your environment, to hold backup data. Volumes larger than "
"this will be stored in multiple files in the backup repository. The "
"``backup_sha_block_size_bytes`` option determines the size of blocks from "
"the cinder volume being backed up on which digital signatures are calculated "
"in order to enable incremental backup capability."
msgstr ""
#: ../blockstorage_volume_backups.rst:150
msgid ""
"You also have the option of resetting the state of a backup. When creating "
"or restoring a backup, sometimes it may get stuck in the creating or "
"restoring states due to problems like the database or rabbitmq being down. "
"In situations like these resetting the state of the backup can restore it to "
"a functional status."
msgstr ""
#: ../blockstorage_volume_backups.rst:156
msgid "Run this command to restore the state of a backup::"
msgstr ""
#: ../blockstorage_volume_backups_export_import.rst:5
msgid "Export and import backup metadata"
msgstr ""
#: ../blockstorage_volume_backups_export_import.rst:8
msgid ""
"A volume backup can only be restored on the same Block Storage service. This "
"is because restoring a volume from a backup requires metadata available on "
"the database used by the Block Storage service."
msgstr ""
#: ../blockstorage_volume_backups_export_import.rst:14
msgid ""
"For information about how to back up and restore a volume, see the section "
"called :ref:`volume_backups`."
msgstr ""
#: ../blockstorage_volume_backups_export_import.rst:17
msgid ""
"You can, however, export the metadata of a volume backup. To do so, run this "
"command as an OpenStack ``admin`` user (presumably, after creating a volume "
"backup)::"
msgstr ""
#: ../blockstorage_volume_backups_export_import.rst:23
msgid ""
"Where *BACKUP_ID* is the volume backup's ID. This command should return the "
"backup's corresponding database information as encoded string metadata."
msgstr ""
#: ../blockstorage_volume_backups_export_import.rst:26
msgid ""
"Exporting and storing this encoded string metadata allows you to completely "
"restore the backup, even in the event of a catastrophic database failure. "
"This will preclude the need to back up the entire Block Storage database, "
"particularly if you only need to keep complete backups of a small subset of "
"volumes."
msgstr ""
#: ../blockstorage_volume_backups_export_import.rst:32
msgid ""
"If you have placed encryption on your volumes, the encryption will still be "
"in place when you restore the volume if a UUID encryption key is specified "
"when creating volumes. Using backup metadata support, UUID keys set up for a "
"volume (or volumes) will remain valid when you restore a backed-up volume. "
"The restored volume will remain encrypted, and will be accessible with your "
"credentials."
msgstr ""
#: ../blockstorage_volume_backups_export_import.rst:39
msgid ""
"In addition, having a volume backup and its backup metadata also provides "
"volume portability. Specifically, backing up a volume and exporting its "
"metadata will allow you to restore the volume on a completely different "
"Block Storage database, or even on a different cloud service. To do so, "
"first import the backup metadata to the Block Storage database and then "
"restore the backup."
msgstr ""
#: ../blockstorage_volume_backups_export_import.rst:46
msgid ""
"To import backup metadata, run the following command as an OpenStack "
"``admin``::"
msgstr ""
#: ../blockstorage_volume_backups_export_import.rst:51
msgid "Where *METADATA* is the backup metadata exported earlier."
msgstr ""
#: ../blockstorage_volume_backups_export_import.rst:53
msgid ""
"Once you have imported the backup metadata into a Block Storage database, "
"restore the volume (see the section called :ref:`volume_backups`)."
msgstr ""
#: ../blockstorage_volume_migration.rst:5
msgid "Migrate volumes"
msgstr ""
#: ../blockstorage_volume_migration.rst:7
msgid ""
"OpenStack has the ability to migrate volumes between back-ends which support "
"its volume-type. Migrating a volume transparently moves its data from the "
"current back-end for the volume to a new one. This is an administrator "
"function, and can be used for functions including storage evacuation (for "
"maintenance or decommissioning), or manual optimizations (for example, "
"performance, reliability, or cost)."
msgstr ""
#: ../blockstorage_volume_migration.rst:14
msgid "These workflows are possible for a migration:"
msgstr ""
#: ../blockstorage_volume_migration.rst:16
msgid ""
"If the storage can migrate the volume on its own, it is given the "
"opportunity to do so. This allows the Block Storage driver to enable "
"optimizations that the storage might be able to perform. If the back-end is "
"not able to perform the migration, the Block Storage uses one of two generic "
"flows, as follows."
msgstr ""
#: ../blockstorage_volume_migration.rst:22
msgid ""
"If the volume is not attached, the Block Storage service creates a volume "
"and copies the data from the original to the new volume."
msgstr ""
#: ../blockstorage_volume_migration.rst:27
msgid ""
"While most back-ends support this function, not all do. See the driver "
"documentation in the `OpenStack Configuration Reference <http://docs."
"openstack.org/liberty/config-reference/content/>`__ for more details."
msgstr ""
#: ../blockstorage_volume_migration.rst:32
msgid ""
"If the volume is attached to a VM instance, the Block Storage creates a "
"volume, and calls Compute to copy the data from the original to the new "
"volume. Currently this is supported only by the Compute libvirt driver."
msgstr ""
#: ../blockstorage_volume_migration.rst:36
msgid ""
"As an example, this scenario shows two LVM back-ends and migrates an "
"attached volume from one to the other. This scenario uses the third "
"migration flow."
msgstr ""
#: ../blockstorage_volume_migration.rst:39
msgid "First, list the available back-ends:"
msgstr ""
#: ../blockstorage_volume_migration.rst:60
msgid "Only Block Storage V2 API supports :command:`get-pools`."
msgstr ""
#: ../blockstorage_volume_migration.rst:62
msgid "You can also get available back-ends like following:"
msgstr ""
#: ../blockstorage_volume_migration.rst:70
msgid ""
"But it needs to add pool name in the end. For example, "
"``server1@lvmstorage-1#zone1``."
msgstr ""
#: ../blockstorage_volume_migration.rst:73
msgid ""
"Next, as the admin user, you can see the current status of the volume "
"(replace the example ID with your own):"
msgstr ""
#: ../blockstorage_volume_migration.rst:104
msgid "Note these attributes:"
msgstr ""
#: ../blockstorage_volume_migration.rst:106
msgid "``os-vol-host-attr:host`` - the volume's current back-end."
msgstr ""
#: ../blockstorage_volume_migration.rst:107
msgid ""
"``os-vol-mig-status-attr:migstat`` - the status of this volume's migration "
"(None means that a migration is not currently in progress)."
msgstr ""
#: ../blockstorage_volume_migration.rst:109
msgid ""
"``os-vol-mig-status-attr:name_id`` - the volume ID that this volume's name "
"on the back-end is based on. Before a volume is ever migrated, its name on "
"the back-end storage may be based on the volume's ID (see the "
"``volume_name_template`` configuration parameter). For example, if "
"``volume_name_template`` is kept as the default value (``volume-%s``), your "
"first LVM back-end has a logical volume named ``volume-6088f80a-f116-4331-"
"ad48-9afb0dfb196c``. During the course of a migration, if you create a "
"volume and copy over the data, the volume get the new name but keeps its "
"original ID. This is exposed by the ``name_id`` attribute."
msgstr ""
#: ../blockstorage_volume_migration.rst:122
msgid ""
"If you plan to decommission a block storage node, you must stop the "
"``cinder`` volume service on the node after performing the migration."
msgstr ""
#: ../blockstorage_volume_migration.rst:125
msgid ""
"On nodes that run CentOS, Fedora, openSUSE, Red Hat Enterprise Linux, or "
"SUSE Linux Enterprise, run:"
msgstr ""
#: ../blockstorage_volume_migration.rst:133
msgid "On nodes that run Ubuntu or Debian, run:"
msgstr ""
#: ../blockstorage_volume_migration.rst:140
msgid ""
"Stopping the cinder volume service will prevent volumes from being allocated "
"to the node."
msgstr ""
#: ../blockstorage_volume_migration.rst:143
msgid "Migrate this volume to the second LVM back-end:"
msgstr ""
#: ../blockstorage_volume_migration.rst:150
msgid ""
"You can use the :command:`cinder show` command to see the status of the "
"migration. While migrating, the ``migstat`` attribute shows states such as "
"``migrating`` or ``completing``. On error, ``migstat`` is set to None and "
"the host attribute shows the original ``host``. On success, in this example, "
"the output looks like:"
msgstr ""
#: ../blockstorage_volume_migration.rst:180
msgid ""
"Note that ``migstat`` is None, host is the new host, and ``name_id`` holds "
"the ID of the volume created by the migration. If you look at the second LVM "
"back end, you find the logical volume ``volume-133d1f56-9ffc-4f57-8798-"
"d5217d851862``."
msgstr ""
#: ../blockstorage_volume_migration.rst:187
msgid ""
"The migration is not visible to non-admin users (for example, through the "
"volume ``status``). However, some operations are not allowed while a "
"migration is taking place, such as attaching/detaching a volume and deleting "
"a volume. If a user performs such an action during a migration, an error is "
"returned."
msgstr ""
#: ../blockstorage_volume_migration.rst:195
msgid "Migrating volumes that have snapshots are currently not allowed."
msgstr ""
#: ../blockstorage_volume_number_weigher.rst:5
msgid "Configure and use volume number weigher"
msgstr ""
#: ../blockstorage_volume_number_weigher.rst:7
msgid ""
"OpenStack Block Storage enables you to choose a volume back end according to "
"``free_capacity`` and ``allocated_capacity``. The volume number weigher "
"feature lets the scheduler choose a volume back end based on its volume "
"number in the volume back end. This can provide another means to improve the "
"volume back ends' I/O balance and the volumes' I/O performance."
msgstr ""
#: ../blockstorage_volume_number_weigher.rst:14
msgid "Enable volume number weigher"
msgstr ""
#: ../blockstorage_volume_number_weigher.rst:16
msgid ""
"To enable a volume number weigher, set the ``scheduler_default_weighers`` to "
"``VolumeNumberWeigher`` flag in the :file:`cinder.conf` file to define "
"``VolumeNumberWeigher`` as the selected weigher."
msgstr ""
#: ../blockstorage_volume_number_weigher.rst:25
msgid ""
"To configure ``VolumeNumberWeigher``, use ``LVMVolumeDriver`` as the volume "
"driver."
msgstr ""
#: ../blockstorage_volume_number_weigher.rst:28
msgid ""
"This configuration defines two LVM volume groups: ``stack-volumes`` with 10 "
"GB capacity and ``stack-volumes-1`` with 60 GB capacity. This example "
"configuration defines two back ends:"
msgstr ""
#: ../blockstorage_volume_number_weigher.rst:49
msgid "Define a volume type in Block Storage::"
msgstr ""
#: ../blockstorage_volume_number_weigher.rst:53
msgid ""
"Create an extra specification that links the volume type to a back-end name::"
msgstr ""
#: ../blockstorage_volume_number_weigher.rst:57
msgid ""
"This example creates a lvm volume type with ``volume_backend_name=LVM`` as "
"extra specifications."
msgstr ""
#: ../blockstorage_volume_number_weigher.rst:63
msgid ""
"To create six 1-GB volumes, run the :command:`cinder create --volume-type "
"lvm 1` command six times::"
msgstr ""
#: ../blockstorage_volume_number_weigher.rst:68
msgid ""
"This command creates three volumes in ``stack-volumes`` and three volumes in "
"``stack-volumes-1``."
msgstr ""
#: ../blockstorage_volume_number_weigher.rst:71
msgid "List the available volumes::"
msgstr ""
#: ../compute-admin-password-injection.rst:5
msgid "Injecting the administrator password"
msgstr ""
#: ../compute-admin-password-injection.rst:7
msgid ""
"Compute can generate a random administrator (root) password and inject that "
"password into an instance. If this feature is enabled, users can run :"
"command:`ssh` to an instance without an :command:`ssh` keypair. The random "
"password appears in the output of the :command:`nova boot` command. You can "
"also view and set the admin password from the dashboard."
msgstr ""
#: ../compute-admin-password-injection.rst:13
msgid "**Password injection using the dashboard**"
msgstr ""
#: ../compute-admin-password-injection.rst:15
msgid ""
"By default, the dashboard will display the ``admin`` password and allow the "
"user to modify it."
msgstr ""
#: ../compute-admin-password-injection.rst:18
msgid ""
"If you do not want to support password injection, disable the password "
"fields by editing the dashboard's :file:`local_settings` file. On Fedora/"
"RHEL/CentOS, the file location is :file:`/etc/openstack-dashboard/"
"local_settings`. On Ubuntu and Debian, it is :file:`/etc/openstack-dashboard/"
"local_settings.py`. On openSUSE and SUSE Linux Enterprise Server, it is :"
"file:`/srv/www/openstack-dashboard/openstack_dashboard/local/local_settings."
"py`"
msgstr ""
#: ../compute-admin-password-injection.rst:33
msgid "**Password injection on libvirt-based hypervisors**"
msgstr ""
#: ../compute-admin-password-injection.rst:35
msgid ""
"For hypervisors that use the libvirt back end (such as KVM, QEMU, and LXC), "
"admin password injection is disabled by default. To enable it, set this "
"option in :file:`/etc/nova/nova.conf`:"
msgstr ""
#: ../compute-admin-password-injection.rst:44
msgid ""
"When enabled, Compute will modify the password of the admin account by "
"editing the :file:`/etc/shadow` file inside the virtual machine instance."
msgstr ""
#: ../compute-admin-password-injection.rst:49
msgid ""
"Users can only use :command:`ssh` to access the instance by using the admin "
"password if the virtual machine image is a Linux distribution, and it has "
"been configured to allow users to use :command:`ssh` as the root user. This "
"is not the case for `Ubuntu cloud images`_ which, by default, does not allow "
"users to use :command:`ssh` to access the root account."
msgstr ""
#: ../compute-admin-password-injection.rst:55
msgid "**Password injection and XenAPI (XenServer/XCP)**"
msgstr ""
#: ../compute-admin-password-injection.rst:57
msgid ""
"When using the XenAPI hypervisor back end, Compute uses the XenAPI agent to "
"inject passwords into guests. The virtual machine image must be configured "
"with the agent for password injection to work."
msgstr ""
#: ../compute-admin-password-injection.rst:61
msgid "**Password injection and Windows images (all hypervisors)**"
msgstr ""
#: ../compute-admin-password-injection.rst:65
msgid ""
"For Windows virtual machines, configure the Windows image to retrieve the "
"admin password on boot by installing an agent such as `cloudbase-init`_."
msgstr ""
#: ../compute-configuring-migrations.rst:5
msgid "Configure migrations"
msgstr ""
#: ../compute-configuring-migrations.rst:12
msgid ""
"Only cloud administrators can perform live migrations. If your cloud is "
"configured to use cells, you can perform live migration within but not "
"between cells."
msgstr ""
#: ../compute-configuring-migrations.rst:16
msgid ""
"Migration enables an administrator to move a virtual-machine instance from "
"one compute host to another. This feature is useful when a compute host "
"requires maintenance. Migration can also be useful to redistribute the load "
"when many VM instances are running on a specific physical machine."
msgstr ""
#: ../compute-configuring-migrations.rst:22
msgid "The migration types are:"
msgstr ""
#: ../compute-configuring-migrations.rst:24
msgid ""
"**Non-live migration** (sometimes referred to simply as 'migration'). The "
"instance is shut down for a period of time to be moved to another "
"hypervisor. In this case, the instance recognizes that it was rebooted."
msgstr ""
#: ../compute-configuring-migrations.rst:29
msgid ""
"**Live migration** (or 'true live migration'). Almost no instance downtime. "
"Useful when the instances must be kept running during the migration. The "
"different types of live migration are:"
msgstr ""
#: ../compute-configuring-migrations.rst:33
msgid ""
"**Shared storage-based live migration**. Both hypervisors have access to "
"shared storage."
msgstr ""
#: ../compute-configuring-migrations.rst:36
msgid ""
"**Block live migration**. No shared storage is required. Incompatible with "
"read-only devices such as CD-ROMs and `Configuration Drive (config\\_drive) "
"<http://docs.openstack.org/user-guide/cli_config_drive.html>`_."
msgstr ""
#: ../compute-configuring-migrations.rst:40
msgid ""
"**Volume-backed live migration**. Instances are backed by volumes rather "
"than ephemeral disk, no shared storage is required, and migration is "
"supported (currently only available for libvirt-based hypervisors)."
msgstr ""
#: ../compute-configuring-migrations.rst:45
msgid ""
"The following sections describe how to configure your hosts and compute "
"nodes for migrations by using the KVM and XenServer hypervisors."
msgstr ""
#: ../compute-configuring-migrations.rst:51
msgid "KVM-Libvirt"
msgstr ""
#: ../compute-configuring-migrations.rst:59
#: ../compute-configuring-migrations.rst:330
msgid "Shared storage"
msgstr ""
#: ../compute-configuring-migrations.rst:64
#: ../compute-configuring-migrations.rst:332
msgid "**Prerequisites**"
msgstr ""
#: ../compute-configuring-migrations.rst:66
msgid "**Hypervisor:** KVM with libvirt"
msgstr ""
#: ../compute-configuring-migrations.rst:68
msgid ""
"**Shared storage:** :file:`NOVA-INST-DIR/instances/` (for example, :file:`/"
"var/lib/nova/instances`) has to be mounted by shared storage. This guide "
"uses NFS but other options, including the `OpenStack Gluster Connector "
"<http://gluster.org/community/documentation//index.php/OSConnect>`_ are "
"available."
msgstr ""
#: ../compute-configuring-migrations.rst:74
msgid "**Instances:** Instance can be migrated with iSCSI-based volumes."
msgstr ""
#: ../compute-configuring-migrations.rst:76
msgid "**Notes**"
msgstr ""
#: ../compute-configuring-migrations.rst:78
msgid ""
"Because the Compute service does not use the libvirt live migration "
"functionality by default, guests are suspended before migration and might "
"experience several minutes of downtime. For details, see `Enabling true live "
"migration`."
msgstr ""
#: ../compute-configuring-migrations.rst:83
msgid ""
"Compute calculates the amount of downtime required using the RAM size of the "
"disk being migrated, in accordance with the ``live_migration_downtime`` "
"configuration parameters. Migration downtime is measured in steps, with an "
"exponential backoff between each step. This means that the maximum downtime "
"between each step starts off small, and is increased in ever larger amounts "
"as Compute waits for the migration to complete. This gives the guest a "
"chance to complete the migration successfully, with a minimum amount of "
"downtime."
msgstr ""
#: ../compute-configuring-migrations.rst:92
msgid ""
"This guide assumes the default value for ``instances_path`` in your :file:"
"`nova.conf` file (:file:`NOVA-INST-DIR/instances`). If you have changed the "
"``state_path`` or ``instances_path`` variables, modify the commands "
"accordingly."
msgstr ""
#: ../compute-configuring-migrations.rst:97
msgid ""
"You must specify ``vncserver_listen=0.0.0.0`` or live migration will not "
"work correctly."
msgstr ""
#: ../compute-configuring-migrations.rst:100
msgid ""
"You must specify the ``instances_path`` in each node that runs nova-compute. "
"The mount point for ``instances_path`` must be the same value for each node, "
"or live migration will not work correctly."
msgstr ""
#: ../compute-configuring-migrations.rst:108
msgid "Example Compute installation environment"
msgstr ""
#: ../compute-configuring-migrations.rst:110
msgid ""
"Prepare at least three servers. In this example, we refer to the servers as "
"``HostA``, ``HostB``, and ``HostC``:"
msgstr ""
#: ../compute-configuring-migrations.rst:113
msgid ""
"``HostA`` is the Cloud Controller, and should run these services: nova-api, "
"nova-scheduler, ``nova-network``, cinder-volume, and ``nova-objectstore``."
msgstr ""
#: ../compute-configuring-migrations.rst:117
msgid "``HostB`` and ``HostC`` are the compute nodes that run nova-compute."
msgstr ""
#: ../compute-configuring-migrations.rst:120
msgid ""
"Ensure that ``NOVA-INST-DIR`` (set with ``state_path`` in the :file:`nova."
"conf` file) is the same on all hosts."
msgstr ""
#: ../compute-configuring-migrations.rst:123
msgid ""
"In this example, ``HostA`` is the NFSv4 server that exports ``NOVA-INST-DIR/"
"instances`` directory. ``HostB`` and ``HostC`` are NFSv4 clients that mount "
"``HostA``."
msgstr ""
#: ../compute-configuring-migrations.rst:127
msgid "**Configuring your system**"
msgstr ""
#: ../compute-configuring-migrations.rst:129
msgid ""
"Configure your DNS or ``/etc/hosts`` and ensure it is consistent across all "
"hosts. Make sure that the three hosts can perform name resolution with each "
"other. As a test, use the :command:`ping` command to ping each host from one "
"another:"
msgstr ""
#: ../compute-configuring-migrations.rst:140
msgid ""
"Ensure that the UID and GID of your Compute and libvirt users are identical "
"between each of your servers. This ensures that the permissions on the NFS "
"mount works correctly."
msgstr ""
#: ../compute-configuring-migrations.rst:144
msgid ""
"Ensure you can access SSH without a password and without "
"StrictHostKeyChecking between ``HostB`` and ``HostC`` as ``nova`` user (set "
"with the owner of nova-compute service). Direct access from one compute host "
"to another is needed to copy the VM file across. It is also needed to detect "
"if the source and target compute nodes share a storage subsystem."
msgstr ""
#: ../compute-configuring-migrations.rst:151
msgid ""
"Export ``NOVA-INST-DIR/instances`` from ``HostA``, and ensure it is readable "
"and writable by the Compute user on ``HostB`` and ``HostC``."
msgstr ""
#: ../compute-configuring-migrations.rst:154
msgid ""
"For more information, see: `SettingUpNFSHowTo <https://help.ubuntu.com/"
"community/SettingUpNFSHowTo>`_ or `CentOS/Red Hat: Setup NFS v4.0 File "
"Server <http://www.cyberciti.biz/faq/centos-fedora-rhel-nfs-v4-configuration/"
">`_"
msgstr ""
#: ../compute-configuring-migrations.rst:157
msgid ""
"Configure the NFS server at ``HostA`` by adding the following line to the :"
"file:`/etc/exports` file:"
msgstr ""
#: ../compute-configuring-migrations.rst:164
msgid ""
"Change the subnet mask (``255.255.0.0``) to the appropriate value to include "
"the IP addresses of ``HostB`` and ``HostC``. Then restart the NFS server:"
msgstr ""
#: ../compute-configuring-migrations.rst:173
msgid ""
"On both compute nodes, enable the 'execute/search' bit on your shared "
"directory to allow qemu to be able to use the images within the directories. "
"On all hosts, run the following command:"
msgstr ""
#: ../compute-configuring-migrations.rst:181
msgid ""
"Configure NFS on ``HostB`` and ``HostC`` by adding the following line to "
"the :file:`/etc/fstab` file"
msgstr ""
#: ../compute-configuring-migrations.rst:188
msgid "Ensure that you can mount the exported directory"
msgstr ""
#: ../compute-configuring-migrations.rst:194
msgid ""
"Check that ``HostA`` can see the :file:`NOVA-INST-DIR/instances/` directory"
msgstr ""
#: ../compute-configuring-migrations.rst:202
msgid ""
"Perform the same check on ``HostB`` and ``HostC``, paying special attention "
"to the permissions (Compute should be able to write)"
msgstr ""
#: ../compute-configuring-migrations.rst:220
msgid ""
"Update the libvirt configurations so that the calls can be made securely. "
"These methods enable remote access over TCP and are not documented here."
msgstr ""
#: ../compute-configuring-migrations.rst:224
msgid "SSH tunnel to libvirtd's UNIX socket"
msgstr ""
#: ../compute-configuring-migrations.rst:226
msgid "libvirtd TCP socket, with GSSAPI/Kerberos for auth+data encryption"
msgstr ""
#: ../compute-configuring-migrations.rst:228
msgid ""
"libvirtd TCP socket, with TLS for encryption and x509 client certs for "
"authentication"
msgstr ""
#: ../compute-configuring-migrations.rst:231
msgid ""
"libvirtd TCP socket, with TLS for encryption and Kerberos for authentication"
msgstr ""
#: ../compute-configuring-migrations.rst:234
msgid ""
"Restart libvirt. After you run the command, ensure that libvirt is "
"successfully restarted"
msgstr ""
#: ../compute-configuring-migrations.rst:243
msgid ""
"Configure your firewall to allow libvirt to communicate between nodes. By "
"default, libvirt listens on TCP port 16509, and an ephemeral TCP range from "
"49152 to 49261 is used for the KVM communications. Based on the secure "
"remote access TCP configuration you chose, be careful which ports you open, "
"and always understand who has access. For information about ports that are "
"used with libvirt, see the `libvirt documentation <http://libvirt.org/remote."
"html#Remote_libvirtd_configuration>`_."
msgstr ""
#: ../compute-configuring-migrations.rst:251
msgid ""
"Configure the downtime required for the migration by adjusting these "
"parameters in the :file:`nova.conf` file:"
msgstr ""
#: ../compute-configuring-migrations.rst:260
msgid ""
"The ``live_migration_downtime`` parameter sets the maximum permitted "
"downtime for a live migration, in milliseconds. This setting defaults to 500 "
"milliseconds."
msgstr ""
#: ../compute-configuring-migrations.rst:264
msgid ""
"The ``live_migration_downtime_steps`` parameter sets the total number of "
"incremental steps to reach the maximum downtime value. This setting defaults "
"to 10 steps."
msgstr ""
#: ../compute-configuring-migrations.rst:268
msgid ""
"The ``live_migration_downtime_delay`` parameter sets the amount of time to "
"wait between each step, in seconds. This setting defaults to 75 seconds."
msgstr ""
#: ../compute-configuring-migrations.rst:271
msgid ""
"You can now configure other options for live migration. In most cases, you "
"will not need to configure any options. For advanced configuration options, "
"see the `OpenStack Configuration Reference Guide <http://docs.openstack.org/ "
"liberty/config-reference/content/list-of-compute-config-options.html "
"#config_table_nova_livemigration>`_."
msgstr ""
#: ../compute-configuring-migrations.rst:280
msgid "Enabling true live migration"
msgstr ""
#: ../compute-configuring-migrations.rst:282
msgid ""
"Prior to the Kilo release, the Compute service did not use the libvirt live "
"migration function by default. To enable this function, add the following "
"line to the ``[libvirt]`` section of the :file:`nova.conf` file:"
msgstr ""
#: ../compute-configuring-migrations.rst:290
msgid ""
"On versions older than Kilo, the Compute service does not use libvirt's live "
"migration by default because there is a risk that the migration process will "
"never end. This can happen if the guest operating system uses blocks on the "
"disk faster than they can be migrated."
msgstr ""
#: ../compute-configuring-migrations.rst:298
#: ../compute-configuring-migrations.rst:403
msgid "Block migration"
msgstr ""
#: ../compute-configuring-migrations.rst:300
msgid ""
"Configuring KVM for block migration is exactly the same as the above "
"configuration in :ref:`configuring-migrations-kvm-shared-storage` the "
"section called shared storage, except that ``NOVA-INST-DIR/instances`` is "
"local to each host rather than shared. No NFS client or server configuration "
"is required."
msgstr ""
#: ../compute-configuring-migrations.rst:308
#: ../compute-configuring-migrations.rst:412
msgid ""
"To use block migration, you must use the :option:`--block-migrate` parameter "
"with the live migration command."
msgstr ""
#: ../compute-configuring-migrations.rst:311
msgid ""
"Block migration is incompatible with read-only devices such as CD-ROMs and "
"`Configuration Drive (config_drive) <http://docs.openstack.org/user-guide/"
"cli_config_drive.html>`_."
msgstr ""
#: ../compute-configuring-migrations.rst:314
msgid ""
"Since the ephemeral drives are copied over the network in block migration, "
"migrations of instances with heavy I/O loads may never complete if the "
"drives are writing faster than the data can be copied over the network."
msgstr ""
#: ../compute-configuring-migrations.rst:322
msgid "XenServer"
msgstr ""
#: ../compute-configuring-migrations.rst:334
msgid ""
"**Compatible XenServer hypervisors**. For more information, see the "
"`Requirements for Creating Resource Pools <http://docs.vmd.citrix.com/"
"XenServer/6.0.0/1.0/en_gb/reference.html#pooling_homogeneity_requirements>`_ "
"section of the XenServer Administrator's Guide."
msgstr ""
#: ../compute-configuring-migrations.rst:338
msgid "**Shared storage**. An NFS export, visible to all XenServer hosts."
msgstr ""
#: ../compute-configuring-migrations.rst:342
msgid ""
"For the supported NFS versions, see the `NFS VHD <http://docs.vmd.citrix.com/"
"XenServer/6.0.0/1.0/en_gb/reference.html#id1002701>`_ section of the "
"XenServer Administrator's Guide."
msgstr ""
#: ../compute-configuring-migrations.rst:346
msgid ""
"To use shared storage live migration with XenServer hypervisors, the hosts "
"must be joined to a XenServer pool. To create that pool, a host aggregate "
"must be created with specific metadata. This metadata is used by the XAPI "
"plug-ins to establish the pool."
msgstr ""
#: ../compute-configuring-migrations.rst:351
msgid "**Using shared storage live migrations with XenServer Hypervisors**"
msgstr ""
#: ../compute-configuring-migrations.rst:353
msgid ""
"Add an NFS VHD storage to your master XenServer, and set it as the default "
"storage repository. For more information, see NFS VHD in the XenServer "
"Administrator's Guide."
msgstr ""
#: ../compute-configuring-migrations.rst:357
msgid ""
"Configure all compute nodes to use the default storage repository (``sr``) "
"for pool operations. Add this line to your :file:`nova.conf` configuration "
"files on all compute nodes:"
msgstr ""
#: ../compute-configuring-migrations.rst:365
msgid ""
"Create a host aggregate. This command creates the aggregate, and then "
"displays a table that contains the ID of the new aggregate"
msgstr ""
#: ../compute-configuring-migrations.rst:372
msgid "Add metadata to the aggregate, to mark it as a hypervisor pool"
msgstr ""
#: ../compute-configuring-migrations.rst:380
msgid "Make the first compute node part of that aggregate"
msgstr ""
#: ../compute-configuring-migrations.rst:386
msgid "The host is now part of a XenServer pool."
msgstr ""
#: ../compute-configuring-migrations.rst:388
msgid "Add hosts to the pool"
msgstr ""
#: ../compute-configuring-migrations.rst:396
msgid ""
"The added compute node and the host will shut down to join the host to the "
"XenServer pool. The operation will fail if any server other than the compute "
"node is running or suspended on the host."
msgstr ""
#: ../compute-configuring-migrations.rst:405
msgid ""
"**Compatible XenServer hypervisors**. The hypervisors must support the "
"Storage XenMotion feature. See your XenServer manual to make sure your "
"edition has this feature."
msgstr ""
#: ../compute-configuring-migrations.rst:415
msgid ""
"Block migration works only with EXT local storage storage repositories, and "
"the server must not have any volumes attached."
msgstr ""
#: ../compute-default-ports.rst:5
msgid "Compute service node firewall requirements"
msgstr ""
#: ../compute-default-ports.rst:7
msgid ""
"Console connections for virtual machines, whether direct or through a proxy, "
"are received on ports ``5900`` to ``5999``. The firewall on each Compute "
"service node must allow network traffic on these ports."
msgstr ""
#: ../compute-default-ports.rst:11
msgid ""
"This procedure modifies the iptables firewall to allow incoming connections "
"to the Compute services."
msgstr ""
#: ../compute-default-ports.rst:14
msgid "**Configuring the service-node firewall**"
msgstr ""
#: ../compute-default-ports.rst:16
msgid "Log in to the server that hosts the Compute service, as root."
msgstr ""
#: ../compute-default-ports.rst:18
msgid ""
"Edit the :file:`/etc/sysconfig/iptables` file, to add an INPUT rule that "
"allows TCP traffic on ports from ``5900`` to ``5999``. Make sure the new "
"rule appears before any INPUT rules that REJECT traffic:"
msgstr ""
#: ../compute-default-ports.rst:26
msgid ""
"Save the changes to :file:`/etc/sysconfig/iptables` file, and restart the "
"iptables service to pick up the changes:"
msgstr ""
#: ../compute-default-ports.rst:33
msgid "Repeat this process for each Compute service node."
msgstr ""
#: ../compute-euca2ools.rst:5
msgid "Managing the cloud with euca2ools"
msgstr ""
#: ../compute-euca2ools.rst:7
msgid ""
"The ``euca2ools`` command-line tool provides a command line interface to EC2 "
"API calls. For more information about ``euca2ools``, see `https://www."
"eucalyptus.com/docs/eucalyptus/4.1.2/index.html <https://www.eucalyptus.com/"
"docs/eucalyptus/4.1.2/index.html>`__."
msgstr ""
#: ../compute-flavors.rst:5
msgid "Flavors"
msgstr ""
#: ../compute-flavors.rst:7
msgid ""
"Admin users can use the :command:`nova flavor-` commands to customize and "
"manage flavors. To see the available flavor-related commands, run:"
msgstr ""
#: ../compute-flavors.rst:24
msgid ""
"Configuration rights can be delegated to additional users by redefining the "
"access controls for ``compute_extension:flavormanage`` in :file:`/etc/nova/"
"policy.json` on the nova-api server."
msgstr ""
#: ../compute-flavors.rst:29
msgid ""
"To modify an existing flavor in the dashboard, you must delete the flavor "
"and create a modified one with the same name."
msgstr ""
#: ../compute-flavors.rst:32
msgid "Flavors define these elements:"
msgstr ""
# #-#-#-#-# compute-flavors.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# compute-manage-volumes.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# compute-networking-nova.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# compute-remote-console-access.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# compute-security.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# database.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# networking_adv-features.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# networking_arch.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# networking_introduction.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# networking_multi-dhcp-agents.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# objectstorage-troubleshoot.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# shared_file_systems_crud_share.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# telemetry-measurements.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../compute-flavors.rst:35 ../compute-manage-volumes.rst:14
#: ../compute-networking-nova.rst:310 ../compute-networking-nova.rst:507
#: ../compute-remote-console-access.rst:36
#: ../compute-remote-console-access.rst:153 ../compute-security.rst:103
#: ../database.rst:103 ../database.rst:141 ../networking_adv-features.rst:47
#: ../networking_adv-features.rst:130 ../networking_adv-features.rst:711
#: ../networking_arch.rst:29 ../networking_introduction.rst:36
#: ../networking_introduction.rst:129 ../networking_multi-dhcp-agents.rst:40
#: ../objectstorage-troubleshoot.rst:62
#: ../shared_file_systems_crud_share.rst:43
#: ../shared_file_systems_crud_share.rst:383 ../telemetry-measurements.rst:31
msgid "Description"
msgstr ""
#: ../compute-flavors.rst:35
msgid "Element"
msgstr ""
#: ../compute-flavors.rst:37
msgid ""
"A descriptive name. XX.SIZE_NAME is typically not required, though some "
"third party tools may rely on it."
msgstr ""
# #-#-#-#-# compute-flavors.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# compute-live-migration-usage.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# telemetry-measurements.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# ts-eql-volume-size.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../compute-flavors.rst:37 ../compute-live-migration-usage.rst:35
#: ../telemetry-measurements.rst:97 ../telemetry-measurements.rst:430
#: ../telemetry-measurements.rst:484 ../telemetry-measurements.rst:528
#: ../telemetry-measurements.rst:589 ../telemetry-measurements.rst:664
#: ../telemetry-measurements.rst:697 ../telemetry-measurements.rst:762
#: ../telemetry-measurements.rst:812 ../telemetry-measurements.rst:849
#: ../telemetry-measurements.rst:920 ../telemetry-measurements.rst:983
#: ../telemetry-measurements.rst:1069 ../telemetry-measurements.rst:1147
#: ../telemetry-measurements.rst:1212 ../telemetry-measurements.rst:1263
#: ../telemetry-measurements.rst:1289 ../telemetry-measurements.rst:1312
#: ../telemetry-measurements.rst:1332 ../ts-eql-volume-size.rst:107
msgid "Name"
msgstr ""
#: ../compute-flavors.rst:40
msgid "Memory_MB"
msgstr ""
#: ../compute-flavors.rst:40
msgid "Virtual machine memory in megabytes."
msgstr ""
#: ../compute-flavors.rst:42
msgid "Disk"
msgstr ""
#: ../compute-flavors.rst:42
msgid ""
"Virtual root disk size in gigabytes. This is an ephemeral di\\ sk that the "
"base image is copied into. When booting from a p\\ ersistent volume it is "
"not used. The \"0\" size is a special c\\ ase which uses the native base "
"image size as the size of the ephemeral root volume."
msgstr ""
#: ../compute-flavors.rst:48
msgid "Ephemeral"
msgstr ""
#: ../compute-flavors.rst:48
msgid ""
"Specifies the size of a secondary ephemeral data disk. This is an empty, "
"unformatted disk and exists only for the life o\\ f the instance."
msgstr ""
#: ../compute-flavors.rst:52
msgid "Optional swap space allocation for the instance."
msgstr ""
#: ../compute-flavors.rst:52
msgid "Swap"
msgstr ""
#: ../compute-flavors.rst:54
msgid "Number of virtual CPUs presented to the instance."
msgstr ""
#: ../compute-flavors.rst:54
msgid "VCPUs"
msgstr ""
#: ../compute-flavors.rst:56
msgid ""
"Optional property allows created servers to have a different bandwidth cap "
"than that defined in the network they are att\\ ached to. This factor is "
"multiplied by the rxtx_base propert\\ y of the network. Default value is "
"1.0. That is, the same as attached network. This parameter is only available "
"for Xen or NSX based systems."
msgstr ""
#: ../compute-flavors.rst:56
msgid "RXTX_Factor"
msgstr ""
#: ../compute-flavors.rst:63
msgid ""
"Boolean value, whether flavor is available to all users or p\\ rivate to the "
"tenant it was created in. Defaults to True."
msgstr ""
#: ../compute-flavors.rst:63
msgid "Is_Public"
msgstr ""
#: ../compute-flavors.rst:66
msgid ""
"Key and value pairs that define on which compute nodes a fla\\ vor can run. "
"These pairs must match corresponding pairs on t\\ he compute nodes. Use to "
"implement special resources, such a\\ s flavors that run on only compute "
"nodes with GPU hardware."
msgstr ""
#: ../compute-flavors.rst:66
msgid "extra_specs"
msgstr ""
#: ../compute-flavors.rst:74
msgid ""
"Flavor customization can be limited by the hypervisor in use. For example "
"the libvirt driver enables quotas on CPUs available to a VM, disk tuning, "
"bandwidth I/O, watchdog behavior, random number generator device control, "
"and instance VIF traffic control."
msgstr ""
#: ../compute-flavors.rst:80
msgid ""
"You can configure the CPU limits with control parameters with the ``nova`` "
"client. For example, to configure the I/O limit, use:"
msgstr ""
#: ../compute-flavors.rst:88
msgid ""
"Use these optional parameters to control weight shares, enforcement "
"intervals for runtime quotas, and a quota for maximum allowed bandwidth:"
msgstr ""
#: ../compute-flavors.rst:92
msgid ""
"``cpu_shares``. Specifies the proportional weighted share for the domain. If "
"this element is omitted, the service defaults to the OS provided defaults. "
"There is no unit for the value; it is a relative measure based on the "
"setting of other VMs. For example, a VM configured with value 2048 gets "
"twice as much CPU time as a VM configured with value 1024."
msgstr ""
#: ../compute-flavors.rst:99
msgid ""
"``cpu_shares_level``. On VMWare, specifies the allocation level. Can be "
"``custom``, ``high``, ``normal``, or ``low``. If you choose ``custom``, set "
"the number of shares using ``cpu_shares_share``."
msgstr ""
#: ../compute-flavors.rst:103
msgid ""
"``cpu_period``. Specifies the enforcement interval (unit: microseconds) for "
"QEMU and LXC hypervisors. Within a period, each VCPU of the domain is not "
"allowed to consume more than the quota worth of runtime. The value should be "
"in range ``[1000, 1000000]``. A period with value 0 means no value."
msgstr ""
#: ../compute-flavors.rst:109
msgid ""
"``cpu_limit``. Specifies the upper limit for VMware machine CPU allocation "
"in MHz. This parameter ensures that a machine never uses more than the "
"defined amount of CPU time. It can be used to enforce a limit on the "
"machine's CPU performance."
msgstr ""
#: ../compute-flavors.rst:114
msgid ""
"``cpu_reservation``. Specifies the guaranteed minimum CPU reservation in MHz "
"for VMware. This means that if needed, the machine will definitely get "
"allocated the reserved amount of CPU cycles."
msgstr ""
#: ../compute-flavors.rst:119
msgid ""
"``cpu_quota``. Specifies the maximum allowed bandwidth (unit: microseconds). "
"A domain with a negative-value quota indicates that the domain has infinite "
"bandwidth, which means that it is not bandwidth controlled. The value should "
"be in range ``[1000, 18446744073709551]`` or less than 0. A quota with value "
"0 means no value. You can use this feature to ensure that all vCPUs run at "
"the same speed. For example:"
msgstr ""
#: ../compute-flavors.rst:132
msgid ""
"In this example, the instance of ``m1.low_cpu`` can only consume a maximum "
"of 50% CPU of a physical CPU computing capability."
msgstr ""
#: ../compute-flavors.rst:133
msgid "CPU limits"
msgstr ""
#: ../compute-flavors.rst:136
msgid ""
"For VMware, you can configure the memory limits with control parameters."
msgstr ""
#: ../compute-flavors.rst:138
msgid ""
"Use these optional parameters to limit the memory allocation, guarantee "
"minimum memory reservation, and to specify shares used in case of resource "
"contention:"
msgstr ""
#: ../compute-flavors.rst:142
msgid ""
"``memory_limit``: Specifies the upper limit for VMware machine memory "
"allocation in MB. The utilization of a virtual machine will not exceed this "
"limit, even if there are available resources. This is typically used to "
"ensure a consistent performance of virtual machines independent of available "
"resources."
msgstr ""
#: ../compute-flavors.rst:148
msgid ""
"``memory_reservation``: Specifies the guaranteed minimum memory reservation "
"in MB for VMware. This means the specified amount of memory will definitely "
"be allocated to the machine."
msgstr ""
#: ../compute-flavors.rst:152
msgid ""
"``memory_shares_level``: On VMware, specifies the allocation level. This can "
"be ``custom``, ``high``, ``normal`` or ``low``. If you choose ``custom``, "
"set the number of shares using ``memory_shares_share``."
msgstr ""
#: ../compute-flavors.rst:156
msgid ""
"``memory_shares_share``: Specifies the number of shares allocated in the "
"event that ``custom`` is used. There is no unit for this value. It is a "
"relative measure based on the settings for other VMs. For example:"
msgstr ""
#: ../compute-flavors.rst:164
msgid "Memory limits"
msgstr ""
#: ../compute-flavors.rst:167
msgid ""
"For VMware, you can configure the resource limits for disk with control "
"parameters."
msgstr ""
#: ../compute-flavors.rst:170
msgid ""
"Use these optional parameters to limit the disk utilization, guarantee disk "
"allocation, and to specify shares used in case of resource contention. This "
"allows the VMWare driver to enable disk allocations for the running instance."
msgstr ""
#: ../compute-flavors.rst:175
msgid ""
"``disk_io_limit``: Specifies the upper limit for disk utilization in I/O per "
"second. The utilization of a virtual machine will not exceed this limit, "
"even if there are available resources. The default value is -1 which "
"indicates unlimited usage."
msgstr ""
#: ../compute-flavors.rst:181
msgid ""
"``disk_io_reservation``: Specifies the guaranteed minimum disk allocation in "
"terms of IOPS."
msgstr ""
#: ../compute-flavors.rst:184
msgid ""
"``disk_io_shares_level``: Specifies the allocation level. This can be "
"``custom``, ``high``, ``normal`` or ``low``. If you choose custom, set the "
"number of shares using ``disk_io_shares_share``."
msgstr ""
#: ../compute-flavors.rst:189
msgid ""
"``disk_io_shares_share``: Specifies the number of shares allocated in the "
"event that ``custom`` is used. When there is resource contention, this value "
"is used to determine the resource allocation."
msgstr ""
#: ../compute-flavors.rst:194
msgid "The example below sets the ``disk_io_reservation`` to 2000 IOPS."
msgstr ""
#: ../compute-flavors.rst:198
msgid "Disk I/O limits"
msgstr ""
#: ../compute-flavors.rst:201
msgid ""
"Using disk I/O quotas, you can set maximum disk write to 10 MB per second "
"for a VM user. For example:"
msgstr ""
#: ../compute-flavors.rst:208
msgid "The disk I/O options are:"
msgstr ""
#: ../compute-flavors.rst:210
msgid "disk\\_read\\_bytes\\_sec"
msgstr ""
#: ../compute-flavors.rst:212
msgid "disk\\_read\\_iops\\_sec"
msgstr ""
#: ../compute-flavors.rst:214
msgid "disk\\_write\\_bytes\\_sec"
msgstr ""
#: ../compute-flavors.rst:216
msgid "disk\\_write\\_iops\\_sec"
msgstr ""
#: ../compute-flavors.rst:218
msgid "disk\\_total\\_bytes\\_sec"
msgstr ""
#: ../compute-flavors.rst:220
msgid "Disk tuning"
msgstr ""
#: ../compute-flavors.rst:220
msgid "disk\\_total\\_iops\\_sec"
msgstr ""
#: ../compute-flavors.rst:223
msgid "The vif I/O options are:"
msgstr ""
#: ../compute-flavors.rst:225
msgid "vif\\_inbound\\_ average"
msgstr ""
#: ../compute-flavors.rst:227
msgid "vif\\_inbound\\_burst"
msgstr ""
#: ../compute-flavors.rst:229
msgid "vif\\_inbound\\_peak"
msgstr ""
#: ../compute-flavors.rst:231
msgid "vif\\_outbound\\_ average"
msgstr ""
#: ../compute-flavors.rst:233
msgid "vif\\_outbound\\_burst"
msgstr ""
#: ../compute-flavors.rst:235
msgid "vif\\_outbound\\_peak"
msgstr ""
#: ../compute-flavors.rst:237
msgid ""
"Incoming and outgoing traffic can be shaped independently. The bandwidth "
"element can have at most, one inbound and at most, one outbound child "
"element. If you leave any of these child elements out, no quality of service "
"(QoS) is applied on that traffic direction. So, if you want to shape only "
"the network's incoming traffic, use inbound only (and vice versa). Each "
"element has one mandatory attribute average, which specifies the average bit "
"rate on the interface being shaped."
msgstr ""
#: ../compute-flavors.rst:246
msgid ""
"There are also two optional attributes (integer): ``peak``, which specifies "
"the maximum rate at which a bridge can send data (kilobytes/second), and "
"``burst``, the amount of bytes that can be burst at peak speed (kilobytes). "
"The rate is shared equally within domains connected to the network."
msgstr ""
#: ../compute-flavors.rst:252
msgid ""
"The example below sets network traffic bandwidth limits for existing flavor "
"as follows:"
msgstr ""
#: ../compute-flavors.rst:255
msgid "Outbound traffic:"
msgstr ""
#: ../compute-flavors.rst:257 ../compute-flavors.rst:265
msgid "average: 256 Mbps (32768 kilobytes/second)"
msgstr ""
#: ../compute-flavors.rst:259 ../compute-flavors.rst:267
msgid "peak: 512 Mbps (65536 kilobytes/second)"
msgstr ""
#: ../compute-flavors.rst:261 ../compute-flavors.rst:269
msgid "burst: 65536 kilobytes"
msgstr ""
#: ../compute-flavors.rst:263
msgid "Inbound traffic:"
msgstr ""
#: ../compute-flavors.rst:283
msgid ""
"All the speed limit values in above example are specified in kilobytes/"
"second. And burst values are in kilobytes."
msgstr ""
#: ../compute-flavors.rst:284
msgid "Bandwidth I/O"
msgstr ""
#: ../compute-flavors.rst:287
msgid ""
"For the libvirt driver, you can enable and set the behavior of a virtual "
"hardware watchdog device for each flavor. Watchdog devices keep an eye on "
"the guest server, and carry out the configured action, if the server hangs. "
"The watchdog uses the i6300esb device (emulating a PCI Intel 6300ESB). If "
"``hw:watchdog_action`` is not specified, the watchdog is disabled."
msgstr ""
#: ../compute-flavors.rst:294
msgid "To set the behavior, use:"
msgstr ""
#: ../compute-flavors.rst:300
msgid "Valid ACTION values are:"
msgstr ""
#: ../compute-flavors.rst:302
msgid "``disabled``—(default) The device is not attached."
msgstr ""
#: ../compute-flavors.rst:304
msgid "``reset``—Forcefully reset the guest."
msgstr ""
#: ../compute-flavors.rst:306
msgid "``poweroff``—Forcefully power off the guest."
msgstr ""
#: ../compute-flavors.rst:308
msgid "``pause``—Pause the guest."
msgstr ""
#: ../compute-flavors.rst:310
msgid "``none``—Only enable the watchdog; do nothing if the server hangs."
msgstr ""
#: ../compute-flavors.rst:315
msgid ""
"Watchdog behavior set using a specific image's properties will override "
"behavior set using flavors."
msgstr ""
#: ../compute-flavors.rst:316
msgid "Watchdog behavior"
msgstr ""
#: ../compute-flavors.rst:319
msgid ""
"If a random-number generator device has been added to the instance through "
"its image properties, the device can be enabled and configured using:"
msgstr ""
#: ../compute-flavors.rst:331
msgid ""
"RATE-BYTES—(Integer) Allowed amount of bytes that the guest can read from "
"the host's entropy per period."
msgstr ""
#: ../compute-flavors.rst:334
msgid "RATE-PERIOD—(Integer) Duration of the read period in seconds."
msgstr ""
#: ../compute-flavors.rst:334
msgid "Random-number generator"
msgstr ""
#: ../compute-flavors.rst:337
msgid ""
"For the libvirt driver, you can define the topology of the processors in the "
"virtual machine using properties. The properties with ``max`` limit the "
"number that can be selected by the user with image properties."
msgstr ""
#: ../compute-flavors.rst:352
msgid ""
"FLAVOR-SOCKETS—(Integer) The number of sockets for the guest VM. By this is "
"set to the number of vCPUs requested."
msgstr ""
#: ../compute-flavors.rst:355
msgid ""
"FLAVOR-CORES—(Integer) The number of cores per socket for the guest VM. By "
"this is set to 1."
msgstr ""
#: ../compute-flavors.rst:358
msgid ""
"FLAVOR-THREADS—(Integer) The number of threads per core for the guest VM. By "
"this is set to 1."
msgstr ""
#: ../compute-flavors.rst:359
msgid "CPU toplogy"
msgstr ""
#: ../compute-flavors.rst:362
msgid ""
"Flavors can also be assigned to particular projects. By default, a flavor is "
"public and available to all projects. Private flavors are only accessible to "
"those on the access list and are invisible to other projects. To create and "
"assign a private flavor to a project, run these commands:"
msgstr ""
#: ../compute-flavors.rst:370
msgid "Project private flavors"
msgstr ""
#: ../compute-images-instances.rst:3
msgid "Images and instances"
msgstr ""
#: ../compute-images-instances.rst:6
msgid ""
"Disk images provide templates for virtual machine file systems. The Image "
"service controls storage and management of images."
msgstr ""
#: ../compute-images-instances.rst:9
msgid ""
"Instances are the individual virtual machines that run on physical compute "
"nodes. Users can launch any number of instances from the same image. Each "
"launched instance runs from a copy of the base image so that any changes "
"made to the instance do not affect the base image. You can take snapshots of "
"running instances to create an image based on the current disk state of a "
"particular instance. The Compute service manages instances."
msgstr ""
#: ../compute-images-instances.rst:17
msgid ""
"When you launch an instance, you must choose a ``flavor``, which represents "
"a set of virtual resources. Flavors define how many virtual CPUs an instance "
"has, the amount of RAM available to it, and the size of its ephemeral disks. "
"Users must select from the set of available flavors defined on their cloud. "
"OpenStack provides a number of predefined flavors that you can edit or add "
"to."
msgstr ""
#: ../compute-images-instances.rst:26
msgid ""
"For more information about creating and troubleshooting images, see the "
"`OpenStack Virtual Machine Image Guide <http://docs.openstack.org/image-"
"guide/>`__."
msgstr ""
#: ../compute-images-instances.rst:30
msgid ""
"For more information about image configuration options, see the `Image "
"services <http://docs.openstack.org/liberty/config-reference/content/"
"ch_configuring-openstack-image-service.html>`__ section of the OpenStack "
"Configuration Reference."
msgstr ""
#: ../compute-images-instances.rst:35
msgid ""
"For more information about flavors, see :ref:`compute-flavors` or `Flavors "
"<http://docs.openstack.org/openstack-ops/content/flavors.html>`__ in the "
"OpenStack Operations Guide."
msgstr ""
#: ../compute-images-instances.rst:39
msgid ""
"You can add and remove additional resources from running instances, such as "
"persistent volume storage, or public IP addresses. The example used in this "
"chapter is of a typical virtual system within an OpenStack cloud. It uses "
"the ``cinder-volume`` service, which provides persistent block storage, "
"instead of the ephemeral storage provided by the selected instance flavor."
msgstr ""
#: ../compute-images-instances.rst:46
msgid ""
"This diagram shows the system state prior to launching an instance. The "
"image store, fronted by the Image service (glance) has a number of "
"predefined images. Inside the cloud, a compute node contains the available "
"vCPU, memory, and local disk resources. Additionally, the ``cinder-volume`` "
"service provides a number of predefined volumes."
msgstr ""
#: ../compute-images-instances.rst:52
msgid "|Base image state with no running instances|"
msgstr ""
#: ../compute-images-instances.rst:54
msgid ""
"To launch an instance select an image, flavor, and any optional attributes. "
"The selected flavor provides a root volume, labeled ``vda`` in this diagram, "
"and additional ephemeral storage, labeled ``vdb``. In this example, the "
"``cinder-volume`` store is mapped to the third virtual disk on this "
"instance, ``vdc``."
msgstr ""
#: ../compute-images-instances.rst:60
msgid "|Instance creation from image and runtime state|"
msgstr ""
#: ../compute-images-instances.rst:62
msgid ""
"The base image is copied from the image store to the local disk. The local "
"disk is the first disk that the instance accesses, labeled ``vda`` in this "
"diagram. Your instances will start up faster if you use smaller images, as "
"less data needs to be copied across the network."
msgstr ""
#: ../compute-images-instances.rst:67
msgid ""
"A new empty ephemeral disk is also created, labeled ``vdb`` in this diagram. "
"This disk is deleted when you delete the instance."
msgstr ""
#: ../compute-images-instances.rst:70
msgid ""
"The compute node connects to the attached ``cinder-volume`` using iSCSI. The "
"``cinder-volume`` is mapped to the third disk, labeled ``vdc`` in this "
"diagram. After the compute node provisions the vCPU and memory resources, "
"the instance boots up from root volume ``vda``. The instance runs and "
"changes data on the disks (highlighted in red on the diagram). If the volume "
"store is located on a separate network, the ``my_block_storage_ip`` option "
"specified in the storage node configuration file directs image traffic to "
"the compute node."
msgstr ""
#: ../compute-images-instances.rst:81
msgid ""
"Some details in this example scenario might be different in your "
"environment. For example, you might use a different type of back-end "
"storage, or different network protocols. One common variant is that the "
"ephemeral storage used for volumes ``vda`` and ``vdb`` could be backed by "
"network storage rather than a local disk."
msgstr ""
#: ../compute-images-instances.rst:87
msgid ""
"When the instance is deleted, the state is reclaimed with the exception of "
"the persistent volume. The ephemeral storage is purged; memory and vCPU "
"resources are released. The image remains unchanged throughout this process."
msgstr ""
#: ../compute-images-instances.rst:92
msgid "|End state of image and volume after instance exits|"
msgstr ""
#: ../compute-images-instances.rst:96
msgid "Image management"
msgstr ""
#: ../compute-images-instances.rst:98
msgid ""
"The OpenStack Image service discovers, registers, and retrieves virtual "
"machine images. The service also includes a RESTful API that allows you to "
"query VM image metadata and retrieve the actual image with HTTP requests. "
"For more information about the API, see the `OpenStack API Complete "
"Reference <http://developer.openstack.org/api-ref.html>`__ and the `Python "
"API <http://docs.openstack.org/developer/python-glanceclient/>`__."
msgstr ""
#: ../compute-images-instances.rst:106
msgid ""
"The OpenStack Image service can be controlled using a command-line tool. For "
"more information about using the OpenStack Image command-line tool, see the "
"`Manage Images <http://docs.openstack.org/user-guide/common/"
"cli_manage_images.html>`__ section in the OpenStack End User Guide."
msgstr ""
#: ../compute-images-instances.rst:112
msgid ""
"Virtual images that have been made available through the Image service can "
"be stored in a variety of ways. In order to use these services, you must "
"have a working installation of the Image service, with a working endpoint, "
"and users that have been created in OpenStack Identity. Additionally, you "
"must meet the environment variables required by the Compute and Image "
"service clients."
msgstr ""
#: ../compute-images-instances.rst:119
msgid "The Image service supports these back-end stores:"
msgstr ""
#: ../compute-images-instances.rst:122
msgid ""
"The OpenStack Image service stores virtual machine images in the file system "
"back end by default. This simple back end writes image files to the local "
"file system."
msgstr ""
#: ../compute-images-instances.rst:124
msgid "File system"
msgstr ""
# #-#-#-#-# compute-images-instances.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# objectstorage.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../compute-images-instances.rst:127 ../objectstorage.rst:3
msgid "Object Storage"
msgstr ""
#: ../compute-images-instances.rst:127
msgid "The OpenStack highly available service for storing objects."
msgstr ""
#: ../compute-images-instances.rst:130
msgid "The OpenStack highly available service for storing blocks."
msgstr ""
#: ../compute-images-instances.rst:133
msgid "ESX/ESXi or vCenter Server target system."
msgstr ""
#: ../compute-images-instances.rst:133
msgid "VMware"
msgstr ""
#: ../compute-images-instances.rst:136
msgid "S3"
msgstr ""
#: ../compute-images-instances.rst:136
msgid "The Amazon S3 service."
msgstr ""
#: ../compute-images-instances.rst:139
msgid ""
"OpenStack Image service can read virtual machine images that are available "
"on the Internet using HTTP. This store is read only."
msgstr ""
#: ../compute-images-instances.rst:140
msgid "HTTP"
msgstr ""
#: ../compute-images-instances.rst:143
msgid ""
"Stores images inside of a Ceph storage cluster using Ceph's RBD interface."
msgstr ""
#: ../compute-images-instances.rst:144
msgid "RADOS Block Device (RBD)"
msgstr ""
#: ../compute-images-instances.rst:147
msgid "A distributed storage system for QEMU/KVM."
msgstr ""
#: ../compute-images-instances.rst:147
msgid "Sheepdog"
msgstr ""
#: ../compute-images-instances.rst:150
msgid "Stores images using MongoDB."
msgstr ""
#: ../compute-images-instances.rst:151
msgid "GridFS"
msgstr ""
#: ../compute-images-instances.rst:154
msgid "Image properties and property protection"
msgstr ""
#: ../compute-images-instances.rst:155
msgid ""
"An image property is a key and value pair that the cloud administrator or "
"the image owner attaches to an OpenStack Image service image, as follows:"
msgstr ""
#: ../compute-images-instances.rst:159
msgid ""
"The cloud administrator defines core properties, such as the image name."
msgstr ""
#: ../compute-images-instances.rst:162
msgid ""
"The cloud administrator and the image owner can define additional "
"properties, such as licensing and billing information."
msgstr ""
#: ../compute-images-instances.rst:165
msgid ""
"The cloud administrator can configure any property as protected, which "
"limits which policies or user roles can perform CRUD operations on that "
"property. Protected properties are generally additional properties to which "
"only cloud administrators have access."
msgstr ""
#: ../compute-images-instances.rst:170
msgid ""
"For unprotected image properties, the cloud administrator can manage core "
"properties and the image owner can manage additional properties."
msgstr ""
#: ../compute-images-instances.rst:174
msgid "**To configure property protection**"
msgstr ""
#: ../compute-images-instances.rst:176
msgid ""
"To configure property protection, the cloud administrator completes these "
"steps:"
msgstr ""
#: ../compute-images-instances.rst:179
msgid "Define roles or policies in the :file:`policy.json` file::"
msgstr ""
#: ../compute-images-instances.rst:242
msgid ""
"For each parameter, use ``\"rule:restricted\"`` to restrict access to all "
"users or ``\"role:admin\"`` to limit access to administrator roles. For "
"example::"
msgstr ""
#: ../compute-images-instances.rst:249
msgid ""
"Define which roles or policies can manage which properties in a property "
"protections configuration file. For example::"
msgstr ""
#: ../compute-images-instances.rst:270
msgid "A value of ``@`` allows the corresponding operation for a property."
msgstr ""
#: ../compute-images-instances.rst:272
msgid "A value of ``!`` disallows the corresponding operation for a property."
msgstr ""
#: ../compute-images-instances.rst:275
msgid ""
"In the :file:`glance-api.conf` file, define the location of a property "
"protections configuration file::"
msgstr ""
#: ../compute-images-instances.rst:280
msgid ""
"This file contains the rules for property protections and the roles and "
"policies associated with it."
msgstr ""
#: ../compute-images-instances.rst:283
msgid "By default, property protections are not enforced."
msgstr ""
#: ../compute-images-instances.rst:285
msgid ""
"If you specify a file name value and the file is not found, the `glance-api` "
"service does not start."
msgstr ""
#: ../compute-images-instances.rst:288 ../compute-images-instances.rst:298
msgid ""
"To view a sample configuration file, see `glance-api.conf <http://docs."
"openstack.org/liberty/config-reference/content/section_glance-api.conf."
"html>`__."
msgstr ""
#: ../compute-images-instances.rst:291
msgid ""
"Optionally, in the :file:`glance-api.conf` file, specify whether roles or "
"policies are used in the property protections configuration file::"
msgstr ""
#: ../compute-images-instances.rst:296
msgid "The default is ``roles``."
msgstr ""
#: ../compute-images-instances.rst:302
msgid "Image download: how it works"
msgstr ""
#: ../compute-images-instances.rst:303
msgid ""
"Prior to starting a virtual machine, the virtual machine image used must be "
"transferred to the compute node from the Image service. How this works can "
"change depending on the settings chosen for the compute node and the Image "
"service."
msgstr ""
#: ../compute-images-instances.rst:308
msgid ""
"Typically, the Compute service will use the image identifier passed to it by "
"the scheduler service and request the image from the Image API. Though "
"images are not stored in glance—rather in a back end, which could be Object "
"Storage, a filesystem or any other supported method—the connection is made "
"from the compute node to the Image service and the image is transferred over "
"this connection. The Image service streams the image from the back end to "
"the compute node."
msgstr ""
#: ../compute-images-instances.rst:316
msgid ""
"It is possible to set up the Object Storage node on a separate network, and "
"still allow image traffic to flow between the Compute and Object Storage "
"nodes. Configure the ``my_block_storage_ip`` option in the storage node "
"configuration to allow block storage traffic to reach the Compute node."
msgstr ""
#: ../compute-images-instances.rst:322
msgid ""
"Certain back ends support a more direct method, where on request the Image "
"service will return a URL that can be used to download the image directly "
"from the back-end store. Currently the only store to support the direct "
"download approach is the filesystem store. It can be configured using the "
"``filesystems`` option in the ``image_file_url`` section of the :file:`nova."
"conf` file on compute nodes."
msgstr ""
#: ../compute-images-instances.rst:329
msgid ""
"Compute nodes also implement caching of images, meaning that if an image has "
"been used before it won't necessarily be downloaded every time. Information "
"on the configuration options for caching on compute nodes can be found in "
"the `Configuration Reference <http://docs.openstack.org/liberty/config-"
"reference/content/>`__."
msgstr ""
#: ../compute-images-instances.rst:336
msgid "Instance building blocks"
msgstr ""
#: ../compute-images-instances.rst:338
msgid ""
"In OpenStack, the base operating system is usually copied from an image "
"stored in the OpenStack Image service. This results in an ephemeral instance "
"that starts from a known template state and loses all accumulated states on "
"shutdown."
msgstr ""
#: ../compute-images-instances.rst:343
msgid ""
"You can also put an operating system on a persistent volume in Compute or "
"the Block Storage volume system. This gives a more traditional, persistent "
"system that accumulates states that are preserved across restarts. To get a "
"list of available images on your system, run:"
msgstr ""
# #-#-#-#-# compute_arch.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# compute-images-instances.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../compute-images-instances.rst:364 ../compute_arch.rst:259
msgid "The displayed image attributes are:"
msgstr ""
#: ../compute-images-instances.rst:367
msgid "Automatically generated UUID of the image."
msgstr ""
# #-#-#-#-# compute_arch.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# compute-images-instances.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../compute-images-instances.rst:367 ../compute_arch.rst:262
msgid "``ID``"
msgstr ""
#: ../compute-images-instances.rst:370
msgid "Free form, human-readable name for the image."
msgstr ""
# #-#-#-#-# compute_arch.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# compute-images-instances.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../compute-images-instances.rst:370 ../compute_arch.rst:265
msgid "``Name``"
msgstr ""
# #-#-#-#-# compute_arch.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# compute-images-instances.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../compute-images-instances.rst:373 ../compute_arch.rst:268
msgid ""
"The status of the image. Images marked ``ACTIVE`` are available for use."
msgstr ""
# #-#-#-#-# compute_arch.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# compute-images-instances.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../compute-images-instances.rst:374 ../compute_arch.rst:269
msgid "``Status``"
msgstr ""
# #-#-#-#-# compute_arch.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# compute-images-instances.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../compute-images-instances.rst:377 ../compute_arch.rst:272
msgid ""
"For images that are created as snapshots of running instances, this is the "
"UUID of the instance the snapshot derives from. For uploaded images, this "
"field is blank."
msgstr ""
# #-#-#-#-# compute_arch.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# compute-images-instances.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../compute-images-instances.rst:379 ../compute_arch.rst:274
msgid "``Server``"
msgstr ""
#: ../compute-images-instances.rst:381
msgid ""
"Virtual hardware templates are called ``flavors``. The default installation "
"provides five predefined flavors."
msgstr ""
#: ../compute-images-instances.rst:384
msgid "For a list of flavors that are available on your system, run:"
msgstr ""
#: ../compute-images-instances.rst:399
msgid ""
"By default, administrative users can configure the flavors. You can change "
"this behavior by redefining the access controls for ``compute_extension:"
"flavormanage`` in :file:`/etc/nova/policy.json` on the ``compute-api`` "
"server."
msgstr ""
#: ../compute-images-instances.rst:406
msgid "Instance management tools"
msgstr ""
#: ../compute-images-instances.rst:408
msgid ""
"OpenStack provides command-line, web interface, and API-based instance "
"management tools. Third-party management tools are also available, using "
"either the native API or the provided EC2-compatible API."
msgstr ""
#: ../compute-images-instances.rst:412
msgid ""
"The OpenStack python-novaclient package provides a basic command-line "
"utility, which uses the :command:`nova` command. This is available as a "
"native package for most Linux distributions, or you can install the latest "
"version using the pip python package installer:"
msgstr ""
#: ../compute-images-instances.rst:421
msgid ""
"For more information about python-novaclient and other command-line tools, "
"see the `OpenStack End User Guide <http://docs.openstack.org/user-guide/"
"index.html>`__."
msgstr ""
#: ../compute-images-instances.rst:427
msgid "Control where instances run"
msgstr ""
#: ../compute-images-instances.rst:428
msgid ""
"The `OpenStack Configuration Reference <http://docs.openstack.org/liberty/"
"config-reference/content/>`__ provides detailed information on controlling "
"where your instances run, including ensuring a set of instances run on "
"different compute nodes for service resiliency or on the same node for high "
"performance inter-instance communications."
msgstr ""
#: ../compute-images-instances.rst:435
msgid ""
"Administrative users can specify which compute node their instances run on. "
"To do this, specify the ``--availability-zone AVAILABILITY_ZONE:"
"COMPUTE_HOST`` parameter."
msgstr ""
#: ../compute-live-migration-usage.rst:0
msgid "**nova service-list**"
msgstr ""
#: ../compute-live-migration-usage.rst:5
msgid "Migrate instances"
msgstr ""
#: ../compute-live-migration-usage.rst:7
msgid ""
"This section discusses how to migrate running instances from one OpenStack "
"Compute server to another OpenStack Compute server."
msgstr ""
#: ../compute-live-migration-usage.rst:10
msgid "Before starting a migration, review the Configure migrations section."
msgstr ""
#: ../compute-live-migration-usage.rst:22
msgid "**Migrating instances**"
msgstr ""
#: ../compute-live-migration-usage.rst:24
msgid "Check the ID of the instance to be migrated:"
msgstr ""
#: ../compute-live-migration-usage.rst:34
msgid "ID"
msgstr ""
# #-#-#-#-# compute-live-migration-usage.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# shared_file_systems_crud_share.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# ts-eql-volume-size.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../compute-live-migration-usage.rst:36
#: ../compute-live-migration-usage.rst:109
#: ../shared_file_systems_crud_share.rst:383 ../ts-eql-volume-size.rst:107
msgid "Status"
msgstr ""
#: ../compute-live-migration-usage.rst:37
msgid "Networks"
msgstr ""
#: ../compute-live-migration-usage.rst:38
#: ../compute-live-migration-usage.rst:88
msgid "d1df1b5a-70c4-4fed-98b7-423362f2c47c"
msgstr ""
#: ../compute-live-migration-usage.rst:39
#: ../compute-live-migration-usage.rst:90
msgid "vm1"
msgstr ""
#: ../compute-live-migration-usage.rst:40
#: ../compute-live-migration-usage.rst:44
#: ../compute-live-migration-usage.rst:94
msgid "ACTIVE"
msgstr ""
#: ../compute-live-migration-usage.rst:41
msgid "private=a.b.c.d"
msgstr ""
#: ../compute-live-migration-usage.rst:42
msgid "d693db9e-a7cf-45ef-a7c9-b3ecb5f22645"
msgstr ""
#: ../compute-live-migration-usage.rst:43
msgid "vm2"
msgstr ""
#: ../compute-live-migration-usage.rst:45
msgid "private=e.f.g.h"
msgstr ""
#: ../compute-live-migration-usage.rst:47
msgid ""
"Check the information associated with the instance. In this example, ``vm1`` "
"is running on ``HostB``:"
msgstr ""
#: ../compute-live-migration-usage.rst:58
msgid "Property"
msgstr ""
# #-#-#-#-# compute-live-migration-usage.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# networking_adv-config.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../compute-live-migration-usage.rst:59 ../networking_adv-config.rst:26
msgid "Value"
msgstr ""
#: ../compute-live-migration-usage.rst:60
#: ../compute-live-migration-usage.rst:64
#: ../compute-live-migration-usage.rst:77
#: ../compute-live-migration-usage.rst:80
#: ../compute-live-migration-usage.rst:84
#: ../compute-live-migration-usage.rst:96
msgid "..."
msgstr ""
#: ../compute-live-migration-usage.rst:62
msgid "OS-EXT-SRV-ATTR:host"
msgstr ""
#: ../compute-live-migration-usage.rst:66
msgid "flavor"
msgstr ""
# #-#-#-#-# compute-live-migration-usage.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# database.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# telemetry-data-collection.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../compute-live-migration-usage.rst:71 ../database.rst:108
#: ../telemetry-data-collection.rst:691
msgid "name"
msgstr ""
#: ../compute-live-migration-usage.rst:73
msgid "private network"
msgstr ""
# #-#-#-#-# compute-live-migration-usage.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# networking_multi-dhcp-agents.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../compute-live-migration-usage.rst:82
#: ../compute-live-migration-usage.rst:135
#: ../networking_multi-dhcp-agents.rst:48
msgid "HostB"
msgstr ""
#: ../compute-live-migration-usage.rst:86
msgid "m1.tiny"
msgstr ""
#: ../compute-live-migration-usage.rst:92
msgid "a.b.c.d"
msgstr ""
#: ../compute-live-migration-usage.rst:98
msgid ""
"Select the compute node the instance will be migrated to. In this example, "
"we will migrate the instance to ``HostC``, because nova-compute is running "
"on it:"
msgstr ""
#: ../compute-live-migration-usage.rst:106
msgid "Binary"
msgstr ""
# #-#-#-#-# compute-live-migration-usage.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# networking_multi-dhcp-agents.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../compute-live-migration-usage.rst:107
#: ../networking_multi-dhcp-agents.rst:39
msgid "Host"
msgstr ""
#: ../compute-live-migration-usage.rst:108
msgid "Zone"
msgstr ""
#: ../compute-live-migration-usage.rst:110
msgid "State"
msgstr ""
#: ../compute-live-migration-usage.rst:111
msgid "Updated_at"
msgstr ""
#: ../compute-live-migration-usage.rst:112
msgid "Disabled Reason"
msgstr ""
#: ../compute-live-migration-usage.rst:113
msgid "nova-consoleauth"
msgstr ""
# #-#-#-#-# compute-live-migration-usage.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# networking_multi-dhcp-agents.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../compute-live-migration-usage.rst:114
#: ../compute-live-migration-usage.rst:121
#: ../compute-live-migration-usage.rst:128
#: ../compute-live-migration-usage.rst:149
#: ../networking_multi-dhcp-agents.rst:46
msgid "HostA"
msgstr ""
#: ../compute-live-migration-usage.rst:115
#: ../compute-live-migration-usage.rst:122
#: ../compute-live-migration-usage.rst:129
#: ../compute-live-migration-usage.rst:150
msgid "internal"
msgstr ""
# #-#-#-#-# compute-live-migration-usage.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# telemetry-alarms.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../compute-live-migration-usage.rst:116
#: ../compute-live-migration-usage.rst:123
#: ../compute-live-migration-usage.rst:130
#: ../compute-live-migration-usage.rst:137
#: ../compute-live-migration-usage.rst:144
#: ../compute-live-migration-usage.rst:151 ../telemetry-alarms.rst:189
msgid "enabled"
msgstr ""
#: ../compute-live-migration-usage.rst:117
#: ../compute-live-migration-usage.rst:124
#: ../compute-live-migration-usage.rst:131
#: ../compute-live-migration-usage.rst:138
#: ../compute-live-migration-usage.rst:145
#: ../compute-live-migration-usage.rst:152
msgid "up"
msgstr ""
#: ../compute-live-migration-usage.rst:118
#: ../compute-live-migration-usage.rst:125
msgid "2014-03-25T10:33:25.000000"
msgstr ""
#: ../compute-live-migration-usage.rst:120
msgid "nova-scheduler"
msgstr ""
#: ../compute-live-migration-usage.rst:127
msgid "nova-conductor"
msgstr ""
#: ../compute-live-migration-usage.rst:132
msgid "2014-03-25T10:33:27.000000"
msgstr ""
#: ../compute-live-migration-usage.rst:134
#: ../compute-live-migration-usage.rst:141
msgid "nova-compute"
msgstr ""
#: ../compute-live-migration-usage.rst:136
#: ../compute-live-migration-usage.rst:143
msgid "nova"
msgstr ""
#: ../compute-live-migration-usage.rst:139
#: ../compute-live-migration-usage.rst:146
#: ../compute-live-migration-usage.rst:153
msgid "2014-03-25T10:33:31.000000"
msgstr ""
#: ../compute-live-migration-usage.rst:142
#: ../compute-live-migration-usage.rst:171
#: ../compute-live-migration-usage.rst:172
#: ../compute-live-migration-usage.rst:173
#: ../compute-live-migration-usage.rst:174
#: ../compute-live-migration-usage.rst:175
msgid "HostC"
msgstr ""
#: ../compute-live-migration-usage.rst:148
msgid "nova-cert"
msgstr ""
#: ../compute-live-migration-usage.rst:156
msgid "Check that ``HostC`` has enough resources for migration:"
msgstr ""
#: ../compute-live-migration-usage.rst:166
msgid "HOST"
msgstr ""
#: ../compute-live-migration-usage.rst:167
msgid "PROJECT"
msgstr ""
# #-#-#-#-# compute-live-migration-usage.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# telemetry-measurements.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../compute-live-migration-usage.rst:168 ../telemetry-measurements.rst:119
msgid "cpu"
msgstr ""
#: ../compute-live-migration-usage.rst:169
msgid "memory_mb"
msgstr ""
#: ../compute-live-migration-usage.rst:170
msgid "disk_bg"
msgstr ""
#: ../compute-live-migration-usage.rst:176
msgid "(total)"
msgstr ""
#: ../compute-live-migration-usage.rst:177
msgid "(used_now)"
msgstr ""
#: ../compute-live-migration-usage.rst:178
msgid "(used_max)"
msgstr ""
#: ../compute-live-migration-usage.rst:179
msgid "p1"
msgstr ""
#: ../compute-live-migration-usage.rst:180
msgid "p2"
msgstr ""
#: ../compute-live-migration-usage.rst:181
msgid "32232"
msgstr ""
#: ../compute-live-migration-usage.rst:182
#: ../compute-live-migration-usage.rst:183
#: ../compute-live-migration-usage.rst:184
#: ../compute-live-migration-usage.rst:185
msgid "21284"
msgstr ""
#: ../compute-live-migration-usage.rst:186
msgid "878"
msgstr ""
#: ../compute-live-migration-usage.rst:187
msgid "442"
msgstr ""
#: ../compute-live-migration-usage.rst:188
#: ../compute-live-migration-usage.rst:189
#: ../compute-live-migration-usage.rst:190
msgid "422"
msgstr ""
#: ../compute-live-migration-usage.rst:192
msgid "``cpu``: Number of CPUs"
msgstr ""
#: ../compute-live-migration-usage.rst:194
msgid "``memory_mb``: Total amount of memory, in MB"
msgstr ""
#: ../compute-live-migration-usage.rst:196
msgid "``disk_gb``: Total amount of space for NOVA-INST-DIR/instances, in GB"
msgstr ""
#: ../compute-live-migration-usage.rst:198
msgid ""
"In this table, the first row shows the total amount of resources available "
"on the physical server. The second line shows the currently used resources. "
"The third line shows the maximum used resources. The fourth line and below "
"shows the resources available for each project."
msgstr ""
#: ../compute-live-migration-usage.rst:203
msgid "Migrate the instance using the :command:`nova live-migration` command:"
msgstr ""
#: ../compute-live-migration-usage.rst:209
msgid ""
"In this example, SERVER can be the ID or name of the instance. Another "
"example:"
msgstr ""
#: ../compute-live-migration-usage.rst:219
msgid ""
"When using live migration to move workloads between Icehouse and Juno "
"compute nodes, it may cause data loss because libvirt live migration with "
"shared block storage was buggy (potential loss of data) before version 3.32. "
"This issue can be solved when we upgrade to RPC API version 4.0."
msgstr ""
#: ../compute-live-migration-usage.rst:225
msgid ""
"Check that the instance has been migrated successfully, using :command:`nova "
"list`. If the instance is still running on ``HostB``, check the log files "
"at :file:`src/dest` for nova-compute and nova-scheduler to determine why."
msgstr ""
# #-#-#-#-# compute-manage-logs.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# identity_logging.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../compute-manage-logs.rst:5 ../identity_logging.rst:3
msgid "Logging"
msgstr ""
#: ../compute-manage-logs.rst:8
msgid "Logging module"
msgstr ""
#: ../compute-manage-logs.rst:10
msgid ""
"Logging behavior can be changed by creating a configuration file. To specify "
"the configuration file, add this line to the :file:`/etc/nova/nova.conf` "
"file:"
msgstr ""
#: ../compute-manage-logs.rst:18
msgid ""
"To change the logging level, add ``DEBUG``, ``INFO``, ``WARNING``, or "
"``ERROR`` as a parameter."
msgstr ""
#: ../compute-manage-logs.rst:21
msgid ""
"The logging configuration file is an INI-style configuration file, which "
"must contain a section called ``logger_nova``. This controls the behavior of "
"the logging facility in the ``nova-*`` services. For example:"
msgstr ""
#: ../compute-manage-logs.rst:33
msgid ""
"This example sets the debugging level to ``INFO`` (which is less verbose "
"than the default ``DEBUG`` setting)."
msgstr ""
#: ../compute-manage-logs.rst:36
msgid ""
"For more about the logging configuration syntax, including the ``handlers`` "
"and ``quaname`` variables, see the `Python documentation <http://docs.python."
"org/release/2.7/library/logging.html#configuration-file-format>`__ on "
"logging configuration files."
msgstr ""
#: ../compute-manage-logs.rst:41
msgid ""
"For an example :file:`logging.conf` file with various defined handlers, see "
"the `OpenStack Configuration Reference <http://docs.openstack.org/liberty/"
"config-reference/content/>`__."
msgstr ""
#: ../compute-manage-logs.rst:45
msgid "Syslog"
msgstr ""
#: ../compute-manage-logs.rst:47
msgid ""
"OpenStack Compute services can send logging information to syslog. This is "
"useful if you want to use rsyslog to forward logs to a remote machine. "
"Separately configure the Compute service (nova), the Identity service "
"(keystone), the Image service (glance), and, if you are using it, the Block "
"Storage service (cinder) to send log messages to syslog. Open these "
"configuration files:"
msgstr ""
#: ../compute-manage-logs.rst:54
msgid ":file:`/etc/nova/nova.conf`"
msgstr ""
#: ../compute-manage-logs.rst:56
msgid ":file:`/etc/keystone/keystone.conf`"
msgstr ""
#: ../compute-manage-logs.rst:58
msgid ":file:`/etc/glance/glance-api.conf`"
msgstr ""
#: ../compute-manage-logs.rst:60
msgid ":file:`/etc/glance/glance-registry.conf`"
msgstr ""
#: ../compute-manage-logs.rst:62
msgid ":file:`/etc/cinder/cinder.conf`"
msgstr ""
#: ../compute-manage-logs.rst:64
msgid "In each configuration file, add these lines:"
msgstr ""
#: ../compute-manage-logs.rst:73
msgid ""
"In addition to enabling syslog, these settings also turn off verbose and "
"debugging output from the log."
msgstr ""
#: ../compute-manage-logs.rst:78
msgid ""
"Although this example uses the same local facility for each service "
"(``LOG_LOCAL0``, which corresponds to syslog facility ``LOCAL0``), we "
"recommend that you configure a separate local facility for each service, as "
"this provides better isolation and more flexibility. For example, you can "
"capture logging information at different severity levels for different "
"services. syslog allows you to define up to eight local facilities, "
"``LOCAL0, LOCAL1, ..., LOCAL7``. For more information, see the syslog "
"documentation."
msgstr ""
#: ../compute-manage-logs.rst:88
msgid "Rsyslog"
msgstr ""
#: ../compute-manage-logs.rst:90
msgid ""
"rsyslog is useful for setting up a centralized log server across multiple "
"machines. This section briefly describe the configuration to set up an "
"rsyslog server. A full treatment of rsyslog is beyond the scope of this "
"book. This section assumes rsyslog has already been installed on your hosts "
"(it is installed by default on most Linux distributions)."
msgstr ""
#: ../compute-manage-logs.rst:97
msgid ""
"This example provides a minimal configuration for :file:`/etc/rsyslog.conf` "
"on the log server host, which receives the log files"
msgstr ""
#: ../compute-manage-logs.rst:106
msgid ""
"Add a filter rule to :file:`/etc/rsyslog.conf` which looks for a host name. "
"This example uses COMPUTE_01 as the compute host name:"
msgstr ""
#: ../compute-manage-logs.rst:113
msgid ""
"On each compute host, create a file named :file:`/etc/rsyslog.d/60-nova."
"conf`, with the following content:"
msgstr ""
#: ../compute-manage-logs.rst:123
msgid ""
"Once you have created the file, restart the rsyslog service. Error-level log "
"messages on the compute hosts should now be sent to the log server."
msgstr ""
#: ../compute-manage-logs.rst:127
msgid "Serial console"
msgstr ""
#: ../compute-manage-logs.rst:129
msgid ""
"The serial console provides a way to examine kernel output and other system "
"messages during troubleshooting if the instance lacks network connectivity."
msgstr ""
#: ../compute-manage-logs.rst:133
msgid ""
"Read-only access from server serial console is possible using the ``os-"
"GetSerialOutput`` server action. Most cloud images enable this feature by "
"default. For more information, see :ref:`compute-common-errors-and-fixes`."
msgstr ""
#: ../compute-manage-logs.rst:138
msgid ""
"OpenStack Juno and later supports read-write access using the serial console "
"using the ``os-GetSerialConsole`` server action. This feature also requires "
"a websocket client to access the serial console."
msgstr ""
#: ../compute-manage-logs.rst:142
msgid "**Configuring read-write serial console access**"
msgstr ""
#: ../compute-manage-logs.rst:144
msgid "On a compute node, edit the :file:`/etc/nova/nova.conf` file:"
msgstr ""
#: ../compute-manage-logs.rst:146
msgid "In the ``[serial_console]`` section, enable the serial console:"
msgstr ""
#: ../compute-manage-logs.rst:154
msgid ""
"In the ``[serial_console]`` section, configure the serial console proxy "
"similar to graphical console proxies:"
msgstr ""
#: ../compute-manage-logs.rst:165
msgid ""
"The ``base_url`` option specifies the base URL that clients receive from the "
"API upon requesting a serial console. Typically, this refers to the host "
"name of the controller node."
msgstr ""
#: ../compute-manage-logs.rst:169
msgid ""
"The ``listen`` option specifies the network interface nova-compute should "
"listen on for virtual console connections. Typically, 0.0.0.0 will enable "
"listening on all interfaces."
msgstr ""
#: ../compute-manage-logs.rst:173
msgid ""
"The ``proxyclient_address`` option specifies which network interface the "
"proxy should connect to. Typically, this refers to the IP address of the "
"management interface."
msgstr ""
#: ../compute-manage-logs.rst:177
msgid ""
"When you enable read-write serial console access, Compute will add serial "
"console information to the Libvirt XML file for the instance. For example:"
msgstr ""
#: ../compute-manage-logs.rst:190
msgid "**Accessing the serial console on an instance**"
msgstr ""
#: ../compute-manage-logs.rst:192
msgid ""
"Use the :command:`nova get-serial-proxy` command to retrieve the websocket "
"URL for the serial console on the instance:"
msgstr ""
#: ../compute-manage-logs.rst:204
msgid "Url"
msgstr ""
#: ../compute-manage-logs.rst:205
msgid "serial"
msgstr ""
#: ../compute-manage-logs.rst:206
msgid "ws://127.0.0.1:6083/?token=18510769-71ad-4e5a-8348-4218b5613b3d"
msgstr ""
#: ../compute-manage-logs.rst:208
msgid "Alternatively, use the API directly:"
msgstr ""
#: ../compute-manage-logs.rst:221
msgid ""
"Use Python websocket with the URL to generate ``.send``, ``.recv``, and ``."
"fileno`` methods for serial console access. For example:"
msgstr ""
#: ../compute-manage-logs.rst:231
msgid ""
"Alternatively, use a `Python websocket client <https://github.com/larsks/"
"novaconsole/>`__."
msgstr ""
#: ../compute-manage-logs.rst:235
msgid ""
"When you enable the serial console, typical instance logging using the :"
"command:`nova console-log` command is disabled. Kernel output and other "
"system messages will not be visible unless you are actively viewing the "
"serial console."
msgstr ""
#: ../compute-manage-the-cloud.rst:5
msgid "Manage the cloud"
msgstr ""
#: ../compute-manage-the-cloud.rst:12
msgid ""
"System administrators can use :command:`nova` client and :command:"
"`euca2ools` commands to manage their clouds."
msgstr ""
#: ../compute-manage-the-cloud.rst:15
msgid ""
"``nova`` client and ``euca2ools`` can be used by all users, though specific "
"commands might be restricted by Role Based Access Control in the Identity "
"Service."
msgstr ""
#: ../compute-manage-the-cloud.rst:19
msgid "**Managing the cloud with nova client**"
msgstr ""
#: ../compute-manage-the-cloud.rst:21
msgid ""
"The python-novaclient package provides a ``nova`` shell that enables Compute "
"API interactions from the command line. Install the client, and provide your "
"user name and password (which can be set as environment variables for "
"convenience), for the ability to administer the cloud from the command line."
msgstr ""
#: ../compute-manage-the-cloud.rst:27
msgid ""
"To install python-novaclient, download the tarball from `http://pypi.python."
"org/pypi/python-novaclient/#downloads <http://pypi.python.org/pypi/python-"
"novaclient/#downloads>`__ and then install it in your favorite Python "
"environment:"
msgstr ""
#: ../compute-manage-the-cloud.rst:37
msgid "As root, run:"
msgstr ""
#: ../compute-manage-the-cloud.rst:43
msgid "Confirm the installation was successful:"
msgstr ""
#: ../compute-manage-the-cloud.rst:62
msgid ""
"Running :command:`nova help` returns a list of ``nova`` commands and "
"parameters. To get help for a subcommand, run:"
msgstr ""
#: ../compute-manage-the-cloud.rst:69
msgid ""
"For a complete list of ``nova`` commands and parameters, see the `OpenStack "
"Command-Line Reference <http://docs.openstack.org/cli-reference/content/"
"novaclient_commands.html>`__."
msgstr ""
#: ../compute-manage-the-cloud.rst:72
msgid ""
"Set the required parameters as environment variables to make running "
"commands easier. For example, you can add :option:`--os-username` as a "
"``nova`` option, or set it as an environment variable. To set the user name, "
"password, and tenant as environment variables, use:"
msgstr ""
#: ../compute-manage-the-cloud.rst:83
msgid ""
"The Identity service will give you an authentication endpoint, which Compute "
"recognizes as ``OS_AUTH_URL``:"
msgstr ""
#: ../compute-manage-users.rst:5
msgid "Manage Compute users"
msgstr ""
#: ../compute-manage-users.rst:7
msgid ""
"Access to the Euca2ools (ec2) API is controlled by an access key and a "
"secret key. The user's access key needs to be included in the request, and "
"the request must be signed with the secret key. Upon receipt of API "
"requests, Compute verifies the signature and runs commands on behalf of the "
"user."
msgstr ""
#: ../compute-manage-users.rst:13
msgid ""
"To begin using Compute, you must create a user with the Identity service."
msgstr ""
#: ../compute-manage-volumes.rst:0
msgid "**nova volume commands**"
msgstr ""
#: ../compute-manage-volumes.rst:5
msgid ""
"Depending on the setup of your cloud provider, they may give you an endpoint "
"to use to manage volumes, or there may be an extension under the covers. In "
"either case, you can use the ``nova`` CLI to manage volumes."
msgstr ""
# #-#-#-#-# compute-manage-volumes.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# networking_adv-features.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# networking_config-agents.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# networking_use.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../compute-manage-volumes.rst:13 ../networking_adv-features.rst:207
#: ../networking_adv-features.rst:375 ../networking_adv-features.rst:510
#: ../networking_adv-features.rst:802 ../networking_config-agents.rst:471
#: ../networking_use.rst:47 ../networking_use.rst:123
#: ../networking_use.rst:183 ../networking_use.rst:241
msgid "Command"
msgstr ""
#: ../compute-manage-volumes.rst:15
msgid "volume-attach"
msgstr ""
#: ../compute-manage-volumes.rst:16
msgid "Attach a volume to a server."
msgstr ""
#: ../compute-manage-volumes.rst:17
msgid "volume-create"
msgstr ""
#: ../compute-manage-volumes.rst:18
msgid "Add a new volume."
msgstr ""
#: ../compute-manage-volumes.rst:19
msgid "volume-delete"
msgstr ""
#: ../compute-manage-volumes.rst:20
msgid "Remove a volume."
msgstr ""
#: ../compute-manage-volumes.rst:21
msgid "volume-detach"
msgstr ""
#: ../compute-manage-volumes.rst:22
msgid "Detach a volume from a server."
msgstr ""
#: ../compute-manage-volumes.rst:23
msgid "volume-list"
msgstr ""
#: ../compute-manage-volumes.rst:24
msgid "List all the volumes."
msgstr ""
#: ../compute-manage-volumes.rst:25
msgid "volume-show"
msgstr ""
#: ../compute-manage-volumes.rst:26
msgid "Show details about a volume."
msgstr ""
#: ../compute-manage-volumes.rst:27
msgid "volume-snapshot-create"
msgstr ""
#: ../compute-manage-volumes.rst:28
msgid "Add a new snapshot."
msgstr ""
#: ../compute-manage-volumes.rst:29
msgid "volume-snapshot-delete"
msgstr ""
#: ../compute-manage-volumes.rst:30
msgid "Remove a snapshot."
msgstr ""
#: ../compute-manage-volumes.rst:31
msgid "volume-snapshot-list"
msgstr ""
#: ../compute-manage-volumes.rst:32
msgid "List all the snapshots."
msgstr ""
#: ../compute-manage-volumes.rst:33
msgid "volume-snapshot-show"
msgstr ""
#: ../compute-manage-volumes.rst:34
msgid "Show details about a snapshot."
msgstr ""
#: ../compute-manage-volumes.rst:35
msgid "volume-type-create"
msgstr ""
#: ../compute-manage-volumes.rst:36
msgid "Create a new volume type."
msgstr ""
#: ../compute-manage-volumes.rst:37
msgid "volume-type-delete"
msgstr ""
#: ../compute-manage-volumes.rst:38
msgid "Delete a specific flavor"
msgstr ""
#: ../compute-manage-volumes.rst:39
msgid "volume-type-list"
msgstr ""
#: ../compute-manage-volumes.rst:40
msgid "Print a list of available 'volume types'."
msgstr ""
#: ../compute-manage-volumes.rst:41
msgid "volume-update"
msgstr ""
#: ../compute-manage-volumes.rst:42
msgid "Update an attached volume."
msgstr ""
#: ../compute-manage-volumes.rst:46
msgid "For example, to list IDs and names of Compute volumes, run:"
msgstr ""
#: ../compute-networking-nova.rst:0
msgid "Description of IPv6 configuration options"
msgstr ""
#: ../compute-networking-nova.rst:0
msgid "Description of metadata configuration options"
msgstr ""
#: ../compute-networking-nova.rst:3
msgid "Networking with nova-network"
msgstr ""
#: ../compute-networking-nova.rst:5
msgid ""
"Understanding the networking configuration options helps you design the best "
"configuration for your Compute instances."
msgstr ""
#: ../compute-networking-nova.rst:8
msgid ""
"You can choose to either install and configure nova-network or use the "
"OpenStack Networking service (neutron). This section contains a brief "
"overview of nova-network. For more information about OpenStack Networking, "
"see :ref:`networking`."
msgstr ""
#: ../compute-networking-nova.rst:14
msgid "Networking concepts"
msgstr ""
#: ../compute-networking-nova.rst:16
msgid ""
"Compute assigns a private IP address to each VM instance. Compute makes a "
"distinction between fixed IPs and floating IP. Fixed IPs are IP addresses "
"that are assigned to an instance on creation and stay the same until the "
"instance is explicitly terminated. Floating IPs are addresses that can be "
"dynamically associated with an instance. A floating IP address can be "
"disassociated and associated with another instance at any time. A user can "
"reserve a floating IP for their project."
msgstr ""
#: ../compute-networking-nova.rst:26
msgid ""
"Currently, Compute with ``nova-network`` only supports Linux bridge "
"networking that allows virtual interfaces to connect to the outside network "
"through the physical interface."
msgstr ""
#: ../compute-networking-nova.rst:30
msgid ""
"The network controller with ``nova-network`` provides virtual networks to "
"enable compute servers to interact with each other and with the public "
"network. Compute with ``nova-network`` supports the following network modes, "
"which are implemented as Network Manager types:"
msgstr ""
#: ../compute-networking-nova.rst:36
msgid ""
"In this mode, a network administrator specifies a subnet. IP addresses for "
"VM instances are assigned from the subnet, and then injected into the image "
"on launch. Each instance receives a fixed IP address from the pool of "
"available addresses. A system administrator must create the Linux networking "
"bridge (typically named ``br100``, although this is configurable) on the "
"systems running the ``nova-network`` service. All instances of the system "
"are attached to the same bridge, which is configured manually by the network "
"administrator."
msgstr ""
#: ../compute-networking-nova.rst:44
msgid "Flat Network Manager"
msgstr ""
#: ../compute-networking-nova.rst:48
msgid ""
"Configuration injection currently only works on Linux-style systems that "
"keep networking configuration in :file:`/etc/network/interfaces`."
msgstr ""
#: ../compute-networking-nova.rst:53
msgid ""
"In this mode, OpenStack starts a DHCP server (dnsmasq) to allocate IP "
"addresses to VM instances from the specified subnet, in addition to manually "
"configuring the networking bridge. IP addresses for VM instances are "
"assigned from a subnet specified by the network administrator."
msgstr ""
#: ../compute-networking-nova.rst:59
msgid ""
"Like flat mode, all instances are attached to a single bridge on the compute "
"node. Additionally, a DHCP server configures instances depending on single-/"
"multi-host mode, alongside each ``nova-network``. In this mode, Compute does "
"a bit more configuration. It attempts to bridge into an Ethernet device "
"(``flat_interface``, eth0 by default). For every instance, Compute allocates "
"a fixed IP address and configures dnsmasq with the MAC ID and IP address for "
"the VM. Dnsmasq does not take part in the IP address allocation process, it "
"only hands out IPs according to the mapping done by Compute. Instances "
"receive their fixed IPs with the :command:`dhcpdiscover` command. These IPs "
"are not assigned to any of the host's network interfaces, only to the guest-"
"side interface for the VM."
msgstr ""
#: ../compute-networking-nova.rst:72
msgid ""
"In any setup with flat networking, the hosts providing the ``nova-network`` "
"service are responsible for forwarding traffic from the private network. "
"They also run and configure dnsmasq as a DHCP server listening on this "
"bridge, usually on IP address 10.0.0.1 (see :ref:`compute-dnsmasq`). Compute "
"can determine the NAT entries for each network, although sometimes NAT is "
"not used, such as when the network has been configured with all public IPs, "
"or if a hardware router is used (which is a high availability option). In "
"this case, hosts need to have ``br100`` configured and physically connected "
"to any other nodes that are hosting VMs. You must set the "
"``flat_network_bridge`` option or create networks with the bridge parameter "
"in order to avoid raising an error. Compute nodes have iptables or ebtables "
"entries created for each project and instance to protect against MAC ID or "
"IP address spoofing and ARP poisoning."
msgstr ""
#: ../compute-networking-nova.rst:86
msgid "Flat DHCP Network Manager"
msgstr ""
#: ../compute-networking-nova.rst:90
msgid ""
"In single-host Flat DHCP mode you will be able to ping VMs through their "
"fixed IP from the ``nova-network`` node, but you cannot ping them from the "
"compute nodes. This is expected behavior."
msgstr ""
#: ../compute-networking-nova.rst:96
msgid ""
"This is the default mode for OpenStack Compute. In this mode, Compute "
"creates a VLAN and bridge for each tenant. For multiple-machine "
"installations, the VLAN Network Mode requires a switch that supports VLAN "
"tagging (IEEE 802.1Q). The tenant gets a range of private IPs that are only "
"accessible from inside the VLAN. In order for a user to access the instances "
"in their tenant, a special VPN instance (code named cloudpipe) needs to be "
"created. Compute generates a certificate and key for the user to access the "
"VPN and starts the VPN automatically. It provides a private network segment "
"for each tenant's instances that can be accessed through a dedicated VPN "
"connection from the internet. In this mode, each tenant gets its own VLAN, "
"Linux networking bridge, and subnet."
msgstr ""
#: ../compute-networking-nova.rst:109
msgid ""
"The subnets are specified by the network administrator, and are assigned "
"dynamically to a tenant when required. A DHCP server is started for each "
"VLAN to pass out IP addresses to VM instances from the subnet assigned to "
"the tenant. All instances belonging to one tenant are bridged into the same "
"VLAN for that tenant. OpenStack Compute creates the Linux networking bridges "
"and VLANs when required."
msgstr ""
#: ../compute-networking-nova.rst:115
msgid "VLAN Network Manager"
msgstr ""
#: ../compute-networking-nova.rst:117
msgid ""
"These network managers can co-exist in a cloud system. However, because you "
"cannot select the type of network for a given tenant, you cannot configure "
"multiple network types in a single Compute installation."
msgstr ""
#: ../compute-networking-nova.rst:121
msgid ""
"All network managers configure the network using network drivers. For "
"example, the Linux L3 driver (``l3.py`` and ``linux_net.py``), which makes "
"use of ``iptables``, ``route`` and other network management facilities, and "
"the libvirt `network filtering facilities <http://libvirt.org/formatnwfilter."
"html>`__. The driver is not tied to any particular network manager; all "
"network managers use the same driver. The driver usually initializes only "
"when the first VM lands on this host node."
msgstr ""
#: ../compute-networking-nova.rst:130
msgid ""
"All network managers operate in either single-host or multi-host mode. This "
"choice greatly influences the network configuration. In single-host mode, a "
"single ``nova-network`` service provides a default gateway for VMs and hosts "
"a single DHCP server (dnsmasq). In multi-host mode, each compute node runs "
"its own ``nova-network`` service. In both cases, all traffic between VMs and "
"the internet flows through ``nova-network``. Each mode has benefits and "
"drawbacks. For more on this, see the Network Topology section in the "
"`OpenStack Operations Guide <http://docs.openstack.org/openstack-ops/content/"
"network_design.html#network_topology>`__."
msgstr ""
#: ../compute-networking-nova.rst:140
msgid ""
"All networking options require network connectivity to be already set up "
"between OpenStack physical nodes. OpenStack does not configure any physical "
"network interfaces. All network managers automatically create VM virtual "
"interfaces. Some network managers can also create network bridges such as "
"``br100``."
msgstr ""
#: ../compute-networking-nova.rst:146
msgid ""
"The internal network interface is used for communication with VMs. The "
"interface should not have an IP address attached to it before OpenStack "
"installation, it serves only as a fabric where the actual endpoints are VMs "
"and dnsmasq. Additionally, the internal network interface must be in "
"``promiscuous`` mode, so that it can receive packets whose target MAC "
"address is the guest VM, not the host."
msgstr ""
#: ../compute-networking-nova.rst:153
msgid ""
"All machines must have a public and internal network interface (controlled "
"by these options: ``public_interface`` for the public interface, and "
"``flat_interface`` and ``vlan_interface`` for the internal interface with "
"flat or VLAN managers). This guide refers to the public network as the "
"external network and the private network as the internal or tenant network."
msgstr ""
#: ../compute-networking-nova.rst:160
msgid ""
"For flat and flat DHCP modes, use the :command:`nova network-create` command "
"to create a network:"
msgstr ""
#: ../compute-networking-nova.rst:169
msgid "specifies the network subnet."
msgstr ""
#: ../compute-networking-nova.rst:170
msgid ""
"specifies a range of fixed IP addresses to allocate, and can be a subset of "
"the ``--fixed-range-v4`` argument."
msgstr ""
#: ../compute-networking-nova.rst:173
msgid ""
"specifies the bridge device to which this network is connected on every "
"compute node."
msgstr ""
#: ../compute-networking-nova.rst:174
msgid "This example uses the following parameters:"
msgstr ""
#: ../compute-networking-nova.rst:179
msgid "DHCP server: dnsmasq"
msgstr ""
#: ../compute-networking-nova.rst:181
msgid ""
"The Compute service uses `dnsmasq <http://www.thekelleys.org.uk/dnsmasq/doc."
"html>`__ as the DHCP server when using either Flat DHCP Network Manager or "
"VLAN Network Manager. For Compute to operate in IPv4/IPv6 dual-stack mode, "
"use at least dnsmasq v2.63. The ``nova-network`` service is responsible for "
"starting dnsmasq processes."
msgstr ""
#: ../compute-networking-nova.rst:188
msgid ""
"The behavior of dnsmasq can be customized by creating a dnsmasq "
"configuration file. Specify the configuration file using the "
"``dnsmasq_config_file`` configuration option:"
msgstr ""
#: ../compute-networking-nova.rst:196
msgid ""
"For more information about creating a dnsmasq configuration file, see the "
"`OpenStack Configuration Reference <http://docs.openstack.org/liberty/config-"
"reference/content/>`__, and `the dnsmasq documentation <http://www."
"thekelleys.org.uk/dnsmasq/docs/dnsmasq.conf.example>`__."
msgstr ""
#: ../compute-networking-nova.rst:202
msgid ""
"Dnsmasq also acts as a caching DNS server for instances. You can specify the "
"DNS server that dnsmasq uses by setting the ``dns_server`` configuration "
"option in :file:`/etc/nova/nova.conf`. This example configures dnsmasq to "
"use Google's public DNS server:"
msgstr ""
#: ../compute-networking-nova.rst:211
msgid ""
"Dnsmasq logs to syslog (typically :file:`/var/log/syslog` or :file:`/var/log/"
"messages`, depending on Linux distribution). Logs can be useful for "
"troubleshooting, especially in a situation where VM instances boot "
"successfully but are not reachable over the network."
msgstr ""
#: ../compute-networking-nova.rst:216
msgid ""
"Administrators can specify the starting point IP address to reserve with the "
"DHCP server (in the format n.n.n.n) with this command:"
msgstr ""
#: ../compute-networking-nova.rst:223
msgid ""
"This reservation only affects which IP address the VMs start at, not the "
"fixed IP addresses that ``nova-network`` places on the bridges."
msgstr ""
#: ../compute-networking-nova.rst:228
msgid "Configure Compute to use IPv6 addresses"
msgstr ""
#: ../compute-networking-nova.rst:230
msgid ""
"If you are using OpenStack Compute with ``nova-network``, you can put "
"Compute into dual-stack mode, so that it uses both IPv4 and IPv6 addresses "
"for communication. In dual-stack mode, instances can acquire their IPv6 "
"global unicast addresses by using a stateless address auto-configuration "
"mechanism [RFC 4862/2462]. IPv4/IPv6 dual-stack mode works with both "
"``VlanManager`` and ``FlatDHCPManager`` networking modes."
msgstr ""
#: ../compute-networking-nova.rst:238
msgid ""
"In ``VlanManager`` networking mode, each project uses a different 64-bit "
"global routing prefix. In ``FlatDHCPManager`` mode, all instances use one 64-"
"bit global routing prefix."
msgstr ""
#: ../compute-networking-nova.rst:242
msgid ""
"This configuration was tested with virtual machine images that have an IPv6 "
"stateless address auto-configuration capability. This capability is required "
"for any VM to run with an IPv6 address. You must use an EUI-64 address for "
"stateless address auto-configuration. Each node that executes a ``nova-*`` "
"service must have ``python-netaddr`` and ``radvd`` installed."
msgstr ""
#: ../compute-networking-nova.rst:249
msgid "**Switch into IPv4/IPv6 dual-stack mode**"
msgstr ""
#: ../compute-networking-nova.rst:251
msgid "For every node running a ``nova-*`` service, install python-netaddr:"
msgstr ""
#: ../compute-networking-nova.rst:257
msgid ""
"For every node running ``nova-network``, install ``radvd`` and configure "
"IPv6 networking:"
msgstr ""
#: ../compute-networking-nova.rst:266
msgid ""
"On all nodes, edit the :file:`nova.conf` file and specify ``use_ipv6 = "
"True``."
msgstr ""
#: ../compute-networking-nova.rst:269
msgid "Restart all ``nova-*`` services."
msgstr ""
#: ../compute-networking-nova.rst:271
msgid "**IPv6 configuration options**"
msgstr ""
#: ../compute-networking-nova.rst:273
msgid ""
"You can use the following options with the :command:`nova network-create` "
"command:"
msgstr ""
#: ../compute-networking-nova.rst:276
msgid ""
"Add a fixed range for IPv6 addresses to the :command:`nova network-create` "
"command. Specify ``public`` or ``private`` after the ``network-create`` "
"parameter."
msgstr ""
#: ../compute-networking-nova.rst:285
msgid ""
"Set the IPv6 global routing prefix by using the ``--fixed_range_v6`` "
"parameter. The default value for the parameter is ``fd00::/48``."
msgstr ""
#: ../compute-networking-nova.rst:289
msgid ""
"When you use ``FlatDHCPManager``, the command uses the original ``--"
"fixed_range_v6`` value. For example:"
msgstr ""
#: ../compute-networking-nova.rst:297
msgid ""
"When you use ``VlanManager``, the command increments the subnet ID to create "
"subnet prefixes. Guest VMs use this prefix to generate their IPv6 global "
"unicast addresses. For example:"
msgstr ""
# #-#-#-#-# compute-networking-nova.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# compute-remote-console-access.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# compute-security.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# objectstorage-troubleshoot.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../compute-networking-nova.rst:309 ../compute-networking-nova.rst:506
#: ../compute-remote-console-access.rst:152 ../compute-security.rst:102
#: ../objectstorage-troubleshoot.rst:61
msgid "Configuration option = Default value"
msgstr ""
#: ../compute-networking-nova.rst:311 ../compute-networking-nova.rst:508
msgid "[DEFAULT]"
msgstr ""
#: ../compute-networking-nova.rst:313
msgid "fixed_range_v6 = fd00::/48"
msgstr ""
#: ../compute-networking-nova.rst:314
msgid "(StrOpt) Fixed IPv6 address block"
msgstr ""
#: ../compute-networking-nova.rst:315
msgid "gateway_v6 = None"
msgstr ""
#: ../compute-networking-nova.rst:316
msgid "(StrOpt) Default IPv6 gateway"
msgstr ""
#: ../compute-networking-nova.rst:317
msgid "ipv6_backend = rfc2462"
msgstr ""
#: ../compute-networking-nova.rst:318
msgid "(StrOpt) Backend to use for IPv6 generation"
msgstr ""
#: ../compute-networking-nova.rst:319
msgid "use_ipv6 = False"
msgstr ""
#: ../compute-networking-nova.rst:320
msgid "(BoolOpt) Use IPv6"
msgstr ""
#: ../compute-networking-nova.rst:323
msgid "Metadata service"
msgstr ""
#: ../compute-networking-nova.rst:325
msgid ""
"Compute uses a metadata service for virtual machine instances to retrieve "
"instance-specific data. Instances access the metadata service at "
"``http://169.254.169.254``. The metadata service supports two sets of APIs: "
"an OpenStack metadata API and an EC2-compatible API. Both APIs are versioned "
"by date."
msgstr ""
#: ../compute-networking-nova.rst:331
msgid ""
"To retrieve a list of supported versions for the OpenStack metadata API, "
"make a GET request to ``http://169.254.169.254/openstack``:"
msgstr ""
#: ../compute-networking-nova.rst:342
msgid ""
"To list supported versions for the EC2-compatible metadata API, make a GET "
"request to ``http://169.254.169.254``:"
msgstr ""
#: ../compute-networking-nova.rst:359
msgid ""
"If you write a consumer for one of these APIs, always attempt to access the "
"most recent API version supported by your consumer first, then fall back to "
"an earlier version if the most recent one is not available."
msgstr ""
#: ../compute-networking-nova.rst:363
msgid ""
"Metadata from the OpenStack API is distributed in JSON format. To retrieve "
"the metadata, make a GET request to ``http://169.254.169.254/"
"openstack/2012-08-10/meta_data.json``:"
msgstr ""
#: ../compute-networking-nova.rst:392
msgid ""
"Instances also retrieve user data (passed as the ``user_data`` parameter in "
"the API call or by the ``--user_data`` flag in the :command:`nova boot` "
"command) through the metadata service, by making a GET request to "
"``http://169.254.169.254/openstack/2012-08-10/user_data``:"
msgstr ""
#: ../compute-networking-nova.rst:403
msgid ""
"The metadata service has an API that is compatible with version 2009-04-04 "
"of the `Amazon EC2 metadata service <http://docs.amazonwebservices.com/"
"AWSEC2/2009-04-04/UserGuide/AESDG-chapter-instancedata.html>`__. This means "
"that virtual machine images designed for EC2 will work properly with "
"OpenStack."
msgstr ""
#: ../compute-networking-nova.rst:409
msgid ""
"The EC2 API exposes a separate URL for each metadata element. Retrieve a "
"listing of these elements by making a GET query to "
"``http://169.254.169.254/2009-04-04/meta-data/``:"
msgstr ""
#: ../compute-networking-nova.rst:450
msgid ""
"Instances can retrieve the public SSH key (identified by keypair name when a "
"user requests a new instance) by making a GET request to "
"``http://169.254.169.254/2009-04-04/meta-data/public-keys/0/openssh-key``:"
msgstr ""
#: ../compute-networking-nova.rst:462
msgid ""
"Instances can retrieve user data by making a GET request to "
"``http://169.254.169.254/2009-04-04/user-data``:"
msgstr ""
#: ../compute-networking-nova.rst:471
msgid ""
"The metadata service is implemented by either the nova-api service or the "
"nova-api-metadata service. Note that the nova-api-metadata service is "
"generally only used when running in multi-host mode, as it retrieves "
"instance-specific metadata. If you are running the nova-api service, you "
"must have ``metadata`` as one of the elements listed in the ``enabled_apis`` "
"configuration option in :file:`/etc/nova/nova.conf`. The default "
"``enabled_apis`` configuration setting includes the metadata service, so you "
"do not need to modify it."
msgstr ""
#: ../compute-networking-nova.rst:480
msgid ""
"Hosts access the service at ``169.254.169.254:80``, and this is translated "
"to ``metadata_host:metadata_port`` by an iptables rule established by the "
"``nova-network`` service. In multi-host mode, you can set ``metadata_host`` "
"to ``127.0.0.1``."
msgstr ""
#: ../compute-networking-nova.rst:485
msgid ""
"For instances to reach the metadata service, the ``nova-network`` service "
"must configure iptables to NAT port ``80`` of the ``169.254.169.254`` "
"address to the IP address specified in ``metadata_host`` (this defaults to ``"
"$my_ip``, which is the IP address of the ``nova-network`` service) and port "
"specified in ``metadata_port`` (which defaults to ``8775``) in :file:`/etc/"
"nova/nova.conf`."
msgstr ""
#: ../compute-networking-nova.rst:494
msgid ""
"The ``metadata_host`` configuration option must be an IP address, not a host "
"name."
msgstr ""
#: ../compute-networking-nova.rst:497
msgid ""
"The default Compute service settings assume that ``nova-network`` and ``nova-"
"api`` are running on the same host. If this is not the case, in the :file:`/"
"etc/nova/nova.conf` file on the host running ``nova-network``, set the "
"``metadata_host`` configuration option to the IP address of the host where "
"nova-api is running."
msgstr ""
#: ../compute-networking-nova.rst:510
msgid "metadata_cache_expiration = 15"
msgstr ""
#: ../compute-networking-nova.rst:511
msgid ""
"(IntOpt) Time in seconds to cache metadata; 0 to disable metadata caching "
"entirely (not recommended). Increasing this should improve response times of "
"the metadata API when under heavy load. Higher values may increase memory "
"usage and result in longer times for host metadata changes to take effect."
msgstr ""
#: ../compute-networking-nova.rst:516
msgid "metadata_host = $my_ip"
msgstr ""
#: ../compute-networking-nova.rst:517
msgid "(StrOpt) The IP address for the metadata API server"
msgstr ""
#: ../compute-networking-nova.rst:518
msgid "metadata_listen = 0.0.0.0"
msgstr ""
#: ../compute-networking-nova.rst:519
msgid "(StrOpt) The IP address on which the metadata API will listen."
msgstr ""
#: ../compute-networking-nova.rst:520
msgid "metadata_listen_port = 8775"
msgstr ""
#: ../compute-networking-nova.rst:521
msgid "(IntOpt) The port on which the metadata API will listen."
msgstr ""
#: ../compute-networking-nova.rst:522
msgid "metadata_manager = nova.api.manager.MetadataManager"
msgstr ""
#: ../compute-networking-nova.rst:523
msgid "(StrOpt) OpenStack metadata service manager"
msgstr ""
#: ../compute-networking-nova.rst:524
msgid "metadata_port = 8775"
msgstr ""
#: ../compute-networking-nova.rst:525
msgid "(IntOpt) The port for the metadata API port"
msgstr ""
#: ../compute-networking-nova.rst:526
msgid "metadata_workers = None"
msgstr ""
#: ../compute-networking-nova.rst:527
msgid ""
"(IntOpt) Number of workers for metadata service. The default will be the "
"number of CPUs available."
msgstr ""
#: ../compute-networking-nova.rst:528
msgid ""
"vendordata_driver = nova.api.metadata.vendordata_json.JsonFileVendorData"
msgstr ""
#: ../compute-networking-nova.rst:529
msgid "(StrOpt) Driver to use for vendor data"
msgstr ""
#: ../compute-networking-nova.rst:530
msgid "vendordata_jsonfile_path = None"
msgstr ""
#: ../compute-networking-nova.rst:531
msgid "(StrOpt) File to load JSON formatted vendor data from"
msgstr ""
#: ../compute-networking-nova.rst:534
msgid "Enable ping and SSH on VMs"
msgstr ""
#: ../compute-networking-nova.rst:536
msgid ""
"You need to enable ``ping`` and ``ssh`` on your VMs for network access. This "
"can be done with either the :command:`nova` or :command:`euca2ools` commands."
msgstr ""
#: ../compute-networking-nova.rst:542
msgid ""
"Run these commands as root only if the credentials used to interact with "
"nova-api are in :file:`/root/.bashrc`. If the EC2 credentials in the :file:`."
"bashrc` file are for an unprivileged user, you must run these commands as "
"that user instead."
msgstr ""
#: ../compute-networking-nova.rst:547
msgid "Enable ping and SSH with :command:`nova` commands:"
msgstr ""
#: ../compute-networking-nova.rst:554
msgid "Enable ping and SSH with ``euca2ools``:"
msgstr ""
#: ../compute-networking-nova.rst:561
msgid ""
"If you have run these commands and still cannot ping or SSH your instances, "
"check the number of running ``dnsmasq`` processes, there should be two. If "
"not, kill the processes and restart the service with these commands:"
msgstr ""
#: ../compute-networking-nova.rst:572
msgid "Configure public (floating) IP addresses"
msgstr ""
#: ../compute-networking-nova.rst:574
msgid ""
"This section describes how to configure floating IP addresses with ``nova-"
"network``. For information about doing this with OpenStack Networking, see :"
"ref:`L3-routing-and-NAT`."
msgstr ""
#: ../compute-networking-nova.rst:579
msgid "Private and public IP addresses"
msgstr ""
#: ../compute-networking-nova.rst:581
msgid ""
"In this section, the term floating IP address is used to refer to an IP "
"address, usually public, that you can dynamically add to a running virtual "
"instance."
msgstr ""
#: ../compute-networking-nova.rst:585
msgid ""
"Every virtual instance is automatically assigned a private IP address. You "
"can choose to assign a public (or floating) IP address instead. OpenStack "
"Compute uses network address translation (NAT) to assign floating IPs to "
"virtual instances."
msgstr ""
#: ../compute-networking-nova.rst:590
msgid ""
"To be able to assign a floating IP address, edit the :file:`/etc/nova/nova."
"conf` file to specify which interface the ``nova-network`` service should "
"bind public IP addresses to:"
msgstr ""
#: ../compute-networking-nova.rst:598
msgid ""
"If you make changes to the :file:`/etc/nova/nova.conf` file while the ``nova-"
"network`` service is running, you will need to restart the service to pick "
"up the changes."
msgstr ""
#: ../compute-networking-nova.rst:604
msgid ""
"Floating IPs are implemented by using a source NAT (SNAT rule in iptables), "
"so security groups can sometimes display inconsistent behavior if VMs use "
"their floating IP to communicate with other VMs, particularly on the same "
"physical host. Traffic from VM to VM across the fixed network does not have "
"this issue, and so this is the recommended setup. To ensure that traffic "
"does not get SNATed to the floating range, explicitly set:"
msgstr ""
#: ../compute-networking-nova.rst:616
msgid ""
"The ``x.x.x.x/y`` value specifies the range of floating IPs for each pool of "
"floating IPs that you define. This configuration is also required if the VMs "
"in the source group have floating IPs."
msgstr ""
#: ../compute-networking-nova.rst:621
msgid "Enable IP forwarding"
msgstr ""
#: ../compute-networking-nova.rst:623
msgid ""
"IP forwarding is disabled by default on most Linux distributions. You will "
"need to enable it in order to use floating IPs."
msgstr ""
#: ../compute-networking-nova.rst:628
msgid ""
"IP forwarding only needs to be enabled on the nodes that run ``nova-"
"network``. However, you will need to enable it on all compute nodes if you "
"use ``multi_host`` mode."
msgstr ""
#: ../compute-networking-nova.rst:632
msgid "To check if IP forwarding is enabled, run:"
msgstr ""
#: ../compute-networking-nova.rst:639 ../compute-networking-nova.rst:654
msgid "Alternatively, run:"
msgstr ""
#: ../compute-networking-nova.rst:646
msgid "In these examples, IP forwarding is disabled."
msgstr ""
#: ../compute-networking-nova.rst:648
msgid "To enable IP forwarding dynamically, run:"
msgstr ""
#: ../compute-networking-nova.rst:660
msgid ""
"To make the changes permanent, edit the ``/etc/sysctl.conf`` file and update "
"the IP forwarding setting:"
msgstr ""
#: ../compute-networking-nova.rst:667
msgid "Save the file and run this command to apply the changes:"
msgstr ""
#: ../compute-networking-nova.rst:673
msgid "You can also apply the changes by restarting the network service:"
msgstr ""
#: ../compute-networking-nova.rst:675
msgid "on Ubuntu, Debian:"
msgstr ""
#: ../compute-networking-nova.rst:681
msgid "on RHEL, Fedora, CentOS, openSUSE and SLES:"
msgstr ""
#: ../compute-networking-nova.rst:688
msgid "Create a list of available floating IP addresses"
msgstr ""
#: ../compute-networking-nova.rst:690
msgid ""
"Compute maintains a list of floating IP addresses that are available for "
"assigning to instances. Use the :command:`nova-manage floating` commands to "
"perform floating IP operations:"
msgstr ""
#: ../compute-networking-nova.rst:694
msgid "Add entries to the list:"
msgstr ""
#: ../compute-networking-nova.rst:700
msgid "List the floating IP addresses in the pool:"
msgstr ""
#: ../compute-networking-nova.rst:706
msgid "Create specific floating IPs for either a single address or a subnet:"
msgstr ""
#: ../compute-networking-nova.rst:713
msgid ""
"Remove floating IP addresses using the same parameters as the create command:"
msgstr ""
#: ../compute-networking-nova.rst:720
msgid ""
"For more information about how administrators can associate floating IPs "
"with instances, see `Manage IP addresses <http://docs.openstack.org/user-"
"guide-admin/cli_admin_manage_ip_addresses.html>`__ in the OpenStack Admin "
"User Guide."
msgstr ""
#: ../compute-networking-nova.rst:726
msgid "Automatically add floating IPs"
msgstr ""
#: ../compute-networking-nova.rst:728
msgid ""
"You can configure ``nova-network`` to automatically allocate and assign a "
"floating IP address to virtual instances when they are launched. Add this "
"line to the :file:`/etc/nova/nova.conf` file:"
msgstr ""
#: ../compute-networking-nova.rst:736
msgid "Save the file, and restart ``nova-network``"
msgstr ""
#: ../compute-networking-nova.rst:740
msgid ""
"If this option is enabled, but all floating IP addresses have already been "
"allocated, the :command:`nova boot` command will fail."
msgstr ""
#: ../compute-networking-nova.rst:744
msgid "Remove a network from a project"
msgstr ""
#: ../compute-networking-nova.rst:746
msgid ""
"You cannot delete a network that has been associated to a project. This "
"section describes the procedure for dissociating it so that it can be "
"deleted."
msgstr ""
#: ../compute-networking-nova.rst:750
msgid ""
"In order to disassociate the network, you will need the ID of the project it "
"has been associated to. To get the project ID, you will need to be an "
"administrator."
msgstr ""
#: ../compute-networking-nova.rst:754
msgid ""
"Disassociate the network from the project using the :command:`scrub` "
"command, with the project ID as the final parameter:"
msgstr ""
#: ../compute-networking-nova.rst:762
msgid "Multiple interfaces for instances (multinic)"
msgstr ""
#: ../compute-networking-nova.rst:764
msgid ""
"The multinic feature allows you to use more than one interface with your "
"instances. This is useful in several scenarios:"
msgstr ""
#: ../compute-networking-nova.rst:767
msgid "SSL Configurations (VIPs)"
msgstr ""
#: ../compute-networking-nova.rst:769
msgid "Services failover/HA"
msgstr ""
#: ../compute-networking-nova.rst:771
msgid "Bandwidth Allocation"
msgstr ""
#: ../compute-networking-nova.rst:773
msgid "Administrative/Public access to your instances"
msgstr ""
#: ../compute-networking-nova.rst:775
msgid ""
"Each VIP represents a separate network with its own IP block. Every network "
"mode has its own set of changes regarding multinic usage:"
msgstr ""
#: ../compute-networking-nova.rst:778
msgid "|multinic flat manager|"
msgstr ""
#: ../compute-networking-nova.rst:780
msgid "|multinic flatdhcp manager|"
msgstr ""
#: ../compute-networking-nova.rst:782
msgid "|multinic VLAN manager|"
msgstr ""
#: ../compute-networking-nova.rst:785
msgid "Using multinic"
msgstr ""
#: ../compute-networking-nova.rst:787
msgid ""
"In order to use multinic, create two networks, and attach them to the tenant "
"(named ``project`` on the command line):"
msgstr ""
#: ../compute-networking-nova.rst:795
msgid ""
"Each new instance will now receive two IP addresses from their respective "
"DHCP servers:"
msgstr ""
#: ../compute-networking-nova.rst:809
msgid ""
"Make sure you start the second interface on the instance, or it won't be "
"reachable through the second IP."
msgstr ""
#: ../compute-networking-nova.rst:812
msgid ""
"This example demonstrates how to set up the interfaces within the instance. "
"This is the configuration that needs to be applied inside the image."
msgstr ""
#: ../compute-networking-nova.rst:816
msgid "Edit the :file:`/etc/network/interfaces` file:"
msgstr ""
#: ../compute-networking-nova.rst:830
msgid ""
"If the Virtual Network Service Neutron is installed, you can specify the "
"networks to attach to the interfaces by using the ``--nic`` flag with the :"
"command:`nova` command:"
msgstr ""
#: ../compute-networking-nova.rst:839
msgid "Troubleshooting Networking"
msgstr ""
#: ../compute-networking-nova.rst:842
msgid "Cannot reach floating IPs"
msgstr ""
#: ../compute-networking-nova.rst:844
msgid "If you cannot reach your instances through the floating IP address:"
msgstr ""
#: ../compute-networking-nova.rst:846
msgid ""
"Check that the default security group allows ICMP (ping) and SSH (port 22), "
"so that you can reach the instances:"
msgstr ""
#: ../compute-networking-nova.rst:859
msgid ""
"Check the NAT rules have been added to iptables on the node that is running "
"``nova-network``:"
msgstr ""
#: ../compute-networking-nova.rst:868
msgid ""
"Check that the public address (`68.99.26.170 <68.99.26.170>`__ in this "
"example), has been added to your public interface. You should see the "
"address in the listing when you use the :command:`ip addr` command:"
msgstr ""
#: ../compute-networking-nova.rst:884
msgid ""
"You cannot use ``SSH`` to access an instance with a public IP from within "
"the same server because the routing configuration does not allow it."
msgstr ""
#: ../compute-networking-nova.rst:888
msgid ""
"Use ``tcpdump`` to identify if packets are being routed to the inbound "
"interface on the compute host. If the packets are reaching the compute hosts "
"but the connection is failing, the issue may be that the packet is being "
"dropped by reverse path filtering. Try disabling reverse-path filtering on "
"the inbound interface. For example, if the inbound interface is ``eth2``, "
"run:"
msgstr ""
#: ../compute-networking-nova.rst:899
msgid ""
"If this solves the problem, add the following line to :file:`/etc/sysctl."
"conf` so that the reverse-path filter is persistent:"
msgstr ""
#: ../compute-networking-nova.rst:907
msgid "Temporarily disable firewall"
msgstr ""
#: ../compute-networking-nova.rst:909
msgid ""
"To help debug networking issues with reaching VMs, you can disable the "
"firewall by setting this option in :file:`/etc/nova/nova.conf`:"
msgstr ""
#: ../compute-networking-nova.rst:916
msgid ""
"We strongly recommend you remove this line to re-enable the firewall once "
"your networking issues have been resolved."
msgstr ""
#: ../compute-networking-nova.rst:920
msgid "Packet loss from instances to nova-network server (VLANManager mode)"
msgstr ""
#: ../compute-networking-nova.rst:922
msgid ""
"If you can access your instances with ``SSH`` but the network to your "
"instance is slow, or if you find that running certain operations are slower "
"than they should be (for example, ``sudo``), packet loss could be occurring "
"on the connection to the instance."
msgstr ""
#: ../compute-networking-nova.rst:927
msgid ""
"Packet loss can be caused by Linux networking configuration settings related "
"to bridges. Certain settings can cause packets to be dropped between the "
"VLAN interface (for example, ``vlan100``) and the associated bridge "
"interface (for example, ``br100``) on the host running ``nova-network``."
msgstr ""
#: ../compute-networking-nova.rst:933
msgid ""
"One way to check whether this is the problem is to open three terminals and "
"run the following commands:"
msgstr ""
#: ../compute-networking-nova.rst:936
msgid ""
"In the first terminal, on the host running ``nova-network``, use ``tcpdump`` "
"on the VLAN interface to monitor DNS-related traffic (UDP, port 53). As "
"root, run:"
msgstr ""
#: ../compute-networking-nova.rst:944
msgid ""
"In the second terminal, also on the host running ``nova-network``, use "
"``tcpdump`` to monitor DNS-related traffic on the bridge interface. As root, "
"run:"
msgstr ""
#: ../compute-networking-nova.rst:952
msgid ""
"In the third terminal, use ``SSH`` to access the instance and generate DNS "
"requests by using the :command:`nslookup` command:"
msgstr ""
#: ../compute-networking-nova.rst:959
msgid ""
"The symptoms may be intermittent, so try running :command:`nslookup` "
"multiple times. If the network configuration is correct, the command should "
"return immediately each time. If it is not correct, the command hangs for "
"several seconds before returning."
msgstr ""
#: ../compute-networking-nova.rst:964
msgid ""
"If the :command:`nslookup` command sometimes hangs, and there are packets "
"that appear in the first terminal but not the second, then the problem may "
"be due to filtering done on the bridges. Try disabling filtering, and "
"running these commands as root:"
msgstr ""
#: ../compute-networking-nova.rst:975
msgid ""
"If this solves your issue, add the following line to :file:`/etc/sysctl."
"conf` so that the changes are persistent:"
msgstr ""
#: ../compute-networking-nova.rst:985
msgid "KVM: Network connectivity works initially, then fails"
msgstr ""
#: ../compute-networking-nova.rst:987
msgid ""
"With KVM hypervisors, instances running Ubuntu 12.04 sometimes lose network "
"connectivity after functioning properly for a period of time. Try loading "
"the ``vhost_net`` kernel module as a workaround for this issue (see `bug "
"#997978 <https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/997978/"
">`__) . This kernel module may also `improve network performance <http://www."
"linux-kvm.org/page/VhostNet>`__ on KVM. To load the kernel module:"
msgstr ""
#: ../compute-networking-nova.rst:1002
msgid "Loading the module has no effect on running instances."
msgstr ""
#: ../compute-node-down.rst:5
msgid "Recover from a failed compute node"
msgstr ""
#: ../compute-node-down.rst:7
msgid ""
"If Compute is deployed with a shared file system, and a node fails, there "
"are several methods to quickly recover from the failure. This section "
"discusses manual recovery."
msgstr ""
#: ../compute-node-down.rst:12
msgid "Evacuate instances"
msgstr ""
#: ../compute-node-down.rst:13
msgid ""
"If a cloud compute node fails due to a hardware malfunction or another "
"reason, you can evacuate instances using the :command:`nova evacuate` "
"command. See the `Admin User Guide <http://docs.openstack.org/user-guide-"
"admin/cli_nova_evacuate.html>`__."
msgstr ""
#: ../compute-node-down.rst:22
msgid "Manual recovery"
msgstr ""
#: ../compute-node-down.rst:23
msgid "Use this procedure to recover a failed compute node manually:"
msgstr ""
#: ../compute-node-down.rst:25
msgid ""
"Identify the VMs on the affected hosts. To do this, you can use a "
"combination of :command:`nova list` and :command:`nova show` or :command:"
"`euca-describe-instances`. For example, this command displays information "
"about instance i-000015b9 that is running on node np-rcc54:"
msgstr ""
#: ../compute-node-down.rst:35
msgid ""
"Query the Compute database to check the status of the host. This example "
"converts an EC2 API instance ID into an OpenStack ID. If you use the :"
"command:`nova` commands, you can substitute the ID directly (the output in "
"this example has been truncated):"
msgstr ""
#: ../compute-node-down.rst:63
msgid ""
"The credentials for your database can be found in :file:`/etc/nova.conf`."
msgstr ""
#: ../compute-node-down.rst:66
msgid ""
"Decide which compute host the affected VM should be moved to, and run this "
"database command to move the VM to the new host:"
msgstr ""
#: ../compute-node-down.rst:73
msgid ""
"If you are using a hypervisor that relies on libvirt (such as KVM), update "
"the :file:`libvirt.xml` file (found in :file:`/var/lib/nova/instances/"
"[instance ID]`) with these changes:"
msgstr ""
#: ../compute-node-down.rst:77
msgid ""
"Change the ``DHCPSERVER`` value to the host IP address of the new compute "
"host."
msgstr ""
#: ../compute-node-down.rst:80
msgid "Update the VNC IP to `0.0.0.0`"
msgstr ""
#: ../compute-node-down.rst:82
msgid "Reboot the VM:"
msgstr ""
#: ../compute-node-down.rst:88
msgid ""
"The database update and :command:`nova reboot` command should be all that is "
"required to recover a VM from a failed host. However, if you continue to "
"have problems try recreating the network filter configuration using "
"``virsh``, restarting the Compute services, or updating the ``vm_state`` and "
"``power_state`` in the Compute database."
msgstr ""
#: ../compute-node-down.rst:97
msgid "Recover from a UID/GID mismatch"
msgstr ""
#: ../compute-node-down.rst:99
msgid ""
"In some cases, files on your compute node can end up using the wrong UID or "
"GID. This can happen when running OpenStack Compute, using a shared file "
"system, or with an automated configuration tool. This can cause a number of "
"problems, such as inability to perform live migrations, or start virtual "
"machines."
msgstr ""
#: ../compute-node-down.rst:105
msgid "This procedure runs on nova-compute hosts, based on the KVM hypervisor:"
msgstr ""
#: ../compute-node-down.rst:107
msgid ""
"Set the nova UID in :file:`/etc/passwd` to the same number on all hosts (for "
"example, 112)."
msgstr ""
#: ../compute-node-down.rst:112
msgid ""
"Make sure you choose UIDs or GIDs that are not in use for other users or "
"groups."
msgstr ""
#: ../compute-node-down.rst:115
msgid ""
"Set the ``libvirt-qemu`` UID in :file:`/etc/passwd` to the same number on "
"all hosts (for example, 119)."
msgstr ""
#: ../compute-node-down.rst:118
msgid ""
"Set the ``nova`` group in :file:`/etc/group` file to the same number on all "
"hosts (for example, 120)."
msgstr ""
#: ../compute-node-down.rst:121
msgid ""
"Set the ``libvirtd`` group in :file:`/etc/group` file to the same number on "
"all hosts (for example, 119)."
msgstr ""
#: ../compute-node-down.rst:124
msgid "Stop the services on the compute node."
msgstr ""
#: ../compute-node-down.rst:126
msgid "Change all the files owned by user or group nova. For example:"
msgstr ""
#: ../compute-node-down.rst:134
msgid "Repeat all steps for the :file:`libvirt-qemu` files, if required."
msgstr ""
#: ../compute-node-down.rst:136
msgid "Restart the services."
msgstr ""
#: ../compute-node-down.rst:138
msgid ""
"Run the :command:`find` command to verify that all files use the correct "
"identifiers."
msgstr ""
#: ../compute-node-down.rst:144
msgid "Recover cloud after disaster"
msgstr ""
#: ../compute-node-down.rst:146
msgid ""
"This section covers procedures for managing your cloud after a disaster, and "
"backing up persistent storage volumes. Backups are mandatory, even outside "
"of disaster scenarios."
msgstr ""
#: ../compute-node-down.rst:150
msgid ""
"For a definition of a disaster recovery plan (DRP), see `http://en.wikipedia."
"org/wiki/Disaster\\_Recovery\\_Plan <http://en.wikipedia.org/wiki/"
"Disaster_Recovery_Plan>`_."
msgstr ""
#: ../compute-node-down.rst:153
msgid ""
"A disaster could happen to several components of your architecture (for "
"example, a disk crash, network loss, or a power failure). In this example, "
"the following components are configured:"
msgstr ""
#: ../compute-node-down.rst:157
msgid "A cloud controller (nova-api, nova-objectstore, nova-network)"
msgstr ""
#: ../compute-node-down.rst:159
msgid "A compute node (nova-compute)"
msgstr ""
#: ../compute-node-down.rst:161
msgid ""
"A storage area network (SAN) used by OpenStack Block Storage (cinder-volumes)"
msgstr ""
#: ../compute-node-down.rst:164
msgid ""
"The worst disaster for a cloud is power loss, which applies to all three "
"components. Before a power loss:"
msgstr ""
#: ../compute-node-down.rst:167
msgid ""
"Create an active iSCSI session from the SAN to the cloud controller (used "
"for the ``cinder-volumes`` LVM's VG)."
msgstr ""
#: ../compute-node-down.rst:170
msgid ""
"Create an active iSCSI session from the cloud controller to the compute node "
"(managed by cinder-volume)."
msgstr ""
#: ../compute-node-down.rst:173
msgid ""
"Create an iSCSI session for every volume (so 14 EBS volumes requires 14 "
"iSCSI sessions)."
msgstr ""
#: ../compute-node-down.rst:176
msgid ""
"Create iptables or ebtables rules from the cloud controller to the compute "
"node. This allows access from the cloud controller to the running instance."
msgstr ""
#: ../compute-node-down.rst:180
msgid ""
"Save the current state of the database, the current state of the running "
"instances, and the attached volumes (mount point, volume ID, volume status, "
"etc), at least from the cloud controller to the compute node."
msgstr ""
#: ../compute-node-down.rst:184
msgid "After power is recovered and all hardware components have restarted:"
msgstr ""
#: ../compute-node-down.rst:186
msgid "The iSCSI session from the SAN to the cloud no longer exists."
msgstr ""
#: ../compute-node-down.rst:188
msgid ""
"The iSCSI session from the cloud controller to the compute node no longer "
"exists."
msgstr ""
#: ../compute-node-down.rst:191
msgid ""
"The iptables and ebtables from the cloud controller to the compute node are "
"recreated. This is because nova-network reapplies configurations on boot."
msgstr ""
#: ../compute-node-down.rst:195
msgid "Instances are no longer running."
msgstr ""
#: ../compute-node-down.rst:197
msgid ""
"Note that instances will not be lost, because neither ``destroy`` nor "
"``terminate`` was invoked. The files for the instances will remain on the "
"compute node."
msgstr ""
#: ../compute-node-down.rst:201
msgid "The database has not been updated."
msgstr ""
#: ../compute-node-down.rst:203
msgid "**Begin recovery**"
msgstr ""
#: ../compute-node-down.rst:207
msgid ""
"Do not add any extra steps to this procedure, or perform the steps out of "
"order."
msgstr ""
#: ../compute-node-down.rst:210
msgid ""
"Check the current relationship between the volume and its instance, so that "
"you can recreate the attachment."
msgstr ""
#: ../compute-node-down.rst:213
msgid ""
"This information can be found using the :command:`nova volume-list` command. "
"Note that the ``nova`` client also includes the ability to get volume "
"information from OpenStack Block Storage."
msgstr ""
#: ../compute-node-down.rst:217
msgid ""
"Update the database to clean the stalled state. Do this for every volume, "
"using these queries:"
msgstr ""
#: ../compute-node-down.rst:228
msgid "Use :command:`nova volume-list` commands to list all volumes."
msgstr ""
#: ../compute-node-down.rst:230
msgid ""
"Restart the instances using the :command:`nova reboot INSTANCE` command."
msgstr ""
#: ../compute-node-down.rst:234
msgid ""
"Some instances will completely reboot and become reachable, while some might "
"stop at the plymouth stage. This is expected behavior, DO NOT reboot a "
"second time."
msgstr ""
#: ../compute-node-down.rst:238
msgid ""
"Instance state at this stage depends on whether you added an `/etc/fstab` "
"entry for that volume. Images built with the cloud-init package remain in a "
"``pending`` state, while others skip the missing volume and start. This step "
"is performed in order to ask Compute to reboot every instance, so that the "
"stored state is preserved. It does not matter if not all instances come up "
"successfully. For more information about cloud-init, see `help.ubuntu.com/"
"community/CloudInit/ <https://help.ubuntu.com/community/CloudInit/>`__."
msgstr ""
#: ../compute-node-down.rst:247
msgid ""
"Reattach the volumes to their respective instances, if required, using the :"
"command:`nova volume-attach` command. This example uses a file of listed "
"volumes to reattach them:"
msgstr ""
#: ../compute-node-down.rst:264
msgid ""
"Instances that were stopped at the plymouth stage will now automatically "
"continue booting and start normally. Instances that previously started "
"successfully will now be able to see the volume."
msgstr ""
#: ../compute-node-down.rst:268
msgid "Log in to the instances with SSH and reboot them."
msgstr ""
#: ../compute-node-down.rst:270
msgid ""
"If some services depend on the volume, or if a volume has an entry in fstab, "
"you should now be able to restart the instance. Restart directly from the "
"instance itself, not through ``nova``:"
msgstr ""
#: ../compute-node-down.rst:278
msgid ""
"When you are planning for and performing a disaster recovery, follow these "
"tips:"
msgstr ""
#: ../compute-node-down.rst:281
msgid ""
"Use the ``errors=remount`` parameter in the :file:`fstab` file to prevent "
"data corruption."
msgstr ""
#: ../compute-node-down.rst:284
msgid ""
"This parameter will cause the system to disable the ability to write to the "
"disk if it detects an I/O error. This configuration option should be added "
"into the cinder-volume server (the one which performs the iSCSI connection "
"to the SAN), and into the instances' :file:`fstab` files."
msgstr ""
#: ../compute-node-down.rst:290
msgid ""
"Do not add the entry for the SAN's disks to the cinder-volume's :file:"
"`fstab` file."
msgstr ""
#: ../compute-node-down.rst:293
msgid ""
"Some systems hang on that step, which means you could lose access to your "
"cloud-controller. To re-run the session manually, run this command before "
"performing the mount:"
msgstr ""
#: ../compute-node-down.rst:301
msgid ""
"On your instances, if you have the whole ``/home/`` directory on the disk, "
"leave a user's directory with the user's bash files and the :file:"
"`authorized_keys` file (instead of emptying the ``/home`` directory and "
"mapping the disk on it)."
msgstr ""
#: ../compute-node-down.rst:306
msgid ""
"This allows you to connect to the instance even without the volume attached, "
"if you allow only connections through public keys."
msgstr ""
#: ../compute-node-down.rst:309
msgid ""
"If you want to script the disaster recovery plan (DRP), a bash script is "
"available from `https://github.com/Razique <https://github.com/Razique/"
"BashStuff/blob/master/SYSTEMS/OpenStack/SCR_5006_V00_NUAC-OPENSTACK-DRP-"
"OpenStack.sh>`_ which performs the following steps:"
msgstr ""
#: ../compute-node-down.rst:313
msgid "An array is created for instances and their attached volumes."
msgstr ""
#: ../compute-node-down.rst:315
msgid "The MySQL database is updated."
msgstr ""
#: ../compute-node-down.rst:317
msgid "All instances are restarted with euca2ools."
msgstr ""
#: ../compute-node-down.rst:319
msgid "The volumes are reattached."
msgstr ""
#: ../compute-node-down.rst:321
msgid ""
"An SSH connection is performed into every instance using Compute credentials."
msgstr ""
#: ../compute-node-down.rst:324
msgid ""
"The script includes a ``test mode``, which allows you to perform that whole "
"sequence for only one instance."
msgstr ""
#: ../compute-node-down.rst:327
msgid ""
"To reproduce the power loss, connect to the compute node which runs that "
"instance and close the iSCSI session. Do not detach the volume using the :"
"command:`nova volume-detach` command, manually close the iSCSI session. This "
"example closes an iSCSI session with the number 15:"
msgstr ""
#: ../compute-node-down.rst:336
msgid "Do not forget the ``-r`` flag. Otherwise, you will close all sessions."
msgstr ""
#: ../compute-remote-console-access.rst:0
msgid "**Description of SPICE configuration options**"
msgstr ""
#: ../compute-remote-console-access.rst:0
msgid "**Description of VNC configuration options**"
msgstr ""
#: ../compute-remote-console-access.rst:3
msgid "Configure remote console access"
msgstr ""
#: ../compute-remote-console-access.rst:5
msgid ""
"To provide a remote console or remote desktop access to guest virtual "
"machines, use VNC or SPICE HTML5 through either the OpenStack dashboard or "
"the command line. Best practice is to select one or the other to run."
msgstr ""
#: ../compute-remote-console-access.rst:10
msgid "SPICE console"
msgstr ""
#: ../compute-remote-console-access.rst:12
msgid ""
"OpenStack Compute supports VNC consoles to guests. The VNC protocol is "
"fairly limited, lacking support for multiple monitors, bi-directional audio, "
"reliable cut-and-paste, video streaming and more. SPICE is a new protocol "
"that aims to address the limitations in VNC and provide good remote desktop "
"support."
msgstr ""
#: ../compute-remote-console-access.rst:18
msgid ""
"SPICE support in OpenStack Compute shares a similar architecture to the VNC "
"implementation. The OpenStack dashboard uses a SPICE-HTML5 widget in its "
"console tab that communicates to the ``nova-spicehtml5proxy`` service by "
"using SPICE-over-websockets. The ``nova-spicehtml5proxy`` service "
"communicates directly with the hypervisor process by using SPICE."
msgstr ""
#: ../compute-remote-console-access.rst:24
msgid ""
"VNC must be explicitly disabled to get access to the SPICE console. Set the "
"``vnc_enabled`` option to ``False`` in the ``[DEFAULT]`` section to disable "
"the VNC console."
msgstr ""
#: ../compute-remote-console-access.rst:28
msgid ""
"Use the following options to configure SPICE as the console for OpenStack "
"Compute:"
msgstr ""
#: ../compute-remote-console-access.rst:35
msgid "Spice configuration option = Default value"
msgstr ""
#: ../compute-remote-console-access.rst:37
msgid "``agent_enabled = True``"
msgstr ""
#: ../compute-remote-console-access.rst:38
msgid "(BoolOpt) Enable spice guest agent support"
msgstr ""
#: ../compute-remote-console-access.rst:39
msgid "``enabled = False``"
msgstr ""
#: ../compute-remote-console-access.rst:40
msgid "(BoolOpt) Enable spice related features"
msgstr ""
#: ../compute-remote-console-access.rst:41
msgid "``html5proxy_base_url = http://127.0.0.1:6082/spice_auto.html``"
msgstr ""
#: ../compute-remote-console-access.rst:42
msgid ""
"(StrOpt) Location of spice HTML5 console proxy, in the form "
"\"http://127.0.0.1:6082/spice_auto.html\""
msgstr ""
#: ../compute-remote-console-access.rst:44
msgid "``html5proxy_host = 0.0.0.0``"
msgstr ""
#: ../compute-remote-console-access.rst:45
#: ../compute-remote-console-access.rst:161
msgid "(StrOpt) Host on which to listen for incoming requests"
msgstr ""
#: ../compute-remote-console-access.rst:46
msgid "``html5proxy_port = 6082``"
msgstr ""
#: ../compute-remote-console-access.rst:47
#: ../compute-remote-console-access.rst:163
msgid "(IntOpt) Port on which to listen for incoming requests"
msgstr ""
#: ../compute-remote-console-access.rst:48
msgid "``keymap = en-us``"
msgstr ""
#: ../compute-remote-console-access.rst:49
msgid "(StrOpt) Keymap for spice"
msgstr ""
#: ../compute-remote-console-access.rst:50
msgid "``server_listen = 127.0.0.1``"
msgstr ""
#: ../compute-remote-console-access.rst:51
msgid "(StrOpt) IP address on which instance spice server should listen"
msgstr ""
#: ../compute-remote-console-access.rst:52
msgid "``server_proxyclient_address = 127.0.0.1``"
msgstr ""
#: ../compute-remote-console-access.rst:53
msgid ""
"(StrOpt) The address to which proxy clients (like nova-spicehtml5proxy) "
"should connect"
msgstr ""
#: ../compute-remote-console-access.rst:57
msgid "VNC console proxy"
msgstr ""
#: ../compute-remote-console-access.rst:59
msgid ""
"The VNC proxy is an OpenStack component that enables compute service users "
"to access their instances through VNC clients."
msgstr ""
#: ../compute-remote-console-access.rst:64
msgid ""
"The web proxy console URLs do not support the websocket protocol scheme "
"(ws://) on python versions less than 2.7.4."
msgstr ""
#: ../compute-remote-console-access.rst:67
msgid "The VNC console connection works as follows:"
msgstr ""
#: ../compute-remote-console-access.rst:69
msgid ""
"A user connects to the API and gets an ``access_url`` such as, ``http://ip:"
"port/?token=xyz``."
msgstr ""
#: ../compute-remote-console-access.rst:72
msgid "The user pastes the URL in a browser or uses it as a client parameter."
msgstr ""
#: ../compute-remote-console-access.rst:75
msgid "The browser or client connects to the proxy."
msgstr ""
#: ../compute-remote-console-access.rst:77
msgid ""
"The proxy talks to ``nova-consoleauth`` to authorize the token for the user, "
"and maps the token to the *private* host and port of the VNC server for an "
"instance."
msgstr ""
#: ../compute-remote-console-access.rst:81
msgid ""
"The compute host specifies the address that the proxy should use to connect "
"through the :file:`nova.conf` file option, "
"``vncserver_proxyclient_address``. In this way, the VNC proxy works as a "
"bridge between the public network and private host network."
msgstr ""
#: ../compute-remote-console-access.rst:86
msgid ""
"The proxy initiates the connection to VNC server and continues to proxy "
"until the session ends."
msgstr ""
#: ../compute-remote-console-access.rst:89
msgid ""
"The proxy also tunnels the VNC protocol over WebSockets so that the "
"``noVNC`` client can talk to VNC servers. In general, the VNC proxy:"
msgstr ""
#: ../compute-remote-console-access.rst:92
msgid ""
"Bridges between the public network where the clients live and the private "
"network where VNC servers live."
msgstr ""
#: ../compute-remote-console-access.rst:95
msgid "Mediates token authentication."
msgstr ""
#: ../compute-remote-console-access.rst:97
msgid ""
"Transparently deals with hypervisor-specific connection details to provide a "
"uniform client experience."
msgstr ""
#: ../compute-remote-console-access.rst:105
msgid "About nova-consoleauth"
msgstr ""
#: ../compute-remote-console-access.rst:107
msgid ""
"Both client proxies leverage a shared service to manage token authentication "
"called ``nova-consoleauth``. This service must be running for either proxy "
"to work. Many proxies of either type can be run against a single ``nova-"
"consoleauth`` service in a cluster configuration."
msgstr ""
#: ../compute-remote-console-access.rst:112
msgid ""
"Do not confuse the ``nova-consoleauth`` shared service with ``nova-"
"console``, which is a XenAPI-specific service that most recent VNC proxy "
"architectures do not use."
msgstr ""
#: ../compute-remote-console-access.rst:117
msgid "Typical deployment"
msgstr ""
#: ../compute-remote-console-access.rst:119
msgid "A typical deployment has the following components:"
msgstr ""
#: ../compute-remote-console-access.rst:121
msgid "A ``nova-consoleauth`` process. Typically runs on the controller host."
msgstr ""
#: ../compute-remote-console-access.rst:123
msgid ""
"One or more ``nova-novncproxy`` services. Supports browser-based noVNC "
"clients. For simple deployments, this service typically runs on the same "
"machine as ``nova-api`` because it operates as a proxy between the public "
"network and the private compute host network."
msgstr ""
#: ../compute-remote-console-access.rst:128
msgid ""
"One or more ``nova-xvpvncproxy`` services. Supports the special Java client "
"discussed here. For simple deployments, this service typically runs on the "
"same machine as ``nova-api`` because it acts as a proxy between the public "
"network and the private compute host network."
msgstr ""
#: ../compute-remote-console-access.rst:133
msgid ""
"One or more compute hosts. These compute hosts must have correctly "
"configured options, as follows."
msgstr ""
#: ../compute-remote-console-access.rst:137
msgid "VNC configuration options"
msgstr ""
#: ../compute-remote-console-access.rst:139
msgid ""
"To customize the VNC console, use the following configuration options in "
"your :file:`nova.conf`:"
msgstr ""
#: ../compute-remote-console-access.rst:144
msgid ""
"To support :ref:`live migration <section_configuring-compute-migrations>`, "
"you cannot specify a specific IP address for ``vncserver_listen``, because "
"that IP address does not exist on the destination host."
msgstr ""
#: ../compute-remote-console-access.rst:154
msgid "**[DEFAULT]**"
msgstr ""
#: ../compute-remote-console-access.rst:156
msgid "``daemon = False``"
msgstr ""
#: ../compute-remote-console-access.rst:157
msgid "(BoolOpt) Become a daemon (background process)"
msgstr ""
#: ../compute-remote-console-access.rst:158
msgid "``key = None``"
msgstr ""
#: ../compute-remote-console-access.rst:159
msgid "(StrOpt) SSL key file (if separate from cert)"
msgstr ""
#: ../compute-remote-console-access.rst:160
msgid "``novncproxy_host = 0.0.0.0``"
msgstr ""
#: ../compute-remote-console-access.rst:162
msgid "``novncproxy_port = 6080``"
msgstr ""
#: ../compute-remote-console-access.rst:164
msgid "``record = False``"
msgstr ""
#: ../compute-remote-console-access.rst:165
msgid "(BoolOpt) Record sessions to FILE.[session_number]"
msgstr ""
#: ../compute-remote-console-access.rst:166
msgid "``source_is_ipv6 = False``"
msgstr ""
#: ../compute-remote-console-access.rst:167
msgid "(BoolOpt) Source is ipv6"
msgstr ""
#: ../compute-remote-console-access.rst:168
msgid "``ssl_only = False``"
msgstr ""
#: ../compute-remote-console-access.rst:169
msgid "(BoolOpt) Disallow non-encrypted connections"
msgstr ""
#: ../compute-remote-console-access.rst:170
msgid "``web = /usr/share/spice-html5``"
msgstr ""
#: ../compute-remote-console-access.rst:171
msgid "(StrOpt) Run webserver on same port. Serve files from DIR."
msgstr ""
#: ../compute-remote-console-access.rst:172
msgid "**[vmware]**"
msgstr ""
#: ../compute-remote-console-access.rst:174
msgid "``vnc_port = 5900``"
msgstr ""
#: ../compute-remote-console-access.rst:175
msgid "(IntOpt) VNC starting port"
msgstr ""
#: ../compute-remote-console-access.rst:176
msgid "``vnc_port_total = 10000``"
msgstr ""
#: ../compute-remote-console-access.rst:177
msgid "vnc_port_total = 10000"
msgstr ""
#: ../compute-remote-console-access.rst:178
msgid "**[vnc]**"
msgstr ""
#: ../compute-remote-console-access.rst:180
msgid "enabled = True"
msgstr ""
#: ../compute-remote-console-access.rst:181
msgid "(BoolOpt) Enable VNC related features"
msgstr ""
#: ../compute-remote-console-access.rst:182
msgid "novncproxy_base_url = http://127.0.0.1:6080/vnc_auto.html"
msgstr ""
#: ../compute-remote-console-access.rst:183
msgid ""
"(StrOpt) Location of VNC console proxy, in the form \"http://127.0.0.1:6080/"
"vnc_auto.html\""
msgstr ""
#: ../compute-remote-console-access.rst:185
msgid "vncserver_listen = 127.0.0.1"
msgstr ""
#: ../compute-remote-console-access.rst:186
msgid "(StrOpt) IP address on which instance vncservers should listen"
msgstr ""
#: ../compute-remote-console-access.rst:187
msgid "vncserver_proxyclient_address = 127.0.0.1"
msgstr ""
#: ../compute-remote-console-access.rst:188
msgid ""
"(StrOpt) The address to which proxy clients (like nova-xvpvncproxy) should "
"connect"
msgstr ""
#: ../compute-remote-console-access.rst:190
msgid "xvpvncproxy_base_url = http://127.0.0.1:6081/console"
msgstr ""
#: ../compute-remote-console-access.rst:191
msgid ""
"(StrOpt) Location of nova xvp VNC console proxy, in the form "
"\"http://127.0.0.1:6081/console\""
msgstr ""
#: ../compute-remote-console-access.rst:196
msgid ""
"The ``vncserver_proxyclient_address`` defaults to ``127.0.0.1``, which is "
"the address of the compute host that Compute instructs proxies to use when "
"connecting to instance servers."
msgstr ""
#: ../compute-remote-console-access.rst:200
msgid "For all-in-one XenServer domU deployments, set this to ``169.254.0.1.``"
msgstr ""
#: ../compute-remote-console-access.rst:203
msgid ""
"For multi-host XenServer domU deployments, set to a ``dom0 management IP`` "
"on the same network as the proxies."
msgstr ""
#: ../compute-remote-console-access.rst:206
msgid ""
"For multi-host libvirt deployments, set to a host management IP on the same "
"network as the proxies."
msgstr ""
#: ../compute-remote-console-access.rst:210
msgid "nova-novncproxy (noVNC)"
msgstr ""
#: ../compute-remote-console-access.rst:212
msgid ""
"You must install the noVNC package, which contains the ``nova-novncproxy`` "
"service. As root, run the following command:"
msgstr ""
#: ../compute-remote-console-access.rst:219
msgid "The service starts automatically on installation."
msgstr ""
#: ../compute-remote-console-access.rst:221
msgid "To restart the service, run:"
msgstr ""
#: ../compute-remote-console-access.rst:227
msgid ""
"The configuration option parameter should point to your :file:`nova.conf` "
"file, which includes the message queue server address and credentials."
msgstr ""
#: ../compute-remote-console-access.rst:230
msgid "By default, ``nova-novncproxy`` binds on ``0.0.0.0:6080``."
msgstr ""
#: ../compute-remote-console-access.rst:232
msgid ""
"To connect the service to your Compute deployment, add the following "
"configuration options to your :file:`nova.conf`:"
msgstr ""
#: ../compute-remote-console-access.rst:235
msgid "``vncserver_listen=0.0.0.0``"
msgstr ""
#: ../compute-remote-console-access.rst:237
msgid ""
"Specifies the address on which the VNC service should bind. Make sure it is "
"assigned one of the compute node interfaces. This address is the one used by "
"your domain file."
msgstr ""
#: ../compute-remote-console-access.rst:247
msgid "To use live migration, use the 0.0.0.0 address."
msgstr ""
#: ../compute-remote-console-access.rst:249
msgid "``vncserver_proxyclient_address=127.0.0.1``"
msgstr ""
#: ../compute-remote-console-access.rst:251
msgid ""
"The address of the compute host that Compute instructs proxies to use when "
"connecting to instance ``vncservers``."
msgstr ""
#: ../compute-remote-console-access.rst:255
msgid "Frequently asked questions about VNC access to virtual machines"
msgstr ""
#: ../compute-remote-console-access.rst:257
msgid ""
"**Q: What is the difference between ``nova-xvpvncproxy`` and ``nova-"
"novncproxy``?**"
msgstr ""
#: ../compute-remote-console-access.rst:260
msgid ""
"A: ``nova-xvpvncproxy``, which ships with OpenStack Compute, is a proxy that "
"supports a simple Java client. nova-novncproxy uses noVNC to provide VNC "
"support through a web browser."
msgstr ""
#: ../compute-remote-console-access.rst:264
msgid ""
"**Q: I want VNC support in the OpenStack dashboard. What services do I need?"
"**"
msgstr ""
#: ../compute-remote-console-access.rst:267
msgid ""
"A: You need ``nova-novncproxy``, ``nova-consoleauth``, and correctly "
"configured compute hosts."
msgstr ""
#: ../compute-remote-console-access.rst:270
msgid ""
"**Q: When I use ``nova get-vnc-console`` or click on the VNC tab of the "
"OpenStack dashboard, it hangs. Why?**"
msgstr ""
#: ../compute-remote-console-access.rst:273
msgid ""
"A: Make sure you are running ``nova-consoleauth`` (in addition to ``nova-"
"novncproxy``). The proxies rely on ``nova-consoleauth`` to validate tokens, "
"and waits for a reply from them until a timeout is reached."
msgstr ""
#: ../compute-remote-console-access.rst:277
msgid ""
"**Q: My VNC proxy worked fine during my all-in-one test, but now it doesn't "
"work on multi host. Why?**"
msgstr ""
#: ../compute-remote-console-access.rst:280
msgid ""
"A: The default options work for an all-in-one install, but changes must be "
"made on your compute hosts once you start to build a cluster. As an example, "
"suppose you have two servers:"
msgstr ""
#: ../compute-remote-console-access.rst:289
msgid ""
"Your :file:`nova-compute` configuration file must set the following values:"
msgstr ""
#: ../compute-remote-console-access.rst:304
msgid ""
"``novncproxy_base_url`` and ``xvpvncproxy_base_url`` use a public IP; this "
"is the URL that is ultimately returned to clients, which generally do not "
"have access to your private network. Your PROXYSERVER must be able to reach "
"``vncserver_proxyclient_address``, because that is the address over which "
"the VNC connection is proxied."
msgstr ""
#: ../compute-remote-console-access.rst:310
msgid ""
"**Q: My noVNC does not work with recent versions of web browsers. Why?**"
msgstr ""
#: ../compute-remote-console-access.rst:312
msgid ""
"A: Make sure you have installed ``python-numpy``, which is required to "
"support a newer version of the WebSocket protocol (HyBi-07+)."
msgstr ""
#: ../compute-remote-console-access.rst:315
msgid ""
"**Q: How do I adjust the dimensions of the VNC window image in the OpenStack "
"dashboard?**"
msgstr ""
#: ../compute-remote-console-access.rst:318
msgid ""
"A: These values are hard-coded in a Django HTML template. To alter them, "
"edit the :file:`_detail_vnc.html` template file. The location of this file "
"varies based on Linux distribution. On Ubuntu 14.04, the file is at ``/usr/"
"share/pyshared/horizon/dashboards/nova/instances/templates/instances/"
"_detail_vnc.html``."
msgstr ""
#: ../compute-remote-console-access.rst:324
msgid "Modify the ``width`` and ``height`` options, as follows:"
msgstr ""
#: ../compute-remote-console-access.rst:330
msgid ""
"**Q: My noVNC connections failed with ValidationError: Origin header "
"protocol does not match. Why?**"
msgstr ""
#: ../compute-remote-console-access.rst:333
msgid ""
"A: Make sure the ``base_url`` match your TLS setting. If you are using https "
"console connections, make sure that the value of ``novncproxy_base_url`` is "
"set explicitly where the ``nova-novncproxy`` service is running."
msgstr ""
#: ../compute-root-wrap-reference.rst:0
msgid "**Filters configuration options**"
msgstr ""
#: ../compute-root-wrap-reference.rst:0
msgid "**rootwrap.conf configuration options**"
msgstr ""
#: ../compute-root-wrap-reference.rst:5
msgid "Secure with rootwrap"
msgstr ""
#: ../compute-root-wrap-reference.rst:7
msgid ""
"Rootwrap allows unprivileged users to safely run Compute actions as the root "
"user. Compute previously used :command:`sudo` for this purpose, but this was "
"difficult to maintain, and did not allow advanced filters. The :command:"
"`rootwrap` command replaces :command:`sudo` for Compute."
msgstr ""
#: ../compute-root-wrap-reference.rst:12
msgid ""
"To use rootwrap, prefix the Compute command with :command:`nova-rootwrap`. "
"For example:"
msgstr ""
#: ../compute-root-wrap-reference.rst:19
msgid ""
"A generic ``sudoers`` entry lets the Compute user run :command:`nova-"
"rootwrap` as root. The :command:`nova-rootwrap` code looks for filter "
"definition directories in its configuration file, and loads command filters "
"from them. It then checks if the command requested by Compute matches one of "
"those filters and, if so, executes the command (as root). If no filter "
"matches, it denies the request."
msgstr ""
#: ../compute-root-wrap-reference.rst:28
msgid ""
"Be aware of issues with using NFS and root-owned files. The NFS share must "
"be configured with the ``no_root_squash`` option enabled, in order for "
"rootwrap to work correctly."
msgstr ""
#: ../compute-root-wrap-reference.rst:32
msgid ""
"Rootwrap is fully controlled by the root user. The root user owns the "
"sudoers entry which allows Compute to run a specific rootwrap executable as "
"root, and only with a specific configuration file (which should also be "
"owned by root). The :command:`nova-rootwrap` command imports the Python "
"modules it needs from a cleaned, system-default PYTHONPATH. The root-owned "
"configuration file points to root-owned filter definition directories, which "
"contain root-owned filters definition files. This chain ensures that the "
"Compute user itself is not in control of the configuration or modules used "
"by the :command:`nova-rootwrap` executable."
msgstr ""
#: ../compute-root-wrap-reference.rst:44
msgid ""
"Rootwrap is configured using the :file:`rootwrap.conf` file. Because it's in "
"the trusted security path, it must be owned and writable by only the root "
"user. The file's location is specified in both the sudoers entry and in the :"
"file:`nova.conf` configuration file with the ``rootwrap_config=entry`` "
"parameter."
msgstr ""
#: ../compute-root-wrap-reference.rst:50
msgid ""
"The :file:`rootwrap.conf` file uses an INI file format with these sections "
"and parameters:"
msgstr ""
#: ../compute-root-wrap-reference.rst:56 ../compute-root-wrap-reference.rst:95
msgid "Configuration option=Default value"
msgstr ""
#: ../compute-root-wrap-reference.rst:57 ../compute-root-wrap-reference.rst:96
msgid "(Type) Description"
msgstr ""
#: ../compute-root-wrap-reference.rst:58
msgid "[DEFAULT] filters\\_path=/etc/nova/rootwrap.d,/usr/share/nova/rootwrap"
msgstr ""
#: ../compute-root-wrap-reference.rst:60
msgid ""
"(ListOpt) Comma-separated list of directories containing filter definition "
"files. Defines where rootwrap filters are stored. Directories defined on "
"this line should all exist, and be owned and writable only by the root user."
msgstr ""
#: ../compute-root-wrap-reference.rst:67
msgid ""
"If the root wrapper is not performing correctly, you can add a workaround "
"option into the :file:`nova.conf` configuration file. This workaround re-"
"configures the root wrapper configuration to fall back to running commands "
"as sudo, and is a Kilo release feature."
msgstr ""
#: ../compute-root-wrap-reference.rst:72
msgid ""
"Including this workaround in your configuration file safeguards your "
"environment from issues that can impair root wrapper performance. Tool "
"changes that have impacted `Python Build Reasonableness (PBR) <https://git."
"openstack.org/cgit/openstack-dev/pbr/>`__ for example, are a known issue "
"that affects root wrapper performance."
msgstr ""
#: ../compute-root-wrap-reference.rst:78
msgid ""
"To set up this workaround, configure the ``disable_rootwrap`` option in the "
"``[workaround]`` section of the :file:`nova.conf` configuration file."
msgstr ""
#: ../compute-root-wrap-reference.rst:81
msgid ""
"The filters definition files contain lists of filters that rootwrap will use "
"to allow or deny a specific command. They are generally suffixed by ``."
"filters`` . Since they are in the trusted security path, they need to be "
"owned and writable only by the root user. Their location is specified in "
"the :file:`rootwrap.conf` file."
msgstr ""
#: ../compute-root-wrap-reference.rst:87
msgid ""
"Filter definition files use an INI file format with a ``[Filters]`` section "
"and several lines, each with a unique parameter name, which should be "
"different for each filter you define:"
msgstr ""
#: ../compute-root-wrap-reference.rst:97
msgid "[Filters] filter\\_name=kpartx: CommandFilter, /sbin/kpartx, root"
msgstr ""
#: ../compute-root-wrap-reference.rst:99
msgid ""
"(ListOpt) Comma-separated list containing the filter class to use, followed "
"by the Filter arguments (which vary depending on the Filter class selected)."
msgstr ""
#: ../compute-security.rst:0
msgid "**Description of trusted computing configuration options**"
msgstr ""
#: ../compute-security.rst:5
msgid "Security hardening"
msgstr ""
#: ../compute-security.rst:7
msgid ""
"OpenStack Compute can be integrated with various third-party technologies to "
"increase security. For more information, see the `OpenStack Security Guide "
"<http://docs.openstack.org/sec/>`_."
msgstr ""
#: ../compute-security.rst:12
msgid "Trusted compute pools"
msgstr ""
#: ../compute-security.rst:14
msgid ""
"Administrators can designate a group of compute hosts as trusted using "
"trusted compute pools. The trusted hosts use hardware-based security "
"features, such as the Intel Trusted Execution Technology (TXT), to provide "
"an additional level of security. Combined with an external stand-alone, web-"
"based remote attestation server, cloud providers can ensure that the compute "
"node runs only software with verified measurements and can ensure a secure "
"cloud stack."
msgstr ""
#: ../compute-security.rst:22
msgid ""
"Trusted compute pools provide the ability for cloud subscribers to request "
"services run only on verified compute nodes."
msgstr ""
#: ../compute-security.rst:25
msgid "The remote attestation server performs node verification like this:"
msgstr ""
#: ../compute-security.rst:27
msgid "Compute nodes boot with Intel TXT technology enabled."
msgstr ""
#: ../compute-security.rst:29
msgid "The compute node BIOS, hypervisor, and operating system are measured."
msgstr ""
#: ../compute-security.rst:31
msgid ""
"When the attestation server challenges the compute node, the measured data "
"is sent to the attestation server."
msgstr ""
#: ../compute-security.rst:34
msgid ""
"The attestation server verifies the measurements against a known good "
"database to determine node trustworthiness."
msgstr ""
#: ../compute-security.rst:37
msgid ""
"A description of how to set up an attestation service is beyond the scope of "
"this document. For an open source project that you can use to implement an "
"attestation service, see the `Open Attestation <https://github.com/"
"OpenAttestation/OpenAttestation>`__ project."
msgstr ""
# #-#-#-#-# compute-security.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# networking_multi-dhcp-agents.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../compute-security.rst:43 ../networking_multi-dhcp-agents.rst:31
msgid "|image0|"
msgstr ""
#: ../compute-security.rst:45
msgid "**Configuring Compute to use trusted compute pools**"
msgstr ""
#: ../compute-security.rst:47
msgid ""
"Enable scheduling support for trusted compute pools by adding these lines to "
"the ``DEFAULT`` section of the :file:`/etc/nova/nova.conf` file:"
msgstr ""
#: ../compute-security.rst:57
msgid ""
"Specify the connection information for your attestation service by adding "
"these lines to the ``trusted_computing`` section of the :file:`/etc/nova/"
"nova.conf` file:"
msgstr ""
# #-#-#-#-# compute-security.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# database.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../compute-security.rst:75 ../database.rst:104 ../database.rst:142
msgid "In this example:"
msgstr ""
#: ../compute-security.rst:78
msgid "Host name or IP address of the host that runs the attestation service"
msgstr ""
#: ../compute-security.rst:79
msgid "server"
msgstr ""
#: ../compute-security.rst:82
msgid "HTTPS port for the attestation service"
msgstr ""
# #-#-#-#-# compute-security.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# telemetry-measurements.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../compute-security.rst:82 ../telemetry-measurements.rst:944
#: ../telemetry-measurements.rst:947 ../telemetry-measurements.rst:950
#: ../telemetry-measurements.rst:990
msgid "port"
msgstr ""
#: ../compute-security.rst:85
msgid "Certificate file used to verify the attestation server's identity"
msgstr ""
#: ../compute-security.rst:85
msgid "server_ca_file"
msgstr ""
#: ../compute-security.rst:88
msgid "The attestation service's URL path"
msgstr ""
#: ../compute-security.rst:88
msgid "api_url"
msgstr ""
#: ../compute-security.rst:91
msgid "An authentication blob, required by the attestation service."
msgstr ""
#: ../compute-security.rst:91
msgid "auth_blob"
msgstr ""
#: ../compute-security.rst:93
msgid ""
"Save the file, and restart the nova-compute and nova-scheduler services to "
"pick up the changes."
msgstr ""
#: ../compute-security.rst:96
msgid ""
"To customize the trusted compute pools, use these configuration option "
"settings:"
msgstr ""
#: ../compute-security.rst:104
msgid "[trusted_computing]"
msgstr ""
#: ../compute-security.rst:106
msgid "attestation_api_url = /OpenAttestationWebServices/V1.0"
msgstr ""
#: ../compute-security.rst:107
msgid "(StrOpt) Attestation web API URL"
msgstr ""
#: ../compute-security.rst:108
msgid "attestation_auth_blob = None"
msgstr ""
#: ../compute-security.rst:109
msgid "(StrOpt) Attestation authorization blob - must change"
msgstr ""
#: ../compute-security.rst:110
msgid "attestation_auth_timeout = 60"
msgstr ""
#: ../compute-security.rst:111
msgid "(IntOpt) Attestation status cache valid period length"
msgstr ""
#: ../compute-security.rst:112
msgid "attestation_insecure_ssl = False"
msgstr ""
#: ../compute-security.rst:113
msgid "(BoolOpt) Disable SSL cert verification for Attestation service"
msgstr ""
#: ../compute-security.rst:114
msgid "attestation_port = 8443"
msgstr ""
#: ../compute-security.rst:115
msgid "(StrOpt) Attestation server port"
msgstr ""
#: ../compute-security.rst:116
msgid "attestation_server = None"
msgstr ""
#: ../compute-security.rst:117
msgid "(StrOpt) Attestation server HTTP"
msgstr ""
#: ../compute-security.rst:118
msgid "attestation_server_ca_file = None"
msgstr ""
#: ../compute-security.rst:119
msgid "(StrOpt) Attestation server Cert file for Identity verification"
msgstr ""
#: ../compute-security.rst:121
msgid "**Specifying trusted flavors**"
msgstr ""
#: ../compute-security.rst:123
msgid ""
"Flavors can be designated as trusted using the ``nova flavor-key set`` "
"command. In this example, the ``m1.tiny`` flavor is being set as trusted:"
msgstr ""
#: ../compute-security.rst:131
msgid ""
"You can request that your instance is run on a trusted host by specifying a "
"trusted flavor when booting the instance:"
msgstr ""
#: ../compute-security.rst:138
msgid "|Trusted compute pool|"
msgstr ""
#: ../compute-security.rst:145
msgid "Encrypt Compute metadata traffic"
msgstr ""
#: ../compute-security.rst:147
msgid "**Enabling SSL encryption**"
msgstr ""
#: ../compute-security.rst:149
msgid ""
"OpenStack supports encrypting Compute metadata traffic with HTTPS. Enable "
"SSL encryption in the :file:`metadata_agent.ini` file."
msgstr ""
#: ../compute-security.rst:152
msgid "Enable the HTTPS protocol::"
msgstr ""
#: ../compute-security.rst:156
msgid ""
"Determine whether insecure SSL connections are accepted for Compute metadata "
"server requests. The default value is ``False``::"
msgstr ""
#: ../compute-security.rst:161
msgid "Specify the path to the client certificate::"
msgstr ""
#: ../compute-security.rst:165
msgid "Specify the path to the private key::"
msgstr ""
#: ../compute-service-groups.rst:5
msgid "Configure Compute service groups"
msgstr ""
#: ../compute-service-groups.rst:7
msgid ""
"The Compute service must know the status of each compute node to effectively "
"manage and use them. This can include events like a user launching a new VM, "
"the scheduler sending a request to a live node, or a query to the "
"ServiceGroup API to determine if a node is live."
msgstr ""
#: ../compute-service-groups.rst:12
msgid ""
"When a compute worker running the nova-compute daemon starts, it calls the "
"join API to join the compute group. Any service (such as the scheduler) can "
"query the group's membership and the status of its nodes. Internally, the "
"ServiceGroup client driver automatically updates the compute worker status."
msgstr ""
#: ../compute-service-groups.rst:21
msgid "Database ServiceGroup driver"
msgstr ""
#: ../compute-service-groups.rst:23
msgid ""
"By default, Compute uses the database driver to track if a node is live. In "
"a compute worker, this driver periodically sends a ``db update`` command to "
"the database, saying “I'm OK” with a timestamp. Compute uses a pre-defined "
"timeout (``service_down_time``) to determine if a node is dead."
msgstr ""
#: ../compute-service-groups.rst:29
msgid ""
"The driver has limitations, which can be problematic depending on your "
"environment. If a lot of compute worker nodes need to be checked, the "
"database can be put under heavy load, which can cause the timeout to "
"trigger, and a live node could incorrectly be considered dead. By default, "
"the timeout is 60 seconds. Reducing the timeout value can help in this "
"situation, but you must also make the database update more frequently, which "
"again increases the database workload."
msgstr ""
#: ../compute-service-groups.rst:37
msgid ""
"The database contains data that is both transient (such as whether the node "
"is alive) and persistent (such as entries for VM owners). With the "
"ServiceGroup abstraction, Compute can treat each type separately."
msgstr ""
#: ../compute-service-groups.rst:44
msgid "ZooKeeper ServiceGroup driver"
msgstr ""
#: ../compute-service-groups.rst:46
msgid ""
"The ZooKeeper ServiceGroup driver works by using ZooKeeper ephemeral nodes. "
"ZooKeeper, unlike databases, is a distributed system, with its load divided "
"among several servers. On a compute worker node, the driver can establish a "
"ZooKeeper session, then create an ephemeral znode in the group directory. "
"Ephemeral znodes have the same lifespan as the session. If the worker node "
"or the nova-compute daemon crashes, or a network partition is in place "
"between the worker and the ZooKeeper server quorums, the ephemeral znodes "
"are removed automatically. The driver can be given group membership by "
"running the :command:`ls` command in the group directory."
msgstr ""
#: ../compute-service-groups.rst:57
msgid ""
"The ZooKeeper driver requires the ZooKeeper servers and client libraries. "
"Setting up ZooKeeper servers is outside the scope of this guide (for more "
"information, see `Apache Zookeeper <http://zookeeper.apache.org/>`_). These "
"client-side Python libraries must be installed on every compute node:"
msgstr ""
#: ../compute-service-groups.rst:63
msgid "**python-zookeeper**"
msgstr ""
#: ../compute-service-groups.rst:63
msgid "The official Zookeeper Python binding"
msgstr ""
#: ../compute-service-groups.rst:66
msgid "**evzookeeper**"
msgstr ""
#: ../compute-service-groups.rst:66
msgid "This library makes the binding work with the eventlet threading model."
msgstr ""
#: ../compute-service-groups.rst:68
msgid ""
"This example assumes the ZooKeeper server addresses and ports are "
"``192.168.2.1:2181``, ``192.168.2.2:2181``, and ``192.168.2.3:2181``."
msgstr ""
#: ../compute-service-groups.rst:71
msgid ""
"These values in the :file:`/etc/nova/nova.conf` file are required on every "
"node for the ZooKeeper driver:"
msgstr ""
#: ../compute-service-groups.rst:82
msgid ""
"For Compute Service groups customization options, see the `OpenStack "
"Configuration Reference Guide <http://docs.openstack.org/liberty/config-"
"reference/ content/list-of-compute-config-options."
"html#config_table_nova_zookeeper>`_."
msgstr ""
#: ../compute-service-groups.rst:89
msgid "Memcache ServiceGroup driver"
msgstr ""
#: ../compute-service-groups.rst:91
msgid ""
"The memcache ServiceGroup driver uses memcached, a distributed memory object "
"caching system that is used to increase site performance. For more details, "
"see `memcached.org <http://memcached.org/>`_."
msgstr ""
#: ../compute-service-groups.rst:95
msgid ""
"To use the memcache driver, you must install memcached. You might already "
"have it installed, as the same driver is also used for the OpenStack Object "
"Storage and OpenStack dashboard. If you need to install memcached, see the "
"instructions in the `OpenStack Installation Guide <http://docs.openstack.org/"
"index.html#install-guides>`_."
msgstr ""
#: ../compute-service-groups.rst:100
msgid ""
"These values in the :file:`/etc/nova/nova.conf` file are required on every "
"node for the memcache driver:"
msgstr ""
#: ../compute-system-admin.rst:5
msgid "System administration"
msgstr ""
#: ../compute-system-admin.rst:25
msgid ""
"To effectively administer Compute, you must understand how the different "
"installed nodes interact with each other. Compute can be installed in many "
"different ways using multiple servers, but generally multiple compute nodes "
"control the virtual servers and a cloud controller node contains the "
"remaining Compute services."
msgstr ""
#: ../compute-system-admin.rst:31
msgid ""
"The Compute cloud works using a series of daemon processes named nova-\\* "
"that exist persistently on the host machine. These binaries can all run on "
"the same machine or be spread out on multiple boxes in a large deployment. "
"The responsibilities of services and drivers are:"
msgstr ""
#: ../compute-system-admin.rst:36
msgid "**Services**"
msgstr ""
#: ../compute-system-admin.rst:39
msgid ""
"receives XML requests and sends them to the rest of the system. A WSGI app "
"routes and authenticates requests. Supports the EC2 and OpenStack APIs. A :"
"file:`nova.conf` configuration file is created when Compute is installed."
msgstr ""
#: ../compute-system-admin.rst:42
msgid "``nova-api``"
msgstr ""
#: ../compute-system-admin.rst:45
msgid "``nova-cert``"
msgstr ""
#: ../compute-system-admin.rst:45
msgid "manages certificates."
msgstr ""
#: ../compute-system-admin.rst:48
msgid ""
"manages virtual machines. Loads a Service object, and exposes the public "
"methods on ComputeManager through a Remote Procedure Call (RPC)."
msgstr ""
#: ../compute-system-admin.rst:50
msgid "``nova-compute``"
msgstr ""
#: ../compute-system-admin.rst:53
msgid ""
"provides database-access support for Compute nodes (thereby reducing "
"security risks)."
msgstr ""
#: ../compute-system-admin.rst:54
msgid "``nova-conductor``"
msgstr ""
#: ../compute-system-admin.rst:57
msgid "``nova-consoleauth``"
msgstr ""
#: ../compute-system-admin.rst:57
msgid "manages console authentication."
msgstr ""
#: ../compute-system-admin.rst:60
msgid ""
"a simple file-based storage system for images that replicates most of the S3 "
"API. It can be replaced with OpenStack Image service and either a simple "
"image manager or OpenStack Object Storage as the virtual machine image "
"storage facility. It must exist on the same node as nova-compute."
msgstr ""
#: ../compute-system-admin.rst:64
msgid "``nova-objectstore``"
msgstr ""
#: ../compute-system-admin.rst:67
msgid ""
"manages floating and fixed IPs, DHCP, bridging and VLANs. Loads a Service "
"object which exposes the public methods on one of the subclasses of "
"NetworkManager. Different networking strategies are available by changing "
"the ``network_manager`` configuration option to ``FlatManager``, "
"``FlatDHCPManager``, or ``VLANManager`` (defaults to ``VLANManager`` if "
"nothing is specified)."
msgstr ""
#: ../compute-system-admin.rst:72
msgid "``nova-network``"
msgstr ""
#: ../compute-system-admin.rst:75
msgid "dispatches requests for new virtual machines to the correct node."
msgstr ""
#: ../compute-system-admin.rst:76
msgid "``nova-scheduler``"
msgstr ""
#: ../compute-system-admin.rst:79
msgid ""
"provides a VNC proxy for browsers, allowing VNC consoles to access virtual "
"machines."
msgstr ""
#: ../compute-system-admin.rst:80
msgid "``nova-novncproxy``"
msgstr ""
#: ../compute-system-admin.rst:84
msgid ""
"Some services have drivers that change how the service implements its core "
"functionality. For example, the nova-compute service supports drivers that "
"let you choose which hypervisor type it can use. nova-network and nova-"
"scheduler also have drivers."
msgstr ""
# #-#-#-#-# compute.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# networking_config-identity.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../compute.rst:3 ../networking_config-identity.rst:123
msgid "Compute"
msgstr ""
#: ../compute.rst:5
msgid ""
"The OpenStack Compute service allows you to control an Infrastructure-as-a-"
"Service (IaaS) cloud computing platform. It gives you control over instances "
"and networks, and allows you to manage access to the cloud through users and "
"projects."
msgstr ""
#: ../compute.rst:10
msgid ""
"Compute does not include virtualization software. Instead, it defines "
"drivers that interact with underlying virtualization mechanisms that run on "
"your host operating system, and exposes functionality over a web-based API."
msgstr ""
#: ../compute_arch.rst:5
msgid "OpenStack Compute contains several main components."
msgstr ""
#: ../compute_arch.rst:7
msgid ""
"The :term:`cloud controller` represents the global state and interacts with "
"the other components. The ``API server`` acts as the web services front end "
"for the cloud controller. The ``compute controller`` provides compute server "
"resources and usually also contains the Compute service."
msgstr ""
#: ../compute_arch.rst:13
msgid ""
"The ``object store`` is an optional component that provides storage "
"services; you can also use OpenStack Object Storage instead."
msgstr ""
#: ../compute_arch.rst:16
msgid ""
"An ``auth manager`` provides authentication and authorization services when "
"used with the Compute system; you can also use OpenStack Identity as a "
"separate authentication service instead."
msgstr ""
#: ../compute_arch.rst:20
msgid ""
"A ``volume controller`` provides fast and permanent block-level storage for "
"the compute servers."
msgstr ""
#: ../compute_arch.rst:23
msgid ""
"The ``network controller`` provides virtual networks to enable compute "
"servers to interact with each other and with the public network. You can "
"also use OpenStack Networking instead."
msgstr ""
#: ../compute_arch.rst:27
msgid ""
"The ``scheduler`` is used to select the most suitable compute controller to "
"host an instance."
msgstr ""
#: ../compute_arch.rst:30
msgid ""
"Compute uses a messaging-based, ``shared nothing`` architecture. All major "
"components exist on multiple servers, including the compute, volume, and "
"network controllers, and the object store or image service. The state of the "
"entire system is stored in a database. The cloud controller communicates "
"with the internal object store using HTTP, but it communicates with the "
"scheduler, network controller, and volume controller using AMQP (advanced "
"message queuing protocol). To avoid blocking a component while waiting for a "
"response, Compute uses asynchronous calls, with a callback that is triggered "
"when a response is received."
msgstr ""
#: ../compute_arch.rst:42
msgid "Hypervisors"
msgstr ""
#: ../compute_arch.rst:43
msgid ""
"Compute controls hypervisors through an API server. Selecting the best "
"hypervisor to use can be difficult, and you must take budget, resource "
"constraints, supported features, and required technical specifications into "
"account. However, the majority of OpenStack development is done on systems "
"using KVM and Xen-based hypervisors. For a detailed list of features and "
"support across different hypervisors, see http://wiki.openstack.org/"
"HypervisorSupportMatrix."
msgstr ""
#: ../compute_arch.rst:51
msgid ""
"You can also orchestrate clouds using multiple hypervisors in different "
"availability zones. Compute supports the following hypervisors:"
msgstr ""
#: ../compute_arch.rst:54
msgid "`Baremetal <https://wiki.openstack.org/wiki/Ironic>`__"
msgstr ""
#: ../compute_arch.rst:56
msgid "`Docker <https://www.docker.io>`__"
msgstr ""
# #-#-#-#-# compute_arch.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# telemetry-system-architecture.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../compute_arch.rst:58 ../telemetry-system-architecture.rst:133
msgid ""
"`Hyper-V <http://www.microsoft.com/en-us/server-cloud/hyper-v-server/default."
"aspx>`__"
msgstr ""
# #-#-#-#-# compute_arch.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# telemetry-system-architecture.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../compute_arch.rst:60 ../telemetry-system-architecture.rst:118
msgid ""
"`Kernel-based Virtual Machine (KVM) <http://www.linux-kvm.org/page/"
"Main_Page>`__"
msgstr ""
# #-#-#-#-# compute_arch.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# telemetry-system-architecture.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../compute_arch.rst:63 ../telemetry-system-architecture.rst:123
msgid "`Linux Containers (LXC) <https://linuxcontainers.org/>`__"
msgstr ""
#: ../compute_arch.rst:65
msgid "`Quick Emulator (QEMU) <http://wiki.qemu.org/Manual>`__"
msgstr ""
#: ../compute_arch.rst:67
msgid "`User Mode Linux (UML) <http://user-mode-linux.sourceforge.net/>`__"
msgstr ""
#: ../compute_arch.rst:69
msgid ""
"`VMware vSphere <http://www.vmware.com/products/vsphere-hypervisor/support."
"html>`__"
msgstr ""
#: ../compute_arch.rst:72
msgid "`Xen <http://www.xen.org/support/documentation.html>`__"
msgstr ""
#: ../compute_arch.rst:74
msgid ""
"For more information about hypervisors, see the `Hypervisors <http://docs."
"openstack.org/liberty/config-reference/content/section_compute-hypervisors."
"html>`__ section in the OpenStack Configuration Reference."
msgstr ""
#: ../compute_arch.rst:79
msgid "Tenants, users, and roles"
msgstr ""
#: ../compute_arch.rst:80
msgid ""
"The Compute system is designed to be used by different consumers in the form "
"of tenants on a shared system, and role-based access assignments. Roles "
"control the actions that a user is allowed to perform."
msgstr ""
#: ../compute_arch.rst:84
msgid ""
"Tenants are isolated resource containers that form the principal "
"organizational structure within the Compute service. They consist of an "
"individual VLAN, and volumes, instances, images, keys, and users. A user can "
"specify the tenant by appending ``project_id`` to their access key. If no "
"tenant is specified in the API request, Compute attempts to use a tenant "
"with the same ID as the user."
msgstr ""
#: ../compute_arch.rst:91
msgid "For tenants, you can use quota controls to limit the:"
msgstr ""
#: ../compute_arch.rst:93
msgid "Number of volumes that can be launched."
msgstr ""
#: ../compute_arch.rst:95
msgid "Number of processor cores and the amount of RAM that can be allocated."
msgstr ""
#: ../compute_arch.rst:98
msgid ""
"Floating IP addresses assigned to any instance when it launches. This allows "
"instances to have the same publicly accessible IP addresses."
msgstr ""
#: ../compute_arch.rst:101
msgid ""
"Fixed IP addresses assigned to the same instance when it launches. This "
"allows instances to have the same publicly or privately accessible IP "
"addresses."
msgstr ""
#: ../compute_arch.rst:105
msgid ""
"Roles control the actions a user is allowed to perform. By default, most "
"actions do not require a particular role, but you can configure them by "
"editing the :file:`policy.json` file for user roles. For example, a rule can "
"be defined so that a user must have the ``admin`` role in order to be able "
"to allocate a public IP address."
msgstr ""
#: ../compute_arch.rst:111
msgid ""
"A tenant limits users' access to particular images. Each user is assigned a "
"user name and password. Keypairs granting access to an instance are enabled "
"for each user, but quotas are set, so that each tenant can control resource "
"consumption across available hardware resources."
msgstr ""
#: ../compute_arch.rst:119
msgid ""
"Earlier versions of OpenStack used the term ``project`` instead of "
"``tenant``. Because of this legacy terminology, some command-line tools use "
"``--project_id`` where you would normally expect to enter a tenant ID."
msgstr ""
#: ../compute_arch.rst:125
msgid "Block storage"
msgstr ""
#: ../compute_arch.rst:126
msgid ""
"OpenStack provides two classes of block storage: ephemeral storage and "
"persistent volume."
msgstr ""
#: ../compute_arch.rst:129
msgid "**Ephemeral storage**"
msgstr ""
#: ../compute_arch.rst:131
msgid ""
"Ephemeral storage includes a root ephemeral volume and an additional "
"ephemeral volume."
msgstr ""
#: ../compute_arch.rst:134
msgid ""
"The root disk is associated with an instance, and exists only for the life "
"of this very instance. Generally, it is used to store an instance's root "
"file system, persists across the guest operating system reboots, and is "
"removed on an instance deletion. The amount of the root ephemeral volume is "
"defined by the flavor of an instance."
msgstr ""
#: ../compute_arch.rst:140
msgid ""
"In addition to the ephemeral root volume, all default types of flavors, "
"except ``m1.tiny``, which is the smallest one, provide an additional "
"ephemeral block device sized between 20 and 160 GB (a configurable value to "
"suit an environment). It is represented as a raw block device with no "
"partition table or file system. A cloud-aware operating system can discover, "
"format, and mount such a storage device. OpenStack Compute defines the "
"default file system for different operating systems as Ext4 for Linux "
"distributions, VFAT for non-Linux and non-Windows operating systems, and "
"NTFS for Windows. However, it is possible to specify any other filesystem "
"type by using ``virt_mkfs`` or ``default_ephemeral_format`` configuration "
"options."
msgstr ""
#: ../compute_arch.rst:154
msgid ""
"For example, the ``cloud-init`` package included into an Ubuntu's stock "
"cloud image, by default, formats this space as an Ext4 file system and "
"mounts it on :file:`/mnt`. This is a cloud-init feature, and is not an "
"OpenStack mechanism. OpenStack only provisions the raw storage."
msgstr ""
#: ../compute_arch.rst:159
msgid "**Persistent volume**"
msgstr ""
#: ../compute_arch.rst:161
msgid ""
"A persistent volume is represented by a persistent virtualized block device "
"independent of any particular instance, and provided by OpenStack Block "
"Storage."
msgstr ""
#: ../compute_arch.rst:165
msgid ""
"Only a single configured instance can access a persistent volume. Multiple "
"instances cannot access a persistent volume. This type of configuration "
"requires a traditional network file system to allow multiple instances "
"accessing the persistent volume. It also requires a traditional network file "
"system like NFS, CIFS, or a cluster file system such as GlusterFS. These "
"systems can be built within an OpenStack cluster, or provisioned outside of "
"it, but OpenStack software does not provide these features."
msgstr ""
#: ../compute_arch.rst:174
msgid ""
"You can configure a persistent volume as bootable and use it to provide a "
"persistent virtual instance similar to the traditional non-cloud-based "
"virtualization system. It is still possible for the resulting instance to "
"keep ephemeral storage, depending on the flavor selected. In this case, the "
"root file system can be on the persistent volume, and its state is "
"maintained, even if the instance is shut down. For more information about "
"this type of configuration, see the `OpenStack Configuration Reference "
"<http://docs.openstack.org/liberty/config-reference/content/>`__."
msgstr ""
#: ../compute_arch.rst:186
msgid ""
"A persistent volume does not provide concurrent access from multiple "
"instances. That type of configuration requires a traditional network file "
"system like NFS, or CIFS, or a cluster file system such as GlusterFS. These "
"systems can be built within an OpenStack cluster, or provisioned outside of "
"it, but OpenStack software does not provide these features."
msgstr ""
#: ../compute_arch.rst:194
msgid "EC2 compatibility API"
msgstr ""
#: ../compute_arch.rst:195
msgid ""
"In addition to the native compute API, OpenStack provides an EC2-compatible "
"API. This API allows EC2 legacy workflows built for EC2 to work with "
"OpenStack. For more information and configuration options about this "
"compatibility API, see the `OpenStack Configuration Reference <http://docs."
"openstack.org/liberty/config-reference/content/ configuring-ec2-api.html>`__."
msgstr ""
#: ../compute_arch.rst:202
msgid ""
"Numerous third-party tools and language-specific SDKs can be used to "
"interact with OpenStack clouds, using both native and compatibility APIs. "
"Some of the more popular third-party tools are:"
msgstr ""
#: ../compute_arch.rst:207
msgid ""
"A popular open source command-line tool for interacting with the EC2 API. "
"This is convenient for multi-cloud environments where EC2 is the common API, "
"or for transitioning from EC2-based clouds to OpenStack. For more "
"information, see the `euca2ools site <https://www.eucalyptus.com/docs/"
"eucalyptus/4.1.2/index.html#shared/euca2ools_section.html>`__."
msgstr ""
#: ../compute_arch.rst:211
msgid "Euca2ools"
msgstr ""
#: ../compute_arch.rst:214
msgid ""
"A Firefox browser add-on that provides a graphical interface to many popular "
"public and private cloud technologies, including OpenStack. For more "
"information, see the `hybridfox site <http://code.google.com/p/hybridfox/"
">`__."
msgstr ""
#: ../compute_arch.rst:217
msgid "Hybridfox"
msgstr ""
#: ../compute_arch.rst:220
msgid ""
"A Python library for interacting with Amazon Web Services. It can be used to "
"access OpenStack through the EC2 compatibility API. For more information, "
"see the `boto project page on GitHub <https://github.com/boto/boto>`__."
msgstr ""
#: ../compute_arch.rst:223
msgid "boto"
msgstr ""
#: ../compute_arch.rst:226
msgid ""
"A Ruby cloud services library. It provides methods for interacting with a "
"large number of cloud and virtualization platforms, including OpenStack. For "
"more information, see the `fog site <https://rubygems.org/gems/fog>`__."
msgstr ""
#: ../compute_arch.rst:229
msgid "fog"
msgstr ""
#: ../compute_arch.rst:232
msgid ""
"A PHP SDK designed to work with most OpenStack- based cloud deployments, as "
"well as Rackspace public cloud. For more information, see the `php-opencloud "
"site <http://www.php-opencloud.com>`__."
msgstr ""
#: ../compute_arch.rst:235
msgid "php-opencloud"
msgstr ""
#: ../compute_arch.rst:238
msgid "Building blocks"
msgstr ""
#: ../compute_arch.rst:239
msgid ""
"In OpenStack the base operating system is usually copied from an image "
"stored in the OpenStack Image service. This is the most common case and "
"results in an ephemeral instance that starts from a known template state and "
"loses all accumulated states on virtual machine deletion. It is also "
"possible to put an operating system on a persistent volume in the OpenStack "
"Block Storage volume system. This gives a more traditional persistent system "
"that accumulates states which are preserved on the OpenStack Block Storage "
"volume across the deletion and re-creation of the virtual machine. To get a "
"list of available images on your system, run::"
msgstr ""
#: ../compute_arch.rst:262
msgid "Automatically generated UUID of the image"
msgstr ""
#: ../compute_arch.rst:265
msgid "Free form, human-readable name for image"
msgstr ""
#: ../compute_arch.rst:276
msgid ""
"Virtual hardware templates are called ``flavors``. The default installation "
"provides five flavors. By default, these are configurable by admin users, "
"however that behavior can be changed by redefining the access controls for "
"``compute_extension:flavormanage`` in :file:`/etc/nova/policy.json` on the "
"``compute-api`` server."
msgstr ""
#: ../compute_arch.rst:282
msgid "For a list of flavors that are available on your system::"
msgstr ""
#: ../compute_arch.rst:296
msgid "Compute service architecture"
msgstr ""
#: ../compute_arch.rst:297
msgid ""
"These basic categories describe the service architecture and information "
"about the cloud controller."
msgstr ""
#: ../compute_arch.rst:300
msgid "**API server**"
msgstr ""
#: ../compute_arch.rst:302
msgid ""
"At the heart of the cloud framework is an API server, which makes command "
"and control of the hypervisor, storage, and networking programmatically "
"available to users."
msgstr ""
#: ../compute_arch.rst:306
msgid ""
"The API endpoints are basic HTTP web services which handle authentication, "
"authorization, and basic command and control functions using various API "
"interfaces under the Amazon, Rackspace, and related models. This enables API "
"compatibility with multiple existing tool sets created for interaction with "
"offerings from other vendors. This broad compatibility prevents vendor lock-"
"in."
msgstr ""
#: ../compute_arch.rst:313
msgid "**Message queue**"
msgstr ""
#: ../compute_arch.rst:315
msgid ""
"A messaging queue brokers the interaction between compute nodes "
"(processing), the networking controllers (software which controls network "
"infrastructure), API endpoints, the scheduler (determines which physical "
"hardware to allocate to a virtual resource), and similar components. "
"Communication to and from the cloud controller is handled by HTTP requests "
"through multiple API endpoints."
msgstr ""
#: ../compute_arch.rst:322
msgid ""
"A typical message passing event begins with the API server receiving a "
"request from a user. The API server authenticates the user and ensures that "
"they are permitted to issue the subject command. The availability of objects "
"implicated in the request is evaluated and, if available, the request is "
"routed to the queuing engine for the relevant workers. Workers continually "
"listen to the queue based on their role, and occasionally their type host "
"name. When an applicable work request arrives on the queue, the worker takes "
"assignment of the task and begins executing it. Upon completion, a response "
"is dispatched to the queue which is received by the API server and relayed "
"to the originating user. Database entries are queried, added, or removed as "
"necessary during the process."
msgstr ""
#: ../compute_arch.rst:335
msgid "**Compute worker**"
msgstr ""
#: ../compute_arch.rst:337
msgid ""
"Compute workers manage computing instances on host machines. The API "
"dispatches commands to compute workers to complete these tasks:"
msgstr ""
#: ../compute_arch.rst:340
msgid "Run instances"
msgstr ""
#: ../compute_arch.rst:342
msgid "Terminate instances"
msgstr ""
#: ../compute_arch.rst:344
msgid "Reboot instances"
msgstr ""
#: ../compute_arch.rst:346
msgid "Attach volumes"
msgstr ""
#: ../compute_arch.rst:348
msgid "Detach volumes"
msgstr ""
#: ../compute_arch.rst:350
msgid "Get console output"
msgstr ""
#: ../compute_arch.rst:352
msgid "**Network Controller**"
msgstr ""
#: ../compute_arch.rst:354
msgid ""
"The Network Controller manages the networking resources on host machines. "
"The API server dispatches commands through the message queue, which are "
"subsequently processed by Network Controllers. Specific operations include:"
msgstr ""
#: ../compute_arch.rst:359
msgid "Allocating fixed IP addresses"
msgstr ""
#: ../compute_arch.rst:361
msgid "Configuring VLANs for projects"
msgstr ""
#: ../compute_arch.rst:363
msgid "Configuring networks for compute nodes"
msgstr ""
#: ../cross_project.rst:3
msgid "Cross-project features"
msgstr ""
#: ../cross_project.rst:5
msgid ""
"Many features are common to all the OpenStack services and are consistent in "
"their configuration and deployment patterns. Unless explicitly noted, you "
"can safely assume that the features in this chapter are supported and "
"configured in a consistent manner."
msgstr ""
#: ../cross_project_cors.rst:5
msgid "Cross-origin resource sharing"
msgstr ""
#: ../cross_project_cors.rst:8
msgid "This is a new feature in OpenStack Liberty."
msgstr ""
#: ../cross_project_cors.rst:10
msgid ""
"OpenStack supports :term:`Cross-Origin Resource Sharing (CORS)`, a W3C "
"specification defining a contract by which the single-origin policy of a "
"user agent (usually a browser) may be relaxed. It permits it's javascript "
"engine to access an API that does not reside on the same domain, protocol, "
"or port."
msgstr ""
#: ../cross_project_cors.rst:15
msgid ""
"This feature is most useful to organizations which maintain one or more "
"custom user interfaces for OpenStack, as it permits those interfaces to "
"access the services directly, rather than requiring an intermediate proxy "
"server. It can, however, also be misused by malicious actors; please review "
"the security advisory below for more information."
msgstr ""
#: ../cross_project_cors.rst:23
msgid ""
"Both the Object Storage and dashboard projects provide CORS support that is "
"not covered by this document. For those, please refer to their respective "
"implementations:"
msgstr ""
#: ../cross_project_cors.rst:27
msgid ""
"`CORS in Object Storage <http://docs.openstack.org/liberty/config-reference/"
"content/object-storage-cores.html>`_"
msgstr ""
#: ../cross_project_cors.rst:28
msgid ""
"`CORS in dashboard <http://docs.openstack.org/security-guide/dashboard/cross-"
"origin-resource-sharing-cors.html>`_"
msgstr ""
#: ../cross_project_cors.rst:32
msgid "Enabling CORS with configuration"
msgstr ""
#: ../cross_project_cors.rst:34
msgid ""
"In most cases, CORS support is built directly into the service itself. To "
"enable it, simply follow the configuration options exposed in the default "
"configuration file, or add it yourself according to the pattern below."
msgstr ""
#: ../cross_project_cors.rst:47
msgid ""
"This method also enables you to define multiple origins. To express this in "
"your configuration file, first begin with a :code:`[cors]` group as above, "
"into which you place your default configuration values. Then, add as many "
"additional configuration groups as necessary, naming them :code:`[cors."
"{something}]` (each name must be unique). The purpose of the suffix to :code:"
"`cors.` is legibility, we recommend using a reasonable human-readable string:"
msgstr ""
#: ../cross_project_cors.rst:74
msgid "Enabling CORS with PasteDeploy"
msgstr ""
#: ../cross_project_cors.rst:76
msgid ""
"In other services, CORS is configured via PasteDeploy. In this case, you "
"must first make sure that OpenStack's :code:`oslo_middleware` package "
"(version 2.4.0 or later) is available in the Python environment that is "
"running the service. Then, add the following configuration block to your :"
"file:`paste.ini` file."
msgstr ""
#: ../cross_project_cors.rst:92
msgid "To add another domain, simply add another filter."
msgstr ""
#: ../cross_project_cors.rst:95
msgid "Security concerns"
msgstr ""
#: ../cross_project_cors.rst:97
msgid ""
"CORS specifies a wildcard character `*`, which permits access to all user "
"agents, regardless of domain, protocol, or host. While there are valid use "
"cases for this approach, it also permits a malicious actor to create a "
"convincing facsimile of a user interface, and trick users into revealing "
"authentication credentials. Please carefully evaluate your use case and the "
"relevant documentation for any risk to your organization."
msgstr ""
#: ../cross_project_cors.rst:104
msgid ""
"The CORS specification does not support using this wildcard as a part of a "
"URI. Setting allowed-origin to `*` would work, while :code:`*.openstack.org` "
"would not."
msgstr ""
#: ../cross_project_cors.rst:110
msgid ""
"CORS is very easy to get wrong, as even one incorrect property will violate "
"the prescribed contract. Here are some steps you can take to troubleshoot "
"your configuration."
msgstr ""
#: ../cross_project_cors.rst:115
msgid "Check the service log"
msgstr ""
#: ../cross_project_cors.rst:117
msgid ""
"The CORS middleware used by OpenStack provides verbose debug logging that "
"should reveal most configuration problems. Here are some example log "
"messages, and how to resolve them."
msgstr ""
#: ../cross_project_cors.rst:122
msgid ""
"A request was received from the origin 'http://foo.com', however this origin "
"was not found in the permitted list. The cause may be a superfluous port "
"notation (ports 80 and 443 do not need to be specified). To correct, ensure "
"that the configuration property for this host is identical to the host "
"indicated in the log message."
msgstr ""
#: ../cross_project_cors.rst:126
msgid "``CORS request from origin 'http://foo.com' not permitted.``"
msgstr ""
#: ../cross_project_cors.rst:129
msgid ""
"A user agent has requested permission to perform a DELETE request, however "
"the CORS configuration for the domain does not permit this. To correct, add "
"this method to the :code:`allow_methods` configuration property."
msgstr ""
#: ../cross_project_cors.rst:131
msgid "``Request method 'DELETE' not in permitted list: GET,PUT,POST``"
msgstr ""
#: ../cross_project_cors.rst:134
msgid ""
"A request was received with the header 'X-Custom-Header', which is not "
"permitted. Add this header to the :code:`allow_headers` configuration "
"property."
msgstr ""
#: ../cross_project_cors.rst:136
msgid ""
"``Request header 'X-Custom-Header' not in permitted list: X-Other-Header``"
msgstr ""
#: ../cross_project_cors.rst:139
msgid "Open your browser's console log"
msgstr ""
#: ../cross_project_cors.rst:141
msgid ""
"Most browsers provide helpful debug output when a CORS request is rejected. "
"Usually this happens when a request was successful, but the return headers "
"on the response do not permit access to a property which the browser is "
"trying to access."
msgstr ""
#: ../cross_project_cors.rst:147
msgid "Manually construct a CORS request"
msgstr ""
#: ../cross_project_cors.rst:148
msgid ""
"By using ``curl`` or a similar tool, you can trigger a CORS response with a "
"properly constructed HTTP request. An example request and response might "
"look like this."
msgstr ""
#: ../cross_project_cors.rst:152
msgid "Request::"
msgstr ""
#: ../cross_project_cors.rst:156
msgid "Response::"
msgstr ""
#: ../cross_project_cors.rst:166
msgid ""
"If the service does not return any access control headers, check the service "
"log, such as :code:`/var/log/upstart/ironic-api.log` for an indication on "
"what went wrong."
msgstr ""
#: ../dashboard.rst:3
msgid "Dashboard"
msgstr ""
#: ../dashboard.rst:5
msgid ""
"The OpenStack dashboard is a web-based interface that allows you to manage "
"OpenStack resources and services. The dashboard allows you to interact with "
"the OpenStack Compute cloud controller using the OpenStack APIs. For more "
"information about installing and configuring the dashboard, see the "
"`OpenStack Installation Guide <http://docs.openstack.org/index.html#install-"
"guides>`__ for your operating system."
msgstr ""
#: ../dashboard.rst:20
msgid ""
"To deploy the dashboard, see the `OpenStack dashboard documentation <http://"
"docs.openstack.org/developer/horizon/topics/deployment.html>`__."
msgstr ""
#: ../dashboard.rst:22
msgid ""
"To launch instances with the dashboard, see the `OpenStack End User Guide "
"<http://docs.openstack.org/user-guide/dashboard_launch_instances.html>`__."
msgstr ""
#: ../dashboard_sessions.rst:3
msgid "Set up session storage for the Dashboard"
msgstr ""
#: ../dashboard_sessions.rst:5
msgid ""
"The dashboard uses `Django sessions framework <https://docs.djangoproject."
"com/en/dev/topics/http/sessions/>`__ to handle user session data. However, "
"you can use any available session back end. You customize the session back "
"end through the ``SESSION_ENGINE`` setting in your :file:`local_settings` "
"file (on Fedora/RHEL/ CentOS: :file:`/etc/openstack-dashboard/"
"local_settings`, on Ubuntu and Debian: :file:`/etc/openstack-dashboard/"
"local_settings.py`, and on openSUSE: :file:`/srv/www/openstack-dashboard/"
"openstack_dashboard/local/local_settings.py`)."
msgstr ""
#: ../dashboard_sessions.rst:14
msgid ""
"After architecting and implementing the core OpenStack services and other "
"required services, combined with the Dashboard service steps below, users "
"and administrators can use the OpenStack dashboard. Refer to the `OpenStack "
"Dashboard <http://docs.openstack.org/ user-guide/dashboard.html>`__ chapter "
"of the User Guide for further instructions on logging in to the Dashboard."
msgstr ""
#: ../dashboard_sessions.rst:22
msgid ""
"The following sections describe the pros and cons of each option as it "
"pertains to deploying the Dashboard."
msgstr ""
#: ../dashboard_sessions.rst:26
msgid "Local memory cache"
msgstr ""
#: ../dashboard_sessions.rst:27
msgid ""
"Local memory storage is the quickest and easiest session back end to set up, "
"as it has no external dependencies whatsoever. It has the following "
"significant drawbacks:"
msgstr ""
#: ../dashboard_sessions.rst:31
msgid "No shared storage across processes or workers."
msgstr ""
#: ../dashboard_sessions.rst:32
msgid "No persistence after a process terminates."
msgstr ""
#: ../dashboard_sessions.rst:34
msgid ""
"The local memory back end is enabled as the default for Horizon solely "
"because it has no dependencies. It is not recommended for production use, or "
"even for serious development work. Enabled by::"
msgstr ""
#: ../dashboard_sessions.rst:45
msgid ""
"You can use applications such as Memcached or Redis for external caching. "
"These applications offer persistence and shared storage and are useful for "
"small-scale deployments and development."
msgstr ""
#: ../dashboard_sessions.rst:50
msgid "Memcached"
msgstr ""
#: ../dashboard_sessions.rst:51
msgid ""
"Memcached is a high-performance and distributed memory object caching system "
"providing in-memory key-value store for small chunks of arbitrary data."
msgstr ""
#: ../dashboard_sessions.rst:55 ../dashboard_sessions.rst:75
msgid "Requirements:"
msgstr ""
#: ../dashboard_sessions.rst:57
msgid "Memcached service running and accessible."
msgstr ""
#: ../dashboard_sessions.rst:58
msgid "Python module ``python-memcached`` installed."
msgstr ""
#: ../dashboard_sessions.rst:60 ../dashboard_sessions.rst:80
msgid "Enabled by::"
msgstr ""
#: ../dashboard_sessions.rst:71
msgid "Redis"
msgstr ""
#: ../dashboard_sessions.rst:72
msgid ""
"Redis is an open source, BSD licensed, advanced key-value store. It is often "
"referred to as a data structure server."
msgstr ""
#: ../dashboard_sessions.rst:77
msgid "Redis service running and accessible."
msgstr ""
#: ../dashboard_sessions.rst:78
msgid "Python modules ``redis`` and ``django-redis`` installed."
msgstr ""
#: ../dashboard_sessions.rst:94
msgid "Initialize and configure the database"
msgstr ""
#: ../dashboard_sessions.rst:95
msgid ""
"Database-backed sessions are scalable, persistent, and can be made high-"
"concurrency and highly-available."
msgstr ""
#: ../dashboard_sessions.rst:98
msgid ""
"However, database-backed sessions are one of the slower session storages and "
"incur a high overhead under heavy usage. Proper configuration of your "
"database deployment can also be a substantial undertaking and is far beyond "
"the scope of this documentation."
msgstr ""
#: ../dashboard_sessions.rst:103
msgid "Start the mysql command-line client::"
msgstr ""
#: ../dashboard_sessions.rst:107
msgid "Enter the MySQL root user's password when prompted."
msgstr ""
#: ../dashboard_sessions.rst:108
msgid "To configure the MySQL database, create the dash database::"
msgstr ""
#: ../dashboard_sessions.rst:112
msgid ""
"Create a MySQL user for the newly created dash database that has full "
"control of the database. Replace DASH\\_DBPASS with a password for the new "
"user::"
msgstr ""
#: ../dashboard_sessions.rst:119
msgid "Enter ``quit`` at the ``mysql>`` prompt to exit MySQL."
msgstr ""
#: ../dashboard_sessions.rst:121
msgid ""
"In the :file:`local_settings` file (on Fedora/RHEL/CentOS: :file:`/etc/"
"openstack-dashboard/local_settings`, on Ubuntu/Debian: :file:`/etc/openstack-"
"dashboard/local_settings.py`, and on openSUSE: :file:`/srv/www/openstack-"
"dashboard/openstack_dashboard/local/local_settings.py`), change these "
"options::"
msgstr ""
#: ../dashboard_sessions.rst:140
msgid ""
"After configuring the :file:`local_settings` file as shown, you can run the "
"``manage.py syncdb`` command to populate this newly created database::"
msgstr ""
#: ../dashboard_sessions.rst:145
msgid ""
"Note on openSUSE the path is :file:`/srv/www/openstack-dashboard/manage.py`."
msgstr ""
#: ../dashboard_sessions.rst:147
msgid "The following output is returned::"
msgstr ""
#: ../dashboard_sessions.rst:154
msgid ""
"To avoid a warning when you restart Apache on Ubuntu, create a :file:"
"`blackhole` directory in the Dashboard directory, as follows::"
msgstr ""
#: ../dashboard_sessions.rst:159
msgid "Restart the Apache service."
msgstr ""
#: ../dashboard_sessions.rst:161
msgid ""
"On Ubuntu, restart the nova-api service to ensure that the API server can "
"connect to the Dashboard without error::"
msgstr ""
#: ../dashboard_sessions.rst:167
msgid "Cached database"
msgstr ""
#: ../dashboard_sessions.rst:168
msgid ""
"To mitigate the performance issues of database queries, you can use the "
"Django ``cached_db`` session back end, which utilizes both your database and "
"caching infrastructure to perform write-through caching and efficient "
"retrieval."
msgstr ""
#: ../dashboard_sessions.rst:173
msgid ""
"Enable this hybrid setting by configuring both your database and cache, as "
"discussed previously. Then, set the following value::"
msgstr ""
#: ../dashboard_sessions.rst:179
msgid "Cookies"
msgstr ""
#: ../dashboard_sessions.rst:180
msgid ""
"If you use Django 1.4 or later, the ``signed_cookies`` back end avoids "
"server load and scaling problems."
msgstr ""
#: ../dashboard_sessions.rst:183
msgid ""
"This back end stores session data in a cookie, which is stored by the user's "
"browser. The back end uses a cryptographic signing technique to ensure "
"session data is not tampered with during transport. This is not the same as "
"encryption; session data is still readable by an attacker."
msgstr ""
#: ../dashboard_sessions.rst:188
msgid ""
"The pros of this engine are that it requires no additional dependencies or "
"infrastructure overhead, and it scales indefinitely as long as the quantity "
"of session data being stored fits into a normal cookie."
msgstr ""
#: ../dashboard_sessions.rst:192
msgid ""
"The biggest downside is that it places session data into storage on the "
"user's machine and transports it over the wire. It also limits the quantity "
"of session data that can be stored."
msgstr ""
#: ../dashboard_sessions.rst:196
msgid ""
"See the Django `cookie-based sessions <https://docs.djangoproject.com/en/dev/"
"topics/http/sessions/#using-cookie-based-sessions>`__ documentation."
msgstr ""
# #-#-#-#-# database.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# telemetry-data-collection.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../database.rst:5 ../telemetry-data-collection.rst:1043
msgid "Database"
msgstr ""
#: ../database.rst:7
msgid "The Database service provides database management features."
msgstr ""
#: ../database.rst:12
msgid ""
"The Database service provides scalable and reliable cloud provisioning "
"functionality for both relational and non-relational database engines. Users "
"can quickly and easily use database features without the burden of handling "
"complex administrative tasks. Cloud users and database administrators can "
"provision and manage multiple database instances as needed."
msgstr ""
#: ../database.rst:19
msgid ""
"The Database service provides resource isolation at high performance levels, "
"and automates complex administrative tasks such as deployment, "
"configuration, patching, backups, restores, and monitoring."
msgstr ""
#: ../database.rst:25
msgid "Create a datastore"
msgstr ""
#: ../database.rst:27
msgid ""
"An administrative user can create datastores for a variety of databases."
msgstr ""
#: ../database.rst:30
msgid ""
"This section assumes you do not yet have a MySQL datastore, and shows you "
"how to create a MySQL datastore and populate it with a MySQL 5.5 datastore "
"version."
msgstr ""
#: ../database.rst:36
msgid "**To create a datastore**"
msgstr ""
#: ../database.rst:38
msgid "**Create a trove image**"
msgstr ""
#: ../database.rst:40
msgid ""
"Create an image for the type of database you want to use, for example, "
"MySQL, MongoDB, Cassandra."
msgstr ""
#: ../database.rst:43
msgid ""
"This image must have the trove guest agent installed, and it must have the :"
"file:`trove-guestagent.conf` file configured to connect to your OpenStack "
"environment. To configure :file:`trove-guestagent.conf`, add the following "
"lines to :file:`trove-guestagent.conf` on the guest instance you are using "
"to build your image:"
msgstr ""
#: ../database.rst:58
msgid ""
"This example assumes you have created a MySQL 5.5 image called ``mysql-5.5."
"qcow2``."
msgstr ""
#: ../database.rst:61
msgid "**Register image with Image service**"
msgstr ""
#: ../database.rst:63
msgid "You need to register your guest image with the Image service."
msgstr ""
#: ../database.rst:65
msgid ""
"In this example, you use the glance :command:`image-create` command to "
"register a ``mysql-5.5.qcow2`` image::"
msgstr ""
#: ../database.rst:91
msgid "**Create the datastore**"
msgstr ""
#: ../database.rst:93
msgid ""
"Create the datastore that will house the new image. To do this, use the :"
"command:`trove-manage` :command:`datastore_update` command."
msgstr ""
#: ../database.rst:96 ../database.rst:134
msgid "This example uses the following arguments:"
msgstr ""
#: ../database.rst:102 ../database.rst:140
msgid "Argument"
msgstr ""
#: ../database.rst:105 ../database.rst:144
msgid "config file"
msgstr ""
#: ../database.rst:106 ../database.rst:145
msgid "The configuration file to use."
msgstr ""
#: ../database.rst:107 ../database.rst:146
msgid ":option:`--config-file=/etc/trove/trove.conf`"
msgstr ""
#: ../database.rst:109
msgid "Name you want to use for this datastore."
msgstr ""
#: ../database.rst:110 ../database.rst:151 ../database.rst:168
msgid "``mysql``"
msgstr ""
#: ../database.rst:111
msgid "default version"
msgstr ""
#: ../database.rst:112
msgid ""
"You can attach multiple versions/images to a datastore. For example, you "
"might have a MySQL 5.5 version and a MySQL 5.6 version. You can designate "
"one version as the default, which the system uses if a user does not "
"explicitly request a specific version."
msgstr ""
#: ../database.rst:117 ../database.rst:180
msgid "``\"\"``"
msgstr ""
#: ../database.rst:119
msgid ""
"At this point, you do not yet have a default version, so pass in an empty "
"string."
msgstr ""
# #-#-#-#-# database.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# objectstorage_ringbuilder.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../database.rst:124 ../database.rst:193
#: ../objectstorage_ringbuilder.rst:105
msgid "Example::"
msgstr ""
#: ../database.rst:128
msgid "**Add a version to the new datastore**"
msgstr ""
#: ../database.rst:130
msgid ""
"Now that you have a MySQL datastore, you can add a version to it, using the "
"trove-manage :command:`datastore_version_update` command. The version "
"indicates which guest image to use."
msgstr ""
#: ../database.rst:148
msgid "datastore"
msgstr ""
#: ../database.rst:149
msgid ""
"The name of the datastore you just created via trove-manage :command:"
"`datastore_update`."
msgstr ""
#: ../database.rst:153
msgid "version name"
msgstr ""
#: ../database.rst:154
msgid "The name of the version you are adding to the datastore."
msgstr ""
#: ../database.rst:155
msgid "``mysql-5.5``"
msgstr ""
#: ../database.rst:157
msgid "datastore manager"
msgstr ""
#: ../database.rst:158
msgid ""
"Which datastore manager to use for this version. Typically, the datastore "
"manager is identified by one of the following strings, depending on the "
"database:"
msgstr ""
#: ../database.rst:162
msgid "mysql"
msgstr ""
#: ../database.rst:163
msgid "redis"
msgstr ""
#: ../database.rst:164
msgid "mongodb"
msgstr ""
#: ../database.rst:165
msgid "cassandra"
msgstr ""
#: ../database.rst:166
msgid "couchbase"
msgstr ""
#: ../database.rst:167
msgid "percona"
msgstr ""
#: ../database.rst:170
msgid "glance ID"
msgstr ""
#: ../database.rst:171
msgid ""
"The ID of the guest image you just added to the Image service. You can get "
"this ID by using the glance :command:`image-show` IMAGE_NAME command."
msgstr ""
#: ../database.rst:174
msgid "bb75f870-0c33-4907-8467-1367f8cb15b6"
msgstr ""
#: ../database.rst:176
msgid "packages"
msgstr ""
#: ../database.rst:177
msgid ""
"If you want to put additional packages on each guest that you create with "
"this datastore version, you can list the package names here."
msgstr ""
#: ../database.rst:182
msgid ""
"In this example, the guest image already contains all the required packages, "
"so leave this argument empty."
msgstr ""
# #-#-#-#-# database.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# ts-eql-volume-size.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../database.rst:185 ../ts-eql-volume-size.rst:110
msgid "active"
msgstr ""
#: ../database.rst:187
msgid "Set this to either 1 or 0:"
msgstr ""
#: ../database.rst:187
msgid "``1`` = active"
msgstr ""
#: ../database.rst:188
msgid "``0`` = disabled"
msgstr ""
#: ../database.rst:189
msgid "1"
msgstr ""
#: ../database.rst:197
msgid ""
"**Optional.** Set your new version as the default version. To do this, use "
"the trove-manage :command:`datastore_update` command again, this time "
"specifying the version you just created."
msgstr ""
#: ../database.rst:205
msgid "**Load validation rules for configuration groups**"
msgstr ""
#: ../database.rst:209
msgid "**Applies only to MySQL and Percona datastores**"
msgstr ""
#: ../database.rst:211
msgid ""
"If you just created a MySQL or Percona datastore, then you need to load the "
"appropriate validation rules, as described in this step."
msgstr ""
#: ../database.rst:214
msgid "If you just created a different datastore, skip this step."
msgstr ""
#: ../database.rst:216
msgid ""
"**Background.** You can manage database configuration tasks by using "
"configuration groups. Configuration groups let you set configuration "
"parameters, in bulk, on one or more databases."
msgstr ""
#: ../database.rst:220
msgid ""
"When you set up a configuration group using the trove :command:"
"`configuration-create` command, this command compares the configuration "
"values you are setting against a list of valid configuration values that are "
"stored in the :file:`validation-rules.json` file."
msgstr ""
#: ../database.rst:229
msgid "Operating System"
msgstr ""
#: ../database.rst:230
msgid "Location of :file:`validation-rules.json`"
msgstr ""
# #-#-#-#-# database.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# networking_adv-features.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../database.rst:231 ../networking_adv-features.rst:627
msgid "Notes"
msgstr ""
#: ../database.rst:233
msgid "Ubuntu 14.04"
msgstr ""
#: ../database.rst:234
msgid ":file:`/usr/lib/python2.7/dist-packages/trove/templates/DATASTORE_NAME`"
msgstr ""
#: ../database.rst:235 ../database.rst:241
msgid ""
"DATASTORE_NAME is the name of either the MySQL datastore or the Percona "
"datastore. This is typically either ``mysql`` or ``percona``."
msgstr ""
#: ../database.rst:239
msgid "RHEL 7, CentOS 7, Fedora 20, and Fedora 21"
msgstr ""
#: ../database.rst:240
msgid ":file:`/usr/lib/python2.7/site-packages/trove/templates/DATASTORE_NAME`"
msgstr ""
#: ../database.rst:246
msgid ""
"Therefore, as part of creating a datastore, you need to load the :file:"
"`validation-rules.json` file, using the :command:`trove-manage` :command:"
"`db_load_datastore_config_parameters` command. This command takes the "
"following arguments:"
msgstr ""
#: ../database.rst:251
msgid "Datastore name"
msgstr ""
#: ../database.rst:252
msgid "Datastore version"
msgstr ""
#: ../database.rst:253
msgid "Full path to the :file:`validation-rules.json` file"
msgstr ""
#: ../database.rst:257
msgid ""
"This example loads the :file:`validation-rules.json` file for a MySQL "
"database on Ubuntu 14.04::"
msgstr ""
#: ../database.rst:262
msgid "**Validate datastore**"
msgstr ""
#: ../database.rst:264
msgid ""
"To validate your new datastore and version, start by listing the datastores "
"on your system::"
msgstr ""
#: ../database.rst:275
msgid ""
"Take the ID of the MySQL datastore and pass it in with the :command:"
"`datastore-version-list` command::"
msgstr ""
#: ../database.rst:286
msgid "Configure a cluster"
msgstr ""
#: ../database.rst:288
msgid ""
"An administrative user can configure various characteristics of a MongoDB "
"cluster."
msgstr ""
#: ../database.rst:291
msgid "**Query routers and config servers**"
msgstr ""
#: ../database.rst:293
msgid ""
"**Background.** Each cluster includes at least one query router and one "
"config server. Query routers and config servers count against your quota. "
"When you delete a cluster, the system deletes the associated query router(s) "
"and config server(s)."
msgstr ""
#: ../database.rst:298
msgid ""
"**Configuration.** By default, the system creates one query router and one "
"config server per cluster. You can change this by editing the :file:`/etc/"
"trove/trove.conf` file. These settings are in the ``mongodb`` section of the "
"file:"
msgstr ""
#: ../database.rst:307
msgid "Setting"
msgstr ""
#: ../database.rst:308
msgid "Valid values are:"
msgstr ""
#: ../database.rst:310
msgid "num_config_servers_per_cluster"
msgstr ""
#: ../database.rst:311 ../database.rst:314
msgid "1 or 3"
msgstr ""
#: ../database.rst:313
msgid "num_query_routers_per_cluster"
msgstr ""
#: ../identity_auth_token_middleware.rst:2
msgid "Authentication middleware with user name and password"
msgstr ""
#: ../identity_auth_token_middleware.rst:4
msgid ""
"You can also configure Identity authentication middleware using the :code:"
"`admin_user` and :code:`admin_password` options."
msgstr ""
#: ../identity_auth_token_middleware.rst:9
msgid ""
"The :code:`admin_token` option is deprecated and no longer used for "
"configuring auth_token middleware."
msgstr ""
#: ../identity_auth_token_middleware.rst:12
msgid ""
"For services that have a separate paste-deploy :file:`.ini` file, you can "
"configure the authentication middleware in the ``[keystone_authtoken]`` "
"section of the main configuration file, such as :file:`nova.conf`. In "
"Compute, for example, you can remove the middleware parameters from :file:"
"`api-paste.ini`, as follows:"
msgstr ""
#: ../identity_auth_token_middleware.rst:24
msgid "And set the following values in :file:`nova.conf` as follows:"
msgstr ""
#: ../identity_auth_token_middleware.rst:41
msgid ""
"The middleware parameters in the paste config take priority. You must remove "
"them to use the values in the ``[keystone_authtoken]`` section."
msgstr ""
#: ../identity_auth_token_middleware.rst:47
msgid ""
"Comment out any :code:`auth_host`, :code:`auth_port`, and :code:"
"`auth_protocol` options because the :code:`identity_uri` option replaces "
"them."
msgstr ""
#: ../identity_auth_token_middleware.rst:51
msgid ""
"This sample paste config filter makes use of the :code:`admin_user` and :"
"code:`admin_password` options:"
msgstr ""
#: ../identity_auth_token_middleware.rst:66
msgid ""
"Using this option requires an admin tenant/role relationship. The admin user "
"is granted access to the admin role on the admin tenant."
msgstr ""
#: ../identity_auth_token_middleware.rst:71
msgid ""
"Comment out any ``auth_host``, ``auth_port``, and ``auth_protocol`` options "
"because the ``identity_uri`` option replaces them."
msgstr ""
#: ../identity_concepts.rst:3
msgid "Identity concepts"
msgstr ""
#: ../identity_concepts.rst:6
msgid ""
"The process of confirming the identity of a user. To confirm an incoming "
"request, OpenStack Identity validates a set of credentials that the user "
"supplies. Initially, these credentials are a user name and password or a "
"user name and API key. When OpenStack Identity validates user credentials, "
"it issues an authentication token that the user provides in subsequent "
"requests."
msgstr ""
#: ../identity_concepts.rst:11
msgid "Authentication"
msgstr ""
#: ../identity_concepts.rst:14
msgid ""
"Data that confirms the identity of the user. For example, user name and "
"password, user name and API key, or an authentication token that the "
"Identity service provides."
msgstr ""
#: ../identity_concepts.rst:16
msgid "Credentials"
msgstr ""
#: ../identity_concepts.rst:19
msgid ""
"An Identity service API v3 entity. Represents a collection of projects and "
"users that defines administrative boundaries for the management of Identity "
"entities. A domain, which can represent an individual, company, or operator-"
"owned space, exposes administrative activities directly to system users. "
"Users can be granted the administrator role for a domain. A domain "
"administrator can create projects, users, and groups in a domain and assign "
"roles to users and groups in a domain."
msgstr ""
#: ../identity_concepts.rst:26
msgid "Domain"
msgstr ""
#: ../identity_concepts.rst:29
msgid ""
"A network-accessible address, usually a URL, through which you can access a "
"service. If you are using an extension for templates, you can create an "
"endpoint template that represents the templates of all consumable services "
"that are available across the regions."
msgstr ""
#: ../identity_concepts.rst:32
msgid "Endpoint"
msgstr ""
#: ../identity_concepts.rst:35
msgid ""
"An Identity service API v3 entity. Represents a collection of users that are "
"owned by a domain. A group role granted to a domain or project applies to "
"all users in the group. Adding users to, or removing users from, a group "
"respectively grants, or revokes, their role and authentication to the "
"associated domain or project."
msgstr ""
# #-#-#-#-# identity_concepts.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# networking_adv-features.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../identity_concepts.rst:39 ../networking_adv-features.rst:624
msgid "Group"
msgstr ""
#: ../identity_concepts.rst:42
msgid ""
"A command-line interface for several OpenStack services including the "
"Identity API. For example, a user can run the :command:`openstack service "
"create` and :command:`openstack endpoint create` commands to register "
"services in her OpenStack installation."
msgstr ""
#: ../identity_concepts.rst:46
msgid "OpenStackClient"
msgstr ""
#: ../identity_concepts.rst:49
msgid ""
"A container that groups or isolates resources or identity objects. Depending "
"on the service operator, a project might map to a customer, account, "
"organization, or tenant."
msgstr ""
#: ../identity_concepts.rst:51
msgid "Project"
msgstr ""
#: ../identity_concepts.rst:54
msgid ""
"An Identity service API v3 entity. Represents a general division in an "
"OpenStack deployment. You can associate zero or more sub-regions with a "
"region to make a tree-like structured hierarchy. Although a region does not "
"have a geographical connotation, a deployment can use a geographical name "
"for a region, such as ``us-east``."
msgstr ""
#: ../identity_concepts.rst:58
msgid "Region"
msgstr ""
#: ../identity_concepts.rst:61
msgid ""
"A personality with a defined set of user rights and privileges to perform a "
"specific set of operations. The Identity service issues a token that "
"includes a list of roles to a user. When a user calls a service, that "
"service interprets the set of user roles and determines to which operations "
"or resources each role grants access."
msgstr ""
#: ../identity_concepts.rst:66
msgid "Role"
msgstr ""
#: ../identity_concepts.rst:69
msgid ""
"An OpenStack service, such as Compute (nova), Object Storage (swift), or "
"Image service (glance), that provides one or more endpoints through which "
"users can access resources and perform operations."
msgstr ""
#: ../identity_concepts.rst:72
msgid "Service"
msgstr ""
#: ../identity_concepts.rst:75
msgid ""
"An alpha-numeric text string that enables access to OpenStack APIs and "
"resources. A token may be revoked at any time and is valid for a finite "
"duration. While OpenStack Identity supports token-based authentication in "
"this release, it intends to support additional protocols in the future. "
"OpenStack Identity is an integration service that does not aspire to be a "
"full-fledged identity store and management solution."
msgstr ""
#: ../identity_concepts.rst:81
msgid "Token"
msgstr ""
#: ../identity_concepts.rst:84
msgid ""
"A digital representation of a person, system, or service that uses OpenStack "
"cloud services. The Identity service validates that incoming requests are "
"made by the user who claims to be making the call. Users have a login and "
"can access resources by using assigned tokens. Users can be directly "
"assigned to a particular project and behave as if they are contained in that "
"project."
msgstr ""
#: ../identity_concepts.rst:89
msgid "User"
msgstr ""
#: ../identity_concepts.rst:92
msgid "User management"
msgstr ""
#: ../identity_concepts.rst:94
msgid "Identity user management examples:"
msgstr ""
#: ../identity_concepts.rst:96
msgid "Create a user named ``alice``:"
msgstr ""
#: ../identity_concepts.rst:102
msgid "Create a project named ``acme``:"
msgstr ""
#: ../identity_concepts.rst:108
msgid "Create a domain named ``emea``:"
msgstr ""
#: ../identity_concepts.rst:114
msgid "Create a role named ``compute-user``:"
msgstr ""
#: ../identity_concepts.rst:122
msgid ""
"Individual services assign meaning to roles, typically through limiting or "
"granting access to users with the role to the operations that the service "
"supports. Role access is typically configured in the service's ``policy."
"json`` file. For example, to limit Compute access to the ``compute-user`` "
"role, edit the Compute service's ``policy.json`` file to require this role "
"for Compute operations."
msgstr ""
#: ../identity_concepts.rst:130
msgid ""
"The Identity service assigns a tenant and a role to a user. You might assign "
"the ``compute-user`` role to the ``alice`` user in the ``acme`` tenant:"
msgstr ""
#: ../identity_concepts.rst:138
msgid ""
"A user can have different roles in different tenants. For example, Alice "
"might also have the ``admin`` role in the ``Cyberdyne`` tenant. A user can "
"also have multiple roles in the same tenant."
msgstr ""
#: ../identity_concepts.rst:142
msgid ""
"The :file:`/etc/[SERVICE_CODENAME]/policy.json` file controls the tasks that "
"users can perform for a given service. For example, the :file:`/etc/nova/"
"policy.json` file specifies the access policy for the Compute service, the :"
"file:`/etc/glance/policy.json` file specifies the access policy for the "
"Image service, and the :file:`/etc/keystone/policy.json` file specifies the "
"access policy for the Identity service."
msgstr ""
#: ../identity_concepts.rst:150
msgid ""
"The default :file:`policy.json` files in the Compute, Identity, and Image "
"services recognize only the ``admin`` role. Any user with any role in a "
"tenant can access all operations that do not require the ``admin`` role."
msgstr ""
#: ../identity_concepts.rst:155
msgid ""
"To restrict users from performing operations in, for example, the Compute "
"service, you must create a role in the Identity service and then modify the :"
"file:`/etc/nova/policy.json` file so that this role is required for Compute "
"operations."
msgstr ""
#: ../identity_concepts.rst:160
msgid ""
"For example, the following line in the :file:`/etc/nova/policy.json` file "
"does not restrict which users can create volumes:"
msgstr ""
#: ../identity_concepts.rst:167
msgid ""
"If the user has any role in a tenant, he can create volumes in that tenant."
msgstr ""
#: ../identity_concepts.rst:170
msgid ""
"To restrict the creation of volumes to users who have the ``compute-user`` "
"role in a particular tenant, you add ``\"role:compute-user\"``:"
msgstr ""
#: ../identity_concepts.rst:177
msgid ""
"To restrict all Compute service requests to require this role, the resulting "
"file looks like:"
msgstr ""
#: ../identity_concepts.rst:281
msgid "Service management"
msgstr ""
#: ../identity_concepts.rst:283
msgid ""
"The Identity service provides identity, token, catalog, and policy services. "
"It consists of:"
msgstr ""
#: ../identity_concepts.rst:287
msgid ""
"Can be run in a WSGI-capable web server such as Apache httpd to provide the "
"Identity service. The service and administrative APIs are run as separate "
"instances of the WSGI service."
msgstr ""
#: ../identity_concepts.rst:289
msgid "keystone Web Server Gateway Interface (WSGI) service"
msgstr ""
#: ../identity_concepts.rst:292
msgid ""
"Each has a pluggable back end that allow different ways to use the "
"particular service. Most support standard back ends like LDAP or SQL."
msgstr ""
#: ../identity_concepts.rst:293
msgid "Identity service functions"
msgstr ""
#: ../identity_concepts.rst:296
msgid ""
"Starts both the service and administrative APIs in a single process. Using "
"federation with keystone-all is not supported. keystone-all is deprecated in "
"favor of the WSGI service."
msgstr ""
#: ../identity_concepts.rst:298
msgid "keystone-all"
msgstr ""
#: ../identity_concepts.rst:300
msgid ""
"The Identity service also maintains a user that corresponds to each service, "
"such as, a user named ``nova`` for the Compute service, and a special "
"service tenant called ``service``."
msgstr ""
#: ../identity_concepts.rst:304
msgid ""
"For information about how to create services and endpoints, see the "
"`OpenStack Admin User Guide <http://docs.openstack.org/user-guide-admin/ "
"cli_manage_services.html>`__."
msgstr ""
#: ../identity_concepts.rst:309
msgid "Groups"
msgstr ""
#: ../identity_concepts.rst:311
msgid ""
"A group is a collection of users in a domain. Administrators can create "
"groups and add users to them. A role can then be assigned to the group, "
"rather than individual users. Groups were introduced with the Identity API "
"v3."
msgstr ""
#: ../identity_concepts.rst:316
msgid "Identity API V3 provides the following group-related operations:"
msgstr ""
#: ../identity_concepts.rst:318
msgid "Create a group"
msgstr ""
#: ../identity_concepts.rst:320
msgid "Delete a group"
msgstr ""
#: ../identity_concepts.rst:322
msgid "Update a group (change its name or description)"
msgstr ""
#: ../identity_concepts.rst:324
msgid "Add a user to a group"
msgstr ""
#: ../identity_concepts.rst:326
msgid "Remove a user from a group"
msgstr ""
#: ../identity_concepts.rst:328
msgid "List group members"
msgstr ""
#: ../identity_concepts.rst:330
msgid "List groups for a user"
msgstr ""
#: ../identity_concepts.rst:332
msgid "Assign a role on a tenant to a group"
msgstr ""
#: ../identity_concepts.rst:334
msgid "Assign a role on a domain to a group"
msgstr ""
#: ../identity_concepts.rst:336
msgid "Query role assignments to groups"
msgstr ""
#: ../identity_concepts.rst:340
msgid ""
"The Identity service server might not allow all operations. For example, if "
"you use the Identity server with the LDAP Identity back end and group "
"updates are disabled, a request to create, delete, or update a group fails."
msgstr ""
#: ../identity_concepts.rst:345
msgid "Here are a couple of examples:"
msgstr ""
#: ../identity_concepts.rst:347
msgid ""
"Group A is granted Role A on Tenant A. If User A is a member of Group A, "
"when User A gets a token scoped to Tenant A, the token also includes Role A."
msgstr ""
#: ../identity_concepts.rst:351
msgid ""
"Group B is granted Role B on Domain B. If User B is a member of Group B, "
"when User B gets a token scoped to Domain B, the token also includes Role B."
msgstr ""
#: ../identity_example_usage.rst:3
msgid "Example usage"
msgstr ""
#: ../identity_example_usage.rst:5
msgid ""
"The ``keystone`` client is set up to expect commands in the general form of "
"``keystone command argument``, followed by flag-like keyword arguments to "
"provide additional (often optional) information. For example, the :command:"
"`user-list` and :command:`tenant-create` commands can be invoked as follows:"
msgstr ""
#: ../identity_logging.rst:5
msgid ""
"You configure logging externally to the rest of Identity. The name of the "
"file specifying the logging configuration is set using the ``log_config`` "
"option in the ``[DEFAULT]`` section of the :file:`keystone.conf` file. To "
"route logging through syslog, set ``use_syslog=true`` in the ``[DEFAULT]`` "
"section."
msgstr ""
#: ../identity_logging.rst:11
msgid ""
"A sample logging configuration file is available with the project in :file:"
"`etc/logging.conf.sample`. Like other OpenStack projects, Identity uses the "
"Python logging module, which provides extensive configuration options that "
"let you define the output levels and formats."
msgstr ""
#: ../identity_management.rst:5
msgid "Identity management"
msgstr ""
#: ../identity_management.rst:7
msgid ""
"OpenStack Identity, code-named keystone, is the default identity management "
"system for OpenStack. After you install Identity, you configure it through "
"the :file:`/etc/keystone/keystone.conf` configuration file and, possibly, a "
"separate logging configuration file. You initialize data into Identity by "
"using the ``keystone`` command-line client."
msgstr ""
#: ../identity_service_api_protection.rst:3
msgid "Identity API protection with role-based access control (RBAC)"
msgstr ""
#: ../identity_service_api_protection.rst:5
msgid ""
"Like most OpenStack projects, Identity supports the protection of its APIs "
"by defining policy rules based on an RBAC approach. Identity stores a "
"reference to a policy JSON file in the main Identity configuration file, :"
"file:`keystone.conf`. Typically this file is named ``policy.json``, and "
"contains the rules for which roles have access to certain actions in defined "
"services."
msgstr ""
#: ../identity_service_api_protection.rst:12
msgid ""
"Each Identity API v3 call has a line in the policy file that dictates which "
"level of governance of access applies."
msgstr ""
#: ../identity_service_api_protection.rst:21
msgid ""
"``RULE_STATEMENT`` can contain ``RULE_STATEMENT`` or ``MATCH_STATEMENT``."
msgstr ""
#: ../identity_service_api_protection.rst:24
msgid ""
"``MATCH_STATEMENT`` is a set of identifiers that must match between the "
"token provided by the caller of the API and the parameters or target "
"entities of the API call in question. For example:"
msgstr ""
#: ../identity_service_api_protection.rst:32
msgid ""
"Indicates that to create a user, you must have the admin role in your token. "
"The :code:`domain_id` in your token must match the :code:`domain_id` in the "
"user object that you are trying to create, which implies this must be a "
"domain-scoped token. In other words, you must have the admin role on the "
"domain in which you are creating the user, and the token that you use must "
"be scoped to that domain."
msgstr ""
#: ../identity_service_api_protection.rst:40
msgid "Each component of a match statement uses this format:"
msgstr ""
#: ../identity_service_api_protection.rst:46
msgid "The Identity service expects these attributes:"
msgstr ""
#: ../identity_service_api_protection.rst:48
msgid "Attributes from token:"
msgstr ""
# #-#-#-#-# identity_service_api_protection.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# telemetry-data-retrieval.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../identity_service_api_protection.rst:50
#: ../telemetry-data-retrieval.rst:154
msgid "``user_id``"
msgstr ""
#: ../identity_service_api_protection.rst:51
msgid "``domain_id``"
msgstr ""
# #-#-#-#-# identity_service_api_protection.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# telemetry-data-retrieval.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../identity_service_api_protection.rst:52
#: ../telemetry-data-retrieval.rst:150
msgid "``project_id``"
msgstr ""
#: ../identity_service_api_protection.rst:54
msgid ""
"The ``project_id`` attribute requirement depends on the scope, and the list "
"of roles you have within that scope."
msgstr ""
#: ../identity_service_api_protection.rst:57
msgid "Attributes related to API call:"
msgstr ""
#: ../identity_service_api_protection.rst:59
msgid "``user.domain_id``"
msgstr ""
#: ../identity_service_api_protection.rst:60
msgid "Any parameters passed into the API call"
msgstr ""
#: ../identity_service_api_protection.rst:61
msgid "Any filters specified in the query string"
msgstr ""
#: ../identity_service_api_protection.rst:63
msgid ""
"You reference attributes of objects passed with an object.attribute syntax "
"(such as, ``user.domain_id``). The target objects of an API are also "
"available using a target.object.attribute syntax. For instance:"
msgstr ""
#: ../identity_service_api_protection.rst:71
msgid ""
"would ensure that Identity only deletes the user object in the same domain "
"as the provided token."
msgstr ""
#: ../identity_service_api_protection.rst:74
msgid ""
"Every target object has an ``id`` and a ``name`` available as ``target."
"OBJECT.id`` and ``target.OBJECT.name``. Identity retrieves other attributes "
"from the database, and the attributes vary between object types. The "
"Identity service filters out some database fields, such as user passwords."
msgstr ""
#: ../identity_service_api_protection.rst:80
msgid "List of object attributes:"
msgstr ""
#: ../identity_service_api_protection.rst:114
msgid ""
"The default :file:`policy.json` file supplied provides a somewhat basic "
"example of API protection, and does not assume any particular use of "
"domains. Refer to :file:`policy.v3cloudsample.json` as an example of multi-"
"domain configuration installations where a cloud provider wants to delegate "
"administration of the contents of a domain to a particular :code:`admin "
"domain`. This example policy file also shows the use of an :code:"
"`admin_domain` to allow a cloud provider to enable cloud administrators to "
"have wider access across the APIs."
msgstr ""
#: ../identity_service_api_protection.rst:123
msgid ""
"A clean installation could start with the standard policy file, to allow "
"creation of the :code:`admin_domain` with the first users within it. You "
"could then obtain the :code:`domain_id` of the admin domain, paste the ID "
"into a modified version of :file:`policy.v3cloudsample.json`, and then "
"enable it as the main policy file."
msgstr ""
#: ../identity_start.rst:2
msgid "Start the Identity services"
msgstr ""
#: ../identity_start.rst:4
msgid "To start the services for Identity, run the following command:"
msgstr ""
#: ../identity_start.rst:10
msgid ""
"This command starts two wsgi.Server instances configured by the :file:"
"`keystone.conf` file as described previously. One of these wsgi servers is :"
"code:`admin` (the administration API) and the other is :code:`main` (the "
"primary/public API interface). Both run in a single process."
msgstr ""
#: ../identity_troubleshoot.rst:3
msgid "Troubleshoot the Identity service"
msgstr ""
#: ../identity_troubleshoot.rst:5
msgid ""
"To troubleshoot the Identity service, review the logs in the ``/var/log/"
"keystone/keystone.log`` file."
msgstr ""
#: ../identity_troubleshoot.rst:13
msgid ""
"The logs show the components that have come in to the WSGI request, and "
"ideally show an error that explains why an authorization request failed. If "
"you do not see the request in the logs, run keystone with the :option:`--"
"debug` parameter. Pass the :option:`--debug` parameter before the command "
"parameters."
msgstr ""
#: ../identity_troubleshoot.rst:20
msgid "Debug PKI middleware"
msgstr ""
#: ../identity_troubleshoot.rst:22
msgid ""
"If you receive an ``Invalid OpenStack Identity Credentials`` message when "
"you talk to an OpenStack service, it might be caused by the changeover from "
"UUID tokens to PKI tokens in the Grizzly release. Learn how to troubleshoot "
"this error."
msgstr ""
#: ../identity_troubleshoot.rst:27
msgid ""
"The PKI-based token validation scheme relies on certificates from Identity "
"that are fetched through HTTP and stored in a local directory. The location "
"for this directory is specified by the ``signing_dir`` configuration option. "
"In your services configuration file, look for a section like this:"
msgstr ""
#: ../identity_troubleshoot.rst:42
msgid ""
"The first thing to check is that the ``signing_dir`` does, in fact, exist. "
"If it does, check for certificate files:"
msgstr ""
#: ../identity_troubleshoot.rst:58
msgid ""
"This directory contains two certificates and the token revocation list. If "
"these files are not present, your service cannot fetch them from Identity. "
"To troubleshoot, try to talk to Identity to make sure it correctly serves "
"files, as follows:"
msgstr ""
#: ../identity_troubleshoot.rst:67
msgid "This command fetches the signing certificate:"
msgstr ""
#: ../identity_troubleshoot.rst:82
msgid "Note the expiration dates of the certificate:"
msgstr ""
#: ../identity_troubleshoot.rst:89
msgid ""
"The token revocation list is updated once a minute, but the certificates are "
"not. One possible problem is that the certificates are the wrong files or "
"garbage. You can remove these files and run another command against your "
"server; they are fetched on demand."
msgstr ""
#: ../identity_troubleshoot.rst:94
msgid ""
"The Identity service log should show the access of the certificate files. "
"You might have to turn up your logging levels. Set ``debug = True`` and "
"``verbose = True`` in your Identity configuration file and restart the "
"Identity server."
msgstr ""
#: ../identity_troubleshoot.rst:106
msgid ""
"If the files do not appear in your directory after this, it is likely one of "
"the following issues:"
msgstr ""
#: ../identity_troubleshoot.rst:109
msgid ""
"Your service is configured incorrectly and cannot talk to Identity. Check "
"the ``auth_port`` and ``auth_host`` values and make sure that you can talk "
"to that service through cURL, as shown previously."
msgstr ""
#: ../identity_troubleshoot.rst:113
msgid ""
"Your signing directory is not writable. Use the ``chmod`` command to change "
"its permissions so that the service (POSIX) user can write to it. Verify the "
"change through ``su`` and ``touch`` commands."
msgstr ""
#: ../identity_troubleshoot.rst:117
msgid "The SELinux policy is denying access to the directory."
msgstr ""
#: ../identity_troubleshoot.rst:119
msgid ""
"SELinux troubles often occur when you use Fedora or RHEL-based packages and "
"you choose configuration options that do not match the standard policy. Run "
"the ``setenforce permissive`` command. If that makes a difference, you "
"should relabel the directory. If you are using a sub-directory of the ``/var/"
"cache/`` directory, run the following command:"
msgstr ""
#: ../identity_troubleshoot.rst:129
msgid ""
"If you are not using a ``/var/cache`` sub-directory, you should. Modify the "
"``signing_dir`` configuration option for your service and restart."
msgstr ""
#: ../identity_troubleshoot.rst:132
msgid ""
"Set back to ``setenforce enforcing`` to confirm that your changes solve the "
"problem."
msgstr ""
#: ../identity_troubleshoot.rst:135
msgid ""
"If your certificates are fetched on demand, the PKI validation is working "
"properly. Most likely, the token from Identity is not valid for the "
"operation you are attempting to perform, and your user needs a different "
"role for the operation."
msgstr ""
#: ../identity_troubleshoot.rst:141
msgid "Debug signing key file errors"
msgstr ""
#: ../identity_troubleshoot.rst:143
msgid ""
"If an error occurs when the signing key file opens, it is possible that the "
"person who ran the ``keystone-manage pki_setup`` command to generate "
"certificates and keys did not use the correct user. When you run the "
"``keystone-manage pki_setup`` command, Identity generates a set of "
"certificates and keys in ``/etc/keystone/ssl*``, which is owned by ``root:"
"root``."
msgstr ""
#: ../identity_troubleshoot.rst:150
msgid ""
"This can present a problem when you run the Identity daemon under the "
"keystone user account (nologin) when you try to run PKI. Unless you run the "
"``chown`` command against the files ``keystone:keystone``, or run the "
"``keystone-manage pki_setup`` command with the :option:`--keystone-user` "
"and :option:`--keystone-group` parameters, you will get an error. For "
"example:"
msgstr ""
#: ../identity_troubleshoot.rst:166
msgid "Flush expired tokens from the token database table"
msgstr ""
#: ../identity_troubleshoot.rst:168
msgid ""
"As you generate tokens, the token database table on the Identity server "
"grows. To clear the token table, an administrative user must run the "
"``keystone-manage token_flush`` command to flush the tokens. When you flush "
"tokens, expired tokens are deleted and traceability is eliminated."
msgstr ""
#: ../identity_troubleshoot.rst:173
msgid ""
"Use ``cron`` to schedule this command to run frequently based on your "
"workload. For large workloads, running it every minute is recommended."
msgstr ""
#: ../identity_user_crud.rst:3
msgid "User CRUD"
msgstr ""
#: ../identity_user_crud.rst:5
msgid ""
"Identity provides a user CRUD (Create, Read, Update, and Delete) filter that "
"can be added to the ``public_api`` pipeline. The user CRUD filter enables "
"users to use a HTTP PATCH to change their own password. To enable this "
"extension you should define a :code:`user_crud_extension` filter, insert it "
"after the \"option:`*_body` middleware and before the ``public_service`` "
"application in the ``public_api`` WSGI pipeline in :file:`keystone-paste."
"ini`. For example:"
msgstr ""
#: ../identity_user_crud.rst:21
msgid "Each user can then change their own password with a HTTP PATCH::"
msgstr ""
#: ../identity_user_crud.rst:26
msgid ""
"In addition to changing their password, all current tokens for the user are "
"invalidated."
msgstr ""
#: ../identity_user_crud.rst:31
msgid "Only use a KVS back end for tokens when testing."
msgstr ""
#: ../index.rst:3
msgid "OpenStack Cloud Administrator Guide"
msgstr ""
#: ../index.rst:6
msgid "Abstract"
msgstr ""
#: ../index.rst:8
msgid ""
"OpenStack offers open source software for cloud administrators to manage and "
"troubleshoot an OpenStack cloud."
msgstr ""
#: ../index.rst:11
msgid ""
"This guide documents OpenStack Liberty, OpenStack Kilo, and OpenStack Juno "
"releases."
msgstr ""
#: ../index.rst:15
msgid "Contents"
msgstr ""
#: ../index.rst:38
msgid "Search in this guide"
msgstr ""
#: ../index.rst:40
msgid ":ref:`search`"
msgstr ""
#: ../keystone_caching_layer.rst:4
msgid "Caching layer"
msgstr ""
#: ../keystone_caching_layer.rst:6
msgid ""
"OpenStack Identity supports a caching layer that is above the configurable "
"subsystems (for example, token, assignment). OpenStack Identity uses the "
"`dogpile.cache <http://dogpilecache.readthedocs.org/en/latest/>`__ library "
"which allows flexible cache back ends. The majority of the caching "
"configuration options are set in the ``[cache]`` section of the :file:"
"`keystone.conf` file. However, each section that has the capability to be "
"cached usually has a caching boolean value that toggles caching."
msgstr ""
#: ../keystone_caching_layer.rst:15
msgid ""
"So to enable only the token back end caching, set the values as follows:"
msgstr ""
#: ../keystone_caching_layer.rst:30
msgid ""
"Since the Juno release, the default setting is enabled for subsystem "
"caching, but the global toggle is disabled. As a result, no caching in "
"available unless the global toggle for ``[cache]`` is enabled by setting the "
"value to ``true``."
msgstr ""
#: ../keystone_caching_layer.rst:36
msgid "Caching for tokens and tokens validation"
msgstr ""
#: ../keystone_caching_layer.rst:38
msgid ""
"The token system has a separate ``cache_time`` configuration option, that "
"can be set to a value above or below the global ``expiration_time`` default, "
"allowing for different caching behavior from the other systems in OpenStack "
"Identity. This option is set in the ``[token]`` section of the configuration "
"file."
msgstr ""
#: ../keystone_caching_layer.rst:44
msgid ""
"The token revocation list cache time is handled by the configuration option "
"``revocation_cache_time`` in the ``[token]`` section. The revocation list is "
"refreshed whenever a token is revoked. It typically sees significantly more "
"requests than specific token retrievals or token validation calls."
msgstr ""
#: ../keystone_caching_layer.rst:50
msgid ""
"Here is a list of actions that are affected by the cached time: getting a "
"new token, revoking tokens, validating tokens, checking v2 tokens, and "
"checking v3 tokens."
msgstr ""
#: ../keystone_caching_layer.rst:54
msgid ""
"The delete token API calls invalidate the cache for the tokens being acted "
"upon, as well as invalidating the cache for the revoked token list and the "
"validate/check token calls."
msgstr ""
#: ../keystone_caching_layer.rst:58
msgid ""
"Token caching is configurable independently of the ``revocation_list`` "
"caching. Lifted expiration checks from the token drivers to the token "
"manager. This ensures that cached tokens will still raise a "
"``TokenNotFound`` flag when expired."
msgstr ""
#: ../keystone_caching_layer.rst:63
msgid ""
"For cache consistency, all token IDs are transformed into the short token "
"hash at the provider and token driver level. Some methods have access to the "
"full ID (PKI Tokens), and some methods do not. Cache invalidation is "
"inconsistent without token ID normalization."
msgstr ""
#: ../keystone_caching_layer.rst:69
msgid "Caching around assignment CRUD"
msgstr ""
#: ../keystone_caching_layer.rst:71
msgid ""
"The assignment system has a separate ``cache_time`` configuration option, "
"that can be set to a value above or below the global ``expiration_time`` "
"default, allowing for different caching behavior from the other systems in "
"Identity service. This option is set in the ``[assignment]`` section of the "
"configuration file."
msgstr ""
#: ../keystone_caching_layer.rst:77
msgid ""
"Currently ``assignment`` has caching for ``project``, ``domain``, and "
"``role`` specific requests (primarily around the CRUD actions). Caching is "
"currently not implemented on grants. The ``list`` methods are not subject to "
"caching."
msgstr ""
#: ../keystone_caching_layer.rst:82
msgid ""
"Here is a list of actions that are affected by the assignment: assign domain "
"API, assign project API, and assign role API."
msgstr ""
#: ../keystone_caching_layer.rst:85
msgid ""
"The create, update, and delete actions for domains, projects and roles will "
"perform proper invalidations of the cached methods listed above."
msgstr ""
#: ../keystone_caching_layer.rst:90
msgid ""
"If a read-only ``assignment`` back end is in use, the cache will not "
"immediately reflect changes on the back end. Any given change may take up to "
"the ``cache_time`` (if set in the ``[assignment]`` section of the "
"configuration file) or the global ``expiration_time`` (set in the "
"``[cache]`` section of the configuration file) before it is reflected. If "
"this type of delay (when using a read-only ``assignment`` back end) is an "
"issue, it is recommended that caching be disabled on ``assignment``. To "
"disable caching specifically on ``assignment``, in the ``[assignment]`` "
"section of the configuration set ``caching`` to ``False``."
msgstr ""
#: ../keystone_caching_layer.rst:101
msgid ""
"For more information about the different back ends (and configuration "
"options), see:"
msgstr ""
#: ../keystone_caching_layer.rst:104
msgid ""
"`dogpile.cache.backends.memory <http://dogpilecache.readthedocs.org/en/"
"latest/api.html#memory-backend>`__"
msgstr ""
#: ../keystone_caching_layer.rst:106
msgid ""
"`dogpile.cache.backends.memcached <http://dogpilecache.readthedocs.org/en/"
"latest/api.html#memcached-backends>`__"
msgstr ""
#: ../keystone_caching_layer.rst:110
msgid ""
"The memory back end is not suitable for use in a production environment."
msgstr ""
#: ../keystone_caching_layer.rst:113
msgid ""
"`dogpile.cache.backends.redis <http://dogpilecache.readthedocs.org/en/latest/"
"api.html#redis-backends>`__"
msgstr ""
#: ../keystone_caching_layer.rst:115
msgid ""
"`dogpile.cache.backends.file <http://dogpilecache.readthedocs.org/en/latest/"
"api.html#file-backends>`__"
msgstr ""
#: ../keystone_caching_layer.rst:117
msgid "``keystone.common.cache.backends.mongo``"
msgstr ""
#: ../keystone_caching_layer.rst:120
msgid "Configure the Memcached back end example"
msgstr ""
#: ../keystone_caching_layer.rst:122
msgid "The following example shows how to configure the memcached back end:"
msgstr ""
#: ../keystone_caching_layer.rst:132
msgid ""
"You need to specify the URL to reach the ``memcached`` instance with the "
"``backend_argument`` parameter."
msgstr ""
#: ../keystone_certificates_for_pki.rst:3
msgid "Certificates for PKI"
msgstr ""
#: ../keystone_certificates_for_pki.rst:5
msgid ""
"PKI stands for Public Key Infrastructure. Tokens are documents, "
"cryptographically signed using the X509 standard. In order to work correctly "
"token generation requires a public/private key pair. The public key must be "
"signed in an X509 certificate, and the certificate used to sign it must be "
"available as a Certificate Authority (CA) certificate. These files can be "
"generated either using the ``keystone-manage`` utility, or externally "
"generated. The files need to be in the locations specified by the top level "
"Identity service configuration file :file:`keystone.conf` as specified in "
"the above section. Additionally, the private key should only be readable by "
"the system user that will run the Identity service."
msgstr ""
#: ../keystone_certificates_for_pki.rst:20
msgid ""
"The certificates can be world readable, but the private key cannot be. The "
"private key should only be readable by the account that is going to sign "
"tokens. When generating files with the :command:`keystone-manage pki_setup` "
"command, your best option is to run as the pki user. If you run :command:"
"`keystone-manage` as root, you can append :option:`--keystone-user` and :"
"option:`--keystone-group` parameters to set the user name and group keystone "
"is going to run under."
msgstr ""
#: ../keystone_certificates_for_pki.rst:28
msgid ""
"The values that specify where to read the certificates are under the "
"``[signing]`` section of the configuration file. The configuration values "
"are:"
msgstr ""
#: ../keystone_certificates_for_pki.rst:33
msgid ""
"Location of certificate used to verify tokens. Default is :file:`/etc/"
"keystone/ssl/certs/signing_cert.pem`."
msgstr ""
# #-#-#-#-# keystone_certificates_for_pki.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# keystone_configure_with_SSL.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../keystone_certificates_for_pki.rst:34
#: ../keystone_configure_with_SSL.rst:58
msgid "``certfile``"
msgstr ""
#: ../keystone_certificates_for_pki.rst:37
msgid ""
"Location of private key used to sign tokens. Default is :file:`/etc/keystone/"
"ssl/private/signing_key.pem`."
msgstr ""
# #-#-#-#-# keystone_certificates_for_pki.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# keystone_configure_with_SSL.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../keystone_certificates_for_pki.rst:38
#: ../keystone_configure_with_SSL.rst:63
msgid "``keyfile``"
msgstr ""
#: ../keystone_certificates_for_pki.rst:41
msgid ""
"Location of certificate for the authority that issued the above certificate. "
"Default is :file:`/etc/keystone/ssl/certs/ca.pem`."
msgstr ""
# #-#-#-#-# keystone_certificates_for_pki.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# keystone_configure_with_SSL.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../keystone_certificates_for_pki.rst:43
#: ../keystone_configure_with_SSL.rst:66
msgid "``ca_certs``"
msgstr ""
#: ../keystone_certificates_for_pki.rst:46
msgid ""
"Location of the private key used by the CA. Default is :file:`/etc/keystone/"
"ssl/private/cakey.pem`."
msgstr ""
#: ../keystone_certificates_for_pki.rst:47
msgid "``ca_key``"
msgstr ""
#: ../keystone_certificates_for_pki.rst:50
msgid "Default is ``2048``."
msgstr ""
#: ../keystone_certificates_for_pki.rst:50
msgid "``key_size``"
msgstr ""
#: ../keystone_certificates_for_pki.rst:53
msgid "Default is ``3650``."
msgstr ""
#: ../keystone_certificates_for_pki.rst:53
msgid "``valid_days``"
msgstr ""
#: ../keystone_certificates_for_pki.rst:56
msgid ""
"Certificate subject (auto generated certificate) for token signing. Default "
"is ``/C=US/ST=Unset/L=Unset/O=Unset/CN=www.example.com``."
msgstr ""
#: ../keystone_certificates_for_pki.rst:57
msgid "``cert_subject``"
msgstr ""
#: ../keystone_certificates_for_pki.rst:59
msgid ""
"When generating certificates with the :command:`keystone-manage pki_setup` "
"command, the ``ca_key``, ``key_size``, and ``valid_days`` configuration "
"options are used."
msgstr ""
#: ../keystone_certificates_for_pki.rst:63
msgid ""
"If the :command:`keystone-manage pki_setup` command is not used to generate "
"certificates, or you are providing your own certificates, these values do "
"not need to be set."
msgstr ""
#: ../keystone_certificates_for_pki.rst:67
msgid ""
"If ``provider=keystone.token.providers.uuid.Provider`` in the ``[token]`` "
"section of the keystone configuration, a typical token looks like "
"``53f7f6ef0cc344b5be706bcc8b1479e1``. If ``provider=keystone.token.providers."
"pki.Provider``, a typical token is a much longer string, such as::"
msgstr ""
#: ../keystone_certificates_for_pki.rst:102
msgid "Sign certificate issued by external CA"
msgstr ""
#: ../keystone_certificates_for_pki.rst:104
msgid ""
"You can use a signing certificate issued by an external CA instead of "
"generated by ``keystone-manage``. However, a certificate issued by an "
"external CA must satisfy the following conditions:"
msgstr ""
#: ../keystone_certificates_for_pki.rst:108
msgid ""
"All certificate and key files must be in Privacy Enhanced Mail (PEM) format"
msgstr ""
#: ../keystone_certificates_for_pki.rst:111
msgid "Private key files must not be protected by a password"
msgstr ""
#: ../keystone_certificates_for_pki.rst:113
msgid ""
"When using a signing certificate issued by an external CA, you do not need "
"to specify ``key_size``, ``valid_days``, and ``ca_password`` as they will be "
"ignored."
msgstr ""
#: ../keystone_certificates_for_pki.rst:117
msgid ""
"The basic workflow for using a signing certificate issued by an external CA "
"involves:"
msgstr ""
#: ../keystone_certificates_for_pki.rst:120
msgid "Request Signing Certificate from External CA"
msgstr ""
#: ../keystone_certificates_for_pki.rst:122
msgid "Convert certificate and private key to PEM if needed"
msgstr ""
#: ../keystone_certificates_for_pki.rst:124
msgid "Install External Signing Certificate"
msgstr ""
#: ../keystone_certificates_for_pki.rst:127
msgid "Request a signing certificate from an external CA"
msgstr ""
#: ../keystone_certificates_for_pki.rst:129
msgid ""
"One way to request a signing certificate from an external CA is to first "
"generate a PKCS #10 Certificate Request Syntax (CRS) using OpenSSL CLI."
msgstr ""
#: ../keystone_certificates_for_pki.rst:132
msgid ""
"Create a certificate request configuration file. For example, create the :"
"file:`cert_req.conf` file, as follows:"
msgstr ""
#: ../keystone_certificates_for_pki.rst:154
msgid ""
"Then generate a CRS with OpenSSL CLI. **Do not encrypt the generated private "
"key. You must use the -nodes option.**"
msgstr ""
# #-#-#-#-# keystone_certificates_for_pki.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# objectstorage_ringbuilder.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../keystone_certificates_for_pki.rst:157
#: ../objectstorage_ringbuilder.rst:119
msgid "For example::"
msgstr ""
#: ../keystone_certificates_for_pki.rst:162
msgid ""
"If everything is successful, you should end up with :file:`signing_cert_req."
"pem` and :file:`signing_key.pem`. Send :file:`signing_cert_req.pem` to your "
"CA to request a token signing certificate and make sure to ask the "
"certificate to be in PEM format. Also, make sure your trusted CA certificate "
"chain is also in PEM format."
msgstr ""
#: ../keystone_certificates_for_pki.rst:169
msgid "Install an external signing certificate"
msgstr ""
#: ../keystone_certificates_for_pki.rst:171
msgid "Assuming you have the following already:"
msgstr ""
#: ../keystone_certificates_for_pki.rst:174
msgid "(Keystone token) signing certificate in PEM format"
msgstr ""
#: ../keystone_certificates_for_pki.rst:174
msgid ":file:`signing_cert.pem`"
msgstr ""
#: ../keystone_certificates_for_pki.rst:177
msgid ":file:`signing_key.pem`"
msgstr ""
#: ../keystone_certificates_for_pki.rst:177
msgid "Corresponding (non-encrypted) private key in PEM format"
msgstr ""
#: ../keystone_certificates_for_pki.rst:180
msgid ":file:`cacert.pem`"
msgstr ""
#: ../keystone_certificates_for_pki.rst:180
msgid "Trust CA certificate chain in PEM format"
msgstr ""
#: ../keystone_certificates_for_pki.rst:182
msgid "Copy the above to your certificate directory. For example:"
msgstr ""
#: ../keystone_certificates_for_pki.rst:194
msgid "Make sure the certificate directory is only accessible by root."
msgstr ""
#: ../keystone_certificates_for_pki.rst:198
msgid ""
"The procedure of copying the key and cert files may be improved if done "
"after first running :command:`keystone-manage pki_setup` since this command "
"also creates other needed files, such as the :file:`index.txt` and :file:"
"`serial` files."
msgstr ""
#: ../keystone_certificates_for_pki.rst:203
msgid ""
"Also, when copying the necessary files to a different server for replicating "
"the functionality, the entire directory of files is needed, not just the key "
"and cert files."
msgstr ""
#: ../keystone_certificates_for_pki.rst:207
msgid ""
"If your certificate directory path is different from the default :file:`/etc/"
"keystone/ssl/certs`, make sure it is reflected in the ``[signing]`` section "
"of the configuration file."
msgstr ""
#: ../keystone_certificates_for_pki.rst:212
msgid "Switching out expired signing certificates"
msgstr ""
#: ../keystone_certificates_for_pki.rst:214
msgid ""
"The following procedure details how to switch out expired signing "
"certificates with no cloud outages."
msgstr ""
#: ../keystone_certificates_for_pki.rst:217
msgid "Generate a new signing key."
msgstr ""
#: ../keystone_certificates_for_pki.rst:219
msgid "Generate a new certificate request."
msgstr ""
#: ../keystone_certificates_for_pki.rst:221
msgid ""
"Sign the new certificate with the existing CA to generate a new "
"``signing_cert``."
msgstr ""
#: ../keystone_certificates_for_pki.rst:224
msgid ""
"Append the new ``signing_cert`` to the old ``signing_cert``. Ensure the old "
"certificate is in the file first."
msgstr ""
#: ../keystone_certificates_for_pki.rst:227
msgid ""
"Remove all signing certificates from all your hosts to force OpenStack "
"Compute to download the new ``signing_cert``."
msgstr ""
#: ../keystone_certificates_for_pki.rst:230
msgid ""
"Replace the old signing key with the new signing key. Move the new signing "
"certificate above the old certificate in the ``signing_cert`` file."
msgstr ""
#: ../keystone_certificates_for_pki.rst:234
msgid ""
"After the old certificate reads as expired, you can safely remove the old "
"signing certificate from the file."
msgstr ""
#: ../keystone_configure_with_SSL.rst:3
msgid "Configure the Identity service with SSL"
msgstr ""
#: ../keystone_configure_with_SSL.rst:5
msgid "You can configure the Identity service to support two-way SSL."
msgstr ""
#: ../keystone_configure_with_SSL.rst:7
msgid "You must obtain the x509 certificates externally and configure them."
msgstr ""
#: ../keystone_configure_with_SSL.rst:9
msgid ""
"The Identity service provides a set of sample certificates in the :file:"
"`examples/pki/certs` and :file:`examples/pki/private` directories:"
msgstr ""
#: ../keystone_configure_with_SSL.rst:13
msgid "Certificate Authority chain to validate against."
msgstr ""
#: ../keystone_configure_with_SSL.rst:13
msgid "cacert.pem"
msgstr ""
#: ../keystone_configure_with_SSL.rst:16
msgid "Public certificate for Identity service server."
msgstr ""
#: ../keystone_configure_with_SSL.rst:16
msgid "ssl\\_cert.pem"
msgstr ""
#: ../keystone_configure_with_SSL.rst:19
msgid "Public and private certificate for Identity service middleware/client."
msgstr ""
#: ../keystone_configure_with_SSL.rst:20
msgid "middleware.pem"
msgstr ""
#: ../keystone_configure_with_SSL.rst:23
msgid "Private key for the CA."
msgstr ""
#: ../keystone_configure_with_SSL.rst:23
msgid "cakey.pem"
msgstr ""
#: ../keystone_configure_with_SSL.rst:26
msgid "Private key for the Identity service server."
msgstr ""
#: ../keystone_configure_with_SSL.rst:26
msgid "ssl\\_key.pem"
msgstr ""
#: ../keystone_configure_with_SSL.rst:30
msgid ""
"You can choose names for these certificates. You can also combine public/"
"private keys in the same file, if you wish. These certificates are provided "
"as an example."
msgstr ""
#: ../keystone_configure_with_SSL.rst:35
msgid "Client authentication with keystone-all"
msgstr ""
#: ../keystone_configure_with_SSL.rst:37
msgid ""
"When running ``keystone-all``, the server can be configured to enable SSL "
"with client authentication using the following instructions. Modify the "
"``[eventlet_server_ssl]`` section in the :file:`/etc/keystone/keystone.conf` "
"file. The following SSL configuration example uses the included sample "
"certificates:"
msgstr ""
#: ../keystone_configure_with_SSL.rst:52
msgid "**Options**"
msgstr ""
#: ../keystone_configure_with_SSL.rst:55
msgid "True enables SSL. Default is False."
msgstr ""
#: ../keystone_configure_with_SSL.rst:55
msgid "``enable``"
msgstr ""
#: ../keystone_configure_with_SSL.rst:58
msgid "Path to the Identity service public certificate file."
msgstr ""
#: ../keystone_configure_with_SSL.rst:61
msgid ""
"Path to the Identity service private certificate file. If you include the "
"private key in the certfile, you can omit the keyfile."
msgstr ""
#: ../keystone_configure_with_SSL.rst:66
msgid "Path to the CA trust chain."
msgstr ""
#: ../keystone_configure_with_SSL.rst:69
msgid "Requires client certificate. Default is False."
msgstr ""
#: ../keystone_configure_with_SSL.rst:69
msgid "``cert_required``"
msgstr ""
#: ../keystone_configure_with_SSL.rst:71
msgid ""
"When running the Identity service as a WSGI service in a web server such as "
"Apache httpd, this configuration is done in the web server instead. In this "
"case the options in the ``[eventlet_server_ssl]`` section are ignored."
msgstr ""
#: ../keystone_external_authentication.rst:3
msgid "External authentication with Identity"
msgstr ""
#: ../keystone_external_authentication.rst:5
msgid ""
"When Identity runs in ``apache-httpd``, you can use external authentication "
"methods that differ from the authentication provided by the identity store "
"back end. For example, you can use an SQL identity back end together with "
"X.509 authentication and Kerberos, instead of using the user name and "
"password combination."
msgstr ""
#: ../keystone_external_authentication.rst:12
msgid "Use HTTPD authentication"
msgstr ""
#: ../keystone_external_authentication.rst:14
msgid ""
"Web servers, like Apache HTTP, support many methods of authentication. "
"Identity can allow the web server to perform the authentication. The web "
"server then passes the authenticated user to Identity by using the "
"``REMOTE_USER`` environment variable. This user must already exist in the "
"Identity back end to get a token from the controller. To use this method, "
"Identity should run on ``apache-httpd``."
msgstr ""
#: ../keystone_external_authentication.rst:22
msgid "Use X.509"
msgstr ""
#: ../keystone_external_authentication.rst:24
msgid ""
"The following Apache configuration snippet authenticates the user based on a "
"valid X.509 certificate from a known CA::"
msgstr ""
#: ../keystone_fernet_token_faq.rst:3
msgid "Fernet - Frequently Asked Questions"
msgstr ""
#: ../keystone_fernet_token_faq.rst:5
msgid ""
"The following questions have been asked periodically since the initial "
"release of the fernet token format in Kilo."
msgstr ""
#: ../keystone_fernet_token_faq.rst:9
msgid "What are the different types of keys?"
msgstr ""
#: ../keystone_fernet_token_faq.rst:11
msgid ""
"A key repository is required by keystone in order to create fernet tokens. "
"These keys are used to encrypt and decrypt the information that makes up the "
"payload of the token. Each key in the repository can have one of three "
"states. The state of the key determines how keystone uses a key with fernet "
"tokens. The different types are as follows:"
msgstr ""
#: ../keystone_fernet_token_faq.rst:18
msgid ""
"There is only ever one primary key in a key repository. The primary key is "
"allowed to encrypt and decrypt tokens. This key is always named as the "
"highest index in the repository."
msgstr ""
#: ../keystone_fernet_token_faq.rst:19
msgid "Primary key:"
msgstr ""
#: ../keystone_fernet_token_faq.rst:22
msgid ""
"A secondary key was at one point a primary key, but has been demoted in "
"place of another primary key. It is only allowed to decrypt tokens. Since it "
"was the primary at some point in time, its existence in the key repository "
"is justified. Keystone needs to be able to decrypt tokens that were created "
"with old primary keys."
msgstr ""
#: ../keystone_fernet_token_faq.rst:25
msgid "Secondary key:"
msgstr ""
#: ../keystone_fernet_token_faq.rst:28
msgid ""
"The staged key is a special key that shares some similarities with secondary "
"keys. There can only ever be one staged key in a repository and it must "
"exist. Just like secondary keys, staged keys have the ability to decrypt "
"tokens. Unlike secondary keys, staged keys have never been a primary key. In "
"fact, they are opposites since the staged key will always be the next "
"primary key. This helps clarify the name because they are the next key "
"staged to be the primary key. This key is always named as ``0`` in the key "
"repository."
msgstr ""
#: ../keystone_fernet_token_faq.rst:34
msgid "Staged key:"
msgstr ""
#: ../keystone_fernet_token_faq.rst:37
msgid "So, how does a staged key help me and why do I care about it?"
msgstr ""
#: ../keystone_fernet_token_faq.rst:39
msgid ""
"The fernet keys have a natural lifecycle. Each key starts as a staged key, "
"is promoted to be the primary key, and then demoted to be a secondary key. "
"New tokens can only be encrypted with a primary key. Secondary and staged "
"keys are never used to encrypt token. The staged key is a special key given "
"the order of events and the attributes of each type of key. The staged key "
"is the only key in the repository that has not had a chance to encrypt any "
"tokens yet, but it is still allowed to decrypt tokens. As an operator, this "
"gives you the chance to perform a key rotation on one keystone node, and "
"distribute the new key set over a span of time. This does not require the "
"distribution to take place in an ultra short period of time. Tokens "
"encrypted with a primary key can be decrypted, and validated, on other nodes "
"where that key is still staged."
msgstr ""
#: ../keystone_fernet_token_faq.rst:52
msgid "Where do I put my key repository?"
msgstr ""
#: ../keystone_fernet_token_faq.rst:54
msgid ""
"The key repository is specified using the ``key_repository`` option in the "
"keystone configuration file. The keystone process should be able to read and "
"write to this location but it should be kept secret otherwise. Currently, "
"keystone only supports file-backed key repositories."
msgstr ""
#: ../keystone_fernet_token_faq.rst:59
msgid "..code-block:: ini"
msgstr ""
#: ../keystone_fernet_token_faq.rst:61
msgid "[fernet_tokens] key_repository = /etc/keystone/fernet-keys/"
msgstr ""
#: ../keystone_fernet_token_faq.rst:65
msgid "What is the recommended way to rotate and distribute keys?"
msgstr ""
#: ../keystone_fernet_token_faq.rst:67
msgid ""
"The ``keystone-manage`` command line utility includes a key rotation "
"mechanism. This mechanism will initialize and rotate keys but does not make "
"an effort to distribute keys across keystone nodes. The distribution of keys "
"across a keystone deployment is best handled through configuration "
"management tooling. Use ``keystone-manage fernet_rotate`` to rotate the key "
"repository."
msgstr ""
#: ../keystone_fernet_token_faq.rst:74
msgid "Do fernet tokens still expires?"
msgstr ""
#: ../keystone_fernet_token_faq.rst:76
msgid ""
"Yes, fernet tokens can expire just like any other keystone token formats."
msgstr ""
#: ../keystone_fernet_token_faq.rst:79
msgid "Why should I choose fernet tokens over UUID tokens?"
msgstr ""
#: ../keystone_fernet_token_faq.rst:81
msgid ""
"Even though fernet tokens operate very similarly to UUID tokens, they do not "
"require persistence. The keystone token database no longer suffers bloat as "
"a side effect of authentication. Pruning expired tokens from the token "
"database is no longer required when using fernet tokens. Because fernet "
"tokens do not require persistence, they do not have to be replicated. As "
"long as each keystone node shares the same key repository, fernet tokens can "
"be created and validated instantly across nodes."
msgstr ""
#: ../keystone_fernet_token_faq.rst:90
msgid "Why should I choose fernet tokens over PKI or PKIZ tokens?"
msgstr ""
#: ../keystone_fernet_token_faq.rst:92
msgid ""
"The arguments for using fernet over PKI and PKIZ remain the same as UUID, in "
"addition to the fact that fernet tokens are much smaller than PKI and PKIZ "
"tokens. PKI and PKIZ tokens still require persistent storage and can "
"sometimes cause issues due to their size. This issue is mitigated when "
"switching to fernet because fernet tokens are kept under a 250 byte limit. "
"PKI and PKIZ tokens typically exceed 1600 bytes in length. The length of a "
"PKI or PKIZ token is dependent on the size of the deployment. Bigger service "
"catalogs will result in longer token lengths. This pattern does not exist "
"with fernet tokens because the contents of the encrypted payload is kept to "
"a minimum."
msgstr ""
#: ../keystone_fernet_token_faq.rst:103
msgid ""
"Should I rotate and distribute keys from the same keystone node every "
"rotation?"
msgstr ""
#: ../keystone_fernet_token_faq.rst:105
msgid ""
"No, but the relationship between rotation and distribution should be lock-"
"step. Once you rotate keys on one keystone node, the key repository from "
"that node should be distributed to the rest of the cluster. Once you confirm "
"that each node has the same key repository state, you could rotate and "
"distribute from any other node in the cluster."
msgstr ""
#: ../keystone_fernet_token_faq.rst:111
msgid ""
"If the rotation and distribution are not lock-step, a single keystone node "
"in the deployment will create tokens with a primary key that no other node "
"has as a staged key. This will cause tokens generated from one keystone node "
"to fail validation on other keystone nodes."
msgstr ""
#: ../keystone_fernet_token_faq.rst:117
msgid "How do I add new keystone nodes to a deployment?"
msgstr ""
#: ../keystone_fernet_token_faq.rst:119
msgid ""
"The keys used to create fernet tokens should be treated like super secret "
"configuration files, similar to an SSL secret key. Before a node is allowed "
"to join an existing cluster, issuing and validating tokens, it should have "
"the same key repository as the rest of the nodes in the cluster."
msgstr ""
#: ../keystone_fernet_token_faq.rst:125
msgid "How should I approach key distribution?"
msgstr ""
#: ../keystone_fernet_token_faq.rst:127
msgid ""
"Remember that key distribution is only required in multi-node keystone "
"deployments. If you only have one keystone node serving requests in your "
"deployment, key distribution is unnecessary."
msgstr ""
#: ../keystone_fernet_token_faq.rst:131
msgid ""
"Key distribution is a problem best approached from the deployment's current "
"configuration management system. Since not all deployments use the same "
"configuration management systems, it makes sense to explore options around "
"what is already available for managing keys, while keeping the secrecy of "
"the keys in mind. Many configuration management tools can leverage something "
"like ``rsync`` to manage key distribution."
msgstr ""
#: ../keystone_fernet_token_faq.rst:138
msgid ""
"Key rotation is a single operation that promotes the current staged key to "
"primary, creates a new staged key, and prunes old secondary keys. It is "
"easiest to do this on a single node and verify the rotation took place "
"properly before distributing the key repository to the rest of the cluster. "
"The concept behind the staged key breaks the expectation that key rotation "
"and key distribution have to be done in a single step. With the staged key, "
"we have time to inspect the new key repository before syncing state with the "
"rest of the cluster. Key distribution should be an operation that can run in "
"succession until it succeeds. The following might help illustrate the "
"isolation between key rotation and key distribution."
msgstr ""
#: ../keystone_fernet_token_faq.rst:149
msgid ""
"Ensure all keystone nodes in the deployment have the same key repository."
msgstr ""
#: ../keystone_fernet_token_faq.rst:150
msgid "Pick a keystone node in the cluster to rotate from."
msgstr ""
#: ../keystone_fernet_token_faq.rst:153
msgid ""
"If no, investigate issues with the particular keystone node you rotated keys "
"on. Fernet keys are small and the operation for rotation is trivial. There "
"should not be much room for error in key rotation. It is possible that the "
"user does not have the ability to write new keys to the key repository. Log "
"output from ``keystone-manage fernet_rotate`` should give more information "
"into specific failures."
msgstr ""
#: ../keystone_fernet_token_faq.rst:160
msgid ""
"If yes, you should see a new staged key. The old staged key should be the "
"new primary. Depending on the ``max_active_keys`` limit you might have "
"secondary keys that were pruned. At this point, the node that you rotated on "
"will be creating fernet tokens with a primary key that all other nodes "
"should have as the staged key. This is why we checked the state of all key "
"repositories in Step one. All other nodes in the cluster should be able to "
"decrypt tokens created with the new primary key. At this point, we are ready "
"to distribute the new key set."
msgstr ""
#: ../keystone_fernet_token_faq.rst:167
msgid "Rotate keys."
msgstr ""
#: ../keystone_fernet_token_faq.rst:167
msgid "Was is successful?"
msgstr ""
#: ../keystone_fernet_token_faq.rst:171
msgid ""
"If yes, you should be able to confirm that all nodes in the cluster have the "
"same key repository that was introduced in Step 3. All nodes in the cluster "
"will be creating tokens with the primary key that was promoted in Step 3. No "
"further action is required until the next schedule key rotation."
msgstr ""
#: ../keystone_fernet_token_faq.rst:176
msgid ""
"If no, try distributing again. Remember that we already rotated the "
"repository and performing another rotation at this point will result in "
"tokens that cannot be validated across certain hosts. Specifically, the "
"hosts that did not get the latest key set. You should be able to distribe "
"keys until it is successful. If certain nodes have issues syncing, it could "
"be permission or network issues and those should be resolved before "
"subsequent rotations."
msgstr ""
#: ../keystone_fernet_token_faq.rst:182
msgid "Distribute the new key repository."
msgstr ""
#: ../keystone_fernet_token_faq.rst:182
msgid "Was it successful?"
msgstr ""
#: ../keystone_fernet_token_faq.rst:185
msgid "How long should I keep my keys around?"
msgstr ""
#: ../keystone_fernet_token_faq.rst:187
msgid ""
"The fernet tokens that keystone creates are only secure as the keys creating "
"them. With staged keys the penalty of key rotation is low, allowing you to "
"err on the side of security and rotate weekly, daily, or even hourly. "
"Ultimately, this should be less time than it takes an attacker to break a "
"``AES256`` key and a ``SHA256 HMAC``."
msgstr ""
#: ../keystone_fernet_token_faq.rst:194
msgid "Is a fernet token still a bearer token?"
msgstr ""
#: ../keystone_fernet_token_faq.rst:196
msgid ""
"Yes, and they follow exactly the same validation path as UUID tokens, with "
"the exception of being written to, and read from, a back end. If someone "
"compromises your fernet token, they have the power to do all the operations "
"you are allowed to do."
msgstr ""
#: ../keystone_fernet_token_faq.rst:202
msgid "What if I need to revoke all my tokens?"
msgstr ""
#: ../keystone_fernet_token_faq.rst:204
msgid ""
"To invalidate every token issued from keystone and start fresh, remove the "
"current key repository, create a new key set, and redistribute it to all "
"nodes in the cluster. This will render every token issued from keystone as "
"invalid regardless if the token has actually expired. When a client goes to "
"re-authenticate, the new token will have been created with a new fernet key."
msgstr ""
#: ../keystone_fernet_token_faq.rst:211
msgid ""
"What can an attacker do if they compromise a fernet key in my deployment?"
msgstr ""
#: ../keystone_fernet_token_faq.rst:213
msgid ""
"If any key used in the key repository is compromised, an attacker will be "
"able to build their own tokens. If they know the ID of an administrator on a "
"project, they could generate administrator tokens for the project. They will "
"be able to generate their own tokens until the compromised key has been "
"removed from from the repository."
msgstr ""
#: ../keystone_fernet_token_faq.rst:220
msgid "I rotated keys and now tokens are invalidating early, what did I do?"
msgstr ""
#: ../keystone_fernet_token_faq.rst:222
msgid ""
"Using fernet tokens requires some awareness around token expiration and the "
"key lifecycle. You do not want to rotate so often that secondary keys are "
"removed that might still be needed to decrypt unexpired tokens. If this "
"happens, you will not be able to decrypt the token because the key the was "
"used to encrypt it is now gone. Only remove keys that you know are not being "
"used to encrypt or decrypt tokens."
msgstr ""
#: ../keystone_fernet_token_faq.rst:229
msgid ""
"For example, your token is valid for 24 hours and we want to rotate keys "
"every six hours. We will need to make sure tokens that were created at 08:00 "
"AM on Monday are still valid at 07:00 AM on Tuesday, assuming they were not "
"prematurely revoked. To accomplish this, we will want to make sure we set "
"``max_active_keys=6`` in our keystone configuration file. This will allow us "
"to hold all keys that might still be required to validate a previous token, "
"but keeps the key repository limited to only the keys that are needed."
msgstr ""
#: ../keystone_fernet_token_faq.rst:237
msgid ""
"The number of ``max_active_keys`` for a deployment can be determined by "
"dividing the token lifetime, in hours, by the frequency of rotation in hours "
"and adding two. Better illustrated as::"
msgstr ""
#: ../keystone_fernet_token_faq.rst:245
msgid ""
"The reason for adding two additional keys to the count is to include the "
"staged key and a buffer key. This can be shown based on the previous "
"example. We initially setup the key repository at 6:00 AM on Monday, and the "
"initial state looks like::"
msgstr ""
#: ../keystone_fernet_token_faq.rst:256
msgid ""
"All tokens created after 6:00 AM are encrypted with key ``1``. At 12:00 PM "
"we will rotate keys again, resulting in::"
msgstr ""
#: ../keystone_fernet_token_faq.rst:266
msgid ""
"We are still able to validate tokens created between 6:00 - 11:59 AM because "
"the ``1`` key still exists as a secondary key. All tokens issued after 12:00 "
"PM will be encrypted with key ``2``. At 6:00 PM we do our next rotation, "
"resulting in::"
msgstr ""
#: ../keystone_fernet_token_faq.rst:279
msgid ""
"It is still possible to validate tokens issued from 6:00 AM - 5:59 PM "
"because keys ``1`` and ``2`` exist as secondary keys. Every token issued "
"until 11:59 PM will be encrypted with key ``3``, and at 12:00 AM we do our "
"next rotation::"
msgstr ""
#: ../keystone_fernet_token_faq.rst:292
msgid ""
"Just like before, we can still validate tokens issued from 6:00 AM the "
"previous day until 5:59 AM today because keys ``1`` - ``4`` are present. At "
"6:00 AM, tokens issued from the previous day will start to expire and we do "
"our next scheduled rotation::"
msgstr ""
#: ../keystone_fernet_token_faq.rst:307
msgid ""
"Tokens will naturally expire after 6:00 AM, but we will not be able to "
"remove key ``1`` until the next rotation because it encrypted all tokens "
"from 6:00 AM to 12:00 PM the day before. Once we do our next rotation, which "
"is at 12:00 PM, the ``1`` key will be pruned from the repository::"
msgstr ""
#: ../keystone_fernet_token_faq.rst:322
msgid ""
"If keystone were to receive a token that was created between 6:00 AM and "
"12:00 PM the day before, encrypted with the ``1`` key, it would not be valid "
"because it was already expired. This makes it possible for us to remove the "
"``1`` key from the repository without negative validation side-effects."
msgstr ""
#: ../keystone_integrate_assignment_backend_ldap.rst:5
msgid "Integrate assignment back end with LDAP"
msgstr ""
#: ../keystone_integrate_assignment_backend_ldap.rst:7
msgid ""
"When you configure the OpenStack Identity service to use LDAP servers, you "
"can split authentication and authorization using the *assignment* feature. "
"Integrating the *assignment* back end with LDAP allows administrators to use "
"projects (tenant), roles, domains, and role assignments in LDAP."
msgstr ""
#: ../keystone_integrate_assignment_backend_ldap.rst:15
msgid ""
"Be aware of domain-specific back end limitations when configuring OpenStack "
"Identity. The OpenStack Identity service does not support domain-specific "
"assignment back ends. Using LDAP as an assignment back end is not "
"recommended."
msgstr ""
#: ../keystone_integrate_assignment_backend_ldap.rst:23
msgid ""
"For OpenStack Identity assignments to access LDAP servers, you must define "
"the destination LDAP server in the :file:`keystone.conf` file. For more "
"information, see :ref:`integrate-identity-with-ldap`."
msgstr ""
#: ../keystone_integrate_assignment_backend_ldap.rst:27
msgid "**To integrate assignment back ends with LDAP**"
msgstr ""
#: ../keystone_integrate_assignment_backend_ldap.rst:29
msgid ""
"Enable the assignment driver. In the ``[assignment]`` section, set the "
"``driver`` configuration key to ``keystone.assignment.backends.sql."
"Assignment``:"
msgstr ""
#: ../keystone_integrate_assignment_backend_ldap.rst:39
msgid ""
"Create the organizational units (OU) in the LDAP directory, and define their "
"corresponding location in the ``keystone.conf`` file:"
msgstr ""
#: ../keystone_integrate_assignment_backend_ldap.rst:53
msgid ""
"These schema attributes are extensible for compatibility with various "
"schemas. For example, this entry maps to the groupOfNames attribute in "
"Active Directory:"
msgstr ""
#: ../keystone_integrate_assignment_backend_ldap.rst:61
msgid ""
"A read-only implementation is recommended for LDAP integration. These "
"permissions are applied to object types in the ``keystone.conf`` file:"
msgstr ""
# #-#-#-#-# keystone_integrate_assignment_backend_ldap.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# keystone_integrate_identity_backend_ldap.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../keystone_integrate_assignment_backend_ldap.rst:75
#: ../keystone_integrate_identity_backend_ldap.rst:64
#: ../keystone_integrate_identity_backend_ldap.rst:166
msgid "Restart the OpenStack Identity service::"
msgstr ""
# #-#-#-#-# keystone_integrate_assignment_backend_ldap.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# keystone_integrate_identity_backend_ldap.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../keystone_integrate_assignment_backend_ldap.rst:82
#: ../keystone_integrate_identity_backend_ldap.rst:70
#: ../keystone_integrate_identity_backend_ldap.rst:172
#: ../keystone_integrate_identity_backend_ldap.rst:244
msgid ""
"During service restart, authentication and authorization are unavailable."
msgstr ""
#: ../keystone_integrate_assignment_backend_ldap.rst:85
msgid "**Additional LDAP integration settings.**"
msgstr ""
#: ../keystone_integrate_assignment_backend_ldap.rst:87
msgid ""
"Set these options in the :file:`/etc/keystone/keystone.conf` file for a "
"single LDAP server, or :file:`/etc/keystone/domains/keystone.DOMAIN_NAME."
"conf` files for multiple back ends."
msgstr ""
# #-#-#-#-# keystone_integrate_assignment_backend_ldap.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# keystone_integrate_identity_backend_ldap.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../keystone_integrate_assignment_backend_ldap.rst:92
#: ../keystone_integrate_identity_backend_ldap.rst:183
msgid "Use filters to control the scope of data presented through LDAP."
msgstr ""
#: ../keystone_integrate_assignment_backend_ldap.rst:103
msgid "Filtering method"
msgstr ""
# #-#-#-#-# keystone_integrate_assignment_backend_ldap.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# keystone_integrate_identity_backend_ldap.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../keystone_integrate_assignment_backend_ldap.rst:103
#: ../keystone_integrate_identity_backend_ldap.rst:189
msgid "Filters"
msgstr ""
#: ../keystone_integrate_assignment_backend_ldap.rst:106
msgid ""
"Mask account status values (include any additional attribute mappings) for "
"compatibility with various directory services. Superfluous accounts are "
"filtered with user\\_filter."
msgstr ""
# #-#-#-#-# keystone_integrate_assignment_backend_ldap.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# keystone_integrate_identity_backend_ldap.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../keystone_integrate_assignment_backend_ldap.rst:110
#: ../keystone_integrate_identity_backend_ldap.rst:196
msgid "Setting attribute ignore to list of attributes stripped off on update."
msgstr ""
#: ../keystone_integrate_assignment_backend_ldap.rst:129
msgid "Assignment attribute mapping"
msgstr ""
#: ../keystone_integrate_assignment_backend_ldap.rst:132
msgid ""
"An alternative method to determine if a project is enabled or not is to "
"check if that project is a member of the emulation group."
msgstr ""
#: ../keystone_integrate_assignment_backend_ldap.rst:135
msgid ""
"Use DN of the group entry to hold enabled projects when using enabled "
"emulation."
msgstr ""
# #-#-#-#-# keystone_integrate_assignment_backend_ldap.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# keystone_integrate_identity_backend_ldap.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../keystone_integrate_assignment_backend_ldap.rst:141
#: ../keystone_integrate_identity_backend_ldap.rst:235
msgid "Enabled emulation"
msgstr ""
#: ../keystone_integrate_identity_backend_ldap.rst:5
msgid "Integrate Identity back end with LDAP"
msgstr ""
#: ../keystone_integrate_identity_backend_ldap.rst:7
msgid ""
"The Identity back end contains information for users, groups, and group "
"member lists. Integrating the Identity back end with LDAP allows "
"administrators to use users and groups in LDAP."
msgstr ""
#: ../keystone_integrate_identity_backend_ldap.rst:13
msgid ""
"For OpenStack Identity service to access LDAP servers, you must define the "
"destination LDAP server in the ``keystone.conf`` file. For more information, "
"see :ref:`integrate-identity-with-ldap`."
msgstr ""
#: ../keystone_integrate_identity_backend_ldap.rst:17
msgid "**To integrate one Identity back end with LDAP**"
msgstr ""
#: ../keystone_integrate_identity_backend_ldap.rst:19
msgid ""
"Enable the LDAP Identity driver in the ``keystone.conf`` file. This allows "
"LDAP as an identity back end:"
msgstr ""
#: ../keystone_integrate_identity_backend_ldap.rst:28
msgid ""
"Create the organizational units (OU) in the LDAP directory, and define the "
"corresponding location in the :file:`keystone.conf` file:"
msgstr ""
#: ../keystone_integrate_identity_backend_ldap.rst:42
#: ../keystone_integrate_identity_backend_ldap.rst:143
msgid ""
"These schema attributes are extensible for compatibility with various "
"schemas. For example, this entry maps to the person attribute in Active "
"Directory:"
msgstr ""
#: ../keystone_integrate_identity_backend_ldap.rst:50
msgid ""
"A read-only implementation is recommended for LDAP integration. These "
"permissions are applied to object types in the :file:`keystone.conf`:"
msgstr ""
#: ../keystone_integrate_identity_backend_ldap.rst:73
msgid "**To integrate multiple Identity back ends with LDAP**"
msgstr ""
#: ../keystone_integrate_identity_backend_ldap.rst:75
msgid ""
"Set the following options in the :file:`/etc/keystone/keystone.conf` file:"
msgstr ""
#: ../keystone_integrate_identity_backend_ldap.rst:77
msgid "Enable the LDAP driver:"
msgstr ""
#: ../keystone_integrate_identity_backend_ldap.rst:85
msgid "Enable domain-specific drivers:"
msgstr ""
#: ../keystone_integrate_identity_backend_ldap.rst:93
msgid "Restart the service::"
msgstr ""
#: ../keystone_integrate_identity_backend_ldap.rst:97
msgid ""
"List the domains using the dashboard, or the OpenStackClient CLI. Refer to "
"the `Command List <http://docs.openstack.org/developer/python-"
"openstackclient/command-list.html>`__ for a list of OpenStackClient commands."
msgstr ""
#: ../keystone_integrate_identity_backend_ldap.rst:102
msgid "Create domains using OpenStack dashboard, or the OpenStackClient CLI."
msgstr ""
#: ../keystone_integrate_identity_backend_ldap.rst:104
msgid ""
"For each domain, create a domain-specific configuration file in the :file:`/"
"etc/keystone/domains` directory. Use the file naming convention :file:"
"`keystone.DOMAIN_NAME.conf`, where DOMAIN\\_NAME is the domain name assigned "
"in the previous step."
msgstr ""
#: ../keystone_integrate_identity_backend_ldap.rst:111
msgid ""
"The options set in the :file:`/etc/keystone/domains/keystone.DOMAIN_NAME."
"conf` file will override options in the :file:`/etc/keystone/keystone.conf` "
"file."
msgstr ""
#: ../keystone_integrate_identity_backend_ldap.rst:115
msgid ""
"Define the destination LDAP server in the :file:`/etc/keystone/domains/"
"keystone.DOMAIN_NAME.conf` file. For example:"
msgstr ""
#: ../keystone_integrate_identity_backend_ldap.rst:128
msgid ""
"Create the organizational units (OU) in the LDAP directories, and define "
"their corresponding locations in the :file:`/etc/keystone/domains/keystone."
"DOMAIN_NAME.conf` file. For example:"
msgstr ""
#: ../keystone_integrate_identity_backend_ldap.rst:151
msgid ""
"A read-only implementation is recommended for LDAP integration. These "
"permissions are applied to object types in the :file:`/etc/keystone/domains/"
"keystone.DOMAIN_NAME.conf` file:"
msgstr ""
# #-#-#-#-# keystone_integrate_identity_backend_ldap.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# keystone_integrate_with_ldap.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../keystone_integrate_identity_backend_ldap.rst:175
#: ../keystone_integrate_with_ldap.rst:79
msgid "**Additional LDAP integration settings**"
msgstr ""
# #-#-#-#-# keystone_integrate_identity_backend_ldap.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# keystone_integrate_with_ldap.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../keystone_integrate_identity_backend_ldap.rst:177
#: ../keystone_integrate_with_ldap.rst:81
msgid ""
"Set these options in the :file:`/etc/keystone/keystone.conf` file for a "
"single LDAP server, or :file:`/etc/keystone/domains/keystone.DOMAIN_NAME."
"conf` files for multiple back ends. Example configurations appear below each "
"setting summary:"
msgstr ""
#: ../keystone_integrate_identity_backend_ldap.rst:192
msgid ""
"Mask account status values (include any additional attribute mappings) for "
"compatibility with various directory services. Superfluous accounts are "
"filtered with ``user_filter``."
msgstr ""
#: ../keystone_integrate_identity_backend_ldap.rst:199
msgid ""
"For example, you can mask Active Directory account status attributes in the :"
"file:`keystone.conf` file:"
msgstr ""
#: ../keystone_integrate_identity_backend_ldap.rst:222
msgid "Identity attribute mapping"
msgstr ""
#: ../keystone_integrate_identity_backend_ldap.rst:225
msgid ""
"An alternative method to determine if a user is enabled or not is by "
"checking if that user is a member of the emulation group."
msgstr ""
#: ../keystone_integrate_identity_backend_ldap.rst:228
msgid ""
"Use DN of the group entry to hold enabled user when using enabled emulation."
msgstr ""
#: ../keystone_integrate_identity_backend_ldap.rst:237
msgid ""
"When you have finished configuration, restart the OpenStack Identity "
"service::"
msgstr ""
#: ../keystone_integrate_with_ldap.rst:5
msgid "Integrate Identity with LDAP"
msgstr ""
#: ../keystone_integrate_with_ldap.rst:14
msgid ""
"The OpenStack Identity service supports integration with existing LDAP "
"directories for authentication and authorization services."
msgstr ""
#: ../keystone_integrate_with_ldap.rst:17
msgid ""
"When the OpenStack Identity service is configured to use LDAP back ends, you "
"can split authentication (using the *identity* feature) and authorization "
"(using the *assignment* feature)."
msgstr ""
#: ../keystone_integrate_with_ldap.rst:21
msgid ""
"The *identity* feature enables administrators to manage users and groups by "
"each domain or the OpenStack Identity service entirely."
msgstr ""
#: ../keystone_integrate_with_ldap.rst:24
msgid ""
"The *assignment* feature enables administrators to manage project role "
"authorization using the OpenStack Identity service SQL database, while "
"providing user authentication through the LDAP directory."
msgstr ""
#: ../keystone_integrate_with_ldap.rst:30
msgid ""
"For the OpenStack Identity service to access LDAP servers, you must enable "
"the ``authlogin_nsswitch_use_ldap`` boolean value for SELinux on the server "
"running the OpenStack Identity service. To enable and make the option "
"persistent across reboots, set the following boolean value as the root user:"
msgstr ""
#: ../keystone_integrate_with_ldap.rst:40
msgid ""
"The Identity configuration is split into two separate back ends; identity "
"(back end for users and groups), and assignments (back end for domains, "
"projects, roles, role assignments). To configure Identity, set options in "
"the :file:`/etc/keystone/keystone.conf` file. See :ref:`integrate-identity-"
"backend-ldap` for Identity back end configuration examples and :ref:"
"`integrate-assignment-backend-ldap` for assignment back end configuration "
"examples. Modify these examples as needed."
msgstr ""
#: ../keystone_integrate_with_ldap.rst:50
msgid ""
"Multiple back ends are supported. You can integrate the OpenStack Identity "
"service with a single LDAP server (configure both identity and assignments "
"to LDAP, or set identity and assignments back end with SQL or LDAP), or "
"multiple back ends using domain-specific configuration files."
msgstr ""
#: ../keystone_integrate_with_ldap.rst:57
msgid "**To define the destination LDAP server**"
msgstr ""
#: ../keystone_integrate_with_ldap.rst:59
msgid "Define the destination LDAP server in the :file:`keystone.conf` file:"
msgstr ""
#: ../keystone_integrate_with_ldap.rst:71
msgid ""
"Configure ``dumb_member`` to true if your environment requires the "
"``use_dumb_member`` variable."
msgstr ""
#: ../keystone_integrate_with_ldap.rst:86
msgid "**Query option**"
msgstr ""
#: ../keystone_integrate_with_ldap.rst:91
msgid ""
"Use ``query_scope`` to control the scope level of data presented (search "
"only the first level or search an entire sub-tree) through LDAP."
msgstr ""
#: ../keystone_integrate_with_ldap.rst:94
msgid ""
"Use ``page_size`` to control the maximum results per page. A value of zero "
"disables paging."
msgstr ""
#: ../keystone_integrate_with_ldap.rst:96
msgid ""
"Use ``alias_dereferencing`` to control the LDAP dereferencing option for "
"queries."
msgstr ""
#: ../keystone_integrate_with_ldap.rst:98
msgid ""
"Use ``chase_referrals`` to override the system's default referral chasing "
"behavior for queries."
msgstr ""
#: ../keystone_integrate_with_ldap.rst:109
msgid "**Debug**"
msgstr ""
#: ../keystone_integrate_with_ldap.rst:111
msgid ""
"Use ``debug_level`` to set the LDAP debugging level for LDAP calls. A value "
"of zero means that debugging is not enabled."
msgstr ""
#: ../keystone_integrate_with_ldap.rst:121
msgid ""
"This value is a bitmask, consult your LDAP documentation for possible values."
msgstr ""
#: ../keystone_integrate_with_ldap.rst:124
msgid "**Connection pooling**"
msgstr ""
#: ../keystone_integrate_with_ldap.rst:126
msgid ""
"Use ``use_pool`` to enable LDAP connection pooling. Configure the connection "
"pool size, maximum retry, reconnect trials, timeout (-1 indicates indefinite "
"wait) and lifetime in seconds."
msgstr ""
#: ../keystone_integrate_with_ldap.rst:140
msgid "**Connection pooling for end user authentication**"
msgstr ""
#: ../keystone_integrate_with_ldap.rst:142
msgid ""
"Use ``use_auth_pool`` to enable LDAP connection pooling for end user "
"authentication. Configure the connection pool size and lifetime in seconds."
msgstr ""
#: ../keystone_integrate_with_ldap.rst:153
msgid ""
"When you have finished the configuration, restart the OpenStack Identity "
"service::"
msgstr ""
#: ../keystone_integrate_with_ldap.rst:160
msgid ""
"During the service restart, authentication and authorization are unavailable."
msgstr ""
#: ../keystone_secure_identity_to_ldap_backend.rst:2
msgid "Secure the OpenStack Identity service connection to an LDAP back end"
msgstr ""
#: ../keystone_secure_identity_to_ldap_backend.rst:4
msgid ""
"The Identity service supports the use of TLS to encrypt LDAP traffic. Before "
"configuring this, you must first verify where your certificate authority "
"file is located. For more information, see the `OpenStack Security Guide SSL "
"introduction <http://docs.openstack.org/ security-guide/secure-communication/"
"introduction-to-ssl-and-tls.html>`_."
msgstr ""
#: ../keystone_secure_identity_to_ldap_backend.rst:10
msgid "Once you verify the location of your certificate authority file:"
msgstr ""
#: ../keystone_secure_identity_to_ldap_backend.rst:12
msgid "**To configure TLS encryption on LDAP traffic**"
msgstr ""
#: ../keystone_secure_identity_to_ldap_backend.rst:14
msgid "Open the :file:`/etc/keystone/keystone.conf` configuration file."
msgstr ""
#: ../keystone_secure_identity_to_ldap_backend.rst:16
msgid "Find the ``[ldap]`` section."
msgstr ""
#: ../keystone_secure_identity_to_ldap_backend.rst:18
msgid ""
"In the ``[ldap]`` section, set the ``use_tls`` configuration key to "
"``True``. Doing so will enable TLS."
msgstr ""
#: ../keystone_secure_identity_to_ldap_backend.rst:21
msgid ""
"Configure the Identity service to use your certificate authorities file. To "
"do so, set the ``tls_cacertfile`` configuration key in the ``ldap`` section "
"to the certificate authorities file's path."
msgstr ""
#: ../keystone_secure_identity_to_ldap_backend.rst:27
msgid ""
"You can also set the ``tls_cacertdir`` (also in the ``ldap`` section) to the "
"directory where all certificate authorities files are kept. If both "
"``tls_cacertfile`` and ``tls_cacertdir`` are set, then the latter will be "
"ignored."
msgstr ""
#: ../keystone_secure_identity_to_ldap_backend.rst:32
msgid ""
"Specify what client certificate checks to perform on incoming TLS sessions "
"from the LDAP server. To do so, set the ``tls_req_cert`` configuration key "
"in the ``[ldap]`` section to ``demand``, ``allow``, or ``never``:"
msgstr ""
#: ../keystone_secure_identity_to_ldap_backend.rst:40
msgid ""
"``demand`` - The LDAP server always receives certificate requests. The "
"session terminates if no certificate is provided, or if the certificate "
"provided cannot be verified against the existing certificate authorities "
"file."
msgstr ""
#: ../keystone_secure_identity_to_ldap_backend.rst:44
msgid ""
"``allow`` - The LDAP server always receives certificate requests. The "
"session will proceed as normal even if a certificate is not provided. If a "
"certificate is provided but it cannot be verified against the existing "
"certificate authorities file, the certificate will be ignored and the "
"session will proceed as normal."
msgstr ""
#: ../keystone_secure_identity_to_ldap_backend.rst:50
msgid "``never`` - A certificate will never be requested."
msgstr ""
#: ../keystone_secure_identity_to_ldap_backend.rst:52
msgid ""
"On distributions that include openstack-config, you can configure TLS "
"encryption on LDAP traffic by running the following commands instead::"
msgstr ""
#: ../keystone_secure_identity_to_ldap_backend.rst:64
msgid ""
"``CA_FILE`` is the absolute path to the certificate authorities file that "
"should be used to encrypt LDAP traffic."
msgstr ""
#: ../keystone_secure_identity_to_ldap_backend.rst:67
msgid ""
"``CERT_BEHAVIOR`` specifies what client certificate checks to perform on an "
"incoming TLS session from the LDAP server (``demand``, ``allow``, or "
"``never``)."
msgstr ""
#: ../keystone_token-binding.rst:3
msgid "Configure Identity service for token binding"
msgstr ""
#: ../keystone_token-binding.rst:5
msgid ""
"Token binding embeds information from an external authentication mechanism, "
"such as a Kerberos server or X.509 certificate, inside a token. By using "
"token binding, a client can enforce the use of a specified external "
"authentication mechanism with the token. This additional security mechanism "
"ensures that if a token is stolen, for example, it is not usable without "
"external authentication."
msgstr ""
#: ../keystone_token-binding.rst:12
msgid ""
"You configure the authentication types for a token binding in the :file:"
"`keystone.conf` file:"
msgstr ""
#: ../keystone_token-binding.rst:20
msgid "or"
msgstr ""
#: ../keystone_token-binding.rst:27
msgid "Currently ``kerberos`` and ``x509`` are supported."
msgstr ""
#: ../keystone_token-binding.rst:29
msgid ""
"To enforce checking of token binding, set the ``enforce_token_bind`` option "
"to one of these modes:"
msgstr ""
#: ../keystone_token-binding.rst:33
msgid "Disables token bind checking."
msgstr ""
#: ../keystone_token-binding.rst:33
msgid "``disabled``"
msgstr ""
#: ../keystone_token-binding.rst:36
msgid ""
"Enables bind checking. If a token is bound to an unknown authentication "
"mechanism, the server ignores it. The default is this mode."
msgstr ""
#: ../keystone_token-binding.rst:38
msgid "``permissive``"
msgstr ""
#: ../keystone_token-binding.rst:41
msgid ""
"Enables bind checking. If a token is bound to an unknown authentication "
"mechanism, the server rejects it."
msgstr ""
#: ../keystone_token-binding.rst:42
msgid "``strict``"
msgstr ""
#: ../keystone_token-binding.rst:45
msgid ""
"Enables bind checking. Requires use of at least authentication mechanism for "
"tokens."
msgstr ""
#: ../keystone_token-binding.rst:46
msgid "``required``"
msgstr ""
#: ../keystone_token-binding.rst:49
msgid ""
"Enables bind checking. Requires use of kerberos as the authentication "
"mechanism for tokens:"
msgstr ""
#: ../keystone_token-binding.rst:55
msgid "``kerberos``"
msgstr ""
#: ../keystone_token-binding.rst:58
msgid ""
"Enables bind checking. Requires use of X.509 as the authentication mechanism "
"for tokens:"
msgstr ""
#: ../keystone_token-binding.rst:63
msgid "``x509``"
msgstr ""
#: ../keystone_tokens.rst:3
msgid "Keystone token providers"
msgstr ""
#: ../keystone_tokens.rst:5
msgid ""
"Tokens are used to interact with the various OpenStack APIs. The token type "
"issued by keystone is configurable through the :file:`etc/keystone.conf` "
"file. Currently, there are four supported token types and they include UUID, "
"fernet, PKI, and PKIZ."
msgstr ""
#: ../keystone_tokens.rst:11
msgid "UUID tokens"
msgstr ""
#: ../keystone_tokens.rst:13
msgid ""
"UUID was the first token type supported and is currently the default token "
"provider. UUID tokens are 32 bytes in length and must be persisted in a back "
"end. Clients must pass their UUID token to the Identity service in order to "
"validate it."
msgstr ""
#: ../keystone_tokens.rst:19
msgid "Fernet tokens"
msgstr ""
#: ../keystone_tokens.rst:21
msgid ""
"The fernet token format was introduced in the OpenStack Kilo release. Unlike "
"the other token types mentioned in this document, fernet tokens do not need "
"to be persisted in a back end. ``AES256`` encryption is used to protect the "
"information stored in the token and integrity is verified with a ``SHA256 "
"HMAC`` signature. Only the Identity service should have access to the keys "
"used to encrypt and decrypt fernet tokens. Like UUID tokens, fernet tokens "
"must be passed back to the Identity service in order to validate them. For "
"more information on the fernet token type, see the :doc:"
"`keystone_fernet_token_faq`."
msgstr ""
#: ../keystone_tokens.rst:31
msgid "PKI and PKIZ tokens"
msgstr ""
#: ../keystone_tokens.rst:33
msgid ""
"PKI tokens are signed documents that contain the authentication context, as "
"well as the service catalog. Depending on the size of the OpenStack "
"deployment, these tokens can be very long. The Identity service uses public/"
"private key pairs and certificates in order to create and validate PKI "
"tokens."
msgstr ""
#: ../keystone_tokens.rst:38
msgid ""
"The same concepts from PKI tokens apply to PKIZ tokens. The only difference "
"between the two is PKIZ tokens are compressed to help mitigate the size "
"issues of PKI. For more information on the certificate setup for PKI and "
"PKIZ tokens, see the :doc:`keystone_certificates_for_pki`."
msgstr ""
#: ../keystone_use_trusts.rst:3
msgid "Use trusts"
msgstr ""
#: ../keystone_use_trusts.rst:5
msgid ""
"OpenStack Identity manages authentication and authorization. A trust is an "
"OpenStack Identity extension that enables delegation and, optionally, "
"impersonation through ``keystone``. A trust extension defines a relationship "
"between:"
msgstr ""
#: ../keystone_use_trusts.rst:11
msgid "**Trustor**"
msgstr ""
#: ../keystone_use_trusts.rst:11
msgid "The user delegating a limited set of their own rights to another user."
msgstr ""
#: ../keystone_use_trusts.rst:14
msgid "The user trust is being delegated to, for a limited time."
msgstr ""
#: ../keystone_use_trusts.rst:16
msgid ""
"The trust can eventually allow the trustee to impersonate the trustor. For "
"security reasons, some safeties are added. For example, if a trustor loses a "
"given role, any trusts the user issued with that role, and the related "
"tokens, are automatically revoked."
msgstr ""
#: ../keystone_use_trusts.rst:19
msgid "**Trustee**"
msgstr ""
#: ../keystone_use_trusts.rst:21
msgid "The delegation parameters are:"
msgstr ""
#: ../keystone_use_trusts.rst:24
msgid "**User ID**"
msgstr ""
#: ../keystone_use_trusts.rst:24
msgid "The user IDs for the trustor and trustee."
msgstr ""
#: ../keystone_use_trusts.rst:27
msgid ""
"The delegated privileges are a combination of a tenant ID and a number of "
"roles that must be a subset of the roles assigned to the trustor."
msgstr ""
#: ../keystone_use_trusts.rst:31
msgid ""
"If you omit all privileges, nothing is delegated. You cannot delegate "
"everything."
msgstr ""
#: ../keystone_use_trusts.rst:32
msgid "**Privileges**"
msgstr ""
#: ../keystone_use_trusts.rst:35
msgid ""
"Defines whether or not the delegation is recursive. If it is recursive, "
"defines the delegation chain length."
msgstr ""
#: ../keystone_use_trusts.rst:38
msgid "Specify one of the following values:"
msgstr ""
#: ../keystone_use_trusts.rst:40
msgid "``0``. The delegate cannot delegate these permissions further."
msgstr ""
#: ../keystone_use_trusts.rst:42
msgid ""
"``1``. The delegate can delegate the permissions to any set of delegates but "
"the latter cannot delegate further."
msgstr ""
#: ../keystone_use_trusts.rst:45
msgid "**Delegation depth**"
msgstr ""
#: ../keystone_use_trusts.rst:45
msgid "``inf``. The delegation is infinitely recursive."
msgstr ""
#: ../keystone_use_trusts.rst:48
msgid "A list of endpoints associated with the delegation."
msgstr ""
#: ../keystone_use_trusts.rst:50
msgid ""
"This parameter further restricts the delegation to the specified endpoints "
"only. If you omit the endpoints, the delegation is useless. A special value "
"of ``all_endpoints`` allows the trust to be used by all endpoints associated "
"with the delegated tenant."
msgstr ""
#: ../keystone_use_trusts.rst:53
msgid "**Endpoints**"
msgstr ""
#: ../keystone_use_trusts.rst:55
msgid "**Duration**"
msgstr ""
#: ../keystone_use_trusts.rst:56
msgid "(Optional) Comprised of the start time and end time for the trust."
msgstr ""
# #-#-#-#-# networking.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# shared_file_systems_networking.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../networking.rst:5 ../shared_file_systems_networking.rst:5
msgid "Networking"
msgstr ""
#: ../networking.rst:7
msgid ""
"Learn OpenStack Networking concepts, architecture, and basic and advanced "
"``neutron`` and ``nova`` command-line interface (CLI) commands."
msgstr ""
#: ../networking_adv-config.rst:3
msgid "Advanced configuration options"
msgstr ""
#: ../networking_adv-config.rst:5
msgid ""
"This section describes advanced configuration options for various system "
"components. For example, configuration options where the default works but "
"that the user wants to customize options. After installing from packages, ``"
"$NEUTRON_CONF_DIR`` is :file:`/etc/neutron`."
msgstr ""
#: ../networking_adv-config.rst:11
msgid "L3 metering agent"
msgstr ""
#: ../networking_adv-config.rst:13
msgid ""
"You can run an L3 metering agent that enables layer-3 traffic metering. In "
"general, you should launch the metering agent on all nodes that run the L3 "
"agent:"
msgstr ""
#: ../networking_adv-config.rst:22
msgid ""
"You must configure a driver that matches the plug-in that runs on the "
"service. The driver adds metering to the routing interface."
msgstr ""
#: ../networking_adv-config.rst:26
msgid "Option"
msgstr ""
#: ../networking_adv-config.rst:28
msgid "**Open vSwitch**"
msgstr ""
#: ../networking_adv-config.rst:30 ../networking_adv-config.rst:36
msgid "interface\\_driver ($NEUTRON\\_CONF\\_DIR/metering\\_agent.ini)"
msgstr ""
#: ../networking_adv-config.rst:31
msgid "neutron.agent.linux.interface. OVSInterfaceDriver"
msgstr ""
#: ../networking_adv-config.rst:34
msgid "**Linux Bridge**"
msgstr ""
#: ../networking_adv-config.rst:37
msgid "neutron.agent.linux.interface. BridgeInterfaceDriver"
msgstr ""
#: ../networking_adv-config.rst:42
msgid "Namespace"
msgstr ""
#: ../networking_adv-config.rst:44
msgid ""
"The metering agent and the L3 agent must have the same network namespaces "
"configuration."
msgstr ""
#: ../networking_adv-config.rst:49
msgid ""
"If the Linux installation does not support network namespaces, you must "
"disable network namespaces in the L3 metering configuration file. The "
"default value of the ``use_namespaces`` option is ``True``."
msgstr ""
#: ../networking_adv-config.rst:59
msgid "L3 metering driver"
msgstr ""
#: ../networking_adv-config.rst:61
msgid ""
"You must configure any driver that implements the metering abstraction. "
"Currently the only available implementation uses iptables for metering."
msgstr ""
#: ../networking_adv-config.rst:70
msgid "L3 metering service driver"
msgstr ""
#: ../networking_adv-config.rst:72
msgid ""
"To enable L3 metering, you must set the following option in the :file:"
"`neutron.conf` file on the host that runs neutron-server:"
msgstr ""
#: ../networking_adv-features.rst:0
msgid "**Basic L3 Operations**"
msgstr ""
#: ../networking_adv-features.rst:0
msgid "**Basic L3 operations**"
msgstr ""
#: ../networking_adv-features.rst:0
msgid "**Basic VMware NSX QoS operations**"
msgstr ""
#: ../networking_adv-features.rst:0
msgid "**Basic security group operations**"
msgstr ""
#: ../networking_adv-features.rst:0
msgid "**Big Switch Router rule attributes**"
msgstr ""
#: ../networking_adv-features.rst:0
msgid ""
"**Configuration options for tuning operational status synchronization in the "
"NSX plug-in**"
msgstr ""
#: ../networking_adv-features.rst:0
msgid "**Provider network attributes**"
msgstr ""
#: ../networking_adv-features.rst:3
msgid "Advanced features through API extensions"
msgstr ""
#: ../networking_adv-features.rst:5
msgid ""
"Several plug-ins implement API extensions that provide capabilities similar "
"to what was available in nova-network. These plug-ins are likely to be of "
"interest to the OpenStack community."
msgstr ""
#: ../networking_adv-features.rst:10
msgid "Provider networks"
msgstr ""
#: ../networking_adv-features.rst:12
msgid ""
"Networks can be categorized as either tenant networks or provider networks. "
"Tenant networks are created by normal users and details about how they are "
"physically realized are hidden from those users. Provider networks are "
"created with administrative credentials, specifying the details of how the "
"network is physically realized, usually to match some existing network in "
"the data center."
msgstr ""
#: ../networking_adv-features.rst:19
msgid ""
"Provider networks enable cloud administrators to create Networking networks "
"that map directly to the physical networks in the data center. This is "
"commonly used to give tenants direct access to a public network that can be "
"used to reach the Internet. It might also be used to integrate with VLANs in "
"the network that already have a defined meaning (for example, enable a VM "
"from the marketing department to be placed on the same VLAN as bare-metal "
"marketing hosts in the same data center)."
msgstr ""
#: ../networking_adv-features.rst:27
msgid ""
"The provider extension allows administrators to explicitly manage the "
"relationship between Networking virtual networks and underlying physical "
"mechanisms such as VLANs and tunnels. When this extension is supported, "
"Networking client users with administrative privileges see additional "
"provider attributes on all virtual networks and are able to specify these "
"attributes in order to create provider networks."
msgstr ""
#: ../networking_adv-features.rst:34
msgid ""
"The provider extension is supported by the Open vSwitch and Linux Bridge "
"plug-ins. Configuration of these plug-ins requires familiarity with this "
"extension."
msgstr ""
#: ../networking_adv-features.rst:39
msgid "Terminology"
msgstr ""
#: ../networking_adv-features.rst:41
msgid ""
"A number of terms are used in the provider extension and in the "
"configuration of plug-ins supporting the provider extension:"
msgstr ""
#: ../networking_adv-features.rst:44
msgid "**Provider extension terminology**"
msgstr ""
#: ../networking_adv-features.rst:47
msgid "Term"
msgstr ""
#: ../networking_adv-features.rst:49
msgid "**virtual network**"
msgstr ""
#: ../networking_adv-features.rst:49
msgid ""
"A Networking L2 network (identified by a UUID and optional name) whose ports "
"can be attached as vNICs to Compute instances and to various Networking "
"agents. The Open vSwitch and Linux Bridge plug-ins each support several "
"different mechanisms to realize virtual networks."
msgstr ""
#: ../networking_adv-features.rst:56
msgid "**physical network**"
msgstr ""
#: ../networking_adv-features.rst:56
msgid ""
"A network connecting virtualization hosts (such as compute nodes) with each "
"other and with other network resources. Each physical network might support "
"multiple virtual networks. The provider extension and the plug-in "
"configurations identify physical networks using simple string names."
msgstr ""
#: ../networking_adv-features.rst:63
msgid "**tenant network**"
msgstr ""
#: ../networking_adv-features.rst:63
msgid ""
"A virtual network that a tenant or an administrator creates. The physical "
"details of the network are not exposed to the tenant."
msgstr ""
#: ../networking_adv-features.rst:67
msgid "**provider network**"
msgstr ""
#: ../networking_adv-features.rst:67
msgid ""
"A virtual network administratively created to map to a specific network in "
"the data center, typically to enable direct access to non-OpenStack "
"resources on that network. Tenants can be given access to provider networks."
msgstr ""
#: ../networking_adv-features.rst:73
msgid "**VLAN network**"
msgstr ""
#: ../networking_adv-features.rst:73
msgid ""
"A virtual network implemented as packets on a specific physical network "
"containing IEEE 802.1Q headers with a specific VID field value. VLAN "
"networks sharing the same physical network are isolated from each other at "
"L2 and can even have overlapping IP address spaces. Each distinct physical "
"network supporting VLAN networks is treated as a separate VLAN trunk, with a "
"distinct space of VID values. Valid VID values are 1 through 4094."
msgstr ""
#: ../networking_adv-features.rst:84
msgid "**flat network**"
msgstr ""
#: ../networking_adv-features.rst:84
msgid ""
"A virtual network implemented as packets on a specific physical network "
"containing no IEEE 802.1Q header. Each physical network can realize at most "
"one flat network."
msgstr ""
#: ../networking_adv-features.rst:89
msgid "**local network**"
msgstr ""
#: ../networking_adv-features.rst:89
msgid ""
"A virtual network that allows communication within each host, but not across "
"a network. Local networks are intended mainly for single-node test "
"scenarios, but can have other uses."
msgstr ""
#: ../networking_adv-features.rst:94
msgid "**GRE network**"
msgstr ""
#: ../networking_adv-features.rst:94
msgid ""
"A virtual network implemented as network packets encapsulated using GRE. GRE "
"networks are also referred to as *tunnels*. GRE tunnel packets are routed by "
"the IP routing table for the host, so GRE networks are not associated by "
"Networking with specific physical networks."
msgstr ""
#: ../networking_adv-features.rst:101
msgid "**Virtual Extensible LAN (VXLAN) network**"
msgstr ""
#: ../networking_adv-features.rst:102
msgid ""
"VXLAN is a proposed encapsulation protocol for running an overlay network on "
"existing Layer 3 infrastructure. An overlay network is a virtual network "
"that is built on top of existing network Layer 2 and Layer 3 technologies to "
"support elastic compute architectures."
msgstr ""
#: ../networking_adv-features.rst:110
msgid ""
"The ML2, Open vSwitch, and Linux Bridge plug-ins support VLAN networks, flat "
"networks, and local networks. Only the ML2 and Open vSwitch plug-ins "
"currently support GRE and VXLAN networks, provided that the required "
"features exist in the hosts Linux kernel, Open vSwitch, and iproute2 "
"packages."
msgstr ""
#: ../networking_adv-features.rst:117
msgid "Provider attributes"
msgstr ""
#: ../networking_adv-features.rst:119
msgid ""
"The provider extension extends the Networking network resource with these "
"attributes:"
msgstr ""
# #-#-#-#-# networking_adv-features.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# networking_config-identity.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../networking_adv-features.rst:127 ../networking_adv-features.rst:708
#: ../networking_config-identity.rst:166
msgid "Attribute name"
msgstr ""
#: ../networking_adv-features.rst:129
msgid "Default Value"
msgstr ""
#: ../networking_adv-features.rst:131
msgid "provider: network\\_type"
msgstr ""
#: ../networking_adv-features.rst:132 ../networking_adv-features.rst:143
msgid "String"
msgstr ""
#: ../networking_adv-features.rst:133 ../networking_adv-features.rst:152
msgid "N/A"
msgstr ""
#: ../networking_adv-features.rst:134
msgid ""
"The physical mechanism by which the virtual network is implemented. Possible "
"values are ``flat``, ``vlan``, ``local``, ``gre``, and ``vxlan``, "
"corresponding to flat networks, VLAN networks, local networks, GRE networks, "
"and VXLAN networks as defined above. All types of provider networks can be "
"created by administrators, while tenant networks can be implemented as "
"``vlan``, ``gre``, ``vxlan``, or ``local`` network types depending on plug-"
"in configuration."
msgstr ""
#: ../networking_adv-features.rst:142
msgid "provider: physical_network"
msgstr ""
#: ../networking_adv-features.rst:144
msgid ""
"If a physical network named \"default\" has been configured and if provider:"
"network_type is ``flat`` or ``vlan``, then \"default\" is used."
msgstr ""
#: ../networking_adv-features.rst:147
msgid ""
"The name of the physical network over which the virtual network is "
"implemented for flat and VLAN networks. Not applicable to the ``local`` or "
"``gre`` network types."
msgstr ""
#: ../networking_adv-features.rst:150
msgid "provider:segmentation_id"
msgstr ""
#: ../networking_adv-features.rst:151
msgid "Integer"
msgstr ""
#: ../networking_adv-features.rst:153
msgid ""
"For VLAN networks, the VLAN VID on the physical network that realizes the "
"virtual network. Valid VLAN VIDs are 1 through 4094. For GRE networks, the "
"tunnel ID. Valid tunnel IDs are any 32 bit unsigned integer. Not applicable "
"to the ``flat`` or ``local`` network types."
msgstr ""
#: ../networking_adv-features.rst:159
msgid ""
"To view or set provider extended attributes, a client must be authorized for "
"the ``extension:provider_network:view`` and ``extension:provider_network:"
"set`` actions in the Networking policy configuration. The default Networking "
"configuration authorizes both actions for users with the admin role. An "
"authorized client or an administrative user can view and set the provider "
"extended attributes through Networking API calls. See the section called :"
"ref:`Authentication and authorization` for details on policy configuration."
msgstr ""
#: ../networking_adv-features.rst:171
msgid "L3 routing and NAT"
msgstr ""
#: ../networking_adv-features.rst:173
msgid ""
"The Networking API provides abstract L2 network segments that are decoupled "
"from the technology used to implement the L2 network. Networking includes an "
"API extension that provides abstract L3 routers that API users can "
"dynamically provision and configure. These Networking routers can connect "
"multiple L2 Networking networks and can also provide a gateway that connects "
"one or more private L2 networks to a shared external network. For example, a "
"public network for access to the Internet. See the *OpenStack Configuration "
"Reference* for details on common models of deploying Networking L3 routers."
msgstr ""
#: ../networking_adv-features.rst:183
msgid ""
"The L3 router provides basic NAT capabilities on gateway ports that uplink "
"the router to external networks. This router SNATs all traffic by default "
"and supports floating IPs, which creates a static one-to-one mapping from a "
"public IP on the external network to a private IP on one of the other "
"subnets attached to the router. This allows a tenant to selectively expose "
"VMs on private networks to other hosts on the external network (and often to "
"all hosts on the Internet). You can allocate and map floating IPs from one "
"port to another, as needed."
msgstr ""
#: ../networking_adv-features.rst:193
msgid "Basic L3 operations"
msgstr ""
#: ../networking_adv-features.rst:195
msgid ""
"External networks are visible to all users. However, the default policy "
"settings enable only administrative users to create, update, and delete "
"external networks."
msgstr ""
#: ../networking_adv-features.rst:199
msgid ""
"This table shows example neutron commands that enable you to complete basic "
"L3 operations:"
msgstr ""
# #-#-#-#-# networking_adv-features.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# networking_config-agents.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# networking_use.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../networking_adv-features.rst:206 ../networking_adv-features.rst:374
#: ../networking_adv-features.rst:509 ../networking_adv-features.rst:801
#: ../networking_config-agents.rst:471 ../networking_use.rst:47
#: ../networking_use.rst:123 ../networking_use.rst:241
msgid "Operation"
msgstr ""
#: ../networking_adv-features.rst:208
msgid "Creates external networks."
msgstr ""
#: ../networking_adv-features.rst:213
msgid "Lists external networks."
msgstr ""
#: ../networking_adv-features.rst:217
msgid ""
"Creates an internal-only router that connects to multiple L2 networks "
"privately."
msgstr ""
#: ../networking_adv-features.rst:228
msgid ""
"An internal router port can have only one IPv4 subnet and multiple IPv6 "
"subnets that belong to the same network ID. When you call ``router-interface-"
"add`` with an IPv6 subnet, this operation adds the interface to an existing "
"internal port with the same network ID. If a port with the same network ID "
"does not exist, a new port is created."
msgstr ""
#: ../networking_adv-features.rst:233
msgid ""
"Connects a router to an external network, which enables that router to act "
"as a NAT gateway for external connectivity."
msgstr ""
#: ../networking_adv-features.rst:239
msgid ""
"The router obtains an interface with the gateway_ip address of the subnet "
"and this interface is attached to a port on the L2 Networking network "
"associated with the subnet. The router also gets a gateway interface to the "
"specified external network. This provides SNAT connectivity to the external "
"network as well as support for floating IPs allocated on that external "
"networks. Commonly an external network maps to a network in the provider."
msgstr ""
#: ../networking_adv-features.rst:247
msgid "Lists routers."
msgstr ""
#: ../networking_adv-features.rst:251
msgid "Shows information for a specified router."
msgstr ""
#: ../networking_adv-features.rst:255
msgid "Shows all internal interfaces for a router."
msgstr ""
#: ../networking_adv-features.rst:260
msgid ""
"Identifies the PORT_ID that represents the VM NIC to which the floating IP "
"should map."
msgstr ""
#: ../networking_adv-features.rst:266
msgid ""
"This port must be on an Networking subnet that is attached to a router "
"uplinked to the external network used to create the floating IP. "
"Conceptually, this is because the router must be able to perform the "
"Destination NAT (DNAT) rewriting of packets from the floating IP address "
"(chosen from a subnet on the external network) to the internal fixed IP "
"(chosen from a private subnet that is behind the router)."
msgstr ""
#: ../networking_adv-features.rst:273
msgid "Creates a floating IP address and associates it with a port."
msgstr ""
#: ../networking_adv-features.rst:279
msgid "Creates a floating IP on a specific subnet in the external network."
msgstr ""
#: ../networking_adv-features.rst:284
msgid ""
"If there are multiple subnets in the external network, you can choose a "
"specific subnet based on quality and costs."
msgstr ""
#: ../networking_adv-features.rst:287
msgid ""
"Creates a floating IP address and associates it with a port, in a single "
"step."
msgstr ""
#: ../networking_adv-features.rst:291
msgid "Lists floating IPs"
msgstr ""
#: ../networking_adv-features.rst:295
msgid "Finds floating IP for a specified VM port."
msgstr ""
#: ../networking_adv-features.rst:299
msgid "Disassociates a floating IP address."
msgstr ""
#: ../networking_adv-features.rst:303
msgid "Deletes the floating IP address."
msgstr ""
#: ../networking_adv-features.rst:307
msgid "Clears the gateway."
msgstr ""
#: ../networking_adv-features.rst:311
msgid "Removes the interfaces from the router."
msgstr ""
#: ../networking_adv-features.rst:316
msgid ""
"If this subnet ID is the last subnet on the port, this operation deletes the "
"port itself."
msgstr ""
#: ../networking_adv-features.rst:318
msgid "Deletes the router."
msgstr ""
#: ../networking_adv-features.rst:324
msgid "Security groups"
msgstr ""
#: ../networking_adv-features.rst:326
msgid ""
"Security groups and security group rules allow administrators and tenants to "
"specify the type of traffic and direction (ingress/egress) that is allowed "
"to pass through a port. A security group is a container for security group "
"rules."
msgstr ""
#: ../networking_adv-features.rst:331
msgid ""
"When a port is created in Networking it is associated with a security group. "
"If a security group is not specified the port is associated with a 'default' "
"security group. By default, this group drops all ingress traffic and allows "
"all egress. Rules can be added to this group in order to change the behavior."
msgstr ""
#: ../networking_adv-features.rst:337
msgid ""
"To use the Compute security group APIs or use Compute to orchestrate the "
"creation of ports for instances on specific security groups, you must "
"complete additional configuration. You must configure the :file:`/etc/nova/"
"nova.conf` file and set the ``security_group_api=neutron`` option on every "
"node that runs nova-compute and nova-api. After you make this change, "
"restart nova-api and nova-compute to pick up this change. Then, you can use "
"both the Compute and OpenStack Network security group APIs at the same time."
msgstr ""
#: ../networking_adv-features.rst:348
msgid ""
"To use the Compute security group API with Networking, the Networking plug-"
"in must implement the security group API. The following plug-ins currently "
"implement this: ML2, Open vSwitch, Linux Bridge, NEC, and VMware NSX."
msgstr ""
#: ../networking_adv-features.rst:353
msgid ""
"You must configure the correct firewall driver in the ``securitygroup`` "
"section of the plug-in/agent configuration file. Some plug-ins and agents, "
"such as Linux Bridge Agent and Open vSwitch Agent, use the no-operation "
"driver as the default, which results in non-working security groups."
msgstr ""
#: ../networking_adv-features.rst:359
msgid ""
"When using the security group API through Compute, security groups are "
"applied to all ports on an instance. The reason for this is that Compute "
"security group APIs are instances based and not port based as Networking."
msgstr ""
#: ../networking_adv-features.rst:365
msgid "Basic security group operations"
msgstr ""
#: ../networking_adv-features.rst:367
msgid ""
"This table shows example neutron commands that enable you to complete basic "
"security group operations:"
msgstr ""
#: ../networking_adv-features.rst:376
msgid "Creates a security group for our web servers."
msgstr ""
#: ../networking_adv-features.rst:380
msgid "Lists security groups."
msgstr ""
#: ../networking_adv-features.rst:384
msgid "Creates a security group rule to allow port 80 ingress."
msgstr ""
#: ../networking_adv-features.rst:389
msgid "Lists security group rules."
msgstr ""
#: ../networking_adv-features.rst:393
msgid "Deletes a security group rule."
msgstr ""
#: ../networking_adv-features.rst:397
msgid "Deletes a security group."
msgstr ""
#: ../networking_adv-features.rst:401
msgid "Creates a port and associates two security groups."
msgstr ""
#: ../networking_adv-features.rst:405
msgid "Removes security groups from a port."
msgstr ""
#: ../networking_adv-features.rst:411
msgid "Basic Load-Balancer-as-a-Service operations"
msgstr ""
#: ../networking_adv-features.rst:415
msgid ""
"The Load-Balancer-as-a-Service (LBaaS) API provisions and configures load "
"balancers. The reference implementation is based on the HAProxy software "
"load balancer."
msgstr ""
#: ../networking_adv-features.rst:419
msgid ""
"This list shows example neutron commands that enable you to complete basic "
"LBaaS operations:"
msgstr ""
#: ../networking_adv-features.rst:422
msgid "Creates a load balancer pool by using specific provider."
msgstr ""
#: ../networking_adv-features.rst:424
msgid ""
":option:`--provider` is an optional argument. If not used, the pool is "
"created with default provider for LBaaS service. You should configure the "
"default provider in the ``[service_providers]`` section of :file:`neutron."
"conf` file. If no default provider is specified for LBaaS, the :option:`--"
"provider` parameter is required for pool creation."
msgstr ""
#: ../networking_adv-features.rst:435
msgid "Associates two web servers with pool."
msgstr ""
#: ../networking_adv-features.rst:442
msgid ""
"Creates a health monitor that checks to make sure our instances are still "
"running on the specified protocol-port."
msgstr ""
#: ../networking_adv-features.rst:449
msgid "Associates a health monitor with pool."
msgstr ""
#: ../networking_adv-features.rst:455
msgid ""
"Creates a virtual IP (VIP) address that, when accessed through the load "
"balancer, directs the requests to one of the pool members."
msgstr ""
#: ../networking_adv-features.rst:464
msgid "Plug-in specific extensions"
msgstr ""
#: ../networking_adv-features.rst:466
msgid ""
"Each vendor can choose to implement additional API extensions to the core "
"API. This section describes the extensions for each plug-in."
msgstr ""
#: ../networking_adv-features.rst:470
msgid "VMware NSX extensions"
msgstr ""
#: ../networking_adv-features.rst:472
msgid "These sections explain NSX plug-in extensions."
msgstr ""
#: ../networking_adv-features.rst:475
msgid "VMware NSX QoS extension"
msgstr ""
#: ../networking_adv-features.rst:477
msgid ""
"The VMware NSX QoS extension rate-limits network ports to guarantee a "
"specific amount of bandwidth for each port. This extension, by default, is "
"only accessible by a tenant with an admin role but is configurable through "
"the :file:`policy.json` file. To use this extension, create a queue and "
"specify the min/max bandwidth rates (kbps) and optionally set the QoS "
"Marking and DSCP value (if your network fabric uses these values to make "
"forwarding decisions). Once created, you can associate a queue with a "
"network. Then, when ports are created on that network they are automatically "
"created and associated with the specific queue size that was associated with "
"the network. Because one size queue for a every port on a network might not "
"be optimal, a scaling factor from the nova flavor ``rxtx_factor`` is passed "
"in from Compute when creating the port to scale the queue."
msgstr ""
#: ../networking_adv-features.rst:491
msgid ""
"Lastly, if you want to set a specific baseline QoS policy for the amount of "
"bandwidth a single port can use (unless a network queue is specified with "
"the network a port is created on) a default queue can be created in "
"Networking which then causes ports created to be associated with a queue of "
"that size times the rxtx scaling factor. Note that after a network or "
"default queue is specified, queues are added to ports that are subsequently "
"created but are not added to existing ports."
msgstr ""
#: ../networking_adv-features.rst:500
msgid "Basic VMware NSX QoS operations"
msgstr ""
#: ../networking_adv-features.rst:502
msgid ""
"This table shows example neutron commands that enable you to complete basic "
"queue operations:"
msgstr ""
#: ../networking_adv-features.rst:511
msgid "Creates QoS queue (admin-only)."
msgstr ""
#: ../networking_adv-features.rst:515
msgid "Associates a queue with a network."
msgstr ""
#: ../networking_adv-features.rst:519
msgid "Creates a default system queue."
msgstr ""
#: ../networking_adv-features.rst:523
msgid "Lists QoS queues."
msgstr ""
#: ../networking_adv-features.rst:527
msgid "Deletes a QoS queue."
msgstr ""
#: ../networking_adv-features.rst:533
msgid "VMware NSX provider networks extension"
msgstr ""
#: ../networking_adv-features.rst:535
msgid ""
"Provider networks can be implemented in different ways by the underlying NSX "
"platform."
msgstr ""
#: ../networking_adv-features.rst:538
msgid ""
"The *FLAT* and *VLAN* network types use bridged transport connectors. These "
"network types enable the attachment of large number of ports. To handle the "
"increased scale, the NSX plug-in can back a single OpenStack Network with a "
"chain of NSX logical switches. You can specify the maximum number of ports "
"on each logical switch in this chain on the ``max_lp_per_bridged_ls`` "
"parameter, which has a default value of 5,000."
msgstr ""
#: ../networking_adv-features.rst:545
msgid ""
"The recommended value for this parameter varies with the NSX version running "
"in the back-end, as shown in the following table."
msgstr ""
#: ../networking_adv-features.rst:548
msgid "**Recommended values for max_lp_per_bridged_ls**"
msgstr ""
#: ../networking_adv-features.rst:551
msgid "NSX version"
msgstr ""
#: ../networking_adv-features.rst:551
msgid "Recommended Value"
msgstr ""
#: ../networking_adv-features.rst:553
msgid "2.x"
msgstr ""
#: ../networking_adv-features.rst:553
msgid "64"
msgstr ""
#: ../networking_adv-features.rst:555
msgid "3.0.x"
msgstr ""
#: ../networking_adv-features.rst:555 ../networking_adv-features.rst:557
msgid "5,000"
msgstr ""
#: ../networking_adv-features.rst:557
msgid "3.1.x"
msgstr ""
#: ../networking_adv-features.rst:559
msgid "10,000"
msgstr ""
#: ../networking_adv-features.rst:559
msgid "3.2.x"
msgstr ""
#: ../networking_adv-features.rst:562
msgid ""
"In addition to these network types, the NSX plug-in also supports a special "
"*l3_ext* network type, which maps external networks to specific NSX gateway "
"services as discussed in the next section."
msgstr ""
#: ../networking_adv-features.rst:567
msgid "VMware NSX L3 extension"
msgstr ""
#: ../networking_adv-features.rst:569
msgid ""
"NSX exposes its L3 capabilities through gateway services which are usually "
"configured out of band from OpenStack. To use NSX with L3 capabilities, "
"first create an L3 gateway service in the NSX Manager. Next, in :file:`/etc/"
"neutron/plugins/vmware/nsx.ini` set ``default_l3_gw_service_uuid`` to this "
"value. By default, routers are mapped to this gateway service."
msgstr ""
#: ../networking_adv-features.rst:577
msgid "VMware NSX L3 extension operations"
msgstr ""
#: ../networking_adv-features.rst:579
msgid "Create external network and map it to a specific NSX gateway service:"
msgstr ""
#: ../networking_adv-features.rst:586
msgid "Terminate traffic on a specific VLAN from a NSX gateway service:"
msgstr ""
#: ../networking_adv-features.rst:594
msgid "Operational status synchronization in the VMware NSX plug-in"
msgstr ""
#: ../networking_adv-features.rst:596
msgid ""
"Starting with the Havana release, the VMware NSX plug-in provides an "
"asynchronous mechanism for retrieving the operational status for neutron "
"resources from the NSX back-end; this applies to *network*, *port*, and "
"*router* resources."
msgstr ""
#: ../networking_adv-features.rst:601
msgid ""
"The back-end is polled periodically and the status for every resource is "
"retrieved; then the status in the Networking database is updated only for "
"the resources for which a status change occurred. As operational status is "
"now retrieved asynchronously, performance for ``GET`` operations is "
"consistently improved."
msgstr ""
#: ../networking_adv-features.rst:607
msgid ""
"Data to retrieve from the back-end are divided in chunks in order to avoid "
"expensive API requests; this is achieved leveraging NSX APIs response paging "
"capabilities. The minimum chunk size can be specified using a configuration "
"option; the actual chunk size is then determined dynamically according to: "
"total number of resources to retrieve, interval between two synchronization "
"task runs, minimum delay between two subsequent requests to the NSX back-end."
msgstr ""
#: ../networking_adv-features.rst:615
msgid ""
"The operational status synchronization can be tuned or disabled using the "
"configuration options reported in this table; it is however worth noting "
"that the default values work fine in most cases."
msgstr ""
#: ../networking_adv-features.rst:623
msgid "Option name"
msgstr ""
#: ../networking_adv-features.rst:625
msgid "Default value"
msgstr ""
#: ../networking_adv-features.rst:626
msgid "Type and constraints"
msgstr ""
#: ../networking_adv-features.rst:628
msgid "``state_sync_interval``"
msgstr ""
#: ../networking_adv-features.rst:629 ../networking_adv-features.rst:638
#: ../networking_adv-features.rst:645 ../networking_adv-features.rst:652
#: ../networking_adv-features.rst:663
msgid "``nsx_sync``"
msgstr ""
#: ../networking_adv-features.rst:630
msgid "10 seconds"
msgstr ""
#: ../networking_adv-features.rst:631 ../networking_adv-features.rst:654
msgid "Integer; no constraint."
msgstr ""
#: ../networking_adv-features.rst:632
msgid ""
"Interval in seconds between two run of the synchronization task. If the "
"synchronization task takes more than ``state_sync_interval`` seconds to "
"execute, a new instance of the task is started as soon as the other is "
"completed. Setting the value for this option to 0 will disable the "
"synchronization task."
msgstr ""
#: ../networking_adv-features.rst:637
msgid "``max_random_sync_delay``"
msgstr ""
#: ../networking_adv-features.rst:639
msgid "0 seconds"
msgstr ""
#: ../networking_adv-features.rst:640
msgid "Integer. Must not exceed ``min_sync_req_delay``"
msgstr ""
#: ../networking_adv-features.rst:641
msgid ""
"When different from zero, a random delay between 0 and "
"``max_random_sync_delay`` will be added before processing the next chunk."
msgstr ""
#: ../networking_adv-features.rst:644
msgid "``min_sync_req_delay``"
msgstr ""
#: ../networking_adv-features.rst:646
msgid "1 second"
msgstr ""
#: ../networking_adv-features.rst:647
msgid "Integer. Must not exceed ``state_sync_interval``."
msgstr ""
#: ../networking_adv-features.rst:648
msgid ""
"The value of this option can be tuned according to the observed load on the "
"NSX controllers. Lower values will result in faster synchronization, but "
"might increase the load on the controller cluster."
msgstr ""
#: ../networking_adv-features.rst:651
msgid "``min_chunk_size``"
msgstr ""
#: ../networking_adv-features.rst:653
msgid "500 resources"
msgstr ""
#: ../networking_adv-features.rst:655
msgid ""
"Minimum number of resources to retrieve from the back-end for each "
"synchronization chunk. The expected number of synchronization chunks is "
"given by the ratio between ``state_sync_interval`` and "
"``min_sync_req_delay``. This size of a chunk might increase if the total "
"number of resources is such that more than ``min_chunk_size`` resources must "
"be fetched in one chunk with the current number of chunks."
msgstr ""
#: ../networking_adv-features.rst:662
msgid "``always_read_status``"
msgstr ""
#: ../networking_adv-features.rst:664
msgid "False"
msgstr ""
#: ../networking_adv-features.rst:665
msgid "Boolean; no constraint."
msgstr ""
#: ../networking_adv-features.rst:666
msgid ""
"When this option is enabled, the operational status will always be retrieved "
"from the NSX back-end ad every ``GET`` request. In this case it is advisable "
"to disable the synchronization task."
msgstr ""
#: ../networking_adv-features.rst:670
msgid ""
"When running multiple OpenStack Networking server instances, the status "
"synchronization task should not run on every node; doing so sends "
"unnecessary traffic to the NSX back-end and performs unnecessary DB "
"operations. Set the ``state_sync_interval`` configuration option to a non-"
"zero value exclusively on a node designated for back-end status "
"synchronization."
msgstr ""
#: ../networking_adv-features.rst:677
msgid ""
"The ``fields=status`` parameter in Networking API requests always triggers "
"an explicit query to the NSX back end, even when you enable asynchronous "
"state synchronization. For example, ``GET /v2.0/networks/NET_ID?"
"fields=status&fields=name``."
msgstr ""
#: ../networking_adv-features.rst:683
msgid "Big Switch plug-in extensions"
msgstr ""
#: ../networking_adv-features.rst:685
msgid ""
"This section explains the Big Switch neutron plug-in-specific extension."
msgstr ""
#: ../networking_adv-features.rst:688
msgid "Big Switch router rules"
msgstr ""
#: ../networking_adv-features.rst:690
msgid ""
"Big Switch allows router rules to be added to each tenant router. These "
"rules can be used to enforce routing policies such as denying traffic "
"between subnets or traffic to external networks. By enforcing these at the "
"router level, network segmentation policies can be enforced across many VMs "
"that have differing security groups."
msgstr ""
#: ../networking_adv-features.rst:697
msgid "Router rule attributes"
msgstr ""
#: ../networking_adv-features.rst:699
msgid ""
"Each tenant router has a set of router rules associated with it. Each router "
"rule has the attributes in this table. Router rules and their attributes can "
"be set using the :command:`neutron router-update` command, through the "
"horizon interface or the Networking API."
msgstr ""
# #-#-#-#-# networking_adv-features.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# networking_config-identity.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../networking_adv-features.rst:709 ../networking_config-identity.rst:167
msgid "Required"
msgstr ""
#: ../networking_adv-features.rst:710
msgid "Input type"
msgstr ""
#: ../networking_adv-features.rst:712
msgid "source"
msgstr ""
# #-#-#-#-# networking_adv-features.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# telemetry-data-collection.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../networking_adv-features.rst:713 ../networking_adv-features.rst:718
#: ../networking_adv-features.rst:723 ../telemetry-data-collection.rst:1045
#: ../telemetry-data-collection.rst:1049
msgid "Yes"
msgstr ""
#: ../networking_adv-features.rst:714 ../networking_adv-features.rst:719
msgid "A valid CIDR or one of the keywords 'any' or 'external'"
msgstr ""
#: ../networking_adv-features.rst:715
msgid ""
"The network that a packet's source IP must match for the rule to be applied."
msgstr ""
#: ../networking_adv-features.rst:717
msgid "destination"
msgstr ""
#: ../networking_adv-features.rst:720
msgid ""
"The network that a packet's destination IP must match for the rule to be "
"applied."
msgstr ""
#: ../networking_adv-features.rst:722
msgid "action"
msgstr ""
#: ../networking_adv-features.rst:724
msgid "'permit' or 'deny'"
msgstr ""
#: ../networking_adv-features.rst:725
msgid ""
"Determines whether or not the matched packets will allowed to cross the "
"router."
msgstr ""
#: ../networking_adv-features.rst:727
msgid "nexthop"
msgstr ""
# #-#-#-#-# networking_adv-features.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# telemetry-data-collection.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../networking_adv-features.rst:728 ../telemetry-data-collection.rst:1053
#: ../telemetry-data-collection.rst:1057
msgid "No"
msgstr ""
#: ../networking_adv-features.rst:729
msgid ""
"A plus-separated (+) list of next-hop IP addresses. For example, "
"``1.1.1.1+1.1.1.2``."
msgstr ""
#: ../networking_adv-features.rst:731
msgid ""
"Overrides the default virtual router used to handle traffic for packets that "
"match the rule."
msgstr ""
#: ../networking_adv-features.rst:735
msgid "Order of rule processing"
msgstr ""
#: ../networking_adv-features.rst:737
msgid ""
"The order of router rules has no effect. Overlapping rules are evaluated "
"using longest prefix matching on the source and destination fields. The "
"source field is matched first so it always takes higher precedence over the "
"destination field. In other words, longest prefix matching is used on the "
"destination field only if there are multiple matching rules with the same "
"source."
msgstr ""
#: ../networking_adv-features.rst:745
msgid "Big Switch router rules operations"
msgstr ""
#: ../networking_adv-features.rst:747
msgid ""
"Router rules are configured with a router update operation in OpenStack "
"Networking. The update overrides any previous rules so all rules must be "
"provided at the same time."
msgstr ""
#: ../networking_adv-features.rst:751
msgid ""
"Update a router with rules to permit traffic by default but block traffic "
"from external networks to the 10.10.10.0/24 subnet:"
msgstr ""
#: ../networking_adv-features.rst:760
msgid "Specify alternate next-hop addresses for a specific subnet:"
msgstr ""
#: ../networking_adv-features.rst:768
msgid "Block traffic between two subnets while allowing everything else:"
msgstr ""
#: ../networking_adv-features.rst:777
msgid "L3 metering"
msgstr ""
#: ../networking_adv-features.rst:779
msgid ""
"The L3 metering API extension enables administrators to configure IP ranges "
"and assign a specified label to them to be able to measure traffic that goes "
"through a virtual router."
msgstr ""
#: ../networking_adv-features.rst:783
msgid ""
"The L3 metering extension is decoupled from the technology that implements "
"the measurement. Two abstractions have been added: One is the metering label "
"that can contain metering rules. Because a metering label is associated with "
"a tenant, all virtual routers in this tenant are associated with this label."
msgstr ""
#: ../networking_adv-features.rst:790
msgid "Basic L3 metering operations"
msgstr ""
#: ../networking_adv-features.rst:792
msgid "Only administrators can manage the L3 metering labels and rules."
msgstr ""
#: ../networking_adv-features.rst:794
msgid ""
"This table shows example :command:`neutron` commands that enable you to "
"complete basic L3 metering operations:"
msgstr ""
#: ../networking_adv-features.rst:803
msgid "Creates a metering label."
msgstr ""
#: ../networking_adv-features.rst:807
msgid "Lists metering labels."
msgstr ""
#: ../networking_adv-features.rst:811
msgid "Shows information for a specified label."
msgstr ""
#: ../networking_adv-features.rst:816
msgid "Deletes a metering label."
msgstr ""
#: ../networking_adv-features.rst:821
msgid "Creates a metering rule."
msgstr ""
# #-#-#-#-# networking_adv-features.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# networking_config-agents.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# networking_config-identity.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# networking_use.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../networking_adv-features.rst:827 ../networking_adv-features.rst:851
#: ../networking_config-agents.rst:94 ../networking_config-identity.rst:53
#: ../networking_config-identity.rst:75 ../networking_use.rst:101
msgid "For example:"
msgstr ""
#: ../networking_adv-features.rst:834
msgid "Lists metering all label rules."
msgstr ""
#: ../networking_adv-features.rst:838
msgid "Shows information for a specified label rule."
msgstr ""
#: ../networking_adv-features.rst:842
msgid "Deletes a metering label rule."
msgstr ""
#: ../networking_adv-features.rst:846
msgid "Lists the value of created metering label rules."
msgstr ""
#: ../networking_adv-operational-features.rst:3
msgid "Advanced operational features"
msgstr ""
#: ../networking_adv-operational-features.rst:6
msgid "Logging settings"
msgstr ""
#: ../networking_adv-operational-features.rst:8
msgid ""
"Networking components use Python logging module to do logging. Logging "
"configuration can be provided in :file:`neutron.conf` or as command-line "
"options. Command options override ones in :file:`neutron.conf`."
msgstr ""
#: ../networking_adv-operational-features.rst:12
msgid ""
"To configure logging for Networking components, use one of these methods:"
msgstr ""
#: ../networking_adv-operational-features.rst:15
msgid "Provide logging settings in a logging configuration file."
msgstr ""
#: ../networking_adv-operational-features.rst:17
msgid ""
"See `Python logging how-to <http://docs.python.org/howto/logging.html>`__ to "
"learn more about logging."
msgstr ""
#: ../networking_adv-operational-features.rst:21
msgid "Provide logging setting in :file:`neutron.conf`."
msgstr ""
#: ../networking_adv-operational-features.rst:49
msgid ""
"Notifications can be sent when Networking resources such as network, subnet "
"and port are created, updated or deleted."
msgstr ""
#: ../networking_adv-operational-features.rst:53
msgid "Notification options"
msgstr ""
#: ../networking_adv-operational-features.rst:55
msgid ""
"To support DHCP agent, rpc\\_notifier driver must be set. To set up the "
"notification, edit notification options in :file:`neutron.conf`:"
msgstr ""
#: ../networking_adv-operational-features.rst:69
msgid "Setting cases"
msgstr ""
#: ../networking_adv-operational-features.rst:72
msgid "Logging and RPC"
msgstr ""
#: ../networking_adv-operational-features.rst:74
msgid ""
"These options configure the Networking server to send notifications through "
"logging and RPC. The logging options are described in OpenStack "
"Configuration Reference . RPC notifications go to ``notifications.info`` "
"queue bound to a topic exchange defined by ``control_exchange`` in :file:"
"`neutron.conf`."
msgstr ""
#: ../networking_adv-operational-features.rst:121
msgid "Multiple RPC topics"
msgstr ""
#: ../networking_adv-operational-features.rst:123
msgid ""
"These options configure the Networking server to send notifications to "
"multiple RPC topics. RPC notifications go to ``notifications_one.info`` and "
"``notifications_two.info`` queues bound to a topic exchange defined by "
"``control_exchange`` in :file:`neutron.conf`."
msgstr ""
#: ../networking_arch.rst:3
msgid "Networking architecture"
msgstr ""
#: ../networking_arch.rst:5
msgid ""
"Before you deploy Networking, it is useful to understand the Networking "
"services and how they interact with the OpenStack components."
msgstr ""
#: ../networking_arch.rst:9
msgid "Overview"
msgstr ""
#: ../networking_arch.rst:11
msgid ""
"Networking is a standalone component in the OpenStack modular architecture. "
"It's positioned alongside OpenStack components such as Compute, Image "
"service, Identity, or dashboard. Like those components, a deployment of "
"Networking often involves deploying several services to a variety of hosts."
msgstr ""
#: ../networking_arch.rst:17
msgid ""
"The Networking server uses the neutron-server daemon to expose the "
"Networking API and enable administration of the configured Networking plug-"
"in. Typically, the plug-in requires access to a database for persistent "
"storage (also similar to other OpenStack services)."
msgstr ""
#: ../networking_arch.rst:22
msgid ""
"If your deployment uses a controller host to run centralized Compute "
"components, you can deploy the Networking server to that same host. However, "
"Networking is entirely standalone and can be deployed to a dedicated host. "
"Depending on your configuration, Networking can also include the following "
"agents:"
msgstr ""
#: ../networking_arch.rst:29
msgid "Agent"
msgstr ""
#: ../networking_arch.rst:31
msgid "**plug-in agent** (``neutron-*-agent``)"
msgstr ""
#: ../networking_arch.rst:32
msgid ""
"Runs on each hypervisor to perform local vSwitch configuration. The agent "
"that runs, depends on the plug-in that you use. Certain plug-ins do not "
"require an agent."
msgstr ""
#: ../networking_arch.rst:37
msgid "**dhcp agent** (``neutron-dhcp-agent``)"
msgstr ""
#: ../networking_arch.rst:38
msgid ""
"Provides DHCP services to tenant networks. Required by certain plug-ins."
msgstr ""
#: ../networking_arch.rst:41
msgid "**l3 agent** (``neutron-l3-agent``)"
msgstr ""
#: ../networking_arch.rst:42
msgid ""
"Provides L3/NAT forwarding to provide external network access for VMs on "
"tenant networks. Required by certain plug-ins."
msgstr ""
#: ../networking_arch.rst:46
msgid "**metering agent** (``neutron-metering-agent``)"
msgstr ""
#: ../networking_arch.rst:47
msgid "Provides L3 traffic metering for tenant networks."
msgstr ""
#: ../networking_arch.rst:51
msgid ""
"These agents interact with the main neutron process through RPC (for "
"example, RabbitMQ or Qpid) or through the standard Networking API. In "
"addition, Networking integrates with OpenStack components in a number of "
"ways:"
msgstr ""
#: ../networking_arch.rst:56
msgid ""
"Networking relies on the Identity service (keystone) for the authentication "
"and authorization of all API requests."
msgstr ""
#: ../networking_arch.rst:59
msgid ""
"Compute (nova) interacts with Networking through calls to its standard API. "
"As part of creating a VM, the nova-compute service communicates with the "
"Networking API to plug each virtual NIC on the VM into a particular network."
msgstr ""
#: ../networking_arch.rst:64
msgid ""
"The dashboard (horizon) integrates with the Networking API, enabling "
"administrators and tenant users to create and manage network services "
"through a web-based GUI."
msgstr ""
#: ../networking_arch.rst:69
msgid "VMware NSX integration"
msgstr ""
#: ../networking_arch.rst:71
msgid ""
"OpenStack Networking uses the NSX plug-in to integrate with an existing "
"VMware vCenter deployment. When installed on the network nodes, the NSX plug-"
"in enables a NSX controller to centrally manage configuration settings and "
"push them to managed network nodes. Network nodes are considered managed "
"when they're added as hypervisors to the NSX controller."
msgstr ""
#: ../networking_arch.rst:78
msgid ""
"The diagrams below depict some VMware NSX deployment examples. The first "
"diagram illustrates the traffic flow between VMs on separate Compute nodes, "
"and the second diagram between two VMs on a single Compute node. Note the "
"placement of the VMware NSX plug-in and the neutron-server service on the "
"network node. The green arrow indicates the management relationship between "
"the NSX controller and the network node."
msgstr ""
#: ../networking_arch.rst:85
msgid "|VMware NSX deployment example - two Compute nodes|"
msgstr ""
#: ../networking_arch.rst:87
msgid "|VMware NSX deployment example - single Compute node|"
msgstr ""
#: ../networking_auth.rst:5
msgid "Authentication and authorization"
msgstr ""
#: ../networking_auth.rst:7
msgid ""
"Networking uses the Identity service as the default authentication service. "
"When the Identity service is enabled, users who submit requests to the "
"Networking service must provide an authentication token in ``X-Auth-Token`` "
"request header. Users obtain this token by authenticating with the Identity "
"service endpoint. For more information about authentication with the "
"Identity service, see `OpenStack Identity service API v2.0 Reference <http://"
"developer.openstack.org/api-ref-identity-v2.html>`__. When the Identity "
"service is enabled, it is not mandatory to specify the tenant ID for "
"resources in create requests because the tenant ID is derived from the "
"authentication token."
msgstr ""
#: ../networking_auth.rst:19
msgid ""
"The default authorization settings only allow administrative users to create "
"resources on behalf of a different tenant. Networking uses information "
"received from Identity to authorize user requests. Networking handles two "
"kind of authorization policies:"
msgstr ""
#: ../networking_auth.rst:24
msgid ""
"**Operation-based** policies specify access criteria for specific "
"operations, possibly with fine-grained control over specific attributes."
msgstr ""
#: ../networking_auth.rst:28
msgid ""
"**Resource-based** policies specify whether access to specific resource is "
"granted or not according to the permissions configured for the resource "
"(currently available only for the network resource). The actual "
"authorization policies enforced in Networking might vary from deployment to "
"deployment."
msgstr ""
#: ../networking_auth.rst:34
msgid ""
"The policy engine reads entries from the :file:`policy.json` file. The "
"actual location of this file might vary from distribution to distribution. "
"Entries can be updated while the system is running, and no service restart "
"is required. Every time the policy file is updated, the policies are "
"automatically reloaded. Currently the only way of updating such policies is "
"to edit the policy file. In this section, the terms *policy* and *rule* "
"refer to objects that are specified in the same way in the policy file. "
"There are no syntax differences between a rule and a policy. A policy is "
"something that is matched directly from the Networking policy engine. A rule "
"is an element in a policy, which is evaluated. For instance in "
"``create_subnet: [[\"admin_or_network_owner\"]]``, *create_subnet* is a "
"policy, and *admin_or_network_owner* is a rule."
msgstr ""
#: ../networking_auth.rst:48
msgid ""
"Policies are triggered by the Networking policy engine whenever one of them "
"matches a Networking API operation or a specific attribute being used in a "
"given operation. For instance the ``create_subnet`` policy is triggered "
"every time a ``POST /v2.0/subnets`` request is sent to the Networking "
"server; on the other hand ``create_network:shared`` is triggered every time "
"the *shared* attribute is explicitly specified (and set to a value different "
"from its default) in a ``POST /v2.0/networks`` request. It is also worth "
"mentioning that policies can also be related to specific API extensions; for "
"instance ``extension:provider_network:set`` is triggered if the attributes "
"defined by the Provider Network extensions are specified in an API request."
msgstr ""
#: ../networking_auth.rst:61
msgid ""
"An authorization policy can be composed by one or more rules. If more rules "
"are specified then the evaluation policy succeeds if any of the rules "
"evaluates successfully; if an API operation matches multiple policies, then "
"all the policies must evaluate successfully. Also, authorization rules are "
"recursive. Once a rule is matched, the rule(s) can be resolved to another "
"rule, until a terminal rule is reached."
msgstr ""
#: ../networking_auth.rst:68
msgid ""
"The Networking policy engine currently defines the following kinds of "
"terminal rules:"
msgstr ""
#: ../networking_auth.rst:71
msgid ""
"**Role-based rules** evaluate successfully if the user who submits the "
"request has the specified role. For instance ``\"role:admin\"`` is "
"successful if the user who submits the request is an administrator."
msgstr ""
#: ../networking_auth.rst:75
msgid ""
"**Field-based rules** evaluate successfully if a field of the resource "
"specified in the current request matches a specific value. For instance ``"
"\"field:networks:shared=True\"`` is successful if the ``shared`` attribute "
"of the ``network`` resource is set to true."
msgstr ""
#: ../networking_auth.rst:80
msgid ""
"**Generic rules** compare an attribute in the resource with an attribute "
"extracted from the user's security credentials and evaluates successfully if "
"the comparison is successful. For instance ``\"tenant_id:%(tenant_id)s\"`` "
"is successful if the tenant identifier in the resource is equal to the "
"tenant identifier of the user submitting the request."
msgstr ""
#: ../networking_auth.rst:87
msgid "This extract is from the default :file:`policy.json` file:"
msgstr ""
#: ../networking_auth.rst:89
msgid ""
"A rule that evaluates successfully if the current user is an administrator "
"or the owner of the resource specified in the request (tenant identifier is "
"equal)."
msgstr ""
#: ../networking_auth.rst:126
msgid ""
"The default policy that is always evaluated if an API operation does not "
"match any of the policies in ``policy.json``."
msgstr ""
#: ../networking_auth.rst:163
msgid ""
"This policy evaluates successfully if either *admin\\_or\\_owner*, or "
"*shared* evaluates successfully."
msgstr ""
#: ../networking_auth.rst:177
msgid ""
"This policy restricts the ability to manipulate the *shared* attribute for a "
"network to administrators only."
msgstr ""
#: ../networking_auth.rst:201
msgid ""
"This policy restricts the ability to manipulate the *mac\\_address* "
"attribute for a port only to administrators and the owner of the network "
"where the port is attached."
msgstr ""
#: ../networking_auth.rst:228
msgid ""
"In some cases, some operations are restricted to administrators only. This "
"example shows you how to modify a policy file to permit tenants to define "
"networks, see their resources, and permit administrative users to perform "
"all other operations:"
msgstr ""
#: ../networking_config-agents.rst:3
msgid "Configure neutron agents"
msgstr ""
#: ../networking_config-agents.rst:5
msgid ""
"Plug-ins typically have requirements for particular software that must be "
"run on each node that handles data packets. This includes any node that runs "
"nova-compute and nodes that run dedicated OpenStack Networking service "
"agents such as ``neutron-dhcp-agent``, ``neutron-l3-agent``, ``neutron-"
"metering-agent`` or ``neutron-lbaas-agent``."
msgstr ""
#: ../networking_config-agents.rst:11
msgid ""
"A data-forwarding node typically has a network interface with an IP address "
"on the management network and another interface on the data network."
msgstr ""
#: ../networking_config-agents.rst:15
msgid ""
"This section shows you how to install and configure a subset of the "
"available plug-ins, which might include the installation of switching "
"software (for example, Open vSwitch) and as agents used to communicate with "
"the neutron-server process running elsewhere in the data center."
msgstr ""
#: ../networking_config-agents.rst:21
msgid "Configure data-forwarding nodes"
msgstr ""
#: ../networking_config-agents.rst:24
msgid "Node set up: NSX plug-in"
msgstr ""
#: ../networking_config-agents.rst:26
msgid ""
"If you use the NSX plug-in, you must also install Open vSwitch on each data-"
"forwarding node. However, you do not need to install an additional agent on "
"each node."
msgstr ""
#: ../networking_config-agents.rst:32
msgid ""
"It is critical that you run an Open vSwitch version that is compatible with "
"the current version of the NSX Controller software. Do not use the Open "
"vSwitch version that is installed by default on Ubuntu. Instead, use the "
"Open vSwitch version that is provided on the VMware support portal for your "
"NSX Controller version."
msgstr ""
#: ../networking_config-agents.rst:38
msgid "**To set up each node for the NSX plug-in**"
msgstr ""
#: ../networking_config-agents.rst:40
msgid ""
"Ensure that each data-forwarding node has an IP address on the management "
"network, and an IP address on the data network that is used for tunneling "
"data traffic. For full details on configuring your forwarding node, see the "
"``NSX Administrator Guide``."
msgstr ""
#: ../networking_config-agents.rst:45
msgid ""
"Use the ``NSX Administrator Guide`` to add the node as a Hypervisor by using "
"the NSX Manager GUI. Even if your forwarding node has no VMs and is only "
"used for services agents like ``neutron-dhcp-agent`` or ``neutron-lbaas-"
"agent``, it should still be added to NSX as a Hypervisor."
msgstr ""
#: ../networking_config-agents.rst:50
msgid ""
"After following the NSX Administrator Guide, use the page for this "
"Hypervisor in the NSX Manager GUI to confirm that the node is properly "
"connected to the NSX Controller Cluster and that the NSX Controller Cluster "
"can see the ``br-int`` integration bridge."
msgstr ""
#: ../networking_config-agents.rst:56
msgid "Configure DHCP agent"
msgstr ""
#: ../networking_config-agents.rst:58
msgid ""
"The DHCP service agent is compatible with all existing plug-ins and is "
"required for all deployments where VMs should automatically receive IP "
"addresses through DHCP."
msgstr ""
#: ../networking_config-agents.rst:62
msgid "**To install and configure the DHCP agent**"
msgstr ""
#: ../networking_config-agents.rst:64
msgid ""
"You must configure the host running the neutron-dhcp-agent as a data "
"forwarding node according to the requirements for your plug-in."
msgstr ""
#: ../networking_config-agents.rst:67
msgid "Install the DHCP agent:"
msgstr ""
#: ../networking_config-agents.rst:73
msgid ""
"Update any options in the :file:`/etc/neutron/dhcp_agent.ini` file that "
"depend on the plug-in in use. See the sub-sections."
msgstr ""
#: ../networking_config-agents.rst:78
msgid ""
"If you reboot a node that runs the DHCP agent, you must run the :command:"
"`neutron-ovs-cleanup` command before the neutron-dhcp-agent service starts."
msgstr ""
#: ../networking_config-agents.rst:82
msgid ""
"On Red Hat, SUSE, and Ubuntu based systems, the ``neutron-ovs-cleanup`` "
"service runs the :command:`neutron-ovs-cleanup` command automatically. "
"However, on Debian-based systems, you must manually run this command or "
"write your own system script that runs on boot before the ``neutron-dhcp-"
"agent`` service starts."
msgstr ""
#: ../networking_config-agents.rst:88
msgid ""
"Networking dhcp-agent can use `dnsmasq <http://www.thekelleys.org.uk/dnsmasq/"
"doc.html>`__ driver which supports stateful and stateless DHCPv6 for subnets "
"created with ``--ipv6_address_mode`` set to ``dhcpv6-stateful`` or ``dhcpv6-"
"stateless``."
msgstr ""
#: ../networking_config-agents.rst:106
msgid ""
"If no dnsmasq process for subnet's network is launched, Networking will "
"launch a new one on subnet's dhcp port in ``qdhcp-XXX`` namespace. If "
"previous dnsmasq process is already launched, restart dnsmasq with a new "
"configuration."
msgstr ""
#: ../networking_config-agents.rst:111
msgid ""
"Networking will update dnsmasq process and restart it when subnet gets "
"updated."
msgstr ""
#: ../networking_config-agents.rst:116
msgid "For dhcp-agent to operate in IPv6 mode use at least dnsmasq v2.63."
msgstr ""
#: ../networking_config-agents.rst:118
msgid ""
"After a certain, configured timeframe, networks uncouple from DHCP agents "
"when the agents are no longer in use. You can configure the DHCP agent to "
"automatically detach from a network when the agent is out of service, or no "
"longer needed."
msgstr ""
#: ../networking_config-agents.rst:123
msgid ""
"This feature applies to all plug-ins that support DHCP scaling. For more "
"information, see the `DHCP agent configuration options <http://docs."
"openstack.org/liberty/config-reference/content/networking -options-dhcp."
"html>`__ listed in the OpenStack Configuration Reference."
msgstr ""
#: ../networking_config-agents.rst:130
msgid "DHCP agent setup: OVS plug-in"
msgstr ""
#: ../networking_config-agents.rst:132
msgid ""
"These DHCP agent options are required in the :file:`/etc/neutron/dhcp_agent."
"ini` file for the OVS plug-in:"
msgstr ""
#: ../networking_config-agents.rst:143
msgid "DHCP agent setup: NSX plug-in"
msgstr ""
#: ../networking_config-agents.rst:145
msgid ""
"These DHCP agent options are required in the :file:`/etc/neutron/dhcp_agent."
"ini` file for the NSX plug-in:"
msgstr ""
#: ../networking_config-agents.rst:157
msgid "Configure L3 agent"
msgstr ""
#: ../networking_config-agents.rst:159
msgid ""
"The OpenStack Networking service has a widely used API extension to allow "
"administrators and tenants to create routers to interconnect L2 networks, "
"and floating IPs to make ports on private networks publicly accessible."
msgstr ""
#: ../networking_config-agents.rst:164
msgid ""
"Many plug-ins rely on the L3 service agent to implement the L3 "
"functionality. However, the following plug-ins already have built-in L3 "
"capabilities:"
msgstr ""
#: ../networking_config-agents.rst:168
msgid ""
"Big Switch/Floodlight plug-in, which supports both the open source "
"`Floodlight <http://www.projectfloodlight.org/floodlight/>`__ controller and "
"the proprietary Big Switch controller."
msgstr ""
#: ../networking_config-agents.rst:174
msgid ""
"Only the proprietary BigSwitch controller implements L3 functionality. When "
"using Floodlight as your OpenFlow controller, L3 functionality is not "
"available."
msgstr ""
#: ../networking_config-agents.rst:178
msgid "IBM SDN-VE plug-in"
msgstr ""
#: ../networking_config-agents.rst:180
msgid "MidoNet plug-in"
msgstr ""
#: ../networking_config-agents.rst:182
msgid "NSX plug-in"
msgstr ""
#: ../networking_config-agents.rst:184
msgid "PLUMgrid plug-in"
msgstr ""
#: ../networking_config-agents.rst:188
msgid ""
"Do not configure or use neutron-l3-agent if you use one of these plug-ins."
msgstr ""
#: ../networking_config-agents.rst:191
msgid "**To install the L3 agent for all other plug-ins**"
msgstr ""
#: ../networking_config-agents.rst:193
msgid "Install the neutron-l3-agent binary on the network node:"
msgstr ""
#: ../networking_config-agents.rst:199
msgid ""
"To uplink the node that runs neutron-l3-agent to the external network, "
"create a bridge named \"br-ex\" and attach the NIC for the external network "
"to this bridge."
msgstr ""
#: ../networking_config-agents.rst:203
msgid ""
"For example, with Open vSwitch and NIC eth1 connected to the external "
"network, run:"
msgstr ""
#: ../networking_config-agents.rst:211
msgid ""
"Do not manually configure an IP address on the NIC connected to the external "
"network for the node running neutron-l3-agent. Rather, you must have a range "
"of IP addresses from the external network that can be used by OpenStack "
"Networking for routers that uplink to the external network. This range must "
"be large enough to have an IP address for each router in the deployment, as "
"well as each floating IP."
msgstr ""
#: ../networking_config-agents.rst:218
msgid ""
"The neutron-l3-agent uses the Linux IP stack and iptables to perform L3 "
"forwarding and NAT. In order to support multiple routers with potentially "
"overlapping IP addresses, neutron-l3-agent defaults to using Linux network "
"namespaces to provide isolated forwarding contexts. As a result, the IP "
"addresses of routers are not visible simply by running the :command:`ip addr "
"list` or :command:`ifconfig` command on the node. Similarly, you cannot "
"directly :command:`ping` fixed IPs."
msgstr ""
#: ../networking_config-agents.rst:226
msgid ""
"To do either of these things, you must run the command within a particular "
"network namespace for the router. The namespace has the name ``qrouter-"
"ROUTER_UUID``. These example commands run in the router namespace with UUID "
"47af3868-0fa8-4447-85f6-1304de32153b:"
msgstr ""
#: ../networking_config-agents.rst:241
msgid ""
"For iproute version 3.12.0 and above, networking namespaces are configured "
"to be deleted by default. This behavior can be changed for both DHCP and L3 "
"agents. The configuration files are :file:`/etc/neutron/dhcp_agent.ini` and :"
"file:`/etc/neutron/l3_agent.ini` respectively."
msgstr ""
#: ../networking_config-agents.rst:247
msgid ""
"For DHCP namespace the configuration key: ``dhcp_delete_namespaces = True``. "
"You can set it to False in case namespaces cannot be deleted cleanly on the "
"host running the DHCP agent."
msgstr ""
#: ../networking_config-agents.rst:252
msgid ""
"For L3 namespace, the configuration key: ``router_delete_namespaces = "
"True``. You can set it to False in case namespaces cannot be deleted cleanly "
"on the host running the L3 agent."
msgstr ""
#: ../networking_config-agents.rst:259
msgid ""
"If you reboot a node that runs the L3 agent, you must run the :command:"
"`neutron-ovs-cleanup` command before the neutron-l3-agent service starts."
msgstr ""
#: ../networking_config-agents.rst:263
msgid ""
"On Red Hat, SUSE and Ubuntu based systems, the neutron-ovs-cleanup service "
"runs the :command:`neutron-ovs-cleanup` command automatically. However, on "
"Debian-based systems, you must manually run this command or write your own "
"system script that runs on boot before the neutron-l3-agent service starts."
msgstr ""
#: ../networking_config-agents.rst:270
msgid "Configure metering agent"
msgstr ""
#: ../networking_config-agents.rst:272
msgid "The Neutron Metering agent resides beside neutron-l3-agent."
msgstr ""
#: ../networking_config-agents.rst:274
msgid "**To install the metering agent and configure the node**"
msgstr ""
#: ../networking_config-agents.rst:276
msgid "Install the agent by running:"
msgstr ""
#: ../networking_config-agents.rst:282
msgid ""
"If you use one of the following plug-ins, you need to configure the metering "
"agent with these lines as well:"
msgstr ""
#: ../networking_config-agents.rst:285
msgid "An OVS-based plug-in such as OVS, NSX, NEC, BigSwitch/Floodlight:"
msgstr ""
#: ../networking_config-agents.rst:291
msgid "A plug-in that uses LinuxBridge:"
msgstr ""
#: ../networking_config-agents.rst:298
msgid "To use the reference implementation, you must set:"
msgstr ""
#: ../networking_config-agents.rst:305
msgid ""
"Set the ``service_plugins`` option in the :file:`/etc/neutron/neutron.conf` "
"file on the host that runs neutron-server:"
msgstr ""
#: ../networking_config-agents.rst:312
msgid ""
"If this option is already defined, add ``metering`` to the list, using a "
"comma as separator. For example:"
msgstr ""
#: ../networking_config-agents.rst:320
msgid "Configure Load-Balancer-as-a-Service (LBaaS v2)"
msgstr ""
#: ../networking_config-agents.rst:322
msgid ""
"For the back end, use either Octavia or Haproxy. This example uses Octavia."
msgstr ""
#: ../networking_config-agents.rst:324
msgid "**To configure LBaaS V2**"
msgstr ""
#: ../networking_config-agents.rst:326
msgid "Install Octavia using your distribution's package manager."
msgstr ""
#: ../networking_config-agents.rst:329
msgid ""
"Edit the :file:`/etc/neutron/neutron_lbaas.conf` file and change the "
"``service_provider`` parameter to enable Octavia:"
msgstr ""
#: ../networking_config-agents.rst:339
msgid ""
"The ``service_provider`` option is already defined in the :file:`/usr/share/"
"neutron/neutron-dist.conf` file on Red Hat based systems. Do not define it "
"in :file:`neutron_lbaas.conf` otherwise the Networking services will fail to "
"restart."
msgstr ""
#: ../networking_config-agents.rst:345
msgid ""
"Edit the :file:`/etc/neutron/neutron.conf` file and add the "
"``service_plugins`` parameter to enable the load-balancing plug-in:"
msgstr ""
#: ../networking_config-agents.rst:353
msgid ""
"If this option is already defined, add the load-balancing plug-in to the "
"list using a comma as a separator. For example:"
msgstr ""
# #-#-#-#-# networking_config-agents.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# networking_introduction.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../networking_config-agents.rst:363 ../networking_introduction.rst:202
msgid "Create the required tables in the database:"
msgstr ""
#: ../networking_config-agents.rst:369
msgid "Restart the neutron-server service."
msgstr ""
#: ../networking_config-agents.rst:372
msgid "Enable load balancing in the Project section of the dashboard."
msgstr ""
#: ../networking_config-agents.rst:376
msgid ""
"Horizon panels are enabled only for LBaaSV1. LBaaSV2 panels are still being "
"developed."
msgstr ""
#: ../networking_config-agents.rst:379
msgid ""
"Change the ``enable_lb`` option to ``True`` in the :file:`local_settings` "
"file (on Fedora, RHEL, and CentOS: :file:`/etc/openstack-dashboard/"
"local_settings`, on Ubuntu and Debian: :file:`/etc/openstack-dashboard/"
"local_settings.py`, and on openSUSE and SLES: :file:`/srv/www/openstack-"
"dashboard/openstack_dashboard/local/local_settings .py`):"
msgstr ""
#: ../networking_config-agents.rst:394
msgid ""
"Apply the settings by restarting the web server. You can now view the Load "
"Balancer management options in the Project view in the dashboard."
msgstr ""
#: ../networking_config-agents.rst:398
msgid "Configure Hyper-V L2 agent"
msgstr ""
#: ../networking_config-agents.rst:400
msgid ""
"Before you install the OpenStack Networking Hyper-V L2 agent on a Hyper-V "
"compute node, ensure the compute node has been configured correctly using "
"these `instructions <http://docs.openstack.org/liberty/config-reference/"
"content/ hyper-v-virtualization-platform.html>`__."
msgstr ""
#: ../networking_config-agents.rst:406
msgid ""
"**To install the OpenStack Networking Hyper-V agent and configure the node**"
msgstr ""
#: ../networking_config-agents.rst:408
msgid "Download the OpenStack Networking code from the repository:"
msgstr ""
#: ../networking_config-agents.rst:415
msgid "Install the OpenStack Networking Hyper-V Agent:"
msgstr ""
#: ../networking_config-agents.rst:422
msgid "Copy the :file:`policy.json` file:"
msgstr ""
#: ../networking_config-agents.rst:428
msgid ""
"Create the :file:`C:\\etc\\neutron-hyperv-agent.conf` file and add the "
"proper configuration options and the `Hyper-V related options <http://docs."
"openstack.org/liberty/config-reference/content/ networking-plugin-"
"hyperv_agent.html>`__. Here is a sample config file:"
msgstr ""
#: ../networking_config-agents.rst:457
msgid "Start the OpenStack Networking Hyper-V agent:"
msgstr ""
#: ../networking_config-agents.rst:465
msgid "Basic operations on agents"
msgstr ""
#: ../networking_config-agents.rst:467
msgid ""
"This table shows examples of Networking commands that enable you to complete "
"basic operations on agents:"
msgstr ""
#: ../networking_config-agents.rst:473
msgid "List all available agents."
msgstr ""
#: ../networking_config-agents.rst:475
msgid "``$ neutron agent-list``"
msgstr ""
#: ../networking_config-agents.rst:477
msgid "Show information of a given agent."
msgstr ""
#: ../networking_config-agents.rst:480
msgid "``$ neutron agent-show AGENT_ID``"
msgstr ""
#: ../networking_config-agents.rst:482
msgid ""
"Update the admin status and description for a specified agent. The command "
"can be used to enable and disable agents by using ``--admin-state-up`` "
"parameter set to ``False`` or ``True``."
msgstr ""
#: ../networking_config-agents.rst:488
msgid "``$ neutron agent-update --admin`` ``-state-up False AGENT_ID``"
msgstr ""
#: ../networking_config-agents.rst:491
msgid "Delete a given agent. Consider disabling the agent before deletion."
msgstr ""
#: ../networking_config-agents.rst:494
msgid "``$ neutron agent-delete AGENT_ID``"
msgstr ""
#: ../networking_config-agents.rst:497
msgid "**Basic operations on Networking agents**"
msgstr ""
#: ../networking_config-agents.rst:499
msgid ""
"See the `OpenStack Command-Line Interface Reference <http://docs.openstack."
"org/cli-reference/content/neutronclient_commands.html>`__ for more "
"information on Networking commands."
msgstr ""
#: ../networking_config-identity.rst:0
msgid "**nova.conf API and credential settings**"
msgstr ""
#: ../networking_config-identity.rst:3
msgid "Configure Identity service for Networking"
msgstr ""
#: ../networking_config-identity.rst:5
msgid "**To configure the Identity service for use with Networking**"
msgstr ""
#: ../networking_config-identity.rst:7
msgid "Create the ``get_id()`` function"
msgstr ""
#: ../networking_config-identity.rst:9
msgid ""
"The ``get_id()`` function stores the ID of created objects, and removes the "
"need to copy and paste object IDs in later steps:"
msgstr ""
#: ../networking_config-identity.rst:12
msgid "Add the following function to your :file:`.bashrc` file:"
msgstr ""
#: ../networking_config-identity.rst:20
msgid "Source the :file:`.bashrc` file:"
msgstr ""
#: ../networking_config-identity.rst:26
msgid "Create the Networking service entry"
msgstr ""
#: ../networking_config-identity.rst:28
msgid ""
"Networking must be available in the Compute service catalog. Create the "
"service:"
msgstr ""
#: ../networking_config-identity.rst:36
msgid "Create the Networking service endpoint entry"
msgstr ""
#: ../networking_config-identity.rst:38
msgid ""
"The way that you create a Networking endpoint entry depends on whether you "
"are using the SQL or the template catalog driver:"
msgstr ""
#: ../networking_config-identity.rst:41
msgid ""
"If you are using the *SQL driver*, run the following command with the "
"specified region (``$REGION``), IP address of the Networking server (``"
"$IP``), and service ID (``$NEUTRON_SERVICE_ID``, obtained in the previous "
"step)."
msgstr ""
#: ../networking_config-identity.rst:63
msgid ""
"If you are using the *template driver*, specify the following parameters in "
"your Compute catalog template file (:file:`default_catalog.templates`), "
"along with the region (``$REGION``) and IP address of the Networking server "
"(``$IP``)."
msgstr ""
#: ../networking_config-identity.rst:84
msgid "Create the Networking service user"
msgstr ""
#: ../networking_config-identity.rst:86
msgid ""
"You must provide admin user credentials that Compute and some internal "
"Networking components can use to access the Networking API. Create a special "
"``service`` tenant and a ``neutron`` user within this tenant, and assign an "
"``admin`` role to this role."
msgstr ""
#: ../networking_config-identity.rst:91
msgid "Create the ``admin`` role:"
msgstr ""
#: ../networking_config-identity.rst:97
msgid "Create the ``neutron`` user:"
msgstr ""
#: ../networking_config-identity.rst:104
msgid "Create the ``service`` tenant:"
msgstr ""
#: ../networking_config-identity.rst:111
msgid "Establish the relationship among the tenant, user, and role:"
msgstr ""
#: ../networking_config-identity.rst:118
msgid ""
"For information about how to create service entries and users, see the "
"OpenStack Installation Guide for your distribution (`docs.openstack.org "
"<http://docs.openstack.org/index.html#install-guides>`__)."
msgstr ""
#: ../networking_config-identity.rst:125
msgid ""
"If you use Networking, do not run the Compute nova-network service (like you "
"do in traditional Compute deployments). Instead, Compute delegates most "
"network-related decisions to Networking. Compute proxies tenant-facing API "
"calls to manage security groups and floating IPs to Networking APIs. "
"However, operator-facing tools such as nova-manage, are not proxied and "
"should not be used."
msgstr ""
#: ../networking_config-identity.rst:134
msgid ""
"When you configure networking, you must use this guide. Do not rely on "
"Compute networking documentation or past experience with Compute. If a :"
"command:`nova` command or configuration option related to networking is not "
"mentioned in this guide, the command is probably not supported for use with "
"Networking. In particular, you cannot use CLI tools like ``nova-manage`` and "
"``nova`` to manage networks or IP addressing, including both fixed and "
"floating IPs, with Networking."
msgstr ""
#: ../networking_config-identity.rst:144
msgid ""
"Uninstall nova-network and reboot any physical nodes that have been running "
"nova-network before using them to run Networking. Inadvertently running the "
"nova-network process while using Networking can cause problems, as can stale "
"iptables rules pushed down by previously running nova-network."
msgstr ""
#: ../networking_config-identity.rst:150
msgid ""
"To ensure that Compute works properly with Networking (rather than the "
"legacy nova-network mechanism), you must adjust settings in the :file:`nova."
"conf` configuration file."
msgstr ""
#: ../networking_config-identity.rst:155
msgid "Networking API and credential configuration"
msgstr ""
#: ../networking_config-identity.rst:157
msgid ""
"Each time you provision or de-provision a VM in Compute, nova-\\* services "
"communicate with Networking using the standard API. For this to happen, you "
"must configure the following items in the :file:`nova.conf` file (used by "
"each nova-compute and nova-api instance)."
msgstr ""
#: ../networking_config-identity.rst:168
msgid "``[DEFAULT] network_api_class``"
msgstr ""
#: ../networking_config-identity.rst:169
msgid ""
"Modify from the default to ``nova.network.neutronv2.api.API``, to indicate "
"that Networking should be used rather than the traditional nova-network "
"networking model."
msgstr ""
#: ../networking_config-identity.rst:172
msgid "``[neutron] url``"
msgstr ""
#: ../networking_config-identity.rst:173
msgid ""
"Update to the hostname/IP and port of the neutron-server instance for this "
"deployment."
msgstr ""
#: ../networking_config-identity.rst:175
msgid "``[neutron] auth_strategy``"
msgstr ""
#: ../networking_config-identity.rst:176
msgid "Keep the default ``keystone`` value for all production deployments."
msgstr ""
#: ../networking_config-identity.rst:177
msgid "``[neutron] admin_tenant_name``"
msgstr ""
#: ../networking_config-identity.rst:178
msgid ""
"Update to the name of the service tenant created in the above section on "
"Identity configuration."
msgstr ""
#: ../networking_config-identity.rst:180
msgid "``[neutron] admin_username``"
msgstr ""
#: ../networking_config-identity.rst:181
msgid ""
"Update to the name of the user created in the above section on Identity "
"configuration."
msgstr ""
#: ../networking_config-identity.rst:183
msgid "``[neutron] admin_password``"
msgstr ""
#: ../networking_config-identity.rst:184
msgid ""
"Update to the password of the user created in the above section on Identity "
"configuration."
msgstr ""
#: ../networking_config-identity.rst:186
msgid "``[neutron] admin_auth_url``"
msgstr ""
#: ../networking_config-identity.rst:187
msgid ""
"Update to the Identity server IP and port. This is the Identity (keystone) "
"admin API server IP and port value, and not the Identity service API IP and "
"port."
msgstr ""
#: ../networking_config-identity.rst:192
msgid "Configure security groups"
msgstr ""
#: ../networking_config-identity.rst:194
msgid ""
"The Networking service provides security group functionality using a "
"mechanism that is more flexible and powerful than the security group "
"capabilities built into Compute. Therefore, if you use Networking, you "
"should always disable built-in security groups and proxy all security group "
"calls to the Networking API. If you do not, security policies will conflict "
"by being simultaneously applied by both services."
msgstr ""
#: ../networking_config-identity.rst:201
msgid ""
"To proxy security groups to Networking, use the following configuration "
"values in :file:`nova.conf`:"
msgstr ""
#: ../networking_config-identity.rst:204
msgid "**nova.conf security group settings**"
msgstr ""
# #-#-#-#-# networking_config-identity.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# networking_multi-dhcp-agents.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../networking_config-identity.rst:207 ../networking_config-identity.rst:232
#: ../networking_multi-dhcp-agents.rst:54
msgid "Configuration"
msgstr ""
#: ../networking_config-identity.rst:207 ../networking_config-identity.rst:232
msgid "Item"
msgstr ""
#: ../networking_config-identity.rst:209
msgid ""
"Update to ``nova.virt.firewall.NoopFirewallDriver``, so that nova-compute "
"does not perform iptables-based filtering itself."
msgstr ""
#: ../networking_config-identity.rst:209
msgid "``firewall_driver``"
msgstr ""
#: ../networking_config-identity.rst:213
msgid ""
"Update to ``neutron``, so that all security group requests are proxied to "
"the Network Service."
msgstr ""
#: ../networking_config-identity.rst:213
msgid "``security_group_api``"
msgstr ""
#: ../networking_config-identity.rst:218
msgid "Configure metadata"
msgstr ""
#: ../networking_config-identity.rst:220
msgid ""
"The Compute service allows VMs to query metadata associated with a VM by "
"making a web request to a special 169.254.169.254 address. Networking "
"supports proxying those requests to nova-api, even when the requests are "
"made from isolated networks, or from multiple networks that use overlapping "
"IP addresses."
msgstr ""
#: ../networking_config-identity.rst:226
msgid ""
"To enable proxying the requests, you must update the following fields in "
"``[neutron]`` section in :file:`nova.conf`."
msgstr ""
#: ../networking_config-identity.rst:229
msgid "**nova.conf metadata settings**"
msgstr ""
#: ../networking_config-identity.rst:234
msgid ""
"Update to ``true``, otherwise nova-api will not properly respond to requests "
"from the neutron-metadata-agent."
msgstr ""
#: ../networking_config-identity.rst:234
msgid "``service_metadata_proxy``"
msgstr ""
#: ../networking_config-identity.rst:238
msgid ""
"Update to a string \"password\" value. You must also configure the same "
"value in the ``metadata_agent.ini`` file, to authenticate requests made for "
"metadata."
msgstr ""
#: ../networking_config-identity.rst:238
msgid "``metadata_proxy_shared_secret``"
msgstr ""
#: ../networking_config-identity.rst:243
msgid ""
"The default value of an empty string in both files will allow metadata to "
"function, but will not be secure if any non-trusted entities have access to "
"the metadata APIs exposed by nova-api."
msgstr ""
#: ../networking_config-identity.rst:252
msgid ""
"As a precaution, even when using ``metadata_proxy_shared_secret``, we "
"recommend that you do not expose metadata using the same nova-api instances "
"that are used for tenants. Instead, you should run a dedicated set of nova-"
"api instances for metadata that are available only on your management "
"network. Whether a given nova-api instance exposes metadata APIs is "
"determined by the value of ``enabled_apis`` in its :file:`nova.conf`."
msgstr ""
#: ../networking_config-identity.rst:261
msgid "Example nova.conf (for nova-compute and nova-api)"
msgstr ""
#: ../networking_config-identity.rst:263
msgid ""
"Example values for the above settings, assuming a cloud controller node "
"running Compute and Networking with an IP address of 192.168.1.2:"
msgstr ""
#: ../networking_config-plugins.rst:3
msgid "Plug-in configurations"
msgstr ""
#: ../networking_config-plugins.rst:5
msgid ""
"For configurations options, see `Networking configuration options <http://"
"docs.openstack.org/kilo/config-reference /content/section_networking-options-"
"reference.html>`__ in Configuration Reference. These sections explain how to "
"configure specific plug-ins."
msgstr ""
#: ../networking_config-plugins.rst:12
msgid "Configure Big Switch (Floodlight REST Proxy) plug-in"
msgstr ""
#: ../networking_config-plugins.rst:14
msgid "Edit the :file:`/etc/neutron/neutron.conf` file and add this line:"
msgstr ""
#: ../networking_config-plugins.rst:20
msgid ""
"In the :file:`/etc/neutron/neutron.conf` file, set the ``service_plugins`` "
"option:"
msgstr ""
#: ../networking_config-plugins.rst:27
msgid ""
"Edit the :file:`/etc/neutron/plugins/bigswitch/restproxy.ini` file for the "
"plug-in and specify a comma-separated list of controller\\_ip:port pairs:"
msgstr ""
#: ../networking_config-plugins.rst:34
msgid ""
"For database configuration, see `Install Networking Services <http://docs."
"openstack.org/liberty/install-guide-ubuntu /neutron-controller-install."
"html>`__ in the Installation Guide in the `OpenStack Documentation index "
"<http://docs.openstack.org>`__. (The link defaults to the Ubuntu version.)"
msgstr ""
#: ../networking_config-plugins.rst:41
msgid "Restart neutron-server to apply the settings:"
msgstr ""
#: ../networking_config-plugins.rst:48
msgid "Configure Brocade plug-in"
msgstr ""
#: ../networking_config-plugins.rst:50
msgid ""
"Install the Brocade-modified Python netconf client (ncclient) library, which "
"is available at https://github.com/brocade/ncclient:"
msgstr ""
#: ../networking_config-plugins.rst:57
msgid "As root, run this command:"
msgstr ""
#: ../networking_config-plugins.rst:63
msgid ""
"Edit the :file:`/etc/neutron/neutron.conf` file and set the following option:"
msgstr ""
#: ../networking_config-plugins.rst:70
msgid ""
"Edit the :file:`/etc/neutron/plugins/brocade/brocade.ini` file for the "
"Brocade plug-in and specify the admin user name, password, and IP address of "
"the Brocade switch:"
msgstr ""
#: ../networking_config-plugins.rst:82
msgid ""
"For database configuration, see `Install Networking Services <http://docs."
"openstack.org/liberty/install-guide-ubuntu/ neutron-controller-install."
"html>`__ in any of the Installation Guides in the `OpenStack Documentation "
"index <http://docs.openstack.org>`__. (The link defaults to the Ubuntu "
"version.)"
msgstr ""
#: ../networking_config-plugins.rst:89 ../networking_config-plugins.rst:248
msgid "Restart the neutron-server service to apply the settings:"
msgstr ""
#: ../networking_config-plugins.rst:96
msgid "Configure NSX-mh plug-in"
msgstr ""
#: ../networking_config-plugins.rst:98
msgid ""
"The instructions in this section refer to the VMware NSX-mh platform, "
"formerly known as Nicira NVP."
msgstr ""
#: ../networking_config-plugins.rst:101
msgid "Install the NSX plug-in:"
msgstr ""
#: ../networking_config-plugins.rst:107 ../networking_config-plugins.rst:225
msgid "Edit the :file:`/etc/neutron/neutron.conf` file and set this line:"
msgstr ""
#: ../networking_config-plugins.rst:113
msgid "Example :file:`neutron.conf`: file for NSX-mh integration:"
msgstr ""
#: ../networking_config-plugins.rst:121
msgid ""
"To configure the NSX-mh controller cluster for OpenStack Networking, locate "
"the ``[default]`` section in the :file:`/etc/neutron/plugins/vmware/nsx.ini` "
"file and add the following entries:"
msgstr ""
#: ../networking_config-plugins.rst:126
msgid ""
"To establish and configure the connection with the controller cluster you "
"must set some parameters, including NSX-mh API endpoints, access "
"credentials, and optionally specify settings for HTTP timeouts, redirects "
"and retries in case of connection failures:"
msgstr ""
#: ../networking_config-plugins.rst:140
msgid ""
"To ensure correct operations, the ``nsx_user`` user must have administrator "
"credentials on the NSX-mh platform."
msgstr ""
#: ../networking_config-plugins.rst:143
msgid ""
"A controller API endpoint consists of the IP address and port for the "
"controller; if you omit the port, port 443 is used. If multiple API "
"endpoints are specified, it is up to the user to ensure that all these "
"endpoints belong to the same controller cluster. The OpenStack Networking "
"VMware NSX-mh plug-in does not perform this check, and results might be "
"unpredictable."
msgstr ""
#: ../networking_config-plugins.rst:150
msgid ""
"When you specify multiple API endpoints, the plug-in takes care of load "
"balancing requests on the various API endpoints."
msgstr ""
#: ../networking_config-plugins.rst:153
msgid ""
"The UUID of the NSX-mh transport zone that should be used by default when a "
"tenant creates a network. You can get this value from the Transport Zones "
"page for the NSX-mh manager:"
msgstr ""
#: ../networking_config-plugins.rst:157
msgid ""
"Alternatively the transport zone identifier can be retrieved by query the "
"NSX-mh API: ``/ws.v1/transport-zone``"
msgstr ""
#: ../networking_config-plugins.rst:170
msgid ""
"Ubuntu packaging currently does not update the neutron init script to point "
"to the NSX-mh configuration file. Instead, you must manually update :file:`/"
"etc/default/neutron-server` to add this line:"
msgstr ""
#: ../networking_config-plugins.rst:179 ../networking_config-plugins.rst:243
msgid ""
"For database configuration, see `Install Networking Services <http://docs."
"openstack.org/liberty/install-guide-ubuntu/ neutron-controller-install."
"html>`__ in the Installation Guide."
msgstr ""
#: ../networking_config-plugins.rst:184
msgid "Restart neutron-server to apply settings:"
msgstr ""
#: ../networking_config-plugins.rst:192
msgid ""
"The neutron NSX-mh plug-in does not implement initial re-synchronization of "
"Neutron resources. Therefore resources that might already exist in the "
"database when Neutron is switched to the NSX-mh plug-in will not be created "
"on the NSX-mh backend upon restart."
msgstr ""
#: ../networking_config-plugins.rst:198
msgid "Example :file:`nsx.ini` file:"
msgstr ""
#: ../networking_config-plugins.rst:223
msgid "Configure PLUMgrid plug-in"
msgstr ""
#: ../networking_config-plugins.rst:231
msgid ""
"Edit the [PLUMgridDirector] section in the :file:`/etc/neutron/plugins/"
"plumgrid/plumgrid.ini` file and specify the IP address, port, admin user "
"name, and password of the PLUMgrid Director:"
msgstr ""
#: ../networking_introduction.rst:3
msgid "Introduction to Networking"
msgstr ""
#: ../networking_introduction.rst:5
msgid ""
"The Networking service, code-named neutron, provides an API that lets you "
"define network connectivity and addressing in the cloud. The Networking "
"service enables operators to leverage different networking technologies to "
"power their cloud networking. The Networking service also provides an API to "
"configure and manage a variety of network services ranging from L3 "
"forwarding and NAT to load balancing, edge firewalls, and IPsec VPN."
msgstr ""
#: ../networking_introduction.rst:13
msgid ""
"For a detailed description of the Networking API abstractions and their "
"attributes, see the `OpenStack Networking API v2.0 Reference <http://"
"developer.openstack.org/api-ref-networking-v2.html>`__."
msgstr ""
#: ../networking_introduction.rst:19
msgid ""
"If you use the Networking service, do not run the Compute nova-network "
"service (like you do in traditional Compute deployments). When you configure "
"networking, see the Compute-related topics in this Networking section."
msgstr ""
#: ../networking_introduction.rst:25
msgid "Networking API"
msgstr ""
#: ../networking_introduction.rst:27
msgid ""
"Networking is a virtual network service that provides a powerful API to "
"define the network connectivity and IP addressing that devices from other "
"services, such as Compute, use."
msgstr ""
#: ../networking_introduction.rst:31
msgid ""
"The Compute API has a virtual server abstraction to describe computing "
"resources. Similarly, the Networking API has virtual network, subnet, and "
"port abstractions to describe networking resources."
msgstr ""
# #-#-#-#-# networking_introduction.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# telemetry-measurements.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../networking_introduction.rst:36 ../telemetry-measurements.rst:97
#: ../telemetry-measurements.rst:430 ../telemetry-measurements.rst:484
#: ../telemetry-measurements.rst:528 ../telemetry-measurements.rst:589
#: ../telemetry-measurements.rst:664 ../telemetry-measurements.rst:697
#: ../telemetry-measurements.rst:762 ../telemetry-measurements.rst:812
#: ../telemetry-measurements.rst:849 ../telemetry-measurements.rst:920
#: ../telemetry-measurements.rst:983 ../telemetry-measurements.rst:1069
#: ../telemetry-measurements.rst:1147 ../telemetry-measurements.rst:1212
#: ../telemetry-measurements.rst:1263 ../telemetry-measurements.rst:1289
#: ../telemetry-measurements.rst:1312 ../telemetry-measurements.rst:1332
msgid "Resource"
msgstr ""
#: ../networking_introduction.rst:38
msgid "**Network**"
msgstr ""
#: ../networking_introduction.rst:38
msgid ""
"An isolated L2 segment, analogous to VLAN in the physical networking world."
msgstr ""
#: ../networking_introduction.rst:41
msgid "**Subnet**"
msgstr ""
#: ../networking_introduction.rst:41
msgid "A block of v4 or v6 IP addresses and associated configuration state."
msgstr ""
#: ../networking_introduction.rst:44
msgid "**Port**"
msgstr ""
#: ../networking_introduction.rst:44
msgid ""
"A connection point for attaching a single device, such as the NIC of a "
"virtual server, to a virtual network. Also describes the associated network "
"configuration, such as the MAC and IP addresses to be used on that port."
msgstr ""
#: ../networking_introduction.rst:50
msgid "**Networking resources**"
msgstr ""
#: ../networking_introduction.rst:52
msgid ""
"To configure rich network topologies, you can create and configure networks "
"and subnets and instruct other OpenStack services like Compute to attach "
"virtual devices to ports on these networks."
msgstr ""
#: ../networking_introduction.rst:56
msgid ""
"In particular, Networking supports each tenant having multiple private "
"networks and enables tenants to choose their own IP addressing scheme, even "
"if those IP addresses overlap with those that other tenants use."
msgstr ""
#: ../networking_introduction.rst:60
msgid "The Networking service:"
msgstr ""
#: ../networking_introduction.rst:62
msgid ""
"Enables advanced cloud networking use cases, such as building multi-tiered "
"web applications and enabling migration of applications to the cloud without "
"changing IP addresses."
msgstr ""
#: ../networking_introduction.rst:66
msgid ""
"Offers flexibility for the cloud administrator to customize network "
"offerings."
msgstr ""
#: ../networking_introduction.rst:69
msgid ""
"Enables developers to extend the Networking API. Over time, the extended "
"functionality becomes part of the core Networking API."
msgstr ""
#: ../networking_introduction.rst:73
msgid "Configure SSL support for networking API"
msgstr ""
#: ../networking_introduction.rst:75
msgid ""
"OpenStack Networking supports SSL for the Networking API server. By default, "
"SSL is disabled but you can enable it in the :file:`neutron.conf` file."
msgstr ""
#: ../networking_introduction.rst:79
msgid "Set these options to configure SSL:"
msgstr ""
#: ../networking_introduction.rst:82
msgid "Enables SSL on the networking API server."
msgstr ""
#: ../networking_introduction.rst:82
msgid "``use_ssl = True``"
msgstr ""
#: ../networking_introduction.rst:85
msgid ""
"Certificate file that is used when you securely start the Networking API "
"server."
msgstr ""
#: ../networking_introduction.rst:86
msgid "``ssl_cert_file = PATH_TO_CERTFILE``"
msgstr ""
#: ../networking_introduction.rst:89
msgid ""
"Private key file that is used when you securely start the Networking API "
"server."
msgstr ""
#: ../networking_introduction.rst:90
msgid "``ssl_key_file = PATH_TO_KEYFILE``"
msgstr ""
#: ../networking_introduction.rst:93
msgid ""
"Optional. CA certificate file that is used when you securely start the "
"Networking API server. This file verifies connecting clients. Set this "
"option when API clients must authenticate to the API server by using SSL "
"certificates that are signed by a trusted CA."
msgstr ""
#: ../networking_introduction.rst:96
msgid "``ssl_ca_file = PATH_TO_CAFILE``"
msgstr ""
#: ../networking_introduction.rst:99
msgid ""
"The value of TCP\\_KEEPIDLE, in seconds, for each server socket when "
"starting the API server. Not supported on OS X."
msgstr ""
#: ../networking_introduction.rst:100
msgid "``tcp_keepidle = 600``"
msgstr ""
#: ../networking_introduction.rst:103
msgid "Number of seconds to keep retrying to listen."
msgstr ""
#: ../networking_introduction.rst:103
msgid "``retry_until_window = 30``"
msgstr ""
#: ../networking_introduction.rst:106
msgid "Number of backlog requests with which to configure the socket."
msgstr ""
#: ../networking_introduction.rst:106
msgid "``backlog = 4096``"
msgstr ""
#: ../networking_introduction.rst:109
msgid "Load-Balancer-as-a-Service (LBaaS) overview"
msgstr ""
#: ../networking_introduction.rst:111
msgid ""
"Load-Balancer-as-a-Service (LBaaS) enables Networking to distribute incoming "
"requests evenly among designated instances. This distribution ensures that "
"the workload is shared predictably among instances and enables more "
"effective use of system resources. Use one of these load balancing methods "
"to distribute incoming requests:"
msgstr ""
#: ../networking_introduction.rst:118
msgid "Rotates requests evenly between multiple instances."
msgstr ""
#: ../networking_introduction.rst:118
msgid "Round robin"
msgstr ""
#: ../networking_introduction.rst:121
msgid ""
"Requests from a unique source IP address are consistently directed to the "
"same instance."
msgstr ""
#: ../networking_introduction.rst:122
msgid "Source IP"
msgstr ""
#: ../networking_introduction.rst:125
msgid ""
"Allocates requests to the instance with the least number of active "
"connections."
msgstr ""
#: ../networking_introduction.rst:126
msgid "Least connections"
msgstr ""
#: ../networking_introduction.rst:129
msgid "Feature"
msgstr ""
#: ../networking_introduction.rst:131
msgid "**Monitors**"
msgstr ""
#: ../networking_introduction.rst:131
msgid ""
"LBaaS provides availability monitoring with the ``ping``, TCP, HTTP and "
"HTTPS GET methods. Monitors are implemented to determine whether pool "
"members are available to handle requests."
msgstr ""
#: ../networking_introduction.rst:136
msgid "**Management**"
msgstr ""
#: ../networking_introduction.rst:136
msgid ""
"LBaaS is managed using a variety of tool sets. The REST API is available for "
"programmatic administration and scripting. Users perform administrative "
"management of load balancers through either the CLI (``neutron``) or the "
"OpenStack dashboard."
msgstr ""
#: ../networking_introduction.rst:143
msgid "**Connection limits**"
msgstr ""
#: ../networking_introduction.rst:143
msgid ""
"Ingress traffic can be shaped with *connection limits*. This feature allows "
"workload control, and can also assist with mitigating DoS (Denial of "
"Service) attacks."
msgstr ""
#: ../networking_introduction.rst:148
msgid "**Session persistence**"
msgstr ""
#: ../networking_introduction.rst:148
msgid ""
"LBaaS supports session persistence by ensuring incoming requests are routed "
"to the same instance within a pool of multiple instances. LBaaS supports "
"routing decisions based on cookies and source IP address."
msgstr ""
#: ../networking_introduction.rst:157
msgid "Firewall-as-a-Service (FWaaS) overview"
msgstr ""
#: ../networking_introduction.rst:159
msgid ""
"The Firewall-as-a-Service (FWaaS) plug-in adds perimeter firewall management "
"to Networking. FWaaS uses iptables to apply firewall policy to all "
"Networking routers within a project. FWaaS supports one firewall policy and "
"logical firewall instance per project."
msgstr ""
#: ../networking_introduction.rst:164
msgid ""
"Whereas security groups operate at the instance-level, FWaaS operates at the "
"perimeter to filter traffic at the neutron router."
msgstr ""
#: ../networking_introduction.rst:169
msgid ""
"FWaaS is currently in technical preview; untested operation is not "
"recommended."
msgstr ""
#: ../networking_introduction.rst:172
msgid ""
"The example diagram illustrates the flow of ingress and egress traffic for "
"the VM2 instance:"
msgstr ""
#: ../networking_introduction.rst:175
msgid "|FWaaS architecture|"
msgstr ""
#: ../networking_introduction.rst:177
msgid "**To enable FWaaS**"
msgstr ""
#: ../networking_introduction.rst:179
msgid "FWaaS management options are also available in the OpenStack dashboard."
msgstr ""
#: ../networking_introduction.rst:181
msgid "Enable the FWaaS plug-in in the :file:`/etc/neutron/neutron.conf` file:"
msgstr ""
#: ../networking_introduction.rst:198
msgid ""
"On Ubuntu, modify the ``[fwaas]`` section in the :file:`/etc/neutron/"
"fwaas_driver.ini` file instead of :file:`/etc/neutron/neutron.conf`."
msgstr ""
#: ../networking_introduction.rst:208
msgid ""
"Enable the option in the :file:`/usr/share/openstack-dashboard/"
"openstack_dashboard/local/local_settings.py` file, which is typically "
"located on the controller node:"
msgstr ""
#: ../networking_introduction.rst:220
msgid "Apply the settings by restarting the web server."
msgstr ""
#: ../networking_introduction.rst:222
msgid ""
"Restart the neutron-l3-agent and neutron-server services to apply the "
"settings."
msgstr ""
#: ../networking_introduction.rst:225
msgid "**To configure Firewall-as-a-Service**"
msgstr ""
#: ../networking_introduction.rst:227
msgid ""
"Create the firewall rules and create a policy that contains them. Then, "
"create a firewall that applies the policy."
msgstr ""
#: ../networking_introduction.rst:230
msgid "Create a firewall rule:"
msgstr ""
#: ../networking_introduction.rst:237
msgid ""
"The Networking client requires a protocol value; if the rule is protocol "
"agnostic, you can use the ``any`` value."
msgstr ""
#: ../networking_introduction.rst:240
msgid "Create a firewall policy:"
msgstr ""
#: ../networking_introduction.rst:247
msgid ""
"Separate firewall rule IDs or names with spaces. The order in which you "
"specify the rules is important."
msgstr ""
#: ../networking_introduction.rst:250
msgid ""
"You can create a firewall policy without any rules and add rules later, as "
"follows:"
msgstr ""
#: ../networking_introduction.rst:253
msgid "To add multiple rules, use the update operation."
msgstr ""
#: ../networking_introduction.rst:255
msgid "To add a single rule, use the insert-rule operation."
msgstr ""
#: ../networking_introduction.rst:257
msgid ""
"For more details, see `Networking command-line client <http://docs.openstack."
"org/cli-reference/content/ neutronclient_commands."
"html#neutronclient_subcommand_ firewall-policy-create>`__ in the OpenStack "
"Command-Line Interface Reference."
msgstr ""
#: ../networking_introduction.rst:265
msgid ""
"FWaaS always adds a default ``deny all`` rule at the lowest precedence of "
"each policy. Consequently, a firewall policy with no rules blocks all "
"traffic by default."
msgstr ""
#: ../networking_introduction.rst:269
msgid "Create a firewall:"
msgstr ""
#: ../networking_introduction.rst:277
msgid ""
"The firewall remains in PENDING\\_CREATE state until you create a Networking "
"router and attach an interface to it."
msgstr ""
#: ../networking_introduction.rst:280
msgid "**Allowed-address-pairs.**"
msgstr ""
#: ../networking_introduction.rst:282
msgid ""
"``Allowed-address-pairs`` enables you to specify mac_address/"
"ip_address(cidr) pairs that pass through a port regardless of subnet. This "
"enables the use of protocols such as VRRP, which floats an IP address "
"between two instances to enable fast data plane failover."
msgstr ""
#: ../networking_introduction.rst:289
msgid ""
"Currently, only the ML2, Open vSwitch, and VMware NSX plug-ins support the "
"allowed-address-pairs extension."
msgstr ""
#: ../networking_introduction.rst:292
msgid "**Basic allowed-address-pairs operations.**"
msgstr ""
#: ../networking_introduction.rst:294
msgid "Create a port with a specified allowed address pair:"
msgstr ""
#: ../networking_introduction.rst:301
msgid "Update a port by adding allowed address pairs:"
msgstr ""
#: ../networking_introduction.rst:312
msgid "Virtual-Private-Network-as-a-Service (VPNaaS)"
msgstr ""
#: ../networking_introduction.rst:314
msgid ""
"The VPNaaS extension enables OpenStack tenants to extend private networks "
"across the internet."
msgstr ""
#: ../networking_introduction.rst:317
msgid "This extension introduces these resources:"
msgstr ""
#: ../networking_introduction.rst:319
msgid ""
":term:`service`. A parent object that associates VPN with a specific subnet "
"and router."
msgstr ""
#: ../networking_introduction.rst:322
msgid ""
"The Internet Key Exchange (IKE) policy that identifies the authentication "
"and encryption algorithm to use during phase one and two negotiation of a "
"VPN connection."
msgstr ""
#: ../networking_introduction.rst:326
msgid ""
"The IP security policy that specifies the authentication and encryption "
"algorithm and encapsulation mode to use for the established VPN connection."
msgstr ""
#: ../networking_introduction.rst:330
msgid ""
"Details for the site-to-site IPsec connection, including the peer CIDRs, "
"MTU, authentication mode, peer address, DPD settings, and status."
msgstr ""
#: ../networking_introduction.rst:333
msgid "This initial implementation of the VPNaaS extension provides:"
msgstr ""
#: ../networking_introduction.rst:335
msgid "Site-to-site VPN that connects two private networks."
msgstr ""
#: ../networking_introduction.rst:337
msgid "Multiple VPN connections per tenant."
msgstr ""
#: ../networking_introduction.rst:339
msgid ""
"IKEv1 policy support with 3des, aes-128, aes-256, or aes-192 encryption."
msgstr ""
#: ../networking_introduction.rst:341
msgid ""
"IPSec policy support with 3des, aes-128, aes-192, or aes-256 encryption, "
"sha1 authentication, ESP, AH, or AH-ESP transform protocol, and tunnel or "
"transport mode encapsulation."
msgstr ""
#: ../networking_introduction.rst:345
msgid ""
"Dead Peer Detection (DPD) with hold, clear, restart, disabled, or restart-by-"
"peer actions."
msgstr ""
#: ../networking_multi-dhcp-agents.rst:3
msgid "Scalable and highly available DHCP agents"
msgstr ""
#: ../networking_multi-dhcp-agents.rst:5
msgid ""
"This section describes how to use the agent management (alias agent) and "
"scheduler (alias agent_scheduler) extensions for DHCP agents scalability and "
"HA."
msgstr ""
#: ../networking_multi-dhcp-agents.rst:11
msgid ""
"Use the :command:`neutron ext-list` client command to check if these "
"extensions are enabled:"
msgstr ""
#: ../networking_multi-dhcp-agents.rst:33
msgid "There will be three hosts in the setup."
msgstr ""
#: ../networking_multi-dhcp-agents.rst:41
msgid "OpenStack controller host - controlnod"
msgstr ""
#: ../networking_multi-dhcp-agents.rst:42
msgid ""
"Runs the Networking, Identity, and Compute services that are required to "
"deploy VMs. The node must have at least one network interface that is "
"connected to the Management Network. Note that ``nova-network`` should not "
"be running because it is replaced by Neutron."
msgstr ""
#: ../networking_multi-dhcp-agents.rst:47
msgid "Runs ``nova-compute``, the Neutron L2 agent and DHCP agent"
msgstr ""
#: ../networking_multi-dhcp-agents.rst:49
msgid "Same as HostA"
msgstr ""
#: ../networking_multi-dhcp-agents.rst:51
msgid "**Hosts for demo**"
msgstr ""
#: ../networking_multi-dhcp-agents.rst:56
msgid "**controlnode: neutron server**"
msgstr ""
#: ../networking_multi-dhcp-agents.rst:58
#: ../networking_multi-dhcp-agents.rst:85
msgid "Neutron configuration file :file:`/etc/neutron/neutron.conf`:"
msgstr ""
#: ../networking_multi-dhcp-agents.rst:69
#: ../networking_multi-dhcp-agents.rst:95
msgid ""
"Update the plug-in configuration file :file:`/etc/neutron/plugins/"
"linuxbridge/linuxbridge_conf.ini`:"
msgstr ""
#: ../networking_multi-dhcp-agents.rst:83
msgid "**HostA and HostB: L2 agent**"
msgstr ""
#: ../networking_multi-dhcp-agents.rst:109
msgid "Update the nova configuration file :file:`/etc/nova/nova.conf`:"
msgstr ""
#: ../networking_multi-dhcp-agents.rst:125
msgid "**HostA and HostB: DHCP agent**"
msgstr ""
#: ../networking_multi-dhcp-agents.rst:127
msgid "Update the DHCP configuration file :file:`/etc/neutron/dhcp_agent.ini`:"
msgstr ""
#: ../networking_multi-dhcp-agents.rst:135
msgid "Commands in agent management and scheduler extensions"
msgstr ""
#: ../networking_multi-dhcp-agents.rst:137
msgid ""
"The following commands require the tenant running the command to have an "
"admin role."
msgstr ""
#: ../networking_multi-dhcp-agents.rst:142
msgid ""
"Ensure that the following environment variables are set. These are used by "
"the various clients to access the Identity service."
msgstr ""
#: ../networking_multi-dhcp-agents.rst:152
msgid "**Settings**"
msgstr ""
#: ../networking_multi-dhcp-agents.rst:154
msgid "To experiment, you need VMs and a neutron network:"
msgstr ""
#: ../networking_multi-dhcp-agents.rst:176
msgid "**Manage agents in neutron deployment**"
msgstr ""
#: ../networking_multi-dhcp-agents.rst:178
msgid ""
"Every agent that supports these extensions will register itself with the "
"neutron server when it starts up."
msgstr ""
#: ../networking_multi-dhcp-agents.rst:181
msgid "List all agents:"
msgstr ""
#: ../networking_multi-dhcp-agents.rst:196
msgid ""
"The output shows information for four agents. The ``alive`` field shows "
"``:-)`` if the agent reported its state within the period defined by the "
"``agent_down_time`` option in the :file:`neutron.conf` file. Otherwise the "
"``alive`` is ``xxx``."
msgstr ""
#: ../networking_multi-dhcp-agents.rst:201
msgid "List the DHCP agents that host a specified network:"
msgstr ""
#: ../networking_multi-dhcp-agents.rst:203
msgid ""
"In some deployments, one DHCP agent is not enough to hold all network data. "
"In addition, you must have a backup for it even when the deployment is "
"small. The same network can be assigned to more than one DHCP agent and one "
"DHCP agent can host more than one network."
msgstr ""
#: ../networking_multi-dhcp-agents.rst:208
msgid "List DHCP agents that host a specified network:"
msgstr ""
#: ../networking_multi-dhcp-agents.rst:220
msgid "List the networks hosted by a given DHCP agent:"
msgstr ""
#: ../networking_multi-dhcp-agents.rst:222
msgid "This command is to show which networks a given dhcp agent is managing."
msgstr ""
#: ../networking_multi-dhcp-agents.rst:236
msgid "Show agent details."
msgstr ""
#: ../networking_multi-dhcp-agents.rst:238
msgid "The :command:`agent-show` command shows details for a specified agent:"
msgstr ""
#: ../networking_multi-dhcp-agents.rst:268
msgid ""
"In this output, ``heartbeat_timestamp`` is the time on the neutron server. "
"You do not need to synchronize all agents to this time for this extension to "
"run correctly. ``configurations`` describes the static configuration for the "
"agent or run time data. This agent is a DHCP agent and it hosts one network, "
"one subnet, and three ports."
msgstr ""
#: ../networking_multi-dhcp-agents.rst:274
msgid ""
"Different types of agents show different details. The following output shows "
"information for a Linux bridge agent:"
msgstr ""
#: ../networking_multi-dhcp-agents.rst:302
msgid ""
"The output shows ``bridge-mapping`` and the number of virtual network "
"devices on this L2 agent."
msgstr ""
#: ../networking_multi-dhcp-agents.rst:305
msgid "**Manage assignment of networks to DHCP agent**"
msgstr ""
#: ../networking_multi-dhcp-agents.rst:307
msgid ""
"Now that you have run the :command:`net-list-on-dhcp-agent` and :command:"
"`dhcp-agent-list-hosting-net` commands, you can add a network to a DHCP "
"agent and remove one from it."
msgstr ""
#: ../networking_multi-dhcp-agents.rst:311
msgid "Default scheduling."
msgstr ""
#: ../networking_multi-dhcp-agents.rst:313
msgid ""
"When you create a network with one port, you can schedule it to an active "
"DHCP agent. If many active DHCP agents are running, select one randomly. You "
"can design more sophisticated scheduling algorithms in the same way as nova-"
"schedule later on."
msgstr ""
#: ../networking_multi-dhcp-agents.rst:331
msgid ""
"It is allocated to DHCP agent on HostA. If you want to validate the behavior "
"through the :command:`dnsmasq` command, you must create a subnet for the "
"network because the DHCP agent starts the dnsmasq service only if there is a "
"DHCP."
msgstr ""
#: ../networking_multi-dhcp-agents.rst:336
msgid "Assign a network to a given DHCP agent."
msgstr ""
#: ../networking_multi-dhcp-agents.rst:338
msgid "To add another DHCP agent to host the network, run this command:"
msgstr ""
#: ../networking_multi-dhcp-agents.rst:355
msgid "Remove a network from a specified DHCP agent."
msgstr ""
#: ../networking_multi-dhcp-agents.rst:357
msgid ""
"This command is the sibling command for the previous one. Remove ``net2`` "
"from the DHCP agent for HostA:"
msgstr ""
#: ../networking_multi-dhcp-agents.rst:373
msgid ""
"You can see that only the DHCP agent for HostB is hosting the ``net2`` "
"network."
msgstr ""
#: ../networking_multi-dhcp-agents.rst:376
msgid "**HA of DHCP agents**"
msgstr ""
#: ../networking_multi-dhcp-agents.rst:378
msgid ""
"Boot a VM on net2. Let both DHCP agents host ``net2``. Fail the agents in "
"turn to see if the VM can still get the desired IP."
msgstr ""
#: ../networking_multi-dhcp-agents.rst:381
msgid "Boot a VM on net2:"
msgstr ""
#: ../networking_multi-dhcp-agents.rst:416
msgid "Make sure both DHCP agents hosting 'net2':"
msgstr ""
#: ../networking_multi-dhcp-agents.rst:418
msgid "Use the previous commands to assign the network to agents."
msgstr ""
#: ../networking_multi-dhcp-agents.rst:431
msgid "**Test the HA**"
msgstr ""
#: ../networking_multi-dhcp-agents.rst:433
msgid ""
"Log in to the ``myserver4`` VM, and run ``udhcpc``, ``dhclient`` or other "
"DHCP client."
msgstr ""
#: ../networking_multi-dhcp-agents.rst:436
msgid ""
"Stop the DHCP agent on HostA. Besides stopping the ``neutron-dhcp-agent`` "
"binary, you must stop the ``dnsmasq`` processes."
msgstr ""
#: ../networking_multi-dhcp-agents.rst:439
msgid "Run a DHCP client in VM to see if it can get the wanted IP."
msgstr ""
#: ../networking_multi-dhcp-agents.rst:441
msgid "Stop the DHCP agent on HostB too."
msgstr ""
#: ../networking_multi-dhcp-agents.rst:443
msgid "Run ``udhcpc`` in the VM; it cannot get the wanted IP."
msgstr ""
#: ../networking_multi-dhcp-agents.rst:445
msgid "Start DHCP agent on HostB. The VM gets the wanted IP again."
msgstr ""
#: ../networking_multi-dhcp-agents.rst:447
msgid "**Disable and remove an agent**"
msgstr ""
#: ../networking_multi-dhcp-agents.rst:449
msgid ""
"An administrator might want to disable an agent if a system hardware or "
"software upgrade is planned. Some agents that support scheduling also "
"support disabling and enabling agents, such as L3 and DHCP agents. After the "
"agent is disabled, the scheduler does not schedule new resources to the "
"agent. After the agent is disabled, you can safely remove the agent. Remove "
"the resources on the agent before you delete the agent."
msgstr ""
#: ../networking_multi-dhcp-agents.rst:456
msgid "To run the following commands, you must stop the DHCP agent on HostA."
msgstr ""
#: ../networking_multi-dhcp-agents.rst:487
msgid ""
"After deletion, if you restart the DHCP agent, it appears on the agent list "
"again."
msgstr ""
#: ../networking_use.rst:3
msgid "Use Networking"
msgstr ""
#: ../networking_use.rst:5
msgid ""
"You can manage OpenStack Networking services by using the service command. "
"For example:"
msgstr ""
#: ../networking_use.rst:15
msgid "Log files are in the :file:`/var/log/neutron` directory."
msgstr ""
#: ../networking_use.rst:17
msgid "Configuration files are in the :file:`/etc/neutron` directory."
msgstr ""
#: ../networking_use.rst:19
msgid ""
"Cloud administrators and tenants can use OpenStack Networking to build rich "
"network topologies. Cloud administrators can create network connectivity on "
"behalf of tenants."
msgstr ""
#: ../networking_use.rst:24
msgid "Core Networking API features"
msgstr ""
#: ../networking_use.rst:26
msgid ""
"After you install and configure Networking, tenants and administrators can "
"perform create-read-update-delete (CRUD) API networking operations by using "
"the Networking API directly or neutron command-line interface (CLI). The "
"neutron CLI is a wrapper around the Networking API. Every Networking API "
"call has a corresponding neutron command."
msgstr ""
#: ../networking_use.rst:32
msgid ""
"The CLI includes a number of options. For details, see the `OpenStack End "
"User Guide <http://docs.openstack.org/user-guide/index.html>`__."
msgstr ""
#: ../networking_use.rst:36
msgid "Basic Networking operations"
msgstr ""
#: ../networking_use.rst:38
msgid ""
"To learn about advanced capabilities available through the neutron command-"
"line interface (CLI), read the networking section in the `OpenStack End User "
"Guide <http://docs.openstack.org/user-guide/index.html>`__."
msgstr ""
#: ../networking_use.rst:43
msgid ""
"This table shows example neutron commands that enable you to complete basic "
"network operations:"
msgstr ""
#: ../networking_use.rst:49
msgid "Creates a network."
msgstr ""
#: ../networking_use.rst:51
msgid "``$ neutron net-create net1``"
msgstr ""
#: ../networking_use.rst:53
msgid "Creates a subnet that is associated with net1."
msgstr ""
#: ../networking_use.rst:56
msgid "``$ neutron subnet-create`` ``net1 10.0.0.0/24``"
msgstr ""
#: ../networking_use.rst:59
msgid "Lists ports for a specified tenant."
msgstr ""
#: ../networking_use.rst:62
msgid "``$ neutron port-list``"
msgstr ""
#: ../networking_use.rst:64
msgid ""
"Lists ports for a specified tenant and displays the ``id``, ``fixed_ips``, "
"and ``device_owner`` columns."
msgstr ""
#: ../networking_use.rst:71
msgid "``$ neutron port-list -c id`` ``-c fixed_ips -c device_owner``"
msgstr ""
#: ../networking_use.rst:74
msgid "Shows information for a specified port."
msgstr ""
#: ../networking_use.rst:76
msgid "``$ neutron port-show PORT_ID``"
msgstr ""
#: ../networking_use.rst:79
msgid "**Basic Networking operations**"
msgstr ""
#: ../networking_use.rst:83
msgid ""
"The ``device_owner`` field describes who owns the port. A port whose "
"``device_owner`` begins with:"
msgstr ""
#: ../networking_use.rst:86
msgid "``network`` is created by Networking."
msgstr ""
#: ../networking_use.rst:88
msgid "``compute`` is created by Compute."
msgstr ""
#: ../networking_use.rst:91
msgid "Administrative operations"
msgstr ""
#: ../networking_use.rst:93
msgid ""
"The cloud administrator can run any :command:`neutron` command on behalf of "
"tenants by specifying an Identity ``tenant_id`` in the command, as follows:"
msgstr ""
#: ../networking_use.rst:109
msgid ""
"To view all tenant IDs in Identity, run the following command as an Identity "
"service admin user:"
msgstr ""
#: ../networking_use.rst:117
msgid "Advanced Networking operations"
msgstr ""
#: ../networking_use.rst:119
msgid ""
"This table shows example Networking commands that enable you to complete "
"advanced network operations:"
msgstr ""
#: ../networking_use.rst:125
msgid "Creates a network that all tenants can use."
msgstr ""
#: ../networking_use.rst:128
msgid "``$ neutron net-create`` ``--shared public-net``"
msgstr ""
#: ../networking_use.rst:131
msgid "Creates a subnet with a specified gateway IP address."
msgstr ""
#: ../networking_use.rst:134
msgid "``$ neutron subnet-create`` ``--gateway 10.0.0.254 net1 10.0.0.0/24``"
msgstr ""
#: ../networking_use.rst:137
msgid "Creates a subnet that has no gateway IP address."
msgstr ""
#: ../networking_use.rst:140
msgid "``$ neutron subnet-create`` ``--no-gateway net1 10.0.0.0/24``"
msgstr ""
#: ../networking_use.rst:143
msgid "Creates a subnet with DHCP disabled."
msgstr ""
#: ../networking_use.rst:146
msgid "``$ neutron subnet-create`` ``net1 10.0.0.0/24 --enable-dhcp False``"
msgstr ""
#: ../networking_use.rst:149
msgid "Specifies a set of host routes"
msgstr ""
#: ../networking_use.rst:151
msgid ""
"``$ neutron subnet-create`` ``test-net1 40.0.0.0/24 --host-routes`` "
"``type=dict list=true`` ``destination=40.0.1.0/24,`` ``nexthop=40.0.0.2``"
msgstr ""
#: ../networking_use.rst:157
msgid "Creates a subnet with a specified set of dns name servers."
msgstr ""
#: ../networking_use.rst:161
msgid ""
"``$ neutron subnet-create test-net1`` ``40.0.0.0/24 --dns-nameservers`` "
"``list=true 8.8.4.4 8.8.8.8``"
msgstr ""
#: ../networking_use.rst:165
msgid "Displays all ports and IPs allocated on a network."
msgstr ""
#: ../networking_use.rst:168
msgid "``$ neutron port-list --network_id NET_ID``"
msgstr ""
#: ../networking_use.rst:171
msgid "**Advanced Networking operations**"
msgstr ""
#: ../networking_use.rst:174
msgid "Use Compute with Networking"
msgstr ""
#: ../networking_use.rst:177
msgid "Basic Compute and Networking operations"
msgstr ""
#: ../networking_use.rst:179
msgid ""
"This table shows example neutron and nova commands that enable you to "
"complete basic VM networking operations:"
msgstr ""
#: ../networking_use.rst:183
msgid "Action"
msgstr ""
#: ../networking_use.rst:185
msgid "Checks available networks."
msgstr ""
#: ../networking_use.rst:187
msgid "``$ neutron net-list``"
msgstr ""
#: ../networking_use.rst:189
msgid "Boots a VM with a single NIC on a selected Networking network."
msgstr ""
#: ../networking_use.rst:192
msgid ""
"``$ nova boot --image IMAGE --flavor`` ``FLAVOR --nic net-id=NET_ID VM_NAME``"
msgstr ""
#: ../networking_use.rst:195
msgid ""
"Searches for ports with a ``device_id`` that matches the Compute instance "
"UUID. See :ref: `Create and delete VMs`"
msgstr ""
#: ../networking_use.rst:200
msgid "``$ neutron port-list --device_id VM_ID``"
msgstr ""
#: ../networking_use.rst:202
msgid "Searches for ports, but shows only the ``mac_address`` of the port."
msgstr ""
#: ../networking_use.rst:206
msgid "``$ neutron port-list --field`` ``mac_address --device_id VM_ID``"
msgstr ""
#: ../networking_use.rst:209
msgid "Temporarily disables a port from sending traffic."
msgstr ""
#: ../networking_use.rst:212
msgid "``$ neutron port-update PORT_ID`` ``--admin_state_up False``"
msgstr ""
#: ../networking_use.rst:216
msgid "**Basic Compute and Networking operations**"
msgstr ""
#: ../networking_use.rst:220
msgid "The ``device_id`` can also be a logical router ID."
msgstr ""
#: ../networking_use.rst:224
msgid ""
"When you boot a Compute VM, a port on the network that corresponds to the VM "
"NIC is automatically created and associated with the default security group. "
"You can configure `security group rules <#enabling_ping_and_ssh>`__ to "
"enable users to access the VM."
msgstr ""
#: ../networking_use.rst:235
msgid "Advanced VM creation operations"
msgstr ""
#: ../networking_use.rst:237
msgid ""
"This table shows example nova and neutron commands that enable you to "
"complete advanced VM creation operations:"
msgstr ""
#: ../networking_use.rst:243
msgid "Boots a VM with multiple NICs."
msgstr ""
#: ../networking_use.rst:246
msgid ""
"``$ nova boot --image IMAGE --flavor`` ``FLAVOR --nic net-id=NET1-ID --nic`` "
"``net-id=NET2-ID VM_NAME``"
msgstr ""
#: ../networking_use.rst:250
msgid ""
"Boots a VM with a specific IP address. Note that you cannot use the ``--num-"
"instances`` parameter in this case."
msgstr ""
#: ../networking_use.rst:256
msgid "``$ nova boot --image IMAGE --flavor``"
msgstr ""
#: ../networking_use.rst:256
msgid "``FLAVOR --nic net-id=NET-ID,`` ``v4-fixed-ip=IP-ADDR VM_NAME``"
msgstr ""
#: ../networking_use.rst:259
msgid ""
"Boots a VM that connects to all networks that are accessible to the tenant "
"who submits the request (without the ``--nic`` option)."
msgstr ""
#: ../networking_use.rst:264
msgid "``$ nova boot --image IMAGE --flavor`` ``FLAVOR VM_NAME``"
msgstr ""
#: ../networking_use.rst:268
msgid "**Advanced VM creation operations**"
msgstr ""
#: ../networking_use.rst:272
msgid ""
"Cloud images that distribution vendors offer usually have only one active "
"NIC configured. When you boot with multiple NICs, you must configure "
"additional interfaces on the image or the NICs are not reachable."
msgstr ""
#: ../networking_use.rst:277
msgid ""
"The following Debian/Ubuntu-based example shows how to set up the interfaces "
"within the instance in the ``/etc/network/interfaces`` file. You must apply "
"this configuration to the image."
msgstr ""
#: ../networking_use.rst:294
msgid "Enable ping and SSH on VMs (security groups)"
msgstr ""
#: ../networking_use.rst:296
msgid ""
"You must configure security group rules depending on the type of plug-in you "
"are using. If you are using a plug-in that:"
msgstr ""
#: ../networking_use.rst:299
msgid ""
"Implements Networking security groups, you can configure security group "
"rules directly by using the :command:`neutron security-group-rule-create` "
"command. This example enables ``ping`` and ``ssh`` access to your VMs."
msgstr ""
#: ../networking_use.rst:313
msgid ""
"Does not implement Networking security groups, you can configure security "
"group rules by using the :command:`nova secgroup-add-rule` or :command:`euca-"
"authorize` command. These :command:`nova` commands enable ``ping`` and "
"``ssh`` access to your VMs."
msgstr ""
#: ../networking_use.rst:325
msgid ""
"If your plug-in implements Networking security groups, you can also leverage "
"Compute security groups by setting ``security_group_api = neutron`` in the :"
"file:`nova.conf` file. After you set this option, all Compute security group "
"commands are proxied to Networking."
msgstr ""
#: ../objectstorage-admin.rst:3
msgid "System administration for Object Storage"
msgstr ""
#: ../objectstorage-admin.rst:5
msgid ""
"By understanding Object Storage concepts, you can better monitor and "
"administer your storage solution. The majority of the administration "
"information is maintained in developer documentation at `docs.openstack.org/"
"developer/swift/ <http://docs.openstack.org/developer/swift/>`__."
msgstr ""
#: ../objectstorage-admin.rst:10
msgid ""
"See the `OpenStack Configuration Reference <http://docs.openstack.org/"
"liberty/config-reference/content/ch_configuring-object-storage.html>`__ for "
"a list of configuration options for Object Storage."
msgstr ""
#: ../objectstorage-monitoring.rst:3
msgid "Object Storage monitoring"
msgstr ""
#: ../objectstorage-monitoring.rst:5
msgid ""
"Excerpted from a blog post by `Darrell Bishop <http://swiftstack.com/"
"blog/2012/04/11/swift-monitoring-with-statsd>`__"
msgstr ""
#: ../objectstorage-monitoring.rst:8
msgid ""
"An OpenStack Object Storage cluster is a collection of many daemons that "
"work together across many nodes. With so many different components, you must "
"be able to tell what is going on inside the cluster. Tracking server-level "
"meters like CPU utilization, load, memory consumption, disk usage and "
"utilization, and so on is necessary, but not sufficient."
msgstr ""
#: ../objectstorage-monitoring.rst:14
msgid ""
"What are different daemons are doing on each server? What is the volume of "
"object replication on node8? How long is it taking? Are there errors? If so, "
"when did they happen?"
msgstr ""
#: ../objectstorage-monitoring.rst:18
msgid ""
"In such a complex ecosystem, you can use multiple approaches to get the "
"answers to these questions. This section describes several approaches."
msgstr ""
#: ../objectstorage-monitoring.rst:22
msgid "Swift Recon"
msgstr ""
#: ../objectstorage-monitoring.rst:24
msgid ""
"The Swift Recon middleware (see http://swift.openstack.org/admin_guide."
"html#cluster-telemetry-and-monitoring) provides general machine statistics, "
"such as load average, socket statistics, ``/proc/meminfo`` contents, and so "
"on, as well as Swift-specific meters:"
msgstr ""
#: ../objectstorage-monitoring.rst:30
msgid "The MD5 sum of each ring file."
msgstr ""
#: ../objectstorage-monitoring.rst:32
msgid "The most recent object replication time."
msgstr ""
#: ../objectstorage-monitoring.rst:34
msgid "Count of each type of quarantined file: Account, container, or object."
msgstr ""
#: ../objectstorage-monitoring.rst:37
msgid "Count of \"async\\_pendings\" (deferred container updates) on disk."
msgstr ""
#: ../objectstorage-monitoring.rst:39
msgid ""
"Swift Recon is middleware that is installed in the object servers pipeline "
"and takes one required option: A local cache directory. To track "
"``async_pendings``, you must set up an additional cron job for each object "
"server. You access data by either sending HTTP requests directly to the "
"object server or using the ``swift-recon`` command-line client."
msgstr ""
#: ../objectstorage-monitoring.rst:46
msgid ""
"There are some good Object Storage cluster statistics but the general server "
"meters overlap with existing server monitoring systems. To get the Swift-"
"specific meters into a monitoring system, they must be polled. Swift Recon "
"essentially acts as a middleware meters collector. The process that feeds "
"meters to your statistics system, such as ``collectd`` and ``gmond``, "
"probably already runs on the storage node. So, you can choose to either talk "
"to Swift Recon or collect the meters directly."
msgstr ""
#: ../objectstorage-monitoring.rst:56
msgid "Swift-Informant"
msgstr ""
#: ../objectstorage-monitoring.rst:58
msgid ""
"Florian Hines developed the Swift-Informant middleware (see https://github."
"com/pandemicsyn/swift-informant) to get real-time visibility into Object "
"Storage client requests. It sits in the pipeline for the proxy server, and "
"after each request to the proxy server, sends three meters to a StatsD "
"server (see http://codeascraft.etsy.com/2011/02/15/measure-anything-measure-"
"everything/):"
msgstr ""
#: ../objectstorage-monitoring.rst:65
msgid ""
"A counter increment for a meter like ``obj.GET.200`` or ``cont.PUT.404``."
msgstr ""
#: ../objectstorage-monitoring.rst:68
msgid ""
"Timing data for a meter like ``acct.GET.200`` or ``obj.GET.200``. [The "
"README says the meters look like ``duration.acct.GET.200``, but I do not see "
"the ``duration`` in the code. I am not sure what the Etsy server does but "
"our StatsD server turns timing meters into five derivative meters with new "
"segments appended, so it probably works as coded. The first meter turns into "
"``acct.GET.200.lower``, ``acct.GET.200.upper``, ``acct.GET.200.mean``, "
"``acct.GET.200.upper_90``, and ``acct.GET.200.count``]."
msgstr ""
#: ../objectstorage-monitoring.rst:77
msgid ""
"A counter increase by the bytes transferred for a meter like ``tfer.obj."
"PUT.201``."
msgstr ""
#: ../objectstorage-monitoring.rst:80
msgid ""
"This is good for getting a feel for the quality of service clients are "
"experiencing with the timing meters, as well as getting a feel for the "
"volume of the various permutations of request server type, command, and "
"response code. Swift-Informant also requires no change to core Object "
"Storage code because it is implemented as middleware. However, it gives you "
"no insight into the workings of the cluster past the proxy server. If the "
"responsiveness of one storage node degrades, you can only see that some of "
"your requests are bad, either as high latency or error status codes. You do "
"not know exactly why or where that request tried to go. Maybe the container "
"server in question was on a good node but the object server was on a "
"different, poorly-performing node."
msgstr ""
#: ../objectstorage-monitoring.rst:93
msgid "Statsdlog"
msgstr ""
#: ../objectstorage-monitoring.rst:95
msgid ""
"Florian's `Statsdlog <https://github.com/pandemicsyn/statsdlog>`__ project "
"increments StatsD counters based on logged events. Like Swift-Informant, it "
"is also non-intrusive, but statsdlog can track events from all Object "
"Storage daemons, not just proxy-server. The daemon listens to a UDP stream "
"of syslog messages and StatsD counters are incremented when a log line "
"matches a regular expression. Meter names are mapped to regex match patterns "
"in a JSON file, allowing flexible configuration of what meters are extracted "
"from the log stream."
msgstr ""
#: ../objectstorage-monitoring.rst:104
msgid ""
"Currently, only the first matching regex triggers a StatsD counter "
"increment, and the counter is always incremented by one. There is no way to "
"increment a counter by more than one or send timing data to StatsD based on "
"the log line content. The tool could be extended to handle more meters for "
"each line and data extraction, including timing data. But a coupling would "
"still exist between the log textual format and the log parsing regexes, "
"which would themselves be more complex to support multiple matches for each "
"line and data extraction. Also, log processing introduces a delay between "
"the triggering event and sending the data to StatsD. It would be preferable "
"to increment error counters where they occur and send timing data as soon as "
"it is known to avoid coupling between a log string and a parsing regex and "
"prevent a time delay between events and sending data to StatsD."
msgstr ""
#: ../objectstorage-monitoring.rst:118
msgid ""
"The next section describes another method for gathering Object Storage "
"operational meters."
msgstr ""
#: ../objectstorage-monitoring.rst:122
msgid "Swift StatsD logging"
msgstr ""
#: ../objectstorage-monitoring.rst:124
msgid ""
"StatsD (see http://codeascraft.etsy.com/2011/02/15/measure-anything-measure-"
"everything/) was designed for application code to be deeply instrumented; "
"meters are sent in real-time by the code that just noticed or did something. "
"The overhead of sending a meter is extremely low: a ``sendto`` of one UDP "
"packet. If that overhead is still too high, the StatsD client library can "
"send only a random portion of samples and StatsD approximates the actual "
"number when flushing meters upstream."
msgstr ""
#: ../objectstorage-monitoring.rst:133
msgid ""
"To avoid the problems inherent with middleware-based monitoring and after-"
"the-fact log processing, the sending of StatsD meters is integrated into "
"Object Storage itself. The submitted change set (see https://review."
"openstack.org/#change,6058) currently reports 124 meters across 15 Object "
"Storage daemons and the tempauth middleware. Details of the meters tracked "
"are in the `Administrator's Guide <http://docs.openstack.org/developer/swift/"
"admin_guide.html>`__."
msgstr ""
#: ../objectstorage-monitoring.rst:141
msgid ""
"The sending of meters is integrated with the logging framework. To enable, "
"configure ``log_statsd_host`` in the relevant config file. You can also "
"specify the port and a default sample rate. The specified default sample "
"rate is used unless a specific call to a statsd logging method (see the list "
"below) overrides it. Currently, no logging calls override the sample rate, "
"but it is conceivable that some meters may require accuracy (sample\\_rate "
"== 1) while others may not."
msgstr ""
#: ../objectstorage-monitoring.rst:157
msgid ""
"Then the LogAdapter object returned by ``get_logger()``, usually stored in "
"``self.logger``, has these new methods:"
msgstr ""
#: ../objectstorage-monitoring.rst:160
msgid ""
"``set_statsd_prefix(self, prefix)`` Sets the client library stat prefix "
"value which gets prefixed to every meter. The default prefix is the \"name\" "
"of the logger such as \"object-server\", \"container-auditor\", and so on. "
"This is currently used to turn \"proxy-server\" into one of \"proxy-server."
"Account\", \"proxy-server.Container\", or \"proxy-server.Object\" as soon as "
"the Controller object is determined and instantiated for the request."
msgstr ""
#: ../objectstorage-monitoring.rst:168
msgid ""
"``update_stats(self, metric, amount, sample_rate=1)`` Increments the "
"supplied meter by the given amount. This is used when you need to add or "
"subtract more that one from a counter, like incrementing \"suffix.hashes\" "
"by the number of computed hashes in the object replicator."
msgstr ""
#: ../objectstorage-monitoring.rst:174
msgid ""
"``increment(self, metric, sample_rate=1)`` Increments the given counter "
"meter by one."
msgstr ""
#: ../objectstorage-monitoring.rst:177
msgid ""
"``decrement(self, metric, sample_rate=1)`` Lowers the given counter meter by "
"one."
msgstr ""
#: ../objectstorage-monitoring.rst:180
msgid ""
"``timing(self, metric, timing_ms, sample_rate=1)`` Record that the given "
"meter took the supplied number of milliseconds."
msgstr ""
#: ../objectstorage-monitoring.rst:183
msgid ""
"``timing_since(self, metric, orig_time, sample_rate=1)`` Convenience method "
"to record a timing meter whose value is \"now\" minus an existing timestamp."
msgstr ""
#: ../objectstorage-monitoring.rst:187
msgid ""
"Note that these logging methods may safely be called anywhere you have a "
"logger object. If StatsD logging has not been configured, the methods are no-"
"ops. This avoids messy conditional logic each place a meter is recorded. "
"These example usages show the new logging methods:"
msgstr ""
#: ../objectstorage-monitoring.rst:236
msgid ""
"The development team of StatsD wanted to use the `pystatsd <https://github."
"com/sivy/py-statsd>`__ client library (not to be confused with a `similar-"
"looking project <https://github.com/sivy/py-statsd>`__ also hosted on "
"GitHub), but the released version on PyPI was missing two desired features "
"the latest version in GitHub had: the ability to configure a meters prefix "
"in the client object and a convenience method for sending timing data "
"between \"now\" and a \"start\" timestamp you already have. So they just "
"implemented a simple StatsD client library from scratch with the same "
"interface. This has the nice fringe benefit of not introducing another "
"external library dependency into Object Storage."
msgstr ""
#: ../objectstorage-troubleshoot.rst:0
msgid ""
"**Description of configuration options for [drive-audit] in drive-audit."
"conf**"
msgstr ""
#: ../objectstorage-troubleshoot.rst:3
msgid "Troubleshoot Object Storage"
msgstr ""
#: ../objectstorage-troubleshoot.rst:5
msgid ""
"For Object Storage, everything is logged in :file:`/var/log/syslog` (or :"
"file:`messages` on some distros). Several settings enable further "
"customization of logging, such as ``log_name``, ``log_facility``, and "
"``log_level``, within the object server configuration files."
msgstr ""
#: ../objectstorage-troubleshoot.rst:11
msgid "Drive failure"
msgstr ""
#: ../objectstorage-troubleshoot.rst:13
msgid ""
"In the event that a drive has failed, the first step is to make sure the "
"drive is unmounted. This will make it easier for Object Storage to work "
"around the failure until it has been resolved. If the drive is going to be "
"replaced immediately, then it is just best to replace the drive, format it, "
"remount it, and let replication fill it up."
msgstr ""
#: ../objectstorage-troubleshoot.rst:19
msgid ""
"If you cannot replace the drive immediately, then it is best to leave it "
"unmounted, and remove the drive from the ring. This will allow all the "
"replicas that were on that drive to be replicated elsewhere until the drive "
"is replaced. Once the drive is replaced, it can be re-added to the ring."
msgstr ""
#: ../objectstorage-troubleshoot.rst:25
msgid ""
"You can look at error messages in :file:`/var/log/kern.log` for hints of "
"drive failure."
msgstr ""
#: ../objectstorage-troubleshoot.rst:29
msgid "Server failure"
msgstr ""
#: ../objectstorage-troubleshoot.rst:31
msgid ""
"If a server is having hardware issues, it is a good idea to make sure the "
"Object Storage services are not running. This will allow Object Storage to "
"work around the failure while you troubleshoot."
msgstr ""
#: ../objectstorage-troubleshoot.rst:35
msgid ""
"If the server just needs a reboot, or a small amount of work that should "
"only last a couple of hours, then it is probably best to let Object Storage "
"work around the failure and get the machine fixed and back online. When the "
"machine comes back online, replication will make sure that anything that is "
"missing during the downtime will get updated."
msgstr ""
#: ../objectstorage-troubleshoot.rst:41
msgid ""
"If the server has more serious issues, then it is probably best to remove "
"all of the server's devices from the ring. Once the server has been repaired "
"and is back online, the server's devices can be added back into the ring. It "
"is important that the devices are reformatted before putting them back into "
"the ring as it is likely to be responsible for a different set of partitions "
"than before."
msgstr ""
#: ../objectstorage-troubleshoot.rst:49
msgid "Detect failed drives"
msgstr ""
#: ../objectstorage-troubleshoot.rst:51
msgid ""
"It has been our experience that when a drive is about to fail, error "
"messages appear in :file:`/var/log/kern.log`. There is a script called "
"``swift-drive-audit`` that can be run via cron to watch for bad drives. If "
"errors are detected, it will unmount the bad drive, so that Object Storage "
"can work around it. The script takes a configuration file with the following "
"settings:"
msgstr ""
#: ../objectstorage-troubleshoot.rst:63
msgid "``device_dir = /srv/node``"
msgstr ""
#: ../objectstorage-troubleshoot.rst:64
msgid "Directory devices are mounted under"
msgstr ""
#: ../objectstorage-troubleshoot.rst:65
msgid "``error_limit = 1``"
msgstr ""
#: ../objectstorage-troubleshoot.rst:66
msgid "Number of errors to find before a device is unmounted"
msgstr ""
#: ../objectstorage-troubleshoot.rst:67
msgid "``log_address = /dev/log``"
msgstr ""
#: ../objectstorage-troubleshoot.rst:68
msgid "Location where syslog sends the logs to"
msgstr ""
#: ../objectstorage-troubleshoot.rst:69
msgid "``log_facility = LOG_LOCAL0``"
msgstr ""
#: ../objectstorage-troubleshoot.rst:70
msgid "Syslog log facility"
msgstr ""
#: ../objectstorage-troubleshoot.rst:71
msgid "``log_file_pattern = /var/log/kern.*[!.][!g][!z]``"
msgstr ""
#: ../objectstorage-troubleshoot.rst:72
msgid ""
"Location of the log file with globbing pattern to check against device "
"errors locate device blocks with errors in the log file"
msgstr ""
#: ../objectstorage-troubleshoot.rst:74
msgid "``log_level = INFO``"
msgstr ""
#: ../objectstorage-troubleshoot.rst:75
msgid "Logging level"
msgstr ""
#: ../objectstorage-troubleshoot.rst:76
msgid "``log_max_line_length = 0``"
msgstr ""
#: ../objectstorage-troubleshoot.rst:77
msgid ""
"Caps the length of log lines to the value given; no limit if set to 0, the "
"default."
msgstr ""
#: ../objectstorage-troubleshoot.rst:79
msgid "``log_to_console = False``"
msgstr ""
#: ../objectstorage-troubleshoot.rst:80 ../objectstorage-troubleshoot.rst:86
#: ../objectstorage-troubleshoot.rst:88
msgid "No help text available for this option."
msgstr ""
#: ../objectstorage-troubleshoot.rst:81
msgid "``minutes = 60``"
msgstr ""
#: ../objectstorage-troubleshoot.rst:82
msgid "Number of minutes to look back in :file:`/var/log/kern.log`"
msgstr ""
#: ../objectstorage-troubleshoot.rst:83
msgid "``recon_cache_path = /var/cache/swift``"
msgstr ""
#: ../objectstorage-troubleshoot.rst:84
msgid "Directory where stats for a few items will be stored"
msgstr ""
#: ../objectstorage-troubleshoot.rst:85
msgid "``regex_pattern_1 = \\berror\\b.*\\b(dm-[0-9]{1,2}\\d?)\\b``"
msgstr ""
#: ../objectstorage-troubleshoot.rst:87
msgid "``unmount_failed_device = True``"
msgstr ""
#: ../objectstorage-troubleshoot.rst:92
msgid ""
"This script has only been tested on Ubuntu 10.04; use with caution on other "
"operating systems in production."
msgstr ""
#: ../objectstorage-troubleshoot.rst:96
msgid "Emergency recovery of ring builder files"
msgstr ""
#: ../objectstorage-troubleshoot.rst:98
msgid ""
"You should always keep a backup of swift ring builder files. However, if an "
"emergency occurs, this procedure may assist in returning your cluster to an "
"operational state."
msgstr ""
#: ../objectstorage-troubleshoot.rst:102
msgid ""
"Using existing swift tools, there is no way to recover a builder file from "
"a :file:`ring.gz` file. However, if you have a knowledge of Python, it is "
"possible to construct a builder file that is pretty close to the one you "
"have lost."
msgstr ""
#: ../objectstorage-troubleshoot.rst:109
msgid ""
"This procedure is a last-resort for emergency circumstances. It requires "
"knowledge of the swift python code and may not succeed."
msgstr ""
#: ../objectstorage-troubleshoot.rst:112
msgid "Load the ring and a new ringbuilder object in a Python REPL:"
msgstr ""
#: ../objectstorage-troubleshoot.rst:119
msgid "Start copying the data we have in the ring into the builder:"
msgstr ""
#: ../objectstorage-troubleshoot.rst:142
msgid ""
"For ``min_part_hours`` you either have to remember what the value you used "
"was, or just make up a new one:"
msgstr ""
#: ../objectstorage-troubleshoot.rst:149
msgid ""
"Validate the builder. If this raises an exception, check your previous code:"
msgstr ""
#: ../objectstorage-troubleshoot.rst:156
msgid ""
"After it validates, save the builder and create a new ``account.builder``:"
msgstr ""
#: ../objectstorage-troubleshoot.rst:164
msgid ""
"You should now have a file called :file:`account.builder` in the current "
"working directory. Run :command:`swift-ring-builder account.builder "
"write_ring` and compare the new :file:`account.ring.gz` to the :file:"
"`account.ring.gz` that you started from. They probably are not byte-for-byte "
"identical, but if you load them in a REPL and their ``_replica2part2dev_id`` "
"and ``devs`` attributes are the same (or nearly so), then you are in good "
"shape."
msgstr ""
#: ../objectstorage-troubleshoot.rst:172
msgid ""
"Repeat the procedure for :file:`container.ring.gz` and :file:`object.ring."
"gz`, and you might get usable builder files."
msgstr ""
#: ../objectstorage_EC.rst:3
msgid "Erasure coding"
msgstr ""
#: ../objectstorage_EC.rst:5
msgid ""
"Erasure coding is a set of algorithms that allows the reconstruction of "
"missing data from a set of original data. In theory, erasure coding uses "
"less capacity with similar durability characteristics as replicas. From an "
"application perspective, erasure coding support is transparent. Object "
"Storage (swift) implements erasure coding as a Storage Policy. See `Storage "
"Policies <http://docs.openstack.org/developer/swift/overview_policies."
"html>`_ for more details."
msgstr ""
#: ../objectstorage_EC.rst:14
msgid ""
"There is no external API related to erasure coding. Create a container using "
"a Storage Policy; the interaction with the cluster is the same as any other "
"durability policy. Because support implements as a Storage Policy, you can "
"isolate all storage devices that associate with your cluster's erasure "
"coding capability. It is entirely possible to share devices between storage "
"policies, but for erasure coding it may make more sense to use not only "
"separate devices but possibly even entire nodes dedicated for erasure coding."
msgstr ""
#: ../objectstorage_EC.rst:24
msgid ""
"The erasure code support in Object Storage is considered beta in Kilo. Most "
"major functionality is included, but it has not been tested or validated at "
"large scale. This feature relies on `ssync` for durability. We recommend "
"deployers do extensive testing and not deploy production data using an "
"erasure code storage policy. If any bugs are found during testing, please "
"report them to https://bugs.launchpad.net/swift"
msgstr ""
#: ../objectstorage_account_reaper.rst:3
msgid "Account reaper"
msgstr ""
#: ../objectstorage_account_reaper.rst:5
msgid ""
"In the background, the account reaper removes data from the deleted accounts."
msgstr ""
#: ../objectstorage_account_reaper.rst:8
msgid ""
"A reseller marks an account for deletion by issuing a ``DELETE`` request on "
"the account's storage URL. This action sets the ``status`` column of the "
"account\\_stat table in the account database and replicas to ``DELETED``, "
"marking the account's data for deletion."
msgstr ""
#: ../objectstorage_account_reaper.rst:13
msgid ""
"Typically, a specific retention time or undelete are not provided. However, "
"you can set a ``delay_reaping`` value in the ``[account-reaper]`` section of "
"the :file:`account-server.conf` file to delay the actual deletion of data. "
"At this time, to undelete you have to update the account database replicas "
"directly, set the status column to an empty string and update the put"
"\\_timestamp to be greater than the delete\\_timestamp."
msgstr ""
#: ../objectstorage_account_reaper.rst:23
msgid ""
"It is on the development to-do list to write a utility that performs this "
"task, preferably through a REST call."
msgstr ""
#: ../objectstorage_account_reaper.rst:26
msgid ""
"The account reaper runs on each account server and scans the server "
"occasionally for account databases marked for deletion. It only fires up on "
"the accounts for which the server is the primary node, so that multiple "
"account servers aren't trying to do it simultaneously. Using multiple "
"servers to delete one account might improve the deletion speed but requires "
"coordination to avoid duplication. Speed really is not a big concern with "
"data deletion, and large accounts aren't deleted often."
msgstr ""
#: ../objectstorage_account_reaper.rst:34
msgid ""
"Deleting an account is simple. For each account container, all objects are "
"deleted and then the container is deleted. Deletion requests that fail will "
"not stop the overall process but will cause the overall process to fail "
"eventually (for example, if an object delete times out, you will not be able "
"to delete the container or the account). The account reaper keeps trying to "
"delete an account until it is empty, at which point the database reclaim "
"process within the db\\_replicator will remove the database files."
msgstr ""
#: ../objectstorage_account_reaper.rst:43
msgid ""
"A persistent error state may prevent the deletion of an object or container. "
"If this happens, you will see a message in the log, for example::"
msgstr ""
#: ../objectstorage_account_reaper.rst:48
msgid ""
"You can control when this is logged with the ``reap_warn_after`` value in "
"the ``[account-reaper]`` section of the :file:`account-server.conf` file. "
"The default value is 30 days."
msgstr ""
#: ../objectstorage_arch.rst:3
msgid "Cluster architecture"
msgstr ""
#: ../objectstorage_arch.rst:6
msgid "Access tier"
msgstr ""
#: ../objectstorage_arch.rst:7
msgid ""
"Large-scale deployments segment off an access tier, which is considered the "
"Object Storage system's central hub. The access tier fields the incoming API "
"requests from clients and moves data in and out of the system. This tier "
"consists of front-end load balancers, ssl-terminators, and authentication "
"services. It runs the (distributed) brain of the Object Storage system: the "
"proxy server processes."
msgstr ""
#: ../objectstorage_arch.rst:15
msgid ""
"If you want to use OpenStack Identity API v3 for authentication, you have "
"the following options available in :file:`/etc/swift/dispersion.conf`: "
"``auth_version``, ``user_domain_name``, ``project_domain_name``, and "
"``project_name``."
msgstr ""
#: ../objectstorage_arch.rst:20
msgid "**Object Storage architecture**"
msgstr ""
#: ../objectstorage_arch.rst:28
msgid ""
"Because access servers are collocated in their own tier, you can scale out "
"read/write access regardless of the storage capacity. For example, if a "
"cluster is on the public Internet, requires SSL termination, and has a high "
"demand for data access, you can provision many access servers. However, if "
"the cluster is on a private network and used primarily for archival "
"purposes, you need fewer access servers."
msgstr ""
#: ../objectstorage_arch.rst:35
msgid ""
"Since this is an HTTP addressable storage service, you may incorporate a "
"load balancer into the access tier."
msgstr ""
#: ../objectstorage_arch.rst:38
msgid ""
"Typically, the tier consists of a collection of 1U servers. These machines "
"use a moderate amount of RAM and are network I/O intensive. Since these "
"systems field each incoming API request, you should provision them with two "
"high-throughput (10GbE) interfaces - one for the incoming \"front-end\" "
"requests and the other for the \"back-end\" access to the object storage "
"nodes to put and fetch data."
msgstr ""
#: ../objectstorage_arch.rst:46 ../objectstorage_arch.rst:78
msgid "Factors to consider"
msgstr ""
#: ../objectstorage_arch.rst:47
msgid ""
"For most publicly facing deployments as well as private deployments "
"available across a wide-reaching corporate network, you use SSL to encrypt "
"traffic to the client. SSL adds significant processing load to establish "
"sessions between clients, which is why you have to provision more capacity "
"in the access layer. SSL may not be required for private deployments on "
"trusted networks."
msgstr ""
#: ../objectstorage_arch.rst:55
msgid "Storage nodes"
msgstr ""
#: ../objectstorage_arch.rst:56
msgid ""
"In most configurations, each of the five zones should have an equal amount "
"of storage capacity. Storage nodes use a reasonable amount of memory and "
"CPU. Metadata needs to be readily available to return objects quickly. The "
"object stores run services not only to field incoming requests from the "
"access tier, but to also run replicators, auditors, and reapers. You can "
"provision object stores provisioned with single gigabit or 10 gigabit "
"network interface depending on the expected workload and desired performance."
msgstr ""
#: ../objectstorage_arch.rst:65
msgid "**Object Storage (swift)**"
msgstr ""
#: ../objectstorage_arch.rst:73
msgid ""
"Currently, a 2 TB or 3 TB SATA disk delivers good performance for the price. "
"You can use desktop-grade drives if you have responsive remote hands in the "
"datacenter and enterprise-grade drives if you don't."
msgstr ""
#: ../objectstorage_arch.rst:79
msgid ""
"You should keep in mind the desired I/O performance for single-threaded "
"requests. This system does not use RAID, so a single disk handles each "
"request for an object. Disk performance impacts single-threaded response "
"rates."
msgstr ""
#: ../objectstorage_arch.rst:84
msgid ""
"To achieve apparent higher throughput, the object storage system is designed "
"to handle concurrent uploads/downloads. The network I/O capacity (1GbE, "
"bonded 1GbE pair, or 10GbE) should match your desired concurrent throughput "
"needs for reads and writes."
msgstr ""
#: ../objectstorage_auditors.rst:3
msgid "Object Auditor"
msgstr ""
#: ../objectstorage_auditors.rst:5
msgid ""
"On system failures, the XFS file system can sometimes truncate files it is "
"trying to write and produce zero-byte files. The object-auditor will catch "
"these problems but in the case of a system crash it is advisable to run an "
"extra, less rate limited sweep, to check for these specific files. You can "
"run this command as follows::"
msgstr ""
#: ../objectstorage_auditors.rst:14
msgid ""
"\"-z\" means to only check for zero-byte files at 1000 files per second."
msgstr ""
#: ../objectstorage_auditors.rst:16
msgid ""
"It is useful to run the object auditor on a specific device or set of "
"devices. You can run the object-auditor once as follows::"
msgstr ""
#: ../objectstorage_auditors.rst:23
msgid ""
"This will run the object auditor on only the sda and sdb devices. This "
"parameter accepts a comma-separated list of values."
msgstr ""
#: ../objectstorage_characteristics.rst:3
msgid "Object Storage characteristics"
msgstr ""
#: ../objectstorage_characteristics.rst:5
msgid "The key characteristics of Object Storage are that:"
msgstr ""
#: ../objectstorage_characteristics.rst:7
msgid "All objects stored in Object Storage have a URL."
msgstr ""
#: ../objectstorage_characteristics.rst:9
msgid ""
"All objects stored are replicated 3✕ in as-unique-as-possible zones, which "
"can be defined as a group of drives, a node, a rack, and so on."
msgstr ""
#: ../objectstorage_characteristics.rst:12
msgid "All objects have their own metadata."
msgstr ""
#: ../objectstorage_characteristics.rst:14
msgid ""
"Developers interact with the object storage system through a RESTful HTTP "
"API."
msgstr ""
#: ../objectstorage_characteristics.rst:17
msgid "Object data can be located anywhere in the cluster."
msgstr ""
#: ../objectstorage_characteristics.rst:19
msgid ""
"The cluster scales by adding additional nodes without sacrificing "
"performance, which allows a more cost-effective linear storage expansion "
"than fork-lift upgrades."
msgstr ""
#: ../objectstorage_characteristics.rst:23
msgid "Data does not have to be migrated to an entirely new storage system."
msgstr ""
#: ../objectstorage_characteristics.rst:25
msgid "New nodes can be added to the cluster without downtime."
msgstr ""
#: ../objectstorage_characteristics.rst:27
msgid "Failed nodes and disks can be swapped out without downtime."
msgstr ""
#: ../objectstorage_characteristics.rst:29
msgid ""
"It runs on industry-standard hardware, such as Dell, HP, and Supermicro."
msgstr ""
#: ../objectstorage_characteristics.rst:34
msgid "Object Storage (swift)"
msgstr ""
#: ../objectstorage_characteristics.rst:38
msgid ""
"Developers can either write directly to the Swift API or use one of the many "
"client libraries that exist for all of the popular programming languages, "
"such as Java, Python, Ruby, and C#. Amazon S3 and RackSpace Cloud Files "
"users should be very familiar with Object Storage. Users new to object "
"storage systems will have to adjust to a different approach and mindset than "
"those required for a traditional filesystem."
msgstr ""
#: ../objectstorage_components.rst:3
msgid "Components"
msgstr ""
#: ../objectstorage_components.rst:5
msgid ""
"The components that enable Object Storage to deliver high availability, high "
"durability, and high concurrency are:"
msgstr ""
#: ../objectstorage_components.rst:8
msgid "**Proxy servers.** Handle all of the incoming API requests."
msgstr ""
#: ../objectstorage_components.rst:10
msgid "**Rings.** Map logical names of data to locations on particular disks."
msgstr ""
#: ../objectstorage_components.rst:13
msgid ""
"**Zones.** Isolate data from other zones. A failure in one zone doesn't "
"impact the rest of the cluster because data is replicated across zones."
msgstr ""
#: ../objectstorage_components.rst:17
msgid ""
"**Accounts and containers.** Each account and container are individual "
"databases that are distributed across the cluster. An account database "
"contains the list of containers in that account. A container database "
"contains the list of objects in that container."
msgstr ""
#: ../objectstorage_components.rst:22
msgid "**Objects.** The data itself."
msgstr ""
#: ../objectstorage_components.rst:24
msgid ""
"**Partitions.** A partition stores objects, account databases, and container "
"databases and helps manage locations where data lives in the cluster."
msgstr ""
#: ../objectstorage_components.rst:32
msgid "**Object Storage building blocks**"
msgstr ""
#: ../objectstorage_components.rst:39
msgid "Proxy servers"
msgstr ""
#: ../objectstorage_components.rst:41
msgid ""
"Proxy servers are the public face of Object Storage and handle all of the "
"incoming API requests. Once a proxy server receives a request, it determines "
"the storage node based on the object's URL, for example, https://swift."
"example.com/v1/account/container/object. Proxy servers also coordinate "
"responses, handle failures, and coordinate timestamps."
msgstr ""
#: ../objectstorage_components.rst:47
msgid ""
"Proxy servers use a shared-nothing architecture and can be scaled as needed "
"based on projected workloads. A minimum of two proxy servers should be "
"deployed for redundancy. If one proxy server fails, the others take over."
msgstr ""
#: ../objectstorage_components.rst:52
msgid ""
"For more information concerning proxy server configuration, please see the "
"`Configuration Reference <http://docs.openstack.org/trunk/config-reference/"
"content/proxy-server-configuration.html>`__."
msgstr ""
#: ../objectstorage_components.rst:57
msgid "Rings"
msgstr ""
#: ../objectstorage_components.rst:59
msgid ""
"A ring represents a mapping between the names of entities stored on disk and "
"their physical locations. There are separate rings for accounts, containers, "
"and objects. When other components need to perform any operation on an "
"object, container, or account, they need to interact with the appropriate "
"ring to determine their location in the cluster."
msgstr ""
#: ../objectstorage_components.rst:65
msgid ""
"The ring maintains this mapping using zones, devices, partitions, and "
"replicas. Each partition in the ring is replicated, by default, three times "
"across the cluster, and partition locations are stored in the mapping "
"maintained by the ring. The ring is also responsible for determining which "
"devices are used for handoff in failure scenarios."
msgstr ""
#: ../objectstorage_components.rst:71
msgid ""
"Data can be isolated into zones in the ring. Each partition replica is "
"guaranteed to reside in a different zone. A zone could represent a drive, a "
"server, a cabinet, a switch, or even a data center."
msgstr ""
#: ../objectstorage_components.rst:75
msgid ""
"The partitions of the ring are equally divided among all of the devices in "
"the Object Storage installation. When partitions need to be moved around "
"(for example, if a device is added to the cluster), the ring ensures that a "
"minimum number of partitions are moved at a time, and only one replica of a "
"partition is moved at a time."
msgstr ""
#: ../objectstorage_components.rst:81
msgid ""
"You can use weights to balance the distribution of partitions on drives "
"across the cluster. This can be useful, for example, when differently sized "
"drives are used in a cluster."
msgstr ""
#: ../objectstorage_components.rst:85
msgid ""
"The ring is used by the proxy server and several background processes (like "
"replication)."
msgstr ""
#: ../objectstorage_components.rst:92
msgid "**The ring**"
msgstr ""
#: ../objectstorage_components.rst:98
msgid ""
"These rings are externally managed, in that the server processes themselves "
"do not modify the rings, they are instead given new rings modified by other "
"tools."
msgstr ""
#: ../objectstorage_components.rst:102
msgid ""
"The ring uses a configurable number of bits from an MD5 hash for a path as a "
"partition index that designates a device. The number of bits kept from the "
"hash is known as the partition power, and 2 to the partition power indicates "
"the partition count. Partitioning the full MD5 hash ring allows other parts "
"of the cluster to work in batches of items at once which ends up either more "
"efficient or at least less complex than working with each item separately or "
"the entire cluster all at once."
msgstr ""
#: ../objectstorage_components.rst:110
msgid ""
"Another configurable value is the replica count, which indicates how many of "
"the partition-device assignments make up a single ring. For a given "
"partition number, each replica's device will not be in the same zone as any "
"other replica's device. Zones can be used to group devices based on physical "
"locations, power separations, network separations, or any other attribute "
"that would improve the availability of multiple replicas at the same time."
msgstr ""
#: ../objectstorage_components.rst:119
msgid "Zones"
msgstr ""
#: ../objectstorage_components.rst:121
msgid ""
"Object Storage allows configuring zones in order to isolate failure "
"boundaries. Each data replica resides in a separate zone, if possible. At "
"the smallest level, a zone could be a single drive or a grouping of a few "
"drives. If there were five object storage servers, then each server would "
"represent its own zone. Larger deployments would have an entire rack (or "
"multiple racks) of object servers, each representing a zone. The goal of "
"zones is to allow the cluster to tolerate significant outages of storage "
"servers without losing all replicas of the data."
msgstr ""
#: ../objectstorage_components.rst:130
msgid ""
"As mentioned earlier, everything in Object Storage is stored, by default, "
"three times. Swift will place each replica \"as-uniquely-as-possible\" to "
"ensure both high availability and high durability. This means that when "
"choosing a replica location, Object Storage chooses a server in an unused "
"zone before an unused server in a zone that already has a replica of the "
"data."
msgstr ""
#: ../objectstorage_components.rst:141
msgid "**Zones**"
msgstr ""
#: ../objectstorage_components.rst:147
msgid ""
"When a disk fails, replica data is automatically distributed to the other "
"zones to ensure there are three copies of the data."
msgstr ""
#: ../objectstorage_components.rst:151
msgid "Accounts and containers"
msgstr ""
#: ../objectstorage_components.rst:153
msgid ""
"Each account and container is an individual SQLite database that is "
"distributed across the cluster. An account database contains the list of "
"containers in that account. A container database contains the list of "
"objects in that container."
msgstr ""
#: ../objectstorage_components.rst:162
msgid "**Accounts and containers**"
msgstr ""
#: ../objectstorage_components.rst:168
msgid ""
"To keep track of object data locations, each account in the system has a "
"database that references all of its containers, and each container database "
"references each object."
msgstr ""
#: ../objectstorage_components.rst:173
msgid "Partitions"
msgstr ""
#: ../objectstorage_components.rst:175
msgid ""
"A partition is a collection of stored data, including account databases, "
"container databases, and objects. Partitions are core to the replication "
"system."
msgstr ""
#: ../objectstorage_components.rst:179
msgid ""
"Think of a partition as a bin moving throughout a fulfillment center "
"warehouse. Individual orders get thrown into the bin. The system treats that "
"bin as a cohesive entity as it moves throughout the system. A bin is easier "
"to deal with than many little things. It makes for fewer moving parts "
"throughout the system."
msgstr ""
#: ../objectstorage_components.rst:185
msgid ""
"System replicators and object uploads/downloads operate on partitions. As "
"the system scales up, its behavior continues to be predictable because the "
"number of partitions is a fixed number."
msgstr ""
#: ../objectstorage_components.rst:189
msgid ""
"Implementing a partition is conceptually simple, a partition is just a "
"directory sitting on a disk with a corresponding hash table of what it "
"contains."
msgstr ""
#: ../objectstorage_components.rst:197
msgid "**Partitions**"
msgstr ""
#: ../objectstorage_components.rst:204
msgid "Replicators"
msgstr ""
#: ../objectstorage_components.rst:206
msgid ""
"In order to ensure that there are three copies of the data everywhere, "
"replicators continuously examine each partition. For each local partition, "
"the replicator compares it against the replicated copies in the other zones "
"to see if there are any differences."
msgstr ""
#: ../objectstorage_components.rst:211
msgid ""
"The replicator knows if replication needs to take place by examining hashes. "
"A hash file is created for each partition, which contains hashes of each "
"directory in the partition. Each of the three hash files is compared. For a "
"given partition, the hash files for each of the partition's copies are "
"compared. If the hashes are different, then it is time to replicate, and the "
"directory that needs to be replicated is copied over."
msgstr ""
#: ../objectstorage_components.rst:219
msgid ""
"This is where partitions come in handy. With fewer things in the system, "
"larger chunks of data are transferred around (rather than lots of little TCP "
"connections, which is inefficient) and there is a consistent number of "
"hashes to compare."
msgstr ""
#: ../objectstorage_components.rst:224
msgid ""
"The cluster eventually has a consistent behavior where the newest data has a "
"priority."
msgstr ""
#: ../objectstorage_components.rst:231
msgid "**Replication**"
msgstr ""
#: ../objectstorage_components.rst:237
msgid ""
"If a zone goes down, one of the nodes containing a replica notices and "
"proactively copies data to a handoff location."
msgstr ""
#: ../objectstorage_components.rst:241
msgid "Use cases"
msgstr ""
#: ../objectstorage_components.rst:243
msgid ""
"The following sections show use cases for object uploads and downloads and "
"introduce the components."
msgstr ""
#: ../objectstorage_components.rst:248
msgid "Upload"
msgstr ""
#: ../objectstorage_components.rst:250
msgid ""
"A client uses the REST API to make a HTTP request to PUT an object into an "
"existing container. The cluster receives the request. First, the system must "
"figure out where the data is going to go. To do this, the account name, "
"container name, and object name are all used to determine the partition "
"where this object should live."
msgstr ""
#: ../objectstorage_components.rst:256
msgid ""
"Then a lookup in the ring figures out which storage nodes contain the "
"partitions in question."
msgstr ""
#: ../objectstorage_components.rst:259
msgid ""
"The data is then sent to each storage node where it is placed in the "
"appropriate partition. At least two of the three writes must be successful "
"before the client is notified that the upload was successful."
msgstr ""
#: ../objectstorage_components.rst:263
msgid ""
"Next, the container database is updated asynchronously to reflect that there "
"is a new object in it."
msgstr ""
#: ../objectstorage_components.rst:270
msgid "**Object Storage in use**"
msgstr ""
#: ../objectstorage_components.rst:277
msgid "Download"
msgstr ""
#: ../objectstorage_components.rst:279
msgid ""
"A request comes in for an account/container/object. Using the same "
"consistent hashing, the partition name is generated. A lookup in the ring "
"reveals which storage nodes contain that partition. A request is made to one "
"of the storage nodes to fetch the object and, if that fails, requests are "
"made to the other nodes."
msgstr ""
#: ../objectstorage_features.rst:3
msgid "Features and benefits"
msgstr ""
#: ../objectstorage_features.rst:9
msgid "Features"
msgstr ""
#: ../objectstorage_features.rst:10
msgid "Benefits"
msgstr ""
#: ../objectstorage_features.rst:11
msgid "Leverages commodity hardware"
msgstr ""
#: ../objectstorage_features.rst:12
msgid "No lock-in, lower price/GB."
msgstr ""
#: ../objectstorage_features.rst:13
msgid "HDD/node failure agnostic"
msgstr ""
#: ../objectstorage_features.rst:14
msgid "Self-healing, reliable, data redundancy protects from failures."
msgstr ""
#: ../objectstorage_features.rst:15
msgid "Unlimited storage"
msgstr ""
#: ../objectstorage_features.rst:16
msgid ""
"Large and flat namespace, highly scalable read/write access, able to serve "
"content directly from storage system."
msgstr ""
#: ../objectstorage_features.rst:18
msgid "Multi-dimensional scalability"
msgstr ""
#: ../objectstorage_features.rst:19
msgid ""
"Scale-out architecture: Scale vertically and horizontally-distributed "
"storage. Backs up and archives large amounts of data with linear performance."
msgstr ""
#: ../objectstorage_features.rst:22
msgid "Account/container/object structure"
msgstr ""
#: ../objectstorage_features.rst:23
msgid ""
"No nesting, not a traditional file system: Optimized for scale, it scales to "
"multiple petabytes and billions of objects."
msgstr ""
#: ../objectstorage_features.rst:25
msgid "Built-in replication 3✕ + data redundancy (compared with 2✕ on RAID)"
msgstr ""
#: ../objectstorage_features.rst:27
msgid ""
"A configurable number of accounts, containers and object copies for high "
"availability."
msgstr ""
#: ../objectstorage_features.rst:29
msgid "Easily add capacity (unlike RAID resize)"
msgstr ""
#: ../objectstorage_features.rst:30
msgid "Elastic data scaling with ease ."
msgstr ""
#: ../objectstorage_features.rst:31
msgid "No central database"
msgstr ""
#: ../objectstorage_features.rst:32
msgid "Higher performance, no bottlenecks."
msgstr ""
#: ../objectstorage_features.rst:33
msgid "RAID not required"
msgstr ""
#: ../objectstorage_features.rst:34
msgid "Handle many small, random reads and writes efficiently."
msgstr ""
#: ../objectstorage_features.rst:35
msgid "Built-in management utilities"
msgstr ""
#: ../objectstorage_features.rst:36
msgid ""
"Account management: Create, add, verify, and delete users; Container "
"management: Upload, download, and verify; Monitoring: Capacity, host, "
"network, log trawling, and cluster health."
msgstr ""
#: ../objectstorage_features.rst:39
msgid "Drive auditing"
msgstr ""
#: ../objectstorage_features.rst:40
msgid "Detect drive failures preempting data corruption."
msgstr ""
#: ../objectstorage_features.rst:41
msgid "Expiring objects"
msgstr ""
#: ../objectstorage_features.rst:42
msgid ""
"Users can set an expiration time or a TTL on an object to control access."
msgstr ""
#: ../objectstorage_features.rst:44
msgid "Direct object access"
msgstr ""
#: ../objectstorage_features.rst:45
msgid "Enable direct browser access to content, such as for a control panel."
msgstr ""
#: ../objectstorage_features.rst:47
msgid "Realtime visibility into client requests"
msgstr ""
#: ../objectstorage_features.rst:48
msgid "Know what users are requesting."
msgstr ""
#: ../objectstorage_features.rst:49
msgid "Supports S3 API"
msgstr ""
#: ../objectstorage_features.rst:50
msgid "Utilize tools that were designed for the popular S3 API."
msgstr ""
#: ../objectstorage_features.rst:51
msgid "Restrict containers per account"
msgstr ""
#: ../objectstorage_features.rst:52
msgid "Limit access to control usage by user."
msgstr ""
#: ../objectstorage_features.rst:53
msgid "Support for NetApp, Nexenta, Solidfire"
msgstr ""
#: ../objectstorage_features.rst:54
msgid "Unified support for block volumes using a variety of storage systems."
msgstr ""
#: ../objectstorage_features.rst:56
msgid "Snapshot and backup API for block volumes."
msgstr ""
#: ../objectstorage_features.rst:57
msgid "Data protection and recovery for VM data."
msgstr ""
#: ../objectstorage_features.rst:58
msgid "Standalone volume API available"
msgstr ""
#: ../objectstorage_features.rst:59
msgid "Separate endpoint and API for integration with other compute systems."
msgstr ""
#: ../objectstorage_features.rst:61
msgid "Integration with Compute"
msgstr ""
#: ../objectstorage_features.rst:62
msgid ""
"Fully integrated with Compute for attaching block volumes and reporting on "
"usage."
msgstr ""
#: ../objectstorage_intro.rst:3
msgid "Introduction to Object Storage"
msgstr ""
#: ../objectstorage_intro.rst:5
msgid ""
"OpenStack Object Storage (code-named swift) is open source software for "
"creating redundant, scalable data storage using clusters of standardized "
"servers to store petabytes of accessible data. It is a long-term storage "
"system for large amounts of static data that can be retrieved, leveraged, "
"and updated. Object Storage uses a distributed architecture with no central "
"point of control, providing greater scalability, redundancy, and permanence. "
"Objects are written to multiple hardware devices, with the OpenStack "
"software responsible for ensuring data replication and integrity across the "
"cluster. Storage clusters scale horizontally by adding new nodes. Should a "
"node fail, OpenStack works to replicate its content from other active nodes. "
"Because OpenStack uses software logic to ensure data replication and "
"distribution across different devices, inexpensive commodity hard drives and "
"servers can be used in lieu of more expensive equipment."
msgstr ""
#: ../objectstorage_intro.rst:20
msgid ""
"Object Storage is ideal for cost effective, scale-out storage. It provides a "
"fully distributed, API-accessible storage platform that can be integrated "
"directly into applications or used for backup, archiving, and data retention."
msgstr ""
#: ../objectstorage_replication.rst:3
msgid "Replication"
msgstr ""
#: ../objectstorage_replication.rst:5
msgid ""
"Because each replica in Object Storage functions independently and clients "
"generally require only a simple majority of nodes to respond to consider an "
"operation successful, transient failures like network partitions can quickly "
"cause replicas to diverge. These differences are eventually reconciled by "
"asynchronous, peer-to-peer replicator processes. The replicator processes "
"traverse their local file systems and concurrently perform operations in a "
"manner that balances load across physical disks."
msgstr ""
#: ../objectstorage_replication.rst:14
msgid ""
"Replication uses a push model, with records and files generally only being "
"copied from local to remote replicas. This is important because data on the "
"node might not belong there (as in the case of hand offs and ring changes), "
"and a replicator cannot know which data it should pull in from elsewhere in "
"the cluster. Any node that contains data must ensure that data gets to where "
"it belongs. The ring handles replica placement."
msgstr ""
#: ../objectstorage_replication.rst:21
msgid ""
"To replicate deletions in addition to creations, every deleted record or "
"file in the system is marked by a tombstone. The replication process cleans "
"up tombstones after a time period known as the *consistency window*. This "
"window defines the duration of the replication and how long transient "
"failure can remove a node from the cluster. Tombstone cleanup must be tied "
"to replication to reach replica convergence."
msgstr ""
#: ../objectstorage_replication.rst:28
msgid ""
"If a replicator detects that a remote drive has failed, the replicator uses "
"the ``get_more_nodes`` interface for the ring to choose an alternate node "
"with which to synchronize. The replicator can maintain desired levels of "
"replication during disk failures, though some replicas might not be in an "
"immediately usable location."
msgstr ""
#: ../objectstorage_replication.rst:36
msgid ""
"The replicator does not maintain desired levels of replication when failures "
"such as entire node failures occur; most failures are transient."
msgstr ""
#: ../objectstorage_replication.rst:40
msgid "The main replication types are:"
msgstr ""
#: ../objectstorage_replication.rst:43 ../objectstorage_replication.rst:49
msgid "Database replication"
msgstr ""
#: ../objectstorage_replication.rst:43
msgid "Replicates containers and objects."
msgstr ""
#: ../objectstorage_replication.rst:46 ../objectstorage_replication.rst:75
msgid "Object replication"
msgstr ""
#: ../objectstorage_replication.rst:46
msgid "Replicates object data."
msgstr ""
#: ../objectstorage_replication.rst:50
msgid ""
"Database replication completes a low-cost hash comparison to determine "
"whether two replicas already match. Normally, this check can quickly verify "
"that most databases in the system are already synchronized. If the hashes "
"differ, the replicator synchronizes the databases by sharing records added "
"since the last synchronization point."
msgstr ""
#: ../objectstorage_replication.rst:56
msgid ""
"This synchronization point is a high water mark that notes the last record "
"at which two databases were known to be synchronized, and is stored in each "
"database as a tuple of the remote database ID and record ID. Database IDs "
"are unique across all replicas of the database, and record IDs are "
"monotonically increasing integers. After all new records are pushed to the "
"remote database, the entire synchronization table of the local database is "
"pushed, so the remote database can guarantee that it is synchronized with "
"everything with which the local database was previously synchronized."
msgstr ""
#: ../objectstorage_replication.rst:66
msgid ""
"If a replica is missing, the whole local database file is transmitted to the "
"peer by using rsync(1) and is assigned a new unique ID."
msgstr ""
#: ../objectstorage_replication.rst:69
msgid ""
"In practice, database replication can process hundreds of databases per "
"concurrency setting per second (up to the number of available CPUs or disks) "
"and is bound by the number of database transactions that must be performed."
msgstr ""
#: ../objectstorage_replication.rst:76
msgid ""
"The initial implementation of object replication performed an rsync to push "
"data from a local partition to all remote servers where it was expected to "
"reside. While this worked at small scale, replication times skyrocketed once "
"directory structures could no longer be held in RAM. This scheme was "
"modified to save a hash of the contents for each suffix directory to a per-"
"partition hashes file. The hash for a suffix directory is no longer valid "
"when the contents of that suffix directory is modified."
msgstr ""
#: ../objectstorage_replication.rst:85
msgid ""
"The object replication process reads in hash files and calculates any "
"invalidated hashes. Then, it transmits the hashes to each remote server that "
"should hold the partition, and only suffix directories with differing hashes "
"on the remote server are rsynced. After pushing files to the remote server, "
"the replication process notifies it to recalculate hashes for the rsynced "
"suffix directories."
msgstr ""
#: ../objectstorage_replication.rst:92
msgid ""
"The number of uncached directories that object replication must traverse, "
"usually as a result of invalidated suffix directory hashes, impedes "
"performance. To provide acceptable replication speeds, object replication is "
"designed to invalidate around 2 percent of the hash space on a normal node "
"each day."
msgstr ""
#: ../objectstorage_ringbuilder.rst:3
msgid "Ring-builder"
msgstr ""
#: ../objectstorage_ringbuilder.rst:5
msgid ""
"Use the swift-ring-builder utility to build and manage rings. This utility "
"assigns partitions to devices and writes an optimized Python structure to a "
"gzipped, serialized file on disk for transmission to the servers. The server "
"processes occasionally check the modification time of the file and reload in-"
"memory copies of the ring structure as needed. If you use a slightly older "
"version of the ring, one of the three replicas for a partition subset will "
"be incorrect because of the way the ring-builder manages changes to the "
"ring. You can work around this issue."
msgstr ""
#: ../objectstorage_ringbuilder.rst:15
msgid ""
"The ring-builder also keeps its own builder file with the ring information "
"and additional data required to build future rings. It is very important to "
"keep multiple backup copies of these builder files. One option is to copy "
"the builder files out to every server while copying the ring files "
"themselves. Another is to upload the builder files into the cluster itself. "
"If you lose the builder file, you have to create a new ring from scratch. "
"Nearly all partitions would be assigned to different devices and, therefore, "
"nearly all of the stored data would have to be replicated to new locations. "
"So, recovery from a builder file loss is possible, but data would be "
"unreachable for an extended time."
msgstr ""
#: ../objectstorage_ringbuilder.rst:27
msgid "Ring data structure"
msgstr ""
#: ../objectstorage_ringbuilder.rst:28
msgid ""
"The ring data structure consists of three top level fields: a list of "
"devices in the cluster, a list of lists of device ids indicating partition "
"to device assignments, and an integer indicating the number of bits to shift "
"an MD5 hash to calculate the partition for the hash."
msgstr ""
#: ../objectstorage_ringbuilder.rst:34
msgid "Partition assignment list"
msgstr ""
#: ../objectstorage_ringbuilder.rst:35
msgid ""
"This is a list of ``array('H')`` of devices ids. The outermost list contains "
"an ``array('H')`` for each replica. Each ``array('H')`` has a length equal "
"to the partition count for the ring. Each integer in the ``array('H')`` is "
"an index into the above list of devices. The partition list is known "
"internally to the Ring class as ``_replica2part2dev_id``."
msgstr ""
#: ../objectstorage_ringbuilder.rst:41
msgid ""
"So, to create a list of device dictionaries assigned to a partition, the "
"Python code would look like::"
msgstr ""
#: ../objectstorage_ringbuilder.rst:47
msgid ""
"That code is a little simplistic because it does not account for the removal "
"of duplicate devices. If a ring has more replicas than devices, a partition "
"will have more than one replica on a device."
msgstr ""
#: ../objectstorage_ringbuilder.rst:51
msgid ""
"``array('H')`` is used for memory conservation as there may be millions of "
"partitions."
msgstr ""
#: ../objectstorage_ringbuilder.rst:55
msgid "Overload"
msgstr ""
#: ../objectstorage_ringbuilder.rst:57
msgid ""
"The ring builder tries to keep replicas as far apart as possible while still "
"respecting device weights. When it can not do both, the overload factor "
"determines what happens. Each device takes an extra fraction of its desired "
"partitions to allow for replica dispersion; after that extra fraction is "
"exhausted, replicas are placed closer together than optimal."
msgstr ""
#: ../objectstorage_ringbuilder.rst:64
msgid ""
"The overload factor lets the operator trade off replica dispersion "
"(durability) against data dispersion (uniform disk usage)."
msgstr ""
#: ../objectstorage_ringbuilder.rst:67
msgid ""
"The default overload factor is 0, so device weights are strictly followed."
msgstr ""
#: ../objectstorage_ringbuilder.rst:70
msgid ""
"With an overload factor of 0.1, each device accepts 10% more partitions than "
"it otherwise would, but only if it needs to maintain partition dispersion."
msgstr ""
#: ../objectstorage_ringbuilder.rst:74
msgid ""
"For example, consider a 3-node cluster of machines with equal-size disks; "
"node A has 12 disks, node B has 12 disks, and node C has 11 disks. The ring "
"has an overload factor of 0.1 (10%)."
msgstr ""
#: ../objectstorage_ringbuilder.rst:78
msgid ""
"Without the overload, some partitions would end up with replicas only on "
"nodes A and B. However, with the overload, every device can accept up to 10% "
"more partitions for the sake of dispersion. The missing disk in C means "
"there is one disk's worth of partitions to spread across the remaining 11 "
"disks, which gives each disk in C an extra 9.09% load. Since this is less "
"than the 10% overload, there is one replica of each partition on each node."
msgstr ""
#: ../objectstorage_ringbuilder.rst:86
msgid ""
"However, this does mean that the disks in node C have more data than the "
"disks in nodes A and B. If 80% full is the warning threshold for the "
"cluster, node C's disks reach 80% full while A and B's disks are only 72.7% "
"full."
msgstr ""
#: ../objectstorage_ringbuilder.rst:93
msgid "Replica counts"
msgstr ""
#: ../objectstorage_ringbuilder.rst:94
msgid ""
"To support the gradual change in replica counts, a ring can have a real "
"number of replicas and is not restricted to an integer number of replicas."
msgstr ""
#: ../objectstorage_ringbuilder.rst:98
msgid ""
"A fractional replica count is for the whole ring and not for individual "
"partitions. It indicates the average number of replicas for each partition. "
"For example, a replica count of 3.2 means that 20 percent of partitions have "
"four replicas and 80 percent have three replicas."
msgstr ""
#: ../objectstorage_ringbuilder.rst:103
msgid "The replica count is adjustable."
msgstr ""
#: ../objectstorage_ringbuilder.rst:110
msgid ""
"You must rebalance the replica ring in globally distributed clusters. "
"Operators of these clusters generally want an equal number of replicas and "
"regions. Therefore, when an operator adds or removes a region, the operator "
"adds or removes a replica. Removing unneeded replicas saves on the cost of "
"disks."
msgstr ""
#: ../objectstorage_ringbuilder.rst:116
msgid ""
"You can gradually increase the replica count at a rate that does not "
"adversely affect cluster performance."
msgstr ""
#: ../objectstorage_ringbuilder.rst:129
msgid ""
"Changes take effect after the ring is rebalanced. Therefore, if you intend "
"to change from 3 replicas to 3.01 but you accidentally type 2.01, no data is "
"lost."
msgstr ""
#: ../objectstorage_ringbuilder.rst:133
msgid ""
"Additionally, the ``swift-ring-builder X.builder create`` command can now "
"take a decimal argument for the number of replicas."
msgstr ""
#: ../objectstorage_ringbuilder.rst:137
msgid "Partition shift value"
msgstr ""
#: ../objectstorage_ringbuilder.rst:138
msgid ""
"The partition shift value is known internally to the Ring class as "
"``_part_shift``. This value is used to shift an MD5 hash to calculate the "
"partition where the data for that hash should reside. Only the top four "
"bytes of the hash is used in this process. For example, to compute the "
"partition for the :file:`/account/container/object` path using Python::"
msgstr ""
#: ../objectstorage_ringbuilder.rst:148
msgid ""
"For a ring generated with part\\_power P, the partition shift value is ``32 "
"- P``."
msgstr ""
#: ../objectstorage_ringbuilder.rst:152
msgid "Build the ring"
msgstr ""
#: ../objectstorage_ringbuilder.rst:153
msgid "The ring builder process includes these high-level steps:"
msgstr ""
#: ../objectstorage_ringbuilder.rst:155
msgid ""
"The utility calculates the number of partitions to assign to each device "
"based on the weight of the device. For example, for a partition at the power "
"of 20, the ring has 1,048,576 partitions. One thousand devices of equal "
"weight each want 1,048.576 partitions. The devices are sorted by the number "
"of partitions they desire and kept in order throughout the initialization "
"process."
msgstr ""
#: ../objectstorage_ringbuilder.rst:164
msgid ""
"Each device is also assigned a random tiebreaker value that is used when two "
"devices desire the same number of partitions. This tiebreaker is not stored "
"on disk anywhere, and so two different rings created with the same "
"parameters will have different partition assignments. For repeatable "
"partition assignments, ``RingBuilder.rebalance()`` takes an optional seed "
"value that seeds the Python pseudo-random number generator."
msgstr ""
#: ../objectstorage_ringbuilder.rst:172
msgid ""
"The ring builder assigns each partition replica to the device that requires "
"most partitions at that point while keeping it as far away as possible from "
"other replicas. The ring builder prefers to assign a replica to a device in "
"a region that does not already have a replica. If no such region is "
"available, the ring builder searches for a device in a different zone, or on "
"a different server. If it does not find one, it looks for a device with no "
"replicas. Finally, if all options are exhausted, the ring builder assigns "
"the replica to the device that has the fewest replicas already assigned."
msgstr ""
#: ../objectstorage_ringbuilder.rst:184
msgid ""
"The ring builder assigns multiple replicas to one device only if the ring "
"has fewer devices than it has replicas."
msgstr ""
#: ../objectstorage_ringbuilder.rst:187
msgid ""
"When building a new ring from an old ring, the ring builder recalculates the "
"desired number of partitions that each device wants."
msgstr ""
#: ../objectstorage_ringbuilder.rst:190
msgid ""
"The ring builder unassigns partitions and gathers these partitions for "
"reassignment, as follows:"
msgstr ""
#: ../objectstorage_ringbuilder.rst:193
msgid ""
"The ring builder unassigns any assigned partitions from any removed devices "
"and adds these partitions to the gathered list."
msgstr ""
#: ../objectstorage_ringbuilder.rst:195
msgid ""
"The ring builder unassigns any partition replicas that can be spread out for "
"better durability and adds these partitions to the gathered list."
msgstr ""
#: ../objectstorage_ringbuilder.rst:198
msgid ""
"The ring builder unassigns random partitions from any devices that have more "
"partitions than they need and adds these partitions to the gathered list."
msgstr ""
#: ../objectstorage_ringbuilder.rst:202
msgid ""
"The ring builder reassigns the gathered partitions to devices by using a "
"similar method to the one described previously."
msgstr ""
#: ../objectstorage_ringbuilder.rst:205
msgid ""
"When the ring builder reassigns a replica to a partition, the ring builder "
"records the time of the reassignment. The ring builder uses this value when "
"it gathers partitions for reassignment so that no partition is moved twice "
"in a configurable amount of time. The RingBuilder class knows this "
"configurable amount of time as ``min_part_hours``. The ring builder ignores "
"this restriction for replicas of partitions on removed devices because "
"removal of a device happens on device failure only, and reassignment is the "
"only choice."
msgstr ""
#: ../objectstorage_ringbuilder.rst:214
msgid ""
"These steps do not always perfectly rebalance a ring due to the random "
"nature of gathering partitions for reassignment. To help reach a more "
"balanced ring, the rebalance process is repeated until near perfect (less "
"than 1 percent off) or when the balance does not improve by at least 1 "
"percent (indicating we probably cannot get perfect balance due to wildly "
"imbalanced zones or too many partitions recently moved)."
msgstr ""
#: ../objectstorage_tenant_specific_image_storage.rst:3
msgid "Configure tenant-specific image locations with Object Storage"
msgstr ""
#: ../objectstorage_tenant_specific_image_storage.rst:5
msgid ""
"For some deployers, it is not ideal to store all images in one place to "
"enable all tenants and users to access them. You can configure the Image "
"service to store image data in tenant-specific image locations. Then, only "
"the following tenants can use the Image service to access the created image:"
msgstr ""
#: ../objectstorage_tenant_specific_image_storage.rst:11
msgid "The tenant who owns the image"
msgstr ""
#: ../objectstorage_tenant_specific_image_storage.rst:12
msgid ""
"Tenants that are defined in ``swift_store_admin_tenants`` and that have "
"admin-level accounts"
msgstr ""
#: ../objectstorage_tenant_specific_image_storage.rst:15
msgid "**To configure tenant-specific image locations**"
msgstr ""
#: ../objectstorage_tenant_specific_image_storage.rst:17
msgid ""
"Configure swift as your ``default_store`` in the :file:`glance-api.conf` "
"file."
msgstr ""
#: ../objectstorage_tenant_specific_image_storage.rst:20
msgid "Set these configuration options in the :file:`glance-api.conf` file:"
msgstr ""
#: ../objectstorage_tenant_specific_image_storage.rst:23
msgid ""
"Set to ``True`` to enable tenant-specific storage locations. Default is "
"``False``."
msgstr ""
#: ../objectstorage_tenant_specific_image_storage.rst:24
msgid "swift_store_multi_tenant"
msgstr ""
#: ../objectstorage_tenant_specific_image_storage.rst:27
msgid ""
"Specify a list of tenant IDs that can grant read and write access to all "
"Object Storage containers that are created by the Image service."
msgstr ""
#: ../objectstorage_tenant_specific_image_storage.rst:28
msgid "swift_store_admin_tenants"
msgstr ""
#: ../objectstorage_tenant_specific_image_storage.rst:30
msgid ""
"With this configuration, images are stored in an Object Storage service "
"(swift) endpoint that is pulled from the service catalog for the "
"authenticated user."
msgstr ""
#: ../orchestration-auth-model.rst:5
msgid "Orchestration authorization model"
msgstr ""
#: ../orchestration-auth-model.rst:8
msgid ""
"The Orchestration authorization model defines the authorization process for "
"requests during deferred operations. A common example is an auto-scaling "
"group update. During the operation, the Orchestration service requests "
"resources of other components (such as servers from Compute or networks from "
"Networking) to extend or reduce the capacity of an auto-scaling group."
msgstr ""
#: ../orchestration-auth-model.rst:16
msgid "The Orchestration service provides the following authorization models:"
msgstr ""
#: ../orchestration-auth-model.rst:18 ../orchestration-auth-model.rst:23
msgid "Password authorization"
msgstr ""
#: ../orchestration-auth-model.rst:20 ../orchestration-auth-model.rst:52
msgid "OpenStack Identity trusts authorization"
msgstr ""
#: ../orchestration-auth-model.rst:25
msgid ""
"The Orchestration service supports password authorization. Password "
"authorization requires that a user pass a username/password to the service. "
"The Orchestration service stores the encrypted password in the database and "
"uses it for deferred operations."
msgstr ""
#: ../orchestration-auth-model.rst:31
msgid "Password authorization involves the following steps:"
msgstr ""
#: ../orchestration-auth-model.rst:33
msgid ""
"A user requests stack creation, by providing a token and username/password. "
"The Dashboard or python-heatclient requests the token on the user's behalf."
msgstr ""
#: ../orchestration-auth-model.rst:37
msgid ""
"If the stack contains any resources that require deferred operations, then "
"the orchestration engine fails its validation checks if the user did not "
"provide a valid username/password."
msgstr ""
#: ../orchestration-auth-model.rst:41
msgid ""
"The username/password are encrypted and stored in the Orchestration database."
msgstr ""
#: ../orchestration-auth-model.rst:44
msgid "The stack is created."
msgstr ""
#: ../orchestration-auth-model.rst:46
msgid ""
"Later, the Orchestration service retrieves the credentials and requests "
"another token on behalf of the user. The token is not limited in scope and "
"provides access to all the roles of the stack owner."
msgstr ""
#: ../orchestration-auth-model.rst:54
msgid ""
"A trust is an OpenStack Identity extension that enables delegation, and "
"optionally impersonation through the OpenStack Identity service. The key "
"terminology is *trustor* (the user delegating) and *trustee* (the user being "
"delegated to)."
msgstr ""
#: ../orchestration-auth-model.rst:59
msgid ""
"To create a trust, the *trustor* (in this case, the user creating the stack "
"in the Orchestration service) provides the OpenStack Identity service with "
"the following information:"
msgstr ""
#: ../orchestration-auth-model.rst:63
msgid ""
"The ID of the *trustee* (who you want to delegate to, in this case, the "
"Orchestration service user)."
msgstr ""
#: ../orchestration-auth-model.rst:66
msgid ""
"The roles to be delegated. The roles are configurable through the ``heat."
"conf`` file, but it must contain whatever roles are required to perform the "
"deferred operations on the user's behalf. For example, launching an "
"OpenStack Compute instance in response to an auto-scaling event."
msgstr ""
#: ../orchestration-auth-model.rst:72
msgid "Whether to enable impersonation."
msgstr ""
#: ../orchestration-auth-model.rst:74
msgid ""
"Then, the OpenStack Identity service provides a *trust id*, which is "
"consumed by *only* the trustee to obtain a *trust scoped token*. This token "
"is limited in scope, such that the trustee has limited access to those roles "
"delegated. In addition, the trustee has effective impersonation of the "
"trustor user if it was selected when creating the trust. For more "
"information, see the :ref:`Identity management <identity_management>` "
"section."
msgstr ""
#: ../orchestration-auth-model.rst:83
msgid "Trusts authorization involves the following steps:"
msgstr ""
#: ../orchestration-auth-model.rst:85
msgid ""
"A user creates a stack through an API request (only the token is required)."
msgstr ""
#: ../orchestration-auth-model.rst:88
msgid ""
"The Orchestration service uses the token to create a trust between the stack "
"owner (trustor) and the Orchestration service user (trustee). The service "
"delegates a special role (or roles) as defined in the "
"*trusts_delegated_roles* list in the Orchestration configuration file. By "
"default, the Orchestration service sets all the roles from trustor available "
"for trustee. Deployers might modify this list to reflect a local RBAC "
"policy. For example, to ensure that the heat process can access only those "
"services that are expected while impersonating a stack owner."
msgstr ""
#: ../orchestration-auth-model.rst:98
msgid ""
"Orchestration stores the encrypted *trust id* in the Orchestration database."
msgstr ""
#: ../orchestration-auth-model.rst:101
msgid ""
"When a deferred operation is required, the Orchestration service retrieves "
"the *trust id* and requests a trust scoped token which enables the service "
"user to impersonate the stack owner during the deferred operation. "
"Impersonation is helpful, for example, so the service user can launch "
"Compute instances on behalf of the stack owner in response to an auto-"
"scaling event."
msgstr ""
#: ../orchestration-auth-model.rst:109
msgid "Authorization model configuration"
msgstr ""
#: ../orchestration-auth-model.rst:111
msgid ""
"Initially, the password authorization model was the default authorization "
"model. Since the Kilo release, the Identity trusts authorization model is "
"enabled for the Orchestration service by default."
msgstr ""
#: ../orchestration-auth-model.rst:116
msgid ""
"To enable the password authorization model, change the following parameter "
"in the ``heat.conf`` file:"
msgstr ""
#: ../orchestration-auth-model.rst:123
msgid ""
"To enable the trusts authorization model, change the following parameter in "
"the ``heat.conf`` file:"
msgstr ""
#: ../orchestration-auth-model.rst:130
msgid ""
"To specify the trustor roles that it delegates to trustee during "
"authorization, specify the ``trusts_delegated_roles`` parameter in the "
"``heat.conf`` file. If ``trusts_delegated_roles`` is not defined, then all "
"the trustor roles are delegated to trustee."
msgstr ""
#: ../orchestration-auth-model.rst:137
msgid ""
"The trustor delegated roles must be pre-configured in the OpenStack Identity "
"service before using them in the Orchestration service."
msgstr ""
#: ../orchestration-introduction.rst:5
msgid ""
"The OpenStack Orchestration service, a tool for orchestrating clouds, "
"automatically configures and deploys resources in stacks. The deployments "
"can be simple, such as deploying WordPress on Ubuntu with an SQL back end. "
"They can also be complex, such as starting a group of servers that auto "
"scale by starting and stopping based on real-time CPU loading information "
"from the Telemetry service."
msgstr ""
#: ../orchestration-introduction.rst:12
msgid ""
"Orchestration stacks are defined with templates, which are non-procedural "
"documents that describe tasks in terms of resources, parameters, inputs, "
"constraints, and dependencies. When the Orchestration service was originally "
"introduced, it worked with AWS CloudFormation templates, which are in the "
"JSON format."
msgstr ""
#: ../orchestration-introduction.rst:18
msgid ""
"The Orchestration service also runs Heat Orchestration Template (HOT) "
"templates that are written in YAML. YAML is a terse notation that loosely "
"follows structural conventions (colons, returns, indentation) that are "
"similar to Python or Ruby. Therefore, it is easier to write, parse, grep, "
"generate with tools, and maintain source-code management systems."
msgstr ""
#: ../orchestration-introduction.rst:24
msgid ""
"Orchestration can be accessed through a CLI and RESTful queries. The "
"Orchestration service provides both an OpenStack-native REST API and a "
"CloudFormation-compatible Query API. The Orchestration service is also "
"integrated with the OpenStack dashboard to perform stack functions through a "
"web interface."
msgstr ""
#: ../orchestration-introduction.rst:30
msgid ""
"For more information about using the Orchestration service through the "
"command line, see the `OpenStack Command-Line Interface Reference`_."
msgstr ""
#: ../orchestration-stack-domain-users.rst:5
msgid "Stack domain users"
msgstr ""
#: ../orchestration-stack-domain-users.rst:7
msgid ""
"Stack domain users allow the Orchestration service to authorize and start "
"the following operations within booted virtual machines:"
msgstr ""
#: ../orchestration-stack-domain-users.rst:11
msgid ""
"Provide metadata to agents inside instances, which poll for changes and "
"apply the configuration that is expressed in the metadata to the instance."
msgstr ""
#: ../orchestration-stack-domain-users.rst:15
msgid ""
"Detect when an action is complete, typically software configuration on a "
"virtual machine after it is booted. Compute moves the VM state to \"Active\" "
"as soon as it creates it, not when the Orchestration service has fully "
"configured it."
msgstr ""
#: ../orchestration-stack-domain-users.rst:20
msgid ""
"Provide application level status or meters from inside the instance. For "
"example, allow auto-scaling actions to be performed in response to some "
"measure of performance or quality of service."
msgstr ""
#: ../orchestration-stack-domain-users.rst:24
msgid ""
"The Orchestration service provides APIs that enable all of these operations, "
"but all of those APIs require authentication. For example, credentials to "
"access the instance that the agent is running upon. The heat-cfntools agents "
"use signed requests, which require an ec2 key pair that is created through "
"Identity. Then, the key pair is used to sign requests to the Orchestration "
"CloudFormation and CloudWatch compatible APIs, which are authenticated "
"through signature validation. Signature validation uses the Identity "
"ec2tokens extension. Stack domain users encapsulate all stack-defined users "
"(users who are created as a result of data that is contained in an "
"Orchestration template) in a separate domain. The separate domain is created "
"specifically to contain data that is related to the Orchestration stacks "
"only. A user is created which is the *domain admin*, and Orchestration uses "
"that user to manage the lifecycle of the users in the stack *user domain*."
msgstr ""
#: ../orchestration-stack-domain-users.rst:41
msgid "Stack domain users configuration"
msgstr ""
#: ../orchestration-stack-domain-users.rst:43
msgid "To configure stack domain users, the following steps occur:"
msgstr ""
#: ../orchestration-stack-domain-users.rst:45
msgid ""
"A special OpenStack Identity service domain is created. For example, a "
"domain that is called ``heat`` and the ID is set with the "
"``stack_user_domain`` option in the :file:`heat.conf` file."
msgstr ""
#: ../orchestration-stack-domain-users.rst:48
msgid ""
"A user with sufficient permissions to create and delete projects and users "
"in the ``heat`` domain is created."
msgstr ""
#: ../orchestration-stack-domain-users.rst:50
msgid ""
"The username and password for the domain admin user is set in the :file:"
"`heat.conf` file (``stack_domain_admin`` and "
"``stack_domain_admin_password``). This user administers *stack domain users* "
"on behalf of stack owners, so they no longer need to be administrators "
"themselves. The risk of this escalation path is limited because the "
"``heat_domain_admin`` is only given administrative permission for the "
"``heat`` domain."
msgstr ""
#: ../orchestration-stack-domain-users.rst:58
msgid "To set up stack domain users, complete the following steps:"
msgstr ""
#: ../orchestration-stack-domain-users.rst:60
msgid "Create the domain:"
msgstr ""
#: ../orchestration-stack-domain-users.rst:62
msgid ""
"``$OS_TOKEN`` refers to a token. For example, the service admin token or "
"some other valid token for a user with sufficient roles to create users and "
"domains. ``$KS_ENDPOINT_V3`` refers to the v3 OpenStack Identity endpoint "
"(for example, ``http://keystone_address:5000/v3`` where *keystone_address* "
"is the IP address or resolvable name for the Identity service)."
msgstr ""
#: ../orchestration-stack-domain-users.rst:76
msgid ""
"The domain ID is returned by this command, and is referred to as ``"
"$HEAT_DOMAIN_ID`` below."
msgstr ""
#: ../orchestration-stack-domain-users.rst:79
msgid "Create the user::"
msgstr ""
#: ../orchestration-stack-domain-users.rst:86
msgid ""
"The user ID is returned by this command and is referred to as ``"
"$DOMAIN_ADMIN_ID`` below."
msgstr ""
#: ../orchestration-stack-domain-users.rst:89
msgid "Make the user a domain admin::"
msgstr ""
#: ../orchestration-stack-domain-users.rst:95
msgid ""
"Then you must add the domain ID, username and password from these steps to "
"the :file:`heat.conf` file:"
msgstr ""
#: ../orchestration-stack-domain-users.rst:105
msgid "Usage workflow"
msgstr ""
#: ../orchestration-stack-domain-users.rst:107
msgid "The following steps are run during stack creation:"
msgstr ""
#: ../orchestration-stack-domain-users.rst:109
msgid ""
"Orchestration creates a new *stack domain project* in the ``heat`` domain if "
"the stack contains any resources that require creation of a *stack domain "
"user*."
msgstr ""
#: ../orchestration-stack-domain-users.rst:113
msgid ""
"For any resources that require a user, the Orchestration service creates the "
"user in the *stack domain project*. The *stack domain project* is associated "
"with the Orchestration stack in the Orchestration database, but is separate "
"and unrelated (from an authentication perspective) to the stack owners "
"project. The users who are created in the stack domain are still assigned "
"the ``heat_stack_user`` role, so the API surface they can access is limited "
"through the :file:`policy.json` file. For more information, see :ref:"
"`OpenStack Identity documentation <identity_management>`."
msgstr ""
#: ../orchestration-stack-domain-users.rst:124
msgid ""
"When API requests are processed, the Orchestration service performs an "
"internal lookup and allows stack details for a given stack to be retrieved. "
"Details are retrieved from the database for both the stack owner's project "
"(the default API path to the stack) and the stack domain project, subject to "
"the :file:`policy.json` restrictions."
msgstr ""
#: ../orchestration-stack-domain-users.rst:131
msgid ""
"This means there are now two paths that can result in the same data being "
"retrieved through the Orchestration API. The following example is for "
"resource-metadata::"
msgstr ""
#: ../orchestration-stack-domain-users.rst:138
msgid "or::"
msgstr ""
#: ../orchestration-stack-domain-users.rst:143
msgid ""
"The stack owner uses the former (via ``heat resource-metadata {stack_name} "
"{resource_name}``), and any agents in the instance use the latter."
msgstr ""
#: ../orchestration.rst:5
msgid "Orchestration"
msgstr ""
#: ../orchestration.rst:7
msgid ""
"Orchestration is an orchestration engine that provides the possibility to "
"launch multiple composite cloud applications based on templates in the form "
"of text files that can be treated like code. A native Heat Orchestration "
"Template (HOT) format is evolving, but it also endeavors to provide "
"compatibility with the AWS CloudFormation template format, so that many "
"existing CloudFormation templates can be launched on OpenStack."
msgstr ""
#: ../shared_file_systems.rst:5
msgid "Shared File Systems"
msgstr ""
#: ../shared_file_systems.rst:7
msgid ""
"Shared File Systems service provides a set of services for management of "
"shared file systems in a multi-tenant cloud environment, similar to how "
"OpenStack provides for block-based storage management through the OpenStack "
"Block Storage service project. With the Shared File Systems service, you can "
"create a remote file system, mount the file system on your instances, and "
"then read and write data from your instances to and from your file system."
msgstr ""
#: ../shared_file_systems.rst:14
msgid ""
"The Shared File Systems service serves same purpose as the Amazon Elastic "
"File System (EFS) does."
msgstr ""
#: ../shared_file_systems_cgroups.rst:7
msgid ""
"Consistency groups enable you to create snapshots from multiple file system "
"shares at the same point in time. For example, a database might place its "
"tables, logs, and configuration on separate shares. To restore this database "
"from a previous point in time, it makes sense to restore the logs, tables, "
"and configuration together from the exact same point in time."
msgstr ""
#: ../shared_file_systems_cgroups.rst:13
msgid ""
"The Shared File System service allows you to create a snapshot of the "
"consistency group and restore all shares that were associated with a "
"consistency group."
msgstr ""
#: ../shared_file_systems_cgroups.rst:18
msgid ""
"The **consistency groups and snapshots** are realized as an **experimental** "
"Shared File Systems API in Liberty release. Contributors can change or "
"remove the experimental part of the Shared File Systems API in further "
"releases without maintaining backward compatibility. The experimental API "
"has ``\"X-OpenStack-Manila-API-Experimental: true\"`` header in their HTTP "
"requests."
msgstr ""
#: ../shared_file_systems_cgroups.rst:29
msgid ""
"Before using consistency groups, make sure the Shared File System driver "
"that you are running has consistency group support. You can check it in the "
"``manila-scheduler`` service reports. The ``consistency_group_support`` can "
"have such values:"
msgstr ""
#: ../shared_file_systems_cgroups.rst:34
msgid ""
"``pool`` or ``host``. Consistency groups are supported. Specifies the level "
"of consistency groups support."
msgstr ""
#: ../shared_file_systems_cgroups.rst:37
msgid "``false``. Consistency groups are not supported."
msgstr ""
#: ../shared_file_systems_cgroups.rst:39
msgid ""
"Create a new consistency group, specify a share network and one or more "
"share types. In the example a consistency group ``cgroup1`` was created with "
"specifying two comma-separated share types:"
msgstr ""
#: ../shared_file_systems_cgroups.rst:63
msgid "Check that consistency group is in available status:"
msgstr ""
#: ../shared_file_systems_cgroups.rst:85
msgid ""
"To add a share to the consistency group you need to create a share with a "
"``--consistency-group`` option where you specify the ID of the consistency "
"group in ``available`` status:"
msgstr ""
#: ../shared_file_systems_cgroups.rst:120
msgid ""
"Admin can rename the consistency group or change its description using "
"**manila cg-update** command, or delete it by **manila cg-delete**."
msgstr ""
#: ../shared_file_systems_cgroups.rst:123
msgid ""
"As an administrator, you can also reset the state of a consistency group and "
"force-delete a specified consistency group in any state. Use the :file:"
"`policy.json` file to grant permissions for these actions to other roles."
msgstr ""
#: ../shared_file_systems_cgroups.rst:127
msgid ""
"Use **manila cg-reset-state [--state <state>] <consistency_group>** to "
"update the state of a consistency group explicitly. A valid value of a "
"status are ``available``, ``error``, ``creating``, ``deleting``, "
"``error_deleting``. If no state is provided, available will be used."
msgstr ""
#: ../shared_file_systems_cgroups.rst:136
msgid ""
"Use **manila cg-delete <consistency_group> [<consistency_group> ...]** to "
"soft-delete one or more consistency group."
msgstr ""
#: ../shared_file_systems_cgroups.rst:140
msgid ""
"The consistency group can be deleted only if it has no dependent :ref:"
"`shared_file_systems_cgsnapshots`."
msgstr ""
#: ../shared_file_systems_cgroups.rst:147
msgid ""
"Use **manila cg-delete --force <consistency_group> "
"[<consistency_group> ...]** to force-delete a specified consistency group in "
"any state."
msgstr ""
#: ../shared_file_systems_cgroups.rst:157
msgid "Consistency group snapshots"
msgstr ""
#: ../shared_file_systems_cgroups.rst:159
msgid ""
"You can create snapshots of consistency groups. To create a snapshot, you "
"specify the ID or name of the consistency group that you want to snapshot. "
"After you create a consistency group snapshot, you can create a consistency "
"group from it."
msgstr ""
#: ../shared_file_systems_cgroups.rst:164
msgid "Create a snapshot of consistency group ``cgroup1``:"
msgstr ""
#: ../shared_file_systems_cgroups.rst:181
msgid "Check the status of created consistency group snapshot:"
msgstr ""
#: ../shared_file_systems_cgroups.rst:198
msgid ""
"Admin can rename the consistency group snapshot or change its description "
"using **cg-snapshot-update** command, or delete it by **cg-snapshot-delete**."
msgstr ""
#: ../shared_file_systems_cgroups.rst:201
msgid ""
"A consistency group snapshot can have **members**. The consistency group "
"snapshot members are the shares that belong to some consistency group. To "
"add a member, include the ``--consistency-group`` optional parameter in the "
"create share command. This ID must match the ID of the consistency group "
"from which the consistency group snapshot was created. Then, while restoring "
"data, for example, and operating with consistency group snapshots you can "
"quickly find which shares belong to a specified consistency group."
msgstr ""
#: ../shared_file_systems_cgroups.rst:209
msgid ""
"You created the share ``Share2`` in ``cgroup1`` consistency group. Since you "
"made a snapshot of it, you can see that the only member of the consistency "
"group snapshot is ``Share2`` share:"
msgstr ""
#: ../shared_file_systems_cgroups.rst:222
msgid ""
"After you create a consistency group snapshot, you can create a consistency "
"group from it:"
msgstr ""
#: ../shared_file_systems_cgroups.rst:245
msgid "Check the list of consistency group. There are two groups now:"
msgstr ""
#: ../shared_file_systems_cgroups.rst:257
msgid ""
"Check a list of the shares. New share with ``ba52454e-2ea3-47fa-"
"a683-3176a01295e6`` ID was created when you created a consistency group "
"``cgroup2`` from a snapshot with a member."
msgstr ""
#: ../shared_file_systems_cgroups.rst:271
msgid "Print detailed information about new share:"
msgstr ""
#: ../shared_file_systems_cgroups.rst:274
msgid ""
"Pay attention on the ``source_cgsnapshot_member_id`` and "
"``consistency_group_id`` fields in a new share. It has "
"``source_cgsnapshot_member_id`` that is equal to the ID of the consistency "
"group snapshot and ``consistency_group_id`` that is equal to the ID of "
"``cgroup2`` that was created from a snapshot."
msgstr ""
#: ../shared_file_systems_cgroups.rst:310
msgid ""
"As an administrator, you can also reset the state of a consistency group "
"snapshot with **cg-snapshot-reset-state** and force-delete a specified "
"consistency group snapshot in any state using **cg-snapshot-delete** with "
"``--force`` key. Use the :file:`policy.json` file to grant permissions for "
"these actions to other roles."
msgstr ""
#: ../shared_file_systems_crud_share.rst:5
msgid "Share basic operations"
msgstr ""
#: ../shared_file_systems_crud_share.rst:8
msgid "General concepts"
msgstr ""
#: ../shared_file_systems_crud_share.rst:10
msgid "As general concepts, to create a file share and access it you need to:"
msgstr ""
#: ../shared_file_systems_crud_share.rst:12
#: ../shared_file_systems_crud_share.rst:88
#: ../shared_file_systems_crud_share.rst:201
msgid ""
"To create a share, use :command:`manila create` command and specify the "
"required arguments: the size of the share and the shared file system "
"protocol. ``NFS``, ``CIFS``, ``GlusterFS``, or ``HDFS`` share file system "
"protocols are supported."
msgstr ""
#: ../shared_file_systems_crud_share.rst:17
msgid "You can also optionally specify the share network and the share type."
msgstr ""
#: ../shared_file_systems_crud_share.rst:19
#: ../shared_file_systems_crud_share.rst:110
#: ../shared_file_systems_crud_share.rst:224
msgid ""
"After the share becomes available, use the :command:`manila show` command to "
"get the share export location."
msgstr ""
#: ../shared_file_systems_crud_share.rst:22
msgid ""
"After getting the share export location, you can create an :ref:`access rule "
"<access_to_share>` for the share, mount it and work with files on the remote "
"file system."
msgstr ""
#: ../shared_file_systems_crud_share.rst:26
msgid ""
"There are big number of the share drivers created by different vendors in "
"the Shared File Systems service. As a Python class, each share driver can be "
"set for the :ref:`back end <shared_file_systems_multi_backend>` and run in "
"the back end to manage the share operations."
msgstr ""
#: ../shared_file_systems_crud_share.rst:31
msgid "Initially there are two driver modes for the back ends:"
msgstr ""
#: ../shared_file_systems_crud_share.rst:33
msgid "no share servers mode"
msgstr ""
#: ../shared_file_systems_crud_share.rst:34
msgid "share servers mode"
msgstr ""
#: ../shared_file_systems_crud_share.rst:36
msgid ""
"Each share driver supports one or two of possible back end modes that can be "
"configured in :file:`manila.conf` file. The configuration option "
"``driver_handles_share_servers`` in :file:`manila.conf` file sets the share "
"servers mode or no share servers mode, and defines the driver mode for share "
"storage life cycle management:"
msgstr ""
#: ../shared_file_systems_crud_share.rst:43
msgid "Config option"
msgstr ""
#: ../shared_file_systems_crud_share.rst:43
msgid "Mode"
msgstr ""
#: ../shared_file_systems_crud_share.rst:45
msgid ""
"An administrator rather than a share driver manages the bare metal storage "
"with some net interface instead of the presence of the share servers."
msgstr ""
#: ../shared_file_systems_crud_share.rst:45
msgid "driver_handles_share_servers = False"
msgstr ""
#: ../shared_file_systems_crud_share.rst:45
msgid "no share servers"
msgstr ""
#: ../shared_file_systems_crud_share.rst:54
msgid ""
"The share driver creates the share server and manages, or handles, the share "
"server life cycle."
msgstr ""
#: ../shared_file_systems_crud_share.rst:54
msgid "driver_handles_share_servers = True"
msgstr ""
#: ../shared_file_systems_crud_share.rst:54
msgid "share servers"
msgstr ""
#: ../shared_file_systems_crud_share.rst:62
msgid ""
"It is :ref:`the share types <shared_file_systems_share_types>` which have "
"the extra specifications that help scheduler to filter back ends and choose "
"the appropriate back end for the user that requested to create a share. The "
"required extra boolean specification for each share type is "
"``driver_handles_share_servers``. As an administrator, you can create the "
"share types with the specifications you need. For details of managing the "
"share types and configuration the back ends, see :ref:"
"`shared_file_systems_share_types` and :ref:"
"`shared_file_systems_multi_backend` documentation."
msgstr ""
#: ../shared_file_systems_crud_share.rst:71
msgid "You can create a share in two described above modes:"
msgstr ""
#: ../shared_file_systems_crud_share.rst:73
msgid ""
"in a no share servers mode without specifying the share network and "
"specifying the share type with ``driver_handles_share_servers = False`` "
"parameter. See subsection :ref:`create_share_in_no_share_server_mode`."
msgstr ""
#: ../shared_file_systems_crud_share.rst:77
msgid ""
"in a share servers mode with specifying the share network and the share type "
"with ``driver_handles_share_servers = True`` parameter. See subsection :ref:"
"`create_share_in_share_server_mode`."
msgstr ""
#: ../shared_file_systems_crud_share.rst:84
msgid "Create a share in no share servers mode"
msgstr ""
#: ../shared_file_systems_crud_share.rst:86
msgid "To create a file share in no share servers mode, you need to:"
msgstr ""
#: ../shared_file_systems_crud_share.rst:93
msgid ""
"You should specify the :ref:`share type <shared_file_systems_share_types>` "
"with ``driver_handles_share_servers = False`` extra specification."
msgstr ""
#: ../shared_file_systems_crud_share.rst:96
msgid ""
"You must not specify the ``share network`` because no share servers are "
"created. In this mode the Shared File Systems service expects that "
"administrator has some bare metal storage with some net interface."
msgstr ""
#: ../shared_file_systems_crud_share.rst:100
#: ../shared_file_systems_crud_share.rst:212
msgid ""
"The :command:`manila create` command creates a share. This command does the "
"following things:"
msgstr ""
#: ../shared_file_systems_crud_share.rst:103
msgid ""
"The :ref:`manila-scheduler <shared_file_systems_scheduling>` service will "
"find the back end with ``driver_handles_share_servers = False`` mode due to "
"filtering the extra specifications of the share type."
msgstr ""
#: ../shared_file_systems_crud_share.rst:107
msgid ""
"The shared is created using the storage that is specified in the found back "
"end."
msgstr ""
#: ../shared_file_systems_crud_share.rst:113
msgid ""
"In the example to create a share, the created already share type named "
"``my_type`` with ``driver_handles_share_servers = False`` extra "
"specification is used."
msgstr ""
#: ../shared_file_systems_crud_share.rst:117
#: ../shared_file_systems_crud_share.rst:236
msgid "Check share types that exist, run:"
msgstr ""
#: ../shared_file_systems_crud_share.rst:128
msgid ""
"Create a private share with ``my_type`` share type, NFS shared file system "
"protocol, and size 1 GB:"
msgstr ""
#: ../shared_file_systems_crud_share.rst:162
msgid "New share ``Share2`` should have a status ``available``:"
msgstr ""
#: ../shared_file_systems_crud_share.rst:197
msgid "Create a share in share servers mode"
msgstr ""
#: ../shared_file_systems_crud_share.rst:199
msgid "To create a file share in share servers mode, you need to:"
msgstr ""
#: ../shared_file_systems_crud_share.rst:206
msgid ""
"You should specify the :ref:`share type <shared_file_systems_share_types>` "
"with ``driver_handles_share_servers = True`` extra specification."
msgstr ""
#: ../shared_file_systems_crud_share.rst:209
msgid ""
"You should specify the :ref:`share network "
"<shared_file_systems_share_networks>`."
msgstr ""
#: ../shared_file_systems_crud_share.rst:215
msgid ""
"The :ref:`manila-scheduler <shared_file_systems_scheduling>` service will "
"find the back end with ``driver_handles_share_servers = True`` mode due to "
"filtering the extra specifications of the share type."
msgstr ""
#: ../shared_file_systems_crud_share.rst:219
msgid ""
"The share driver will create a share server with the share network. For "
"details of creating the resources, see the `documentation <http://docs. "
"openstack.org/developer/manila/devref/index.html#share-backends>`_ of the "
"specific share driver."
msgstr ""
#: ../shared_file_systems_crud_share.rst:227
msgid ""
"In the example to create a share, the default share type and the already "
"existing share network are used."
msgstr ""
#: ../shared_file_systems_crud_share.rst:231
msgid ""
"There is no default share type just after you started manila as the "
"administrator. See :ref:`shared_file_systems_share_types` to create the "
"default share type. To create a share network, use :ref:"
"`shared_file_systems_share_networks`."
msgstr ""
#: ../shared_file_systems_crud_share.rst:247
msgid "Check share networks that exist, run:"
msgstr ""
#: ../shared_file_systems_crud_share.rst:258
msgid ""
"Create a public share with ``my_share_net`` network, ``default`` share type, "
"NFS shared file system protocol, and size 1 GB:"
msgstr ""
#: ../shared_file_systems_crud_share.rst:292
msgid ""
"The share also can be created from a share snapshot. For details, see :ref:"
"`shared_file_systems_snapshots`."
msgstr ""
#: ../shared_file_systems_crud_share.rst:295
msgid "See the share in a share list:"
msgstr ""
#: ../shared_file_systems_crud_share.rst:306
msgid ""
"Check the share status and see the share export location. After ``creating`` "
"status share should have status ``available``:"
msgstr ""
#: ../shared_file_systems_crud_share.rst:339
msgid ""
"``is_public`` defines the level of visibility for the share: whether other "
"tenants can or cannot see the share. By default, the share is private."
msgstr ""
#: ../shared_file_systems_crud_share.rst:343
msgid "Update share"
msgstr ""
#: ../shared_file_systems_crud_share.rst:345
msgid ""
"Update the name, or description, or level of visibility for all tenants for "
"the share if you need:"
msgstr ""
#: ../shared_file_systems_crud_share.rst:380
msgid "A share can have one of these status values:"
msgstr ""
#: ../shared_file_systems_crud_share.rst:385
msgid "The share is being created."
msgstr ""
#: ../shared_file_systems_crud_share.rst:385
msgid "creating"
msgstr ""
#: ../shared_file_systems_crud_share.rst:387
msgid "The share is being deleted."
msgstr ""
#: ../shared_file_systems_crud_share.rst:387
msgid "deleting"
msgstr ""
#: ../shared_file_systems_crud_share.rst:389
msgid "An error occurred during share creation."
msgstr ""
#: ../shared_file_systems_crud_share.rst:389
msgid "error"
msgstr ""
#: ../shared_file_systems_crud_share.rst:391
msgid "An error occurred during share deletion."
msgstr ""
#: ../shared_file_systems_crud_share.rst:391
msgid "error_deleting"
msgstr ""
#: ../shared_file_systems_crud_share.rst:393
msgid "The share is ready to use."
msgstr ""
#: ../shared_file_systems_crud_share.rst:393
msgid "available"
msgstr ""
#: ../shared_file_systems_crud_share.rst:395
msgid "Share manage started."
msgstr ""
#: ../shared_file_systems_crud_share.rst:395
msgid "manage_starting"
msgstr ""
#: ../shared_file_systems_crud_share.rst:397
msgid "Share manage failed."
msgstr ""
#: ../shared_file_systems_crud_share.rst:397
msgid "manage_error"
msgstr ""
#: ../shared_file_systems_crud_share.rst:399
msgid "Share unmanage started."
msgstr ""
#: ../shared_file_systems_crud_share.rst:399
msgid "unmanage_starting"
msgstr ""
#: ../shared_file_systems_crud_share.rst:401
msgid "Share cannot be unmanaged."
msgstr ""
#: ../shared_file_systems_crud_share.rst:401
msgid "unmanage_error"
msgstr ""
#: ../shared_file_systems_crud_share.rst:403
msgid "Share was unmanaged."
msgstr ""
#: ../shared_file_systems_crud_share.rst:403
msgid "unmanaged"
msgstr ""
#: ../shared_file_systems_crud_share.rst:405
msgid "The extend, or increase, share size request was issued successfully."
msgstr ""
#: ../shared_file_systems_crud_share.rst:405
msgid "extending"
msgstr ""
#: ../shared_file_systems_crud_share.rst:408
msgid "Extend share failed."
msgstr ""
#: ../shared_file_systems_crud_share.rst:408
msgid "extending_error"
msgstr ""
#: ../shared_file_systems_crud_share.rst:410
msgid "Share is being shrunk."
msgstr ""
#: ../shared_file_systems_crud_share.rst:410
msgid "shrinking"
msgstr ""
#: ../shared_file_systems_crud_share.rst:412
msgid "Failed to update quota on share shrinking."
msgstr ""
#: ../shared_file_systems_crud_share.rst:412
msgid "shrinking_error"
msgstr ""
#: ../shared_file_systems_crud_share.rst:415
msgid "Shrink share failed due to possible data loss."
msgstr ""
#: ../shared_file_systems_crud_share.rst:415
msgid "shrinking_possible_data_loss_error"
msgstr ""
#: ../shared_file_systems_crud_share.rst:422
msgid "Share metadata"
msgstr ""
#: ../shared_file_systems_crud_share.rst:424
msgid "If you want to set the metadata key-value pairs on the share, run:"
msgstr ""
#: ../shared_file_systems_crud_share.rst:430
msgid "Get all metadata key-value pairs of the share:"
msgstr ""
#: ../shared_file_systems_crud_share.rst:443
msgid "You can update the metadata:"
msgstr ""
#: ../shared_file_systems_crud_share.rst:454
msgid ""
"You also can unset the metadata using **manila metadata <share_name> unset "
"<metadata_key(s)>**."
msgstr ""
#: ../shared_file_systems_crud_share.rst:458
msgid "Reset share state"
msgstr ""
#: ../shared_file_systems_crud_share.rst:460
msgid "As administrator, you can reset the state of a share."
msgstr ""
#: ../shared_file_systems_crud_share.rst:462
msgid ""
"Use **manila reset-state [--state <state>] <share>** command to reset share "
"state, where ``state`` indicates which state to assign the share. Options "
"include ``available``, ``error``, ``creating``, ``deleting``, "
"``error_deleting`` states."
msgstr ""
#: ../shared_file_systems_crud_share.rst:501
msgid "Delete and force-delete share"
msgstr ""
#: ../shared_file_systems_crud_share.rst:503
msgid ""
"You also can force-delete a share. The shares cannot be deleted in "
"transitional states. The transitional states are ``creating``, ``deleting``, "
"``managing``, ``unmanaging``, ``extending``, and ``shrinking`` statuses for "
"the shares. Force-deletion deletes an object in any state. Use the :file:"
"`policy.json` file to grant permissions for this action to other roles."
msgstr ""
#: ../shared_file_systems_crud_share.rst:511
msgid ""
"The configuration file ``policy.json`` may be used from different places. "
"The path ``/etc/manila/policy.json`` is one of expected paths by default."
msgstr ""
#: ../shared_file_systems_crud_share.rst:514
msgid ""
"Use **manila delete <share_name_or_ID>** command to delete a specified share:"
msgstr ""
#: ../shared_file_systems_crud_share.rst:521
msgid ""
"If you specified :ref:`the consistency group <shared_file_systems_cgroups>` "
"while creating a share, you should provide the ``--consistency-group`` "
"parameter to delete the share:"
msgstr ""
#: ../shared_file_systems_crud_share.rst:530
msgid ""
"If you try to delete the share in one of the transitional state using soft-"
"deletion you'll get an error:"
msgstr ""
#: ../shared_file_systems_crud_share.rst:539
msgid ""
"A share cannot be deleted in a transitional status, that it why an error "
"from ``python-manilaclient`` appeared."
msgstr ""
#: ../shared_file_systems_crud_share.rst:542
msgid "Print the list of all shares for all tenants:"
msgstr ""
#: ../shared_file_systems_crud_share.rst:554
msgid ""
"Force-delete Share2 and check that it is absent in the list of shares, run:"
msgstr ""
#: ../shared_file_systems_crud_share.rst:571
msgid "Manage access to share"
msgstr ""
#: ../shared_file_systems_crud_share.rst:573
msgid ""
"The Shared File Systems service allows to grant or deny access to a "
"specified share, and list the permissions for a specified share."
msgstr ""
#: ../shared_file_systems_crud_share.rst:576
msgid ""
"To grant or deny access to a share, specify one of these supported share "
"access levels:"
msgstr ""
#: ../shared_file_systems_crud_share.rst:579
msgid "**rw**. Read and write (RW) access. This is the default value."
msgstr ""
#: ../shared_file_systems_crud_share.rst:581
msgid "**ro**. Read-only (RO) access."
msgstr ""
#: ../shared_file_systems_crud_share.rst:583
msgid "You must also specify one of these supported authentication methods:"
msgstr ""
#: ../shared_file_systems_crud_share.rst:585
msgid ""
"**ip**. Authenticates an instance through its IP address. A valid format is "
"``XX.XX.XX.XX`` or ``XX.XX.XX.XX/XX``. For example ``0.0.0.0/0``."
msgstr ""
#: ../shared_file_systems_crud_share.rst:588
msgid ""
"**cert**. Authenticates an instance through a TLS certificate. Specify the "
"TLS identity as the IDENTKEY. A valid value is any string up to 64 "
"characters long in the common name (CN) of the certificate. The meaning of a "
"string depends on its interpretation."
msgstr ""
#: ../shared_file_systems_crud_share.rst:593
msgid ""
"**user**. Authenticates by a specified user or group name. A valid value is "
"an alphanumeric string that can contain some special characters and is from "
"4 to 32 characters long."
msgstr ""
#: ../shared_file_systems_crud_share.rst:597
msgid ""
"Try to mount NFS share with export path ``10.254.0.5:/shares/"
"share-5789ddcf-35c9-4b64-a28a-7f6a4a574b6a`` on the node with IP address "
"``10.254.0.4``:"
msgstr ""
#: ../shared_file_systems_crud_share.rst:609
msgid ""
"An error message \"Permission denied\" appeared, so you are not allowed to "
"mount a share without an access rule. Allow access to the share with ``ip`` "
"access type and ``10.254.0.4`` IP address:"
msgstr ""
#: ../shared_file_systems_crud_share.rst:627
msgid "Try to mount a share again. This time it is mounted successfully:"
msgstr ""
#: ../shared_file_systems_crud_share.rst:633
msgid ""
"Since it is allowed node on 10.254.0.4 read and write access, try to create "
"a file on a mounted share:"
msgstr ""
#: ../shared_file_systems_crud_share.rst:643
msgid ""
"Connect via SSH to the 10.254.0.5 node and check new file `my_file.txt` in "
"the ``/shares/share-5789ddcf-35c9-4b64-a28a-7f6a4a574b6a`` directory:"
msgstr ""
#: ../shared_file_systems_crud_share.rst:656
msgid ""
"You have successfully created a file from instance that was given access by "
"its IP address."
msgstr ""
#: ../shared_file_systems_crud_share.rst:659
msgid "Allow access to the share with ``user`` access type:"
msgstr ""
#: ../shared_file_systems_crud_share.rst:676
msgid ""
"Different share features are supported by different share drivers. For the "
"example, the Generic driver with the Block Storage service as a back-end "
"doesn't support ``user`` and ``cert`` authentications methods. For details "
"of supporting of features by different drivers, see `Manila share features "
"support mapping <http://docs.openstack.org/developer/manila/devref /"
"share_back_ends_feature_support_mapping.html>`_."
msgstr ""
#: ../shared_file_systems_crud_share.rst:683
msgid ""
"To verify that the access rules (ACL) were configured correctly for a share, "
"you list permissions for a share:"
msgstr ""
#: ../shared_file_systems_crud_share.rst:696
msgid ""
"Deny access to the share and check that deleted access rule is absent in the "
"access rule list:"
msgstr ""
#: ../shared_file_systems_intro.rst:7
msgid ""
"The OpenStack File Share service allows you to offer file-share services to "
"users of an OpenStack installation. The Shared File Systems service can be "
"configured to run in a single-node configuration or across multiple nodes. "
"The Shared File Systems service can be configured to provision shares from "
"one or more back ends, so it is required to declare at least one back end. "
"To administer the Shared File Systems service, it is helpful to understand a "
"number of concepts like share networks, shares, multi-tenancy and back ends "
"that can be configured with the Shared File Systems service. The Shared File "
"Systems service consists of three types of services, which are similar to "
"those of the Block Storage service:"
msgstr ""
#: ../shared_file_systems_intro.rst:18
msgid "``manila-api``"
msgstr ""
#: ../shared_file_systems_intro.rst:19
msgid "``manila-scheduler``"
msgstr ""
#: ../shared_file_systems_intro.rst:20
msgid "``manila-share``"
msgstr ""
#: ../shared_file_systems_intro.rst:22
msgid ""
"Installation of first two - ``manila-api`` and ``manila-scheduler`` is "
"common for almost all deployments. But configuration of ``manila-share`` is "
"backend-specific and can differ from deployment to deployment."
msgstr ""
#: ../shared_file_systems_key_concepts.rst:5
msgid "Key concepts"
msgstr ""
#: ../shared_file_systems_key_concepts.rst:8
msgid "Share"
msgstr ""
#: ../shared_file_systems_key_concepts.rst:10
msgid ""
"In the Shared File Systems service ``share`` is the fundamental resource "
"unit allocated by the Shared File System service. It represents an "
"allocation of a persistent, readable, and writable filesystem that can be "
"accessed by OpenStack compute instances, or clients outside of OpenStack, "
"which depends on deployment configuration."
msgstr ""
#: ../shared_file_systems_key_concepts.rst:17
msgid ""
"A ``share`` is an abstract storage object that may or may not directly map "
"to a \"share\" concept from the underlying storage provider."
msgstr ""
#: ../shared_file_systems_key_concepts.rst:22
msgid "Snapshot"
msgstr ""
#: ../shared_file_systems_key_concepts.rst:24
msgid ""
"A ``snapshot`` is a point-in-time, read-only copy of a ``share``. "
"``Snapshots`` can be created from an existing ``share`` that is operational "
"regardless of whether a client has mounted the file system. A ``snapshot`` "
"can serve as the content source for a new ``share`` when the ``share`` is "
"created with the create from snapshot option specified."
msgstr ""
#: ../shared_file_systems_key_concepts.rst:31
msgid "Storage Pools"
msgstr ""
#: ../shared_file_systems_key_concepts.rst:33
msgid ""
"With the Kilo release of OpenStack, the Shared File Systems service has "
"introduced the concept of ``storage pools``. The storage may present one or "
"more logical storage resource pools from which the Shared File Systems "
"service will select as a storage location when provisioning ``shares``."
msgstr ""
#: ../shared_file_systems_key_concepts.rst:39
msgid "Share Type"
msgstr ""
#: ../shared_file_systems_key_concepts.rst:41
msgid ""
"``Share type`` is an abstract collection of criteria used to characterize "
"``shares``. They are most commonly used to create a hierarchy of functional "
"capabilities that represent a tiered level of storage services; for example, "
"a cloud administrator might define a premium ``share type`` that indicates a "
"greater level of performance than a basic ``share type``, which would "
"represent a best-effort level of performance."
msgstr ""
#: ../shared_file_systems_key_concepts.rst:50
msgid "Share Access Rules"
msgstr ""
#: ../shared_file_systems_key_concepts.rst:52
msgid ""
"``Share access rules`` define which users can access a particular ``share``. "
"For example, access rules can be declared for NFS shares by listing the "
"valid IP networks, in CIDR notation, which should have access to the "
"``share``."
msgstr ""
#: ../shared_file_systems_key_concepts.rst:57
msgid "Security Services"
msgstr ""
#: ../shared_file_systems_key_concepts.rst:59
msgid ""
"``Security services`` are the concept in the Shared File Systems service "
"that allow Finer-grained client access rules to be declared for "
"authentication or authorization to access ``share`` content. External "
"services including LDAP, Active Directory, Kerberos can be declared as "
"resources that should be consulted when making an access decision to a "
"particular ``share``. ``Shares`` can be associated to multiple security "
"services but only one service per one type."
msgstr ""
#: ../shared_file_systems_key_concepts.rst:68
msgid "Share Networks"
msgstr ""
#: ../shared_file_systems_key_concepts.rst:70
msgid ""
"A ``share network`` is an object that defines a relationship between a "
"tenant's network and subnet, as defined in an OpenStack Networking service "
"or Compute service, and the ``shares`` created by the same tenant; that is, "
"a tenant may find it desirable to provision ``shares`` such that only "
"instances connected to a particular OpenStack-defined network have access to "
"the ``share``. Also, ``security services`` can be attached to ``share "
"networks``, because most of auth protocols require some interaction with "
"network services."
msgstr ""
#: ../shared_file_systems_key_concepts.rst:78
msgid ""
"The Shared File Systems service has the ability to work outside of "
"OpenStack. That is due to the ``StandaloneNetworkPlugin`` that can be used "
"with any network platform and does not require some specific network "
"services in OpenStack like Compute or Networking service. You can set the "
"network parameters in its configuration file."
msgstr ""
#: ../shared_file_systems_key_concepts.rst:85
msgid "Share Servers"
msgstr ""
#: ../shared_file_systems_key_concepts.rst:87
msgid ""
"A ``share server`` is a logical entity that hosts the shares that are "
"created on a specific ``share network``. A ``share server`` may be a "
"configuration object within the storage controller, or it may represent "
"logical resources provisioned within an OpenStack deployment that are used "
"to support the data path used to access ``shares``."
msgstr ""
#: ../shared_file_systems_key_concepts.rst:93
msgid ""
"``Share servers`` interact with network services to determine the "
"appropriate IP addresses on which to export ``shares`` according to the "
"related ``share network``. The Shared File Systems service has a pluggable "
"network model that allows ``share servers`` to work with different "
"implementations of Network service."
msgstr ""
#: ../shared_file_systems_manage_and_unmanage_share.rst:5
msgid "Manage and unmanage share"
msgstr ""
#: ../shared_file_systems_manage_and_unmanage_share.rst:7
msgid ""
"To ``manage`` a share means that an administrator rather than a share driver "
"manages the storage life cycle. This approach is appropriate when an "
"administrator already has the custom non-manila share with its size, shared "
"file system protocol, export path and so on, and administrator wants to "
"register it in the Shared File System service."
msgstr ""
#: ../shared_file_systems_manage_and_unmanage_share.rst:13
msgid ""
"To ``unmanage`` a share means to unregister a specified share from the "
"Shared File Systems service. An administrator can manage the custom share "
"back."
msgstr ""
#: ../shared_file_systems_manage_and_unmanage_share.rst:19
msgid "Unmanage share"
msgstr ""
#: ../shared_file_systems_manage_and_unmanage_share.rst:20
msgid ""
"You can ``unmanage`` a share, to unregister it from the Shared File System "
"service, and take manual control on share life cycle. The ``unmanage`` "
"operation is not supported for shares that were created on top of share "
"servers and created with share networks), so share service should have "
"option ``driver_handles_share_servers = False`` in its configuration. You "
"can unmanage a share that has no dependent snapshots."
msgstr ""
#: ../shared_file_systems_manage_and_unmanage_share.rst:27
msgid ""
"To unmanage managed share, run :command:`manila unmanage <share>` command. "
"Then try to print the information about it. The expected behavior is that "
"Shared File Systems service won't find the share:"
msgstr ""
#: ../shared_file_systems_manage_and_unmanage_share.rst:40
msgid "Manage share"
msgstr ""
#: ../shared_file_systems_manage_and_unmanage_share.rst:41
msgid ""
"To register the non-managed share in File System service you need to run :"
"command:`manila manage` command which has such arguments:"
msgstr ""
#: ../shared_file_systems_manage_and_unmanage_share.rst:51
msgid "The positional arguments are:"
msgstr ""
#: ../shared_file_systems_manage_and_unmanage_share.rst:53
msgid ""
"service_host. The manage-share service host in this format: "
"``host@backend#POOL`` which consists of the host name for the back end, the "
"name of the back end and the pool name for the back end."
msgstr ""
#: ../shared_file_systems_manage_and_unmanage_share.rst:57
msgid ""
"protocol. The Shared File Systems protocol of the share to manage. A valid "
"value is NFS, CIFS, GlusterFS, or HDFS."
msgstr ""
#: ../shared_file_systems_manage_and_unmanage_share.rst:60
msgid ""
"export_path. The share export path in the format appropriate for the "
"protocol:"
msgstr ""
#: ../shared_file_systems_manage_and_unmanage_share.rst:63
msgid "NFS protocol. 10.0.0.1:/foo_path."
msgstr ""
#: ../shared_file_systems_manage_and_unmanage_share.rst:65
msgid "CIFS protocol. \\\\\\\\10.0.0.1\\\\foo_name_of_cifs_share."
msgstr ""
#: ../shared_file_systems_manage_and_unmanage_share.rst:67
msgid "HDFS protocol. hdfs://10.0.0.1:foo_port/foo_share_name."
msgstr ""
#: ../shared_file_systems_manage_and_unmanage_share.rst:69
msgid "GlusterFS. 10.0.0.1:/foo_volume."
msgstr ""
#: ../shared_file_systems_manage_and_unmanage_share.rst:71
msgid ""
"The ``driver_options`` is an optional set of one or more key and value "
"pairs, that describe driver options. Note that the share type must have "
"``driver_handles_share_servers = False`` option, so special share type named "
"``for_managing`` was used in example."
msgstr ""
#: ../shared_file_systems_manage_and_unmanage_share.rst:76
msgid "To manage share, run:"
msgstr ""
#: ../shared_file_systems_manage_and_unmanage_share.rst:109
msgid "Check that the share is available:"
msgstr ""
#: ../shared_file_systems_multi_backend.rst:5
msgid "Multi-storage configuration"
msgstr ""
#: ../shared_file_systems_multi_backend.rst:7
msgid ""
"The Shared File Systems service can provide access to multiple file storage "
"back ends. In general, the workflow with multiple back ends looks very "
"similar to the Block Storage service one, see :ref:`Configure multiple-"
"storage back ends in Openstack Block Storage service <multi_backend>`."
msgstr ""
#: ../shared_file_systems_multi_backend.rst:12
msgid ""
"Using `manila.conf`, you can spawn multiple share services. To do it, you "
"should set the `enabled_share_backends` flag in the `manila.conf` file. This "
"flag defines the comma-separated names of the configuration stanzas for the "
"different back ends: one name is associated to one configuration group for a "
"back end."
msgstr ""
#: ../shared_file_systems_multi_backend.rst:18
msgid "The following example runs three configured share services:"
msgstr ""
#: ../shared_file_systems_multi_backend.rst:54
msgid ""
"To spawn separate groups of share services, you can use separate "
"configuration files. If it is necessary to control each back end in a "
"separate way, you should provide a single configuration file per each back "
"end."
msgstr ""
#: ../shared_file_systems_network_plugins.rst:5
msgid "Network plug-ins"
msgstr ""
#: ../shared_file_systems_network_plugins.rst:7
msgid ""
"The Shared File Systems service architecture defines an abstraction layer "
"for network resource provisioning and allowing administrators to choose from "
"a different options for how network resources are assigned to their tenants "
"networked storage. There are a set of network plug-ins that provide a "
"variety of integration approaches with the network services that are "
"available with OpenStack."
msgstr ""
#: ../shared_file_systems_network_plugins.rst:14
msgid ""
"The Shared File Systems service may need a network resource provisioning if "
"share service with specified driver works in mode, when a share driver "
"manages life cycle of share servers on its own. This behavior is defined by "
"a flag ``driver_handles_share_servers`` in share service configuration. "
"When ``driver_handles_share_servers`` is set to ``True``, a share driver "
"will be called to create share servers for shares using information provided "
"within a share network. This information will be provided to one of the "
"enabled network plug-ins that will handle reservation, creation and deletion "
"of network resources including IP addresses and network interfaces."
msgstr ""
#: ../shared_file_systems_network_plugins.rst:25
msgid "What network plug-ins are available?"
msgstr ""
#: ../shared_file_systems_network_plugins.rst:27
msgid ""
"There are three different network plug-ins and five python classes in the "
"Shared File Systems service:"
msgstr ""
#: ../shared_file_systems_network_plugins.rst:30
msgid ""
"Network plug-in for using the OpenStack Networking service. It allows to use "
"any network segmentation that the Networking service supports. It is up to "
"each share driver to support at least one network segmentation type."
msgstr ""
#: ../shared_file_systems_network_plugins.rst:34
msgid ""
"``manila.network.neutron.neutron_network_plugin.NeutronNetworkPlugin``. This "
"is a default network plug-in. It requires the ``neutron_net_id`` and the "
"``neutron_subnet_id`` to be provided when defining the share network that "
"will be used for the creation of share servers. The user may define any "
"number of share networks corresponding to the various physical network "
"segments in a tenant environment."
msgstr ""
#: ../shared_file_systems_network_plugins.rst:41
msgid ""
"``manila.network.neutron.neutron_network_plugin. "
"NeutronSingleNetworkPlugin``. This is a simplification of the previous case. "
"It accepts values for ``neutron_net_id`` and ``neutron_subnet_id`` from the "
"``manila.conf`` configuration file and uses one network for all shares."
msgstr ""
#: ../shared_file_systems_network_plugins.rst:47
msgid ""
"When only a single network is needed, the NeutronSingleNetworkPlugin (1.b) "
"is a simple solution. Otherwise NeutronNetworkPlugin (1.a) should be chosen."
msgstr ""
#: ../shared_file_systems_network_plugins.rst:50
msgid ""
"Network plug-in for working with OpenStack Networking from the Compute "
"service. It supports either flat networks or VLAN-segmented networks."
msgstr ""
#: ../shared_file_systems_network_plugins.rst:53
msgid ""
"``manila.network.nova_network_plugin.NovaNetworkPlugin``. This plug-in "
"serves the networking needs when ``Nova networking`` is configured in the "
"cloud instead of Neutron. It requires a single parameter, ``nova_net_id``."
msgstr ""
#: ../shared_file_systems_network_plugins.rst:58
msgid ""
"``manila.network.nova_network_plugin.NovaSingleNetworkPlugin``. This plug-in "
"works the same way as ``manila.network.nova_network_plugin."
"NovaNetworkPlugin``, except it takes ``nova_net_id`` from the Shared File "
"Systems service configuration file and creates the share servers using only "
"one network."
msgstr ""
#: ../shared_file_systems_network_plugins.rst:64
msgid ""
"When only a single network is needed, the NovaSingleNetworkPlugin (2.b) is a "
"simple solution. Otherwise NovaNetworkPlugin (2.a) should be chosen."
msgstr ""
#: ../shared_file_systems_network_plugins.rst:67
msgid ""
"Network plug-in for specifying networks independently from OpenStack "
"networking services."
msgstr ""
#: ../shared_file_systems_network_plugins.rst:70
msgid ""
"``manila.network.standalone_network_plugin.StandaloneNetworkPlugin``. This "
"plug-in uses a pre-existing network that is available to the manila-share "
"host. This network may be handled either by OpenStack or be created "
"independently by any other means. The plug-in supports any type of network - "
"flat and segmented. As above, it is completely up to the share driver to "
"support the network type for which the network plug-in is configured."
msgstr ""
#: ../shared_file_systems_network_plugins.rst:80
msgid ""
"These network plug-ins were introduced in the OpenStack Kilo release. In the "
"OpenStack Juno version, only NeutronNetworkPlugin is available."
msgstr ""
#: ../shared_file_systems_network_plugins.rst:83
msgid ""
"More information about network plug-ins can be found in `Manila developer "
"documentation <http://docs.openstack.org/developer/manila/adminref/"
"network_plugins.html>`_"
msgstr ""
#: ../shared_file_systems_networking.rst:7
msgid ""
"Unlike the OpenStack Block Storage service, the Shared File Systems service "
"requires interaction with the Networking service. First of all, it is "
"because the share services require the option to self-manage share servers. "
"Also, for authentication and authorization of the clients, the Shared File "
"Systems service can be optionally configured to work with different network "
"authentication services, like LDAP, Kerberos protocols, or Microsoft Active "
"Directory."
msgstr ""
#: ../shared_file_systems_quotas.rst:5
msgid "Quotas and limits"
msgstr ""
#: ../shared_file_systems_quotas.rst:8
msgid "Limits"
msgstr ""
#: ../shared_file_systems_quotas.rst:10
msgid ""
"Limits are the resource limitations that are allowed for each tenant "
"(project). An administrator can configure limits in the :file:`manila.conf` "
"file."
msgstr ""
#: ../shared_file_systems_quotas.rst:13
msgid "Users can query their rate and absolute limits."
msgstr ""
#: ../shared_file_systems_quotas.rst:15
msgid "To see the absolute limits, run:"
msgstr ""
#: ../shared_file_systems_quotas.rst:35
msgid ""
"Rate limits control the frequency at which users can issue specific API "
"requests. Administrators use rate limiting to configure limits on the type "
"and number of API calls that can be made in a specific time interval. For "
"example, a rate limit can control the number of GET requests that can be "
"processed during a one-minute period."
msgstr ""
#: ../shared_file_systems_quotas.rst:41
msgid ""
"To set the API rate limits, add configuration to the :file:`etc/manila/api-"
"paste.ini` file that is a part of the WSGI pipeline and defines the actual "
"limits. You need to restart ``manila-api`` service after you edited :file:"
"`etc/manila/api-paste.ini` file."
msgstr ""
#: ../shared_file_systems_quotas.rst:52
msgid ""
"Also, add ``ratelimit`` to ``noauth``, ``keystone``, ``keystone_nolimit`` "
"parameters in the ``[composite:openstack_share_api]`` group."
msgstr ""
#: ../shared_file_systems_quotas.rst:62
msgid "To see the rate limits, run:"
msgstr ""
#: ../shared_file_systems_quotas.rst:76
msgid "Quotas"
msgstr ""
#: ../shared_file_systems_quotas.rst:78
msgid "Quota sets provide quotas management support."
msgstr ""
#: ../shared_file_systems_quotas.rst:80
msgid ""
"To list the quotas for a tenant or user, use the **manila quota-show** "
"command. If you specify the optional ``--user`` parameter, you get the "
"quotas for this user in the specified tenant. If you omit this parameter, "
"you get the quotas for the specified project."
msgstr ""
#: ../shared_file_systems_quotas.rst:98
msgid ""
"There are default quotas for a project that are set from the :file:`manila."
"conf` file. To list the default quotas for a project, use the **manila quota-"
"defaults** command:"
msgstr ""
#: ../shared_file_systems_quotas.rst:115
msgid ""
"The administrator can update the quotas for a specified tenant or for a "
"specified user by providing both the ``--tenant`` and ``--user`` optional "
"arguments. It is possible to update the ``snapshots``, ``gigabytes``, "
"``snapshot-gigabytes``, and ``share-networks`` quotas."
msgstr ""
#: ../shared_file_systems_quotas.rst:124
msgid ""
"As administrator, you can also permit or deny the force-update of a quota "
"that is already used and the requested value exceeds the configured quota. "
"To force-update a quota, use ``force`` optional key."
msgstr ""
#: ../shared_file_systems_quotas.rst:132
msgid "To revert quotas to default for a project or for a user, delete quotas:"
msgstr ""
#: ../shared_file_systems_scheduling.rst:5
msgid "Scheduling"
msgstr ""
#: ../shared_file_systems_scheduling.rst:7
msgid ""
"The Shared File Systems service provides unified access for a variety of "
"different types of shared file systems. To achieve this, the Shared File "
"Systems service uses a scheduler. The scheduler collects information from "
"the active shared services and makes decisions such as what shared services "
"will be used to create a new share. To manage this process, the Shared File "
"Systems service provides Share types API."
msgstr ""
#: ../shared_file_systems_scheduling.rst:14
msgid ""
"A share type is a list from key-value pairs called extra-specs. Some of "
"them, called required and un-scoped extra-specs, the scheduler uses for "
"lookup of the shared service suitable for a new share with the specified "
"share type. For more information about extra-specs and their type, see "
"`Capabilities and Extra-Specs <http://docs.openstack.org/developer/manila/"
"devref/capabilities_and_extra_specs.html>`_ section in developer "
"documentation."
msgstr ""
#: ../shared_file_systems_scheduling.rst:20
msgid "The general scheduler workflow is described below."
msgstr ""
#: ../shared_file_systems_scheduling.rst:22
msgid ""
"Share services report information about the number of existing pools, their "
"capacities and capabilities."
msgstr ""
#: ../shared_file_systems_scheduling.rst:25
msgid ""
"When a request on share creation comes in, the scheduler picks a service and "
"pool that fits the need best to serve the request, using share type filters "
"and back end capabilities. If back end capabilities pass through, all "
"filters request to the selected back end where the target pool resides."
msgstr ""
#: ../shared_file_systems_scheduling.rst:30
msgid ""
"The share driver gets the message and lets the target pool serve the request "
"as the scheduler instructs. The scoped and un-scoped share type extra-specs "
"are available for the driver implementation to use as needed."
msgstr ""
#: ../shared_file_systems_security_services.rst:5
msgid "Security services"
msgstr ""
#: ../shared_file_systems_security_services.rst:7
msgid ""
"A security service stores configuration information for clients for "
"authentication and authorization (AuthN/AuthZ). For example, a share server "
"will be the client for an existing service such as LDAP, Kerberos, or "
"Microsoft Active Directory."
msgstr ""
#: ../shared_file_systems_security_services.rst:12
msgid ""
"You can associate a share with from one to three security service types:"
msgstr ""
#: ../shared_file_systems_security_services.rst:14
msgid "``ldap``. LDAP."
msgstr ""
#: ../shared_file_systems_security_services.rst:16
msgid "``kerberos``. Kerberos."
msgstr ""
#: ../shared_file_systems_security_services.rst:18
msgid "``active_directory``. Microsoft Active Directory."
msgstr ""
#: ../shared_file_systems_security_services.rst:20
msgid "You can configure a security service with these options:"
msgstr ""
#: ../shared_file_systems_security_services.rst:22
msgid "A DNS IP address."
msgstr ""
#: ../shared_file_systems_security_services.rst:24
msgid "An IP address or host name."
msgstr ""
#: ../shared_file_systems_security_services.rst:26
msgid "A domain."
msgstr ""
#: ../shared_file_systems_security_services.rst:28
msgid "A user or group name."
msgstr ""
#: ../shared_file_systems_security_services.rst:30
msgid "The password for the user, if you specify a user name."
msgstr ""
#: ../shared_file_systems_security_services.rst:32
msgid ""
"The security service can be added to the :ref:`share network "
"<shared_file_systems_share_networks>`."
msgstr ""
#: ../shared_file_systems_security_services.rst:35
msgid ""
"To create a security service, specify the security service type and "
"optionally name, description of a security service, DNS IP address used "
"inside tenant's network, security service IP address or hostname, domain, "
"security service user or group used by tenant, a password of user."
msgstr ""
#: ../shared_file_systems_security_services.rst:40
msgid "Create a ``ldap`` security service:"
msgstr ""
#: ../shared_file_systems_security_services.rst:63
msgid "To create ``kerberos`` security service, run:"
msgstr ""
#: ../shared_file_systems_security_services.rst:86
msgid ""
"To see the list of created security service use **manila security-service-"
"list**:"
msgstr ""
#: ../shared_file_systems_security_services.rst:99
msgid ""
"You can add a security service to the existing :ref:`share network "
"<shared_file_systems_share_networks>` that is not used yet (is not "
"associated with a share)."
msgstr ""
#: ../shared_file_systems_security_services.rst:103
msgid ""
"Add a security service to the share network with **share-network-security-"
"service-add** specifying share network, security service and print the "
"information about the security service. You can see new attribute "
"``share_networks`` with associated share network ID."
msgstr ""
#: ../shared_file_systems_security_services.rst:132
msgid ""
"It is possible to see the list of security services associated with given "
"share network. List security services for ``share_net2`` share network:"
msgstr ""
#: ../shared_file_systems_security_services.rst:144
msgid ""
"You also can dissociate a security service from the share network and see "
"that a security service now has empty list of share networks:"
msgstr ""
#: ../shared_file_systems_security_services.rst:171
msgid ""
"Shared File Systems service allows you to update a security service fields "
"using **manila security-service-update** command with optional arguments "
"such as ``--dns-ip``, ``--server``, ``--domain``, ``--user``, ``--"
"password``, ``--name``, or ``--description``."
msgstr ""
#: ../shared_file_systems_security_services.rst:176
msgid ""
"To remove a security service, that is not associated with any share "
"networks, run:"
msgstr ""
#: ../shared_file_systems_services_manage.rst:5
msgid "Manage shares services"
msgstr ""
#: ../shared_file_systems_services_manage.rst:7
msgid ""
"The Shared File Systems service provides API that allows to manage running "
"share services (`Share services API <http://developer.openstack.org/api-ref-"
"share-v2.html#share-services>`_). Using ``manila service-list`` command, it "
"is possible to get a list of all kinds of running services. To select only "
"share services, you can pick items that have field ``binary`` equals to "
"``manila-share``. Also, you can enable or disable share services using raw "
"API requests. Disabling means that share service excludes from scheduler "
"cycle and new shares will not be placed on disabled back end, but shares "
"from this service stay available."
msgstr ""
#: ../shared_file_systems_share_management.rst:5
msgid "Share management"
msgstr ""
#: ../shared_file_systems_share_management.rst:7
msgid ""
"A share is a remote, mountable file system. You can mount a share to and "
"access a share from several hosts by several users at a time."
msgstr ""
#: ../shared_file_systems_share_management.rst:10
msgid ""
"You can create a share and associate it with a network, list shares, and "
"show information for, update, and delete a specified share. You can also "
"create snapshots of shares. To create a snapshot, you specify the ID of the "
"share that you want to snapshot."
msgstr ""
#: ../shared_file_systems_share_management.rst:15
msgid "The shares are based on of the supported Shared File Systems protocols:"
msgstr ""
#: ../shared_file_systems_share_management.rst:17
msgid "*NFS*. Network File System (NFS)."
msgstr ""
#: ../shared_file_systems_share_management.rst:18
msgid "*CIFS*. Common Internet File System (CIFS)."
msgstr ""
#: ../shared_file_systems_share_management.rst:19
msgid "*GLUSTERFS*. Gluster file system (GlusterFS)."
msgstr ""
#: ../shared_file_systems_share_management.rst:20
msgid "*HDFS*. Hadoop Distributed File System (HDFS)."
msgstr ""
#: ../shared_file_systems_share_management.rst:22
msgid ""
"The Shared File Systems service provides set of drivers that enable you to "
"use various network file storage devices, instead of the base "
"implementation. That is the real purpose of the Shared File Systems service "
"service in production."
msgstr ""
#: ../shared_file_systems_share_networks.rst:5
msgid "Share networks"
msgstr ""
#: ../shared_file_systems_share_networks.rst:7
msgid ""
"Share network is an entity that encapsulates interaction with the OpenStack "
"Networking service. If the share driver that you selected runs in a mode "
"requiring Networking service interaction, specify the share network when "
"creating a share network."
msgstr ""
#: ../shared_file_systems_share_networks.rst:13
msgid "How to create share network"
msgstr ""
#: ../shared_file_systems_share_networks.rst:15
msgid "List networks in a tenant, run:"
msgstr ""
#: ../shared_file_systems_share_networks.rst:29
msgid ""
"A share network stores network information that share servers can use where "
"shares are hosted. You can associate a share with a single share network. "
"When you create or update a share, you can optionally specify the ID of a "
"share network through which instances can access the share."
msgstr ""
#: ../shared_file_systems_share_networks.rst:34
msgid ""
"When you create a share network, you can specify only one type of network:"
msgstr ""
#: ../shared_file_systems_share_networks.rst:36
msgid ""
"OpenStack Networking (neutron). Specify a network ID and subnet ID. In this "
"case ``manila.network.nova_network_plugin.NeutronNetworkPlugin`` will be "
"used."
msgstr ""
#: ../shared_file_systems_share_networks.rst:40
msgid ""
"Legacy networking (nova-network). Specify a network ID. In this case "
"``manila.network.nova_network_plugin.NoveNetworkPlugin`` will be used."
msgstr ""
#: ../shared_file_systems_share_networks.rst:44
msgid ""
"For more information about supported plug-ins for share networks, see :ref:"
"`shared_file_systems_network_plugins`."
msgstr ""
#: ../shared_file_systems_share_networks.rst:47
msgid "A share network has these attributes:"
msgstr ""
#: ../shared_file_systems_share_networks.rst:49
msgid ""
"The IP block in Classless Inter-Domain Routing (CIDR) notation from which to "
"allocate the network."
msgstr ""
#: ../shared_file_systems_share_networks.rst:52
msgid "The IP version of the network."
msgstr ""
#: ../shared_file_systems_share_networks.rst:54
msgid "The network type, which is `vlan`, `vxlan`, `gre`, or `flat`."
msgstr ""
#: ../shared_file_systems_share_networks.rst:56
msgid ""
"If the network uses segmentation, a segmentation identifier. For example, "
"VLAN, VXLAN, and GRE networks use segmentation."
msgstr ""
#: ../shared_file_systems_share_networks.rst:59
msgid "To create a share network with private network and subnetwork, run:"
msgstr ""
#: ../shared_file_systems_share_networks.rst:82
msgid ""
"The ``segmentation_id``, ``cidr``, ``ip_version``, and ``network_type`` "
"share network attributes are automatically set to the values determined by "
"the network provider."
msgstr ""
#: ../shared_file_systems_share_networks.rst:86
msgid "Check network list, run:"
msgstr ""
#: ../shared_file_systems_share_networks.rst:97
msgid ""
"If you configured the Generic driver with ``driver_handles_share_servers = "
"True`` (with the share servers) and had already some operations in the "
"Shared File Systems service, you can see ``manila_service_network`` in the "
"neutron list of networks. This network was created by Generic driver for "
"inner usage."
msgstr ""
#: ../shared_file_systems_share_networks.rst:115
msgid ""
"You also can see detailed information about the share network including "
"``network_type``, ``segmentation_id`` fields:"
msgstr ""
#: ../shared_file_systems_share_networks.rst:139
msgid ""
"You also can add and remove the security services to the share network. For "
"details, see :ref:`shared_file_systems_security_services`."
msgstr ""
#: ../shared_file_systems_share_resize.rst:5
msgid "Resize share"
msgstr ""
#: ../shared_file_systems_share_resize.rst:7
msgid ""
"To change file share size, use :command:`manila extend` and :command:`manila "
"shrink`. For most drivers it is safe operation. If you want to be sure that "
"your data is safe, you can make a share back up by creating a snapshot of it."
msgstr ""
#: ../shared_file_systems_share_resize.rst:12
msgid ""
"You can extend and shrink the share with **manila extend** and **manila "
"shrink** commands correspondingly and specifying the share and new size that "
"doesn't exceed the quota. For details, see :ref:`Quotas and Limits "
"<shared_file_systems_quotas>`. You also cannot shrink size to 0 or to a "
"greater value than the current share size."
msgstr ""
#: ../shared_file_systems_share_resize.rst:18
msgid ""
"While extending the share gets ``extending`` status that means that the "
"increase share size request was issued successfully."
msgstr ""
#: ../shared_file_systems_share_resize.rst:21
msgid "To extend the share and check the result, run:"
msgstr ""
#: ../shared_file_systems_share_resize.rst:54
msgid ""
"While shrinking the share gets ``shrinking`` status that means that the "
"decrease share size request was issued successfully. To shrink the share and "
"check the result, run:"
msgstr ""
#: ../shared_file_systems_share_types.rst:5
msgid "Share types"
msgstr ""
#: ../shared_file_systems_share_types.rst:7
msgid ""
"A share type enables you to filter or choose back ends before you create a "
"share and to set data for the share driver. A share type behaves in the same "
"way as a Block Storage volume type behaves."
msgstr ""
#: ../shared_file_systems_share_types.rst:11
msgid ""
"In the Shared File Systems configuration file :file:`manila.conf`, the "
"administrator can set the share type used by default for the share creation "
"and then create a default share type."
msgstr ""
#: ../shared_file_systems_share_types.rst:15
msgid "To create a share type, use **manila type-create** command as:"
msgstr ""
#: ../shared_file_systems_share_types.rst:23
msgid ""
"where the ``name`` is the share type name, ``--is_public`` defines the level "
"of the visibility for the share type, ``snapshot_support`` and "
"``spec_driver_handles_share_servers`` are the extra specifications used to "
"filter back ends. Administrators can create share types with these extra "
"specifications that are used for the back ends filtering:"
msgstr ""
#: ../shared_file_systems_share_types.rst:30
msgid ""
"``driver_handles_share_servers``. Required. Defines the driver mode for "
"share server life cycle management. Valid values are ``true``/``1`` and "
"``false``/``0``. Set to True when the share driver can manage, or handle, "
"the share server life cycle. Set to False when an administrator rather than "
"a share driver manages the bare metal storage with some net interface "
"instead of the presence of the share servers."
msgstr ""
#: ../shared_file_systems_share_types.rst:39
msgid ""
"``snapshot_support``. Filters back ends by whether they do or do not support "
"share snapshots. Default is ``True``. Set to True to find back ends that "
"support share snapshots. Set to False to find back ends that do not support "
"share snapshots."
msgstr ""
#: ../shared_file_systems_share_types.rst:46
msgid ""
"The extra specifications set in the share types are operated in the :ref:"
"`shared_file_systems_scheduling`."
msgstr ""
#: ../shared_file_systems_share_types.rst:49
msgid ""
"Administrators can also set additional extra specifications for a share type "
"for the following purposes:"
msgstr ""
#: ../shared_file_systems_share_types.rst:52
msgid ""
"*Filter back ends*. Unqualified extra specifications that are written in "
"this format: ``extra_spec=value``. For example, **netapp_raid_type=raid4**."
msgstr ""
#: ../shared_file_systems_share_types.rst:55
msgid ""
"*Set data for the driver*. Qualified extra specifications that are written "
"always with the prefix with a colon, except for the special ``capabilities`` "
"prefix, in this format: ``vendor:extra_spec=value``. For example, **netapp:"
"thin_provisioned=true**."
msgstr ""
#: ../shared_file_systems_share_types.rst:60
msgid ""
"The scheduler uses the special capabilities prefix for filtering. The "
"scheduler can only create a share on a back end that reports capabilities "
"that match the un-scoped extra-spec keys for the share type. For details, "
"see `Capabilities and Extra-Specs <http://docs.openstack.org/developer/"
"manila/devref/ capabilities_and_extra_specs.html>`_."
msgstr ""
#: ../shared_file_systems_share_types.rst:66
msgid ""
"Each driver implementation determines which extra specification keys it "
"uses. For details, see the documentation for the driver."
msgstr ""
#: ../shared_file_systems_share_types.rst:69
msgid ""
"An administrator can use the :file:`policy.json` file to grant permissions "
"for share type creation with extra specifications to other roles."
msgstr ""
#: ../shared_file_systems_share_types.rst:72
msgid ""
"You set a share type to private or public and :ref:`manage the "
"access<share_type_access>` to the private share types. By default a share "
"type is created as publicly accessible. Set ``--is_public`` to ``False`` to "
"make the share type private."
msgstr ""
#: ../shared_file_systems_share_types.rst:78
msgid "Share type operations"
msgstr ""
#: ../shared_file_systems_share_types.rst:80
msgid ""
"To create a new share type you need to specify name of new share type and "
"required extra spec ``driver_handles_share_servers``. Also, the new share "
"type can be public."
msgstr ""
#: ../shared_file_systems_share_types.rst:95
msgid ""
"You can set or unset extra specifications for a share type using **manila "
"type-key <share_type> set <key=value>** command. Since it is up to each "
"driver what extra specification keys it uses, see the documentation for a "
"specified driver."
msgstr ""
#: ../shared_file_systems_share_types.rst:104
msgid ""
"It is also possible for administrator to see a list of current share types "
"and extra specifications:"
msgstr ""
#: ../shared_file_systems_share_types.rst:118
msgid ""
"Use **manila type-key <share_type> unset <key>** to unset an extra "
"specification."
msgstr ""
#: ../shared_file_systems_share_types.rst:121
msgid ""
"The public or private share type can be deleted by means of **manila type-"
"delete <share_type>** command."
msgstr ""
#: ../shared_file_systems_share_types.rst:127
msgid "Share type access"
msgstr ""
#: ../shared_file_systems_share_types.rst:129
msgid ""
"You can manage the access to the private share type for the different "
"projects: add access, remove access, and get information about access for a "
"specified private share type."
msgstr ""
#: ../shared_file_systems_share_types.rst:133
msgid "Create a private type:"
msgstr ""
#: ../shared_file_systems_share_types.rst:145
msgid ""
"If you run **manila type-list** you see only public types. To see the "
"private types also, run **manila type-list** with ``-all`` optional argument."
msgstr ""
#: ../shared_file_systems_share_types.rst:149
msgid ""
"Grant access to created private type for a demo and alt_demo projects by "
"providing their IDs:"
msgstr ""
#: ../shared_file_systems_share_types.rst:157
msgid "Get information about access for a private share type ``my_type1``:"
msgstr ""
#: ../shared_file_systems_share_types.rst:169
msgid ""
"After you granted the access to the share type users that belong to project "
"with granted access can see the type in the list and create shares with "
"allowed private share type."
msgstr ""
#: ../shared_file_systems_share_types.rst:173
msgid ""
"To deny access for a specified project, use **manila type-access-remove "
"<share_type> <project_id>** command."
msgstr ""
#: ../shared_file_systems_snapshots.rst:5
msgid "Share snapshots"
msgstr ""
#: ../shared_file_systems_snapshots.rst:7
msgid ""
"The Shared File Systems service provides a snapshot mechanism to help users "
"restore data by running the `manila snapshot-create` command."
msgstr ""
#: ../shared_file_systems_snapshots.rst:10
msgid ""
"To export a snapshot, you can create shares from it, then mount new share to "
"instance and then directly copy files from attached share into archive."
msgstr ""
#: ../shared_file_systems_snapshots.rst:13
msgid ""
"To import a snapshot, create a new share with appropriate size, attach it to "
"instance and then copy file from archive to attached file system."
msgstr ""
#: ../shared_file_systems_snapshots.rst:17
msgid "You cannot delete a share while it has saved dependent snapshots."
msgstr ""
#: ../shared_file_systems_snapshots.rst:19
msgid "Create a snapshot from the share:"
msgstr ""
#: ../shared_file_systems_snapshots.rst:38
msgid "Update name or description of a snapshot if it is needed:"
msgstr ""
#: ../shared_file_systems_snapshots.rst:44
msgid "Check that status of a snapshot is ``available``:"
msgstr ""
#: ../shared_file_systems_snapshots.rst:63
msgid ""
"To restore your data from snapshot, use :command:`manila create` with key "
"``--snapshot-id``. This creates a new share from exiting snapshot. Create a "
"share from a snapshot and check whether it is available:"
msgstr ""
#: ../shared_file_systems_snapshots.rst:126
msgid ""
"You can soft-delete a snapshot using **manila snapshot-delete "
"<snapshot_name_or_ID>**. If a snapshot is in busy state and during deleting "
"got the ``error_deleting`` status, administrator can force-delete it or "
"explicitly reset the state. Use **snapshot-reset-state [--state <state>] "
"<snapshot>** to update the state of a snapshot explicitly. A valid value of "
"a status are ``available``, ``error``, ``creating``, ``deleting``, "
"``error_deleting``. If no state is provided, available will be used."
msgstr ""
#: ../shared_file_systems_snapshots.rst:135
msgid ""
"Use **manila snapshot-force-delete <snapshot>** to force-delete a specified "
"share snapshot in any state."
msgstr ""
#: ../shared_file_systems_troubleshoot.rst:5
msgid "Troubleshoot Shared File Systems service"
msgstr ""
#: ../shared_file_systems_troubleshoot.rst:8
msgid "Failures in Share File Systems service during a share creation"
msgstr ""
#: ../shared_file_systems_troubleshoot.rst:10
msgid ""
"If new shares go into ``error`` state during creation, follow next steps:"
msgstr ""
#: ../shared_file_systems_troubleshoot.rst:12
msgid ""
"Make sure, that share services are running in debug mode. If the debug mode "
"is not set, you will not get any tips from logs how to fix your issue."
msgstr ""
#: ../shared_file_systems_troubleshoot.rst:15
msgid ""
"Find what share service holds a specified share. Do to that, run command :"
"command:`manila show <share_id_or_name>` and find a share host in the "
"output. Host uniquely identifies what share service holds the broken share."
msgstr ""
#: ../shared_file_systems_troubleshoot.rst:19
msgid ""
"Look thought logs of this share service. Usually, it can be found at ``/etc/"
"var/log/manila-share.log``. This log should contain kind of traceback with "
"extra information to help you to find the origin of issues."
msgstr ""
#: ../shared_file_systems_troubleshoot.rst:24
msgid "No valid host was found"
msgstr ""
#: ../shared_file_systems_troubleshoot.rst:26
msgid ""
"You should manage share types very carefully. If share type contains invalid "
"extra spec scheduler will never find a valid host for shares of this type. "
"To diagnose this issue, make sure that scheduler service is running in debug "
"mode, try to create a new share and look for message ``Failed to schedule "
"create_share: No valid host was found.`` in ``/etc/var/log/manila-scheduler."
"log``."
msgstr ""
#: ../shared_file_systems_troubleshoot.rst:33
msgid ""
"To solve this issue look carefully through the list of extra specs in the "
"share type and list of share services reported capabilities. Make sure that "
"extra specs are pointed in the right way."
msgstr ""
#: ../shared_file_systems_troubleshoot.rst:38
msgid "Created share is unreachable"
msgstr ""
#: ../shared_file_systems_troubleshoot.rst:40
msgid ""
"By default, a new share does not have any active access rules. To provide "
"access to new created share, you need to create appropriate access rule with "
"right value that defines access."
msgstr ""
#: ../shared_file_systems_troubleshoot.rst:45
msgid "Service becomes unavailable after upgrade"
msgstr ""
#: ../shared_file_systems_troubleshoot.rst:47
msgid ""
"After upgrading the Shared File Systems service from version v1 to version "
"v2.x, please be attentive to update the service endpoint in the OpenStack "
"Identity service. Use :command:`keystone service-list` to get service type "
"related to Shared File Systems service and then :command:`keystone service-"
"list --service <share-service-type>` command. You will get the endpoints "
"expected from running the Shared File Systems service. Make sure that these "
"endpoints are updated. If it is not true, you need to delete the outdated "
"endpoints and create new ones."
msgstr ""
#: ../shared_file_systems_troubleshoot.rst:58
msgid "Failures during management of internal resources"
msgstr ""
#: ../shared_file_systems_troubleshoot.rst:60
msgid ""
"Some drivers in the Shared File Systems service can create service entities, "
"like servers and networks. If it is necessary to reach it you can log in to "
"tenant ``service`` and get manual control over it."
msgstr ""
#: ../support-compute.rst:7
msgid "Troubleshoot Compute"
msgstr ""
#: ../support-compute.rst:9
msgid ""
"Common problems for Compute typically involve misconfigured networking or "
"credentials that are not sourced properly in the environment. Also, most "
"flat networking configurations do not enable :command:`ping` or :command:"
"`ssh` from a compute node to the instances that run on that node. Another "
"common problem is trying to run 32-bit images on a 64-bit compute node. This "
"section shows you how to troubleshoot Compute."
msgstr ""
#: ../support-compute.rst:21
msgid "Compute service logging"
msgstr ""
#: ../support-compute.rst:23
msgid ""
"Compute stores a log file for each service in ``/var/log/nova``. For "
"example, ``nova-compute.log`` is the log for the ``nova-compute`` service. "
"You can set the following options to format log strings for the ``nova.log`` "
"module in the ``nova.conf`` file:"
msgstr ""
#: ../support-compute.rst:29
msgid "``logging_context_format_string``"
msgstr ""
#: ../support-compute.rst:31
msgid "``logging_default_format_string``"
msgstr ""
#: ../support-compute.rst:33
msgid ""
"If the log level is set to ``debug``, you can also specify "
"``logging_debug_format_suffix`` to append extra formatting. For information "
"about what variables are available for the formatter see http://docs.python."
"org/library/logging.html#formatter-objects."
msgstr ""
#: ../support-compute.rst:38
msgid ""
"You have two options for logging for OpenStack Compute based on "
"configuration settings. In ``nova.conf``, include the ``logfile`` option to "
"enable logging. Alternatively you can set ``use_syslog = 1`` so that the "
"nova daemon logs to syslog."
msgstr ""
#: ../support-compute.rst:47
msgid "Guru Meditation reports"
msgstr ""
#: ../support-compute.rst:49
msgid ""
"A Guru Meditation report is sent by the Compute service upon receipt of the "
"``SIGUSR1`` signal. This report is a general-purpose error report, including "
"a complete report of the service's current state, and is sent to ``stderr``."
msgstr ""
#: ../support-compute.rst:54
msgid ""
"For example, if you redirect error output to ``nova-api-err.log`` using :"
"command:`nova-api 2>/var/log/nova/nova-api-err.log`, resulting in the "
"process ID 8675, you can then run:"
msgstr ""
#: ../support-compute.rst:62
msgid ""
"This command triggers the Guru Meditation report to be printed to ``/var/log/"
"nova/nova-api-err.log``."
msgstr ""
#: ../support-compute.rst:65
msgid "The report has the following sections:"
msgstr ""
#: ../support-compute.rst:67
msgid ""
"Package: Displays information about the package to which the process "
"belongs, including version information."
msgstr ""
#: ../support-compute.rst:70
msgid ""
"Threads: Displays stack traces and thread IDs for each of the threads within "
"the process."
msgstr ""
#: ../support-compute.rst:73
msgid ""
"Green Threads: Displays stack traces for each of the green threads within "
"the process (green threads do not have thread IDs)."
msgstr ""
#: ../support-compute.rst:76
msgid ""
"Configuration: Lists all configuration options currently accessible through "
"the CONF object for the current process."
msgstr ""
#: ../support-compute.rst:79
msgid ""
"For more information, see `Guru Meditation Reports <http://docs.openstack."
"org/developer/nova/devref/gmr.html>`_."
msgstr ""
#: ../support-compute.rst:85
msgid "Common errors and fixes for Compute"
msgstr ""
#: ../support-compute.rst:87
msgid ""
"The `ask.openstack.org <http://ask.openstack.org>`_ site offers a place to "
"ask and answer questions, and you can also mark questions as frequently "
"asked questions. This section describes some errors people have posted "
"previously. Bugs are constantly being fixed, so online resources are a great "
"way to get the most up-to-date errors and fixes."
msgstr ""
#: ../support-compute.rst:93
msgid "**Credential errors, 401, and 403 forbidden errors**"
msgstr ""
#: ../support-compute.rst:95
msgid ""
"Missing credentials cause a ``403 forbidden`` error. To resolve this issue, "
"use one of these methods:"
msgstr ""
#: ../support-compute.rst:99
msgid ""
"Gets the ``novarc`` file from the project ZIP file, saves existing "
"credentials in case of override, and manually sources the ``novarc`` file."
msgstr ""
#: ../support-compute.rst:101
msgid "Manual method"
msgstr ""
#: ../support-compute.rst:104
msgid "Generates ``novarc`` from the project ZIP file and sources it for you."
msgstr ""
#: ../support-compute.rst:104
msgid "Script method"
msgstr ""
#: ../support-compute.rst:106
msgid ""
"When you run ``nova-api`` the first time, it generates the certificate "
"authority information, including ``openssl.cnf``. If you start the CA "
"services before this, you might not be able to create your ZIP file. Restart "
"the services. When your CA information is available, create your ZIP file."
msgstr ""
#: ../support-compute.rst:112
msgid ""
"Also, check your HTTP proxy settings to see whether they cause problems with "
"``novarc`` creation."
msgstr ""
#: ../support-compute.rst:115
msgid "**Instance errors**"
msgstr ""
#: ../support-compute.rst:117
msgid ""
"Sometimes a particular instance shows ``pending`` or you cannot SSH to it. "
"Sometimes the image itself is the problem. For example, when you use flat "
"manager networking, you do not have a DHCP server and certain images do not "
"support interface injection; you cannot connect to them. The fix for this "
"problem is to use an image that does support this method, such as Ubuntu, "
"which obtains an IP address correctly with FlatManager network settings."
msgstr ""
#: ../support-compute.rst:125
msgid ""
"To troubleshoot other possible problems with an instance, such as an "
"instance that stays in a spawning state, check the directory for the "
"particular instance under ``/var/lib/nova/instances`` on the ``nova-"
"compute`` host and make sure that these files are present:"
msgstr ""
#: ../support-compute.rst:130
msgid "``libvirt.xml``"
msgstr ""
#: ../support-compute.rst:131
msgid "``disk``"
msgstr ""
#: ../support-compute.rst:132
msgid "``disk-raw``"
msgstr ""
#: ../support-compute.rst:133
msgid "``kernel``"
msgstr ""
#: ../support-compute.rst:134
msgid "``ramdisk``"
msgstr ""
#: ../support-compute.rst:135
msgid "``console.log``, after the instance starts."
msgstr ""
#: ../support-compute.rst:137
msgid ""
"If any files are missing, empty, or very small, the ``nova-compute`` service "
"did not successfully download the images from the Image service."
msgstr ""
#: ../support-compute.rst:140
msgid ""
"Also check ``nova-compute.log`` for exceptions. Sometimes they do not appear "
"in the console output."
msgstr ""
#: ../support-compute.rst:143
msgid ""
"Next, check the log file for the instance in the ``/var/log/libvirt/qemu`` "
"directory to see if it exists and has any useful error messages in it."
msgstr ""
#: ../support-compute.rst:146
msgid ""
"Finally, from the ``/var/lib/nova/instances`` directory for the instance, "
"see if this command returns an error:"
msgstr ""
#: ../support-compute.rst:153
msgid "**Empty log output for Linux instances**"
msgstr ""
#: ../support-compute.rst:155
msgid ""
"You can view the log output of running instances from either the :guilabel:"
"`Log` tab of the dashboard or the output of :command:`nova console-log`. In "
"some cases, the log output of a running Linux instance will be empty or only "
"display a single character (for example, the `?` character)."
msgstr ""
#: ../support-compute.rst:161
msgid ""
"This occurs when the Compute service attempts to retrieve the log output of "
"the instance via a serial console while the instance itself is not "
"configured to send output to the console. To rectify this, append the "
"following parameters to kernel arguments specified in the instance's boot "
"loader:"
msgstr ""
#: ../support-compute.rst:171
msgid ""
"Upon rebooting, the instance will be configured to send output to the "
"Compute service."
msgstr ""
#: ../support-compute.rst:178
msgid "Reset the state of an instance"
msgstr ""
#: ../support-compute.rst:180
msgid ""
"If an instance remains in an intermediate state, such as ``deleting``, you "
"can use the :command:`nova reset-state` command to manually reset the state "
"of an instance to an error state. You can then delete the instance. For "
"example:"
msgstr ""
#: ../support-compute.rst:190
msgid ""
"You can also use the :option:`--active` parameter to force the instance back "
"to an active state instead of an error state. For example:"
msgstr ""
#: ../support-compute.rst:201
msgid "Injection problems"
msgstr ""
#: ../support-compute.rst:203
msgid ""
"If instances do not boot or boot slowly, investigate file injection as a "
"cause."
msgstr ""
#: ../support-compute.rst:205
msgid "To disable injection in libvirt, set the following in ``nova.conf``:"
msgstr ""
#: ../support-compute.rst:212
msgid ""
"If you have not enabled the configuration drive and you want to make user-"
"specified files available from the metadata server for to improve "
"performance and avoid boot failure if injection fails, you must disable "
"injection."
msgstr ""
#: ../support-compute.rst:222
msgid "Disable live snapshotting"
msgstr ""
#: ../support-compute.rst:224
msgid ""
"If you use libvirt version ``1.2.2``, you may experience problems with live "
"snapshots creation. Occasionally, libvirt of the specified version fails to "
"perform the live snapshotting under load that presupposes a concurrent "
"creation of multiple snapshots."
msgstr ""
#: ../support-compute.rst:229
msgid ""
"To effectively disable the libvirt live snapshotting, until the problem is "
"resolved, configure the ``disable_libvirt_livesnapshot`` option. You can "
"turn off the live snapshotting mechanism by setting up its value to ``True`` "
"in the ``[workarounds]`` section of the ``nova.conf`` file:"
msgstr ""
#: ../telemetry-alarms.rst:5
msgid "Alarms"
msgstr ""
#: ../telemetry-alarms.rst:7
msgid ""
"Alarms provide user-oriented Monitoring-as-a-Service for resources running "
"on OpenStack. This type of monitoring ensures you can automatically scale in "
"or out a group of instances through the Orchestration service, but you can "
"also use alarms for general-purpose awareness of your cloud resources' "
"health."
msgstr ""
#: ../telemetry-alarms.rst:13
msgid "These alarms follow a tri-state model:"
msgstr ""
#: ../telemetry-alarms.rst:16
msgid "The rule governing the alarm has been evaluated as ``False``."
msgstr ""
#: ../telemetry-alarms.rst:16
msgid "ok"
msgstr ""
#: ../telemetry-alarms.rst:19
msgid "The rule governing the alarm have been evaluated as ``True``."
msgstr ""
#: ../telemetry-alarms.rst:19
msgid "alarm"
msgstr ""
#: ../telemetry-alarms.rst:22
msgid ""
"There are not enough datapoints available in the evaluation periods to "
"meaningfully determine the alarm state."
msgstr ""
#: ../telemetry-alarms.rst:23
msgid "insufficient data"
msgstr ""
#: ../telemetry-alarms.rst:26
msgid "Alarm definitions"
msgstr ""
#: ../telemetry-alarms.rst:28
msgid ""
"The definition of an alarm provides the rules that govern when a state "
"transition should occur, and the actions to be taken thereon. The nature of "
"these rules depend on the alarm type."
msgstr ""
#: ../telemetry-alarms.rst:33
msgid "Threshold rule alarms"
msgstr ""
#: ../telemetry-alarms.rst:35
msgid ""
"For conventional threshold-oriented alarms, state transitions are governed "
"by:"
msgstr ""
#: ../telemetry-alarms.rst:38
msgid ""
"A static threshold value with a comparison operator such as greater than or "
"less than."
msgstr ""
#: ../telemetry-alarms.rst:41
msgid "A statistic selection to aggregate the data."
msgstr ""
#: ../telemetry-alarms.rst:43
msgid ""
"A sliding time window to indicate how far back into the recent past you want "
"to look."
msgstr ""
#: ../telemetry-alarms.rst:47
msgid "Combination rule alarms"
msgstr ""
#: ../telemetry-alarms.rst:49
msgid ""
"The Telemetry service also supports the concept of a meta-alarm, which "
"aggregates over the current state of a set of underlying basic alarms "
"combined via a logical operator (AND or OR)."
msgstr ""
#: ../telemetry-alarms.rst:54
msgid "Alarm dimensioning"
msgstr ""
#: ../telemetry-alarms.rst:56
msgid ""
"A key associated concept is the notion of *dimensioning* which defines the "
"set of matching meters that feed into an alarm evaluation. Recall that "
"meters are per-resource-instance, so in the simplest case an alarm might be "
"defined over a particular meter applied to all resources visible to a "
"particular user. More useful however would be the option to explicitly "
"select which specific resources you are interested in alarming on."
msgstr ""
#: ../telemetry-alarms.rst:64
msgid ""
"At one extreme you might have narrowly dimensioned alarms where this "
"selection would have only a single target (identified by resource ID). At "
"the other extreme, you could have widely dimensioned alarms where this "
"selection identifies many resources over which the statistic is aggregated. "
"For example all instances booted from a particular image or all instances "
"with matching user metadata (the latter is how the Orchestration service "
"identifies autoscaling groups)."
msgstr ""
#: ../telemetry-alarms.rst:74
msgid "Alarm evaluation"
msgstr ""
#: ../telemetry-alarms.rst:76
msgid ""
"Alarms are evaluated by the ``alarm-evaluator`` service on a periodic basis, "
"defaulting to once every minute."
msgstr ""
#: ../telemetry-alarms.rst:80
msgid "Alarm actions"
msgstr ""
#: ../telemetry-alarms.rst:82
msgid ""
"Any state transition of individual alarm (to ``ok``, ``alarm``, or "
"``insufficient data``) may have one or more actions associated with it. "
"These actions effectively send a signal to a consumer that the state "
"transition has occurred, and provide some additional context. This includes "
"the new and previous states, with some reason data describing the "
"disposition with respect to the threshold, the number of datapoints involved "
"and most recent of these. State transitions are detected by the ``alarm-"
"evaluator``, whereas the ``alarm-notifier`` effects the actual notification "
"action."
msgstr ""
#: ../telemetry-alarms.rst:92
msgid "**Webhooks**"
msgstr ""
#: ../telemetry-alarms.rst:94
msgid ""
"These are the *de facto* notification type used by Telemetry alarming and "
"simply involve an HTTP POST request being sent to an endpoint, with a "
"request body containing a description of the state transition encoded as a "
"JSON fragment."
msgstr ""
#: ../telemetry-alarms.rst:99
msgid "**Log actions**"
msgstr ""
#: ../telemetry-alarms.rst:101
msgid ""
"These are a lightweight alternative to webhooks, whereby the state "
"transition is simply logged by the ``alarm-notifier``, and are intended "
"primarily for testing purposes."
msgstr ""
#: ../telemetry-alarms.rst:106
msgid "Workload partitioning"
msgstr ""
#: ../telemetry-alarms.rst:108
msgid ""
"The alarm evaluation process uses the same mechanism for workload "
"partitioning as the central and compute agents. The `Tooz <https://pypi."
"python.org/pypi/tooz>`_ library provides the coordination within the groups "
"of service instances. For further information about this approach, see the "
"section called :ref:`Support for HA deployment of the central and compute "
"agent services <ha-deploy-services>`."
msgstr ""
#: ../telemetry-alarms.rst:116
msgid ""
"To use this workload partitioning solution set the ``evaluation_service`` "
"option to ``default``. For more information, see the alarm section in the "
"`OpenStack Configuration Reference <http://docs.openstack.org/liberty/config-"
"reference/content/ch_configuring-openstack-telemetry.html>`_."
msgstr ""
#: ../telemetry-alarms.rst:122
msgid "Using alarms"
msgstr ""
#: ../telemetry-alarms.rst:125
msgid "Alarm creation"
msgstr ""
#: ../telemetry-alarms.rst:127
msgid ""
"An example of creating a threshold-oriented alarm, based on an upper bound "
"on the CPU utilization for a particular instance:"
msgstr ""
#: ../telemetry-alarms.rst:140
msgid ""
"This creates an alarm that will fire when the average CPU utilization for an "
"individual instance exceeds 70% for three consecutive 10 minute periods. The "
"notification in this case is simply a log message, though it could "
"alternatively be a webhook URL."
msgstr ""
#: ../telemetry-alarms.rst:146
msgid ""
"Alarm names must be unique for the alarms associated with an individual "
"project. The cloud administrator can limit the maximum resulting actions for "
"three different states, and the ability for a normal user to create ``log://"
"`` and ``test://`` notifiers is disabled. This prevents unintentional "
"consumption of disk and memory resources by the Telemetry service."
msgstr ""
#: ../telemetry-alarms.rst:154
msgid ""
"The sliding time window over which the alarm is evaluated is 30 minutes in "
"this example. This window is not clamped to wall-clock time boundaries, "
"rather it's anchored on the current time for each evaluation cycle, and "
"continually creeps forward as each evaluation cycle rolls around (by "
"default, this occurs every minute)."
msgstr ""
#: ../telemetry-alarms.rst:160
msgid ""
"The period length is set to 600s in this case to reflect the out-of-the-box "
"default cadence for collection of the associated meter. This period matching "
"illustrates an important general principal to keep in mind for alarms:"
msgstr ""
#: ../telemetry-alarms.rst:166
msgid ""
"The alarm period should be a whole number multiple (1 or more) of the "
"interval configured in the pipeline corresponding to the target meter."
msgstr ""
#: ../telemetry-alarms.rst:170
msgid ""
"Otherwise the alarm will tend to flit in and out of the ``insufficient "
"data`` state due to the mismatch between the actual frequency of datapoints "
"in the metering store and the statistics queries used to compare against the "
"alarm threshold. If a shorter alarm period is needed, then the corresponding "
"interval should be adjusted in the :file:`pipeline.yaml` file."
msgstr ""
#: ../telemetry-alarms.rst:177
msgid ""
"Other notable alarm attributes that may be set on creation, or via a "
"subsequent update, include:"
msgstr ""
#: ../telemetry-alarms.rst:181
msgid "The initial alarm state (defaults to ``insufficient data``)."
msgstr ""
#: ../telemetry-alarms.rst:181
msgid "state"
msgstr ""
#: ../telemetry-alarms.rst:184
msgid ""
"A free-text description of the alarm (defaults to a synopsis of the alarm "
"rule)."
msgstr ""
#: ../telemetry-alarms.rst:185
msgid "description"
msgstr ""
#: ../telemetry-alarms.rst:188
msgid ""
"True if evaluation and actioning is to be enabled for this alarm (defaults "
"to True)."
msgstr ""
#: ../telemetry-alarms.rst:192
msgid ""
"True if actions should be repeatedly notified while the alarm remains in the "
"target state (defaults to False)."
msgstr ""
#: ../telemetry-alarms.rst:193
msgid "repeat-actions"
msgstr ""
#: ../telemetry-alarms.rst:196
msgid "An action to invoke when the alarm state transitions to ``ok``."
msgstr ""
#: ../telemetry-alarms.rst:196
msgid "ok-action"
msgstr ""
#: ../telemetry-alarms.rst:199
msgid ""
"An action to invoke when the alarm state transitions to ``insufficient "
"data``."
msgstr ""
#: ../telemetry-alarms.rst:200
msgid "insufficient-data-action"
msgstr ""
#: ../telemetry-alarms.rst:203
msgid ""
"Used to restrict evaluation of the alarm to certain times of the day or days "
"of the week (expressed as ``cron`` expression with an optional timezone)."
msgstr ""
#: ../telemetry-alarms.rst:205
msgid "time-constraint"
msgstr ""
#: ../telemetry-alarms.rst:207
msgid ""
"An example of creating a combination alarm, based on the combined state of "
"two underlying alarms:"
msgstr ""
#: ../telemetry-alarms.rst:218
msgid ""
"This creates an alarm that will fire when either one of two underlying "
"alarms transition into the alarm state. The notification in this case is a "
"webhook call. Any number of underlying alarms can be combined in this way, "
"using either ``and`` or ``or``."
msgstr ""
#: ../telemetry-alarms.rst:224
msgid "Alarm retrieval"
msgstr ""
#: ../telemetry-alarms.rst:226
msgid ""
"You can display all your alarms via (some attributes are omitted for "
"brevity):"
msgstr ""
#: ../telemetry-alarms.rst:238
msgid ""
"In this case, the state is reported as ``insufficient data`` which could "
"indicate that:"
msgstr ""
#: ../telemetry-alarms.rst:241
msgid ""
"meters have not yet been gathered about this instance over the evaluation "
"window into the recent past (for example a brand-new instance)"
msgstr ""
#: ../telemetry-alarms.rst:245
msgid ""
"*or*, that the identified instance is not visible to the user/tenant owning "
"the alarm"
msgstr ""
#: ../telemetry-alarms.rst:248
msgid ""
"*or*, simply that an alarm evaluation cycle hasn't kicked off since the "
"alarm was created (by default, alarms are evaluated once per minute)."
msgstr ""
#: ../telemetry-alarms.rst:253
msgid ""
"The visibility of alarms depends on the role and project associated with the "
"user issuing the query:"
msgstr ""
#: ../telemetry-alarms.rst:256
msgid "admin users see *all* alarms, regardless of the owner"
msgstr ""
#: ../telemetry-alarms.rst:258
msgid ""
"on-admin users see only the alarms associated with their project (as per the "
"normal tenant segregation in OpenStack)"
msgstr ""
#: ../telemetry-alarms.rst:262
msgid "Alarm update"
msgstr ""
#: ../telemetry-alarms.rst:264
msgid ""
"Once the state of the alarm has settled down, we might decide that we set "
"that bar too low with 70%, in which case the threshold (or most any other "
"alarm attribute) can be updated thusly:"
msgstr ""
#: ../telemetry-alarms.rst:272
msgid ""
"The change will take effect from the next evaluation cycle, which by default "
"occurs every minute."
msgstr ""
#: ../telemetry-alarms.rst:275
msgid ""
"Most alarm attributes can be changed in this way, but there is also a "
"convenient short-cut for getting and setting the alarm state:"
msgstr ""
#: ../telemetry-alarms.rst:283
msgid ""
"Over time the state of the alarm may change often, especially if the "
"threshold is chosen to be close to the trending value of the statistic. You "
"can follow the history of an alarm over its lifecycle via the audit API:"
msgstr ""
#: ../telemetry-alarms.rst:303
msgid "Alarm deletion"
msgstr ""
#: ../telemetry-alarms.rst:305
msgid ""
"An alarm that is no longer required can be disabled so that it is no longer "
"actively evaluated:"
msgstr ""
#: ../telemetry-alarms.rst:312
msgid "or even deleted permanently (an irreversible step):"
msgstr ""
#: ../telemetry-alarms.rst:319
msgid "By default, alarm history is retained for deleted alarms."
msgstr ""
#: ../telemetry-best-practices.rst:2
msgid "Telemetry best practices"
msgstr ""
#: ../telemetry-best-practices.rst:4
msgid ""
"The following are some suggested best practices to follow when deploying and "
"configuring the Telemetry service. The best practices are divided into data "
"collection and storage."
msgstr ""
# #-#-#-#-# telemetry-best-practices.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# telemetry-data-collection.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../telemetry-best-practices.rst:9 ../telemetry-data-collection.rst:5
msgid "Data collection"
msgstr ""
#: ../telemetry-best-practices.rst:11
msgid ""
"The Telemetry service collects a continuously growing set of data. Not all "
"the data will be relevant for a cloud administrator to monitor."
msgstr ""
#: ../telemetry-best-practices.rst:14
msgid ""
"Based on your needs, you can edit the :file:`pipeline.yaml` configuration "
"file to include a selected number of meters while disregarding the rest."
msgstr ""
#: ../telemetry-best-practices.rst:18
msgid ""
"By default, Telemetry service polls the service APIs every 10 minutes. You "
"can change the polling interval on a per meter basis by editing the :file:"
"`pipeline.yaml` configuration file."
msgstr ""
#: ../telemetry-best-practices.rst:24
msgid ""
"If the polling interval is too short, it will likely cause increase of "
"stored data and the stress on the service APIs."
msgstr ""
#: ../telemetry-best-practices.rst:27
msgid ""
"Expand the configuration to have greater control over different meter "
"intervals."
msgstr ""
#: ../telemetry-best-practices.rst:32
msgid "For more information, see the :ref:`telemetry-pipeline-configuration`."
msgstr ""
#: ../telemetry-best-practices.rst:35
msgid ""
"If you are using the Kilo version of Telemetry, you can delay or adjust "
"polling requests by enabling the jitter support. This adds a random delay on "
"how the polling agents send requests to the service APIs. To enable jitter, "
"set ``shuffle_time_before_polling_task`` in the :file:`ceilometer.conf` "
"configuration file to an integer greater than 0."
msgstr ""
#: ../telemetry-best-practices.rst:42
msgid ""
"If you are using Juno or later releases, based on the number of resources "
"that will be polled, you can add additional central and compute agents as "
"necessary. The agents are designed to scale horizontally."
msgstr ""
#: ../telemetry-best-practices.rst:49
msgid "For more information see, :ref:`ha-deploy-services`."
msgstr ""
#: ../telemetry-best-practices.rst:51
msgid ""
"If you are using Juno or later releases, use the ``notifier://`` publisher "
"rather than ``rpc://`` as there is a certain level of overhead that comes "
"with RPC."
msgstr ""
#: ../telemetry-best-practices.rst:57
msgid ""
"For more information on RPC overhead, see `RPC overhead info <https://www."
"rabbitmq.com/tutorials/tutorial-six-python.html>`__."
msgstr ""
#: ../telemetry-best-practices.rst:62
msgid "Data storage"
msgstr ""
#: ../telemetry-best-practices.rst:64
msgid ""
"We recommend that you avoid open-ended queries. In order to get better "
"performance you can use reasonable time ranges and/or other query "
"constraints for retrieving measurements."
msgstr ""
#: ../telemetry-best-practices.rst:68
msgid ""
"For example, this open-ended query might return an unpredictable amount of "
"data:"
msgstr ""
#: ../telemetry-best-practices.rst:75
msgid ""
"Whereas, this well-formed query returns a more reasonable amount of data, "
"hence better performance:"
msgstr ""
#: ../telemetry-best-practices.rst:84
msgid ""
"As of the Liberty release, the number of items returned will be restricted "
"to the value defined by ``default_api_return_limit`` in the :file:"
"`ceilometer.conf` configuration file. Alternatively, the value can be set "
"per query by passing ``limit`` option in request."
msgstr ""
#: ../telemetry-best-practices.rst:89
msgid ""
"You can install the API behind ``mod_wsgi``, as it provides more settings to "
"tweak, like ``threads`` and ``processes`` in case of ``WSGIDaemon``."
msgstr ""
#: ../telemetry-best-practices.rst:95
msgid ""
"For more information on how to configure ``mod_wsgi``, see the `Telemetry "
"Install Documentation <http://docs.openstack.org/developer/ceilometer/"
"install/mod_wsgi.html>`__."
msgstr ""
#: ../telemetry-best-practices.rst:99
msgid ""
"The collection service provided by the Telemetry project is not intended to "
"be an archival service. Set a Time to Live (TTL) value to expire data and "
"minimize the database size. If you would like to keep your data for longer "
"time period, you may consider storing it in a data warehouse outside of "
"Telemetry."
msgstr ""
#: ../telemetry-best-practices.rst:107
msgid ""
"For more information on how to set the TTL, see :ref:`telemetry-storing-"
"samples`."
msgstr ""
#: ../telemetry-best-practices.rst:110
msgid ""
"We recommend that you do not use SQLAlchemy back end prior to the Juno "
"release, as it previously contained extraneous relationships to handle "
"deprecated data models. This resulted in extremely poor query performance."
msgstr ""
#: ../telemetry-best-practices.rst:115
msgid ""
"We recommend that you do not run MongoDB on the same node as the controller. "
"Keep it on a separate node optimized for fast storage for better "
"performance. Also it is advisable for the MongoDB node to have a lot of "
"memory."
msgstr ""
#: ../telemetry-best-practices.rst:122
msgid ""
"For more information on how much memory you need, see `MongoDB FAQ <http://"
"docs.mongodb.org/manual/faq/diagnostics/#how-do-i-calculate-how-much-ram-i-"
"need-for-my-application>`__."
msgstr ""
#: ../telemetry-best-practices.rst:125
msgid ""
"Use replica sets in MongoDB. Replica sets provide high availability through "
"automatic failover. If your primary node fails, MongoDB will elect a "
"secondary node to replace the primary node, and your cluster will remain "
"functional."
msgstr ""
#: ../telemetry-best-practices.rst:130
msgid ""
"For more information on replica sets, see the `MongoDB replica sets docs "
"<http://docs.mongodb.org/manual/tutorial/deploy-replica-set/>`__."
msgstr ""
#: ../telemetry-best-practices.rst:133
msgid ""
"Use sharding in MongoDB. Sharding helps in storing data records across "
"multiple machines and is the MongoDBs approach to meet the demands of data "
"growth."
msgstr ""
#: ../telemetry-best-practices.rst:137
msgid ""
"For more information on sharding, see the `MongoDB sharding docs <http://"
"docs.mongodb.org/manual/sharding/>`__."
msgstr ""
#: ../telemetry-data-collection.rst:7
msgid ""
"The main responsibility of Telemetry in OpenStack is to collect information "
"about the system that can be used by billing systems or interpreted by "
"analytic tooling. The original focus, regarding to the collected data, was "
"on the counters that can be used for billing, but the range is getting wider "
"continuously."
msgstr ""
#: ../telemetry-data-collection.rst:13
msgid ""
"Collected data can be stored in the form of samples or events in the "
"supported databases, listed in :ref:`telemetry-supported-databases`."
msgstr ""
#: ../telemetry-data-collection.rst:16
msgid ""
"Samples can have various sources regarding to the needs and configuration of "
"Telemetry, which requires multiple methods to collect data."
msgstr ""
#: ../telemetry-data-collection.rst:20
msgid "The available data collection mechanisms are:"
msgstr ""
#: ../telemetry-data-collection.rst:23
msgid ""
"Processing notifications from other OpenStack services, by consuming "
"messages from the configured message queue system."
msgstr ""
#: ../telemetry-data-collection.rst:27
msgid ""
"Retrieve information directly from the hypervisor or from the host machine "
"using SNMP, or by using the APIs of other OpenStack services."
msgstr ""
#: ../telemetry-data-collection.rst:29 ../telemetry-data-collection.rst:212
msgid "Polling"
msgstr ""
#: ../telemetry-data-collection.rst:32
msgid "Pushing samples via the RESTful API of Telemetry."
msgstr ""
#: ../telemetry-data-collection.rst:32
msgid "RESTful API"
msgstr ""
#: ../telemetry-data-collection.rst:36
msgid ""
"All the services send notifications about the executed operations or system "
"state in OpenStack. Several notifications carry information that can be "
"metered, like the CPU time of a VM instance created by OpenStack Compute "
"service."
msgstr ""
#: ../telemetry-data-collection.rst:41
msgid ""
"The Telemetry service has a separate agent that is responsible for consuming "
"notifications, namely the notification agent. This component is responsible "
"for consuming from the message bus and transforming notifications into "
"events and measurement samples. Beginning in the Liberty release, the "
"notification agent is responsible for all data processing such as "
"transformations and publishing. After processing, the data is sent via AMQP "
"to the collector service or any external service, which is responsible for "
"persisting the data into the configured database back end."
msgstr ""
#: ../telemetry-data-collection.rst:51
msgid ""
"The different OpenStack services emit several notifications about the "
"various types of events that happen in the system during normal operation. "
"Not all these notifications are consumed by the Telemetry service, as the "
"intention is only to capture the billable events and notifications that can "
"be used for monitoring or profiling purposes. The notification agent filters "
"by the event type, that is contained by each notification message. The "
"following table contains the event types by each OpenStack service that are "
"transformed to samples by Telemetry."
msgstr ""
#: ../telemetry-data-collection.rst:61
msgid "Event types"
msgstr ""
# #-#-#-#-# telemetry-data-collection.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# telemetry-measurements.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../telemetry-data-collection.rst:61 ../telemetry-data-collection.rst:1043
#: ../telemetry-measurements.rst:97 ../telemetry-measurements.rst:430
#: ../telemetry-measurements.rst:484 ../telemetry-measurements.rst:528
#: ../telemetry-measurements.rst:589 ../telemetry-measurements.rst:664
#: ../telemetry-measurements.rst:697 ../telemetry-measurements.rst:762
#: ../telemetry-measurements.rst:812 ../telemetry-measurements.rst:849
#: ../telemetry-measurements.rst:920 ../telemetry-measurements.rst:983
#: ../telemetry-measurements.rst:1069 ../telemetry-measurements.rst:1147
#: ../telemetry-measurements.rst:1212 ../telemetry-measurements.rst:1263
#: ../telemetry-measurements.rst:1289 ../telemetry-measurements.rst:1312
#: ../telemetry-measurements.rst:1332
msgid "Note"
msgstr ""
#: ../telemetry-data-collection.rst:61
msgid "OpenStack service"
msgstr ""
#: ../telemetry-data-collection.rst:63
msgid ""
"For a more detailed list of Compute notifications please check the `System "
"Usage Data Data wiki page <https://wiki .openstack.org/wiki/ "
"SystemUsageData>`__."
msgstr ""
# #-#-#-#-# telemetry-data-collection.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# telemetry-measurements.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../telemetry-data-collection.rst:63 ../telemetry-measurements.rst:93
msgid "OpenStack Compute"
msgstr ""
#: ../telemetry-data-collection.rst:63
msgid "scheduler.run\\_insta\\ nce.scheduled"
msgstr ""
#: ../telemetry-data-collection.rst:66
msgid "scheduler.select\\_\\ destinations"
msgstr ""
#: ../telemetry-data-collection.rst:69
msgid "compute.instance.\\*"
msgstr ""
# #-#-#-#-# telemetry-data-collection.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# telemetry-measurements.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../telemetry-data-collection.rst:71 ../telemetry-measurements.rst:468
msgid "Bare metal service"
msgstr ""
#: ../telemetry-data-collection.rst:71
msgid "hardware.ipmi.\\*"
msgstr ""
# #-#-#-#-# telemetry-data-collection.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# telemetry-measurements.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../telemetry-data-collection.rst:73 ../telemetry-measurements.rst:660
msgid "OpenStack Image service"
msgstr ""
#: ../telemetry-data-collection.rst:73
msgid ""
"The required configuration for Image service can be found in `Configure the "
"Image service for Telemetry section <http://docs.openstack.org /liberty/"
"install-guide-ubuntu /ceilometer-glance.html>`__ section in the OpenStack "
"Installation Guide"
msgstr ""
# #-#-#-#-# telemetry-data-collection.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# telemetry-measurements.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../telemetry-data-collection.rst:73 ../telemetry-measurements.rst:676
msgid "image.update"
msgstr ""
# #-#-#-#-# telemetry-data-collection.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# telemetry-measurements.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../telemetry-data-collection.rst:75 ../telemetry-measurements.rst:679
msgid "image.upload"
msgstr ""
# #-#-#-#-# telemetry-data-collection.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# telemetry-measurements.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../telemetry-data-collection.rst:77 ../telemetry-measurements.rst:682
msgid "image.delete"
msgstr ""
#: ../telemetry-data-collection.rst:79
msgid "image.send"
msgstr ""
# #-#-#-#-# telemetry-data-collection.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# telemetry-measurements.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../telemetry-data-collection.rst:83 ../telemetry-data-collection.rst:273
#: ../telemetry-measurements.rst:916
msgid "OpenStack Networking"
msgstr ""
#: ../telemetry-data-collection.rst:83
msgid "floatingip.create.end"
msgstr ""
#: ../telemetry-data-collection.rst:85
msgid "floatingip.update.\\*"
msgstr ""
#: ../telemetry-data-collection.rst:87
msgid "floatingip.exists"
msgstr ""
#: ../telemetry-data-collection.rst:89
msgid "network.create.end"
msgstr ""
#: ../telemetry-data-collection.rst:91
msgid "network.update.\\*"
msgstr ""
#: ../telemetry-data-collection.rst:93
msgid "network.exists"
msgstr ""
#: ../telemetry-data-collection.rst:95
msgid "port.create.end"
msgstr ""
#: ../telemetry-data-collection.rst:97
msgid "port.update.\\*"
msgstr ""
#: ../telemetry-data-collection.rst:99
msgid "port.exists"
msgstr ""
#: ../telemetry-data-collection.rst:101
msgid "router.create.end"
msgstr ""
#: ../telemetry-data-collection.rst:103
msgid "router.update.\\*"
msgstr ""
#: ../telemetry-data-collection.rst:105
msgid "router.exists"
msgstr ""
#: ../telemetry-data-collection.rst:107
msgid "subnet.create.end"
msgstr ""
#: ../telemetry-data-collection.rst:109
msgid "subnet.update.\\*"
msgstr ""
#: ../telemetry-data-collection.rst:111
msgid "subnet.exists"
msgstr ""
#: ../telemetry-data-collection.rst:113
msgid "l3.meter"
msgstr ""
# #-#-#-#-# telemetry-data-collection.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# telemetry-measurements.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../telemetry-data-collection.rst:115 ../telemetry-measurements.rst:1259
msgid "Orchestration service"
msgstr ""
#: ../telemetry-data-collection.rst:115
msgid "orchestration.stack\\ .create.end"
msgstr ""
#: ../telemetry-data-collection.rst:118
msgid "orchestration.stack\\ .update.end"
msgstr ""
#: ../telemetry-data-collection.rst:121
msgid "orchestration.stack\\ .delete.end"
msgstr ""
#: ../telemetry-data-collection.rst:124
msgid "orchestration.stack\\ .resume.end"
msgstr ""
#: ../telemetry-data-collection.rst:127
msgid "orchestration.stack\\ .suspend.end"
msgstr ""
# #-#-#-#-# telemetry-data-collection.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# telemetry-measurements.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../telemetry-data-collection.rst:130 ../telemetry-data-collection.rst:277
#: ../telemetry-measurements.rst:693
msgid "OpenStack Block Storage"
msgstr ""
#: ../telemetry-data-collection.rst:130
msgid ""
"The required configuration for Block Storage service can be found in the "
"`Add the Block Storage service agent for Telemetry section <http: //docs."
"openstack.org/liberty/ install-guide-ubuntu/ /ceilometer-cinder.html>`__ "
"section in the OpenStack Installation Guide."
msgstr ""
#: ../telemetry-data-collection.rst:130
msgid "volume.exists"
msgstr ""
#: ../telemetry-data-collection.rst:132
msgid "volume.create.\\*"
msgstr ""
#: ../telemetry-data-collection.rst:134
msgid "volume.delete.\\*"
msgstr ""
#: ../telemetry-data-collection.rst:136
msgid "volume.update.\\*"
msgstr ""
#: ../telemetry-data-collection.rst:138
msgid "volume.resize.\\*"
msgstr ""
#: ../telemetry-data-collection.rst:140
msgid "volume.attach.\\*"
msgstr ""
#: ../telemetry-data-collection.rst:142
msgid "volume.detach.\\*"
msgstr ""
#: ../telemetry-data-collection.rst:144
msgid "snapshot.exists"
msgstr ""
#: ../telemetry-data-collection.rst:146
msgid "snapshot.create.\\*"
msgstr ""
#: ../telemetry-data-collection.rst:148
msgid "snapshot.delete.\\*"
msgstr ""
#: ../telemetry-data-collection.rst:150
msgid "snapshot.update.\\*"
msgstr ""
#: ../telemetry-data-collection.rst:152
msgid "volume.backup.create.\\ \\*"
msgstr ""
#: ../telemetry-data-collection.rst:155
msgid "volume.backup.delete.\\ \\*"
msgstr ""
#: ../telemetry-data-collection.rst:158
msgid "volume.backup.restore.\\ \\*"
msgstr ""
#: ../telemetry-data-collection.rst:164
msgid ""
"Some services require additional configuration to emit the notifications "
"using the correct control exchange on the message queue and so forth. These "
"configuration needs are referred in the above table for each OpenStack "
"service that needs it."
msgstr ""
#: ../telemetry-data-collection.rst:169
msgid ""
"Specific notifications from the Compute service are important for "
"administrators and users. Configuring nova_notifications in the :file:`nova."
"conf` file allows administrators to respond to events rapidly. For more "
"information on configuring notifications for the compute service, see "
"`Chapter 11 on Telemetry services <http://docs.openstack.org/ liberty/"
"install-guide-ubuntu/ceilometer-nova.html>`__ in the OpenStack Installation "
"Guide."
msgstr ""
#: ../telemetry-data-collection.rst:180
msgid ""
"When the ``store_events`` option is set to True in :file:`ceilometer.conf`, "
"Prior to the Kilo release, the notification agent needed database access in "
"order to work properly."
msgstr ""
#: ../telemetry-data-collection.rst:185
msgid "Middleware for the OpenStack Object Storage service"
msgstr ""
#: ../telemetry-data-collection.rst:186
msgid ""
"A subset of Object Store statistics requires additional middleware to be "
"installed behind the proxy of Object Store. This additional component emits "
"notifications containing data-flow-oriented meters, namely the ``storage."
"objects.(incoming|outgoing).bytes values``. The list of these meters are "
"listed in :ref:`telemetry-object-storage-meter`, marked with "
"``notification`` as origin."
msgstr ""
#: ../telemetry-data-collection.rst:193
msgid ""
"The instructions on how to install this middleware can be found in "
"`Configure the Object Storage service for Telemetry <http://docs.openstack."
"org/liberty/install-guide-ubuntu/ceilometer-swift.html>`__ section in the "
"OpenStack Installation Guide."
msgstr ""
#: ../telemetry-data-collection.rst:199
msgid "Telemetry middleware"
msgstr ""
#: ../telemetry-data-collection.rst:200
msgid ""
"Telemetry provides the capability of counting the HTTP requests and "
"responses for each API endpoint in OpenStack. This is achieved by storing a "
"sample for each event marked as ``audit.http.request``, ``audit.http."
"response``, ``http.request`` or ``http.response``."
msgstr ""
#: ../telemetry-data-collection.rst:205
msgid ""
"It is recommended that these notifications be consumed as events rather than "
"samples to better index the appropriate values and avoid massive load on the "
"Metering database. If preferred, Telemetry can consume these events as "
"samples if the services are configured to emit ``http.*`` notifications."
msgstr ""
#: ../telemetry-data-collection.rst:213
msgid ""
"The Telemetry service is intended to store a complex picture of the "
"infrastructure. This goal requires additional information than what is "
"provided by the events and notifications published by each service. Some "
"information is not emitted directly, like resource usage of the VM instances."
msgstr ""
#: ../telemetry-data-collection.rst:219
msgid ""
"Therefore Telemetry uses another method to gather this data by polling the "
"infrastructure including the APIs of the different OpenStack services and "
"other assets, like hypervisors. The latter case requires closer interaction "
"with the compute hosts. To solve this issue, Telemetry uses an agent based "
"architecture to fulfill the requirements against the data collection."
msgstr ""
#: ../telemetry-data-collection.rst:226
msgid ""
"There are three types of agents supporting the polling mechanism, the "
"compute agent, the central agent, and the IPMI agent. Under the hood, all "
"the types of polling agents are the same ``ceilometer-polling`` agent, "
"except that they load different polling plug-ins (pollsters) from different "
"namespaces to gather data. The following subsections give further "
"information regarding the architectural and configuration details of these "
"components."
msgstr ""
#: ../telemetry-data-collection.rst:234
msgid "Running ceilometer-agent-compute is exactly the same as::"
msgstr ""
#: ../telemetry-data-collection.rst:239
msgid "Running ceilometer-agent-central is exactly the same as::"
msgstr ""
#: ../telemetry-data-collection.rst:244
msgid "Running ceilometer-agent-ipmi is exactly the same as::"
msgstr ""
#: ../telemetry-data-collection.rst:249
msgid ""
"In addition to loading all the polling plug-ins registered in the specified "
"namespaces, the ceilometer-polling agent can also specify the polling plug-"
"ins to be loaded by using the ``pollster-list`` option::"
msgstr ""
#: ../telemetry-data-collection.rst:258
msgid "HA deployment is NOT supported if the ``pollster-list`` option is used."
msgstr ""
#: ../telemetry-data-collection.rst:263
msgid "The ceilometer-polling service is available since Kilo release."
msgstr ""
#: ../telemetry-data-collection.rst:266
msgid "Central agent"
msgstr ""
#: ../telemetry-data-collection.rst:267
msgid ""
"This agent is responsible for polling public REST APIs to retrieve "
"additional information on OpenStack resources not already surfaced via "
"notifications, and also for polling hardware resources over SNMP."
msgstr ""
#: ../telemetry-data-collection.rst:271
msgid "The following services can be polled with this agent:"
msgstr ""
# #-#-#-#-# telemetry-data-collection.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# telemetry-measurements.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../telemetry-data-collection.rst:275 ../telemetry-measurements.rst:758
msgid "OpenStack Object Storage"
msgstr ""
#: ../telemetry-data-collection.rst:279
msgid "Hardware resources via SNMP"
msgstr ""
#: ../telemetry-data-collection.rst:281
msgid ""
"Energy consumption meters via `Kwapi <https://launchpad.net/kwapi>`__ "
"framework"
msgstr ""
#: ../telemetry-data-collection.rst:284
msgid ""
"To install and configure this service use the `Add the Telemetry service "
"<http://docs.openstack.org/liberty/install-guide-ubuntu/ceilometer.html>`__ "
"section in the OpenStack Installation Guide."
msgstr ""
#: ../telemetry-data-collection.rst:288
msgid ""
"The central agent does not need direct database connection. The samples "
"collected by this agent are sent via AMQP to the notification agent to be "
"processed."
msgstr ""
#: ../telemetry-data-collection.rst:294
msgid ""
"Prior to the Liberty release, data from the polling agents was processed "
"locally and published accordingly rather than by the notification agent."
msgstr ""
#: ../telemetry-data-collection.rst:298
msgid "Compute agent"
msgstr ""
#: ../telemetry-data-collection.rst:299
msgid ""
"This agent is responsible for collecting resource usage data of VM instances "
"on individual compute nodes within an OpenStack deployment. This mechanism "
"requires a closer interaction with the hypervisor, therefore a separate "
"agent type fulfills the collection of the related meters, which is placed on "
"the host machines to locally retrieve this information."
msgstr ""
#: ../telemetry-data-collection.rst:306
msgid ""
"A compute agent instance has to be installed on each and every compute node, "
"installation instructions can be found in the `Install the Compute agent for "
"Telemetry <http://docs.openstack.org/liberty/install-guide-ubuntu/ceilometer-"
"nova.html>`__ section in the OpenStack Installation Guide."
msgstr ""
#: ../telemetry-data-collection.rst:312
msgid ""
"Just like the central agent, this component also does not need a direct "
"database connection. The samples are sent via AMQP to the notification agent."
msgstr ""
#: ../telemetry-data-collection.rst:315
msgid ""
"The list of supported hypervisors can be found in :ref:`telemetry-supported-"
"hypervisors`. The compute agent uses the API of the hypervisor installed on "
"the compute hosts. Therefore the supported meters may be different in case "
"of each virtualization back end, as each inspection tool provides a "
"different set of meters."
msgstr ""
#: ../telemetry-data-collection.rst:321
msgid ""
"The list of collected meters can be found in :ref:`telemetry-compute-"
"meters`. The support column provides the information that which meter is "
"available for each hypervisor supported by the Telemetry service."
msgstr ""
#: ../telemetry-data-collection.rst:327
msgid "Telemetry supports Libvirt, which hides the hypervisor under it."
msgstr ""
#: ../telemetry-data-collection.rst:332
msgid "IPMI agent"
msgstr ""
#: ../telemetry-data-collection.rst:333
msgid ""
"This agent is responsible for collecting IPMI sensor data and Intel Node "
"Manager data on individual compute nodes within an OpenStack deployment. "
"This agent requires an IPMI capable node with the ipmitool utility "
"installed, which is commonly used for IPMI control on various Linux "
"distributions."
msgstr ""
#: ../telemetry-data-collection.rst:338
msgid ""
"An IPMI agent instance could be installed on each and every compute node "
"with IPMI support, except when the node is managed by the Bare metal service "
"and the ``conductor.send_sensor_data`` option is set to ``true`` in the Bare "
"metal service. It is no harm to install this agent on a compute node without "
"IPMI or Intel Node Manager support, as the agent checks for the hardware and "
"if none is available, returns empty data. It is suggested that you install "
"the IPMI agent only on an IPMI capable node for performance reasons."
msgstr ""
#: ../telemetry-data-collection.rst:347
msgid ""
"Just like the central agent, this component also does not need direct "
"database access. The samples are sent via AMQP to the notification agent."
msgstr ""
#: ../telemetry-data-collection.rst:350
msgid ""
"The list of collected meters can be found in :ref:`telemetry-bare-metal-"
"service`."
msgstr ""
#: ../telemetry-data-collection.rst:355
msgid ""
"Do not deploy both the IPMI agent and the Bare metal service on one compute "
"node. If ``conductor.send_sensor_data`` is set, this misconfiguration causes "
"duplicated IPMI sensor samples."
msgstr ""
#: ../telemetry-data-collection.rst:363
msgid "Support for HA deployment"
msgstr ""
#: ../telemetry-data-collection.rst:364
msgid ""
"Both the polling agents and notification agents can run in an HA deployment, "
"which means that multiple instances of these services can run in parallel "
"with workload partitioning among these running instances."
msgstr ""
#: ../telemetry-data-collection.rst:368
msgid ""
"The `Tooz <https://pypi.python.org/pypi/tooz>`__ library provides the "
"coordination within the groups of service instances. It provides an API "
"above several back ends that can be used for building distributed "
"applications."
msgstr ""
#: ../telemetry-data-collection.rst:373
msgid ""
"Tooz supports `various drivers <http://docs.openstack.org/developer/tooz/"
"drivers.html>`__ including the following back end solutions:"
msgstr ""
#: ../telemetry-data-collection.rst:377
msgid ""
"`Zookeeper <http://zookeeper.apache.org/>`__. Recommended solution by the "
"Tooz project."
msgstr ""
#: ../telemetry-data-collection.rst:380
msgid "`Redis <http://redis.io/>`__. Recommended solution by the Tooz project."
msgstr ""
#: ../telemetry-data-collection.rst:383
msgid "`Memcached <http://memcached.org/>`__. Recommended for testing."
msgstr ""
#: ../telemetry-data-collection.rst:385
msgid ""
"You must configure a supported Tooz driver for the HA deployment of the "
"Telemetry services."
msgstr ""
#: ../telemetry-data-collection.rst:388
msgid ""
"For information about the required configuration options that have to be set "
"in the :file:`ceilometer.conf` configuration file for both the central and "
"compute agents, see the `Coordination section <http://docs.openstack.org/"
"liberty/config-reference/content/ch_configuring-openstack-telemetry.html>`__ "
"in the OpenStack Configuration Reference."
msgstr ""
#: ../telemetry-data-collection.rst:395
msgid "Notification agent HA deployment"
msgstr ""
#: ../telemetry-data-collection.rst:396
msgid ""
"In the Kilo release, workload partitioning support was added to the "
"notification agent. This is particularly useful as the pipeline processing "
"is handled exclusively by the notification agent now which may result in a "
"larger amount of load."
msgstr ""
#: ../telemetry-data-collection.rst:401
msgid ""
"To enable workload partitioning by notification agent, the ``backend_url`` "
"option must be set in the :file:`ceilometer.conf` configuration file. "
"Additionally, ``workload_partitioning`` should be enabled in the "
"`Notification section <http://docs.openstack.org/liberty/config-reference/"
"content/ch_configuring-openstack-telemetry.html>`__ in the OpenStack "
"Configuration Reference."
msgstr ""
#: ../telemetry-data-collection.rst:408
msgid ""
"In Liberty, the notification agent creates multiple queues to divide the "
"workload across all active agents. The number of queues can be controlled by "
"the ``pipeline_processing_queues`` option in the :file:`ceilometer.conf` "
"configuration file. A larger value will result in better distribution of "
"tasks but will also require more memory and longer startup time. It is "
"recommended to have a value approximately three times the number of active "
"notification agents. At a minimum, the value should be equal to the number "
"of active agents."
msgstr ""
#: ../telemetry-data-collection.rst:418
msgid "Polling agent HA deployment"
msgstr ""
#: ../telemetry-data-collection.rst:422
msgid ""
"Without the ``backend_url`` option being set only one instance of both the "
"central and compute agent service is able to run and function correctly."
msgstr ""
#: ../telemetry-data-collection.rst:426
msgid ""
"The availability check of the instances is provided by heartbeat messages. "
"When the connection with an instance is lost, the workload will be "
"reassigned within the remained instances in the next polling cycle."
msgstr ""
#: ../telemetry-data-collection.rst:433
msgid ""
"``Memcached`` uses a ``timeout`` value, which should always be set to a "
"value that is higher than the ``heartbeat`` value set for Telemetry."
msgstr ""
#: ../telemetry-data-collection.rst:437
msgid ""
"For backward compatibility and supporting existing deployments, the central "
"agent configuration also supports using different configuration files for "
"groups of service instances of this type that are running in parallel. For "
"enabling this configuration set a value for the "
"``partitioning_group_prefix`` option in the `Central section <http://docs."
"openstack.org/liberty/config-reference/content/ch_configuring-openstack-"
"telemetry.html>`__ in the OpenStack Configuration Reference."
msgstr ""
#: ../telemetry-data-collection.rst:447
msgid ""
"For each sub-group of the central agent pool with the same "
"``partitioning_group_prefix`` a disjoint subset of meters must be polled, "
"otherwise samples may be missing or duplicated. The list of meters to poll "
"can be set in the :file:`/etc/ceilometer/pipeline.yaml` configuration file. "
"For more information about pipelines see :ref:`data-collection-and-"
"processing`."
msgstr ""
#: ../telemetry-data-collection.rst:454
msgid ""
"To enable the compute agent to run multiple instances simultaneously with "
"workload partitioning, the ``workload_partitioning`` option has to be set to "
"``True`` under the `Compute section <http://docs.openstack.org/liberty/"
"config-reference/content/ch_configuring-openstack-telemetry.html>`__ in the :"
"file:`ceilometer.conf` configuration file."
msgstr ""
#: ../telemetry-data-collection.rst:462
msgid "Send samples to Telemetry"
msgstr ""
#: ../telemetry-data-collection.rst:463
msgid ""
"While most parts of the data collection in the Telemetry service are "
"automated, Telemetry provides the possibility to submit samples via the REST "
"API to allow users to send custom samples into this service."
msgstr ""
#: ../telemetry-data-collection.rst:467
msgid ""
"This option makes it possible to send any kind of samples without the need "
"of writing extra code lines or making configuration changes."
msgstr ""
#: ../telemetry-data-collection.rst:470
msgid ""
"The samples that can be sent to Telemetry are not limited to the actual "
"existing meters. There is a possibility to provide data for any new, "
"customer defined counter by filling out all the required fields of the POST "
"request."
msgstr ""
#: ../telemetry-data-collection.rst:475
msgid ""
"If the sample corresponds to an existing meter, then the fields like ``meter-"
"type`` and meter name should be matched accordingly."
msgstr ""
#: ../telemetry-data-collection.rst:478
msgid ""
"The required fields for sending a sample using the command line client are:"
msgstr ""
#: ../telemetry-data-collection.rst:481
msgid "ID of the corresponding resource. (``--resource-id``)"
msgstr ""
#: ../telemetry-data-collection.rst:483
msgid "Name of meter. (``--meter-name``)"
msgstr ""
#: ../telemetry-data-collection.rst:485
msgid "Type of meter. (``--meter-type``)"
msgstr ""
#: ../telemetry-data-collection.rst:487
msgid "Predefined meter types:"
msgstr ""
# #-#-#-#-# telemetry-data-collection.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# telemetry-measurements.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../telemetry-data-collection.rst:489 ../telemetry-measurements.rst:37
#: ../telemetry-measurements.rst:101 ../telemetry-measurements.rst:105
#: ../telemetry-measurements.rst:109 ../telemetry-measurements.rst:113
#: ../telemetry-measurements.rst:122 ../telemetry-measurements.rst:126
#: ../telemetry-measurements.rst:133 ../telemetry-measurements.rst:140
#: ../telemetry-measurements.rst:147 ../telemetry-measurements.rst:154
#: ../telemetry-measurements.rst:158 ../telemetry-measurements.rst:161
#: ../telemetry-measurements.rst:168 ../telemetry-measurements.rst:176
#: ../telemetry-measurements.rst:184 ../telemetry-measurements.rst:193
#: ../telemetry-measurements.rst:200 ../telemetry-measurements.rst:205
#: ../telemetry-measurements.rst:210 ../telemetry-measurements.rst:216
#: ../telemetry-measurements.rst:221 ../telemetry-measurements.rst:226
#: ../telemetry-measurements.rst:236 ../telemetry-measurements.rst:245
#: ../telemetry-measurements.rst:254 ../telemetry-measurements.rst:263
#: ../telemetry-measurements.rst:268 ../telemetry-measurements.rst:273
#: ../telemetry-measurements.rst:278 ../telemetry-measurements.rst:283
#: ../telemetry-measurements.rst:290 ../telemetry-measurements.rst:296
#: ../telemetry-measurements.rst:301 ../telemetry-measurements.rst:304
#: ../telemetry-measurements.rst:307 ../telemetry-measurements.rst:311
#: ../telemetry-measurements.rst:314 ../telemetry-measurements.rst:318
#: ../telemetry-measurements.rst:324 ../telemetry-measurements.rst:329
#: ../telemetry-measurements.rst:334 ../telemetry-measurements.rst:340
#: ../telemetry-measurements.rst:348 ../telemetry-measurements.rst:434
#: ../telemetry-measurements.rst:449 ../telemetry-measurements.rst:452
#: ../telemetry-measurements.rst:455 ../telemetry-measurements.rst:458
#: ../telemetry-measurements.rst:461 ../telemetry-measurements.rst:488
#: ../telemetry-measurements.rst:491 ../telemetry-measurements.rst:494
#: ../telemetry-measurements.rst:498 ../telemetry-measurements.rst:501
#: ../telemetry-measurements.rst:532 ../telemetry-measurements.rst:535
#: ../telemetry-measurements.rst:541 ../telemetry-measurements.rst:544
#: ../telemetry-measurements.rst:547 ../telemetry-measurements.rst:552
#: ../telemetry-measurements.rst:557 ../telemetry-measurements.rst:561
#: ../telemetry-measurements.rst:565 ../telemetry-measurements.rst:593
#: ../telemetry-measurements.rst:596 ../telemetry-measurements.rst:599
#: ../telemetry-measurements.rst:602 ../telemetry-measurements.rst:605
#: ../telemetry-measurements.rst:608 ../telemetry-measurements.rst:611
#: ../telemetry-measurements.rst:614 ../telemetry-measurements.rst:617
#: ../telemetry-measurements.rst:620 ../telemetry-measurements.rst:623
#: ../telemetry-measurements.rst:655 ../telemetry-measurements.rst:668
#: ../telemetry-measurements.rst:672 ../telemetry-measurements.rst:701
#: ../telemetry-measurements.rst:704 ../telemetry-measurements.rst:709
#: ../telemetry-measurements.rst:712 ../telemetry-measurements.rst:766
#: ../telemetry-measurements.rst:769 ../telemetry-measurements.rst:772
#: ../telemetry-measurements.rst:786 ../telemetry-measurements.rst:789
#: ../telemetry-measurements.rst:816 ../telemetry-measurements.rst:818
#: ../telemetry-measurements.rst:821 ../telemetry-measurements.rst:824
#: ../telemetry-measurements.rst:829 ../telemetry-measurements.rst:832
#: ../telemetry-measurements.rst:924 ../telemetry-measurements.rst:934
#: ../telemetry-measurements.rst:944 ../telemetry-measurements.rst:953
#: ../telemetry-measurements.rst:963 ../telemetry-measurements.rst:987
#: ../telemetry-measurements.rst:990 ../telemetry-measurements.rst:1031
#: ../telemetry-measurements.rst:1034 ../telemetry-measurements.rst:1037
#: ../telemetry-measurements.rst:1040 ../telemetry-measurements.rst:1043
#: ../telemetry-measurements.rst:1045 ../telemetry-measurements.rst:1048
#: ../telemetry-measurements.rst:1073 ../telemetry-measurements.rst:1077
#: ../telemetry-measurements.rst:1081 ../telemetry-measurements.rst:1085
#: ../telemetry-measurements.rst:1093 ../telemetry-measurements.rst:1151
#: ../telemetry-measurements.rst:1155 ../telemetry-measurements.rst:1180
#: ../telemetry-measurements.rst:1194 ../telemetry-measurements.rst:1216
#: ../telemetry-measurements.rst:1220 ../telemetry-measurements.rst:1244
#: ../telemetry-measurements.rst:1316 ../telemetry-measurements.rst:1319
#: ../telemetry-measurements.rst:1322 ../telemetry-measurements.rst:1338
msgid "Gauge"
msgstr ""
# #-#-#-#-# telemetry-data-collection.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# telemetry-measurements.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../telemetry-data-collection.rst:491 ../telemetry-data-collection.rst:661
#: ../telemetry-measurements.rst:35 ../telemetry-measurements.rst:355
#: ../telemetry-measurements.rst:676 ../telemetry-measurements.rst:679
#: ../telemetry-measurements.rst:682 ../telemetry-measurements.rst:685
#: ../telemetry-measurements.rst:688 ../telemetry-measurements.rst:717
#: ../telemetry-measurements.rst:720 ../telemetry-measurements.rst:723
#: ../telemetry-measurements.rst:727 ../telemetry-measurements.rst:730
#: ../telemetry-measurements.rst:734 ../telemetry-measurements.rst:738
#: ../telemetry-measurements.rst:741 ../telemetry-measurements.rst:744
#: ../telemetry-measurements.rst:747 ../telemetry-measurements.rst:750
#: ../telemetry-measurements.rst:775 ../telemetry-measurements.rst:778
#: ../telemetry-measurements.rst:781 ../telemetry-measurements.rst:853
#: ../telemetry-measurements.rst:856 ../telemetry-measurements.rst:859
#: ../telemetry-measurements.rst:862 ../telemetry-measurements.rst:865
#: ../telemetry-measurements.rst:868 ../telemetry-measurements.rst:871
#: ../telemetry-measurements.rst:874 ../telemetry-measurements.rst:877
#: ../telemetry-measurements.rst:880 ../telemetry-measurements.rst:883
#: ../telemetry-measurements.rst:886 ../telemetry-measurements.rst:889
#: ../telemetry-measurements.rst:892 ../telemetry-measurements.rst:895
#: ../telemetry-measurements.rst:898 ../telemetry-measurements.rst:901
#: ../telemetry-measurements.rst:906 ../telemetry-measurements.rst:910
#: ../telemetry-measurements.rst:927 ../telemetry-measurements.rst:931
#: ../telemetry-measurements.rst:937 ../telemetry-measurements.rst:941
#: ../telemetry-measurements.rst:947 ../telemetry-measurements.rst:950
#: ../telemetry-measurements.rst:956 ../telemetry-measurements.rst:960
#: ../telemetry-measurements.rst:967 ../telemetry-measurements.rst:970
#: ../telemetry-measurements.rst:973 ../telemetry-measurements.rst:1107
#: ../telemetry-measurements.rst:1111 ../telemetry-measurements.rst:1115
#: ../telemetry-measurements.rst:1119 ../telemetry-measurements.rst:1123
#: ../telemetry-measurements.rst:1127 ../telemetry-measurements.rst:1131
#: ../telemetry-measurements.rst:1136 ../telemetry-measurements.rst:1162
#: ../telemetry-measurements.rst:1166 ../telemetry-measurements.rst:1170
#: ../telemetry-measurements.rst:1175 ../telemetry-measurements.rst:1184
#: ../telemetry-measurements.rst:1189 ../telemetry-measurements.rst:1198
#: ../telemetry-measurements.rst:1202 ../telemetry-measurements.rst:1226
#: ../telemetry-measurements.rst:1230 ../telemetry-measurements.rst:1234
#: ../telemetry-measurements.rst:1239 ../telemetry-measurements.rst:1248
#: ../telemetry-measurements.rst:1253 ../telemetry-measurements.rst:1267
#: ../telemetry-measurements.rst:1270 ../telemetry-measurements.rst:1273
#: ../telemetry-measurements.rst:1276 ../telemetry-measurements.rst:1279
#: ../telemetry-measurements.rst:1293 ../telemetry-measurements.rst:1298
#: ../telemetry-measurements.rst:1302
msgid "Delta"
msgstr ""
# #-#-#-#-# telemetry-data-collection.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# telemetry-measurements.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../telemetry-data-collection.rst:493 ../telemetry-measurements.rst:33
#: ../telemetry-measurements.rst:1336
msgid "Cumulative"
msgstr ""
#: ../telemetry-data-collection.rst:495
msgid "Unit of meter. (``--meter-unit``)"
msgstr ""
#: ../telemetry-data-collection.rst:497
msgid "Volume of sample. (``--sample-volume``)"
msgstr ""
#: ../telemetry-data-collection.rst:499
msgid ""
"To send samples to Telemetry using the command line client, the following "
"command should be invoked::"
msgstr ""
#: ../telemetry-data-collection.rst:523
msgid "Data collection and processing"
msgstr ""
#: ../telemetry-data-collection.rst:524
msgid ""
"The mechanism by which data is collected and processed is called a pipeline. "
"Pipelines, at the configuration level, describe a coupling between sources "
"of data and the corresponding sinks for transformation and publication of "
"data."
msgstr ""
#: ../telemetry-data-collection.rst:529
msgid ""
"A source is a producer of data: samples or events. In effect, it is a set of "
"pollsters or notification handlers emitting datapoints for a set of matching "
"meters and event types."
msgstr ""
#: ../telemetry-data-collection.rst:533
msgid ""
"Each source configuration encapsulates name matching, polling interval "
"determination, optional resource enumeration or discovery, and mapping to "
"one or more sinks for publication."
msgstr ""
#: ../telemetry-data-collection.rst:537
msgid ""
"Data gathered can be used for different purposes, which can impact how "
"frequently it needs to be published. Typically, a meter published for "
"billing purposes needs to be updated every 30 minutes while the same meter "
"may be needed for performance tuning every minute."
msgstr ""
#: ../telemetry-data-collection.rst:544
msgid ""
"Rapid polling cadences should be avoided, as it results in a huge amount of "
"data in a short time frame, which may negatively affect the performance of "
"both Telemetry and the underlying database back end. We therefore strongly "
"recommend you do not use small granularity values like 10 seconds."
msgstr ""
#: ../telemetry-data-collection.rst:550
msgid ""
"A sink, on the other hand, is a consumer of data, providing logic for the "
"transformation and publication of data emitted from related sources."
msgstr ""
#: ../telemetry-data-collection.rst:553
msgid ""
"In effect, a sink describes a chain of handlers. The chain starts with zero "
"or more transformers and ends with one or more publishers. The first "
"transformer in the chain is passed data from the corresponding source, takes "
"some action such as deriving rate of change, performing unit conversion, or "
"aggregating, before passing the modified data to the next step that is "
"described in :ref:`telemetry-publishers`."
msgstr ""
#: ../telemetry-data-collection.rst:563
msgid "Pipeline configuration"
msgstr ""
#: ../telemetry-data-collection.rst:564
msgid ""
"Pipeline configuration by default, is stored in separate configuration "
"files, called :file:`pipeline.yaml` and :file:`event_pipeline.yaml`, next to "
"the :file:`ceilometer.conf` file. The meter pipeline and event pipeline "
"configuration files can be set by the ``pipeline_cfg_file`` and "
"``event_pipeline_cfg_file`` options listed in the `Description of "
"configuration options for api table <http://docs.openstack.org/liberty/"
"config-reference/content/ch_configuring-openstack-telemetry.html>`__ section "
"in the OpenStack Configuration Reference respectively. Multiple pipelines "
"can be defined in one pipeline configuration file."
msgstr ""
#: ../telemetry-data-collection.rst:574
msgid "The meter pipeline definition looks like the following::"
msgstr ""
#: ../telemetry-data-collection.rst:592
msgid ""
"The interval parameter in the sources section should be defined in seconds. "
"It determines the polling cadence of sample injection into the pipeline, "
"where samples are produced under the direct control of an agent."
msgstr ""
#: ../telemetry-data-collection.rst:597
msgid ""
"There are several ways to define the list of meters for a pipeline source. "
"The list of valid meters can be found in :ref:`telemetry-measurements`. "
"There is a possibility to define all the meters, or just included or "
"excluded meters, with which a source should operate:"
msgstr ""
#: ../telemetry-data-collection.rst:602
msgid ""
"To include all meters, use the ``*`` wildcard symbol. It is highly advisable "
"to select only the meters that you intend on using to avoid flooding the "
"metering database with unused data."
msgstr ""
#: ../telemetry-data-collection.rst:606
msgid "To define the list of meters, use either of the following:"
msgstr ""
#: ../telemetry-data-collection.rst:608
msgid "To define the list of included meters, use the ``meter_name`` syntax."
msgstr ""
#: ../telemetry-data-collection.rst:611
msgid "To define the list of excluded meters, use the ``!meter_name`` syntax."
msgstr ""
#: ../telemetry-data-collection.rst:614
msgid ""
"For meters, which have variants identified by a complex name field, use the "
"wildcard symbol to select all, e.g. for \"instance:m1.tiny\", use \"instance:"
"\\*\"."
msgstr ""
#: ../telemetry-data-collection.rst:620
msgid ""
"Please be aware that we do not have any duplication check between pipelines "
"and if you add a meter to multiple pipelines then it is assumed the "
"duplication is intentional and may be stored multiple times according to the "
"specified sinks."
msgstr ""
#: ../telemetry-data-collection.rst:625
msgid "The above definition methods can be used in the following combinations:"
msgstr ""
#: ../telemetry-data-collection.rst:627
msgid "Use only the wildcard symbol."
msgstr ""
#: ../telemetry-data-collection.rst:629
msgid "Use the list of included meters."
msgstr ""
#: ../telemetry-data-collection.rst:631
msgid "Use the list of excluded meters."
msgstr ""
#: ../telemetry-data-collection.rst:633
msgid "Use wildcard symbol with the list of excluded meters."
msgstr ""
#: ../telemetry-data-collection.rst:637
msgid ""
"At least one of the above variations should be included in the meters "
"section. Included and excluded meters cannot co-exist in the same pipeline. "
"Wildcard and included meters cannot co-exist in the same pipeline definition "
"section."
msgstr ""
#: ../telemetry-data-collection.rst:642
msgid ""
"The optional resources section of a pipeline source allows a static list of "
"resource URLs to be configured for polling."
msgstr ""
#: ../telemetry-data-collection.rst:645
msgid ""
"The transformers section of a pipeline sink provides the possibility to add "
"a list of transformer definitions. The available transformers are:"
msgstr ""
#: ../telemetry-data-collection.rst:649
msgid "Name of transformer"
msgstr ""
#: ../telemetry-data-collection.rst:649
msgid "Reference name for configuration"
msgstr ""
#: ../telemetry-data-collection.rst:651
msgid "Accumulator"
msgstr ""
#: ../telemetry-data-collection.rst:651
msgid "accumulator"
msgstr ""
#: ../telemetry-data-collection.rst:653
msgid "Aggregator"
msgstr ""
#: ../telemetry-data-collection.rst:653
msgid "aggregator"
msgstr ""
#: ../telemetry-data-collection.rst:655
msgid "Arithmetic"
msgstr ""
#: ../telemetry-data-collection.rst:655
msgid "arithmetic"
msgstr ""
#: ../telemetry-data-collection.rst:657
msgid "Rate of change"
msgstr ""
#: ../telemetry-data-collection.rst:657
msgid "rate\\_of\\_change"
msgstr ""
#: ../telemetry-data-collection.rst:659
msgid "Unit conversion"
msgstr ""
#: ../telemetry-data-collection.rst:659
msgid "unit\\_conversion"
msgstr ""
#: ../telemetry-data-collection.rst:661
msgid "delta"
msgstr ""
#: ../telemetry-data-collection.rst:664
msgid ""
"The publishers section contains the list of publishers, where the samples "
"data should be sent after the possible transformations."
msgstr ""
#: ../telemetry-data-collection.rst:667
msgid "Similarly, the event pipeline definition looks like the following::"
msgstr ""
#: ../telemetry-data-collection.rst:681
msgid "The event filter uses the same filtering logic as the meter pipeline."
msgstr ""
#: ../telemetry-data-collection.rst:686
msgid "Transformers"
msgstr ""
#: ../telemetry-data-collection.rst:688
msgid "The definition of transformers can contain the following fields:"
msgstr ""
#: ../telemetry-data-collection.rst:691
msgid "Name of the transformer."
msgstr ""
#: ../telemetry-data-collection.rst:694
msgid "Parameters of the transformer."
msgstr ""
#: ../telemetry-data-collection.rst:694
msgid "parameters"
msgstr ""
#: ../telemetry-data-collection.rst:696
msgid ""
"The parameters section can contain transformer specific fields, like source "
"and target fields with different subfields in case of the rate of change, "
"which depends on the implementation of the transformer."
msgstr ""
#: ../telemetry-data-collection.rst:700
msgid ""
"In the case of the transformer that creates the ``cpu_util`` meter, the "
"definition looks like the following::"
msgstr ""
#: ../telemetry-data-collection.rst:712
msgid ""
"The rate of change the transformer generates is the ``cpu_util`` meter from "
"the sample values of the ``cpu`` counter, which represents cumulative CPU "
"time in nanoseconds. The transformer definition above defines a scale factor "
"(for nanoseconds and multiple CPUs), which is applied before the "
"transformation derives a sequence of gauge samples with unit '%', from "
"sequential values of the ``cpu`` meter."
msgstr ""
#: ../telemetry-data-collection.rst:719
msgid ""
"The definition for the disk I/O rate, which is also generated by the rate of "
"change transformer::"
msgstr ""
#: ../telemetry-data-collection.rst:735
msgid "**Unit conversion transformer**"
msgstr ""
#: ../telemetry-data-collection.rst:737
msgid ""
"Transformer to apply a unit conversion. It takes the volume of the meter and "
"multiplies it with the given ``scale`` expression. Also supports "
"``map_from`` and ``map_to`` like the rate of change transformer."
msgstr ""
#: ../telemetry-data-collection.rst:741
msgid "Sample configuration::"
msgstr ""
#: ../telemetry-data-collection.rst:751
msgid "With ``map_from`` and ``map_to`` ::"
msgstr ""
#: ../telemetry-data-collection.rst:765
msgid "**Aggregator transformer**"
msgstr ""
#: ../telemetry-data-collection.rst:767
msgid ""
"A transformer that sums up the incoming samples until enough samples have "
"come in or a timeout has been reached."
msgstr ""
#: ../telemetry-data-collection.rst:770
msgid ""
"Timeout can be specified with the ``retention_time`` option. If we want to "
"flush the aggregation after a set number of samples have been aggregated, we "
"can specify the size parameter."
msgstr ""
#: ../telemetry-data-collection.rst:774
msgid ""
"The volume of the created sample is the sum of the volumes of samples that "
"came into the transformer. Samples can be aggregated by the attributes "
"``project_id``, ``user_id`` and ``resource_metadata``. To aggregate by the "
"chosen attributes, specify them in the configuration and set which value of "
"the attribute to take for the new sample (first to take the first sample's "
"attribute, last to take the last sample's attribute, and drop to discard the "
"attribute)."
msgstr ""
#: ../telemetry-data-collection.rst:782
msgid ""
"To aggregate 60s worth of samples by ``resource_metadata`` and keep the "
"``resource_metadata`` of the latest received sample::"
msgstr ""
#: ../telemetry-data-collection.rst:791
msgid ""
"To aggregate each 15 samples by ``user_id`` and ``resource_metadata`` and "
"keep the ``user_id`` of the first received sample and drop the "
"``resource_metadata``::"
msgstr ""
#: ../telemetry-data-collection.rst:802
msgid "**Accumulator transformer**"
msgstr ""
#: ../telemetry-data-collection.rst:804
msgid ""
"This transformer simply caches the samples until enough samples have arrived "
"and then flushes them all down the pipeline at once::"
msgstr ""
#: ../telemetry-data-collection.rst:812
msgid "**Muli meter arithmetic transformer**"
msgstr ""
#: ../telemetry-data-collection.rst:814
msgid ""
"This transformer enables us to perform arithmetic calculations over one or "
"more meters and/or their metadata, for example::"
msgstr ""
#: ../telemetry-data-collection.rst:819
msgid ""
"A new sample is created with the properties described in the ``target`` "
"section of the transformer's configuration. The sample's volume is the "
"result of the provided expression. The calculation is performed on samples "
"from the same resource."
msgstr ""
#: ../telemetry-data-collection.rst:826
msgid "The calculation is limited to meters with the same interval."
msgstr ""
#: ../telemetry-data-collection.rst:828 ../telemetry-data-collection.rst:861
msgid "Example configuration::"
msgstr ""
#: ../telemetry-data-collection.rst:839
msgid ""
"To demonstrate the use of metadata, here is the implementation of a silly "
"meter that shows average CPU time per core::"
msgstr ""
#: ../telemetry-data-collection.rst:853
msgid ""
"Expression evaluation gracefully handles NaNs and exceptions. In such a case "
"it does not create a new sample but only logs a warning."
msgstr ""
#: ../telemetry-data-collection.rst:856
msgid "**Delta transformer**"
msgstr ""
#: ../telemetry-data-collection.rst:858
msgid ""
"This transformer calculates the change between two sample datapoints of a "
"resource. It can be configured to capture only the positive growth deltas."
msgstr ""
#: ../telemetry-data-collection.rst:873
msgid "Meter definitions"
msgstr ""
#: ../telemetry-data-collection.rst:874
msgid ""
"The Telemetry service collects a subset of the meters by filtering "
"notifications emitted by other OpenStack services. Starting with the Liberty "
"release, you can find the meter definitions in a separate configuration "
"file, called :file:`ceilometer/meter/data/meter.yaml`. This enables "
"operators/administrators to add new meters to Telemetry project by updating "
"the :file:`meter.yaml` file without any need for additional code changes."
msgstr ""
#: ../telemetry-data-collection.rst:883
msgid ""
"The :file:`meter.yaml` file should be modified with care. Unless intended do "
"not remove any existing meter definitions from the file. Also, the collected "
"meters can differ in some cases from what is referenced in the documentation."
msgstr ""
#: ../telemetry-data-collection.rst:888
msgid "A standard meter definition looks like the following::"
msgstr ""
#: ../telemetry-data-collection.rst:900
msgid ""
"The definition above shows a simple meter definition with some fields, from "
"which ``name``, ``event_type``, ``type``, ``unit``, and ``volume`` are "
"required. If there is a match on the event type, samples are generated for "
"the meter."
msgstr ""
#: ../telemetry-data-collection.rst:905
msgid ""
"If you take a look at the :file:`meter.yaml` file, it contains the sample "
"definitions for all the meters that Telemetry is collecting from "
"notifications. The value of each field is specified by using json path in "
"order to find the right value from the notification message. In order to be "
"able to specify the right field you need to be aware of the format of the "
"consumed notification. The values that need to be searched in the "
"notification message are set with a json path starting with ``$.`` For "
"instance, if you need the ``size`` information from the payload you can "
"define it like ``$.payload.size``."
msgstr ""
#: ../telemetry-data-collection.rst:915
msgid ""
"A notification message may contain multiple meters. You can use ``*`` in the "
"meter definition to capture all the meters and generate samples "
"respectively. You can use wild cards as shown in the following example::"
msgstr ""
#: ../telemetry-data-collection.rst:930
msgid ""
"In the above example, the ``name`` field is a json path with matching a list "
"of meter names defined in the notification message."
msgstr ""
#: ../telemetry-data-collection.rst:933
msgid ""
"You can even use complex operations on json paths. In the following example, "
"``volume`` and ``resource_id`` fields perform an arithmetic and string "
"concatenation::"
msgstr ""
#: ../telemetry-data-collection.rst:947
msgid ""
"You will find some existence meters in the :file:`meter.yaml`. These meters "
"have a ``volume`` as ``1`` and are at the bottom of the yaml file with a "
"note suggesting that these will be removed in Mitaka release."
msgstr ""
#: ../telemetry-data-collection.rst:951
msgid "For example, the meter definition for existence meters is as follows::"
msgstr ""
#: ../telemetry-data-collection.rst:965
msgid ""
"These meters are not loaded by default. To load these meters, flip the "
"`disable_non_metric_meters` option in the :file:`ceilometer.conf` file."
msgstr ""
#: ../telemetry-data-collection.rst:970
msgid "Block Storage audit script setup to get notifications"
msgstr ""
#: ../telemetry-data-collection.rst:971
msgid ""
"If you want to collect OpenStack Block Storage notification on demand, you "
"can use ``cinder-volume-usage-audit`` from OpenStack Block Storage. This "
"script becomes available when you install OpenStack Block Storage, so you "
"can use it without any specific settings and you don't need to authenticate "
"to access the data. To use it, you must run this command in the following "
"format::"
msgstr ""
#: ../telemetry-data-collection.rst:981
msgid ""
"This script outputs what volumes or snapshots were created, deleted, or "
"exists in a given period of time and some information about these volumes or "
"snapshots. Information about the existence and size of volumes and snapshots "
"is store in the Telemetry service. This data is also stored as an event "
"which is the recommended usage as it provides better indexing of data."
msgstr ""
#: ../telemetry-data-collection.rst:988
msgid ""
"Using this script via cron you can get notifications periodically, for "
"example, every 5 minutes::"
msgstr ""
#: ../telemetry-data-collection.rst:996
msgid "Storing samples"
msgstr ""
#: ../telemetry-data-collection.rst:997
msgid ""
"The Telemetry service has a separate service that is responsible for "
"persisting the data that comes from the pollsters or is received as "
"notifications. The data can be stored in a file or a database back end, for "
"which the list of supported databases can be found in :ref:`telemetry-"
"supported-databases`. The data can also be sent to an external data store by "
"using an HTTP dispatcher."
msgstr ""
#: ../telemetry-data-collection.rst:1004
msgid ""
"The ``ceilometer-collector`` service receives the data as messages from the "
"message bus of the configured AMQP service. It sends these datapoints "
"without any modification to the configured target. The service has to run on "
"a host machine from which it has access to the configured dispatcher."
msgstr ""
#: ../telemetry-data-collection.rst:1012
msgid "Multiple dispatchers can be configured for Telemetry at one time."
msgstr ""
#: ../telemetry-data-collection.rst:1014
msgid ""
"Multiple ``ceilometer-collector`` processes can be run at a time. It is also "
"supported to start multiple worker threads per collector process. The "
"``collector_workers`` configuration option has to be modified in the "
"`Collector section <http://docs.openstack.org/liberty/config-reference/"
"content/ch_configuring-openstack-telemetry.html>`__ of the :file:`ceilometer."
"conf` configuration file."
msgstr ""
#: ../telemetry-data-collection.rst:1022
msgid "Database dispatcher"
msgstr ""
#: ../telemetry-data-collection.rst:1023
msgid ""
"When the database dispatcher is configured as data store, you have the "
"option to set a ``time_to_live`` option (ttl) for samples. By default the "
"time to live value for samples is set to -1, which means that they are kept "
"in the database forever."
msgstr ""
#: ../telemetry-data-collection.rst:1028
msgid ""
"The time to live value is specified in seconds. Each sample has a time "
"stamp, and the ``ttl`` value indicates that a sample will be deleted from "
"the database when the number of seconds has elapsed since that sample "
"reading was stamped. For example, if the time to live is set to 600, all "
"samples older than 600 seconds will be purged from the database."
msgstr ""
#: ../telemetry-data-collection.rst:1035
msgid ""
"Certain databases support native TTL expiration. In cases where this is not "
"possible, a command-line script, which you can use for this purpose is "
"ceilometer-expirer. You can run it in a cron job, which helps to keep your "
"database in a consistent state."
msgstr ""
#: ../telemetry-data-collection.rst:1040
msgid "The level of support differs in case of the configured back end:"
msgstr ""
#: ../telemetry-data-collection.rst:1043
msgid "TTL value support"
msgstr ""
#: ../telemetry-data-collection.rst:1045
msgid "MongoDB"
msgstr ""
#: ../telemetry-data-collection.rst:1045
msgid ""
"MongoDB has native TTL support for deleting samples that are older than the "
"configured ttl value."
msgstr ""
#: ../telemetry-data-collection.rst:1049
msgid "SQL-based back ends"
msgstr ""
#: ../telemetry-data-collection.rst:1049
msgid ""
"ceilometer-expirer has to be used for deleting samples and its related data "
"from the database."
msgstr ""
#: ../telemetry-data-collection.rst:1053
msgid "HBase"
msgstr ""
#: ../telemetry-data-collection.rst:1053
msgid ""
"Telemetry's HBase support does not include native TTL nor ceilometer-expirer "
"support."
msgstr ""
#: ../telemetry-data-collection.rst:1057
msgid "DB2 NoSQL"
msgstr ""
#: ../telemetry-data-collection.rst:1057
msgid "DB2 NoSQL does not have native TTL nor ceilometer-expirer support."
msgstr ""
#: ../telemetry-data-collection.rst:1062
msgid "HTTP dispatcher"
msgstr ""
#: ../telemetry-data-collection.rst:1063
msgid ""
"The Telemetry service supports sending samples to an external HTTP target. "
"The samples are sent without any modification. To set this option as the "
"collector's target, the ``dispatcher`` has to be changed to ``http`` in the :"
"file:`ceilometer.conf` configuration file. For the list of options that you "
"need to set, see the see the `dispatcher_http section <http://docs.openstack."
"org/liberty/config-reference/content/ch_configuring-openstack-telemetry."
"html>`__ in the OpenStack Configuration Reference."
msgstr ""
#: ../telemetry-data-collection.rst:1072
msgid "File dispatcher"
msgstr ""
#: ../telemetry-data-collection.rst:1073
msgid ""
"You can store samples in a file by setting the ``dispatcher`` option in the :"
"file:`ceilometer.conf` file. For the list of configuration options, see the "
"`dispatcher_file section <http://docs.openstack.org/liberty/config-reference/"
"content/ch_configuring-openstack-telemetry.html>`__ in the OpenStack "
"Configuration Reference."
msgstr ""
#: ../telemetry-data-collection.rst:1080
msgid "Gnocchi dispatcher"
msgstr ""
#: ../telemetry-data-collection.rst:1081
msgid ""
"The Telemetry service supports sending the metering data to Gnocchi back end "
"through the gnocchi dispatcher. To set this option as the target, change the "
"``dispatcher`` to ``gnocchi`` in the :file:`ceilometer.conf` configuration "
"file."
msgstr ""
#: ../telemetry-data-collection.rst:1086
msgid ""
"For the list of options that you need to set, see the `dispatcher_gnocchi "
"section <http://docs.openstack.org/liberty/config-reference/content/"
"ch_configuring-openstack-telemetry.html>`__ in the OpenStack Configuration "
"Reference."
msgstr ""
#: ../telemetry-data-retrieval.rst:3
msgid "Data retrieval"
msgstr ""
#: ../telemetry-data-retrieval.rst:5
msgid ""
"The Telemetry service offers several mechanisms from which the persisted "
"data can be accessed. As described in :ref:`telemetry-system-architecture` "
"and in :ref:`telemetry-data-collection`, the collected information can be "
"stored in one or more database back ends, which are hidden by the Telemetry "
"RESTful API."
msgstr ""
#: ../telemetry-data-retrieval.rst:12
msgid ""
"It is highly recommended not to access the database directly and read or "
"modify any data in it. The API layer hides all the changes in the actual "
"database schema and provides a standard interface to expose the samples, "
"alarms and so forth."
msgstr ""
#: ../telemetry-data-retrieval.rst:18
msgid "Telemetry v2 API"
msgstr ""
#: ../telemetry-data-retrieval.rst:19
msgid ""
"The Telemetry service provides a RESTful API, from which the collected "
"samples and all the related information can be retrieved, like the list of "
"meters, alarm definitions and so forth."
msgstr ""
#: ../telemetry-data-retrieval.rst:23
msgid ""
"The Telemetry API URL can be retrieved from the service catalog provided by "
"OpenStack Identity, which is populated during the installation process. The "
"API access needs a valid token and proper permission to retrieve data, as "
"described in :ref:`telemetry-users-roles-tenants`."
msgstr ""
#: ../telemetry-data-retrieval.rst:28
msgid ""
"Further information about the available API endpoints can be found in the "
"`Telemetry API Reference <http://developer.openstack.org/api-ref-telemetry-"
"v2.html>`__."
msgstr ""
#: ../telemetry-data-retrieval.rst:33
msgid "Query"
msgstr ""
#: ../telemetry-data-retrieval.rst:34
msgid ""
"The API provides some additional functionalities, like querying the "
"collected data set. For the samples and alarms API endpoints, both simple "
"and complex query styles are available, whereas for the other endpoints only "
"simple queries are supported."
msgstr ""
#: ../telemetry-data-retrieval.rst:39
msgid ""
"After validating the query parameters, the processing is done on the "
"database side in the case of most database back ends in order to achieve "
"better performance."
msgstr ""
#: ../telemetry-data-retrieval.rst:43
msgid "**Simple query**"
msgstr ""
#: ../telemetry-data-retrieval.rst:45
msgid ""
"Many of the API endpoints accept a query filter argument, which should be a "
"list of data structures that consist of the following items:"
msgstr ""
#: ../telemetry-data-retrieval.rst:48
msgid "``field``"
msgstr ""
#: ../telemetry-data-retrieval.rst:50
msgid "``op``"
msgstr ""
#: ../telemetry-data-retrieval.rst:52
msgid "``value``"
msgstr ""
#: ../telemetry-data-retrieval.rst:54
msgid "``type``"
msgstr ""
#: ../telemetry-data-retrieval.rst:56
msgid ""
"Regardless of the endpoint on which the filter is applied on, it will always "
"target the fields of the `Sample type <http://docs.openstack.org/developer/"
"ceilometer/webapi/v2.html#Sample>`__."
msgstr ""
#: ../telemetry-data-retrieval.rst:60
msgid ""
"Several fields of the API endpoints accept shorter names than the ones "
"defined in the reference. The API will do the transformation internally and "
"return the output with the fields that are listed in the `API reference "
"<http://docs.openstack.org/developer/ceilometer/webapi/v2.html>`__. The "
"fields are the following:"
msgstr ""
#: ../telemetry-data-retrieval.rst:66
msgid "``project_id``: project"
msgstr ""
#: ../telemetry-data-retrieval.rst:68
msgid "``resource_id``: resource"
msgstr ""
#: ../telemetry-data-retrieval.rst:70
msgid "``user_id``: user"
msgstr ""
#: ../telemetry-data-retrieval.rst:72
msgid ""
"When a filter argument contains multiple constraints of the above form, a "
"logical ``AND`` relation between them is implied."
msgstr ""
#: ../telemetry-data-retrieval.rst:77
msgid "**Complex query**"
msgstr ""
#: ../telemetry-data-retrieval.rst:79
msgid ""
"The filter expressions of the complex query feature operate on the fields of "
"``Sample``, ``Alarm`` and ``AlarmChange`` types. The following comparison "
"operators are supported:"
msgstr ""
#: ../telemetry-data-retrieval.rst:83
msgid "``=``"
msgstr ""
#: ../telemetry-data-retrieval.rst:85
msgid "``!=``"
msgstr ""
#: ../telemetry-data-retrieval.rst:87
msgid "``<``"
msgstr ""
#: ../telemetry-data-retrieval.rst:89
msgid "``<=``"
msgstr ""
#: ../telemetry-data-retrieval.rst:91
msgid "``>``"
msgstr ""
#: ../telemetry-data-retrieval.rst:93
msgid "``>=``"
msgstr ""
#: ../telemetry-data-retrieval.rst:95
msgid "The following logical operators can be used:"
msgstr ""
#: ../telemetry-data-retrieval.rst:97
msgid "``and``"
msgstr ""
#: ../telemetry-data-retrieval.rst:99
msgid "``or``"
msgstr ""
#: ../telemetry-data-retrieval.rst:101
msgid "``not``"
msgstr ""
#: ../telemetry-data-retrieval.rst:105
msgid ""
"The ``not`` operator has different behavior in MongoDB and in the SQLAlchemy-"
"based database engines. If the ``not`` operator is applied on a non existent "
"metadata field then the result depends on the database engine. In case of "
"MongoDB, it will return every sample as the ``not`` operator is evaluated "
"true for every sample where the given field does not exist. On the other "
"hand the SQL-based database engine will return an empty result because of "
"the underlying ``join`` operation."
msgstr ""
#: ../telemetry-data-retrieval.rst:114
msgid ""
"Complex query supports specifying a list of ``orderby`` expressions. This "
"means that the result of the query can be ordered based on the field names "
"provided in this list. When multiple keys are defined for the ordering, "
"these will be applied sequentially in the order of the specification. The "
"second expression will be applied on the groups for which the values of the "
"first expression are the same. The ordering can be ascending or descending."
msgstr ""
#: ../telemetry-data-retrieval.rst:122
msgid "The number of returned items can be bounded using the ``limit`` option."
msgstr ""
#: ../telemetry-data-retrieval.rst:124
msgid "The ``filter``, ``orderby`` and ``limit`` fields are optional."
msgstr ""
#: ../telemetry-data-retrieval.rst:128
msgid ""
"As opposed to the simple query, complex query is available via a separate "
"API endpoint. For more information see the `Telemetry v2 Web API Reference "
"<http://docs.openstack.org/developer/ceilometer/webapi/v2.html#v2-web-"
"api>`__."
msgstr ""
#: ../telemetry-data-retrieval.rst:133
msgid "Statistics"
msgstr ""
#: ../telemetry-data-retrieval.rst:134
msgid ""
"The sample data can be used in various ways for several purposes, like "
"billing or profiling. In external systems the data is often used in the form "
"of aggregated statistics. The Telemetry API provides several built-in "
"functions to make some basic calculations available without any additional "
"coding."
msgstr ""
#: ../telemetry-data-retrieval.rst:140
msgid "Telemetry supports the following statistics and aggregation functions:"
msgstr ""
#: ../telemetry-data-retrieval.rst:143
msgid "Average of the sample volumes over each period."
msgstr ""
#: ../telemetry-data-retrieval.rst:143
msgid "``avg``"
msgstr ""
#: ../telemetry-data-retrieval.rst:146
msgid ""
"Count of distinct values in each period identified by a key specified as the "
"parameter of this aggregate function. The supported parameter values are:"
msgstr ""
#: ../telemetry-data-retrieval.rst:152
msgid "``resource_id``"
msgstr ""
#: ../telemetry-data-retrieval.rst:154
msgid "``cardinality``"
msgstr ""
#: ../telemetry-data-retrieval.rst:158
msgid "The ``aggregate.param`` option is required."
msgstr ""
#: ../telemetry-data-retrieval.rst:161
msgid "Number of samples in each period."
msgstr ""
#: ../telemetry-data-retrieval.rst:161
msgid "``count``"
msgstr ""
#: ../telemetry-data-retrieval.rst:164
msgid "Maximum of the sample volumes in each period."
msgstr ""
#: ../telemetry-data-retrieval.rst:164
msgid "``max``"
msgstr ""
#: ../telemetry-data-retrieval.rst:167
msgid "Minimum of the sample volumes in each period."
msgstr ""
#: ../telemetry-data-retrieval.rst:167
msgid "``min``"
msgstr ""
#: ../telemetry-data-retrieval.rst:170
msgid "Standard deviation of the sample volumes in each period."
msgstr ""
#: ../telemetry-data-retrieval.rst:170
msgid "``stddev``"
msgstr ""
#: ../telemetry-data-retrieval.rst:173
msgid "Sum of the sample volumes over each period."
msgstr ""
#: ../telemetry-data-retrieval.rst:173
msgid "``sum``"
msgstr ""
#: ../telemetry-data-retrieval.rst:175
msgid ""
"The simple query and the statistics functionality can be used together in a "
"single API request."
msgstr ""
#: ../telemetry-data-retrieval.rst:179
msgid "Telemetry command line client and SDK"
msgstr ""
#: ../telemetry-data-retrieval.rst:180
msgid ""
"The Telemetry service provides a command line client, with which the "
"collected data is available just as the alarm definition and retrieval "
"options. The client uses the Telemetry RESTful API in order to execute the "
"requested operations."
msgstr ""
#: ../telemetry-data-retrieval.rst:185
msgid ""
"To be able to use the ``ceilometer`` command, the python-ceilometerclient "
"package needs to be installed and configured properly. For details about the "
"installation process, see the `Telemetry chapter <http://docs.openstack.org/"
"liberty/install-guide-ubuntu/ceilometer.html>`__ in the OpenStack "
"Installation Guide."
msgstr ""
#: ../telemetry-data-retrieval.rst:193
msgid ""
"The Telemetry service captures the user-visible resource usage data. "
"Therefore the database will not contain any data without the existence of "
"these resources, like VM images in the OpenStack Image service."
msgstr ""
#: ../telemetry-data-retrieval.rst:198
msgid ""
"Similarly to other OpenStack command line clients, the ``ceilometer`` client "
"uses OpenStack Identity for authentication. The proper credentials and ``--"
"auth_url`` parameter have to be defined via command line parameters or "
"environment variables."
msgstr ""
#: ../telemetry-data-retrieval.rst:203
msgid ""
"This section provides some examples without the aim of completeness. These "
"commands can be used for instance for validating an installation of "
"Telemetry."
msgstr ""
#: ../telemetry-data-retrieval.rst:207
msgid ""
"To retrieve the list of collected meters, the following command should be "
"used::"
msgstr ""
#: ../telemetry-data-retrieval.rst:225
msgid ""
"The ``ceilometer`` command was run with ``admin`` rights, which means that "
"all the data is accessible in the database. For more information about "
"access right see :ref:`telemetry-users-roles-tenants`. As it can be seen in "
"the above example, there are two VM instances existing in the system, as "
"there are VM instance related meters on the top of the result list. The "
"existence of these meters does not indicate that these instances are running "
"at the time of the request. The result contains the currently collected "
"meters per resource, in an ascending order based on the name of the meter."
msgstr ""
#: ../telemetry-data-retrieval.rst:234
msgid ""
"Samples are collected for each meter that is present in the list of meters, "
"except in case of instances that are not running or deleted from the "
"OpenStack Compute database. If an instance no longer exists and there is a "
"``time_to_live`` value set in the :file:`ceilometer.conf` configuration "
"file, then a group of samples are deleted in each expiration cycle. When the "
"last sample is deleted for a meter, the database can be cleaned up by "
"running ceilometer-expirer and the meter will not be present in the list "
"above anymore. For more information about the expiration procedure see :ref:"
"`telemetry-storing-samples`."
msgstr ""
#: ../telemetry-data-retrieval.rst:244
msgid ""
"The Telemetry API supports simple query on the meter endpoint. The query "
"functionality has the following syntax::"
msgstr ""
#: ../telemetry-data-retrieval.rst:249
msgid ""
"The following command needs to be invoked to request the meters of one VM "
"instance::"
msgstr ""
#: ../telemetry-data-retrieval.rst:274
msgid ""
"As it was described above, the whole set of samples can be retrieved that "
"are stored for a meter or filtering the result set by using one of the "
"available query types. The request for all the samples of the ``cpu`` meter "
"without any additional filtering looks like the following::"
msgstr ""
#: ../telemetry-data-retrieval.rst:292
msgid ""
"The result set of the request contains the samples for both instances "
"ordered by the timestamp field in the default descending order."
msgstr ""
#: ../telemetry-data-retrieval.rst:295
msgid ""
"The simple query makes it possible to retrieve only a subset of the "
"collected samples. The following command can be executed to request the "
"``cpu`` samples of only one of the VM instances::"
msgstr ""
#: ../telemetry-data-retrieval.rst:313
msgid ""
"As it can be seen on the output above, the result set contains samples for "
"only one instance of the two."
msgstr ""
#: ../telemetry-data-retrieval.rst:316
msgid ""
"The ``ceilometer query-samples`` command is used to execute rich queries. "
"This command accepts the following parameters:"
msgstr ""
#: ../telemetry-data-retrieval.rst:320
msgid ""
"Contains the filter expression for the query in the form of: ``{complex_op: "
"[{simple_op: {field_name: value}}]}``."
msgstr ""
#: ../telemetry-data-retrieval.rst:321
msgid "``--filter``"
msgstr ""
#: ../telemetry-data-retrieval.rst:324
msgid ""
"Contains the list of ``orderby`` expressions in the form of: ``[{field_name: "
"direction}, {field_name: direction}]``."
msgstr ""
#: ../telemetry-data-retrieval.rst:325
msgid "``--orderby``"
msgstr ""
#: ../telemetry-data-retrieval.rst:328
msgid "Specifies the maximum number of samples to return."
msgstr ""
#: ../telemetry-data-retrieval.rst:328
msgid "``--limit``"
msgstr ""
#: ../telemetry-data-retrieval.rst:330
msgid ""
"For more information about complex queries see :ref:`Complex query <complex-"
"query>`."
msgstr ""
#: ../telemetry-data-retrieval.rst:333
msgid ""
"As the complex query functionality provides the possibility of using complex "
"operators, it is possible to retrieve a subset of samples for a given VM "
"instance. To request for the first six samples for the ``cpu`` and ``disk."
"read.bytes`` meters, the following command should be invoked::"
msgstr ""
#: ../telemetry-data-retrieval.rst:352
msgid ""
"Ceilometer also captures data as events, which represents the state of a "
"resource. Refer to :doc:`/telemetry-events` for more information regarding "
"Events."
msgstr ""
#: ../telemetry-data-retrieval.rst:356
msgid ""
"To retrieve a list of recent events that occurred in the system, the "
"following command can be executed:"
msgstr ""
#: ../telemetry-data-retrieval.rst:390
msgid ""
"In Liberty, the data returned corresponds to the role and user. Non-admin "
"users will only return events that are scoped to them. Admin users will "
"return all events related to the project they administer as well as all "
"unscoped events."
msgstr ""
#: ../telemetry-data-retrieval.rst:395
msgid ""
"Similar to querying meters, additional filter parameters can be given to "
"retrieve specific events:"
msgstr ""
#: ../telemetry-data-retrieval.rst:431
msgid ""
"As of the Liberty release, the number of items returned will be restricted "
"to the value defined by ``default_api_return_limit`` in the :file:"
"`ceilometer.conf` configuration file. Alternatively, the value can be set "
"per query by passing the ``limit`` option in the request."
msgstr ""
#: ../telemetry-data-retrieval.rst:438
msgid "Telemetry python bindings"
msgstr ""
#: ../telemetry-data-retrieval.rst:439
msgid ""
"The command line client library provides python bindings in order to use the "
"Telemetry Python API directly from python programs."
msgstr ""
#: ../telemetry-data-retrieval.rst:442
msgid ""
"The first step in setting up the client is to create a client instance with "
"the proper credentials::"
msgstr ""
#: ../telemetry-data-retrieval.rst:448
msgid ""
"The ``VERSION`` parameter can be ``1`` or ``2``, specifying the API version "
"to be used."
msgstr ""
#: ../telemetry-data-retrieval.rst:451
msgid "The method calls look like the following::"
msgstr ""
#: ../telemetry-data-retrieval.rst:459
msgid ""
"For further details about the python-ceilometerclient package, see the "
"`Python bindings to the OpenStack Ceilometer API <http://docs.openstack.org/"
"developer/python-ceilometerclient/>`__ reference."
msgstr ""
#: ../telemetry-data-retrieval.rst:467
msgid "Publishers"
msgstr ""
#: ../telemetry-data-retrieval.rst:468
msgid ""
"The Telemetry service provides several transport methods to forward the data "
"collected to the ceilometer-collector service or to an external system. The "
"consumers of this data are widely different, like monitoring systems, for "
"which data loss is acceptable and billing systems, which require reliable "
"data transportation. Telemetry provides methods to fulfill the requirements "
"of both kind of systems, as it is described below."
msgstr ""
#: ../telemetry-data-retrieval.rst:476
msgid ""
"The publisher component makes it possible to persist the data into storage "
"through the message bus or to send it to one or more external consumers. One "
"chain can contain multiple publishers."
msgstr ""
#: ../telemetry-data-retrieval.rst:480
msgid ""
"To solve the above mentioned problem, the notion of multi-publisher can be "
"configured for each datapoint within the Telemetry service, allowing the "
"same technical meter or event to be published multiple times to multiple "
"destinations, each potentially using a different transport."
msgstr ""
#: ../telemetry-data-retrieval.rst:485
msgid ""
"Publishers can be specified in the ``publishers`` section for each pipeline "
"(for further details about pipelines see :ref:`data-collection-and-"
"processing`) that is defined in the `pipeline.yaml <https://git.openstack."
"org/cgit/openstack/ceilometer/plain/etc/ceilometer/pipeline.yaml>`__ file."
msgstr ""
#: ../telemetry-data-retrieval.rst:492
msgid "The following publisher types are supported:"
msgstr ""
#: ../telemetry-data-retrieval.rst:495
msgid ""
"It can be specified in the form of ``notifier://?"
"option1=value1&option2=value2``. It emits data over AMQP using oslo."
"messaging. This is the recommended method of publishing."
msgstr ""
#: ../telemetry-data-retrieval.rst:498
msgid "notifier"
msgstr ""
#: ../telemetry-data-retrieval.rst:501
msgid ""
"It can be specified in the form of ``rpc://?option1=value1&option2=value2``. "
"It emits metering data over lossy AMQP. This method is synchronous and may "
"experience performance issues. This publisher is deprecated in Liberty in "
"favor of the notifier publisher."
msgstr ""
#: ../telemetry-data-retrieval.rst:505
msgid "rpc"
msgstr ""
#: ../telemetry-data-retrieval.rst:508
msgid ""
"It can be specified in the form of ``udp://<host>:<port>/``. It emits "
"metering data for over UDP."
msgstr ""
#: ../telemetry-data-retrieval.rst:509
msgid "udp"
msgstr ""
#: ../telemetry-data-retrieval.rst:512
msgid ""
"It can be specified in the form of ``file://path?"
"option1=value1&option2=value2``. This publisher records metering data into a "
"file."
msgstr ""
#: ../telemetry-data-retrieval.rst:514
msgid "file"
msgstr ""
#: ../telemetry-data-retrieval.rst:518
msgid ""
"If a file name and location is not specified, this publisher does not log "
"any meters, instead it logs a warning message in the configured log file for "
"Telemetry."
msgstr ""
#: ../telemetry-data-retrieval.rst:523
msgid ""
"It can be specified in the form of: ``kafka://kafka_broker_ip: "
"kafka_broker_port?topic=kafka_topic &option1=value1``."
msgstr ""
#: ../telemetry-data-retrieval.rst:527
msgid "This publisher sends metering data to a kafka broker."
msgstr ""
#: ../telemetry-data-retrieval.rst:527
msgid "kafka"
msgstr ""
#: ../telemetry-data-retrieval.rst:531
msgid ""
"If the topic parameter is missing, this publisher brings out metering data "
"under a topic name, ``ceilometer``. When the port number is not specified, "
"this publisher uses 9092 as the broker's port."
msgstr ""
#: ../telemetry-data-retrieval.rst:536
msgid ""
"The following options are available for ``rpc`` and ``notifier``. The policy "
"option can be used by ``kafka`` publisher:"
msgstr ""
#: ../telemetry-data-retrieval.rst:540
msgid ""
"The value of it is 1. It is used for publishing the samples on additional "
"``metering_topic.sample_name`` topic queue besides the default "
"``metering_topic`` queue."
msgstr ""
#: ../telemetry-data-retrieval.rst:542
msgid "``per_meter_topic``"
msgstr ""
#: ../telemetry-data-retrieval.rst:545
msgid ""
"It is used for configuring the behavior for the case, when the publisher "
"fails to send the samples, where the possible predefined values are the "
"following:"
msgstr ""
#: ../telemetry-data-retrieval.rst:550
msgid "Used for waiting and blocking until the samples have been sent."
msgstr ""
#: ../telemetry-data-retrieval.rst:550
msgid "default"
msgstr ""
#: ../telemetry-data-retrieval.rst:553
msgid "Used for dropping the samples which are failed to be sent."
msgstr ""
#: ../telemetry-data-retrieval.rst:553
msgid "drop"
msgstr ""
#: ../telemetry-data-retrieval.rst:556
msgid ""
"Used for creating an in-memory queue and retrying to send the samples on the "
"queue on the next samples publishing period (the queue length can be "
"configured with ``max_queue_length``, where 1024 is the default value)."
msgstr ""
#: ../telemetry-data-retrieval.rst:559
msgid "``policy``"
msgstr ""
#: ../telemetry-data-retrieval.rst:559
msgid "queue"
msgstr ""
#: ../telemetry-data-retrieval.rst:561
msgid ""
"The following option is additionally available for the ``notifier`` "
"publisher:"
msgstr ""
#: ../telemetry-data-retrieval.rst:564
msgid ""
"The topic name of queue to publish to. Setting this will override the "
"default topic defined by ``metering_topic`` and ``event_topic`` options. "
"This option can be used to support multiple consumers. Support for this "
"feature was added in Kilo."
msgstr ""
#: ../telemetry-data-retrieval.rst:567
msgid "``topic``"
msgstr ""
#: ../telemetry-data-retrieval.rst:569
msgid "The following options are available for the ``file`` publisher:"
msgstr ""
#: ../telemetry-data-retrieval.rst:572
msgid ""
"When this option is greater than zero, it will cause a rollover. When the "
"size is about to be exceeded, the file is closed and a new file is silently "
"opened for output. If its value is zero, rollover never occurs."
msgstr ""
#: ../telemetry-data-retrieval.rst:575
msgid "``max_bytes``"
msgstr ""
#: ../telemetry-data-retrieval.rst:578
msgid ""
"If this value is non-zero, an extension will be appended to the filename of "
"the old log, as '.1', '.2', and so forth until the specified value is "
"reached. The file that is written and contains the newest data is always the "
"one that is specified without any extensions."
msgstr ""
#: ../telemetry-data-retrieval.rst:582
msgid "``backup_count``"
msgstr ""
#: ../telemetry-data-retrieval.rst:584
msgid ""
"The default publisher is ``notifier``, without any additional options "
"specified. A sample ``publishers`` section in the :file:`/etc/ceilometer/"
"pipeline.yaml` looks like the following::"
msgstr ""
#: ../telemetry-events.rst:3
msgid "Events"
msgstr ""
#: ../telemetry-events.rst:5
msgid ""
"In addition to meters, the Telemetry service collects events triggered "
"within an OpenStack environment. This section provides a brief summary of "
"the events format in the Telemetry service."
msgstr ""
#: ../telemetry-events.rst:9
msgid ""
"While a sample represents a single, numeric datapoint within a time-series, "
"an event is a broader concept that represents the state of a resource at a "
"point in time. The state may be described using various data types including "
"non-numeric data such as an instance's flavor. In general, events represent "
"any action made in the OpenStack system."
msgstr ""
#: ../telemetry-events.rst:16
msgid "Event configuration"
msgstr ""
#: ../telemetry-events.rst:17
msgid ""
"To enable the creation and storage of events in the Telemetry service "
"``store_events`` option needs to be set to ``True``. For further "
"configuration options, see the event section in the `OpenStack Configuration "
"Reference <http://docs.openstack.org/kilo/config-reference/content/"
"ch_configuring-openstack-telemetry.html>`__."
msgstr ""
#: ../telemetry-events.rst:24
msgid ""
"It is advisable to set ``disable_non_metric_meters`` to ``True`` when "
"enabling events in the Telemetry service. The Telemetry service historically "
"represented events as metering data, which may create duplication of data if "
"both events and non-metric meters are enabled."
msgstr ""
#: ../telemetry-events.rst:31
msgid "Event structure"
msgstr ""
#: ../telemetry-events.rst:32
msgid ""
"Events captured by the Telemetry service are represented by five key "
"attributes:"
msgstr ""
#: ../telemetry-events.rst:36
msgid ""
"A dotted string defining what event occurred such as ``\"compute.instance."
"resize.start\"``."
msgstr ""
#: ../telemetry-events.rst:37 ../telemetry-events.rst:125
msgid "event\\_type"
msgstr ""
#: ../telemetry-events.rst:40
msgid "A UUID for the event."
msgstr ""
#: ../telemetry-events.rst:40
msgid "message\\_id"
msgstr ""
#: ../telemetry-events.rst:43
msgid "A timestamp of when the event occurred in the system."
msgstr ""
#: ../telemetry-events.rst:43
msgid "generated"
msgstr ""
#: ../telemetry-events.rst:46
msgid ""
"A flat mapping of key-value pairs which describe the event. The event's "
"traits contain most of the details of the event. Traits are typed, and can "
"be strings, integers, floats, or datetimes."
msgstr ""
#: ../telemetry-events.rst:48 ../telemetry-events.rst:129
msgid "traits"
msgstr ""
#: ../telemetry-events.rst:51
msgid ""
"Mainly for auditing purpose, the full event message can be stored "
"(unindexed) for future evaluation."
msgstr ""
# #-#-#-#-# telemetry-events.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# ts-eql-volume-size.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../telemetry-events.rst:52 ../ts-eql-volume-size.rst:110
msgid "raw"
msgstr ""
#: ../telemetry-events.rst:55
msgid "Event indexing"
msgstr ""
#: ../telemetry-events.rst:56
msgid ""
"The general philosophy of notifications in OpenStack is to emit any and all "
"data someone might need, and let the consumer filter out what they are not "
"interested in. In order to make processing simpler and more efficient, the "
"notifications are stored and processed within Ceilometer as events. The "
"notification payload, which can be an arbitrarily complex JSON data "
"structure, is converted to a flat set of key-value pairs. This conversion is "
"specified by a config file."
msgstr ""
#: ../telemetry-events.rst:66
msgid ""
"The event format is meant for efficient processing and querying. Storage of "
"complete notifications for auditing purposes can be enabled by configuring "
"``store_raw`` option."
msgstr ""
#: ../telemetry-events.rst:71
msgid "Event conversion"
msgstr ""
#: ../telemetry-events.rst:72
msgid ""
"The conversion from notifications to events is driven by a configuration "
"file defined by the ``definitions_cfg_file`` in the :file:`ceilometer.conf` "
"configuration file."
msgstr ""
#: ../telemetry-events.rst:76
msgid ""
"This includes descriptions of how to map fields in the notification body to "
"Traits, and optional plug-ins for doing any programmatic translations "
"(splitting a string, forcing case)."
msgstr ""
#: ../telemetry-events.rst:80
msgid ""
"The mapping of notifications to events is defined per event\\_type, which "
"can be wildcarded. Traits are added to events if the corresponding fields in "
"the notification exist and are non-null."
msgstr ""
#: ../telemetry-events.rst:86
msgid ""
"The default definition file included with the Telemetry service contains a "
"list of known notifications and useful traits. The mappings provided can be "
"modified to include more or less data according to user requirements."
msgstr ""
#: ../telemetry-events.rst:91
msgid ""
"If the definitions file is not present, a warning will be logged, but an "
"empty set of definitions will be assumed. By default, any notifications that "
"do not have a corresponding event definition in the definitions file will be "
"converted to events with a set of minimal traits. This can be changed by "
"setting the option ``drop_unmatched_notifications`` in the :file:`ceilometer."
"conf` file. If this is set to True, any unmapped notifications will be "
"dropped."
msgstr ""
#: ../telemetry-events.rst:99
msgid ""
"The basic set of traits (all are TEXT type) that will be added to all events "
"if the notification has the relevant data are: service (notification's "
"publisher), tenant\\_id, and request\\_id. These do not have to be specified "
"in the event definition, they are automatically added, but their definitions "
"can be overridden for a given event\\_type."
msgstr ""
#: ../telemetry-events.rst:106
msgid "Event definitions format"
msgstr ""
#: ../telemetry-events.rst:107
msgid ""
"The event definitions file is in YAML format. It consists of a list of event "
"definitions, which are mappings. Order is significant, the list of "
"definitions is scanned in reverse order to find a definition which matches "
"the notification's event\\_type. That definition will be used to generate "
"the event. The reverse ordering is done because it is common to want to have "
"a more general wildcarded definition (such as ``compute.instance.*``) with a "
"set of traits common to all of those events, with a few more specific event "
"definitions afterwards that have all of the above traits, plus a few more."
msgstr ""
#: ../telemetry-events.rst:117
msgid "Each event definition is a mapping with two keys:"
msgstr ""
#: ../telemetry-events.rst:120
msgid ""
"This is a list (or a string, which will be taken as a 1 element list) of "
"event\\_types this definition will handle. These can be wildcarded with unix "
"shell glob syntax. An exclusion listing (starting with a ``!``) will exclude "
"any types listed from matching. If only exclusions are listed, the "
"definition will match anything not matching the exclusions."
msgstr ""
#: ../telemetry-events.rst:128
msgid ""
"This is a mapping, the keys are the trait names, and the values are trait "
"definitions."
msgstr ""
#: ../telemetry-events.rst:131
msgid "Each trait definition is a mapping with the following keys:"
msgstr ""
#: ../telemetry-events.rst:134
msgid ""
"A path specification for the field(s) in the notification you wish to "
"extract for this trait. Specifications can be written to match multiple "
"possible fields. By default the value will be the first such field. The "
"paths can be specified with a dot syntax (``payload.host``). Square bracket "
"syntax (``payload[host]``) is also supported. In either case, if the key for "
"the field you are looking for contains special characters, like ``.``, it "
"will need to be quoted (with double or single quotes): ``payload."
"image_meta.org.openstack__1__architecture``. The syntax used for the field "
"specification is a variant of `JSONPath <https://github.com/kennknowles/"
"python-jsonpath-rw>`__"
msgstr ""
#: ../telemetry-events.rst:144
msgid "fields"
msgstr ""
#: ../telemetry-events.rst:147
msgid ""
"(Optional) The data type for this trait. Valid options are: ``text``, "
"``int``, ``float``, and ``datetime``. Defaults to ``text`` if not specified."
msgstr ""
#: ../telemetry-events.rst:149
msgid "type"
msgstr ""
#: ../telemetry-events.rst:152
msgid ""
"(Optional) Used to execute simple programmatic conversions on the value in a "
"notification field."
msgstr ""
#: ../telemetry-events.rst:152
msgid "plugin"
msgstr ""
#: ../telemetry-measurements.rst:5
msgid "Measurements"
msgstr ""
#: ../telemetry-measurements.rst:7
msgid ""
"The Telemetry service collects meters within an OpenStack deployment. This "
"section provides a brief summary about meters format and origin and also "
"contains the list of available meters."
msgstr ""
#: ../telemetry-measurements.rst:11
msgid ""
"Telemetry collects meters by polling the infrastructure elements and also by "
"consuming the notifications emitted by other OpenStack services. For more "
"information about the polling mechanism and notifications see :ref:"
"`telemetry-data-collection`. There are several meters which are collected by "
"polling and by consuming. The origin for each meter is listed in the tables "
"below."
msgstr ""
#: ../telemetry-measurements.rst:20
msgid ""
"You may need to configure Telemetry or other OpenStack services in order to "
"be able to collect all the samples you need. For further information about "
"configuration requirements see the `Telemetry chapter <http://docs.openstack."
"org/liberty/install-guide-ubuntu/ceilometer.html>`__ in the OpenStack "
"Installation Guide. Also check the `Telemetry manual installation <http://"
"docs.openstack.org/developer/ceilometer/install/manual.html>`__ description."
msgstr ""
#: ../telemetry-measurements.rst:28
msgid "Telemetry uses the following meter types:"
msgstr ""
#: ../telemetry-measurements.rst:33
msgid "Increasing over time (instance hours)"
msgstr ""
#: ../telemetry-measurements.rst:35
msgid "Changing over time (bandwidth)"
msgstr ""
#: ../telemetry-measurements.rst:37
msgid ""
"Discrete items (floating IPs, image uploads) and fluctuating values (disk I/"
"O)"
msgstr ""
#: ../telemetry-measurements.rst:43
msgid ""
"Telemetry provides the possibility to store metadata for samples. This "
"metadata can be extended for OpenStack Compute and OpenStack Object Storage."
msgstr ""
#: ../telemetry-measurements.rst:47
msgid ""
"In order to add additional metadata information to OpenStack Compute you "
"have two options to choose from. The first one is to specify them when you "
"boot up a new instance. The additional information will be stored with the "
"sample in the form of ``resource_metadata.user_metadata.*``. The new field "
"should be defined by using the prefix ``metering.``. The modified boot "
"command look like the following::"
msgstr ""
#: ../telemetry-measurements.rst:56
msgid ""
"The other option is to set the ``reserved_metadata_keys`` to the list of "
"metadata keys that you would like to be included in ``resource_metadata`` of "
"the instance related samples that are collected for OpenStack Compute. This "
"option is included in the ``DEFAULT`` section of the :file:`ceilometer.conf` "
"configuration file."
msgstr ""
#: ../telemetry-measurements.rst:62
msgid ""
"You might also specify headers whose values will be stored along with the "
"sample data of OpenStack Object Storage. The additional information is also "
"stored under ``resource_metadata``. The format of the new field is "
"``resource_metadata.http_header_$name``, where ``$name`` is the name of the "
"header with ``-`` replaced by ``_``."
msgstr ""
#: ../telemetry-measurements.rst:68
msgid ""
"For specifying the new header, you need to set ``metadata_headers`` option "
"under the ``[filter:ceilometer]`` section in ``proxy-server.conf`` under the "
"``swift`` folder. You can use this additional data for instance to "
"distinguish external and internal users."
msgstr ""
#: ../telemetry-measurements.rst:73
msgid ""
"Measurements are grouped by services which are polled by Telemetry or emit "
"notifications that this service consumes."
msgstr ""
#: ../telemetry-measurements.rst:78
msgid ""
"The Telemetry service supports storing notifications as events. This "
"functionality was added later, therefore the list of meters still contains "
"existence type and other event related items. The proper way of using "
"Telemetry is to configure it to use the event store and turn off the "
"collection of the event related meters. For further information about events "
"see `Events section <http://docs.openstack.org/developer/ceilometer/events."
"html>`__ in the Telemetry documentation. For further information about how "
"to turn on and off meters see :ref:`telemetry-pipeline-configuration`. "
"Please also note that currently no migration is available to move the "
"already existing event type samples to the event store."
msgstr ""
#: ../telemetry-measurements.rst:94
msgid "The following meters are collected for OpenStack Compute:"
msgstr ""
#: ../telemetry-measurements.rst:97 ../telemetry-measurements.rst:430
#: ../telemetry-measurements.rst:484 ../telemetry-measurements.rst:528
#: ../telemetry-measurements.rst:589 ../telemetry-measurements.rst:664
#: ../telemetry-measurements.rst:697 ../telemetry-measurements.rst:762
#: ../telemetry-measurements.rst:812 ../telemetry-measurements.rst:849
#: ../telemetry-measurements.rst:920 ../telemetry-measurements.rst:983
#: ../telemetry-measurements.rst:1069 ../telemetry-measurements.rst:1147
#: ../telemetry-measurements.rst:1212 ../telemetry-measurements.rst:1263
#: ../telemetry-measurements.rst:1289 ../telemetry-measurements.rst:1312
#: ../telemetry-measurements.rst:1332
msgid "Origin"
msgstr ""
#: ../telemetry-measurements.rst:97
msgid "Support"
msgstr ""
#: ../telemetry-measurements.rst:97 ../telemetry-measurements.rst:430
#: ../telemetry-measurements.rst:484 ../telemetry-measurements.rst:528
#: ../telemetry-measurements.rst:589 ../telemetry-measurements.rst:664
#: ../telemetry-measurements.rst:697 ../telemetry-measurements.rst:762
#: ../telemetry-measurements.rst:812 ../telemetry-measurements.rst:849
#: ../telemetry-measurements.rst:920 ../telemetry-measurements.rst:983
#: ../telemetry-measurements.rst:1069 ../telemetry-measurements.rst:1147
#: ../telemetry-measurements.rst:1212 ../telemetry-measurements.rst:1263
#: ../telemetry-measurements.rst:1289 ../telemetry-measurements.rst:1312
#: ../telemetry-measurements.rst:1332
msgid "Unit"
msgstr ""
#: ../telemetry-measurements.rst:99 ../telemetry-measurements.rst:432
#: ../telemetry-measurements.rst:666 ../telemetry-measurements.rst:699
#: ../telemetry-measurements.rst:764 ../telemetry-measurements.rst:922
#: ../telemetry-measurements.rst:985 ../telemetry-measurements.rst:1265
#: ../telemetry-measurements.rst:1334
msgid "**Meters added in the Icehouse release or earlier**"
msgstr ""
#: ../telemetry-measurements.rst:101 ../telemetry-measurements.rst:200
msgid "Existence of instance"
msgstr ""
#: ../telemetry-measurements.rst:101 ../telemetry-measurements.rst:105
#: ../telemetry-measurements.rst:122 ../telemetry-measurements.rst:133
#: ../telemetry-measurements.rst:140 ../telemetry-measurements.rst:147
#: ../telemetry-measurements.rst:154 ../telemetry-measurements.rst:168
#: ../telemetry-measurements.rst:176 ../telemetry-measurements.rst:184
#: ../telemetry-measurements.rst:193 ../telemetry-measurements.rst:236
#: ../telemetry-measurements.rst:245 ../telemetry-measurements.rst:254
#: ../telemetry-measurements.rst:263
msgid "Libvirt, Hyper-V, vSphere"
msgstr ""
#: ../telemetry-measurements.rst:101 ../telemetry-measurements.rst:105
#: ../telemetry-measurements.rst:200 ../telemetry-measurements.rst:205
#: ../telemetry-measurements.rst:348
msgid "Notific\\ ation, Pollster"
msgstr ""
#: ../telemetry-measurements.rst:101 ../telemetry-measurements.rst:105
msgid "inst\\ ance"
msgstr ""
#: ../telemetry-measurements.rst:101 ../telemetry-measurements.rst:200
msgid "instance"
msgstr ""
#: ../telemetry-measurements.rst:101 ../telemetry-measurements.rst:105
#: ../telemetry-measurements.rst:109 ../telemetry-measurements.rst:113
#: ../telemetry-measurements.rst:119 ../telemetry-measurements.rst:122
#: ../telemetry-measurements.rst:126 ../telemetry-measurements.rst:130
#: ../telemetry-measurements.rst:133 ../telemetry-measurements.rst:137
#: ../telemetry-measurements.rst:140 ../telemetry-measurements.rst:144
#: ../telemetry-measurements.rst:147 ../telemetry-measurements.rst:151
#: ../telemetry-measurements.rst:154 ../telemetry-measurements.rst:158
#: ../telemetry-measurements.rst:161 ../telemetry-measurements.rst:200
#: ../telemetry-measurements.rst:205 ../telemetry-measurements.rst:210
#: ../telemetry-measurements.rst:216 ../telemetry-measurements.rst:221
#: ../telemetry-measurements.rst:226 ../telemetry-measurements.rst:290
#: ../telemetry-measurements.rst:296 ../telemetry-measurements.rst:301
#: ../telemetry-measurements.rst:304 ../telemetry-measurements.rst:314
#: ../telemetry-measurements.rst:318 ../telemetry-measurements.rst:324
#: ../telemetry-measurements.rst:348 ../telemetry-measurements.rst:355
msgid "instance ID"
msgstr ""
#: ../telemetry-measurements.rst:105 ../telemetry-measurements.rst:205
#: ../telemetry-measurements.rst:348
msgid "Existence of instance <type> (OpenStack types)"
msgstr ""
#: ../telemetry-measurements.rst:105
msgid "instance:\\ <type>"
msgstr ""
#: ../telemetry-measurements.rst:109 ../telemetry-measurements.rst:119
#: ../telemetry-measurements.rst:126 ../telemetry-measurements.rst:130
#: ../telemetry-measurements.rst:137 ../telemetry-measurements.rst:144
#: ../telemetry-measurements.rst:151 ../telemetry-measurements.rst:158
#: ../telemetry-measurements.rst:161 ../telemetry-measurements.rst:164
#: ../telemetry-measurements.rst:172 ../telemetry-measurements.rst:180
#: ../telemetry-measurements.rst:189 ../telemetry-measurements.rst:232
#: ../telemetry-measurements.rst:241 ../telemetry-measurements.rst:250
#: ../telemetry-measurements.rst:259 ../telemetry-measurements.rst:355
msgid "Libvirt, Hyper-V"
msgstr ""
#: ../telemetry-measurements.rst:109 ../telemetry-measurements.rst:113
#: ../telemetry-measurements.rst:210 ../telemetry-measurements.rst:290
#: ../telemetry-measurements.rst:296
msgid "MB"
msgstr ""
#: ../telemetry-measurements.rst:109 ../telemetry-measurements.rst:126
#: ../telemetry-measurements.rst:158 ../telemetry-measurements.rst:161
#: ../telemetry-measurements.rst:775 ../telemetry-measurements.rst:778
#: ../telemetry-measurements.rst:781
msgid "Notific\\ ation"
msgstr ""
#: ../telemetry-measurements.rst:109
msgid "Volume of RAM allocated to the instance"
msgstr ""
#: ../telemetry-measurements.rst:109
msgid "memory"
msgstr ""
#: ../telemetry-measurements.rst:113 ../telemetry-measurements.rst:119
#: ../telemetry-measurements.rst:122 ../telemetry-measurements.rst:130
#: ../telemetry-measurements.rst:133 ../telemetry-measurements.rst:137
#: ../telemetry-measurements.rst:140 ../telemetry-measurements.rst:144
#: ../telemetry-measurements.rst:147 ../telemetry-measurements.rst:151
#: ../telemetry-measurements.rst:154 ../telemetry-measurements.rst:164
#: ../telemetry-measurements.rst:168 ../telemetry-measurements.rst:172
#: ../telemetry-measurements.rst:176 ../telemetry-measurements.rst:180
#: ../telemetry-measurements.rst:184 ../telemetry-measurements.rst:189
#: ../telemetry-measurements.rst:193 ../telemetry-measurements.rst:210
#: ../telemetry-measurements.rst:216 ../telemetry-measurements.rst:221
#: ../telemetry-measurements.rst:226 ../telemetry-measurements.rst:232
#: ../telemetry-measurements.rst:236 ../telemetry-measurements.rst:241
#: ../telemetry-measurements.rst:245 ../telemetry-measurements.rst:250
#: ../telemetry-measurements.rst:254 ../telemetry-measurements.rst:259
#: ../telemetry-measurements.rst:263 ../telemetry-measurements.rst:268
#: ../telemetry-measurements.rst:273 ../telemetry-measurements.rst:278
#: ../telemetry-measurements.rst:283 ../telemetry-measurements.rst:290
#: ../telemetry-measurements.rst:296 ../telemetry-measurements.rst:301
#: ../telemetry-measurements.rst:304 ../telemetry-measurements.rst:307
#: ../telemetry-measurements.rst:311 ../telemetry-measurements.rst:314
#: ../telemetry-measurements.rst:318 ../telemetry-measurements.rst:324
#: ../telemetry-measurements.rst:329 ../telemetry-measurements.rst:334
#: ../telemetry-measurements.rst:340 ../telemetry-measurements.rst:355
#: ../telemetry-measurements.rst:532 ../telemetry-measurements.rst:535
#: ../telemetry-measurements.rst:541 ../telemetry-measurements.rst:544
#: ../telemetry-measurements.rst:547 ../telemetry-measurements.rst:552
#: ../telemetry-measurements.rst:557 ../telemetry-measurements.rst:561
#: ../telemetry-measurements.rst:565 ../telemetry-measurements.rst:593
#: ../telemetry-measurements.rst:596 ../telemetry-measurements.rst:599
#: ../telemetry-measurements.rst:602 ../telemetry-measurements.rst:605
#: ../telemetry-measurements.rst:608 ../telemetry-measurements.rst:611
#: ../telemetry-measurements.rst:614 ../telemetry-measurements.rst:617
#: ../telemetry-measurements.rst:620 ../telemetry-measurements.rst:623
#: ../telemetry-measurements.rst:626 ../telemetry-measurements.rst:630
#: ../telemetry-measurements.rst:633 ../telemetry-measurements.rst:637
#: ../telemetry-measurements.rst:641 ../telemetry-measurements.rst:645
#: ../telemetry-measurements.rst:650 ../telemetry-measurements.rst:655
#: ../telemetry-measurements.rst:766 ../telemetry-measurements.rst:769
#: ../telemetry-measurements.rst:772 ../telemetry-measurements.rst:786
#: ../telemetry-measurements.rst:789 ../telemetry-measurements.rst:816
#: ../telemetry-measurements.rst:818 ../telemetry-measurements.rst:821
#: ../telemetry-measurements.rst:824 ../telemetry-measurements.rst:829
#: ../telemetry-measurements.rst:832 ../telemetry-measurements.rst:987
#: ../telemetry-measurements.rst:990 ../telemetry-measurements.rst:993
#: ../telemetry-measurements.rst:996 ../telemetry-measurements.rst:999
#: ../telemetry-measurements.rst:1002 ../telemetry-measurements.rst:1005
#: ../telemetry-measurements.rst:1008 ../telemetry-measurements.rst:1011
#: ../telemetry-measurements.rst:1014 ../telemetry-measurements.rst:1017
#: ../telemetry-measurements.rst:1021 ../telemetry-measurements.rst:1025
#: ../telemetry-measurements.rst:1028 ../telemetry-measurements.rst:1031
#: ../telemetry-measurements.rst:1034 ../telemetry-measurements.rst:1037
#: ../telemetry-measurements.rst:1040 ../telemetry-measurements.rst:1043
#: ../telemetry-measurements.rst:1045 ../telemetry-measurements.rst:1048
#: ../telemetry-measurements.rst:1052 ../telemetry-measurements.rst:1055
#: ../telemetry-measurements.rst:1089 ../telemetry-measurements.rst:1093
#: ../telemetry-measurements.rst:1097 ../telemetry-measurements.rst:1101
#: ../telemetry-measurements.rst:1336 ../telemetry-measurements.rst:1338
msgid "Pollster"
msgstr ""
#: ../telemetry-measurements.rst:113 ../telemetry-measurements.rst:210
msgid ""
"Volume of RAM used by the instance from the amount of its allocated memory"
msgstr ""
#: ../telemetry-measurements.rst:113 ../telemetry-measurements.rst:210
#: ../telemetry-measurements.rst:290
msgid "memory.\\ usage"
msgstr ""
#: ../telemetry-measurements.rst:113
msgid "vSphere"
msgstr ""
#: ../telemetry-measurements.rst:119
msgid "CPU time used"
msgstr ""
#: ../telemetry-measurements.rst:119 ../telemetry-measurements.rst:144
#: ../telemetry-measurements.rst:151 ../telemetry-measurements.rst:164
#: ../telemetry-measurements.rst:172 ../telemetry-measurements.rst:180
#: ../telemetry-measurements.rst:189 ../telemetry-measurements.rst:232
#: ../telemetry-measurements.rst:241 ../telemetry-measurements.rst:250
#: ../telemetry-measurements.rst:259 ../telemetry-measurements.rst:437
#: ../telemetry-measurements.rst:440 ../telemetry-measurements.rst:443
#: ../telemetry-measurements.rst:446
msgid "Cumu\\ lative"
msgstr ""
#: ../telemetry-measurements.rst:119 ../telemetry-measurements.rst:355
#: ../telemetry-measurements.rst:437 ../telemetry-measurements.rst:440
#: ../telemetry-measurements.rst:443 ../telemetry-measurements.rst:446
#: ../telemetry-measurements.rst:1048
msgid "ns"
msgstr ""
#: ../telemetry-measurements.rst:122 ../telemetry-measurements.rst:216
#: ../telemetry-measurements.rst:449 ../telemetry-measurements.rst:452
#: ../telemetry-measurements.rst:455 ../telemetry-measurements.rst:458
#: ../telemetry-measurements.rst:461 ../telemetry-measurements.rst:557
#: ../telemetry-measurements.rst:561 ../telemetry-measurements.rst:565
#: ../telemetry-measurements.rst:655
msgid "%"
msgstr ""
#: ../telemetry-measurements.rst:122 ../telemetry-measurements.rst:216
msgid "Average CPU utilization"
msgstr ""
#: ../telemetry-measurements.rst:122 ../telemetry-measurements.rst:216
msgid "cpu_util"
msgstr ""
#: ../telemetry-measurements.rst:126
msgid "Number of virtual CPUs allocated to the instance"
msgstr ""
#: ../telemetry-measurements.rst:126
msgid "vcpu"
msgstr ""
#: ../telemetry-measurements.rst:126
msgid "vcpus"
msgstr ""
#: ../telemetry-measurements.rst:130 ../telemetry-measurements.rst:137
#: ../telemetry-measurements.rst:626 ../telemetry-measurements.rst:630
#: ../telemetry-measurements.rst:633 ../telemetry-measurements.rst:637
#: ../telemetry-measurements.rst:641 ../telemetry-measurements.rst:645
#: ../telemetry-measurements.rst:650
msgid "Cumul\\ ative"
msgstr ""
#: ../telemetry-measurements.rst:130 ../telemetry-measurements.rst:232
msgid "Number of read requests"
msgstr ""
#: ../telemetry-measurements.rst:130
msgid "disk.read\\ .requests"
msgstr ""
#: ../telemetry-measurements.rst:130 ../telemetry-measurements.rst:137
#: ../telemetry-measurements.rst:232 ../telemetry-measurements.rst:241
msgid "req\\ uest"
msgstr ""
#: ../telemetry-measurements.rst:133 ../telemetry-measurements.rst:236
msgid "Average rate of read requests"
msgstr ""
#: ../telemetry-measurements.rst:133
msgid "disk.read\\ .requests\\ .rate"
msgstr ""
#: ../telemetry-measurements.rst:133 ../telemetry-measurements.rst:140
#: ../telemetry-measurements.rst:236 ../telemetry-measurements.rst:245
msgid "requ\\ est/s"
msgstr ""
#: ../telemetry-measurements.rst:137 ../telemetry-measurements.rst:241
msgid "Number of write requests"
msgstr ""
#: ../telemetry-measurements.rst:137
msgid "disk.writ\\ e.requests"
msgstr ""
#: ../telemetry-measurements.rst:140 ../telemetry-measurements.rst:245
msgid "Average rate of write requests"
msgstr ""
#: ../telemetry-measurements.rst:140
msgid "disk.writ\\ e.request\\ s.rate"
msgstr ""
#: ../telemetry-measurements.rst:144 ../telemetry-measurements.rst:151
#: ../telemetry-measurements.rst:164 ../telemetry-measurements.rst:172
#: ../telemetry-measurements.rst:250 ../telemetry-measurements.rst:259
#: ../telemetry-measurements.rst:314 ../telemetry-measurements.rst:318
#: ../telemetry-measurements.rst:324 ../telemetry-measurements.rst:329
#: ../telemetry-measurements.rst:334 ../telemetry-measurements.rst:340
#: ../telemetry-measurements.rst:626 ../telemetry-measurements.rst:630
#: ../telemetry-measurements.rst:685 ../telemetry-measurements.rst:688
#: ../telemetry-measurements.rst:769 ../telemetry-measurements.rst:775
#: ../telemetry-measurements.rst:778 ../telemetry-measurements.rst:789
#: ../telemetry-measurements.rst:818 ../telemetry-measurements.rst:832
#: ../telemetry-measurements.rst:973 ../telemetry-measurements.rst:999
#: ../telemetry-measurements.rst:1002 ../telemetry-measurements.rst:1055
#: ../telemetry-measurements.rst:1097 ../telemetry-measurements.rst:1101
msgid "B"
msgstr ""
#: ../telemetry-measurements.rst:144 ../telemetry-measurements.rst:250
msgid "Volume of reads"
msgstr ""
#: ../telemetry-measurements.rst:144
msgid "disk.read\\ .bytes"
msgstr ""
#: ../telemetry-measurements.rst:147 ../telemetry-measurements.rst:221
#: ../telemetry-measurements.rst:254
msgid "Average rate of reads"
msgstr ""
#: ../telemetry-measurements.rst:147 ../telemetry-measurements.rst:154
#: ../telemetry-measurements.rst:168 ../telemetry-measurements.rst:176
#: ../telemetry-measurements.rst:221 ../telemetry-measurements.rst:226
#: ../telemetry-measurements.rst:254 ../telemetry-measurements.rst:263
#: ../telemetry-measurements.rst:268 ../telemetry-measurements.rst:273
msgid "B/s"
msgstr ""
#: ../telemetry-measurements.rst:147 ../telemetry-measurements.rst:221
msgid "disk.read\\ .bytes.\\ rate"
msgstr ""
#: ../telemetry-measurements.rst:151 ../telemetry-measurements.rst:259
msgid "Volume of writes"
msgstr ""
#: ../telemetry-measurements.rst:151
msgid "disk.writ\\ e.bytes"
msgstr ""
#: ../telemetry-measurements.rst:154 ../telemetry-measurements.rst:226
#: ../telemetry-measurements.rst:263
msgid "Average rate of writes"
msgstr ""
#: ../telemetry-measurements.rst:154
msgid "disk.writ\\ e.bytes.\\ rate"
msgstr ""
#: ../telemetry-measurements.rst:158 ../telemetry-measurements.rst:161
#: ../telemetry-measurements.rst:704 ../telemetry-measurements.rst:712
msgid "GB"
msgstr ""
#: ../telemetry-measurements.rst:158
msgid "Size of root disk"
msgstr ""
#: ../telemetry-measurements.rst:158
msgid "disk.root\\ .size"
msgstr ""
#: ../telemetry-measurements.rst:161
msgid "Size of ephemeral disk"
msgstr ""
#: ../telemetry-measurements.rst:161
msgid "disk.ephe\\ meral.size"
msgstr ""
#: ../telemetry-measurements.rst:164
msgid "Number of incoming bytes"
msgstr ""
#: ../telemetry-measurements.rst:164 ../telemetry-measurements.rst:168
#: ../telemetry-measurements.rst:172 ../telemetry-measurements.rst:176
#: ../telemetry-measurements.rst:180 ../telemetry-measurements.rst:184
#: ../telemetry-measurements.rst:189 ../telemetry-measurements.rst:193
#: ../telemetry-measurements.rst:268 ../telemetry-measurements.rst:273
#: ../telemetry-measurements.rst:278 ../telemetry-measurements.rst:283
#: ../telemetry-measurements.rst:626 ../telemetry-measurements.rst:630
#: ../telemetry-measurements.rst:633
msgid "interface ID"
msgstr ""
#: ../telemetry-measurements.rst:164
msgid "network.\\ incoming.\\ bytes"
msgstr ""
#: ../telemetry-measurements.rst:168 ../telemetry-measurements.rst:268
msgid "Average rate of incoming bytes"
msgstr ""
#: ../telemetry-measurements.rst:168 ../telemetry-measurements.rst:268
msgid "network.\\ incoming.\\ bytes.rate"
msgstr ""
#: ../telemetry-measurements.rst:172
msgid "Number of outgoing bytes"
msgstr ""
#: ../telemetry-measurements.rst:172
msgid "network.\\ outgoing\\ .bytes"
msgstr ""
#: ../telemetry-measurements.rst:176 ../telemetry-measurements.rst:273
msgid "Average rate of outgoing bytes"
msgstr ""
#: ../telemetry-measurements.rst:176 ../telemetry-measurements.rst:273
msgid "network.\\ outgoing.\\ bytes.rate"
msgstr ""
#: ../telemetry-measurements.rst:180
msgid "Number of incoming packets"
msgstr ""
#: ../telemetry-measurements.rst:180
msgid "network.\\ incoming| .packets"
msgstr ""
#: ../telemetry-measurements.rst:180 ../telemetry-measurements.rst:189
msgid "pac\\ ket"
msgstr ""
#: ../telemetry-measurements.rst:184 ../telemetry-measurements.rst:278
msgid "Average rate of incoming packets"
msgstr ""
#: ../telemetry-measurements.rst:184
msgid "network.\\ incoming\\ .packets\\ .rate"
msgstr ""
#: ../telemetry-measurements.rst:184 ../telemetry-measurements.rst:278
#: ../telemetry-measurements.rst:283
msgid "pack\\ et/s"
msgstr ""
#: ../telemetry-measurements.rst:189
msgid "Number of outgoing packets"
msgstr ""
#: ../telemetry-measurements.rst:189
msgid "network.\\ outpoing\\ .packets"
msgstr ""
#: ../telemetry-measurements.rst:193 ../telemetry-measurements.rst:283
msgid "Average rate of outgoing packets"
msgstr ""
#: ../telemetry-measurements.rst:193
msgid "network.\\ outgoing\\ .packets\\ .rate"
msgstr ""
#: ../telemetry-measurements.rst:193
msgid "pac\\ ket/s"
msgstr ""
#: ../telemetry-measurements.rst:198
msgid "**Meters added or hypervisor support changed in the Juno release**"
msgstr ""
#: ../telemetry-measurements.rst:200 ../telemetry-measurements.rst:205
#: ../telemetry-measurements.rst:216 ../telemetry-measurements.rst:221
#: ../telemetry-measurements.rst:226 ../telemetry-measurements.rst:268
#: ../telemetry-measurements.rst:273 ../telemetry-measurements.rst:278
#: ../telemetry-measurements.rst:283 ../telemetry-measurements.rst:290
#: ../telemetry-measurements.rst:348
msgid "Libvirt, Hyper-V, vSphere, XenAPI"
msgstr ""
#: ../telemetry-measurements.rst:200 ../telemetry-measurements.rst:205
#: ../telemetry-measurements.rst:348
msgid "ins\\ tance"
msgstr ""
#: ../telemetry-measurements.rst:205 ../telemetry-measurements.rst:348
msgid "instance\\ :<type>"
msgstr ""
#: ../telemetry-measurements.rst:210
msgid "vSphere, XenAPI"
msgstr ""
#: ../telemetry-measurements.rst:226
msgid "disk.\\ write.\\ bytes.rate"
msgstr ""
#: ../telemetry-measurements.rst:232 ../telemetry-measurements.rst:236
#: ../telemetry-measurements.rst:241 ../telemetry-measurements.rst:245
#: ../telemetry-measurements.rst:250 ../telemetry-measurements.rst:254
#: ../telemetry-measurements.rst:259 ../telemetry-measurements.rst:263
#: ../telemetry-measurements.rst:307 ../telemetry-measurements.rst:311
#: ../telemetry-measurements.rst:329 ../telemetry-measurements.rst:334
#: ../telemetry-measurements.rst:340 ../telemetry-measurements.rst:602
#: ../telemetry-measurements.rst:605
msgid "disk ID"
msgstr ""
#: ../telemetry-measurements.rst:232
msgid "disk.dev\\ ice.read\\ .requests"
msgstr ""
#: ../telemetry-measurements.rst:236
msgid "disk.dev\\ ice.read\\ .requests\\ .rate"
msgstr ""
#: ../telemetry-measurements.rst:241
msgid "disk.dev\\ ice.write\\ .requests"
msgstr ""
#: ../telemetry-measurements.rst:245
msgid "disk.dev\\ ice.write\\ .requests\\ .rate"
msgstr ""
#: ../telemetry-measurements.rst:250
msgid "disk.dev\\ ice.read\\ .bytes"
msgstr ""
#: ../telemetry-measurements.rst:254
msgid "disk.dev\\ ice.read\\ .bytes .rate"
msgstr ""
#: ../telemetry-measurements.rst:259
msgid "disk.dev\\ ice.write\\ .bytes"
msgstr ""
#: ../telemetry-measurements.rst:263
msgid "disk.dev\\ ice.write\\ .bytes .rate"
msgstr ""
#: ../telemetry-measurements.rst:278
msgid "network.\\ incoming.\\ packets.\\ rate"
msgstr ""
#: ../telemetry-measurements.rst:283
msgid "network.\\ outgoing.\\ packets.\\ rate"
msgstr ""
#: ../telemetry-measurements.rst:288
msgid "**Meters added or hypervisor support changed in the Kilo release**"
msgstr ""
#: ../telemetry-measurements.rst:290
msgid ""
"Volume of RAM used by the inst\\ ance from the amount of its allocated memory"
msgstr ""
#: ../telemetry-measurements.rst:296 ../telemetry-measurements.rst:314
#: ../telemetry-measurements.rst:318 ../telemetry-measurements.rst:324
#: ../telemetry-measurements.rst:329 ../telemetry-measurements.rst:334
#: ../telemetry-measurements.rst:340
msgid "Libvirt"
msgstr ""
#: ../telemetry-measurements.rst:296
msgid "Volume of RAM u\\ sed by the inst\\ ance on the phy\\ sical machine"
msgstr ""
#: ../telemetry-measurements.rst:296
msgid "memory.r\\ esident"
msgstr ""
#: ../telemetry-measurements.rst:301
msgid "Average disk la\\ tency"
msgstr ""
#: ../telemetry-measurements.rst:301 ../telemetry-measurements.rst:304
#: ../telemetry-measurements.rst:307 ../telemetry-measurements.rst:311
msgid "Hyper-V"
msgstr ""
#: ../telemetry-measurements.rst:301
msgid "disk.lat\\ ency"
msgstr ""
#: ../telemetry-measurements.rst:301 ../telemetry-measurements.rst:307
msgid "ms"
msgstr ""
#: ../telemetry-measurements.rst:304
msgid "Average disk io\\ ps"
msgstr ""
#: ../telemetry-measurements.rst:304 ../telemetry-measurements.rst:311
msgid "coun\\ t/s"
msgstr ""
#: ../telemetry-measurements.rst:304
msgid "disk.iop\\ s"
msgstr ""
#: ../telemetry-measurements.rst:307
msgid "Average disk la\\ tency per device"
msgstr ""
#: ../telemetry-measurements.rst:307
msgid "disk.dev\\ ice.late\\ ncy"
msgstr ""
#: ../telemetry-measurements.rst:311
msgid "Average disk io\\ ps per device"
msgstr ""
#: ../telemetry-measurements.rst:311
msgid "disk.dev\\ ice.iops"
msgstr ""
#: ../telemetry-measurements.rst:314
msgid "The amount of d\\ isk that the in\\ stance can see"
msgstr ""
#: ../telemetry-measurements.rst:314
msgid "disk.cap\\ acity"
msgstr ""
#: ../telemetry-measurements.rst:318
msgid ""
"The amount of d\\ isk occupied by the instance o\\ n the host mach\\ ine"
msgstr ""
#: ../telemetry-measurements.rst:318
msgid "disk.all\\ ocation"
msgstr ""
#: ../telemetry-measurements.rst:324
msgid "The physical si\\ ze in bytes of the image conta\\ iner on the host"
msgstr ""
#: ../telemetry-measurements.rst:324
msgid "disk.usa\\ ge"
msgstr ""
#: ../telemetry-measurements.rst:329
msgid "The amount of d\\ isk per device that the instan\\ ce can see"
msgstr ""
#: ../telemetry-measurements.rst:329
msgid "disk.dev\\ ice.capa\\ city"
msgstr ""
#: ../telemetry-measurements.rst:334
msgid ""
"The amount of d\\ isk per device occupied by the instance on th\\ e host "
"machine"
msgstr ""
#: ../telemetry-measurements.rst:334
msgid "disk.dev\\ ice.allo\\ cation"
msgstr ""
#: ../telemetry-measurements.rst:340
msgid ""
"The physical si\\ ze in bytes of the image conta\\ iner on the hos\\ t per "
"device"
msgstr ""
#: ../telemetry-measurements.rst:340
msgid "disk.dev\\ ice.usag\\ e"
msgstr ""
#: ../telemetry-measurements.rst:346
msgid "**Meters deprecated in the Kilo release**"
msgstr ""
#: ../telemetry-measurements.rst:353
msgid "**Meters added in the Liberty release**"
msgstr ""
#: ../telemetry-measurements.rst:355
msgid "CPU time used s\\ ince previous d\\ atapoint"
msgstr ""
#: ../telemetry-measurements.rst:355
msgid "cpu.delta"
msgstr ""
#: ../telemetry-measurements.rst:364
msgid ""
"The ``instance:<type>`` meter can be replaced by using extra parameters in "
"both the samples and statistics queries. Sample queries look like the "
"following::"
msgstr ""
#: ../telemetry-measurements.rst:376
msgid ""
"The Telemetry service supports to create new meters by using transformers. "
"For more details about transformers see :ref:`telemetry-transformers`. Among "
"the meters gathered from libvirt and Hyper-V there are a few ones which are "
"generated from other meters. The list of meters that are created by using "
"the ``rate_of_change`` transformer from the above table is the following:"
msgstr ""
#: ../telemetry-measurements.rst:383
msgid "cpu\\_util"
msgstr ""
#: ../telemetry-measurements.rst:385
msgid "disk.read.requests.rate"
msgstr ""
#: ../telemetry-measurements.rst:387
msgid "disk.write.requests.rate"
msgstr ""
#: ../telemetry-measurements.rst:389
msgid "disk.read.bytes.rate"
msgstr ""
#: ../telemetry-measurements.rst:391
msgid "disk.write.bytes.rate"
msgstr ""
#: ../telemetry-measurements.rst:393
msgid "disk.device.read.requests.rate"
msgstr ""
#: ../telemetry-measurements.rst:395
msgid "disk.device.write.requests.rate"
msgstr ""
#: ../telemetry-measurements.rst:397
msgid "disk.device.read.bytes.rate"
msgstr ""
#: ../telemetry-measurements.rst:399
msgid "disk.device.write.bytes.rate"
msgstr ""
#: ../telemetry-measurements.rst:401
msgid "network.incoming.bytes.rate"
msgstr ""
#: ../telemetry-measurements.rst:403
msgid "network.outgoing.bytes.rate"
msgstr ""
#: ../telemetry-measurements.rst:405
msgid "network.incoming.packets.rate"
msgstr ""
#: ../telemetry-measurements.rst:407
msgid "network.outgoing.packets.rate"
msgstr ""
#: ../telemetry-measurements.rst:411
msgid ""
"To enable the libvirt ``memory.usage`` support, you need to install libvirt "
"version 1.1.1+, QEMU version 1.5+, and you also need to prepare suitable "
"balloon driver in the image. It is applicable particularly for Windows "
"guests, most modern Linux distributions already have it built in. Telemetry "
"is not able to fetch the ``memory.usage`` samples without the image balloon "
"driver."
msgstr ""
#: ../telemetry-measurements.rst:418
msgid ""
"OpenStack Compute is capable of collecting ``CPU`` related meters from the "
"compute host machines. In order to use that you need to set the "
"``compute_monitors`` option to ``ComputeDriverCPUMonitor`` in the :file:"
"`nova.conf` configuration file. For further information see the Compute "
"configuration section in the `Compute chapter <http://docs.openstack.org/"
"liberty/config-reference/content/list-of-compute-config-options.html>`__ of "
"the OpenStack Configuration Reference."
msgstr ""
#: ../telemetry-measurements.rst:426
msgid ""
"The following host machine related meters are collected for OpenStack "
"Compute:"
msgstr ""
#: ../telemetry-measurements.rst:434
msgid "CPU frequency"
msgstr ""
#: ../telemetry-measurements.rst:434
msgid "MHz"
msgstr ""
#: ../telemetry-measurements.rst:434 ../telemetry-measurements.rst:437
#: ../telemetry-measurements.rst:440 ../telemetry-measurements.rst:443
#: ../telemetry-measurements.rst:446 ../telemetry-measurements.rst:449
#: ../telemetry-measurements.rst:452 ../telemetry-measurements.rst:455
#: ../telemetry-measurements.rst:458 ../telemetry-measurements.rst:461
#: ../telemetry-measurements.rst:488 ../telemetry-measurements.rst:491
#: ../telemetry-measurements.rst:494 ../telemetry-measurements.rst:498
#: ../telemetry-measurements.rst:501 ../telemetry-measurements.rst:1267
#: ../telemetry-measurements.rst:1270 ../telemetry-measurements.rst:1273
#: ../telemetry-measurements.rst:1276 ../telemetry-measurements.rst:1279
#: ../telemetry-measurements.rst:1293 ../telemetry-measurements.rst:1298
#: ../telemetry-measurements.rst:1302 ../telemetry-measurements.rst:1316
#: ../telemetry-measurements.rst:1319 ../telemetry-measurements.rst:1322
msgid "Notification"
msgstr ""
#: ../telemetry-measurements.rst:434
msgid "compute.node.cpu.\\ frequency"
msgstr ""
#: ../telemetry-measurements.rst:434 ../telemetry-measurements.rst:437
#: ../telemetry-measurements.rst:440 ../telemetry-measurements.rst:443
#: ../telemetry-measurements.rst:446 ../telemetry-measurements.rst:449
#: ../telemetry-measurements.rst:452 ../telemetry-measurements.rst:455
#: ../telemetry-measurements.rst:458 ../telemetry-measurements.rst:461
#: ../telemetry-measurements.rst:532 ../telemetry-measurements.rst:535
#: ../telemetry-measurements.rst:541 ../telemetry-measurements.rst:544
#: ../telemetry-measurements.rst:547 ../telemetry-measurements.rst:552
#: ../telemetry-measurements.rst:557 ../telemetry-measurements.rst:561
#: ../telemetry-measurements.rst:565 ../telemetry-measurements.rst:593
#: ../telemetry-measurements.rst:596 ../telemetry-measurements.rst:599
#: ../telemetry-measurements.rst:608 ../telemetry-measurements.rst:611
#: ../telemetry-measurements.rst:614 ../telemetry-measurements.rst:617
#: ../telemetry-measurements.rst:620 ../telemetry-measurements.rst:623
#: ../telemetry-measurements.rst:637 ../telemetry-measurements.rst:641
#: ../telemetry-measurements.rst:645 ../telemetry-measurements.rst:650
#: ../telemetry-measurements.rst:655
msgid "host ID"
msgstr ""
#: ../telemetry-measurements.rst:437
msgid "CPU kernel time"
msgstr ""
#: ../telemetry-measurements.rst:437
msgid "compute.node.cpu.\\ kernel.time"
msgstr ""
#: ../telemetry-measurements.rst:440
msgid "CPU idle time"
msgstr ""
#: ../telemetry-measurements.rst:440
msgid "compute.node.cpu.\\ idle.time"
msgstr ""
#: ../telemetry-measurements.rst:443
msgid "CPU user mode time"
msgstr ""
#: ../telemetry-measurements.rst:443
msgid "compute.node.cpu.\\ user.time"
msgstr ""
#: ../telemetry-measurements.rst:446
msgid "CPU I/O wait time"
msgstr ""
#: ../telemetry-measurements.rst:446
msgid "compute.node.cpu.\\ iowait.time"
msgstr ""
#: ../telemetry-measurements.rst:449
msgid "CPU kernel percentage"
msgstr ""
#: ../telemetry-measurements.rst:449
msgid "compute.node.cpu.\\ kernel.percent"
msgstr ""
#: ../telemetry-measurements.rst:452
msgid "CPU idle percentage"
msgstr ""
#: ../telemetry-measurements.rst:452
msgid "compute.node.cpu.\\ idle.percent"
msgstr ""
#: ../telemetry-measurements.rst:455
msgid "CPU user mode percentage"
msgstr ""
#: ../telemetry-measurements.rst:455
msgid "compute.node.cpu.\\ user.percent"
msgstr ""
#: ../telemetry-measurements.rst:458
msgid "CPU I/O wait percentage"
msgstr ""
#: ../telemetry-measurements.rst:458
msgid "compute.node.cpu.\\ iowait.percent"
msgstr ""
#: ../telemetry-measurements.rst:461
msgid "CPU utilization"
msgstr ""
#: ../telemetry-measurements.rst:461
msgid "compute.node.cpu.\\ percent"
msgstr ""
#: ../telemetry-measurements.rst:469
msgid ""
"Telemetry captures notifications that are emitted by the Bare metal service. "
"The source of the notifications are IPMI sensors that collect data from the "
"host machine."
msgstr ""
#: ../telemetry-measurements.rst:475
msgid ""
"The sensor data is not available in the Bare metal service by default. To "
"enable the meters and configure this module to emit notifications about the "
"measured values see the `Installation Guide <http://docs.openstack.org/"
"developer/ironic/deploy/install-guide.html>`__ for the Bare metal service."
msgstr ""
#: ../telemetry-measurements.rst:481
msgid "The following meters are recorded for the Bare metal service:"
msgstr ""
#: ../telemetry-measurements.rst:486 ../telemetry-measurements.rst:530
#: ../telemetry-measurements.rst:707 ../telemetry-measurements.rst:851
#: ../telemetry-measurements.rst:1071 ../telemetry-measurements.rst:1149
#: ../telemetry-measurements.rst:1214 ../telemetry-measurements.rst:1291
msgid "**Meters added in the Juno release**"
msgstr ""
#: ../telemetry-measurements.rst:488 ../telemetry-measurements.rst:491
msgid "Fan rounds per minute (RPM)"
msgstr ""
#: ../telemetry-measurements.rst:488 ../telemetry-measurements.rst:491
msgid "RPM"
msgstr ""
#: ../telemetry-measurements.rst:488 ../telemetry-measurements.rst:491
msgid "fan sensor"
msgstr ""
#: ../telemetry-measurements.rst:488 ../telemetry-measurements.rst:491
msgid "hardware.ipmi.fan"
msgstr ""
#: ../telemetry-measurements.rst:494 ../telemetry-measurements.rst:535
#: ../telemetry-measurements.rst:541 ../telemetry-measurements.rst:544
msgid "C"
msgstr ""
#: ../telemetry-measurements.rst:494
msgid "Temperate reading from sensor"
msgstr ""
#: ../telemetry-measurements.rst:494
msgid "hardware.ipmi\\ .temperature"
msgstr ""
#: ../telemetry-measurements.rst:494
msgid "temper\\ ature sensor"
msgstr ""
#: ../telemetry-measurements.rst:498
msgid "Current reading from sensor"
msgstr ""
#: ../telemetry-measurements.rst:498 ../telemetry-measurements.rst:532
#: ../telemetry-measurements.rst:1338
msgid "W"
msgstr ""
#: ../telemetry-measurements.rst:498
msgid "current sensor"
msgstr ""
#: ../telemetry-measurements.rst:498
msgid "hardware.ipmi\\ .current"
msgstr ""
#: ../telemetry-measurements.rst:501
msgid "V"
msgstr ""
#: ../telemetry-measurements.rst:501
msgid "Voltage reading from sensor"
msgstr ""
#: ../telemetry-measurements.rst:501
msgid "hardware.ipmi\\ .voltage"
msgstr ""
#: ../telemetry-measurements.rst:501
msgid "voltage sensor"
msgstr ""
#: ../telemetry-measurements.rst:506
msgid "IPMI based meters"
msgstr ""
#: ../telemetry-measurements.rst:507
msgid ""
"Another way of gathering IPMI based data is to use IPMI sensors "
"independently from the Bare metal service's components. Same meters as :ref:"
"`telemetry-bare-metal-service` could be fetched except that origin is "
"``Pollster`` instead of ``Notification``."
msgstr ""
#: ../telemetry-measurements.rst:512
msgid ""
"You need to deploy the ceilometer-agent-ipmi on each IPMI-capable node in "
"order to poll local sensor data. For further information about the IPMI "
"agent see :ref:`telemetry-ipmi-agent`."
msgstr ""
#: ../telemetry-measurements.rst:518
msgid ""
"To avoid duplication of metering data and unnecessary load on the IPMI "
"interface, do not deploy the IPMI agent on nodes that are managed by the "
"Bare metal service and keep the ``conductor.send_sensor_data`` option set to "
"``False`` in the :file:`ironic.conf` configuration file."
msgstr ""
#: ../telemetry-measurements.rst:524
msgid ""
"Besides generic IPMI sensor data, the following Intel Node Manager meters "
"are recorded from capable platform:"
msgstr ""
#: ../telemetry-measurements.rst:532
msgid "Current power of the system"
msgstr ""
#: ../telemetry-measurements.rst:532
msgid "hardware.ipmi.node\\ .power"
msgstr ""
#: ../telemetry-measurements.rst:535
msgid "Current tempera\\ ture of the system"
msgstr ""
#: ../telemetry-measurements.rst:535
msgid "hardware.ipmi.node\\ .temperature"
msgstr ""
#: ../telemetry-measurements.rst:539 ../telemetry-measurements.rst:591
#: ../telemetry-measurements.rst:715 ../telemetry-measurements.rst:814
#: ../telemetry-measurements.rst:904 ../telemetry-measurements.rst:1105
#: ../telemetry-measurements.rst:1160 ../telemetry-measurements.rst:1224
#: ../telemetry-measurements.rst:1314
msgid "**Meters added in the Kilo release**"
msgstr ""
#: ../telemetry-measurements.rst:541
msgid "Inlet temperatu\\ re of the system"
msgstr ""
#: ../telemetry-measurements.rst:541
msgid "hardware.ipmi.node\\ .inlet_temperature"
msgstr ""
#: ../telemetry-measurements.rst:544
msgid "Outlet temperat\\ ure of the system"
msgstr ""
#: ../telemetry-measurements.rst:544
msgid "hardware.ipmi.node\\ .outlet_temperature"
msgstr ""
#: ../telemetry-measurements.rst:547
msgid "CFM"
msgstr ""
#: ../telemetry-measurements.rst:547
msgid "Volumetric airf\\ low of the syst\\ em, expressed as 1/10th of CFM"
msgstr ""
#: ../telemetry-measurements.rst:547
msgid "hardware.ipmi.node\\ .airflow"
msgstr ""
#: ../telemetry-measurements.rst:552
msgid "CUPS"
msgstr ""
#: ../telemetry-measurements.rst:552
msgid "CUPS(Compute Us\\ age Per Second) index data of the system"
msgstr ""
#: ../telemetry-measurements.rst:552
msgid "hardware.ipmi.node\\ .cups"
msgstr ""
#: ../telemetry-measurements.rst:557
msgid "CPU CUPS utiliz\\ ation of the system"
msgstr ""
#: ../telemetry-measurements.rst:557
msgid "hardware.ipmi.node\\ .cpu_util"
msgstr ""
#: ../telemetry-measurements.rst:561
msgid "Memory CUPS utilization of the system"
msgstr ""
#: ../telemetry-measurements.rst:561
msgid "hardware.ipmi.node\\ .mem_util"
msgstr ""
#: ../telemetry-measurements.rst:565
msgid "IO CUPS utilization of the system"
msgstr ""
#: ../telemetry-measurements.rst:565
msgid "hardware.ipmi.node\\ .io_util"
msgstr ""
#: ../telemetry-measurements.rst:573
msgid "Meters renamed in the Kilo release"
msgstr ""
#: ../telemetry-measurements.rst:575
msgid "**New Name**"
msgstr ""
#: ../telemetry-measurements.rst:575
msgid "**Original Name**"
msgstr ""
#: ../telemetry-measurements.rst:577
msgid "hardware.ipmi.node.inlet_temperature"
msgstr ""
#: ../telemetry-measurements.rst:577
msgid "hardware.ipmi.node.temperature"
msgstr ""
#: ../telemetry-measurements.rst:581
msgid "SNMP based meters"
msgstr ""
#: ../telemetry-measurements.rst:582
msgid ""
"Telemetry supports gathering SNMP based generic host meters. In order to be "
"able to collect this data you need to run smpd on each target host."
msgstr ""
#: ../telemetry-measurements.rst:585
msgid ""
"The following meters are available about the host machines by using SNMP:"
msgstr ""
#: ../telemetry-measurements.rst:593
msgid "CPU load in the past 1 minute"
msgstr ""
#: ../telemetry-measurements.rst:593
msgid "hardware.cpu.load.\\ 1min"
msgstr ""
#: ../telemetry-measurements.rst:593 ../telemetry-measurements.rst:596
#: ../telemetry-measurements.rst:599
msgid "proc\\ ess"
msgstr ""
#: ../telemetry-measurements.rst:596
msgid "CPU load in the past 5 minutes"
msgstr ""
#: ../telemetry-measurements.rst:596
msgid "hardware.cpu.load.\\ 5min"
msgstr ""
#: ../telemetry-measurements.rst:599
msgid "CPU load in the past 10 minutes"
msgstr ""
#: ../telemetry-measurements.rst:599
msgid "hardware.cpu.load.\\ 10min"
msgstr ""
#: ../telemetry-measurements.rst:602 ../telemetry-measurements.rst:605
#: ../telemetry-measurements.rst:608 ../telemetry-measurements.rst:611
#: ../telemetry-measurements.rst:614 ../telemetry-measurements.rst:617
#: ../telemetry-measurements.rst:620 ../telemetry-measurements.rst:623
msgid "KB"
msgstr ""
#: ../telemetry-measurements.rst:602
msgid "Total disk size"
msgstr ""
#: ../telemetry-measurements.rst:602
msgid "hardware.disk.size\\ .total"
msgstr ""
#: ../telemetry-measurements.rst:605
msgid "Used disk size"
msgstr ""
#: ../telemetry-measurements.rst:605
msgid "hardware.disk.size\\ .used"
msgstr ""
#: ../telemetry-measurements.rst:608
msgid "Total physical memory size"
msgstr ""
#: ../telemetry-measurements.rst:608
msgid "hardware.memory.to\\ tal"
msgstr ""
#: ../telemetry-measurements.rst:611
msgid "Used physical m\\ emory size"
msgstr ""
#: ../telemetry-measurements.rst:611
msgid "hardware.memory.us\\ ed"
msgstr ""
#: ../telemetry-measurements.rst:614
msgid "Physical memory buffer size"
msgstr ""
#: ../telemetry-measurements.rst:614
msgid "hardware.memory.bu\\ ffer"
msgstr ""
#: ../telemetry-measurements.rst:617
msgid "Cached physical memory size"
msgstr ""
#: ../telemetry-measurements.rst:617
msgid "hardware.memory.ca\\ ched"
msgstr ""
#: ../telemetry-measurements.rst:620
msgid "Total swap space size"
msgstr ""
#: ../telemetry-measurements.rst:620
msgid "hardware.memory.sw\\ ap.total"
msgstr ""
#: ../telemetry-measurements.rst:623
msgid "Available swap space size"
msgstr ""
#: ../telemetry-measurements.rst:623
msgid "hardware.memory.sw\\ ap.avail"
msgstr ""
#: ../telemetry-measurements.rst:626
msgid "Bytes received by network inte\\ rface"
msgstr ""
#: ../telemetry-measurements.rst:626
msgid "hardware.network.i\\ ncoming.bytes"
msgstr ""
#: ../telemetry-measurements.rst:630
msgid "Bytes sent by n\\ etwork interface"
msgstr ""
#: ../telemetry-measurements.rst:630
msgid "hardware.network.o\\ utgoing.bytes"
msgstr ""
#: ../telemetry-measurements.rst:633
msgid "Sending error o\\ f network inter\\ face"
msgstr ""
#: ../telemetry-measurements.rst:633
msgid "hardware.network.o\\ utgoing.errors"
msgstr ""
#: ../telemetry-measurements.rst:633
msgid "pack\\ et"
msgstr ""
#: ../telemetry-measurements.rst:637
msgid "Number of recei\\ ved datagrams"
msgstr ""
#: ../telemetry-measurements.rst:637 ../telemetry-measurements.rst:641
msgid "data\\ grams"
msgstr ""
#: ../telemetry-measurements.rst:637
msgid "hardware.network.i\\ p.incoming.datagra\\ ms"
msgstr ""
#: ../telemetry-measurements.rst:641
msgid "Number of sent datagrams"
msgstr ""
#: ../telemetry-measurements.rst:641
msgid "hardware.network.i\\ p.outgoing.datagra\\ ms"
msgstr ""
#: ../telemetry-measurements.rst:645
msgid "Aggregated numb\\ er of blocks re\\ ceived to block device"
msgstr ""
#: ../telemetry-measurements.rst:645 ../telemetry-measurements.rst:650
msgid "bloc\\ ks"
msgstr ""
#: ../telemetry-measurements.rst:645
msgid "hardware.system_st\\ ats.io.incoming.bl\\ ocks"
msgstr ""
#: ../telemetry-measurements.rst:650
msgid "Aggregated numb\\ er of blocks se\\ nt to block dev\\ ice"
msgstr ""
#: ../telemetry-measurements.rst:650
msgid "hardware.system_st\\ ats.io.outgoing.bl\\ ocks"
msgstr ""
#: ../telemetry-measurements.rst:655
msgid "CPU idle percen\\ tage"
msgstr ""
#: ../telemetry-measurements.rst:655
msgid "hardware.system_st\\ ats.cpu.idle"
msgstr ""
#: ../telemetry-measurements.rst:661
msgid "The following meters are collected for OpenStack Image service:"
msgstr ""
#: ../telemetry-measurements.rst:668
msgid "Existence of the image"
msgstr ""
#: ../telemetry-measurements.rst:668 ../telemetry-measurements.rst:672
#: ../telemetry-measurements.rst:963 ../telemetry-measurements.rst:1073
#: ../telemetry-measurements.rst:1077 ../telemetry-measurements.rst:1081
#: ../telemetry-measurements.rst:1085 ../telemetry-measurements.rst:1151
#: ../telemetry-measurements.rst:1155 ../telemetry-measurements.rst:1180
#: ../telemetry-measurements.rst:1194 ../telemetry-measurements.rst:1216
#: ../telemetry-measurements.rst:1220
msgid "Notifica\\ tion, Po\\ llster"
msgstr ""
#: ../telemetry-measurements.rst:668 ../telemetry-measurements.rst:672
#: ../telemetry-measurements.rst:676 ../telemetry-measurements.rst:679
#: ../telemetry-measurements.rst:682
msgid "image"
msgstr ""
#: ../telemetry-measurements.rst:668 ../telemetry-measurements.rst:672
#: ../telemetry-measurements.rst:676 ../telemetry-measurements.rst:679
#: ../telemetry-measurements.rst:682 ../telemetry-measurements.rst:685
#: ../telemetry-measurements.rst:688
msgid "image ID"
msgstr ""
#: ../telemetry-measurements.rst:672
msgid "Size of the upl\\ oaded image"
msgstr ""
#: ../telemetry-measurements.rst:672
msgid "image.size"
msgstr ""
#: ../telemetry-measurements.rst:676 ../telemetry-measurements.rst:679
#: ../telemetry-measurements.rst:682 ../telemetry-measurements.rst:685
#: ../telemetry-measurements.rst:688 ../telemetry-measurements.rst:701
#: ../telemetry-measurements.rst:704 ../telemetry-measurements.rst:709
#: ../telemetry-measurements.rst:712 ../telemetry-measurements.rst:717
#: ../telemetry-measurements.rst:720 ../telemetry-measurements.rst:723
#: ../telemetry-measurements.rst:727 ../telemetry-measurements.rst:730
#: ../telemetry-measurements.rst:734 ../telemetry-measurements.rst:738
#: ../telemetry-measurements.rst:741 ../telemetry-measurements.rst:744
#: ../telemetry-measurements.rst:747 ../telemetry-measurements.rst:750
#: ../telemetry-measurements.rst:853 ../telemetry-measurements.rst:856
#: ../telemetry-measurements.rst:859 ../telemetry-measurements.rst:862
#: ../telemetry-measurements.rst:865 ../telemetry-measurements.rst:868
#: ../telemetry-measurements.rst:871 ../telemetry-measurements.rst:874
#: ../telemetry-measurements.rst:877 ../telemetry-measurements.rst:880
#: ../telemetry-measurements.rst:883 ../telemetry-measurements.rst:886
#: ../telemetry-measurements.rst:889 ../telemetry-measurements.rst:892
#: ../telemetry-measurements.rst:895 ../telemetry-measurements.rst:898
#: ../telemetry-measurements.rst:901 ../telemetry-measurements.rst:906
#: ../telemetry-measurements.rst:910 ../telemetry-measurements.rst:924
#: ../telemetry-measurements.rst:927 ../telemetry-measurements.rst:931
#: ../telemetry-measurements.rst:934 ../telemetry-measurements.rst:937
#: ../telemetry-measurements.rst:941 ../telemetry-measurements.rst:944
#: ../telemetry-measurements.rst:947 ../telemetry-measurements.rst:950
#: ../telemetry-measurements.rst:953 ../telemetry-measurements.rst:956
#: ../telemetry-measurements.rst:960 ../telemetry-measurements.rst:967
#: ../telemetry-measurements.rst:970 ../telemetry-measurements.rst:973
#: ../telemetry-measurements.rst:1107 ../telemetry-measurements.rst:1111
#: ../telemetry-measurements.rst:1115 ../telemetry-measurements.rst:1119
#: ../telemetry-measurements.rst:1123 ../telemetry-measurements.rst:1127
#: ../telemetry-measurements.rst:1131 ../telemetry-measurements.rst:1136
#: ../telemetry-measurements.rst:1162 ../telemetry-measurements.rst:1166
#: ../telemetry-measurements.rst:1170 ../telemetry-measurements.rst:1175
#: ../telemetry-measurements.rst:1184 ../telemetry-measurements.rst:1189
#: ../telemetry-measurements.rst:1198 ../telemetry-measurements.rst:1202
#: ../telemetry-measurements.rst:1226 ../telemetry-measurements.rst:1230
#: ../telemetry-measurements.rst:1234 ../telemetry-measurements.rst:1239
#: ../telemetry-measurements.rst:1244 ../telemetry-measurements.rst:1248
#: ../telemetry-measurements.rst:1253
msgid "Notifica\\ tion"
msgstr ""
#: ../telemetry-measurements.rst:676
msgid "Number of updat\\ es on the image"
msgstr ""
#: ../telemetry-measurements.rst:679
msgid "Number of uploa\\ ds on the image"
msgstr ""
#: ../telemetry-measurements.rst:682
msgid "Number of delet\\ es on the image"
msgstr ""
#: ../telemetry-measurements.rst:685
msgid "Image is downlo\\ aded"
msgstr ""
#: ../telemetry-measurements.rst:685
msgid "image.download"
msgstr ""
#: ../telemetry-measurements.rst:688
msgid "Image is served out"
msgstr ""
#: ../telemetry-measurements.rst:688
msgid "image.serve"
msgstr ""
#: ../telemetry-measurements.rst:694
msgid "The following meters are collected for OpenStack Block Storage:"
msgstr ""
#: ../telemetry-measurements.rst:701
msgid "Existence of the volume"
msgstr ""
#: ../telemetry-measurements.rst:701 ../telemetry-measurements.rst:717
#: ../telemetry-measurements.rst:720 ../telemetry-measurements.rst:723
#: ../telemetry-measurements.rst:727 ../telemetry-measurements.rst:730
#: ../telemetry-measurements.rst:734 ../telemetry-measurements.rst:744
#: ../telemetry-measurements.rst:747 ../telemetry-measurements.rst:750
msgid "volume"
msgstr ""
#: ../telemetry-measurements.rst:701 ../telemetry-measurements.rst:704
#: ../telemetry-measurements.rst:717 ../telemetry-measurements.rst:720
#: ../telemetry-measurements.rst:723 ../telemetry-measurements.rst:727
#: ../telemetry-measurements.rst:730 ../telemetry-measurements.rst:734
msgid "volume ID"
msgstr ""
#: ../telemetry-measurements.rst:704
msgid "Size of the vol\\ ume"
msgstr ""
#: ../telemetry-measurements.rst:704
msgid "volume.size"
msgstr ""
#: ../telemetry-measurements.rst:709
msgid "Existence of the snapshot"
msgstr ""
#: ../telemetry-measurements.rst:709 ../telemetry-measurements.rst:738
#: ../telemetry-measurements.rst:741
msgid "snapsh\\ ot"
msgstr ""
#: ../telemetry-measurements.rst:709
msgid "snapshot"
msgstr ""
#: ../telemetry-measurements.rst:709 ../telemetry-measurements.rst:712
#: ../telemetry-measurements.rst:738 ../telemetry-measurements.rst:741
msgid "snapshot ID"
msgstr ""
#: ../telemetry-measurements.rst:712
msgid "Size of the sna\\ pshot"
msgstr ""
#: ../telemetry-measurements.rst:712
msgid "snapshot.size"
msgstr ""
#: ../telemetry-measurements.rst:717
msgid "Creation of the volume"
msgstr ""
#: ../telemetry-measurements.rst:717
msgid "volume.create.(sta\\ rt|end)"
msgstr ""
#: ../telemetry-measurements.rst:720
msgid "Deletion of the volume"
msgstr ""
#: ../telemetry-measurements.rst:720
msgid "volume.delete.(sta\\ rt|end)"
msgstr ""
#: ../telemetry-measurements.rst:723
msgid "Update the name or description of the volume"
msgstr ""
#: ../telemetry-measurements.rst:723
msgid "volume.update.(sta\\ rt|end)"
msgstr ""
#: ../telemetry-measurements.rst:727
msgid "Update the size of the volume"
msgstr ""
#: ../telemetry-measurements.rst:727
msgid "volume.resize.(sta\\ rt|end)"
msgstr ""
#: ../telemetry-measurements.rst:730
msgid "Attaching the v\\ olume to an ins\\ tance"
msgstr ""
#: ../telemetry-measurements.rst:730
msgid "volume.attach.(sta\\ rt|end)"
msgstr ""
#: ../telemetry-measurements.rst:734
msgid "Detaching the v\\ olume from an i\\ nstance"
msgstr ""
#: ../telemetry-measurements.rst:734
msgid "volume.detach.(sta\\ rt|end)"
msgstr ""
#: ../telemetry-measurements.rst:738
msgid "Creation of the snapshot"
msgstr ""
#: ../telemetry-measurements.rst:738
msgid "snapshot.create.(s\\ tart|end)"
msgstr ""
#: ../telemetry-measurements.rst:741
msgid "Deletion of the snapshot"
msgstr ""
#: ../telemetry-measurements.rst:741
msgid "snapshot.delete.(s\\ tart|end)"
msgstr ""
#: ../telemetry-measurements.rst:744
msgid "Creation of the volume backup"
msgstr ""
#: ../telemetry-measurements.rst:744 ../telemetry-measurements.rst:747
#: ../telemetry-measurements.rst:750
msgid "backup ID"
msgstr ""
#: ../telemetry-measurements.rst:744
msgid "volume.backup.crea\\ te.(start|end)"
msgstr ""
#: ../telemetry-measurements.rst:747
msgid "Deletion of the volume backup"
msgstr ""
#: ../telemetry-measurements.rst:747
msgid "volume.backup.dele\\ te.(start|end)"
msgstr ""
#: ../telemetry-measurements.rst:750
msgid "Restoration of the volume back\\ up"
msgstr ""
#: ../telemetry-measurements.rst:750
msgid "volume.backup.rest\\ ore.(start|end)"
msgstr ""
#: ../telemetry-measurements.rst:759
msgid "The following meters are collected for OpenStack Object Storage:"
msgstr ""
#: ../telemetry-measurements.rst:766
msgid "Number of objec\\ ts"
msgstr ""
#: ../telemetry-measurements.rst:766 ../telemetry-measurements.rst:786
#: ../telemetry-measurements.rst:816 ../telemetry-measurements.rst:829
msgid "object"
msgstr ""
#: ../telemetry-measurements.rst:766 ../telemetry-measurements.rst:769
#: ../telemetry-measurements.rst:772 ../telemetry-measurements.rst:775
#: ../telemetry-measurements.rst:778 ../telemetry-measurements.rst:781
#: ../telemetry-measurements.rst:816 ../telemetry-measurements.rst:818
#: ../telemetry-measurements.rst:821 ../telemetry-measurements.rst:824
msgid "storage ID"
msgstr ""
#: ../telemetry-measurements.rst:766
msgid "storage.objects"
msgstr ""
#: ../telemetry-measurements.rst:769 ../telemetry-measurements.rst:818
msgid "Total size of s\\ tored objects"
msgstr ""
#: ../telemetry-measurements.rst:769
msgid "storage.objects.si\\ ze"
msgstr ""
#: ../telemetry-measurements.rst:772 ../telemetry-measurements.rst:821
msgid "Number of conta\\ iners"
msgstr ""
#: ../telemetry-measurements.rst:772
msgid "conta\\ iner"
msgstr ""
#: ../telemetry-measurements.rst:772
msgid "storage.objects.co\\ ntainers"
msgstr ""
#: ../telemetry-measurements.rst:775
msgid "Number of incom\\ ing bytes"
msgstr ""
#: ../telemetry-measurements.rst:775
msgid "storage.objects.in\\ coming.bytes"
msgstr ""
#: ../telemetry-measurements.rst:778
msgid "Number of outgo\\ ing bytes"
msgstr ""
#: ../telemetry-measurements.rst:778
msgid "storage.objects.ou\\ tgoing.bytes"
msgstr ""
#: ../telemetry-measurements.rst:781
msgid "Number of API r\\ equests against OpenStack Obje\\ ct Storage"
msgstr ""
#: ../telemetry-measurements.rst:781
msgid "requ\\ est"
msgstr ""
#: ../telemetry-measurements.rst:781
msgid "storage.api.request"
msgstr ""
#: ../telemetry-measurements.rst:786 ../telemetry-measurements.rst:829
msgid "Number of objec\\ ts in container"
msgstr ""
#: ../telemetry-measurements.rst:786 ../telemetry-measurements.rst:789
#: ../telemetry-measurements.rst:829 ../telemetry-measurements.rst:832
msgid "storage ID\\ /container"
msgstr ""
#: ../telemetry-measurements.rst:786
msgid "storage.containers\\ .objects"
msgstr ""
#: ../telemetry-measurements.rst:789
msgid "Total size of s\\ tored objects i\\ n container"
msgstr ""
#: ../telemetry-measurements.rst:789
msgid "storage.containers\\ .objects.size"
msgstr ""
#: ../telemetry-measurements.rst:795
msgid "Ceph Object Storage"
msgstr ""
#: ../telemetry-measurements.rst:796
msgid ""
"In order to gather meters from Ceph, you have to install and configure the "
"Ceph Object Gateway (radosgw) as it is described in the `Installation Manual "
"<http://docs.ceph.com/docs/master/radosgw/>`__. You have to enable `usage "
"logging <http://ceph.com/docs/master/man/8/radosgw/#usage-logging>`__ in "
"order to get the related meters from Ceph. You will also need an ``admin`` "
"user with ``users``, ``buckets``, ``metadata`` and ``usage`` ``caps`` "
"configured."
msgstr ""
#: ../telemetry-measurements.rst:804
msgid ""
"In order to access Ceph from Telemetry, you need to specify a ``service "
"group`` for ``radosgw`` in the ``ceilometer.conf`` configuration file along "
"with ``access_key`` and ``secret_key`` of the ``admin`` user mentioned above."
msgstr ""
#: ../telemetry-measurements.rst:809
msgid "The following meters are collected for Ceph Object Storage:"
msgstr ""
#: ../telemetry-measurements.rst:816
msgid "Number of objects"
msgstr ""
#: ../telemetry-measurements.rst:816
msgid "radosgw.objects"
msgstr ""
#: ../telemetry-measurements.rst:818
msgid "radosgw.objects.\\ size"
msgstr ""
#: ../telemetry-measurements.rst:821
msgid "contai\\ ner"
msgstr ""
#: ../telemetry-measurements.rst:821
msgid "radosgw.objects.\\ containers"
msgstr ""
#: ../telemetry-measurements.rst:824
msgid "Number of API r\\ equests against Ceph Object Ga\\ teway (radosgw)"
msgstr ""
#: ../telemetry-measurements.rst:824
msgid "radosgw.api.requ\\ est"
msgstr ""
#: ../telemetry-measurements.rst:824
msgid "request"
msgstr ""
#: ../telemetry-measurements.rst:829
msgid "radosgw.containe\\ rs.objects"
msgstr ""
#: ../telemetry-measurements.rst:832
msgid "Total size of s\\ tored objects in container"
msgstr ""
#: ../telemetry-measurements.rst:832
msgid "radosgw.containe\\ rs.objects.size"
msgstr ""
#: ../telemetry-measurements.rst:839
msgid ""
"The ``usage`` related information may not be updated right after an upload "
"or download, because the Ceph Object Gateway needs time to update the usage "
"properties. For instance, the default configuration needs approximately 30 "
"minutes to generate the usage logs."
msgstr ""
#: ../telemetry-measurements.rst:845
msgid "OpenStack Identity"
msgstr ""
#: ../telemetry-measurements.rst:846
msgid "The following meters are collected for OpenStack Identity:"
msgstr ""
#: ../telemetry-measurements.rst:853
msgid "User successful\\ ly authenticated"
msgstr ""
#: ../telemetry-measurements.rst:853
msgid "identity.authent\\ icate.success"
msgstr ""
#: ../telemetry-measurements.rst:853 ../telemetry-measurements.rst:856
#: ../telemetry-measurements.rst:859 ../telemetry-measurements.rst:862
#: ../telemetry-measurements.rst:865 ../telemetry-measurements.rst:868
msgid "user"
msgstr ""
#: ../telemetry-measurements.rst:853 ../telemetry-measurements.rst:856
#: ../telemetry-measurements.rst:859 ../telemetry-measurements.rst:862
#: ../telemetry-measurements.rst:865 ../telemetry-measurements.rst:868
msgid "user ID"
msgstr ""
#: ../telemetry-measurements.rst:856
msgid "User pending au\\ thentication"
msgstr ""
#: ../telemetry-measurements.rst:856
msgid "identity.authent\\ icate.pending"
msgstr ""
#: ../telemetry-measurements.rst:859
msgid "User failed to authenticate"
msgstr ""
#: ../telemetry-measurements.rst:859
msgid "identity.authent\\ icate.failure"
msgstr ""
#: ../telemetry-measurements.rst:862
msgid "User is created"
msgstr ""
#: ../telemetry-measurements.rst:862
msgid "identity.user.cr\\ eated"
msgstr ""
#: ../telemetry-measurements.rst:865
msgid "User is deleted"
msgstr ""
#: ../telemetry-measurements.rst:865
msgid "identity.user.de\\ leted"
msgstr ""
#: ../telemetry-measurements.rst:868
msgid "User is updated"
msgstr ""
#: ../telemetry-measurements.rst:868
msgid "identity.user.up\\ dated"
msgstr ""
#: ../telemetry-measurements.rst:871
msgid "Group is created"
msgstr ""
#: ../telemetry-measurements.rst:871 ../telemetry-measurements.rst:874
#: ../telemetry-measurements.rst:877
msgid "group"
msgstr ""
#: ../telemetry-measurements.rst:871 ../telemetry-measurements.rst:874
#: ../telemetry-measurements.rst:877
msgid "group ID"
msgstr ""
#: ../telemetry-measurements.rst:871
msgid "identity.group.c\\ reated"
msgstr ""
#: ../telemetry-measurements.rst:874
msgid "Group is deleted"
msgstr ""
#: ../telemetry-measurements.rst:874
msgid "identity.group.d\\ eleted"
msgstr ""
#: ../telemetry-measurements.rst:877
msgid "Group is updated"
msgstr ""
#: ../telemetry-measurements.rst:877
msgid "identity.group.u\\ pdated"
msgstr ""
#: ../telemetry-measurements.rst:880
msgid "Role is created"
msgstr ""
#: ../telemetry-measurements.rst:880
msgid "identity.role.cr\\ eated"
msgstr ""
#: ../telemetry-measurements.rst:880 ../telemetry-measurements.rst:883
#: ../telemetry-measurements.rst:886
msgid "role"
msgstr ""
#: ../telemetry-measurements.rst:880 ../telemetry-measurements.rst:883
#: ../telemetry-measurements.rst:886 ../telemetry-measurements.rst:906
#: ../telemetry-measurements.rst:910
msgid "role ID"
msgstr ""
#: ../telemetry-measurements.rst:883
msgid "Role is deleted"
msgstr ""
#: ../telemetry-measurements.rst:883
msgid "identity.role.de\\ leted"
msgstr ""
#: ../telemetry-measurements.rst:886
msgid "Role is updated"
msgstr ""
#: ../telemetry-measurements.rst:886
msgid "identity.role.up\\ dated"
msgstr ""
#: ../telemetry-measurements.rst:889
msgid "Project is crea\\ ted"
msgstr ""
#: ../telemetry-measurements.rst:889
msgid "identity.project\\ .created"
msgstr ""
#: ../telemetry-measurements.rst:889 ../telemetry-measurements.rst:892
#: ../telemetry-measurements.rst:895
msgid "project"
msgstr ""
#: ../telemetry-measurements.rst:889 ../telemetry-measurements.rst:892
#: ../telemetry-measurements.rst:895
msgid "project ID"
msgstr ""
#: ../telemetry-measurements.rst:892
msgid "Project is dele\\ ted"
msgstr ""
#: ../telemetry-measurements.rst:892
msgid "identity.project\\ .deleted"
msgstr ""
#: ../telemetry-measurements.rst:895
msgid "Project is upda\\ ted"
msgstr ""
#: ../telemetry-measurements.rst:895
msgid "identity.project\\ .updated"
msgstr ""
#: ../telemetry-measurements.rst:898
msgid "Trust is created"
msgstr ""
#: ../telemetry-measurements.rst:898
msgid "identity.trust.c\\ reated"
msgstr ""
#: ../telemetry-measurements.rst:898 ../telemetry-measurements.rst:901
msgid "trust"
msgstr ""
#: ../telemetry-measurements.rst:898 ../telemetry-measurements.rst:901
msgid "trust ID"
msgstr ""
#: ../telemetry-measurements.rst:901
msgid "Trust is deleted"
msgstr ""
#: ../telemetry-measurements.rst:901
msgid "identity.trust.d\\ eleted"
msgstr ""
#: ../telemetry-measurements.rst:906
msgid "Role is added to an actor on a target"
msgstr ""
#: ../telemetry-measurements.rst:906
msgid "identity.role_as\\ signment.created"
msgstr ""
#: ../telemetry-measurements.rst:906 ../telemetry-measurements.rst:910
msgid "role_a\\ ssignm\\ ent"
msgstr ""
#: ../telemetry-measurements.rst:910
msgid "Role is removed from an actor on a target"
msgstr ""
#: ../telemetry-measurements.rst:910
msgid "identity.role_as\\ signment.deleted"
msgstr ""
#: ../telemetry-measurements.rst:917
msgid "The following meters are collected for OpenStack Networking:"
msgstr ""
#: ../telemetry-measurements.rst:924
msgid "Existence of ne\\ twork"
msgstr ""
#: ../telemetry-measurements.rst:924 ../telemetry-measurements.rst:927
#: ../telemetry-measurements.rst:931
msgid "networ\\ k"
msgstr ""
#: ../telemetry-measurements.rst:924
msgid "network"
msgstr ""
#: ../telemetry-measurements.rst:924 ../telemetry-measurements.rst:927
#: ../telemetry-measurements.rst:931
msgid "network ID"
msgstr ""
#: ../telemetry-measurements.rst:927
msgid "Creation reques\\ ts for this net\\ work"
msgstr ""
#: ../telemetry-measurements.rst:927
msgid "network.create"
msgstr ""
#: ../telemetry-measurements.rst:931
msgid "Update requests for this network"
msgstr ""
#: ../telemetry-measurements.rst:931
msgid "network.update"
msgstr ""
#: ../telemetry-measurements.rst:934
msgid "Existence of su\\ bnet"
msgstr ""
#: ../telemetry-measurements.rst:934 ../telemetry-measurements.rst:937
#: ../telemetry-measurements.rst:941
msgid "subnet"
msgstr ""
#: ../telemetry-measurements.rst:934 ../telemetry-measurements.rst:937
#: ../telemetry-measurements.rst:941
msgid "subnet ID"
msgstr ""
#: ../telemetry-measurements.rst:937
msgid "Creation reques\\ ts for this sub\\ net"
msgstr ""
#: ../telemetry-measurements.rst:937
msgid "subnet.create"
msgstr ""
#: ../telemetry-measurements.rst:941
msgid "Update requests for this subnet"
msgstr ""
#: ../telemetry-measurements.rst:941
msgid "subnet.update"
msgstr ""
#: ../telemetry-measurements.rst:944 ../telemetry-measurements.rst:990
msgid "Existence of po\\ rt"
msgstr ""
#: ../telemetry-measurements.rst:944 ../telemetry-measurements.rst:947
#: ../telemetry-measurements.rst:950
msgid "port ID"
msgstr ""
#: ../telemetry-measurements.rst:947
msgid "Creation reques\\ ts for this port"
msgstr ""
#: ../telemetry-measurements.rst:947
msgid "port.create"
msgstr ""
#: ../telemetry-measurements.rst:950
msgid "Update requests for this port"
msgstr ""
#: ../telemetry-measurements.rst:950
msgid "port.update"
msgstr ""
#: ../telemetry-measurements.rst:953
msgid "Existence of ro\\ uter"
msgstr ""
#: ../telemetry-measurements.rst:953 ../telemetry-measurements.rst:956
#: ../telemetry-measurements.rst:960
msgid "router"
msgstr ""
#: ../telemetry-measurements.rst:953 ../telemetry-measurements.rst:956
#: ../telemetry-measurements.rst:960
msgid "router ID"
msgstr ""
#: ../telemetry-measurements.rst:956
msgid "Creation reques\\ ts for this rou\\ ter"
msgstr ""
#: ../telemetry-measurements.rst:956
msgid "router.create"
msgstr ""
#: ../telemetry-measurements.rst:960
msgid "Update requests for this router"
msgstr ""
#: ../telemetry-measurements.rst:960
msgid "router.update"
msgstr ""
#: ../telemetry-measurements.rst:963
msgid "Existence of IP"
msgstr ""
#: ../telemetry-measurements.rst:963 ../telemetry-measurements.rst:967
#: ../telemetry-measurements.rst:970
msgid "ip"
msgstr ""
#: ../telemetry-measurements.rst:963 ../telemetry-measurements.rst:967
#: ../telemetry-measurements.rst:970
msgid "ip ID"
msgstr ""
#: ../telemetry-measurements.rst:963
msgid "ip.floating"
msgstr ""
#: ../telemetry-measurements.rst:967
msgid "Creation reques\\ ts for this IP"
msgstr ""
#: ../telemetry-measurements.rst:967
msgid "ip.floating.cr\\ eate"
msgstr ""
#: ../telemetry-measurements.rst:970
msgid "Update requests for this IP"
msgstr ""
#: ../telemetry-measurements.rst:970
msgid "ip.floating.up\\ date"
msgstr ""
#: ../telemetry-measurements.rst:973
msgid "Bytes through t\\ his l3 metering label"
msgstr ""
#: ../telemetry-measurements.rst:973
msgid "bandwidth"
msgstr ""
#: ../telemetry-measurements.rst:973
msgid "label ID"
msgstr ""
#: ../telemetry-measurements.rst:979
msgid "SDN controllers"
msgstr ""
#: ../telemetry-measurements.rst:980
msgid "The following meters are collected for SDN:"
msgstr ""
#: ../telemetry-measurements.rst:987
msgid "Existence of sw\\ itch"
msgstr ""
#: ../telemetry-measurements.rst:987
msgid "switch"
msgstr ""
#: ../telemetry-measurements.rst:987 ../telemetry-measurements.rst:990
#: ../telemetry-measurements.rst:993 ../telemetry-measurements.rst:996
#: ../telemetry-measurements.rst:999 ../telemetry-measurements.rst:1002
#: ../telemetry-measurements.rst:1005 ../telemetry-measurements.rst:1008
#: ../telemetry-measurements.rst:1011 ../telemetry-measurements.rst:1014
#: ../telemetry-measurements.rst:1017 ../telemetry-measurements.rst:1021
#: ../telemetry-measurements.rst:1025 ../telemetry-measurements.rst:1028
#: ../telemetry-measurements.rst:1031 ../telemetry-measurements.rst:1034
#: ../telemetry-measurements.rst:1037 ../telemetry-measurements.rst:1040
#: ../telemetry-measurements.rst:1043 ../telemetry-measurements.rst:1045
#: ../telemetry-measurements.rst:1048 ../telemetry-measurements.rst:1052
#: ../telemetry-measurements.rst:1055
msgid "switch ID"
msgstr ""
#: ../telemetry-measurements.rst:990
msgid "switch.port"
msgstr ""
#: ../telemetry-measurements.rst:993 ../telemetry-measurements.rst:996
#: ../telemetry-measurements.rst:999 ../telemetry-measurements.rst:1002
#: ../telemetry-measurements.rst:1005 ../telemetry-measurements.rst:1008
#: ../telemetry-measurements.rst:1011 ../telemetry-measurements.rst:1014
#: ../telemetry-measurements.rst:1017 ../telemetry-measurements.rst:1021
#: ../telemetry-measurements.rst:1025 ../telemetry-measurements.rst:1028
#: ../telemetry-measurements.rst:1052 ../telemetry-measurements.rst:1055
#: ../telemetry-measurements.rst:1089 ../telemetry-measurements.rst:1097
#: ../telemetry-measurements.rst:1101
msgid "Cumula\\ tive"
msgstr ""
#: ../telemetry-measurements.rst:993
msgid "Packets receive\\ d on port"
msgstr ""
#: ../telemetry-measurements.rst:993 ../telemetry-measurements.rst:996
#: ../telemetry-measurements.rst:1005 ../telemetry-measurements.rst:1008
#: ../telemetry-measurements.rst:1011 ../telemetry-measurements.rst:1014
#: ../telemetry-measurements.rst:1017 ../telemetry-measurements.rst:1021
#: ../telemetry-measurements.rst:1025 ../telemetry-measurements.rst:1037
#: ../telemetry-measurements.rst:1040 ../telemetry-measurements.rst:1052
msgid "packet"
msgstr ""
#: ../telemetry-measurements.rst:993
msgid "switch.port.re\\ ceive.packets"
msgstr ""
#: ../telemetry-measurements.rst:996
msgid "Packets transmi\\ tted on port"
msgstr ""
#: ../telemetry-measurements.rst:996
msgid "switch.port.tr\\ ansmit.packets"
msgstr ""
#: ../telemetry-measurements.rst:999
msgid "Bytes received on port"
msgstr ""
#: ../telemetry-measurements.rst:999
msgid "switch.port.re\\ ceive.bytes"
msgstr ""
#: ../telemetry-measurements.rst:1002
msgid "Bytes transmitt\\ ed on port"
msgstr ""
#: ../telemetry-measurements.rst:1002
msgid "switch.port.tr\\ ansmit.bytes"
msgstr ""
#: ../telemetry-measurements.rst:1005
msgid "Drops received on port"
msgstr ""
#: ../telemetry-measurements.rst:1005
msgid "switch.port.re\\ ceive.drops"
msgstr ""
#: ../telemetry-measurements.rst:1008
msgid "Drops transmitt\\ ed on port"
msgstr ""
#: ../telemetry-measurements.rst:1008
msgid "switch.port.tr\\ ansmit.drops"
msgstr ""
#: ../telemetry-measurements.rst:1011
msgid "Errors received on port"
msgstr ""
#: ../telemetry-measurements.rst:1011
msgid "switch.port.re\\ ceive.errors"
msgstr ""
#: ../telemetry-measurements.rst:1014
msgid "Errors transmit\\ ted on port"
msgstr ""
#: ../telemetry-measurements.rst:1014
msgid "switch.port.tr\\ ansmit.errors"
msgstr ""
#: ../telemetry-measurements.rst:1017
msgid "Frame alignment errors receive\\ d on port"
msgstr ""
#: ../telemetry-measurements.rst:1017
msgid "switch.port.re\\ ceive.frame\\_er\\ ror"
msgstr ""
#: ../telemetry-measurements.rst:1021
msgid "Overrun errors received on port"
msgstr ""
#: ../telemetry-measurements.rst:1021
msgid "switch.port.re\\ ceive.overrun\\_\\ error"
msgstr ""
#: ../telemetry-measurements.rst:1025
msgid "CRC errors rece\\ ived on port"
msgstr ""
#: ../telemetry-measurements.rst:1025
msgid "switch.port.re\\ ceive.crc\\_error"
msgstr ""
#: ../telemetry-measurements.rst:1028
msgid "Collisions on p\\ ort"
msgstr ""
#: ../telemetry-measurements.rst:1028
msgid "count"
msgstr ""
#: ../telemetry-measurements.rst:1028
msgid "switch.port.co\\ llision.count"
msgstr ""
#: ../telemetry-measurements.rst:1031
msgid "Duration of tab\\ le"
msgstr ""
#: ../telemetry-measurements.rst:1031
msgid "switch.table"
msgstr ""
#: ../telemetry-measurements.rst:1031 ../telemetry-measurements.rst:1316
#: ../telemetry-measurements.rst:1319
msgid "table"
msgstr ""
#: ../telemetry-measurements.rst:1034
msgid "Active entries in table"
msgstr ""
#: ../telemetry-measurements.rst:1034
msgid "entry"
msgstr ""
#: ../telemetry-measurements.rst:1034
msgid "switch.table.a\\ ctive.entries"
msgstr ""
#: ../telemetry-measurements.rst:1037
msgid "Lookup packets for table"
msgstr ""
#: ../telemetry-measurements.rst:1037
msgid "switch.table.l\\ ookup.packets"
msgstr ""
#: ../telemetry-measurements.rst:1040
msgid "Packets matches for table"
msgstr ""
#: ../telemetry-measurements.rst:1040
msgid "switch.table.m\\ atched.packets"
msgstr ""
#: ../telemetry-measurements.rst:1043
msgid "Duration of flow"
msgstr ""
#: ../telemetry-measurements.rst:1043
msgid "flow"
msgstr ""
#: ../telemetry-measurements.rst:1043
msgid "switch.flow"
msgstr ""
#: ../telemetry-measurements.rst:1045
msgid "Duration of flow in seconds"
msgstr ""
#: ../telemetry-measurements.rst:1045
msgid "s"
msgstr ""
#: ../telemetry-measurements.rst:1045
msgid "switch.flow.du\\ ration.seconds"
msgstr ""
#: ../telemetry-measurements.rst:1048
msgid "Duration of flow in nanoseconds"
msgstr ""
#: ../telemetry-measurements.rst:1048
msgid "switch.flow.du\\ ration.nanosec\\ onds"
msgstr ""
#: ../telemetry-measurements.rst:1052
msgid "Packets received"
msgstr ""
#: ../telemetry-measurements.rst:1052
msgid "switch.flow.pa\\ ckets"
msgstr ""
#: ../telemetry-measurements.rst:1055
msgid "Bytes received"
msgstr ""
#: ../telemetry-measurements.rst:1055
msgid "switch.flow.by\\ tes"
msgstr ""
#: ../telemetry-measurements.rst:1061
msgid ""
"These meters are available for OpenFlow based switches. In order to enable "
"these meters, each driver needs to be properly configured."
msgstr ""
#: ../telemetry-measurements.rst:1065
msgid "Load-Balancer-as-a-Service (LBaaS)"
msgstr ""
#: ../telemetry-measurements.rst:1066
msgid "The following meters are collected for LBaaS:"
msgstr ""
#: ../telemetry-measurements.rst:1073
msgid "Existence of a LB pool"
msgstr ""
#: ../telemetry-measurements.rst:1073
msgid "network.serv\\ ices.lb.pool"
msgstr ""
#: ../telemetry-measurements.rst:1073 ../telemetry-measurements.rst:1107
#: ../telemetry-measurements.rst:1111
msgid "pool"
msgstr ""
#: ../telemetry-measurements.rst:1073 ../telemetry-measurements.rst:1089
#: ../telemetry-measurements.rst:1093 ../telemetry-measurements.rst:1097
#: ../telemetry-measurements.rst:1101 ../telemetry-measurements.rst:1107
#: ../telemetry-measurements.rst:1111
msgid "pool ID"
msgstr ""
#: ../telemetry-measurements.rst:1077
msgid "Existence of a LB VIP"
msgstr ""
#: ../telemetry-measurements.rst:1077
msgid "network.serv\\ ices.lb.vip"
msgstr ""
#: ../telemetry-measurements.rst:1077 ../telemetry-measurements.rst:1115
#: ../telemetry-measurements.rst:1119
msgid "vip"
msgstr ""
#: ../telemetry-measurements.rst:1077 ../telemetry-measurements.rst:1115
#: ../telemetry-measurements.rst:1119
msgid "vip ID"
msgstr ""
#: ../telemetry-measurements.rst:1081
msgid "Existence of a LB member"
msgstr ""
#: ../telemetry-measurements.rst:1081 ../telemetry-measurements.rst:1123
#: ../telemetry-measurements.rst:1127
msgid "member"
msgstr ""
#: ../telemetry-measurements.rst:1081 ../telemetry-measurements.rst:1123
#: ../telemetry-measurements.rst:1127
msgid "member ID"
msgstr ""
#: ../telemetry-measurements.rst:1081
msgid "network.serv\\ ices.lb.memb\\ er"
msgstr ""
#: ../telemetry-measurements.rst:1085
msgid "Existence of a LB health probe"
msgstr ""
#: ../telemetry-measurements.rst:1085 ../telemetry-measurements.rst:1131
#: ../telemetry-measurements.rst:1136
msgid "health\\ _monit\\ or"
msgstr ""
#: ../telemetry-measurements.rst:1085 ../telemetry-measurements.rst:1131
#: ../telemetry-measurements.rst:1136
msgid "monitor ID"
msgstr ""
#: ../telemetry-measurements.rst:1085
msgid "network.serv\\ ices.lb.heal\\ th_monitor"
msgstr ""
#: ../telemetry-measurements.rst:1089
msgid "Total connectio\\ ns on a LB"
msgstr ""
#: ../telemetry-measurements.rst:1089 ../telemetry-measurements.rst:1093
msgid "connec\\ tion"
msgstr ""
#: ../telemetry-measurements.rst:1089
msgid "network.serv\\ ices.lb.tota\\ l.connections"
msgstr ""
#: ../telemetry-measurements.rst:1093
msgid "Active connecti\\ ons on a LB"
msgstr ""
#: ../telemetry-measurements.rst:1093
msgid "network.serv\\ ices.lb.acti\\ ve.connections"
msgstr ""
#: ../telemetry-measurements.rst:1097
msgid "Number of incom\\ ing Bytes"
msgstr ""
#: ../telemetry-measurements.rst:1097
msgid "network.serv\\ ices.lb.inco\\ ming.bytes"
msgstr ""
#: ../telemetry-measurements.rst:1101
msgid "Number of outgo\\ ing Bytes"
msgstr ""
#: ../telemetry-measurements.rst:1101
msgid "network.serv\\ ices.lb.outg\\ oing.bytes"
msgstr ""
#: ../telemetry-measurements.rst:1107
msgid "LB pool was cre\\ ated"
msgstr ""
#: ../telemetry-measurements.rst:1107
msgid "network.serv\\ ices.lb.pool\\ .create"
msgstr ""
#: ../telemetry-measurements.rst:1111
msgid "LB pool was upd\\ ated"
msgstr ""
#: ../telemetry-measurements.rst:1111
msgid "network.serv\\ ices.lb.pool\\ .update"
msgstr ""
#: ../telemetry-measurements.rst:1115
msgid "LB VIP was crea\\ ted"
msgstr ""
#: ../telemetry-measurements.rst:1115
msgid "network.serv\\ ices.lb.vip.\\ create"
msgstr ""
#: ../telemetry-measurements.rst:1119
msgid "LB VIP was upda\\ ted"
msgstr ""
#: ../telemetry-measurements.rst:1119
msgid "network.serv\\ ices.lb.vip.\\ update"
msgstr ""
#: ../telemetry-measurements.rst:1123
msgid "LB member was c\\ reated"
msgstr ""
#: ../telemetry-measurements.rst:1123
msgid "network.serv\\ ices.lb.memb\\ er.create"
msgstr ""
#: ../telemetry-measurements.rst:1127
msgid "LB member was u\\ pdated"
msgstr ""
#: ../telemetry-measurements.rst:1127
msgid "network.serv\\ ices.lb.memb\\ er.update"
msgstr ""
#: ../telemetry-measurements.rst:1131
msgid "LB health probe was created"
msgstr ""
#: ../telemetry-measurements.rst:1131
msgid "network.serv\\ ices.lb.heal\\ th_monitor.c\\ reate"
msgstr ""
#: ../telemetry-measurements.rst:1136
msgid "LB health probe was updated"
msgstr ""
#: ../telemetry-measurements.rst:1136
msgid "network.serv\\ ices.lb.heal\\ th_monitor.u\\ pdate"
msgstr ""
#: ../telemetry-measurements.rst:1143
msgid "VPN as a Service (VPNaaS)"
msgstr ""
#: ../telemetry-measurements.rst:1144
msgid "The following meters are collected for VPNaaS:"
msgstr ""
#: ../telemetry-measurements.rst:1151
msgid "Existence of a VPN"
msgstr ""
#: ../telemetry-measurements.rst:1151
msgid "network.serv\\ ices.vpn"
msgstr ""
#: ../telemetry-measurements.rst:1151 ../telemetry-measurements.rst:1162
#: ../telemetry-measurements.rst:1166
msgid "vpn ID"
msgstr ""
#: ../telemetry-measurements.rst:1151 ../telemetry-measurements.rst:1162
#: ../telemetry-measurements.rst:1166
msgid "vpnser\\ vice"
msgstr ""
#: ../telemetry-measurements.rst:1155
msgid "Existence of an IPSec connection"
msgstr ""
#: ../telemetry-measurements.rst:1155 ../telemetry-measurements.rst:1170
#: ../telemetry-measurements.rst:1175
msgid "connection ID"
msgstr ""
#: ../telemetry-measurements.rst:1155 ../telemetry-measurements.rst:1170
#: ../telemetry-measurements.rst:1175
msgid "ipsec\\_\\ site\\_c\\ onnect\\ ion"
msgstr ""
#: ../telemetry-measurements.rst:1155
msgid "network.serv\\ ices.vpn.con\\ nections"
msgstr ""
#: ../telemetry-measurements.rst:1162
msgid "VPN was created"
msgstr ""
#: ../telemetry-measurements.rst:1162
msgid "network.serv\\ ices.vpn.cre\\ ate"
msgstr ""
#: ../telemetry-measurements.rst:1166
msgid "VPN was updated"
msgstr ""
#: ../telemetry-measurements.rst:1166
msgid "network.serv\\ ices.vpn.upd\\ ate"
msgstr ""
#: ../telemetry-measurements.rst:1170
msgid "IPSec connection was created"
msgstr ""
#: ../telemetry-measurements.rst:1170
msgid "network.serv\\ ices.vpn.con\\ nections.cre\\ ate"
msgstr ""
#: ../telemetry-measurements.rst:1175
msgid "IPSec connection was updated"
msgstr ""
#: ../telemetry-measurements.rst:1175
msgid "network.serv\\ ices.vpn.con\\ nections.upd\\ ate"
msgstr ""
#: ../telemetry-measurements.rst:1180
msgid "Existence of an IPSec policy"
msgstr ""
#: ../telemetry-measurements.rst:1180 ../telemetry-measurements.rst:1184
#: ../telemetry-measurements.rst:1189
msgid "ipsecp\\ olicy"
msgstr ""
#: ../telemetry-measurements.rst:1180 ../telemetry-measurements.rst:1184
#: ../telemetry-measurements.rst:1189
msgid "ipsecpolicy ID"
msgstr ""
#: ../telemetry-measurements.rst:1180
msgid "network.serv\\ ices.vpn.ips\\ ecpolicy"
msgstr ""
#: ../telemetry-measurements.rst:1184
msgid "IPSec policy was created"
msgstr ""
#: ../telemetry-measurements.rst:1184
msgid "network.serv\\ ices.vpn.ips\\ ecpolicy.cre\\ ate"
msgstr ""
#: ../telemetry-measurements.rst:1189
msgid "IPSec policy was updated"
msgstr ""
#: ../telemetry-measurements.rst:1189
msgid "network.serv\\ ices.vpn.ips\\ ecpolicy.upd\\ ate"
msgstr ""
#: ../telemetry-measurements.rst:1194
msgid "Existence of an Ike policy"
msgstr ""
#: ../telemetry-measurements.rst:1194 ../telemetry-measurements.rst:1198
#: ../telemetry-measurements.rst:1202
msgid "ikepol\\ icy"
msgstr ""
#: ../telemetry-measurements.rst:1194 ../telemetry-measurements.rst:1198
#: ../telemetry-measurements.rst:1202
msgid "ikepolicy ID"
msgstr ""
#: ../telemetry-measurements.rst:1194
msgid "network.serv\\ ices.vpn.ike\\ policy"
msgstr ""
#: ../telemetry-measurements.rst:1198
msgid "Ike policy was created"
msgstr ""
#: ../telemetry-measurements.rst:1198
msgid "network.serv\\ ices.vpn.ike\\ policy.create"
msgstr ""
#: ../telemetry-measurements.rst:1202
msgid "Ike policy was updated"
msgstr ""
#: ../telemetry-measurements.rst:1202
msgid "network.serv\\ ices.vpn.ike\\ policy.update"
msgstr ""
#: ../telemetry-measurements.rst:1208
msgid "Firewall as a Service (FWaaS)"
msgstr ""
#: ../telemetry-measurements.rst:1209
msgid "The following meters are collected for FWaaS:"
msgstr ""
#: ../telemetry-measurements.rst:1216
msgid "Existence of a firewall"
msgstr ""
#: ../telemetry-measurements.rst:1216 ../telemetry-measurements.rst:1226
#: ../telemetry-measurements.rst:1230
msgid "firewall"
msgstr ""
#: ../telemetry-measurements.rst:1216 ../telemetry-measurements.rst:1220
#: ../telemetry-measurements.rst:1226 ../telemetry-measurements.rst:1230
msgid "firewall ID"
msgstr ""
#: ../telemetry-measurements.rst:1216
msgid "network.serv\\ ices.firewall"
msgstr ""
#: ../telemetry-measurements.rst:1220
msgid "Existence of a firewall policy"
msgstr ""
#: ../telemetry-measurements.rst:1220 ../telemetry-measurements.rst:1234
#: ../telemetry-measurements.rst:1239
msgid "firewa\\ ll_pol\\ icy"
msgstr ""
#: ../telemetry-measurements.rst:1220
msgid "network.serv\\ ices.firewal\\ l.policy"
msgstr ""
#: ../telemetry-measurements.rst:1226
msgid "Firewall was cr\\ eated"
msgstr ""
#: ../telemetry-measurements.rst:1226
msgid "network.serv\\ ices.firewal\\ l.create"
msgstr ""
#: ../telemetry-measurements.rst:1230
msgid "Firewall was up\\ dated"
msgstr ""
#: ../telemetry-measurements.rst:1230
msgid "network.serv\\ ices.firewal\\ l.update"
msgstr ""
#: ../telemetry-measurements.rst:1234
msgid "Firewall policy was created"
msgstr ""
#: ../telemetry-measurements.rst:1234
msgid "network.serv\\ ices.firewal\\ l.policy.cre\\ ate"
msgstr ""
#: ../telemetry-measurements.rst:1234 ../telemetry-measurements.rst:1239
msgid "policy ID"
msgstr ""
#: ../telemetry-measurements.rst:1239
msgid "Firewall policy was updated"
msgstr ""
#: ../telemetry-measurements.rst:1239
msgid "network.serv\\ ices.firewal\\ l.policy.upd\\ ate"
msgstr ""
#: ../telemetry-measurements.rst:1244
msgid "Existence of a firewall rule"
msgstr ""
#: ../telemetry-measurements.rst:1244 ../telemetry-measurements.rst:1248
#: ../telemetry-measurements.rst:1253
msgid "firewa\\ ll_rule"
msgstr ""
#: ../telemetry-measurements.rst:1244
msgid "network.serv\\ ices.firewal\\ l.rule"
msgstr ""
#: ../telemetry-measurements.rst:1244 ../telemetry-measurements.rst:1248
#: ../telemetry-measurements.rst:1253
msgid "rule ID"
msgstr ""
#: ../telemetry-measurements.rst:1248
msgid "Firewall rule w\\ as created"
msgstr ""
#: ../telemetry-measurements.rst:1248
msgid "network.serv\\ ices.firewal\\ l.rule.create"
msgstr ""
#: ../telemetry-measurements.rst:1253
msgid "Firewall rule w\\ as updated"
msgstr ""
#: ../telemetry-measurements.rst:1253
msgid "network.serv\\ ices.firewal\\ l.rule.update"
msgstr ""
#: ../telemetry-measurements.rst:1260
msgid "The following meters are collected for the Orchestration service:"
msgstr ""
#: ../telemetry-measurements.rst:1267
msgid "Stack was success\\ fully created"
msgstr ""
#: ../telemetry-measurements.rst:1267 ../telemetry-measurements.rst:1270
#: ../telemetry-measurements.rst:1273 ../telemetry-measurements.rst:1276
#: ../telemetry-measurements.rst:1279
msgid "stack"
msgstr ""
#: ../telemetry-measurements.rst:1267 ../telemetry-measurements.rst:1270
#: ../telemetry-measurements.rst:1273 ../telemetry-measurements.rst:1276
#: ../telemetry-measurements.rst:1279
msgid "stack ID"
msgstr ""
#: ../telemetry-measurements.rst:1267
msgid "stack.create"
msgstr ""
#: ../telemetry-measurements.rst:1270
msgid "Stack was success\\ fully updated"
msgstr ""
#: ../telemetry-measurements.rst:1270
msgid "stack.update"
msgstr ""
#: ../telemetry-measurements.rst:1273
msgid "Stack was success\\ fully deleted"
msgstr ""
#: ../telemetry-measurements.rst:1273
msgid "stack.delete"
msgstr ""
#: ../telemetry-measurements.rst:1276
msgid "Stack was success\\ fully resumed"
msgstr ""
#: ../telemetry-measurements.rst:1276
msgid "stack.resume"
msgstr ""
#: ../telemetry-measurements.rst:1279
msgid "Stack was success\\ fully suspended"
msgstr ""
#: ../telemetry-measurements.rst:1279
msgid "stack.suspend"
msgstr ""
#: ../telemetry-measurements.rst:1284
msgid "Data processing service for OpenStack"
msgstr ""
#: ../telemetry-measurements.rst:1285
msgid ""
"The following meters are collected for the Data processing service for "
"OpenStack:"
msgstr ""
#: ../telemetry-measurements.rst:1293
msgid "Cluster was successfully created"
msgstr ""
#: ../telemetry-measurements.rst:1293 ../telemetry-measurements.rst:1298
#: ../telemetry-measurements.rst:1302
msgid "cluster"
msgstr ""
#: ../telemetry-measurements.rst:1293 ../telemetry-measurements.rst:1298
#: ../telemetry-measurements.rst:1302
msgid "cluster ID"
msgstr ""
#: ../telemetry-measurements.rst:1293
msgid "cluster.create"
msgstr ""
#: ../telemetry-measurements.rst:1298
msgid "Cluster was successfully updated"
msgstr ""
#: ../telemetry-measurements.rst:1298
msgid "cluster.update"
msgstr ""
#: ../telemetry-measurements.rst:1302
msgid "Cluster was successfully deleted"
msgstr ""
#: ../telemetry-measurements.rst:1302
msgid "cluster.delete"
msgstr ""
#: ../telemetry-measurements.rst:1308
msgid "Key Value Store module"
msgstr ""
#: ../telemetry-measurements.rst:1309
msgid "The following meters are collected for the Key Value Store module:"
msgstr ""
#: ../telemetry-measurements.rst:1316
msgid "Table was succe\\ ssfully created"
msgstr ""
#: ../telemetry-measurements.rst:1316
msgid "magnetodb.table.\\ create"
msgstr ""
#: ../telemetry-measurements.rst:1316 ../telemetry-measurements.rst:1319
#: ../telemetry-measurements.rst:1322
msgid "table ID"
msgstr ""
#: ../telemetry-measurements.rst:1319
msgid "Table was succe\\ ssfully deleted"
msgstr ""
#: ../telemetry-measurements.rst:1319
msgid "magnetodb.table\\ .delete"
msgstr ""
#: ../telemetry-measurements.rst:1322
msgid "Number of indices created in a table"
msgstr ""
#: ../telemetry-measurements.rst:1322
msgid "index"
msgstr ""
#: ../telemetry-measurements.rst:1322
msgid "magnetodb.table\\ .index.count"
msgstr ""
#: ../telemetry-measurements.rst:1328
msgid "Energy"
msgstr ""
#: ../telemetry-measurements.rst:1329
msgid "The following energy related meters are available:"
msgstr ""
#: ../telemetry-measurements.rst:1336
msgid "Amount of energy"
msgstr ""
#: ../telemetry-measurements.rst:1336
msgid "energy"
msgstr ""
#: ../telemetry-measurements.rst:1336
msgid "kWh"
msgstr ""
#: ../telemetry-measurements.rst:1336 ../telemetry-measurements.rst:1338
msgid "probe ID"
msgstr ""
#: ../telemetry-measurements.rst:1338
msgid "Power consumption"
msgstr ""
#: ../telemetry-measurements.rst:1338
msgid "power"
msgstr ""
#: ../telemetry-system-architecture.rst:7
msgid ""
"The Telemetry service uses an agent-based architecture. Several modules "
"combine their responsibilities to collect data, store samples in a database, "
"or provide an API service for handling incoming requests."
msgstr ""
#: ../telemetry-system-architecture.rst:11
msgid "The Telemetry service is built from the following agents and services:"
msgstr ""
#: ../telemetry-system-architecture.rst:14
msgid ""
"Presents aggregated metering data to consumers (such as billing engines and "
"analytics tools)."
msgstr ""
#: ../telemetry-system-architecture.rst:15
msgid "ceilometer-api"
msgstr ""
#: ../telemetry-system-architecture.rst:18
msgid ""
"Polls for different kinds of meter data by using the polling plug-ins "
"(pollsters) registered in different namespaces. It provides a single polling "
"interface across different namespaces."
msgstr ""
#: ../telemetry-system-architecture.rst:20
msgid "ceilometer-polling"
msgstr ""
#: ../telemetry-system-architecture.rst:23
msgid ""
"Polls the public RESTful APIs of other OpenStack services such as Compute "
"service and Image service, in order to keep tabs on resource existence, by "
"using the polling plug-ins (pollsters) registered in the central polling "
"namespace."
msgstr ""
#: ../telemetry-system-architecture.rst:26
msgid "ceilometer-agent-central"
msgstr ""
#: ../telemetry-system-architecture.rst:29
msgid ""
"Polls the local hypervisor or libvirt daemon to acquire performance data for "
"the local instances, messages and emits the data as AMQP messages, by using "
"the polling plug-ins (pollsters) registered in the compute polling namespace."
msgstr ""
#: ../telemetry-system-architecture.rst:32
msgid "ceilometer-agent-compute"
msgstr ""
#: ../telemetry-system-architecture.rst:35
msgid ""
"Polls the local node with IPMI support, in order to acquire IPMI sensor data "
"and Intel Node Manager data, by using the polling plug-ins (pollsters) "
"registered in the IPMI polling namespace."
msgstr ""
#: ../telemetry-system-architecture.rst:37
msgid "ceilometer-agent-ipmi"
msgstr ""
#: ../telemetry-system-architecture.rst:40
msgid "Consumes AMQP messages from other OpenStack services."
msgstr ""
#: ../telemetry-system-architecture.rst:40
msgid "ceilometer-agent-notification"
msgstr ""
#: ../telemetry-system-architecture.rst:43
msgid ""
"Consumes AMQP notifications from the agents, then dispatches these data to "
"the appropriate data store."
msgstr ""
#: ../telemetry-system-architecture.rst:44
msgid "ceilometer-collector"
msgstr ""
#: ../telemetry-system-architecture.rst:47
msgid ""
"Determines when alarms fire due to the associated statistic trend crossing a "
"threshold over a sliding time window."
msgstr ""
#: ../telemetry-system-architecture.rst:48
msgid "ceilometer-alarm-evaluator"
msgstr ""
#: ../telemetry-system-architecture.rst:51
msgid ""
"Initiates alarm actions, for example calling out to a webhook with a "
"description of the alarm state transition."
msgstr ""
#: ../telemetry-system-architecture.rst:56
msgid ""
"The ``ceilometer-polling`` service is available since the Kilo release. It "
"is intended to replace ceilometer-agent-central, ceilometer-agent-compute, "
"and ceilometer-agent-ipmi."
msgstr ""
#: ../telemetry-system-architecture.rst:58
msgid "ceilometer-alarm-notifier"
msgstr ""
#: ../telemetry-system-architecture.rst:60
msgid ""
"Besides the ``ceilometer-agent-compute`` and the ``ceilometer-agent-ipmi`` "
"services, all the other services are placed on one or more controller nodes."
msgstr ""
#: ../telemetry-system-architecture.rst:64
msgid ""
"The Telemetry architecture highly depends on the AMQP service both for "
"consuming notifications coming from OpenStack services and internal "
"communication."
msgstr ""
#: ../telemetry-system-architecture.rst:73
msgid "Supported databases"
msgstr ""
#: ../telemetry-system-architecture.rst:75
msgid ""
"The other key external component of Telemetry is the database, where events, "
"samples, alarm definitions and alarms are stored."
msgstr ""
#: ../telemetry-system-architecture.rst:80
msgid ""
"Multiple database back ends can be configured in order to store events, "
"samples and alarms separately."
msgstr ""
#: ../telemetry-system-architecture.rst:83
msgid "The list of supported database back ends:"
msgstr ""
#: ../telemetry-system-architecture.rst:85
msgid "`ElasticSearch (events only) <https://www.elastic.co/>`__"
msgstr ""
#: ../telemetry-system-architecture.rst:87
msgid "`MongoDB <https://www.mongodb.org/>`__"
msgstr ""
#: ../telemetry-system-architecture.rst:89
msgid "`MySQL <http://www.mysql.com/>`__"
msgstr ""
#: ../telemetry-system-architecture.rst:91
msgid "`PostgreSQL <http://www.postgresql.org/>`__"
msgstr ""
#: ../telemetry-system-architecture.rst:93
msgid "`HBase <http://hbase.apache.org/>`__"
msgstr ""
#: ../telemetry-system-architecture.rst:95
msgid "`DB2(deprecated) <http://www-01.ibm.com/software/data/db2/>`__"
msgstr ""
#: ../telemetry-system-architecture.rst:99
msgid ""
"DB2 nosql support is deprecated as of Liberty and will be removed in Mitaka "
"as the product is no longer in development."
msgstr ""
#: ../telemetry-system-architecture.rst:107
msgid "Supported hypervisors"
msgstr ""
#: ../telemetry-system-architecture.rst:109
msgid ""
"The Telemetry service collects information about the virtual machines, which "
"requires close connection to the hypervisor that runs on the compute hosts."
msgstr ""
#: ../telemetry-system-architecture.rst:113
msgid "The list of supported hypervisors is:"
msgstr ""
#: ../telemetry-system-architecture.rst:115
msgid ""
"The following hypervisors are supported via `Libvirt <http://libvirt.org/"
">`__:"
msgstr ""
#: ../telemetry-system-architecture.rst:121
msgid "`Quick Emulator (QEMU) <http://wiki.qemu.org/Main_Page>`__"
msgstr ""
#: ../telemetry-system-architecture.rst:125
msgid "`User-mode Linux (UML) <http://user-mode-linux.sourceforge.net/>`__"
msgstr ""
#: ../telemetry-system-architecture.rst:130
msgid ""
"For details about hypervisor support in libvirt please check the `Libvirt "
"API support matrix <http://libvirt.org/hvsupport.html>`__."
msgstr ""
#: ../telemetry-system-architecture.rst:135
msgid "`XEN <http://www.xenproject.org/help/documentation.html>`__"
msgstr ""
#: ../telemetry-system-architecture.rst:137
msgid ""
"`VMWare vSphere <http://www.vmware.com/products/vsphere-hypervisor/support."
"html>`__"
msgstr ""
#: ../telemetry-system-architecture.rst:143
msgid "Supported networking services"
msgstr ""
#: ../telemetry-system-architecture.rst:145
msgid ""
"Telemetry is able to retrieve information from OpenStack Networking and "
"external networking services:"
msgstr ""
#: ../telemetry-system-architecture.rst:148
msgid "OpenStack Networking:"
msgstr ""
#: ../telemetry-system-architecture.rst:150
msgid "Basic network meters"
msgstr ""
#: ../telemetry-system-architecture.rst:152
msgid "Firewall-as-a-Service (FWaaS) meters"
msgstr ""
#: ../telemetry-system-architecture.rst:154
msgid "Load-Balancer-as-a-Service (LBaaS) meters"
msgstr ""
#: ../telemetry-system-architecture.rst:156
msgid "VPN-as-a-Service (VPNaaS) meters"
msgstr ""
#: ../telemetry-system-architecture.rst:158
msgid "SDN controller meters:"
msgstr ""
#: ../telemetry-system-architecture.rst:160
msgid "`OpenDaylight <https://www.opendaylight.org/>`__"
msgstr ""
#: ../telemetry-system-architecture.rst:162
msgid "`OpenContrail <http://www.opencontrail.org/>`__"
msgstr ""
#: ../telemetry-system-architecture.rst:169
msgid "Users, roles and tenants"
msgstr ""
#: ../telemetry-system-architecture.rst:171
msgid ""
"This service of OpenStack uses OpenStack Identity for authenticating and "
"authorizing users. The required configuration options are listed in the "
"`Telemetry section <http://docs.openstack.org/kilo/config-reference/content/"
"ch_configuring-openstack-telemetry.html>`__ in the *OpenStack Configuration "
"Reference*."
msgstr ""
#: ../telemetry-system-architecture.rst:177
msgid ""
"The system uses two roles:`admin` and `non-admin`. The authorization happens "
"before processing each API request. The amount of returned data depends on "
"the role the requestor owns."
msgstr ""
#: ../telemetry-system-architecture.rst:181
msgid ""
"The creation of alarm definitions also highly depends on the role of the "
"user, who initiated the action. Further details about :ref:`telemetry-"
"alarms` handling can be found in this guide."
msgstr ""
#: ../telemetry-troubleshooting-guide.rst:2
msgid "Troubleshoot Telemetry"
msgstr ""
#: ../telemetry-troubleshooting-guide.rst:5
msgid "Logging in Telemetry"
msgstr ""
#: ../telemetry-troubleshooting-guide.rst:7
msgid ""
"The Telemetry service has similar log settings as the other OpenStack "
"services. Multiple options are available to change the target of logging, "
"the format of the log entries and the log levels."
msgstr ""
#: ../telemetry-troubleshooting-guide.rst:11
msgid ""
"The log settings can be changed in :file:`ceilometer.conf`. The list of "
"configuration options are listed in the logging configuration options table "
"in the `Telemetry section <http://docs.openstack.org/liberty/config-"
"reference/content/ch_configuring-openstack-telemetry.html>`__ in the "
"*OpenStack Configuration Reference*."
msgstr ""
#: ../telemetry-troubleshooting-guide.rst:17
msgid ""
"By default ``stderr`` is used as standard output for the log messages. It "
"can be changed to either a log file or syslog. The ``debug`` and ``verbose`` "
"options are also set to false in the default settings, the default log "
"levels of the corresponding modules can be found in the table referred above."
msgstr ""
#: ../telemetry-troubleshooting-guide.rst:26
msgid "Recommended order of starting services"
msgstr ""
#: ../telemetry-troubleshooting-guide.rst:28
msgid ""
"As it can be seen in `Bug 1355809 <https://bugs.launchpad.net/devstack/"
"+bug/1355809>`__, the wrong ordering of service startup can result in data "
"loss."
msgstr ""
#: ../telemetry-troubleshooting-guide.rst:32
msgid ""
"When the services are started for the first time or in line with the message "
"queue service restart, it takes time while the **ceilometer-collector** "
"service establishes the connection and joins or rejoins to the configured "
"exchanges. Therefore, if the **ceilometer-agent-compute**, **ceilometer-"
"agent-central**, and the **ceilometer-agent-notification** services are "
"started before the **ceilometer-collector** service, the **ceilometer-"
"collector** service may lose some messages while connecting to the message "
"queue service."
msgstr ""
#: ../telemetry-troubleshooting-guide.rst:41
msgid ""
"The possibility of this issue to happen is higher, when the polling interval "
"is set to a relatively short period. In order to avoid this situation, the "
"recommended order of service startup is to start or restart the **ceilometer-"
"collector** service after the message queue. All the other Telemetry "
"services should be started or restarted after and the **ceilometer-agent-"
"compute** should be the last in the sequence, as this component emits "
"metering messages in order to send the samples to the collector."
msgstr ""
#: ../telemetry-troubleshooting-guide.rst:53
msgid "Notification agent"
msgstr ""
#: ../telemetry-troubleshooting-guide.rst:55
msgid ""
"In the Icehouse release of OpenStack a new service was introduced to be "
"responsible for consuming notifications that are coming from other OpenStack "
"services."
msgstr ""
#: ../telemetry-troubleshooting-guide.rst:59
msgid ""
"If the **ceilometer-agent-notification** service is not installed and "
"started, samples originating from notifications will not be generated. In "
"case of the lack of notification based samples, the state of this service "
"and the log file of Telemetry should be checked first."
msgstr ""
#: ../telemetry-troubleshooting-guide.rst:64
msgid ""
"For the list of meters that are originated from notifications, see the "
"`Telemetry Measurements Reference <http://docs.openstack.org/developer/"
"ceilometer/measurements.html>`__."
msgstr ""
#: ../telemetry-troubleshooting-guide.rst:71
msgid "Recommended ``auth_url`` to be used"
msgstr ""
#: ../telemetry-troubleshooting-guide.rst:73
msgid ""
"When using the Telemetry command line client, the credentials and the "
"``os_auth_url`` have to be set in order for the client to authenticate "
"against OpenStack Identity. For further details about the credentials that "
"have to be provided see the `Telemetry Python API <http://docs.openstack.org/"
"developer/python-ceilometerclient/>`__."
msgstr ""
#: ../telemetry-troubleshooting-guide.rst:79
msgid ""
"The service catalog provided by OpenStack Identity contains the URLs that "
"are available for authentication. The URLs have different ``port``\\s, based "
"on whether the type of the given URL is ``public``, ``internal`` or "
"``admin``."
msgstr ""
#: ../telemetry-troubleshooting-guide.rst:84
msgid ""
"OpenStack Identity is about to change API version from v2 to v3. The "
"``adminURL`` endpoint (which is available via the port: ``35357``) supports "
"only the v3 version, while the other two supports both."
msgstr ""
#: ../telemetry-troubleshooting-guide.rst:88
msgid ""
"The Telemetry command line client is not adapted to the v3 version of the "
"OpenStack Identity API. If the ``adminURL`` is used as ``os_auth_url``, the :"
"command:`ceilometer` command results in the following error message:"
msgstr ""
#: ../telemetry-troubleshooting-guide.rst:99
msgid ""
"Therefore when specifying the ``os_auth_url`` parameter on the command line "
"or by using environment variable, use the ``internalURL`` or ``publicURL``."
msgstr ""
#: ../telemetry-troubleshooting-guide.rst:103
msgid ""
"For more details check the bug report `Bug 1351841 <https://bugs.launchpad."
"net/python-ceilometerclient/+bug/1351841>`__."
msgstr ""
#: ../telemetry.rst:5
msgid "Telemetry"
msgstr ""
#: ../telemetry.rst:7
msgid ""
"Even in the cloud industry, providers must use a multi-step process for "
"billing. The required steps to bill for usage in a cloud environment are "
"metering, rating, and billing. Because the provider's requirements may be "
"far too specific for a shared solution, rating and billing solutions cannot "
"be designed in a common module that satisfies all. Providing users with "
"measurements on cloud services is required to meet the \"measured service\" "
"definition of cloud computing."
msgstr ""
#: ../telemetry.rst:15
msgid ""
"The Telemetry service was originally designed to support billing systems for "
"OpenStack cloud resources. This project only covers the metering portion of "
"the required processing for billing. This service collects information about "
"the system and stores it in the form of samples in order to provide data "
"about anything that can be billed."
msgstr ""
#: ../telemetry.rst:21
msgid ""
"In addition to system measurements, the Telemetry service also captures "
"event notifications triggered when various actions are executed in the "
"OpenStack system. This data is captured as Events and stored alongside "
"metering data."
msgstr ""
#: ../telemetry.rst:26
msgid ""
"The list of meters is continuously growing, which makes it possible to use "
"the data collected by Telemetry for different purposes, other than billing. "
"For example, the autoscaling feature in the Orchestration service can be "
"triggered by alarms this module sets and then gets notified within Telemetry."
msgstr ""
#: ../telemetry.rst:32
msgid ""
"The sections in this document contain information about the architecture and "
"usage of Telemetry. The first section contains a brief summary about the "
"system architecture used in a typical OpenStack deployment. The second "
"section describes the data collection mechanisms. You can also read about "
"alarming to understand how alarm definitions can be posted to Telemetry and "
"what actions can happen if an alarm is raised. The last section contains a "
"troubleshooting guide, which mentions error situations and possible "
"solutions to the problems."
msgstr ""
#: ../telemetry.rst:42
msgid ""
"You can retrieve the collected samples in three different ways: with the "
"REST API, with the command line interface, or with the Metering tab on an "
"OpenStack dashboard."
msgstr ""
#: ../ts-HTTP-bad-req-in-cinder-vol-log.rst:3
msgid "HTTP bad request in cinder volume log"
msgstr ""
# #-#-#-#-# ts-duplicate-3par-host.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# ts-eql-volume-size.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# ts-failed-attach-vol-after-detach.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# ts-failed-attach-vol-no-sysfsutils.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# ts-failed-connect-vol-FC-SAN.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# ts-HTTP-bad-req-in-cinder-vol-log.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# ts_multipath_warn.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# ts_no_emulator_x86_64.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# ts_non_existent_host.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# ts_non_existent_vlun.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# ts_vol_attach_miss_sg_scan.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../ts-HTTP-bad-req-in-cinder-vol-log.rst:6 ../ts-duplicate-3par-host.rst:6
#: ../ts-eql-volume-size.rst:6 ../ts-failed-attach-vol-after-detach.rst:6
#: ../ts-failed-attach-vol-no-sysfsutils.rst:6
#: ../ts-failed-connect-vol-FC-SAN.rst:6 ../ts_multipath_warn.rst:6
#: ../ts_no_emulator_x86_64.rst:6 ../ts_non_existent_host.rst:6
#: ../ts_non_existent_vlun.rst:6 ../ts_vol_attach_miss_sg_scan.rst:6
msgid "Problem"
msgstr ""
#: ../ts-HTTP-bad-req-in-cinder-vol-log.rst:8
msgid "These errors appear in the :file:`cinder-volume.log` file::"
msgstr ""
# #-#-#-#-# ts-duplicate-3par-host.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# ts-eql-volume-size.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# ts-failed-attach-vol-after-detach.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# ts-failed-attach-vol-no-sysfsutils.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# ts-failed-connect-vol-FC-SAN.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# ts-HTTP-bad-req-in-cinder-vol-log.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# ts_multipath_warn.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# ts_no_emulator_x86_64.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# ts_non_existent_host.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# ts_non_existent_vlun.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
# #-#-#-#-# ts_vol_attach_miss_sg_scan.pot (Cloud Administrator Guide 0.9) #-#-#-#-#
#: ../ts-HTTP-bad-req-in-cinder-vol-log.rst:41
#: ../ts-duplicate-3par-host.rst:18 ../ts-eql-volume-size.rst:132
#: ../ts-failed-attach-vol-after-detach.rst:11
#: ../ts-failed-attach-vol-no-sysfsutils.rst:21
#: ../ts-failed-connect-vol-FC-SAN.rst:24 ../ts_multipath_warn.rst:23
#: ../ts_no_emulator_x86_64.rst:12 ../ts_non_existent_host.rst:19
#: ../ts_non_existent_vlun.rst:18 ../ts_vol_attach_miss_sg_scan.rst:22
msgid "Solution"
msgstr ""
#: ../ts-HTTP-bad-req-in-cinder-vol-log.rst:43
msgid ""
"You need to update your copy of the :file:`hp_3par_fc.py` driver which "
"contains the synchronization code."
msgstr ""
#: ../ts-duplicate-3par-host.rst:3
msgid "Duplicate 3PAR host"
msgstr ""
#: ../ts-duplicate-3par-host.rst:8
msgid ""
"This error may be caused by a volume being exported outside of OpenStack "
"using a host name different from the system name that OpenStack expects. "
"This error could be displayed with the :term:`IQN` if the host was exported "
"using iSCSI::"
msgstr ""
#: ../ts-duplicate-3par-host.rst:20
msgid ""
"Change the 3PAR host name to match the one that OpenStack expects. The 3PAR "
"host constructed by the driver uses just the local hostname, not the fully "
"qualified domain name (FQDN) of the compute host. For example, if the FQDN "
"was *myhost.example.com*, just *myhost* would be used as the 3PAR hostname. "
"IP addresses are not allowed as host names on the 3PAR storage server."
msgstr ""
#: ../ts-eql-volume-size.rst:3
msgid ""
"Addressing discrepancies in reported volume sizes for EqualLogic storage"
msgstr ""
#: ../ts-eql-volume-size.rst:8
msgid ""
"There is a discrepancy between both the actual volume size in EqualLogic "
"(EQL) storage and the image size in the Image service, with what is reported "
"OpenStack database. This could lead to confusion if a user is creating "
"volumes from an image that was uploaded from an EQL volume (through the "
"Image service). The image size is slightly larger than the target volume "
"size; this is because EQL size reporting accounts for additional storage "
"used by EQL for internal volume metadata."
msgstr ""
#: ../ts-eql-volume-size.rst:16
msgid "To reproduce the issue follow the steps in the following procedure."
msgstr ""
#: ../ts-eql-volume-size.rst:18
msgid ""
"This procedure assumes that the EQL array is provisioned, and that "
"appropriate configuration settings have been included in :file:`/etc/cinder/"
"cinder.conf` to connect to the EQL array."
msgstr ""
#: ../ts-eql-volume-size.rst:22
msgid ""
"Create a new volume. Note the ID and size of the volume. In the following "
"example, the ID and size are ``74cf9c04-4543-47ae-a937-a9b7c6c921e7`` and "
"``1``, respectively:"
msgstr ""
#: ../ts-eql-volume-size.rst:48
msgid ""
"Verify the volume size on the EQL array by using its command-line interface."
msgstr ""
#: ../ts-eql-volume-size.rst:51
msgid ""
"The actual size (``VolReserve``) is 1.01 GB. The EQL Group Manager should "
"also report a volume size of 1.01 GB::"
msgstr ""
#: ../ts-eql-volume-size.rst:79
msgid "Create a new image from this volume:"
msgstr ""
#: ../ts-eql-volume-size.rst:101
msgid ""
"When you uploaded the volume in the previous step, the Image service "
"reported the volume's size as ``1`` (GB). However, when using ``glance image-"
"list`` to list the image, the displayed size is 1085276160 bytes, or roughly "
"1.01 GB:"
msgstr ""
#: ../ts-eql-volume-size.rst:107
msgid "Container Format"
msgstr ""
#: ../ts-eql-volume-size.rst:107
msgid "Disk Format"
msgstr ""
#: ../ts-eql-volume-size.rst:107
msgid "Size"
msgstr ""
#: ../ts-eql-volume-size.rst:110
msgid "*1085276160*"
msgstr ""
#: ../ts-eql-volume-size.rst:110
msgid "bare"
msgstr ""
#: ../ts-eql-volume-size.rst:110
msgid "image\\_from\\_volume1"
msgstr ""
#: ../ts-eql-volume-size.rst:115
msgid ""
"Create a new volume using the previous image (``image_id 3020a21d-ba37-4495 "
"-8899-07fc201161b9`` in this example) as the source. Set the target volume "
"size to 1 GB; this is the size reported by the ``cinder`` tool when you "
"uploaded the volume to the Image service:"
msgstr ""
#: ../ts-eql-volume-size.rst:128
msgid ""
"The attempt to create a new volume based on the size reported by the "
"``cinder`` tool will then fail."
msgstr ""
#: ../ts-eql-volume-size.rst:134
msgid ""
"To work around this problem, increase the target size of the new image to "
"the next whole number. In the problem example, you created a 1 GB volume to "
"be used as volume-backed image, so a new volume using this volume-backed "
"image should use a size of 2 GB:"
msgstr ""
#: ../ts-eql-volume-size.rst:165
msgid ""
"The dashboard suggests a suitable size when you create a new volume based on "
"a volume-backed image."
msgstr ""
#: ../ts-eql-volume-size.rst:168
msgid "You can then check this new volume into the EQL array::"
msgstr ""
#: ../ts-failed-attach-vol-after-detach.rst:3
msgid "Failed to attach volume after detaching"
msgstr ""
#: ../ts-failed-attach-vol-after-detach.rst:8
msgid "Failed to attach a volume after detaching the same volume."
msgstr ""
#: ../ts-failed-attach-vol-after-detach.rst:13
msgid ""
"You must change the device name on the ``nova-attach`` command. The VM might "
"not clean up after a ``nova-detach`` command runs. This example shows how "
"the ``nova-attach`` command fails when you use the ``vdb``, ``vdc``, or "
"``vdd`` device names::"
msgstr ""
#: ../ts-failed-attach-vol-after-detach.rst:31
msgid ""
"You might also have this problem after attaching and detaching the same "
"volume from the same VM with the same mount point multiple times. In this "
"case, restart the KVM host."
msgstr ""
#: ../ts-failed-attach-vol-no-sysfsutils.rst:3
msgid "Failed to attach volume, systool is not installed"
msgstr ""
#: ../ts-failed-attach-vol-no-sysfsutils.rst:8
msgid ""
"This warning and error occurs if you do not have the required ``sysfsutils`` "
"package installed on the compute node::"
msgstr ""
#: ../ts-failed-attach-vol-no-sysfsutils.rst:23
msgid ""
"Run the following command on the compute node to install the ``sysfsutils`` "
"packages::"
msgstr ""
#: ../ts-failed-connect-vol-FC-SAN.rst:3
msgid "Failed to connect volume in FC SAN"
msgstr ""
#: ../ts-failed-connect-vol-FC-SAN.rst:8
msgid ""
"The compute node failed to connect to a volume in a Fibre Channel (FC) SAN "
"configuration. The WWN may not be zoned correctly in your FC SAN that links "
"the compute host to the storage array::"
msgstr ""
#: ../ts-failed-connect-vol-FC-SAN.rst:26
msgid ""
"The network administrator must configure the FC SAN fabric by correctly "
"zoning the WWN (port names) from your compute node HBAs."
msgstr ""
#: ../ts_cinder_config.rst:3
msgid "Troubleshoot the Block Storage configuration"
msgstr ""
#: ../ts_cinder_config.rst:5
msgid ""
"Most Block Storage errors are caused by incorrect volume configurations that "
"result in volume creation failures. To resolve these failures, review these "
"logs:"
msgstr ""
#: ../ts_cinder_config.rst:9
msgid "cinder-api log (:file:`/var/log/cinder/api.log`)"
msgstr ""
#: ../ts_cinder_config.rst:11
msgid "cinder-volume log (:file:`/var/log/cinder/volume.log`)"
msgstr ""
#: ../ts_cinder_config.rst:13
msgid ""
"The cinder-api log is useful for determining if you have endpoint or "
"connectivity issues. If you send a request to create a volume and it fails, "
"review the cinder-api log to determine whether the request made it to the "
"Block Storage service. If the request is logged and you see no errors or "
"tracebacks, check the cinder-volume log for errors or tracebacks."
msgstr ""
#: ../ts_cinder_config.rst:22
msgid "Create commands are listed in the ``cinder-api`` log."
msgstr ""
#: ../ts_cinder_config.rst:24
msgid ""
"These entries in the :file:`cinder.openstack.common.log` file can be used to "
"assist in troubleshooting your block storage configuration."
msgstr ""
#: ../ts_cinder_config.rst:102
msgid ""
"These common issues might occur during configuration. To correct, use these "
"suggested solutions."
msgstr ""
#: ../ts_cinder_config.rst:105
msgid "Issues with ``state_path`` and ``volumes_dir`` settings."
msgstr ""
#: ../ts_cinder_config.rst:107
msgid ""
"The OpenStack Block Storage uses ``tgtd`` as the default iSCSI helper and "
"implements persistent targets. This means that in the case of a ``tgt`` "
"restart or even a node reboot your existing volumes on that node will be "
"restored automatically with their original :term:`IQN`."
msgstr ""
#: ../ts_cinder_config.rst:112
msgid ""
"In order to make this possible the iSCSI target information needs to be "
"stored in a file on creation that can be queried in case of restart of the "
"``tgt daemon``. By default, Block Storage uses a ``state_path`` variable, "
"which if installing with Yum or APT should be set to :file:`/var/lib/cinder/"
"`. The next part is the ``volumes_dir`` variable, by default this just "
"simply appends a :file:`volumes` directory to the ``state_path``. The result "
"is a file-tree :file:`/var/lib/cinder/volumes/`."
msgstr ""
#: ../ts_cinder_config.rst:121
msgid ""
"While the installer should handle all this, it can go wrong. If you have "
"trouble creating volumes and this directory does not exist you should see an "
"error message in the ``cinder-volume`` log indicating that the "
"``volumes_dir`` does not exist, and it should provide information about "
"which path it was looking for."
msgstr ""
#: ../ts_cinder_config.rst:127
msgid "The persistent tgt include file."
msgstr ""
#: ../ts_cinder_config.rst:129
msgid ""
"Along with the ``volumes_dir`` option, the iSCSI target driver also needs to "
"be configured to look in the correct place for the persistent files. This is "
"a simple entry in the :file:`/etc/tgt/conf.d` file that you should have set "
"when you installed OpenStack. If issues occur, verify that you have a :file:"
"`/etc/tgt/conf.d/cinder.conf` file."
msgstr ""
#: ../ts_cinder_config.rst:135
msgid "If the file is not present, create it with this command"
msgstr ""
#: ../ts_cinder_config.rst:141
msgid "No sign of attach call in the ``cinder-api`` log."
msgstr ""
#: ../ts_cinder_config.rst:143
msgid ""
"This is most likely going to be a minor adjustment to your :file:`nova.conf` "
"file. Make sure that your :file:`nova.conf` has this entry"
msgstr ""
#: ../ts_cinder_config.rst:150
msgid ""
"Failed to create iscsi target error in the :file:`cinder-volume.log` file."
msgstr ""
#: ../ts_cinder_config.rst:159
msgid ""
"You might see this error in :file:`cinder-volume.log` after trying to create "
"a volume that is 1 GB. To fix this issue:"
msgstr ""
#: ../ts_cinder_config.rst:162
msgid ""
"Change contents of the :file:`/etc/tgt/targets.conf` from ``include /etc/tgt/"
"conf.d/*.conf`` to ``include /etc/tgt/conf.d/cinder_tgt.conf``, as follows:"
msgstr ""
#: ../ts_cinder_config.rst:172
msgid ""
"Restart ``tgt`` and ``cinder-*`` services so they pick up the new "
"configuration."
msgstr ""
#: ../ts_multipath_warn.rst:3
msgid "Multipath call failed exit"
msgstr ""
#: ../ts_multipath_warn.rst:8
msgid ""
"Multipath call failed exit. This warning occurs in the Compute log if you do "
"not have the optional ``multipath-tools`` package installed on the compute "
"node. This is an optional package and the volume attachment does work "
"without the multipath tools installed. If the ``multipath-tools`` package is "
"installed on the compute node, it is used to perform the volume attachment. "
"The IDs in your message are unique to your system."
msgstr ""
#: ../ts_multipath_warn.rst:25
msgid ""
"Run the following command on the compute node to install the ``multipath-"
"tools`` packages."
msgstr ""
#: ../ts_no_emulator_x86_64.rst:3
msgid "Cannot find suitable emulator for x86_64"
msgstr ""
#: ../ts_no_emulator_x86_64.rst:8
msgid ""
"When you attempt to create a VM, the error shows the VM is in the ``BUILD`` "
"then ``ERROR`` state."
msgstr ""
#: ../ts_no_emulator_x86_64.rst:14
msgid ""
"On the KVM host, run ``cat /proc/cpuinfo``. Make sure the ``vmx`` or ``svm`` "
"flags are set."
msgstr ""
#: ../ts_no_emulator_x86_64.rst:17
msgid ""
"Follow the instructions in the `enabling KVM section <http://docs.openstack."
"org/liberty/config-reference/content/kvm.html#section_kvm_enable>`__ of the "
"Configuration Reference to enable hardware virtualization support in your "
"BIOS."
msgstr ""
#: ../ts_non_existent_host.rst:3
msgid "Non-existent host"
msgstr ""
#: ../ts_non_existent_host.rst:8
msgid ""
"This error could be caused by a volume being exported outside of OpenStack "
"using a host name different from the system name that OpenStack expects. "
"This error could be displayed with the :term:`IQN` if the host was exported "
"using iSCSI."
msgstr ""
#: ../ts_non_existent_host.rst:21
msgid ""
"Host names constructed by the driver use just the local hostname, not the "
"fully qualified domain name (FQDN) of the Compute host. For example, if the "
"FQDN was **myhost.example.com**, just **myhost** would be used as the 3PAR "
"hostname. IP addresses are not allowed as host names on the 3PAR storage "
"server."
msgstr ""
#: ../ts_non_existent_vlun.rst:3
msgid "Non-existent VLUN"
msgstr ""
#: ../ts_non_existent_vlun.rst:8
msgid ""
"This error occurs if the 3PAR host exists with the correct host name that "
"the OpenStack Block Storage drivers expect but the volume was created in a "
"different Domain."
msgstr ""
#: ../ts_non_existent_vlun.rst:20
msgid ""
"The ``hpe3par_domain`` configuration items either need to be updated to use "
"the domain the 3PAR host currently resides in, or the 3PAR host needs to be "
"moved to the domain that the volume was created in."
msgstr ""
#: ../ts_vol_attach_miss_sg_scan.rst:3
msgid "Failed to Attach Volume, Missing sg_scan"
msgstr ""
#: ../ts_vol_attach_miss_sg_scan.rst:8
msgid ""
"Failed to attach volume to an instance, ``sg_scan`` file not found. This "
"warning and error occur when the sg3-utils package is not installed on the "
"compute node. The IDs in your message are unique to your system:"
msgstr ""
#: ../ts_vol_attach_miss_sg_scan.rst:25
msgid "Run this command on the compute node to install the sg3-utils package:"
msgstr ""