diff --git a/doc/admin-guide-cloud/blockstorage/section_backup-block-storage-disks.xml b/doc/admin-guide-cloud/blockstorage/section_backup-block-storage-disks.xml
index 1b0b7fe9dd..818ed7a003 100644
--- a/doc/admin-guide-cloud/blockstorage/section_backup-block-storage-disks.xml
+++ b/doc/admin-guide-cloud/blockstorage/section_backup-block-storage-disks.xml
@@ -77,7 +77,7 @@
 /dev/cinder-volumes/VOLUME_NAME. The size does not have to be the same as the volume of the snapshot. The
-size parameter
+--size parameter
 defines the space that LVM reserves for the snapshot volume. As a precaution, the size should be the same as that of the original
diff --git a/doc/admin-guide-cloud/blockstorage/section_increase-api-throughput.xml b/doc/admin-guide-cloud/blockstorage/section_increase-api-throughput.xml
index 65894a746e..09e6be663c 100644
--- a/doc/admin-guide-cloud/blockstorage/section_increase-api-throughput.xml
+++ b/doc/admin-guide-cloud/blockstorage/section_increase-api-throughput.xml
@@ -20,7 +20,7 @@
 cinder-api. To do so, use the Block Storage API service option
-osapi_volume_workers. This option allows
+. This option allows
 you to specify the number of API service workers (or OS processes) to launch for the Block Storage API service. To configure this option, open the
diff --git a/doc/admin-guide-cloud/ch_compute.xml b/doc/admin-guide-cloud/ch_compute.xml
index 34ffa74bfa..ce8be02758 100644
--- a/doc/admin-guide-cloud/ch_compute.xml
+++ b/doc/admin-guide-cloud/ch_compute.xml
@@ -162,7 +162,7 @@
 but you can configure them by editing the policy.json file for user roles. For example, a rule can be defined so that a user must
-have the admin role in order to be
+have the admin role in order to be
 able to allocate a public IP address. A tenant limits users' access to particular images. Each user is assigned a user name and password.
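As a sketch of the rule described in the ch_compute.xml hunk above — requiring the admin role in order to allocate a public IP address — a policy.json entry might look like the following. The rule name and the admin_api alias are illustrative assumptions, not taken from the patch:

```
{
    "compute:allocate_floating_ip": "rule:admin_api"
}
```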
 Keypairs
@@ -210,8 +210,8 @@
 different operating systems as Ext4 for Linux distributions, VFAT for non-Linux and non-Windows operating systems, and NTFS for Windows. However, it is possible to specify
-any other filesystem type by using virt_mkfs or
-default_ephemeral_format configuration options.
+any other filesystem type by using or
+configuration options.
 For example, the cloud-init package included into an Ubuntu's stock cloud image, by default,
@@ -396,7 +396,7 @@
 provides five flavors. By default, these are configurable by admin users, however that behavior can be changed by redefining the access controls for
-compute_extension:flavormanage
+compute_extension:flavormanage
 in /etc/nova/policy.json on the compute-api server. For a list of flavors that are available on your
diff --git a/doc/admin-guide-cloud/compute/section_compute-configure-migrations.xml b/doc/admin-guide-cloud/compute/section_compute-configure-migrations.xml
index bca228011a..7ea3defc42 100644
--- a/doc/admin-guide-cloud/compute/section_compute-configure-migrations.xml
+++ b/doc/admin-guide-cloud/compute/section_compute-configure-migrations.xml
@@ -413,7 +413,7 @@ HostA: 921515008 101921792 772783104 12% /var/lib/nova/instances
 To use block migration, you must use the
-==block-migrate parameter with the live migration
+--block-migrate parameter with the live migration
 command.
diff --git a/doc/admin-guide-cloud/compute/section_compute-instance-building-blocks.xml b/doc/admin-guide-cloud/compute/section_compute-instance-building-blocks.xml
index 1d61f81799..95f368993a 100644
--- a/doc/admin-guide-cloud/compute/section_compute-instance-building-blocks.xml
+++ b/doc/admin-guide-cloud/compute/section_compute-instance-building-blocks.xml
@@ -69,7 +69,7 @@ +-----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
 By default, administrative users can configure the flavors.
 You can change this behavior by redefining the access controls for
-compute_extension:flavormanage in
+compute_extension:flavormanage in
 /etc/nova/policy.json on the compute-api server.
diff --git a/doc/admin-guide-cloud/compute/section_compute-networking-nova.xml b/doc/admin-guide-cloud/compute/section_compute-networking-nova.xml
index b047d7a9b2..e7a84bea7b 100644
--- a/doc/admin-guide-cloud/compute/section_compute-networking-nova.xml
+++ b/doc/admin-guide-cloud/compute/section_compute-networking-nova.xml
@@ -621,7 +621,7 @@ echo 'Extra user data here'
 Using multinic
 In order to use multinic, create two networks, and attach them
-to the tenant (named project on the command
+to the tenant (named project on the command
 line):
 $ nova network-create first-net --fixed-range-v4 20.20.0.0/24 --project-id $your-project
 $ nova network-create second-net --fixed-range-v4 20.20.10.0/24 --project-id $your-project
diff --git a/doc/admin-guide-cloud/compute/section_compute-recover-nodes.xml b/doc/admin-guide-cloud/compute/section_compute-recover-nodes.xml
index e5a7a3fff6..38d503af32 100644
--- a/doc/admin-guide-cloud/compute/section_compute-recover-nodes.xml
+++ b/doc/admin-guide-cloud/compute/section_compute-recover-nodes.xml
@@ -105,17 +105,17 @@ task_state: NULL
-Set the libvirt-qemu UID in
+Set the libvirt-qemu UID in
 /etc/passwd to the same number on all hosts (for example, 119).
-Set the nova group in
+Set the nova group in
 /etc/group file to the same number on all hosts (for example, 120).
-Set the libvirtd group in
+Set the libvirtd group in
 /etc/group file to the same number on all hosts (for example, 119).
@@ -129,7 +129,7 @@ task_state: NULL
 # find / -gid 120 -exec chgrp nova {} \;
-Repeat all steps for the libvirt-qemu
+Repeat all steps for the libvirt-qemu
 files, if required.
@@ -172,7 +172,7 @@ task_state: NULL
 Create an active iSCSI session from the SAN to the cloud
-controller (used for the cinder-volumes
+controller (used for the cinder-volumes
 LVM's VG).
@@ -259,7 +259,7 @@ task_state: NULL
 Instance state at this stage depends on whether you added an /etc/fstab entry for that volume. Images built with the cloud-init package
-remain in a pending state, while others
+remain in a pending state, while others
 skip the missing volume and start. This step is performed in order to ask Compute to reboot every instance, so that the stored state is preserved.
 It does not matter if not all
@@ -302,7 +302,7 @@ done < $volumes_tmp_file
 follow these tips:
-Use the errors=remount parameter in
+Use the errors=remount parameter in
 the fstab file to prevent data corruption. This parameter will cause the system to disable the ability
diff --git a/doc/admin-guide-cloud/compute/section_compute-rootwrap.xml b/doc/admin-guide-cloud/compute/section_compute-rootwrap.xml
index 26968a67f9..38501e663e 100644
--- a/doc/admin-guide-cloud/compute/section_compute-rootwrap.xml
+++ b/doc/admin-guide-cloud/compute/section_compute-rootwrap.xml
@@ -39,7 +39,7 @@
 file. Because it's in the trusted security path, it must be owned and writable by only the root user. The file's location is specified in both the sudoers entry and in the nova.conf configuration
-file with the rootwrap_config=entry parameter.
+file with the rootwrap_config=entry parameter.
 The rootwrap.conf file uses an INI file format with these sections and parameters:
@@ -70,7 +70,7 @@
 Their location is specified in the rootwrap.conf file. Filter definition files use an INI file format with a
-[Filters] section and several lines, each with a
+[Filters] section and several lines, each with a
 unique parameter name, which should be different for each filter you define:
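A filter definition file of the kind described above might look like the following sketch; the kpartx entry is an illustrative example of the [Filters] format, not taken from the patch:

```
[Filters]
# filter name: FilterClass, command matched, user to run it as
kpartx: CommandFilter, kpartx, root
```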
diff --git a/doc/admin-guide-cloud/compute/section_compute-system-admin.xml b/doc/admin-guide-cloud/compute/section_compute-system-admin.xml
index 1ec566f8d3..f055a2e75b 100644
--- a/doc/admin-guide-cloud/compute/section_compute-system-admin.xml
+++ b/doc/admin-guide-cloud/compute/section_compute-system-admin.xml
@@ -238,13 +238,13 @@ inject_password=true
 /etc/nova/nova.conf file:
 log-config=/etc/nova/logging.conf
-To change the logging level, add DEBUG,
-INFO, WARNING, or
-ERROR as a parameter.
+To change the logging level, add DEBUG,
+INFO, WARNING, or
+ERROR as a parameter.
 The logging configuration file is an INI-style configuration file, which must contain a section called
-logger_nova. This controls the behavior of
+logger_nova. This controls the behavior of
 the logging facility in the nova-* services. For example:
 [logger_nova]
@@ -255,7 +255,7 @@ qualname = nova
 (which is less verbose than the default DEBUG setting).
 For more about the logging configuration syntax, including the
-handlers and quaname
+handlers and qualname
 variables, see the Python documentation on logging configuration files.
@@ -362,14 +362,14 @@ local0.error @@172.20.1.43:1024
 On a compute node, edit the /etc/nova/nova.conf file:
-In the [serial_console] section,
+In the [serial_console] section,
 enable the serial console:
 [serial_console]
 ...
 enabled = true
-In the [serial_console] section,
+In the [serial_console] section,
 configure the serial console proxy similar to graphical console proxies:
 [serial_console]
@@ -525,14 +525,14 @@ ws = websocket.create_connection(
 +-----------+------------+-----+-----------+---------+
-cpu: Number of CPUs
+cpu: Number of CPUs
-memory_mb: Total amount of memory,
+memory_mb: Total amount of memory,
 in MB
-disk_gb: Total amount of space for
+disk_gb: Total amount of space for
 NOVA-INST-DIR/instances, in GB
diff --git a/doc/admin-guide-cloud/networking/section_networking-adv-config.xml b/doc/admin-guide-cloud/networking/section_networking-adv-config.xml
index 6e180ab964..690b457f6e 100644
--- a/doc/admin-guide-cloud/networking/section_networking-adv-config.xml
+++ b/doc/admin-guide-cloud/networking/section_networking-adv-config.xml
@@ -17,7 +17,7 @@
 The neutron configuration file contains the common neutron configuration options. The plug-in configuration file contains the plug-in specific options. The plug-in that runs on the service is loaded through the
-core_plugin configuration option. In some cases, a plug-in
+core_plugin configuration option. In some cases, a plug-in
 might have an agent that performs the actual networking. Most plug-ins require an SQL database. After you install and start the database server, set a password for the root account and delete the anonymous accounts:
diff --git a/doc/admin-guide-cloud/networking/section_networking-scenarios.xml b/doc/admin-guide-cloud/networking/section_networking-scenarios.xml
index d6da3c9bbe..756a75c7ba 100644
--- a/doc/admin-guide-cloud/networking/section_networking-scenarios.xml
+++ b/doc/admin-guide-cloud/networking/section_networking-scenarios.xml
@@ -691,7 +691,7 @@ physical_interface_mappings = physnet2:eth1
 used simultaneously. This section describes different ML2 plug-in and agent configurations with different type drivers and mechanism drivers.
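A minimal ml2_conf.ini sketch for one such combination — a GRE type driver with the Open vSwitch mechanism driver — might look like this; the values are illustrative assumptions, not taken from the patch:

```
[ml2]
type_drivers = gre
tenant_network_types = gre
mechanism_drivers = openvswitch

[ml2_type_gre]
tunnel_id_ranges = 1:1000
```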
-Currently, there is no need to define SEGMENTATION_ID
 network provider attribute for GRE and VXLAN network types. The choice can be delegated to Networking, in such case ML2 plug-in tries to find a network in tenant network pools which respects specified provider network attributes.
diff --git a/doc/admin-guide-cloud/networking/section_networking_adv_features.xml b/doc/admin-guide-cloud/networking/section_networking_adv_features.xml
index 4290c8e82a..740847e747 100644
--- a/doc/admin-guide-cloud/networking/section_networking_adv_features.xml
+++ b/doc/admin-guide-cloud/networking/section_networking_adv_features.xml
@@ -1459,7 +1459,7 @@
 configuration option to a non-zero value exclusively on a node designated for back-end status synchronization.
-The fields=status parameter in Networking API requests
+The fields=status parameter in Networking API requests
 always triggers an explicit query to the NSX back end, even when you enable asynchronous state synchronization. For example, GET /v2.0/networks/NET_ID?fields=status&fields=name.
diff --git a/doc/admin-guide-cloud/networking/section_networking_config-agents.xml b/doc/admin-guide-cloud/networking/section_networking_config-agents.xml
index 845d81cfbc..c4d7754edb 100644
--- a/doc/admin-guide-cloud/networking/section_networking_config-agents.xml
+++ b/doc/admin-guide-cloud/networking/section_networking_config-agents.xml
@@ -505,7 +505,7 @@ interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
 Project section of the dashboard.
 Change the option
-to True in the
+to True in the
 local_settings file (on Fedora, RHEL, and CentOS: /etc/openstack-dashboard/local_settings,
diff --git a/doc/admin-guide-cloud/telemetry/section_telemetry-data-collection.xml b/doc/admin-guide-cloud/telemetry/section_telemetry-data-collection.xml
index 0f9d538a46..7a5b3502d4 100644
--- a/doc/admin-guide-cloud/telemetry/section_telemetry-data-collection.xml
+++ b/doc/admin-guide-cloud/telemetry/section_telemetry-data-collection.xml
@@ -454,7 +454,7 @@
 Pipeline configuration by default, is stored in a separate configuration file, called pipeline.yaml, next to the ceilometer.conf file. The pipeline
-configuration file can be set in the pipeline_cfg_file
+configuration file can be set in the
 parameter listed in the Description of configuration options for api table section in the
@@ -642,7 +642,7 @@ sinks:
 name: "disk.kilobytes"
 unit: "KB"
 scale: "1.0 / 1024.0"
-With the map_from and map_to
+With the and
 :
 transformers:
  - name: "unit_conversion"
@@ -660,27 +660,27 @@ sinks:
 Aggregator transformer
 A transformer that sums up the incoming samples until enough samples have come in or a timeout has been reached.
-Timeout can be specified with the retention_time parameter.
+Timeout can be specified with the parameter.
 If we want to flush the aggregation after a set number of samples have been aggregated, we can specify the size parameter. The volume of the created sample is the sum of the volumes of samples that came
-into the transformer. Samples can be aggregated by the attributes project_id
-, user_id and resource_metadata.
+into the transformer. Samples can be aggregated by the attributes , and .
 To aggregate by the chosen attributes, specify them in the configuration and set which value of the attribute to take for the new sample (first to take the first sample's attribute, last to take the last sample's attribute, and drop to discard the attribute).
-To aggregate 60s worth of samples by resource_metadata
-and keep the resource_metadata of the latest received
+To aggregate 60s worth of samples by
+and keep the of the latest received
 sample:
 transformers:
  - name: "aggregator"
    parameters:
      retention_time: 60
      resource_metadata: last
-To aggregate each 15 samples by user_id and resource_metadata
-and keep the user_id of the first received sample and
-drop the resource_metadata:
+To aggregate each 15 samples by and and keep the of the first received sample and
+drop the :
 transformers:
  - name: "aggregator"
    parameters:
@@ -772,7 +772,7 @@ sinks:
 Multiple ceilometer-collector process can be run at a time. It is also supported to start multiple worker threads per collector process.
-The collector_workers configuration option has to be modified in the
+The configuration option has to be modified in the
 collector section of the ceilometer.conf
@@ -784,7 +784,7 @@ sinks:
 Database dispatcher
 When the database dispatcher is configured as data store, you have the option to set
-a time_to_live parameter (ttl) for samples. By default the time to
+a parameter (ttl) for samples. By default the time to
 live value for samples is set to -1, which means that they are kept in the database forever. The time to live value is specified in seconds. Each sample has a time stamp, and the
diff --git a/doc/admin-guide-cloud/telemetry/section_telemetry-data-retrieval.xml b/doc/admin-guide-cloud/telemetry/section_telemetry-data-retrieval.xml
index fe1e8743ba..7d4a0203e4 100644
--- a/doc/admin-guide-cloud/telemetry/section_telemetry-data-retrieval.xml
+++ b/doc/admin-guide-cloud/telemetry/section_telemetry-data-retrieval.xml
@@ -47,16 +47,16 @@
 following items:
-field
+field
-op
+op
-value
+value
-type
+type
 Regardless of the endpoint on which the filter is applied on, it will
@@ -134,16 +134,16 @@ operation.
 Complex query supports specifying a list of
-orderby expressions.
 This means that the result of the query can be ordered based on the field names provided in this list. When multiple keys are defined for the ordering, these will be applied sequentially in the order of the specification. The second expression will be applied on the groups for which the values of the first expression are the same. The ordering can be ascending or descending. The number of returned items can be bounded using the
-limit option.
+ option.
-The filter, orderby
-and limit fields are optional.
+The filter, orderby
+and limit fields are optional.
 As opposed to the simple query, complex query is available via a separate API endpoint. For more information see the
@@ -188,7 +188,7 @@
-The aggregate.param parameter is
+The parameter is
 required.
@@ -247,7 +247,7 @@
 Similarly to other OpenStack command line clients, the ceilometer client uses OpenStack Identity for authentication. The proper credentials and
-auth_url parameter have to be defined via command line
+--auth_url parameter have to be defined via command line
 parameters or environment variables. This section provides some examples without the aim of completeness. These commands can be used for instance for validating an installation of Telemetry.
@@ -277,7 +277,7 @@
 in an ascending order based on the name of the meter. Samples are collected for each meter that is present in the list of meters, except in case of instances that are not running or deleted from the OpenStack Compute
-database. If an instance is no more existing and there is time_to_live
+database. If an instance is no more existing and there is
 value is set in the ceilometer.conf configuration file, then a group of samples are deleted in each expiration cycle. When the last sample is deleted for a meter, the database can be cleaned up by running
@@ -362,7 +362,7 @@
 --orderby
-Contains the list of orderby expressions
+Contains the list of orderby expressions
 in the form of: [{field_name: direction}, {field_name: direction}].
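The shape of a complex query body — an optional filter, a list of orderby expressions, and a limit — can be sketched in plain JSON. The meter name and field values below are illustrative, not taken from the patch:

```python
import json

# Sketch of a complex query payload: filter, orderby and limit are all
# optional fields; orderby keys are applied sequentially, with later
# keys breaking ties among rows that the earlier keys left equal.
query = {
    "filter": {"=": {"meter": "cpu_util"}},
    "orderby": [{"timestamp": "desc"}, {"resource_id": "asc"}],
    "limit": 10,
}
body = json.dumps(query)
print(body)
```

The same structure is what the --orderby CLI option described below expects for its expression list.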
@@ -403,7 +403,7 @@
 instance with the proper credentials:
 >>> import ceilometerclient.client
 >>> cclient = ceilometerclient.client.get_client(VERSION, username=USERNAME, password=PASSWORD, tenant_name=PROJECT_NAME, auth_url=AUTH_URL)
-The VERSION parameter can be 1 or
+The VERSION parameter can be 1 or
 2, specifying the API version to be used. The method calls look like the following:
 >>> cclient.meters.list()
@@ -493,7 +493,7 @@ notifier:
-per_meter_topic
+
 The value of it is 1. It is used for publishing the samples on additional metering_topic.sample_name topic queue
@@ -501,7 +501,7 @@
-policy
+
 It is used for configuring the behavior for the case, when the publisher fails to send the samples, where the possible predefined
@@ -524,7 +524,7 @@
 Used for creating an in-memory queue and retrying to send the samples on the queue on the next samples publishing period (the queue length
-can be configured with max_queue_length, where
+can be configured with , where
 1024 is the default value).
@@ -535,7 +535,7 @@
 The following options are available for the file publisher:
-max_bytes
+
 When this option is greater than zero, it will cause a rollover. When the size is about to be exceeded, the file is closed and a new file is silently
@@ -543,7 +543,7 @@
-backup_count
+
 If this value is non-zero, an extension will be appended to the filename of the old log, as '.1', '.2', and so forth until the specified value is reached.
diff --git a/doc/admin-guide-cloud/telemetry/section_telemetry-troubleshooting-guide.xml b/doc/admin-guide-cloud/telemetry/section_telemetry-troubleshooting-guide.xml
index 0f9c2717a2..1c16697ca2 100644
--- a/doc/admin-guide-cloud/telemetry/section_telemetry-troubleshooting-guide.xml
+++ b/doc/admin-guide-cloud/telemetry/section_telemetry-troubleshooting-guide.xml
@@ -76,7 +76,7 @@
 Python API reference of Telemetry. The service catalog provided by OpenStack Identity contains the available URLs that are available for authentication.
-The URLs contain different ports, based on
+The URLs contain different ports, based on
 that the type of the given URL is public, internal or admin.
 OpenStack Identity is about to change API version from v2 to v3.
diff --git a/doc/common/section_cli_cinder_manage_volumes.xml b/doc/common/section_cli_cinder_manage_volumes.xml
index e8d80d5aa0..c6337de8db 100644
--- a/doc/common/section_cli_cinder_manage_volumes.xml
+++ b/doc/common/section_cli_cinder_manage_volumes.xml
@@ -287,7 +287,7 @@
 parameter.
-While the auth_key property is
+While the auth_key property is
 visible in the output of cinder transfer-create VOLUME_ID, it will not be available in subsequent
diff --git a/doc/common/section_cli_nova_customize_flavors.xml b/doc/common/section_cli_nova_customize_flavors.xml
index 3d26913846..4592306f6b 100644
--- a/doc/common/section_cli_nova_customize_flavors.xml
+++ b/doc/common/section_cli_nova_customize_flavors.xml
@@ -134,7 +134,7 @@
 and a quota for maximum allowed bandwidth:
-cpu_shares. Specifies the proportional weighted share
+. Specifies the proportional weighted share
 for the domain. If this element is omitted, the service defaults to the OS provided defaults. There is no unit for
@@ -145,7 +145,7 @@
 value 1024.
-cpu_period. Specifies the enforcement interval (unit:
+. Specifies the enforcement interval (unit:
 microseconds) for QEMU and LXC hypervisors. Within a period, each VCPU of the domain is not allowed to consume more
@@ -155,17 +155,17 @@
 value 0 means no value.
-cpu_limit. Specifies the upper limit for VMware machine CPU allocation in MHz.
+. Specifies the upper limit for VMware machine CPU allocation in MHz.
 This parameter ensures that a machine never uses more than the defined amount of CPU time. It can be used to enforce a limit on the machine's CPU performance.
-cpu_reservation. Specifies the guaranteed minimum CPU reservation in MHz for VMware.
+. Specifies the guaranteed minimum CPU reservation in MHz for VMware.
 This means that if needed, the machine will definitely get allocated the reserved amount of CPU cycles.
-cpu_quota. Specifies the maximum allowed bandwidth
+. Specifies the maximum allowed bandwidth
 (unit: microseconds). A domain with a negative-value quota indicates that the domain has infinite bandwidth, which means
diff --git a/doc/common/section_cli_nova_manage_images.xml b/doc/common/section_cli_nova_manage_images.xml
index 0b3ff1aea6..e920a87851 100644
--- a/doc/common/section_cli_nova_manage_images.xml
+++ b/doc/common/section_cli_nova_manage_images.xml
@@ -150,7 +150,7 @@
 name or ID of a source instance, and the name of the resulting back-up image, it requires the backup-type argument with the possible values daily or
-weekly, and the rotation
+weekly, and the rotation
 argument. The rotation number is an integer standing for the number of back-up images (associated with a single instance) to keep around. If this number exceeds the rotation threshold, the excess
diff --git a/doc/common/section_keystone-keyring-support.xml b/doc/common/section_keystone-keyring-support.xml
index 134e9ae765..dcdfb6f989 100644
--- a/doc/common/section_keystone-keyring-support.xml
+++ b/doc/common/section_keystone-keyring-support.xml
@@ -11,7 +11,7 @@
 Keyring is used only if --os-use-keyring is specified or if the environment variable
-OS_USE_KEYRING=true is defined.
+ is defined.
 A user specifies their username and password credentials to interact with OpenStack, using any client command. These credentials can be specified
@@ -19,7 +19,7 @@
 It is not safe to specify the password using either of these methods. For example, when you specify your password using the command-line client with the --os-password argument, anyone with access
-to your computer can view it in plain text with the ps
+to your computer can view it in plain text with the ps
 field. To avoid storing the password in plain text, you can prompt for the OpenStack password interactively.
 Then, the keyring can store the password
diff --git a/doc/common/section_keystone_config_ldap-hardening.xml b/doc/common/section_keystone_config_ldap-hardening.xml
index bddeea83a1..a4dcfc2f5d 100644
--- a/doc/common/section_keystone_config_ldap-hardening.xml
+++ b/doc/common/section_keystone_config_ldap-hardening.xml
@@ -51,21 +51,21 @@
 allow, or never:
-demand: a
+demand: a
 certificate will always be requested from the LDAP server. The session will be terminated if no certificate is provided, or if the certificate provided cannot be verified against the existing certificate authorities file.
-allow: a
+allow: a
 certificate will always be requested from the LDAP server. The session will proceed as normal even if a certificate is not provided. If a certificate is provided but it cannot be verified against the existing certificate authorities file, the certificate will be ignored and the session will proceed as normal.
-never: a
+never: a
 certificate will never be requested.
diff --git a/doc/config-reference/block-storage/drivers/dell-equallogic-driver.xml b/doc/config-reference/block-storage/drivers/dell-equallogic-driver.xml
index 1db17ed7a8..cd70ec22ab 100644
--- a/doc/config-reference/block-storage/drivers/dell-equallogic-driver.xml
+++ b/doc/config-reference/block-storage/drivers/dell-equallogic-driver.xml
@@ -73,14 +73,14 @@ ssh_max_pool_conn = 5
 SAN_UNAME
 The user name to login to the Group manager via SSH at
-the san_ip. Default user name is grpadmin.
+the san_ip. Default user name is grpadmin.
 SAN_PW
 The corresponding password of SAN_UNAME.
-Not used when san_private_key is set. Default
+Not used when san_private_key is set. Default
 password is password.
@@ -104,7 +104,7 @@ ssh_max_pool_conn = 5
 EQLX_UNAME
 The CHAP login account for each
-volume in a pool, if eqlx_use_chap is set
+volume in a pool, if eqlx_use_chap is set
 to true. Default account name is chapadmin.
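Pulling the EqualLogic options above together, a cinder.conf back-end sketch could look like the following; the driver path, address, and credential values are placeholder assumptions, not taken from the patch:

```
volume_driver = cinder.volume.drivers.eqlx.DellEQLSanISCSIDriver
san_ip = 10.10.10.10
san_login = grpadmin
san_password = password
eqlx_use_chap = true
eqlx_chap_login = chapadmin
```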
@@ -121,7 +121,7 @@ ssh_max_pool_conn = 5
 The filename of the private key used for SSH authentication. This provides password-less login to the
-EqualLogic Group. Not used when san_password
+EqualLogic Group. Not used when san_password
 is set. There is no default value.
diff --git a/doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml b/doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml
index 5536d5346a..f6a43a1134 100644
--- a/doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml
+++ b/doc/config-reference/block-storage/drivers/hp-lefthand-driver.xml
@@ -158,43 +158,43 @@ set for a volume type, the following defaults are used:
 hplh:provisioning
-Defaults to thin provisioning, the valid values are,
-thin
+Defaults to thin provisioning, the valid values are,
+thin
 and
-full
+full
 hplh:ao
-Defaults to true, the valid values are,
-true
+Defaults to true, the valid values are,
+true
 and
-false.
+false.
 hplh:data_pl
 Defaults to
-r-0,
+r-0,
 Network RAID-0 (None), the valid values are,
-r-0,
+r-0,
 Network RAID-0 (None)
-r-5,
+r-5,
 Network RAID-5 (Single Parity)
-r-10-2,
+r-10-2,
 Network RAID-10 (2-Way Mirror)
-r-10-3,
+r-10-3,
 Network RAID-10 (3-Way Mirror)
-r-10-4,
+r-10-4,
 Network RAID-10 (4-Way Mirror)
-r-6,
+r-6,
 Network RAID-6 (Dual Parity),
@@ -413,9 +413,9 @@ san_is_local=False
 Add server associations on the VSA with the associated CHAPS and initiator information. The name should correspond to the
-hostname
+hostname
 of the
-nova-compute
+nova-compute
 node. For Xen, this is the hypervisor host name. To do this, use either CLIQ or the Centralized Management Console.
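Assuming a volume type already exists, the hplh: extra specs listed above could be set with the cinder client, for example (the volume type name lh-perf is hypothetical):

```
$ cinder type-key lh-perf set hplh:provisioning=full hplh:ao=true hplh:data_pl=r-10-2
```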
diff --git a/doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml b/doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml
index 3f12371e83..b7e0430d37 100644
--- a/doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml
+++ b/doc/config-reference/block-storage/drivers/vmware-vmdk-driver.xml
@@ -198,7 +198,7 @@
 Optionally, for the
-vmware_host_version
+
 parameter, enter the version number of your vSphere platform. For example, 5.5.
diff --git a/doc/config-reference/compute/section_compute-scheduler.xml b/doc/config-reference/compute/section_compute-scheduler.xml
index 135caa1bc6..1dce270b7a 100644
--- a/doc/config-reference/compute/section_compute-scheduler.xml
+++ b/doc/config-reference/compute/section_compute-scheduler.xml
@@ -27,7 +27,7 @@ scheduler_driver_task_period = 60
 scheduler_driver = nova.scheduler.filter_scheduler.FilterScheduler
 scheduler_available_filters = nova.scheduler.filters.all_filters
 scheduler_default_filters = RetryFilter, AvailabilityZoneFilter, RamFilter, ComputeFilter, ComputeCapabilitiesFilter, ImagePropertiesFilter, ServerGroupAntiAffinityFilter, ServerGroupAffinityFilter
-By default, the scheduler_driver is
+By default, the is
 configured as a filter scheduler, as described in the next section. In the default configuration, this scheduler considers hosts that meet all the following criteria:
diff --git a/doc/config-reference/compute/section_hypervisor_vmware.xml b/doc/config-reference/compute/section_hypervisor_vmware.xml
index 0cc921152d..b34f64afa9 100644
--- a/doc/config-reference/compute/section_hypervisor_vmware.xml
+++ b/doc/config-reference/compute/section_hypervisor_vmware.xml
@@ -747,7 +747,7 @@ trusty-server-cloudimg-amd64-disk1.vmdk
 IDE controller. Therefore, as the previous examples show, it is important to set the property correctly.
 The default adapter type is lsiLogic, which
-is SCSI, so you can omit the vmware_adaptertype
+is SCSI, so you can omit the
 property if you are certain that the image adapter type is lsiLogic.
@@ -855,7 +855,7 @@ trusty-server-cloudimg-amd64-disk1.vmdk
 If multiple compute nodes are running on the same host, or have a shared file system, you can enable them to use the same cache folder on the back-end data store. To configure
-this action, set the cache_prefix option
+this action, set the option
 in the nova.conf file. Its value stands for the name prefix of the folder where cached images are stored. This can take effect only if compute nodes are running
@@ -865,15 +865,15 @@ trusty-server-cloudimg-amd64-disk1.vmdk
 options in the DEFAULT section in the nova.conf file:
-remove_unused_base_images
+
-Set this parameter to True to
+Set this option to True to
 specify that unused images should be removed after the
-duration specified in the remove_unused_original_minimum_age_seconds parameter.
+duration specified in the option.
 The default is True.
-remove_unused_original_minimum_age_seconds
+
 Specifies the duration in seconds after which an unused image is purged from the cache. The default is 86400 (24 hours).
diff --git a/doc/config-reference/image-service/section_image-service-backend-vmware.xml b/doc/config-reference/image-service/section_image-service-backend-vmware.xml
index dfd0bd1417..8258800738 100644
--- a/doc/config-reference/image-service/section_image-service-backend-vmware.xml
+++ b/doc/config-reference/image-service/section_image-service-backend-vmware.xml
@@ -31,7 +31,7 @@
 back end, use the SPBM feature. In the glance_store section, set the
-default_store parameter to
+ parameter to
 vsphere, as shown in this code sample:
 [glance_store]
@@ -90,7 +90,7 @@ vmware_api_insecure = False
 Configure vCenter data stores for the back end
 You can specify a vCenter data store for the back end by
-setting the vmware_datastore_name
+setting the
 parameter value to the vCenter name of the data store. This configuration limits the back end to a single data store.
@@ -104,13 +104,13 @@ vmware_api_insecure = False
 To configure a single data store
 If present, comment or delete the
-vmware_pbm_wsdl_location
-and vmware_pbm_policy
+
+and
 parameters.
 Uncomment and define the
-vmware_datastore_name
+
 parameter with the name of the vCenter data store.
@@ -157,19 +157,19 @@ vmware_api_insecure = False
 Comment or delete the
-vmware_datastore_name
+
 parameter.
 Uncomment and define the
-vmware_pbm_policy
+
 parameter by entering the same value as the tag you defined and applied to the data stores in vCenter.
 Uncomment and define the
-vmware_pbm_wsdl_location
+
 parameter by entering the location of the PBM service WSDL file. For example, file:///opt/SDK/spbm/wsdl/pbmService.wsdl.
diff --git a/doc/config-reference/object-storage/section_object-storage-features.xml b/doc/config-reference/object-storage/section_object-storage-features.xml
index dd6e56a86e..0b83bd09e4 100644
--- a/doc/config-reference/object-storage/section_object-storage-features.xml
+++ b/doc/config-reference/object-storage/section_object-storage-features.xml
@@ -503,16 +503,16 @@ Sample represents 1.00% of the object partition space
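Collecting the options named in the vCenter data store steps above, a glance-api.conf sketch for storage-policy-based selection might look like the following; the policy name is a placeholder, and the WSDL path is the example given in the patch:

```
[glance_store]
default_store = vsphere
vmware_pbm_policy = gold
vmware_pbm_wsdl_location = file:///opt/SDK/spbm/wsdl/pbmService.wsdl
```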
Account quotas
- The x-account-meta-quota-bytes
+ The x-account-meta-quota-bytes
metadata entry blocks write requests (PUT, POST) if a given
account quota (in bytes) is exceeded, while DELETE requests are
still allowed.
- The x-account-meta-quota-bytes
+ The x-account-meta-quota-bytes
metadata entry must be set to store and enable the quota. Write
requests to this metadata entry are only permitted for
resellers. There is no account quota limitation on a reseller
account even if
- x-account-meta-quota-bytes is set.
+ x-account-meta-quota-bytes is set.
Any object PUT operations that exceed the quota
return a 413 response (request entity too large) with a descriptive
diff --git a/doc/networking-guide/section_networking_adv_agent.xml b/doc/networking-guide/section_networking_adv_agent.xml
index 2f92324907..7721aa052c 100644
--- a/doc/networking-guide/section_networking_adv_agent.xml
+++ b/doc/networking-guide/section_networking_adv_agent.xml
@@ -14,7 +14,7 @@
The neutron configuration file contains the common neutron configuration
options. The plug-in configuration file contains the plug-in specific options.
The plug-in that runs on the service is loaded through the
- core_plugin configuration option. In some cases, a plug-in
+ core_plugin configuration option. In some cases, a plug-in
might have an agent that performs the actual networking.
Most plug-ins require an SQL database. After you install and start the
database server, set a password for the root account and delete the anonymous
accounts:
diff --git a/doc/networking-guide/section_networking_adv_features.xml b/doc/networking-guide/section_networking_adv_features.xml
index 5f27dd2292..01dc1a2003 100644
--- a/doc/networking-guide/section_networking_adv_features.xml
+++ b/doc/networking-guide/section_networking_adv_features.xml
@@ -1461,7 +1461,7 @@
configuration option to a non-zero value exclusively on a node
designated for back-end status synchronization.
- The fields=status parameter in Networking API requests + The fields=status parameter in Networking API requests always triggers an explicit query to the NSX back end, even when you enable asynchronous state synchronization. For example, GET /v2.0/networks/NET_ID?fields=status&fields=name. diff --git a/doc/user-guide/section_object-api-archive-auto-extract.xml b/doc/user-guide/section_object-api-archive-auto-extract.xml index c7aef9ff1e..c139ef174a 100644 --- a/doc/user-guide/section_object-api-archive-auto-extract.xml +++ b/doc/user-guide/section_object-api-archive-auto-extract.xml @@ -16,7 +16,7 @@
Auto-extract archive request To upload an archive file, make a &PUT; request. Add the - extract-archive=format + extract-archive=format query parameter to indicate that you are uploading a tar archive file instead of normal content. Valid values for the format @@ -70,7 +70,7 @@ The POSIX.1-2001 pax format. Use gzip or bzip2 to compress the archive. - Use the extract-archive + Use the extract-archive query parameter to specify the format. Valid values for this parameter are tar, diff --git a/doc/user-guide/section_object-api-bulk-delete.xml b/doc/user-guide/section_object-api-bulk-delete.xml index 9f0b481bbe..2bf44be64a 100644 --- a/doc/user-guide/section_object-api-bulk-delete.xml +++ b/doc/user-guide/section_object-api-bulk-delete.xml @@ -14,7 +14,7 @@
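The auto-extract upload described above can be sketched with a small gzip-compressed tar archive; the directory layout and container name are hypothetical, and the curl line is shown as a comment because it needs a live Object Storage endpoint with $publicURL and $token set:

```shell
# Build a small tar.gz archive; the member paths become object names
# under the target container after auto-extraction.
mkdir -p /tmp/demo/backups
echo "hello" > /tmp/demo/backups/a.txt
tar -C /tmp/demo -czf /tmp/demo.tar.gz backups
# List the archive members that would become objects.
tar -tzf /tmp/demo.tar.gz

# Upload and auto-extract (hypothetical container name):
# curl -i "$publicURL/container?extract-archive=tar.gz" -X PUT \
#      -H "X-Auth-Token: $token" --data-binary @/tmp/demo.tar.gz
```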
Bulk delete request
To perform a bulk delete operation, add the
- bulk-delete query parameter to
+ bulk-delete query parameter to
the path of a &POST; or &DELETE;
operation.
The &DELETE; operation is supported for backwards
diff --git a/doc/user-guide/section_object-api-create-large-objects.xml b/doc/user-guide/section_object-api-create-large-objects.xml
index e1e00a927c..6da13275bb 100644
--- a/doc/user-guide/section_object-api-create-large-objects.xml
+++ b/doc/user-guide/section_object-api-create-large-objects.xml
@@ -95,7 +95,7 @@
List the name of each segment object along with its size and MD5
checksum in order.
Create a manifest object. Include the
- ?multipart-manifest=put query
+ ?multipart-manifest=put query
string at the end of the manifest object
name to indicate that this is a manifest
object. The body of the &PUT; request on the manifest object
@@ -150,7 +150,7 @@
You can also set the Content-Type
request header and custom object metadata.
When the &PUT; operation sees the
- ?multipart-manifest=put query
+ ?multipart-manifest=put query
parameter, it reads the request body and verifies that
each segment object exists and that the sizes and ETags
match. If there is a mismatch, the &PUT; operation
@@ -163,19 +163,19 @@
manifest object, the response body contains the
concatenated content of the segment objects.
To download the manifest list, use the
- ?multipart-manifest=get query
+ ?multipart-manifest=get query
parameter. The resulting list is not formatted the same
as the manifest you originally used in the &PUT;
operation.
If you use the &DELETE; operation on a manifest
object, the manifest object is deleted. The segment
objects are not affected. However, if you add the
- ?multipart-manifest=delete
+ ?multipart-manifest=delete
query parameter, the segment objects are deleted and if
all are successfully deleted, the manifest object is also
deleted.
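The bulk delete request described at the start of this hunk takes a newline-separated, plain-text list of URL-encoded container and object names in the request body. A minimal sketch, with hypothetical names; the curl line is commented because it needs a live endpoint:

```shell
# Each line names one object (container/object) or one empty container.
body=$(printf '%s\n' 'mycontainer/photo1.jpg' 'mycontainer/photo2.jpg' 'mycontainer')
printf '%s\n' "$body"

# Send the list as the body of a POST with the bulk-delete query parameter:
# curl -i "$publicURL?bulk-delete" -X POST -H "X-Auth-Token: $token" \
#      -H "Content-Type: text/plain" --data-binary "$body"
```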
To change the manifest, use a &PUT; operation with the - ?multipart-manifest=put query + ?multipart-manifest=put query parameter. This request creates a manifest object. You can also update the object metadata in the usual way.
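The manifest body that the ?multipart-manifest=put request expects is a JSON list naming each segment, with its size and ETag (MD5 checksum), in order. A sketch; the segment paths, checksums, and sizes below are placeholders, not real values:

```shell
# Write a static-large-object manifest: one entry per segment, in order.
# "path", "etag", and "size_bytes" identify each segment object;
# the values here are placeholders.
cat > /tmp/manifest.json <<'EOF'
[
    {"path": "segments/part-001", "etag": "0f343b0931126a20f133d67c2b018a3b", "size_bytes": 1048576},
    {"path": "segments/part-002", "etag": "6d0bb00954ceb7fbee436bb55a8397a9", "size_bytes": 1048576}
]
EOF

# Upload it as the manifest object (requires a live endpoint):
# curl -i "$publicURL/container/bigfile?multipart-manifest=put" -X PUT \
#      -H "X-Auth-Token: $token" --data-binary @/tmp/manifest.json
```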
@@ -356,7 +356,7 @@
Copying the manifest object
Include the
- ?multipart-manifest=get
+ ?multipart-manifest=get
query string in the &COPY; request.
The new object contains the same manifest
as the original. The segment objects are not
diff --git a/doc/user-guide/section_object-api-large-lists.xml b/doc/user-guide/section_object-api-large-lists.xml
index 349f884988..448b79d9bd 100644
--- a/doc/user-guide/section_object-api-large-lists.xml
+++ b/doc/user-guide/section_object-api-large-lists.xml
@@ -6,50 +6,50 @@
xml:id="large-lists">
Page through large lists of containers or objects
If you have a large number of containers or objects, you can
- use the marker,
- limit, and
- end_marker parameters to control
+ use the marker,
+ limit, and
+ end_marker parameters to control
how many items are returned in a list and where the list
starts or ends.
- marker
+ marker
When you request a list of containers or
objects, Object Storage returns a maximum of
10,000 names for each request. To get
subsequent names, you must make another
request with the
- marker parameter. Set
+ marker parameter. Set
the marker parameter to the
name of the last item returned in the
previous list. You must URL-encode the
- marker value before you
+ marker value before you
send the HTTP request. Object Storage returns
a maximum of 10,000 names starting after the
last item returned.
- limit
+ limit
To return fewer than 10,000 names, use the
- limit parameter. If the
+ limit parameter. If the
number of names returned equals the specified
- limit (or 10,000 if you
- omit the limit parameter),
+ limit (or 10,000 if you
+ omit the limit parameter),
you can assume there are more names to list.
If the number of names in the list is exactly
- divisible by the limit
+ divisible by the limit
value, the last request has no
content.
- end_marker
+ end_marker
Limits the result set to names that are less
- than the end_marker
+ than the end_marker
parameter value.
You must URL-encode the - end_marker value before + end_marker value before you send the HTTP request. @@ -63,7 +63,7 @@ kiwis oranges pears - Use a limit of two: + Use a limit of two: # curl -i $publicURL/?limit=2 -X GET -H "X-Auth-Token: $token" apples bananas @@ -72,7 +72,7 @@ bananas Make another request with a - marker parameter set to the + marker parameter set to the name of the last item returned: # curl -i $publicURL/?limit=2&amp;marker=bananas -X GET -H "X-Auth-Token: $token" kiwis @@ -82,25 +82,25 @@ oranges Make another request with a - marker of the last item + marker of the last item returned: # curl -i $publicURL/?limit=2&amp;marker=oranges -X GET -H "X-Auth-Token: $token" pears You receive a one-item response, which is fewer than - the limit number of names. This + the limit number of names. This indicates that this is the end of the list. - Use the end_marker parameter + Use the end_marker parameter to limit the result set to object names that are less - than the end_marker parameter + than the end_marker parameter value: # curl -i $publicURL/?end_marker=oranges -X GET -H "X-Auth-Token: $token" apples bananas kiwis You receive a result set of all container names - before the end-marker + before the end-marker value. diff --git a/doc/user-guide/section_object-api-response-formats.xml b/doc/user-guide/section_object-api-response-formats.xml index 187c25ac6f..b095f999ed 100644 --- a/doc/user-guide/section_object-api-response-formats.xml +++ b/doc/user-guide/section_object-api-response-formats.xml @@ -75,7 +75,7 @@ JSON example with format query parameter For example, this request uses the - format query parameter to ask + format query parameter to ask for a JSON response: $ curl -i $publicURL?format=json -X GET -H "X-Auth-Token: $token" HTTP/1.1 200 OK @@ -138,7 +138,7 @@ Date: Wed, 22 Jan 2014 21:12:00 GMT The remainder of the examples in this guide use standard, non-serialized responses. 
However, all &GET; requests that perform list operations accept the - format query parameter or + format query parameter or Accept request header.
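The large-list hunks earlier note that marker and end_marker values must be URL-encoded before they go into the query string. One way to do that from a shell, assuming python3 is available; the marker value is illustrative:

```shell
# URL-encode a marker value that contains reserved characters.
marker='apples & pears'
encoded=$(python3 -c 'import sys, urllib.parse; print(urllib.parse.quote(sys.argv[1], safe=""))' "$marker")
echo "$encoded"   # → apples%20%26%20pears

# The encoded value can then be used in a paging request such as:
# curl -i "$publicURL/?limit=2&marker=$encoded" -X GET -H "X-Auth-Token: $token"
```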