From 4c4c74acc415c82760e651bdefb2d82226b92eff Mon Sep 17 00:00:00 2001 From: Andreas Jaeger Date: Mon, 24 Feb 2014 21:19:29 +0100 Subject: [PATCH] Change command line to command-line Use "command-line" if it's used as an adjective, for example "command-line tool". Improve capitalization and tagging. Change-Id: I43c73ff295d3b6d0ad43c3c20566ce7bf3742c00 --- doc/admin-guide-cloud/ch_compute.xml | 26 ++++----- ...on_networking_adv_operational_features.xml | 4 +- doc/common/section_dashboard_sessions.xml | 7 ++- .../compute/section_hypervisor_hyper-v.xml | 27 +++++---- .../section_object-storage-features.xml | 55 +++++++++++-------- ...on_object-storage-general-service-conf.xml | 22 +++++--- doc/glossary/glossary-terms.xml | 4 +- ...est-practices-for-operator-mode-access.xml | 51 +++++++++++++---- doc/security-guide/ch026_compute.xml | 10 +++- ...001-ch005-vm-provisioning-walk-through.xml | 37 +++++++------ 10 files changed, 149 insertions(+), 94 deletions(-) diff --git a/doc/admin-guide-cloud/ch_compute.xml b/doc/admin-guide-cloud/ch_compute.xml index e677de6f37..64af3415a0 100644 --- a/doc/admin-guide-cloud/ch_compute.xml +++ b/doc/admin-guide-cloud/ch_compute.xml @@ -382,25 +382,25 @@ xlink:href="http://docs.openstack.org/developer/python-glanceclient/" > Python API. The OpenStack Image service can be controlled using a - command line tool. For more information about the - OpenStack Image command line tool, see the Image Management section in the OpenStack User Guide. Virtual images that have been made available through the Image service can be stored in a variety of ways. In order to use these services, you must have a working - installation of the Image service, with a working + installation of the Image Service, with a working endpoint, and users that have been created in the Identity - service. Additionally, you must meet the environment + Service. Additionally, you must meet the environment variables required by the Compute and Image clients. 
- The Image service supports these back end stores: + The Image Service supports these back-end stores: File system - The OpenStack Image service stores virtual + The OpenStack Image Service stores virtual machine images in the file system back-end by default. This simple back end writes image files to the local file system. @@ -446,14 +446,14 @@
Instance management tools - OpenStack provides command line, web-based, and + OpenStack provides command-line, web-based, and API-based instance management tools. Additionally, a - number of third party management tools are available, + number of third-party management tools are available, using either the native API or the provided EC2-compatible API. The OpenStack python-novaclient package - provides a basic command line utility, which uses the + provides a basic command-line utility, which uses the nova command. This is available as a native package for most Linux distributions, or you can install the latest version using the @@ -463,7 +463,7 @@ For more information about python-novaclient and other - available command line tools, see the OpenStack End User Guide. @@ -493,7 +493,7 @@ header: Date: Thu, 13 Sep 2012 20:27:36 GMT Reference lists configuration options for customizing this compatibility API on your OpenStack cloud. - Numerous third party tools and language-specific SDKs + Numerous third-party tools and language-specific SDKs can be used to interact with OpenStack clouds, using both native and compatibility APIs. Some of the more popular third-party tools are: @@ -501,7 +501,7 @@ header: Date: Thu, 13 Sep 2012 20:27:36 GMT Euca2ools - A popular open source command line tool for + A popular open source command-line tool for interacting with the EC2 API. This is convenient for multi-cloud environments where EC2 is the common API, or for transitioning @@ -1948,7 +1948,7 @@ net.bridge.bridge-nf-call-ip6tables=0 Use the euca2ools commands For a command-line interface to EC2 API calls, - use the euca2ools command line tool. 
See http://open.eucalyptus.com/wiki/Euca2oolsGuide_v1.3 diff --git a/doc/admin-guide-cloud/section_networking_adv_operational_features.xml b/doc/admin-guide-cloud/section_networking_adv_operational_features.xml index 6053d0bfd0..bee0df17bb 100644 --- a/doc/admin-guide-cloud/section_networking_adv_operational_features.xml +++ b/doc/admin-guide-cloud/section_networking_adv_operational_features.xml @@ -8,7 +8,7 @@ Logging settings Networking components use Python logging module to do logging. Logging configuration can be provided in - neutron.conf or as command line + neutron.conf or as command-line options. Command options override ones in neutron.conf. To configure logging for Networking components, use one @@ -77,7 +77,7 @@ notification_driver = neutron.openstack.common.notifier.rpc_notifier # host = myhost.com # default_publisher_id = $host -# Defined in rpc_notifier for rpc way, can be comma separated values. +# Defined in rpc_notifier for rpc way, can be comma-separated values. # The actual topic names will be %s.%(default_notification_level)s notification_topics = notifications
diff --git a/doc/common/section_dashboard_sessions.xml b/doc/common/section_dashboard_sessions.xml index e1f501fa02..aed536ec0e 100644 --- a/doc/common/section_dashboard_sessions.xml +++ b/doc/common/section_dashboard_sessions.xml @@ -106,7 +106,7 @@ CACHES = { the scope of this documentation.
- Start the mysql command line client: + Start the mysql command-line client: $ mysql -u root -p @@ -196,7 +196,8 @@ No fixtures found.
Cached database To mitigate the performance issues of database queries, - you can use the Django cached_db session back end, which + you can use the Django cached_db session + back end, which utilizes both your database and caching infrastructure to perform write-through caching and efficient retrieval. Enable this hybrid setting by configuring both your @@ -206,7 +207,7 @@ No fixtures found.
Cookies - If you use Django 1.4 or later, the signed_cookies + If you use Django 1.4 or later, the signed_cookies back end avoids server load and scaling problems. This back end stores session data in a cookie, which is stored by the user’s browser. The back end uses a diff --git a/doc/config-reference/compute/section_hypervisor_hyper-v.xml b/doc/config-reference/compute/section_hypervisor_hyper-v.xml index 7a56db8781..87306d0aba 100644 --- a/doc/config-reference/compute/section_hypervisor_hyper-v.xml +++ b/doc/config-reference/compute/section_hypervisor_hyper-v.xml @@ -83,16 +83,18 @@ The following outlines the steps of shared nothing live migration. - The target hosts ensures that live migration is enabled and properly - configured in Hyper-V. + The target host ensures that live migration is + enabled and properly configured in Hyper-V. - The target hosts checks if the image to be migrated requires a base VHD and - pulls it from Glance if not already available on the target host. + The target host checks if the image to be + migrated requires a base VHD and pulls it from the + Image Service if not already available on the target + host. - The source hosts ensures that live migration is enabled and properly - configured in Hyper-V. + The source host ensures that live migration is + enabled and properly configured in Hyper-V. The source hosts initiates a Hyper-V live migration. @@ -132,18 +134,21 @@ members - The instances_path command line option/flag needs to be the same on all - hosts + The instances_path command-line option/flag needs to be the same on all + hosts. - The openstack-compute service deployed with the setup must run with domain - credentials. You can set the service credentials with: + The openstack-compute service + deployed with the setup must run with domain + credentials. 
You can set the service credentials + with: C:\sc config openstack-compute obj="DOMAIN\username" password="password" How to setup live migration on Hyper-V - To enable shared nothing live migration run the 3 PowerShell instructions below on + To enable 'shared nothing' live migration, run the 3 PowerShell instructions below on each Hyper-V host: PS C:\Enable-VMMigration diff --git a/doc/config-reference/object-storage/section_object-storage-features.xml b/doc/config-reference/object-storage/section_object-storage-features.xml index 2a1826f3ce..d6594db1b1 100644 --- a/doc/config-reference/object-storage/section_object-storage-features.xml +++ b/doc/config-reference/object-storage/section_object-storage-features.xml @@ -163,7 +163,7 @@ Health check Provides an easy way to monitor whether the swift proxy server is alive. If you access the proxy with the path - /healthcheck, it respond + /healthcheck, it responds with OK in the response body, which monitoring tools can use. CNAME lookup Middleware that translates an unknown domain in the host header to something that ends with the configured - storage_domain by looking up the given domain's CNAME + storage_domain by looking up the given domain's CNAME record in DNS. temp_url_expires - An expiration date, in Unix time. + An expiration date, in Unix time @@ -245,7 +245,7 @@ The expiry date as a Unix timestamp - the full path to the object + The full path to the object The secret key set as the @@ -275,20 +275,21 @@ url = s.format(host='swift-cluster.example.com', path=path, sig=sig, expires=exp PUT is allowed. Using this in combination with browser form post translation middleware could also allow direct-from-browser uploads to specific locations in - Swift. Note that + Object Storage. + Changing the X-Account-Meta-Temp-URL-Key invalidates any previously generated temporary URLs within 60 seconds (the memcache time for the - key). 
Object Storage supports up to two keys, specified by X-Account-Meta-Temp-URL-Key and X-Account-Meta-Temp-URL-Key-2. Signatures are checked against both keys, if present. This is to allow for key rotation without invalidating all existing temporary URLs. - - Swift includes a script called + + Object Storage includes a script called swift-temp-url that generates the query parameters automatically: $ bin/swift-temp-url GET 3600 /v1/AUTH_account/container/object mykey @@ -296,7 +297,7 @@ url = s.format(host='swift-cluster.example.com', path=path, sig=sig, expires=exp temp_url_sig=5c4cc8886f36a9d0919d708ade98bf0cc71c9e91& temp_url_expires=1374497657 Because this command only returns the path, you must - prefix the Swift storage host name (for example, + prefix the Object Storage host name (for example, https://swift-cluster.example.com). With GET Temporary URLs, a Content-Disposition header is set @@ -370,7 +371,8 @@ pipeline = pipeline = healthcheck cache tempurl create a new account solely for this usage. Next, you need to place the containers and objects throughout the system so that they are on distinct partitions. The - swift-dispersion-populate tool does this by making up + swift-dispersion-populate tool does this + by making up random container and object names until they fall on distinct partitions. Last, and repeatedly for the life of the cluster, you must run the @@ -445,7 +447,7 @@ Sample represents 1.00% of the object partition space Alternatively, the dispersion report can also be output in json format. This allows it to be more easily consumed - by third party utilities: + by third-party utilities: $ swift-dispersion-report -j {"object": {"retries:": 0, "missing_two": 0, "copies_found": 7863, "missing_one": 0, "copies_expected": 7863, "pct_found": 100.0, "overlapping": 0, "missing_all": 0}, "container": @@ -463,7 +465,7 @@ Sample represents 1.00% of the object partition space objects concurrently and afterwards download them as a single object. 
It is different in that it does not rely on eventually consistent container listings to do so. - Instead, a user defined manifest of the object segments is + Instead, a user-defined manifest of the object segments is used.
Container quotas - The container_quotas middleware implements simple quotas + The container_quotas middleware + implements simple quotas that can be imposed on swift containers by a user with the ability to set container metadata, most likely the account administrator. This can be useful for limiting the scope @@ -509,11 +512,13 @@ Sample represents 1.00% of the object partition space metadata entry must be requests (PUT, POST) if a given account quota (in bytes) is exceeded while DELETE requests are still allowed. - The x-account-meta-quota-bytes metadata entry must be + The x-account-meta-quota-bytes + metadata entry must be set to store and enable the quota. Write requests to this metadata entry are only permitted for resellers. There is no account quota limitation on a reseller account even if - x-account-meta-quota-bytes is set. + x-account-meta-quota-bytes is set. + Any object PUT operations that exceed the quota return a 413 response (request entity too large) with a descriptive body. @@ -536,21 +541,23 @@ X-Trans-Id: tx602634cf478546a39b1be-0051e6bc7a
Bulk delete - Use bulk-delete to delete multiple files from an account + Use bulk-delete to delete multiple files + from an account with a single request. Responds to DELETE requests with a header 'X-Bulk-Delete: true_value'. The body of the DELETE - request is a new line separated list of files to delete. + request is a newline-separated list of files to delete. The files listed must be URL encoded and in the form: /container_name/obj_name If all files are successfully deleted (or did not - exist), the operation returns HTTPOk. If any files failed - to delete, the operation returns HTTPBadGateway. In both - cases the response body is a JSON dictionary that shows - the number of files that were successfully deleted or not - found. The files that failed are listed. + exist), the operation returns HTTPOk. If any + files failed to delete, the operation returns + HTTPBadGateway. In both cases, the response body + is a JSON dictionary that shows the number of files that were + successfully deleted or not found. The files that failed are + listed. @@ -590,7 +597,7 @@ X-Trans-Id: tx602634cf478546a39b1be-0051e6bc7a https://swift-cluster.example.com/v1/AUTH_account/container/object_prefix The name of each file uploaded is appended to the specified swift-url. So, you can upload - directly to the root of container with a url like: + directly to the root of the container with a URL like: https://swift-cluster.example.com/v1/AUTH_account/container/ Optionally, you can include an object prefix to better separate different users’ uploads, such as: @@ -641,7 +648,7 @@ signature = hmac.new(key, hmac_body, sha1).hexdigest() on the account. Be certain to use the full path, from the /v1/ onward. - The command line tool + The command-line tool swift-form-signature may be used (mostly just when testing) to compute expires and signature. 
diff --git a/doc/config-reference/object-storage/section_object-storage-general-service-conf.xml b/doc/config-reference/object-storage/section_object-storage-general-service-conf.xml index aa170dec0d..a124c218bd 100644 --- a/doc/config-reference/object-storage/section_object-storage-general-service-conf.xml +++ b/doc/config-reference/object-storage/section_object-storage-general-service-conf.xml @@ -20,12 +20,16 @@ Configuration for servers and daemons can be expressed together in the same file for each type of server, or separately. If a required section for the service trying to start is missing there will be an - error. The sections not used by the service are ignored. + error. Sections not used by the service are ignored. - Consider the example of an object storage node. By convention - configuration for the object-server, object-updater, - object-replicator, and object-auditor exist in a single file + Consider the example of an Object Storage node. By convention + configuration for the object-server, object-updater, object-replicator, and + object-auditor exist + in a single file /etc/swift/object-server.conf: @@ -53,7 +57,7 @@ reclaim_age = 259200 Error: missing config path argument - If you omit the object-auditor section this file can not be used + If you omit the object-auditor section, this file cannot be used as the configuration path when starting the swift-object-auditor daemon: @@ -65,10 +69,10 @@ Error: missing config path argument the files in the directory with the file extension ".conf" will be combined to generate the configuration object which is delivered to the Object Storage service. This is referred to generally as - "directory based configuration". + "directory-based configuration". - Directory based configuration leverages ConfigParser's native + Directory-based configuration leverages ConfigParser's native multi-file support. Files ending in ".conf" in the given directory are parsed in lexicographical order. File names starting with '.' 
are ignored. A mixture of file and directory configuration @@ -84,7 +88,7 @@ Error: missing config path argument exist. - When using directory based configuration, if the same option under + When using directory-based configuration, if the same option under the same section appears more than once in different files, the last value parsed is said to override previous occurrences. You can ensure proper override precedence by prefixing the files in the @@ -104,6 +108,6 @@ Error: missing config path argument You can inspect the resulting combined configuration object using - the swift-config command line tool. + the swift-config command-line tool.
diff --git a/doc/glossary/glossary-terms.xml b/doc/glossary/glossary-terms.xml index 6da58a69bf..243b95bdcb 100644 --- a/doc/glossary/glossary-terms.xml +++ b/doc/glossary/glossary-terms.xml @@ -1771,7 +1771,7 @@ Each entry in a typical ACL specifies a subject and an operation. For instance, euca2ools - A collection of command line tools for + A collection of command-line tools for administering VMs, most are compatible with OpenStack. @@ -2016,7 +2016,7 @@ Each entry in a typical ACL specifies a subject and an operation. For instance, The point where a user interacts with a service, can be an API endpoint, the horizon dashboard, or - a command line tool. + a command-line tool. diff --git a/doc/security-guide/ch014_best-practices-for-operator-mode-access.xml b/doc/security-guide/ch014_best-practices-for-operator-mode-access.xml index 94339f1acb..8c479746fa 100644 --- a/doc/security-guide/ch014_best-practices-for-operator-mode-access.xml +++ b/doc/security-guide/ch014_best-practices-for-operator-mode-access.xml @@ -1,10 +1,19 @@ - + + Management Interfaces - It is necessary for administrators to perform command and control over the cloud for various operational functions. It is important these command and control facilities are understood and secured. + It is necessary for administrators to perform command and + control over the cloud for various operational functions. It is + important these command and control facilities are understood and + secured. OpenStack provides several management interfaces for operators and tenants: - OpenStack Dashboard (Horizon) + OpenStack dashboard (Horizon) OpenStack API @@ -13,7 +22,9 @@ Secure Shell (SSH) - OpenStack Management Utilities (nova-manage, glance-manage, etc.) + OpenStack Management Utilities (for example, + nova-manage, + glance-manage) Out-of-Band Management Interfaces (IPMI, etc.) @@ -21,7 +32,7 @@
Dashboard - The OpenStack Dashboard (Horizon) provides administrators and tenants a web-based graphical interface to provision and access cloud-based resources. The dashboard communicates with the back-end services via calls to the OpenStack API (discussed above). + The OpenStack dashboard (Horizon) provides administrators and tenants a web-based graphical interface to provision and access cloud-based resources. The dashboard communicates with the back-end services via calls to the OpenStack API (discussed above).
Capabilities @@ -53,7 +64,7 @@ Both the Horizon web service and the OpenStack API it uses to communicate with the back-end are susceptible to web attack vectors such as denial of service and must be monitored. - It is now possible (though there are numerous deployment/security implications) to upload an image file directly from a user’s hard disk to Glance through Horizon. For multi-GB images it is still strongly recommended that the upload be done using the Glance CLI + It is now possible (though there are numerous deployment/security implications) to upload an image file directly from a user’s hard disk to the OpenStack Image Service through the dashboard. For multi-GB images it is still strongly recommended that the upload be done using the Glance CLI Create and manage security groups through dashboard. The security groups allows L3-L4 packet filtering for security policies to protect virtual machines @@ -67,11 +78,20 @@
OpenStack API - The OpenStack API is a RESTful web service endpoint to access, provision and automate cloud-based resources.  Operators and users typically access the API through command-line utilities (i.e. Nova, Glance, etc.), language-specific libraries, or third-party tools. + The OpenStack API is a RESTful web service endpoint to + access, provision and automate cloud-based resources. Operators + and users typically access the API through command-line + utilities (for example, nova or + glance), language-specific libraries, or + third-party tools.
Capabilities - To the cloud administrator the API provides an overall view of the size and state of the cloud deployment and allows the creation of users, tenants/projects, assigning users to tenants/projects and specifying resource quotas on a per tenant/project basis. + To the cloud administrator, the API provides an + overall view of the size and state of the cloud deployment + and allows the creation of users, tenants/projects, + assigning users to tenants/projects, and specifying + resource quotas on a per tenant/project basis. The API provides a tenant interface for provisioning, managing, and accessing their resources. @@ -104,9 +124,11 @@ Management Utilities The OpenStack Management Utilities are open-source Python command-line clients that make API calls. There is a client for - each OpenStack service (nova, glance, etc.). In addition to the + each OpenStack service (for example, nova, glance). In addition to the standard CLI client, most of the services have a management - command line which makes direct calls to the database. These + command-line utility which makes direct calls to the database. These dedicated management utilities are slowly being deprecated.
@@ -121,13 +143,18 @@
References - OpenStack End User Guide section command line clients overview + OpenStack End User Guide section command-line clients overview OpenStack End User Guide section Download and source the OpenStack RC file
Out-of-Band Management Interface - OpenStack management relies on out-of-band management interfaces such as the IPMI protocol to access into nodes running OpenStack components. IPMI is a very popular specification to remotely manage, diagnose and reboot servers whether the operating system is running or the system has crashed. + OpenStack management relies on out-of-band management + interfaces such as the IPMI protocol to access nodes + running OpenStack components. IPMI is a very popular + specification to remotely manage, diagnose, and reboot servers + whether the operating system is running or the system has + crashed. 
Security Considerations diff --git a/doc/security-guide/ch026_compute.xml b/doc/security-guide/ch026_compute.xml index f7e199c8df..e5acf171bd 100644 --- a/doc/security-guide/ch026_compute.xml +++ b/doc/security-guide/ch026_compute.xml @@ -19,7 +19,13 @@ The OpenStack Dashboard (Horizon) can provide a VNC console for instances directly on the web page using the HTML5 noVNC client.  This requires the nova-novncproxy service to bridge from the public network to the management network. - The nova command line utility can return a URL for the VNC console for access by the nova Java VNC client. This requires the nova-xvpvncproxy service to bridge from the public network to the management network. + The nova command-line utility can + return a URL for the VNC console for access by the + nova Java VNC + client. This requires the nova-xvpvncproxy service to + bridge from the public network to the management + network.
@@ -46,7 +52,7 @@ SPICE is supported by the OpenStack Dashboard (Horizon) directly on the instance web page.  This requires the nova-spicehtml5proxy service. - The nova command line utility can return a URL for SPICE console for access by a SPICE-html client. + The nova command-line utility can return a URL for SPICE console for access by a SPICE-html client.
diff --git a/doc/training-guides/module001-ch005-vm-provisioning-walk-through.xml b/doc/training-guides/module001-ch005-vm-provisioning-walk-through.xml index 9a990f1484..3949965b71 100644 --- a/doc/training-guides/module001-ch005-vm-provisioning-walk-through.xml +++ b/doc/training-guides/module001-ch005-vm-provisioning-walk-through.xml @@ -77,7 +77,8 @@ Roles control the actions that a user is allowed to perform. In the default configuration, most actions do not require a particular role, but this is configurable by the system - administrator editing the appropriate policy.json file that + administrator editing the appropriate policy.json + file that maintains the rules. For example, a rule can be defined so that a user cannot allocate a public IP without the admin role. A user's access to particular images is limited by tenant, but the @@ -127,16 +128,16 @@ typical virtual system within the cloud. There are many ways to configure the details of an OpenStack cloud and many ways to implement a virtual system within that cloud. These - configuration details as well as the specific command line + configuration details as well as the specific command-line utilities and API calls to perform the actions described are - presented in the Image Managementand Volume - Managementchapters. + presented in the Image Management and Volume + Management chapters. Images are disk images which are templates for virtual - machine file systems. The image service, Glance, is responsible + machine file systems. The OpenStack Image Service is responsible for the storage and management of images within OpenStack. Instances are the individual virtual machines running on - physical compute nodes. The compute service, Nova, manages + physical compute nodes. The OpenStack Compute Service manages instances. Any number of instances maybe started from the same image. 
Each instance is run from a copy of the base image so runtime changes made by an instance do not change the image it @@ -161,11 +162,13 @@ Initial State Images and Instances The following diagram shows the system state prior to - launching an instance. The image store fronted by the image - service, Glance, has some number of predefined images. In the - cloud there is an available compute node with available vCPU, + launching an instance. The image store fronted by the Image + Service has some number of predefined images. In the + cloud, there is an available Compute node with available vCPU, memory and local disk resources. Plus there are a number of - predefined volumes in the cinder-volume service. + predefined volumes in the + cinder-volume service. + Figure 2.1. Base image state with no running instances
@@ -177,12 +180,13 @@
Launching an instance - To launch an instance the user selects an image, a flavor - and optionally other attributes. In this case the selected + To launch an instance, the user selects an image, a flavor, + and other optional attributes. In this case, the selected flavor provides a root volume (as all flavors do) labeled vda in the diagram and additional ephemeral storage labeled vdb in the diagram. The user has also opted to map a volume from the - cinder-volume store to the third virtual disk, vdc, on this + cinder-volume + store to the third virtual disk, vdc, on this instance. Figure 2.2. Instance creation from image and run time state @@ -202,7 +206,8 @@ present as the second disk (vdb). Be aware that the second disk is an empty disk with an emphemeral life as it is destroyed when you delete the instance. The compute node attaches to the - requested cinder-volume using iSCSI and maps this to the third + requested cinder-volume + using iSCSI and maps this to the third disk (vdc) as requested. The vCPU and memory resources are provisioned and the instance is booted from the first drive. The instance runs and changes data on the disks indicated in red in @@ -232,8 +237,8 @@ Once you launch a VM in OpenStack, there's something more going on in the background. To understand what's happening - behind the Dashboard, lets take a deeper dive into OpenStack’s + behind the dashboard, let's take a deeper dive into OpenStack’s VM provisioning. For launching a VM, you can either use - Command Line Interface or the OpenStack Horizon Dashboard. + the command-line interfaces or the OpenStack dashboard. 