diff --git a/doc/admin-guide-cloud/compute/section_compute-system-admin.xml b/doc/admin-guide-cloud/compute/section_compute-system-admin.xml
index 3b3ad9ddec..55efb17915 100644
--- a/doc/admin-guide-cloud/compute/section_compute-system-admin.xml
+++ b/doc/admin-guide-cloud/compute/section_compute-system-admin.xml
@@ -81,7 +81,7 @@
 Manage Compute users
 Access to the Euca2ools (ec2) API is controlled by an access and secret key. The
- user’s access key needs to be included in the request, and the request must be signed
+ user's access key needs to be included in the request, and the request must be signed
 with the secret key. Upon receipt of API requests, Compute verifies the signature and
 runs commands on behalf of the user. To begin using Compute, you must create a user
 with the Identity Service.
diff --git a/doc/admin-guide-cloud/networking/section_networking-scenarios.xml b/doc/admin-guide-cloud/networking/section_networking-scenarios.xml
index 05f3ffe3b9..43072da881 100644
--- a/doc/admin-guide-cloud/networking/section_networking-scenarios.xml
+++ b/doc/admin-guide-cloud/networking/section_networking-scenarios.xml
@@ -700,7 +700,7 @@ physical_interface_mappings = physnet2:eth1
 scale out on large overlay networks. This traffic is sent to the relevant
 agent via encapsulation as a targeted unicast. Current Open vSwitch and Linux Bridge
- tunneling implementations broadcast to every agent, even if they don’t host the
+ tunneling implementations broadcast to every agent, even if they don't host the
 corresponding network as illustrated below.
diff --git a/doc/admin-guide-cloud/section_object-storage-monitoring.xml b/doc/admin-guide-cloud/section_object-storage-monitoring.xml
index ac73952869..f9c96969c1 100644
--- a/doc/admin-guide-cloud/section_object-storage-monitoring.xml
+++ b/doc/admin-guide-cloud/section_object-storage-monitoring.xml
@@ -124,7 +124,7 @@
 Statsdlog
- Florian’s
+ Florian's
 Statsdlog project increments StatsD counters
 based on logged events. Like Swift-Informant, it is also
diff --git a/doc/admin-guide-cloud/telemetry/section_telemetry-data-collection.xml b/doc/admin-guide-cloud/telemetry/section_telemetry-data-collection.xml
index 08094af183..bcaa72d2fe 100644
--- a/doc/admin-guide-cloud/telemetry/section_telemetry-data-collection.xml
+++ b/doc/admin-guide-cloud/telemetry/section_telemetry-data-collection.xml
@@ -608,7 +608,7 @@ sinks:
 from the sample values of the cpu counter, which represents cumulative CPU time
 in nanoseconds. The transformer definition above defines a scale factor (for
 nanoseconds, multiple CPUs, etc.), which is applied before the
- transformation derives a sequence of gauge samples with unit ‘%’, from sequential
+ transformation derives a sequence of gauge samples with unit '%', from sequential
 values of the cpu meter.
 The definition for the disk I/O rate, which is also generated by the rate of change
 transformer:
@@ -628,7 +628,7 @@ sinks:
 Unit conversion transformer
 Transformer to apply a unit conversion. It takes the volume of the meter and
- multiplies it with the given ‘scale’ expression. Also supports map_from
+ multiplies it with the given 'scale' expression. Also supports map_from
 and map_to like the rate of change transformer.
 Sample configuration:
 transformers:
@@ -664,7 +664,7 @@ sinks:
 , user_id and resource_metadata. To aggregate by the chosen attributes,
 specify them in the configuration and set which value of the attribute to take
 for the new sample (first to take the first
- sample’s attribute, last to take the last sample’s attribute, and drop to discard
+ sample's attribute, last to take the last sample's attribute, and drop to discard
 the attribute).
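The rate-of-change transformer described in this hunk turns cumulative CPU time in nanoseconds into a gauge with unit '%'. A minimal sketch of that arithmetic (an illustration, not Ceilometer's implementation; the sample tuples and values are hypothetical):

```python
# Sketch of the rate-of-change derivation: cumulative CPU nanoseconds become
# a percentage gauge, with a scale factor covering nanoseconds and CPU count.

def cpu_util(prev, curr, n_cpus):
    """prev/curr are hypothetical (timestamp_seconds, cumulative_cpu_ns) pairs."""
    delta_cpu_ns = curr[1] - prev[1]
    delta_wall_ns = (curr[0] - prev[0]) * 10**9
    # 100 / (wall-clock ns * n_cpus) is the scale that yields a percentage
    return 100.0 * delta_cpu_ns / (delta_wall_ns * n_cpus)

# e.g. 1.5e9 ns of CPU time consumed over 10 s of wall time on 2 vCPUs:
print(cpu_util((0, 0), (10, 1_500_000_000), 2))  # 7.5
```

Each gauge sample is computed from two *sequential* cumulative samples, which is why the transformer emits nothing until a second sample for the resource arrives.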
 To aggregate 60s worth of samples by resource_metadata and keep the
 resource_metadata of the latest received
@@ -699,7 +699,7 @@ sinks:
 meters and/or their metadata, for example:
 memory_util = 100 * memory.usage / memory
 A new sample is created with the properties described in the target
- section of the transformer’s configuration. The sample’s volume is the
+ section of the transformer's configuration. The sample's volume is the
 result of the provided expression. The calculation is performed on samples
 from the same resource.
diff --git a/doc/arch-design/ch_references.xml b/doc/arch-design/ch_references.xml
index b8015c1fe9..e6bb0da80e 100644
--- a/doc/arch-design/ch_references.xml
+++ b/doc/arch-design/ch_references.xml
@@ -64,7 +64,7 @@
 Open Compute
- Project: The Open Compute Project Foundation’s mission is
+ Project: The Open Compute Project Foundation's mission is
 to design and enable the delivery of the most efficient
 server, storage and data center hardware designs for
 scalable computing.
diff --git a/doc/arch-design/compute_focus/section_tech_considerations_compute_focus.xml b/doc/arch-design/compute_focus/section_tech_considerations_compute_focus.xml
index 211d3d7935..e65d5425cd 100644
--- a/doc/arch-design/compute_focus/section_tech_considerations_compute_focus.xml
+++ b/doc/arch-design/compute_focus/section_tech_considerations_compute_focus.xml
@@ -23,7 +23,7 @@
 consistently performant. This process is important because,
 when a service becomes a critical part of a user's
 infrastructure, the user's fate becomes wedded to the SLAs of
- the cloud itself. In cloud computing, a service’s performance
+ the cloud itself. In cloud computing, a service's performance
 will not be measured by its average speed but rather by the
 consistency of its speed.
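The arithmetic transformer's example expression, memory_util = 100 * memory.usage / memory, is plain per-resource arithmetic over meter volumes. A minimal sketch of the calculation (illustrative only; the values are hypothetical, and the real transformer evaluates the configured expression against matching samples of the same resource):

```python
# memory.usage and memory are the two source meters; the result becomes the
# volume of the new memory_util sample.
def memory_util(memory_usage_mb, memory_mb):
    """Return memory utilization as a percentage."""
    return 100.0 * memory_usage_mb / memory_mb

# e.g. an instance using 512 MB of a 2048 MB allocation:
print(memory_util(512, 2048))  # 25.0
```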
 There are two aspects of capacity planning to consider:
diff --git a/doc/arch-design/generalpurpose/section_architecture_general_purpose.xml b/doc/arch-design/generalpurpose/section_architecture_general_purpose.xml
index 3913ed0f3f..b4b34fdd73 100644
--- a/doc/arch-design/generalpurpose/section_architecture_general_purpose.xml
+++ b/doc/arch-design/generalpurpose/section_architecture_general_purpose.xml
@@ -441,7 +441,7 @@
 are instances where the relationship between networking
 hardware and networking software are not as tightly defined.
 An example of this type of software is Cumulus Linux, which is
- capable of running on a number of switch vendor’s hardware
+ capable of running on a number of switch vendor's hardware
 solutions. Some of the key considerations that should be
 included in the selection of networking hardware include:
diff --git a/doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml b/doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml
index 027009d44d..0aaab469d1 100644
--- a/doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml
+++ b/doc/arch-design/generalpurpose/section_tech_considerations_general_purpose.xml
@@ -172,7 +172,7 @@
 encapsulating with VXLAN, and VLAN tags. Initially, it is
 suggested to design at least three network segments, the
 first of which will be used for access to the
- cloud’s REST APIs by tenants and operators. This is generally
+ cloud's REST APIs by tenants and operators. This is generally
 referred to as a public network. In most cases, the controller
 nodes and swift proxies within the cloud will be the only
 devices necessary to connect to this network segment. In some
@@ -508,7 +508,7 @@
 delays in operation functions such as spinning up and
 deleting instances, provisioning new storage volumes and
 managing network resources.
 Such delays could adversely affect an
- application’s ability to react to certain conditions,
+ application's ability to react to certain conditions,
 especially when using auto-scaling features. It is important
 to properly design the hardware used to run the controller
 infrastructure as outlined above in the Hardware Selection
@@ -577,7 +577,7 @@
 dedicated interfaces on the Controller and Compute hosts.
 When considering performance of OpenStack Object Storage, a
- number of design choices will affect performance. A user’s
+ number of design choices will affect performance. A user's
 access to the Object Storage is through the proxy services,
 which typically sit behind hardware load balancers. By the
 very nature of a highly resilient storage system, replication
@@ -617,7 +617,7 @@
 access maintained in the OpenStack Compute code, provides a
 feature that removes a single point of failure when it comes
 to routing, and this feature is currently missing in OpenStack
- Networking. The effect of legacy networking’s multi-host
+ Networking. The effect of legacy networking's multi-host
 functionality restricts failure domains to the host running
 that instance. On the other hand, when using OpenStack
 Networking, the
diff --git a/doc/arch-design/introduction/section_methodology.xml b/doc/arch-design/introduction/section_methodology.xml
index 48b92c0f26..a97a5b2ec2 100644
--- a/doc/arch-design/introduction/section_methodology.xml
+++ b/doc/arch-design/introduction/section_methodology.xml
@@ -27,7 +27,7 @@
 Use case planning can seem counter-intuitive. After all, it takes about
 five minutes to sign up for a server with Amazon. Amazon does not know
 in advance what any given user is planning on doing with it, right?
- Wrong. Amazon’s product management department spends plenty of time
+ Wrong. Amazon's product management department spends plenty of time
 figuring out exactly what would be attractive to their typical customer
 and honing the service to deliver it.
 For the enterprise, the planning process is no
 different, but instead of planning for an external paying
@@ -77,7 +77,7 @@
 As an example of how this works, consider a business goal of using the
- cloud for the company’s E-commerce website. This goal means planning for
+ cloud for the company's E-commerce website. This goal means planning for
 applications that will support thousands of sessions per second,
 variable workloads, and lots of complex and changing data. By
 identifying the key metrics, such as number of concurrent transactions
@@ -232,13 +232,13 @@
 But not too paranoid: Not every application needs the
- platinum solution. Architect for different SLA’s, service
+ platinum solution. Architect for different SLA's, service
 tiers and security levels.
 Manage the data: Data is usually the most inflexible
 and complex area of a cloud and cloud integration architecture.
- Don’t short change the effort in analyzing and addressing
+ Don't short change the effort in analyzing and addressing
 data needs.
@@ -269,7 +269,7 @@
 Keep it loose: Loose coupling, service interfaces,
- separation of concerns, abstraction and well defined API’s
+ separation of concerns, abstraction and well defined API's
 deliver flexibility.
diff --git a/doc/arch-design/multi_site/section_prescriptive_examples_multi_site.xml b/doc/arch-design/multi_site/section_prescriptive_examples_multi_site.xml
index b89bfa346f..9839be7789 100644
--- a/doc/arch-design/multi_site/section_prescriptive_examples_multi_site.xml
+++ b/doc/arch-design/multi_site/section_prescriptive_examples_multi_site.xml
@@ -22,7 +22,7 @@
 very sensitive to latency and needs a rapid response to
 end-users. After reviewing the user, technical and operational
 considerations, it is determined beneficial to build a number
- of regions local to the customer’s edge.
+ of regions local to the customer's edge.
 In this case rather than build a few large, centralized data
 centers, the intent of the architecture is to provide a pair
 of small data centers in locations that are closer to the
 customer. In this use
diff --git a/doc/arch-design/network_focus/section_tech_considerations_network_focus.xml b/doc/arch-design/network_focus/section_tech_considerations_network_focus.xml
index 6aad8b8dbf..67a01f7580 100644
--- a/doc/arch-design/network_focus/section_tech_considerations_network_focus.xml
+++ b/doc/arch-design/network_focus/section_tech_considerations_network_focus.xml
@@ -254,7 +254,7 @@
 A requirement for vendor independence. To avoid
 hardware or software vendor lock-in, the design should
- not rely on specific features of a vendor’s router or
+ not rely on specific features of a vendor's router or
 switch.
diff --git a/doc/arch-design/storage_focus/section_operational_considerations_storage_focus.xml b/doc/arch-design/storage_focus/section_operational_considerations_storage_focus.xml
index 3fbea616e0..2bad0ccc87 100644
--- a/doc/arch-design/storage_focus/section_operational_considerations_storage_focus.xml
+++ b/doc/arch-design/storage_focus/section_operational_considerations_storage_focus.xml
@@ -37,7 +37,7 @@
 management console, or other dashboards capable of visualizing
 SNMP data, will be helpful in discovering and resolving issues
 that might arise within the storage cluster. An example of
- this is Ceph’s Calamari.
+ this is Ceph's Calamari.
 A storage-focused cloud design should include:
@@ -273,7 +273,7 @@
 nodes. In some cases, this replication can consist of
 extremely large data sets. In these cases, it is recommended
 to make use of back-end replication links which will not
- contend with tenants’ access to data.
+ contend with tenants' access to data.
 As more tenants begin to access data within the cluster
 and their data sets grow it will become necessary to add
 front-end bandwidth to service data access requests.
 Adding front-end
diff --git a/doc/arch-design/storage_focus/section_tech_considerations_storage_focus.xml b/doc/arch-design/storage_focus/section_tech_considerations_storage_focus.xml
index 00bcd2395c..92334e3b93 100644
--- a/doc/arch-design/storage_focus/section_tech_considerations_storage_focus.xml
+++ b/doc/arch-design/storage_focus/section_tech_considerations_storage_focus.xml
@@ -90,7 +90,7 @@
 Data grids can be helpful
 in deterministically answering questions around data
- valuation. A fundamental challenge of today’s
+ valuation. A fundamental challenge of today's
 information sciences is determining which data is
 worth keeping, on what tier of access and
 performance should it reside, and how long should it remain in a
diff --git a/doc/common/section_dashboard_sessions.xml b/doc/common/section_dashboard_sessions.xml
index 9d4b383795..15960b4bda 100644
--- a/doc/common/section_dashboard_sessions.xml
+++ b/doc/common/section_dashboard_sessions.xml
@@ -214,7 +214,7 @@ No fixtures found.
 If you use Django 1.4 or later, the signed_cookies
 back end avoids server load and scaling problems.
 This back end stores session data in a cookie, which is
- stored by the user’s browser. The back end uses a
+ stored by the user's browser. The back end uses a
 cryptographic signing technique to ensure session data is
 not tampered with during transport. This is not the same
 as encryption; session data is still readable by an
@@ -224,7 +224,7 @@ No fixtures found.
 scales indefinitely as long as the quantity of session
 data being stored fits into a normal cookie. The biggest
 downside is that it places session data into
- storage on the user’s machine and transports it over the
+ storage on the user's machine and transports it over the
 wire. It also limits the quantity of session data that
 can be stored. See the Django
 Account reaper
 In the background, the account reaper removes data from the deleted accounts.
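The signed_cookies back end discussed in the dashboard-sessions hunk signs session data but does not encrypt it: tampering is detectable, yet the payload stays readable. A sketch of that property (not Django's actual implementation; SECRET and the helper names are hypothetical):

```python
import base64
import hashlib
import hmac
import json

SECRET = b'server-side secret key'  # hypothetical signing key

def sign_session(data):
    """Serialize session data and append an HMAC signature."""
    payload = base64.urlsafe_b64encode(json.dumps(data).encode())
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return payload.decode() + '.' + sig

def verify_session(cookie):
    """Reject the cookie if the signature does not match the payload."""
    payload, sig = cookie.rsplit('.', 1)
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError('session cookie was tampered with')
    return json.loads(base64.urlsafe_b64decode(payload))

cookie = sign_session({'user_id': 'alice'})
# Signing is not encryption: anyone can base64-decode the payload half.
print(verify_session(cookie))  # {'user_id': 'alice'}
```

This is why the hunk warns against storing sensitive data in the session when this back end is used.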
- A reseller marks an account for deletion by issuing a DELETE request on the account’s
+ A reseller marks an account for deletion by issuing a DELETE request on the account's
 storage URL. This action sets the status column of the account_stat table in the
 account database and replicas to DELETED, marking the account's data for deletion.
 Typically, a specific retention time or undelete are not provided. However, you can set a
@@ -19,10 +19,10 @@
 The account reaper runs on each account server and scans the server occasionally
 for account databases marked for deletion. It only fires up on the accounts for which the server
- is the primary node, so that multiple account servers aren’t trying to do it simultaneously.
+ is the primary node, so that multiple account servers aren't trying to do it simultaneously.
 Using multiple servers to delete one account might improve the deletion speed but requires
 coordination to avoid duplication. Speed really is not a big concern with data deletion, and
- large accounts aren’t deleted often.
+ large accounts aren't deleted often.
 Deleting an account is simple. For each account container, all objects are deleted
 and then the container is deleted. Deletion requests that fail will not stop the
 overall process but will cause the overall process to fail eventually (for example, if an object delete
diff --git a/doc/common/section_objectstorage-components.xml b/doc/common/section_objectstorage-components.xml
index ef53a40d5b..34b9ddfbe5 100644
--- a/doc/common/section_objectstorage-components.xml
+++ b/doc/common/section_objectstorage-components.xml
@@ -18,7 +18,7 @@
 Zones. Isolate data from other zones. A
- failure in one zone doesn’t impact the rest of the cluster because data is
+ failure in one zone doesn't impact the rest of the cluster because data is
 replicated across zones.
@@ -100,7 +100,7 @@
 item separately or the entire cluster all at once.
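The account-reaper hunk describes a fixed deletion order: every object in a container is deleted, then the container, and finally the account itself. An in-memory sketch of that order (hypothetical data structures; the real reaper issues deletes against object and container servers):

```python
# Toy stand-in for the reaper's traversal: account -> containers -> objects.
accounts = {'AUTH_test': {'container1': ['obj1', 'obj2'], 'container2': ['obj3']}}

def reap_account(name):
    account = accounts[name]
    for container in list(account):
        for obj in list(account[container]):
            account[container].remove(obj)  # delete each object first
        del account[container]              # then delete the empty container
    del accounts[name]                      # finally remove the account itself

reap_account('AUTH_test')
print(accounts)  # {}
```

As the hunk notes, a failed delete does not stop the walk; it only causes the overall pass to fail and be retried later.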
 Another configurable value is the replica count, which indicates how many of the
 partition-device assignments make up a single ring. For a given partition number, each
- replica’s device will not be in the same zone as any other replica's device. Zones can
+ replica's device will not be in the same zone as any other replica's device. Zones can
 be used to group devices based on physical locations, power separations, network
 separations, or any other attribute that would improve the availability of multiple
 replicas at the same time.
diff --git a/doc/common/section_objectstorage-ringbuilder.xml b/doc/common/section_objectstorage-ringbuilder.xml
index 68e0f34291..133c4fb659 100644
--- a/doc/common/section_objectstorage-ringbuilder.xml
+++ b/doc/common/section_objectstorage-ringbuilder.xml
@@ -37,12 +37,12 @@
 Partition assignment list
- This is a list of array(‘H’) of
+ This is a list of array('H') of
 devices ids. The outermost list contains an
- array(‘H’) for each replica. Each
- array(‘H’) has a length equal to
+ array('H') for each replica. Each
+ array('H') has a length equal to
 the partition count for the ring. Each integer in the
- array(‘H’) is an index into the
+ array('H') is an index into the
 above list of devices. The partition list is known
 internally to the Ring class as _replica2part2dev_id.
@@ -54,7 +54,7 @@ part2dev_id in self._replica2part2dev_id]
 account for the removal of duplicate devices. If a ring has more replicas
 than devices, a partition will have more than one replica on a device.
- array(‘H’) is used for memory
+ array('H') is used for memory
 conservation as there may be millions of partitions.
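The ring-builder hunks describe _replica2part2dev_id as one array('H') per replica, each indexed by partition number and holding device indexes. A toy sketch of that structure and of the quoted per-partition lookup (illustrative only; it ignores zone constraints, and real rings use power-of-two partition counts in the millions):

```python
from array import array

REPLICAS = 3
PARTITIONS = 8
devices = ['d0', 'd1', 'd2', 'd3']  # hypothetical device list

# One array('H') per replica, each as long as the partition count; every
# entry is an index into the device list. The modular layout below is just
# a toy assignment, not the builder's actual placement algorithm.
replica2part2dev_id = [
    array('H', [(part + replica) % len(devices) for part in range(PARTITIONS)])
    for replica in range(REPLICAS)
]

# The lookup pattern cited in the hunk: devices holding every replica of a partition.
part = 5
devs = [devices[part2dev_id[part]] for part2dev_id in replica2part2dev_id]
print(devs)  # ['d1', 'd2', 'd3']
```

array('H') stores each entry as an unsigned 16-bit integer (2 bytes on CPython), which is the memory saving the hunk refers to when partitions number in the millions.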
diff --git a/doc/common/section_objectstorage-troubleshoot.xml b/doc/common/section_objectstorage-troubleshoot.xml
index 852210ebb6..3d30bf4935 100644
--- a/doc/common/section_objectstorage-troubleshoot.xml
+++ b/doc/common/section_objectstorage-troubleshoot.xml
@@ -14,7 +14,7 @@
 unmounted. This will make it easier for Object Storage to work around the failure
 until it has been resolved. If the drive is going to be replaced immediately,
 then it is just best to replace the drive, format it, remount it, and let
 replication fill it up.
- If the drive can’t be replaced immediately, then it is best to leave it
+ If the drive can't be replaced immediately, then it is best to leave it
 unmounted, and remove the drive from the ring. This will allow all the
 replicas that were on that drive to be replicated elsewhere until the drive
 is replaced. Once the drive is replaced, it can be re-added to the ring.
@@ -31,8 +31,8 @@
 comes back online, replication will make sure that anything that is missing
 during the downtime will get updated.
 If the server has more serious issues, then it is probably best to remove all
- of the server’s devices from the ring. Once the server has been repaired and is
- back online, the server’s devices can be added back into the ring. It is
+ of the server's devices from the ring. Once the server has been repaired and is
+ back online, the server's devices can be added back into the ring. It is
 important that the devices are reformatted before putting them back into the
 ring as it is likely to be responsible for a different set of partitions than
 before.
diff --git a/doc/config-reference/block-storage/drivers/coraid-driver.xml b/doc/config-reference/block-storage/drivers/coraid-driver.xml
index e38f53d3e1..53843e7a38 100644
--- a/doc/config-reference/block-storage/drivers/coraid-driver.xml
+++ b/doc/config-reference/block-storage/drivers/coraid-driver.xml
@@ -346,7 +346,7 @@ coraid_repository_key = coraid_repository_key
 Create a volume.
- $ cinder type-create ‘volume_type_name
+ $ cinder type-create 'volume_type_name'
 where volume_type_name is the name you assign the volume. You will
 see output similar to the following:
@@ -362,7 +362,7 @@ coraid_repository_key
 Associate the volume type with the Storage Repository.
- # cinder type-key UUID set coraid_repository_key=’FQRN
+ # cinder type-key UUID set coraid_repository_key='FQRN'
diff --git a/doc/config-reference/block-storage/drivers/windows-iscsi-volume-driver.xml b/doc/config-reference/block-storage/drivers/windows-iscsi-volume-driver.xml
index 04c97837b4..c26e09ee64 100644
--- a/doc/config-reference/block-storage/drivers/windows-iscsi-volume-driver.xml
+++ b/doc/config-reference/block-storage/drivers/windows-iscsi-volume-driver.xml
@@ -36,7 +36,7 @@
 Installing using the OpenStack cinder volume installer
 In case you want to avoid all the manual setup, you can use
- Cloudbase Solutions’ installer. You can find it at
+ Cloudbase Solutions' installer. You can find it at
 https://www.cloudbase.it/downloads/CinderVolumeSetup_Beta.msi.
 It installs an independent Python environment, in order to avoid conflicts
diff --git a/doc/config-reference/object-storage/section_object-storage-features.xml b/doc/config-reference/object-storage/section_object-storage-features.xml
index 7801a452aa..a24d253563 100644
--- a/doc/config-reference/object-storage/section_object-storage-features.xml
+++ b/doc/config-reference/object-storage/section_object-storage-features.xml
@@ -348,8 +348,8 @@ pipeline = pipeline = healthcheck cache tempurl
 instance, a common deployment has three replicas of
 each object. The health of that object can be measured
 by checking if each replica is in its proper place. If only 2
- of the 3 is in place the object’s health can be said to be
- at 66.66%, where 100% would be perfect. A single object’s
+ of the 3 is in place the object's health can be said to be
+ at 66.66%, where 100% would be perfect. A single object's
 health, especially an older object, usually reflects
 the health of that entire partition the object is in. If you
 make enough objects on a distinct percentage of the
@@ -583,7 +583,7 @@ X-Trans-Id: tx602634cf478546a39b1be-0051e6bc7a
 The name of each file uploaded is appended to the specified swift-url.
 So, you can upload directly to the root of container with a URL like:
 https://swift-cluster.example.com/v1/AUTH_account/container/
- Optionally, you can include an object prefix to better separate different users’
+ Optionally, you can include an object prefix to better separate different users'
 uploads, such as:
 https://swift-cluster.example.com/v1/AUTH_account/container/object_prefix
diff --git a/doc/networking-guide/section_ha-dvr.xml b/doc/networking-guide/section_ha-dvr.xml
index 8a3207ad54..42e2b48ac5 100644
--- a/doc/networking-guide/section_ha-dvr.xml
+++ b/doc/networking-guide/section_ha-dvr.xml
@@ -30,7 +30,7 @@
 by the router, the DVR agent populates the ARP entry. By pre-populating
 ARP entries across compute nodes, the distributed virtual router ensures
 traffic goes to the correct destination. The integration bridge on a particular
- compute node identifies the incoming frame’s source MAC address as a
+ compute node identifies the incoming frame's source MAC address as a
 DVR-unique MAC address because every compute node l2 agent knows all
 configured unique MAC addresses for DVR used in the cloud. The agent
 replaces the DVR-unique MAC Address with the green subnet interface MAC
diff --git a/www/developer/openstack-projects.html b/www/developer/openstack-projects.html
index 3d055cb3f9..6eb02a1759 100644
--- a/www/developer/openstack-projects.html
+++ b/www/developer/openstack-projects.html
@@ -116,7 +116,7 @@
 os-cloud-config
- — Provides a set of tools to perform up-front configuration for OpenStack
+ - Provides a set of tools to perform up-front configuration for OpenStack
 clouds, currently used primarily by TripleO.
@@ -134,118 +134,118 @@
 oslo.concurrency
- — Provides support for managing external processes and
+ - Provides support for managing external processes and
 task synchronization.
 oslo.config
- — Parses config options from command line and config files.
+ - Parses config options from command line and config files.
 oslo.db
- — Provides database connectivity.
+ - Provides database connectivity.
 oslo.i18n
- — Internationalization and translation utilities.
+ - Internationalization and translation utilities.
 oslo.log
- — A logging configuration library.
+ - A logging configuration library.
 oslo.messaging
- — Provides inter-process communication.
+ - Provides inter-process communication.
 oslo.middleware
- — A collection of WSGI middleware for web service development.
+ - A collection of WSGI middleware for web service development.
 oslo.rootwrap
- — Provides fine filtering of shell commands to run as root.
+ - Provides fine filtering of shell commands to run as root.
 oslo.serialization
- — Provides serialization functionality with special handling
+ - Provides serialization functionality with special handling
 for some common types.
 oslo.utils
- — Provides library of various common low-level utility modules.
+ - Provides library of various common low-level utility modules.
 oslo.vmware
- — Provides common functionality required by VMware drivers in
+ - Provides common functionality required by VMware drivers in
 several projects.
 oslosphinx
- — Provides theme and extension support for Sphinx documentation.
+ - Provides theme and extension support for Sphinx documentation.
 oslotest
- — Provides a unit test and fixture framework.
+ - Provides a unit test and fixture framework.
 cliff
- — Builds command-line programs in Python.
+ - Builds command-line programs in Python.
 pbr
- — Manages setuptools packaging needs in a consistent way.
+ - Manages setuptools packaging needs in a consistent way.
 PyCADF
- — Creates CADF events to capture cloud-related events.
+ - Creates CADF events to capture cloud-related events.
 stevedore
- — Manages dynamic plug-ins for Python applications.
+ - Manages dynamic plug-ins for Python applications.
 TaskFlow
- — Makes task execution easy, consistent, and reliable.
+ - Makes task execution easy, consistent, and reliable.
 Tooz
- — Distributed primitives like group membership protocol, lock service and leader elections.
+ - Distributed primitives like group membership protocol, lock service and leader elections.