Fix some inconsistencies in doc

It's more common to format words like "nova-compute" with "``"
instead of "`", and there is no need to format words like "RPC".
This patch also fixes a few syntax and typo issues.

And as requested, some typos in aggregates.rst are corrected to
improve readability.

Change-Id: I4d9c184e1448c8ea9973302f53e1773a7b66cd1e
Author: Chen
Date: 2018-05-24 23:08:08 +08:00
commit 900558dc42 (parent e2b0b469be)
2 changed files with 16 additions and 16 deletions


@@ -22,12 +22,12 @@ Host Aggregates
 Host aggregates can be regarded as a mechanism to further partition an
 availability zone; while availability zones are visible to users, host
 aggregates are only visible to administrators. Host aggregates started out as
-a way to use Xen hypervisor resource pools, but has been generalized to provide
+a way to use Xen hypervisor resource pools, but have been generalized to provide
 a mechanism to allow administrators to assign key-value pairs to groups of
 machines. Each node can have multiple aggregates, each aggregate can have
 multiple key-value pairs, and the same key-value pair can be assigned to
 multiple aggregates. This information can be used in the scheduler to enable
-advanced scheduling, to set up Xen hypervisor resources pools or to define
+advanced scheduling, to set up Xen hypervisor resource pools or to define
 logical groups for migration. For more information, including an example of
 associating a group of hosts to a flavor, see :ref:`host-aggregates`.
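The key-value mechanism described in this hunk can be pictured with a small toy filter. This is a simplified, hypothetical sketch of how flavor extra specs can be matched against aggregate metadata (nova's real `AggregateInstanceExtraSpecsFilter` is considerably more involved; the data structures below are illustrative only):

```python
# Toy sketch: pick hosts whose aggregate metadata satisfies a flavor's
# scoped "aggregate_instance_extra_specs:" requirements. The dict shapes
# here are assumptions for illustration, not nova's internal objects.

def hosts_for_flavor(aggregates, extra_specs):
    """Return hosts belonging to an aggregate whose metadata matches
    every aggregate_instance_extra_specs:<key> in the flavor."""
    wanted = {
        k.split(":", 1)[1]: v
        for k, v in extra_specs.items()
        if k.startswith("aggregate_instance_extra_specs:")
    }
    matched = set()
    for agg in aggregates:
        if all(agg["metadata"].get(k) == v for k, v in wanted.items()):
            matched.update(agg["hosts"])
    return sorted(matched)

aggregates = [
    {"hosts": ["node1", "node2"], "metadata": {"ssd": "true"}},
    {"hosts": ["node3"], "metadata": {"ssd": "false"}},
]
extra_specs = {"aggregate_instance_extra_specs:ssd": "true"}
print(hosts_for_flavor(aggregates, extra_specs))  # ['node1', 'node2']
```

Since the same host may appear in several aggregates, the sketch collects matches into a set before sorting, mirroring the "each node can have multiple aggregates" point above.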
@@ -56,7 +56,7 @@ between aggregates and availability zones:
 list of availability zones, they have no way to know whether the default
 availability zone name (currently *nova*) is provided because an host
 belongs to an aggregate whose AZ metadata key is set to *nova*, or because
-there are at least one host belonging to no aggregate. Consequently, it is
+there is at least one host not belonging to any aggregate. Consequently, it is
 highly recommended for users to never ever ask for booting an instance by
 specifying an explicit AZ named *nova* and for operators to never set the
 AZ metadata for an aggregate to *nova*. That leads to some problems
@@ -65,7 +65,7 @@ between aggregates and availability zones:
 moved to another aggregate or when the user would like to migrate the
 instance.
-.. note:: Availablity zone name must NOT contain ':' since it is used by admin
+.. note:: Availability zone name must NOT contain ':' since it is used by admin
 users to specify hosts where instances are launched in server creation.
 See :doc:`Select hosts where instances are launched </admin/availability-zones>` for more detail.
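The ':' restriction in the note exists because the API overloads the availability-zone string with a `zone:host` (optionally `zone:host:node`) form for admin host targeting; a zone name containing ':' would make that split ambiguous. A minimal sketch of such parsing (the function name and exact semantics are assumptions for illustration):

```python
def parse_az(az_string):
    """Toy split of an availability_zone request of the form
    'zone', 'zone:host', or 'zone:host:node' -- illustrating why a
    ':' inside the zone name itself would be ambiguous."""
    parts = az_string.split(":")
    zone = parts[0] or None
    host = parts[1] if len(parts) > 1 and parts[1] else None
    node = parts[2] if len(parts) > 2 and parts[2] else None
    return zone, host, node

print(parse_az("nova:compute1"))  # ('nova', 'compute1', None)
print(parse_az("nova"))          # ('nova', None, None)
```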
@@ -92,7 +92,7 @@ The OSAPI Admin API is extended to support the following operations:
 * start host maintenance (or evacuate-host): disallow a host to serve API requests and migrate instances to other hosts of the aggregate
   * It has been deprecated since microversion 2.43. Use `disable service` instead.
-* stop host maintenance: (or rebalance-host): put the host back into operational mode, migrating instances back onto that host
+* stop host maintenance (or rebalance-host): put the host back into operational mode, migrating instances back onto that host
   * It has been deprecated since microversion 2.43. Use `enable service` instead.
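The start-maintenance operation described above amounts to "stop scheduling to the host, then move its instances onto the aggregate's other hosts". A toy in-memory sketch of that flow (all names and data shapes here are hypothetical; real deployments disable the service and live-migrate instances instead):

```python
# Toy model of "start host maintenance": mark a host disabled and
# spread its instances round-robin across the remaining enabled hosts.
# Purely illustrative -- not nova's API.

def start_maintenance(hosts, target):
    hosts[target]["disabled"] = True
    others = [h for h, s in hosts.items() if not s["disabled"]]
    moved = []
    for i, inst in enumerate(hosts[target]["instances"]):
        dest = others[i % len(others)]
        hosts[dest]["instances"].append(inst)
        moved.append((inst, dest))
    hosts[target]["instances"] = []
    return moved

hosts = {
    "node1": {"disabled": False, "instances": ["vm-a", "vm-b"]},
    "node2": {"disabled": False, "instances": []},
    "node3": {"disabled": False, "instances": []},
}
moved = start_maintenance(hosts, "node1")
print(moved)  # [('vm-a', 'node2'), ('vm-b', 'node3')]
```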


@@ -28,18 +28,18 @@ and generating responses to the REST calls.
 RPC messaging is done via the **oslo.messaging** library,
 an abstraction on top of message queues.
 Most of the major nova components can be run on multiple servers, and have
-a `manager` that is listening for `RPC` messages.
-The one major exception is nova-compute, where a single process runs on the
+a manager that is listening for RPC messages.
+The one major exception is ``nova-compute``, where a single process runs on the
 hypervisor it is managing (except when using the VMware or Ironic drivers).
 The manager also, optionally, has periodic tasks.
-For more details on our `RPC` system, please see: :doc:`/reference/rpc`
+For more details on our RPC system, please see: :doc:`/reference/rpc`
 Nova also uses a central database that is (logically) shared between all
 components. However, to aid upgrade, the DB is accessed through an object
 layer that ensures an upgraded control plane can still communicate with
-a nova-compute running the previous release.
-To make this possible nova-compute proxies DB requests over `RPC` to a
-central manager called `nova-conductor`
+a ``nova-compute`` running the previous release.
+To make this possible nova-compute proxies DB requests over RPC to a
+central manager called ``nova-conductor``.
 To horizontally expand Nova deployments, we have a deployment sharding
 concept called cells. For more information please see: :doc:`cells`
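The conductor pattern this hunk documents -- compute nodes with no direct database access proxying reads over RPC to a central service -- can be sketched with two in-process queues standing in for the message bus. This is a toy illustration only; the names and message format are assumptions, not oslo.messaging's API:

```python
# Toy sketch of the nova-conductor pattern: "compute" owns no DB
# connection and sends an RPC-style message to a "conductor" that does.
import queue
import threading

DB = {"instance-1": {"state": "ACTIVE"}}  # stand-in for the central DB

requests, replies = queue.Queue(), queue.Queue()

def conductor():
    # Conductor side: listen for one RPC message, answer from the DB.
    method, arg = requests.get()
    if method == "instance_get":
        replies.put(DB[arg])

threading.Thread(target=conductor, daemon=True).start()

# Compute side: no DB access, only an RPC round-trip.
requests.put(("instance_get", "instance-1"))
result = replies.get(timeout=2)
print(result)  # {'state': 'ACTIVE'}
```

Because the compute side only ever sees messages, the conductor can translate between object versions, which is how an upgraded control plane keeps talking to an older nova-compute.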
@@ -54,11 +54,11 @@ of a typical (non-cells v1) Nova deployment.
    :width: 100%
 * DB: sql database for data storage.
-* API: component that receives HTTP requests, converts commands and communicates with other components via the **oslo.messaging** queue or HTTP
-* Scheduler: decides which host gets each instance
-* Network: manages ip forwarding, bridges, and vlans
+* API: component that receives HTTP requests, converts commands and communicates with other components via the **oslo.messaging** queue or HTTP.
+* Scheduler: decides which host gets each instance.
+* Network: manages ip forwarding, bridges, and vlans.
 * Compute: manages communication with hypervisor and virtual machines.
-* Conductor: handles requests that need coordination(build/resize), acts as a
+* Conductor: handles requests that need coordination (build/resize), acts as a
   database proxy, or handles object conversions.
-While all services are designed to be horizontally scalable, you should have significantly more computes then anything else.
+While all services are designed to be horizontally scalable, you should have significantly more computes than anything else.
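The Scheduler bullet ("decides which host gets each instance") can be reduced to a one-function sketch. Nova's real scheduler runs a pipeline of filters and weighers; the single free-RAM criterion below is a hypothetical simplification:

```python
def pick_host(hosts, ram_needed):
    """Toy stand-in for the scheduler's filter/weigh cycle: keep hosts
    with enough free RAM, then pick the one with the most headroom."""
    fits = {h: free for h, free in hosts.items() if free >= ram_needed}
    return max(fits, key=fits.get) if fits else None

hosts = {"node1": 2048, "node2": 8192, "node3": 4096}  # free RAM in MB
print(pick_host(hosts, 4096))  # node2
```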